Re: [openstack-dev] Evil Firmware

2014-01-17 Thread Robert Collins
 The physical function is the one with the real PCI config space, so as
 long as the host controls it then there should be minimal risk from the
 guests since they have limited access via the virtual functions--typically
 mostly just message-passing to the physical function.

As long as it's a whitelist of audited message handlers, that's fine. Of
course, if the message handlers haven't been audited, who knows what's
lurking in there.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-17 Thread Robert Collins
On 16 January 2014 03:31, Alan Kavanagh alan.kavan...@ericsson.com wrote:
 Hi fellow OpenStackers



 Does anyone have any recommendations on open source tools for disk
 erasure/data destruction software. I have so far looked at DBAN and disk
 scrubber and was wondering if the Ironic team has some better recommendations?

So for Ironic this is a moderately low priority thing right now - and
certainly I think it should be optional (what the default is is a
different discussion).

It's low priority because there are -so- many other concerns about
sharing bare metal machines between tenants that don't have
comprehensive mutual trust, that it's really not viable today (even on
relatively recent platforms IMNSHO).

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread Robert Collins
On 16 January 2014 14:51, John Griffith john.griff...@solidfire.com wrote:
 On Wed, Jan 15, 2014 at 6:41 PM, Michael Still mi...@stillhq.com wrote:
 John -- I agree with you entirely here. My concern is more that I
 think the CI tests need to run more frequently than weekly.

 Completely agree, but I guess in essence to start these aren't really
 CI tests.  Instead it's just a public health report for the various
 drivers vendors provide.  I'd love to see a higher frequency, but some
 of us don't have the infrastructure to try and run a test against
 every commit.  Anyway, I think there's HUGE potential for growth and
 adjustment as we go along.  I'd like to get something in place to
 solve the immediate problem first though.

You say you don't have the infrastructure - what's missing? What if you
only ran against commits in the cinder trees?

 To be honest I'd even be thrilled just to see every vendor publish a
 passing run against each milestone cut.  That in and of itself would
 be a huge step in the right direction in my opinion.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Cherry picking commit from oslo-incubator

2014-01-17 Thread Flavio Percoco

On 16/01/14 17:32 -0500, Doug Hellmann wrote:

On Thu, Jan 16, 2014 at 3:19 PM, Ben Nemec openst...@nemebean.com wrote:

   On 2014-01-16 13:48, John Griffith wrote:

   Hey Everyone,

   A review came up today that cherry-picked a specific commit to OSLO
   Incubator, without updating the rest of the files in the module.  I
   rejected that patch, because my philosophy has been that when you
   update/pull from oslo-incubator it should be done as a full sync of
   the entire module, not a cherry pick of the bits and pieces that you
   may or may not be interested in.

   As it turns out I've received a bit of push back on this, so it seems
   maybe I'm being unreasonable, or that I'm mistaken in my understanding
   of the process here.  To me it seems like a complete and total waste
   to have an oslo-incubator and common libs if you're going to turn
   around and just cherry pick changes, but maybe I'm completely out of
   line.

   Thoughts??


   I suppose there might be exceptions, but in general I'm with you.  For one
   thing, if someone tries to pull out a specific change in the Oslo code,
   there's no guarantee that code even works.  Depending on how the sync was
   done it's possible the code they're syncing never passed the Oslo unit
   tests in the form being synced, and since unit tests aren't synced to the
   target projects it's conceivable that completely broken code could get
   through Jenkins.

   Obviously it's possible to do a successful partial sync, but for the sake
   of reviewer sanity I'm -1 on partial syncs without a _very_ good reason
   (like it's blocking the gate and there's some reason the full module can't
   be synced).


I agree. Cherry picking a single (or even partial) commit really should be
avoided.

The update tool does allow syncing just a single module, but that should be
used very VERY carefully, especially because some of the changes we're making
as we work on graduating some more libraries will include cross-dependent
changes between oslo modules.


Agreed. Syncing on master should be a complete synchronization from Oslo
incubator. IMHO, the only case where cherry-picking from Oslo should
be allowed is when backporting patches to stable branches. Master
branches should try to keep up to date with Oslo and sync everything
every time.
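
(For anyone who hasn't done it: a full sync is normally run from an
oslo-incubator checkout, with the update tool reading the target project's
openstack-common.conf to decide which modules to copy. The path below is only
an example.)

    # full sync of every module listed in the project's openstack-common.conf
    python update.py ../cinder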

With that in mind, I'd like to ask project members to do periodic syncs
from Oslo-incubator. Yes, it is tedious, painful and sometimes requires
more than just syncing, but we should all try to keep up to date with
Oslo. The main reason I'm asking this is precisely stable branches: if a
project stays way behind oslo-incubator, it will be really painful to
backport patches to stable branches in case of failures.

Unfortunately, there are projects that are quite far behind
oslo-incubator master.

One last comment. FWIW, backwards compatibility is always considered
in all Oslo reviews, and if there's a crazy breaking change, it's
always announced.

Thankfully, this all will be alleviated with the libs that are being
pulled out from the incubator. The syncs will contain fewer modules
and will be smaller.


I'm happy you brought this up now. I was meaning to do it.

Cheers,
FF


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-17 Thread mar...@redhat.com
On 16/01/14 00:28, Clint Byrum wrote:
 Excerpts from James Slagle's message of 2014-01-15 05:07:08 -0800:
 I'll start by laying out how I see editing or updating nodes working
 in TripleO without Tuskar:

 To do my initial deployment:
 1.  I build a set of images for my deployment for different roles. The
 images are different based on their role, and only contain the needed
 software components to accomplish the role they intend to be deployed.
 2.  I load the images into glance
 3.  I create the Heat template for my deployment, likely from
 fragments that are already available. Set quantities, indicate which
 images (via image uuid) are for which resources in heat.
 4.  heat stack-create with my template to do the deployment

 To update my deployment:
 1.  If I need to edit a role (or create a new one), I create a new image.
 2.  I load the new image(s) into glance
 3.  I edit my Heat template, update any quantities, update any image uuids, 
 etc.
 4.  heat stack-update my deployment

 In both cases above, I see the role of Tuskar being around steps 3 and 4.

 
 Agreed!
 

+1 ...


Review /#/c/52045/ is about generating the overcloud template using
merge.py **. Having recently picked this up again and followed the latest
wireframes and UI design, it seems like most of the current Tuskar code is
going away. After the initial panic I saw Jay has actually already started
that with /#/c/66062/.

Jay: I think at some point (asap) my /#/c/52045/ will be rebased on your
 /#/c/66062/. Currently my code creates templates from the Tuskar
representations, i.e. ResourceClasses. For now I will assume that I'll
be getting something along the lines of:

{
    'resource_categories': {'controller': 1, 'compute': 4, 'object': 1,
                            'block': 2}
}

i.e. just resource categories and number of instances for each (plus any
other user supplied config/auth info). Will there be controllers (do we
need them, apart from a way to create, update, delete)? Let's talk some
more on irc later. I'll update the commit message on my review to point
to yours for now,

thanks! marios

** merge.py is going to be binned but it is the best thing we've got
_today_ and within the Icehouse timeframe.
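
To make the shape of that concrete, here is a rough sketch of consuming such a
mapping via python-heatclient. This is purely illustrative: the parameter names
and the category-to-parameter mapping are my assumptions, not what merge.py or
Tuskar actually do.

    # Illustrative only: the parameter names and the category -> parameter
    # mapping are assumptions, not the real Tuskar/merge.py behaviour.
    from heatclient.client import Client


    def update_overcloud(heat_endpoint, token, stack_id, template_body,
                         resource_categories):
        heat = Client('1', endpoint=heat_endpoint, token=token)
        # e.g. {'controller': 1, 'compute': 4} -> {'ControllerCount': 1, ...}
        params = dict(('%sCount' % name.capitalize(), count)
                      for name, count in resource_categories.items())
        heat.stacks.update(stack_id, template=template_body, parameters=params)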


 Steps 1 and 2 are really CI's responsibility in a CD cloud. The end of
 the testing phase is new images in glance! For a stable release cloud,
 a tool for pulling new released images from elsewhere into Glance would
 be really useful, but worst case an admin downloads the new images and
 loads them manually.
 
 I may be misinterpreting, but let me say that I don't think Tuskar
 should be building images. There's been a fair amount of discussion
 around a Nova native image building service [1][2]. I'm actually not
 sure what the status/consensus on that is, but maybe longer term,
 Tuskar might call an API to kick off an image build.

 
 Tuskar should just deploy what it has available. I definitely could
 see value in having an image updating service separate from Tuskar,
 but I think there are many different answers for how do images arrive
 in Glance?.
 
 Ok, so given that frame of reference, I'll reply inline:

 On Mon, Jan 13, 2014 at 11:18 AM, Jay Dobies jason.dob...@redhat.com wrote:
 I'm pulling this particular discussion point out of the Wireframes thread so
 it doesn't get lost in the replies.

 = Background =

 It started with my first bulletpoint:

 - When a role is edited, if it has existing nodes deployed with the old
 version, are they automatically/immediately updated? If not, how do we
 reflect that there's a difference between how the role is currently
 configured and the nodes that were previously created from it?

 I would think Roles need to be versioned, and the deployed version
 recorded as Heat metadata/attribute. When you make a change to a Role,
 it's a new version. That way you could easily see what's been
 deployed, and if there's a newer version of the Role to deploy.

 
 Could Tuskar version the whole deployment, but only when you want to
 make it so ? If it gets too granular, it becomes pervasive to try and
 keep track of or to roll back. I think that would work well with a goal
 I've always hoped Tuskar would work toward which would be to mostly just
 maintain deployment as a Heat stack that nests the real stack with the
 parameters realized. With Glance growing Heat template storage capability,
 you could just store these versions in Glance.
 
 Replies:

 I know you quoted the below, but I'll reply here since we're in a new thread.

 I would expect any Role change to be applied immediately. If there is some
 change where I want to keep older nodes how they are set up and apply new
 settings only to new added nodes, I would create new Role then.

 -1 to applying immediately.

 When you edit a Role, it gets a new version. But nodes that are
 deployed with the older version are not automatically updated.

 We will have to store image metadata in tuskar probably, that would map to
 glance, once the image is generated. I would say we need to 

Re: [openstack-dev] [OpenStack-Dev] Cherry picking commit from oslo-incubator

2014-01-17 Thread Roman Podoliaka
Hi all,

Huge +1 for periodic syncs for two reasons:
1) it makes syncs smaller and thus easier
2) code in oslo-incubator often contains important bug fixes (e.g.
incorrect usage of eventlet TLS we found in Nova a few months ago)

Thanks,
Roman

On Fri, Jan 17, 2014 at 10:15 AM, Flavio Percoco fla...@redhat.com wrote:
 On 16/01/14 17:32 -0500, Doug Hellmann wrote:

 On Thu, Jan 16, 2014 at 3:19 PM, Ben Nemec openst...@nemebean.com wrote:

On 2014-01-16 13:48, John Griffith wrote:

Hey Everyone,

A review came up today that cherry-picked a specific commit to OSLO
Incubator, without updating the rest of the files in the module.  I
rejected that patch, because my philosophy has been that when you
update/pull from oslo-incubator it should be done as a full sync of
the entire module, not a cherry pick of the bits and pieces that
 you
may or may not be interested in.

As it turns out I've received a bit of push back on this, so it
 seems
maybe I'm being unreasonable, or that I'm mistaken in my
 understanding
of the process here.  To me it seems like a complete and total
 waste
to have an oslo-incubator and common libs if you're going to turn
around and just cherry pick changes, but maybe I'm completely out
 of
line.

Thoughts??


I suppose there might be exceptions, but in general I'm with you.  For
 one
thing, if someone tries to pull out a specific change in the Oslo code,
there's no guarantee that code even works.  Depending on how the sync
 was
done it's possible the code they're syncing never passed the Oslo unit
tests in the form being synced, and since unit tests aren't synced to
 the
target projects it's conceivable that completely broken code could get
through Jenkins.

Obviously it's possible to do a successful partial sync, but for the
 sake
of reviewer sanity I'm -1 on partial syncs without a _very_ good reason
(like it's blocking the gate and there's some reason the full module
 can't
be synced).


 I agree. Cherry picking a single (or even partial) commit really should be
 avoided.

 The update tool does allow syncing just a single module, but that should
 be
 used very VERY carefully, especially because some of the changes we're
 making
 as we work on graduating some more libraries will include cross-dependent
 changes between oslo modules.


 Agreed. Syncing on master should be a complete synchronization from Oslo
 incubator. IMHO, the only case where cherry-picking from oslo should
 be allowed is when backporting patches to stable branches. Master
 branches should try to keep up-to-date with Oslo and sync everything
 every time.

 With that in mind, I'd like to request project's members to do
 periodic syncs from Oslo incubator. Yes, it is tedious, painful and
 sometimes requires more than just syncing, but we should all try to
 keep up-to-date with Oslo. The main reason why I'm asking this is
 precisely stable branches. If the project stays way behind the
 oslo-incubator, it'll be really painful to backport patches to stable
 branches in case of failures.

 Unfortunately, there are projects that are quite behind from
 oslo-incubator master.

 One last comment. FWIW, backwards compatibility is always considered
 in all Oslo reviews and if there's a crazy-breaking change, it's
 always notified.

 Thankfully, this all will be alleviated with the libs that are being
 pulled out from the incubator. The syncs will contain fewer modules
 and will be smaller.


 I'm happy you brought this up now. I was meaning to do it.

 Cheers,
 FF


 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on ssh_floating.py and improve it

2014-01-17 Thread Koderer, Marc
Hello Julien,

I forwarded your mail to the correct mailing list. Please do not use the qa list
any longer.

I am happy that you are interested in stress tests. All the tests in
tempest/stress/actions are more or less deprecated. So what you should use
instead is the stress decorator (e.g. 
https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_actions.py#L55).
Unfortunately it's not yet used for scenarios like the one you describe. I'd suggest
building a scenario test in tempest/scenario and using this decorator on it.
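
Roughly, based on that linked example (so treat the class and decorator
arguments as an illustration rather than a spec), it looks like:

    # Sketch only, modelled on the linked test_volumes_actions.py example;
    # the decorator name/arguments are taken from there and may change.
    from tempest.scenario import manager
    from tempest import test


    class TestSshFloatingStress(manager.OfficialClientTest):

        @test.stresstest(class_setup_per='process')
        def test_ssh_via_floating_ip(self):
            # boot a server, associate a floating IP and make a real SSH
            # connection instead of only binding a TCP socket
            pass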

Any patch like that is welcome on gerrit. If you are planning to work in that
area for more than just a patch, a blueprint would be nice. A good way to
coordinate your efforts is also in the QA meeting
(https://wiki.openstack.org/wiki/Meetings/QATeamMeeting)

Regards
Marc


From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Wednesday, January 15, 2014 5:57 PM
To: openstack...@lists.openstack.org
Subject: [openstack-qa] [Tempest - Stress Test] : implement a full SSH 
connection on ssh_floating.py and improve it

Hi everyone,

I’m quite new on OpenStack / Tempest and I’m actually working on stress tests. 
I want to suggest a new feature in a currently available stress test.
Not sure if this email should be posted on the QA mailing list or the dev 
mailing list, but I give it a try here since it is about a Tempest stress test ☺

At the moment the “ssh_floating.py” stress test seems really interesting but I 
would like to improve it a bit.

Right now this script simulates an SSH connection by binding a TCP socket on 
the newly created instance. But this test doesn't allow us to check whether the 
instance is really available. I'm mostly thinking of the metadata service being 
unable to provide the SSH key pair to the instance, but surely other scenarios 
can lead to an instance being considered “ACTIVE” but actually unusable.

So I’m implementing a full SSH connection test using the “paramiko” SSH library 
and a key pair generated in the same way the other test resources are managed 
in this script : either one SSH key pair for every test runs or a new key pair 
for each run (depends on the JSON configuration file).
I don’t plan to remove the old test (TCP socket binding), rather move this one 
on a separate test function and put the full SSH connection test code instead.
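
To give an idea, the new check would look roughly like this (a minimal sketch;
the floating IP, user name and key file are placeholders coming from the test
configuration):

    # Minimal sketch of the full SSH check; floating_ip, username and
    # key_file are placeholders provided by the stress test configuration.
    import paramiko


    def check_ssh(floating_ip, username, key_file, timeout=60):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(floating_ip, username=username,
                       key_filename=key_file, timeout=timeout)
        try:
            # run a trivial command to prove the instance really booted and
            # got its key pair from the metadata service
            _, stdout, _ = client.exec_command('hostname')
            return bool(stdout.read())
        finally:
            client.close()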

Is this feature interesting for the OpenStack community?
Should I create a blueprint on the Tempest project on Launchpad in order to 
provide my code through Gerrit?

As a second step, I plan to overhaul this “ssh_floating.py” script by cleaning 
up the code a bit and adding more cleanup code in order to avoid leaving 
instances/security groups/floating IPs behind: I do see this kind of behavior 
right now, and I have already improved the teardown() along these lines.

Should I consider this code a new feature (and thus create a blueprint), or 
should I file a defect and assign it to myself?

Cordialement / Best Regards,

Julien LELOUP

R&D 3DExperience Platform IaaS Factory Technology Engineer





julien.lel...@3ds.com


3DS.COM http://www.3ds.com/


Dassault Systèmes | 10 rue Marcel Dassault, CS 40501 | 78946 
Vélizy-Villacoublay Cedex | France





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Cherry picking commit from oslo-incubator

2014-01-17 Thread Julien Danjou
On Thu, Jan 16 2014, John Griffith wrote:

 As it turns out I've received a bit of push back on this, so it seems
 maybe I'm being unreasonable, or that I'm mistaken in my understanding
 of the process here.  To me it seems like a complete and total waste
 to have an oslo-incubator and common libs if you're going to turn
 around and just cherry pick changes, but maybe I'm completely out of
 line.

 Thoughts??

You're not. As others have already stated, I definitely think this
approach is better too.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-17 Thread Gareth
Hi guys

I have another question about erasing all data from disk.

Using dd from /dev/zero can zero a disk starting from LBA 0, but dd'ing a
whole disk takes a very long time, so the usual shortcut is to dd only the
key data on the disk, for example the first 512 B (the MBR). That is not
enough, though: some data still remains on the disk, which can cause issues
with things like device mapper. So my question is how a disk eraser cleans
all the data on a disk. dd is a tool, not a strategy.
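
To make the question concrete: short of the drive's own secure-erase support,
the only software-level answer I can see is a full overwrite, i.e. something
equivalent to this toy sketch (the device path is a placeholder, and this is
just as slow as dd):

    # Toy sketch: overwrite a whole block device with zeros in large chunks.
    # The device path is a placeholder, this needs root, and it is destructive.
    # Hardware mechanisms (ATA Secure Erase, discard/TRIM on SSDs) are the
    # faster options where they exist and are trusted.
    import os

    CHUNK = 4 * 1024 * 1024  # 4 MiB


    def zero_fill(device='/dev/sdX'):
        fd = os.open(device, os.O_WRONLY)
        try:
            size = os.lseek(fd, 0, os.SEEK_END)
            os.lseek(fd, 0, os.SEEK_SET)
            written = 0
            while written < size:
                chunk = b'\0' * min(CHUNK, size - written)
                written += os.write(fd, chunk)
            os.fsync(fd)
        finally:
            os.close(fd)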


On Fri, Jan 17, 2014 at 4:14 PM, Robert Collins
robe...@robertcollins.netwrote:

 On 16 January 2014 03:31, Alan Kavanagh alan.kavan...@ericsson.com
 wrote:
  Hi fellow OpenStackers
 
 
 
  Does anyone have any recommendations on open source tools for disk
  erasure/data destruction software. I have so far looked at DBAN and disk
  scrubber and was wondering if the Ironic team has some better
 recommendations?

 So for Ironic this is a moderately low priority thing right now - and
 certainly I think it should be optional (what the default is is a
 different discussion).

 It's low priority because there are -so- many other concerns about
 sharing bare metal machines between tenants that don't have
 comprehensive mutual trust, that it's really not viable today (even on
 relatively recent platforms IMNSHO).

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Gareth

*Cloud Computing, OpenStack, Fitness, Basketball*
*OpenStack contributor*
*Company: UnitedStack http://www.ustack.com*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel-dev] [OSTF][Ceilometer] ceilometer meters and samples delete

2014-01-17 Thread Dmitry Iakunchikov
Hi,

Ceilometer's tests create some data that cannot be deleted.
For example, after a new instance is created, Ceilometer creates new meters for
that instance and there is no way to delete them. Even after the instance is
deleted, the meters remain.

Should we pay attention to this and find some way to delete all the data, for
example a database wipe or a patch to the Ceilometer client?
Or should we just skip it?

-- 
With Best Regards
QA engineer Dmitry Iakunchikov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] savannaclient v2 api

2014-01-17 Thread Alexander Ignatov
++  for generic PUT for both ‘cancel’ and ‘refresh-status’, Andrew. Thanks!
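
Just to illustrate one possible shape of that generic PUT (the URL prefix and
body are only a sketch here, nothing is decided):

    PUT /v1.1/{project_id}/job-executions/{job_execution_id}
    {"action": "cancel"}          # or {"action": "refresh-status"}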

Regards,
Alexander Ignatov



On 17 Jan 2014, at 06:19, Andrey Lazarev alaza...@mirantis.com wrote:

 My 5 cents:
 
 --
 REMOVE - @rest.put('/node-group-templates/node_group_template_id') - Not 
 Implemented
 REMOVE - @rest.put('/cluster-templates/cluster_template_id') - Not 
 Implemented
 --
 Disagree with that. Samsung people did great job in both 
 savanna/savanna-dashboard
 to make this implemented [2], [3]. We should leave and support these calls in 
 savanna.
 --
 [AL] Agree with Alexander. Ability to modify templates is very useful feature.
 
 
 
 REMOVE - @rest.get('/job-executions/job_execution_id/refresh-status') - 
 refresh
 and return status - GET should not side-effect, status is part of details and
 updated periodically, currently unused
 
 This call goes to Oozie directly to ask it about job status. It allows not to 
 wait
 too long when periodic task will update status JobExecution object in Savanna.
 The current GET asks status of JobExecution from savanna-db. I think we can
 leave this call, it might be useful for external clients.
 
 [AL] Agree that GET shouldn't have side effect (or at least documented side 
 effect). I think it could be generic PUT on 
 '/job-executions/job_execution_id' which can refresh status or cancel job 
 on hadoop side.
 
 
 
 REMOVE - @rest.get('/job-executions/job_execution_id/cancel') - cancel
 job-execution - GET should not side-effect, currently unused,
 use DELETE /job/executions/job_execution_id
 
 Disagree. We have to leave this call. This methods stops job executing on the
 Hadoop cluster but doesn't remove all its related info from savanna-db.
 DELETE removes it completely.
 
 [AL] We need 'cancel'. Vote on generic PUT (see previous item).
 
 
 Thanks,
 Andrew.
 
 
 On Thu, Jan 16, 2014 at 5:10 AM, Alexander Ignatov aigna...@mirantis.com 
 wrote:
 Matthew,
 
 I'm ok with proposed solution. Some comments/thoughts below:
 
 -
 FIX - 
 @rest.post_file('/plugins/plugin_name/version/convert-config/name')
 - this is an RPC call, made only by a client to do input validation,
 move to POST /validations/plugins/:name/:version/check-config-import
 -
 AFAIR, this rest call was introduced not only for validation.
 The main idea was to create method which converts plugin specific config
 for cluster creation to savanna's cluster template [1]. So maybe we may change
 this rest call to: /plugins/convert-config/name and include all need fields
 to data. Anyway we have to know Hortonworks guys opinion. Currently HDP
 plugin implements this method only.
 
 --
 REMOVE - @rest.put('/node-group-templates/node_group_template_id') - Not 
 Implemented
 REMOVE - @rest.put('/cluster-templates/cluster_template_id') - Not 
 Implemented
 --
 Disagree with that. Samsung people did great job in both 
 savanna/savanna-dashboard
 to make this implemented [2], [3]. We should leave and support these calls in 
 savanna.
 
 --
 CONSIDER rename /jobs - /job-templates (consistent w/ cluster-templates  
 clusters)
 CONSIDER renaming /job-executions to /jobs
 ---
 Good idea!
 
 --
 FIX - @rest.get('/jobs/config-hints/job_type') - should move to
 GET /plugins/plugin_name/plugin_version, similar to get_node_processes
 and get_required_image_tags
 --
 Not sure if it should be plugin specific right now. EDP uses it to show some
 configs to users in the dashboard. it's just a cosmetic thing. Also when user
 starts define some configs for some job he might not define cluster yet and
 thus plugin to run this job. I think we should leave it as is and leave only
 abstract configs like Mapper/Reducer class and allow users to apply any
 key/value configs if needed.
 
 -
 CONSIDER REMOVING, MUST ALWAYS UPLOAD TO Swift FOR /job-binaries
 -
 Disagree. It was discussed before starting EDP implementation that there are
 a lot of OS installations which don't have Swift deployed, and ability to run
 jobs using savanna internal db is a good option in this case. But yes, Swift
 is more preferred. Waiting for Trevor's and maybe Nadya's comments here under
 this section.
 
 
 REMOVE - @rest.get('/job-executions/job_execution_id/refresh-status') - 
 refresh
 and return status - GET should not side-effect, status is part of details and
 updated periodically, currently unused
 
 This call goes to Oozie directly to ask it about job status. It allows not to 
 wait
 too long when periodic task will update status JobExecution object in Savanna.
 The current GET asks status of JobExecution from savanna-db. I think we can
 leave this call, it might be useful for external clients.
 
 
 REMOVE - @rest.get('/job-executions/job_execution_id/cancel') - cancel
 job-execution - GET should not side-effect, currently unused,
 use DELETE /job/executions/job_execution_id
 
 Disagree. We have to leave this call. This methods stops job executing on the
 Hadoop 

Re: [openstack-dev] [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on ssh_floating.py and improve it

2014-01-17 Thread LELOUP Julien
Hello Marc,

Thanks for your answer.

At the moment I'm willing to spend some time on this kind of scenario so I will 
see how to use the stress decorator inside a scenario test.
Does this mean that all stress tests available in tempest/stress should be 
ported as scenario tests with this decorator?

I have some ideas about stress test features that would be useful for my own 
use case, like adding more statistics on stress test runs in order to use them 
as benchmarks.
I don't know if this kind of feature has already been discussed in the OpenStack 
community, but since the stress tests are a bit deprecated now, maybe there is 
some room for this kind of improvement in fresh stress tests.

Best Regards,

Julien LELOUP

-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Friday, January 17, 2014 9:45 AM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on 
ssh_floating.py and improve it

Hello Julien,

I forwarded your mail to the correct mailing list. Please do not use the qa 
list any longer.

I am happy that you are interested in stress tests. All the tests in 
tempest/stress/actions are more or less deprecated. So what you should use 
instead is the stress decorator (e.g. 
https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_actions.py#L55).
Unfortunately it's not yet used for scenarios like you describe. I'd suggest to 
build a scenario test in tempest/scenario and use this decorator on it.

Any patch like that is welcome on gerrit. If you are planning to work in that 
area for more than just a patch, a blueprint would be nice. A good way to 
coordinate your efforts is also in the QA meeting
(https://wiki.openstack.org/wiki/Meetings/QATeamMeeting)

Regards
Marc


From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Wednesday, January 15, 2014 5:57 PM
To: openstack...@lists.openstack.org
Subject: [openstack-qa] [Tempest - Stress Test] : implement a full SSH 
connection on ssh_floating.py and improve it

Hi everyone,

I’m quite new on OpenStack / Tempest and I’m actually working on stress tests. 
I want to suggest a new feature in a currently available stress test.
Not sure if this email should be posted on the QA mailing list or the dev 
mailing list, but I give it a try here since it is about a Tempest stress test ☺

At the moment the “ssh_floating.py” stress test seems really interesting but I 
would like to improve it a bit.

By now this script is simulating an SSH connection by binding a TCP socket on 
the newly created instance. But this test don’t allow us to check if this 
instance is really available. I’m mostly thinking about the metadata service 
unable to provide the SSH key pair to the instance, but surely other scenarios 
can lead to an instance considered “ACTIVE” but actually unusable.

So I’m implementing a full SSH connection test using the “paramiko” SSH library 
and a key pair generated in the same way the other test resources are managed 
in this script : either one SSH key pair for every test runs or a new key pair 
for each run (depends on the JSON configuration file).
I don’t plan to remove the old test (TCP socket binding), rather move this one 
on a separate test function and put the full SSH connection test code instead.

Is this feature interesting for the OpenStack community ?
Should I create a blueprint on the Tempest project on Launchpad in order to 
provide my code through Gerrit ?

On a second time, I plan to overhaul improve this “ssh_floating.py” script by 
clean the code a little bit, and add more cleaning code in order to avoid 
leaving instances/security groups/floating IP behind : I do have this kind of 
behavior right now and I already improved the teardown() on this way.

Should I consider this code as a new functionality (thus create a blueprint) or 
should I create a defect and assign it to myself ?

Cordialement / Best Regards,

Julien LELOUP

R&D 3DExperience Platform IaaS Factory Technology Engineer





julien.lel...@3ds.com


3DS.COM http://www.3ds.com/


Dassault Systèmes | 10 rue Marcel Dassault, CS 40501 | 78946 
Vélizy-Villacoublay Cedex | France





Re: [openstack-dev] [Fuel-dev] [OSTF][Ceilometer] ceilometer meters and samples delete

2014-01-17 Thread Julien Danjou
On Fri, Jan 17 2014, Dmitry Iakunchikov wrote:

 Ceilometer's tests creates some data which could not be deleted.
 For example: after creation new instance, ceilometer creates new meters for
 this instance and there  is no possibility to delete them. Even after
 instance deletion they would not be deleted.

 Should we take attention to this? And find some way to delete all data, for
 example database wipe or patch to ceilometer client...
 Or just to skip this?

Why would you delete this?
There's a time-to-live mechanism in Ceilometer to delete old samples.
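
(As a sketch only -- the exact option name and section may differ depending on
the release you run, so treat this as illustrative rather than a recipe:

    # ceilometer.conf -- expire samples after 30 days (value is in seconds)
    [database]
    time_to_live = 2592000
)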

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] negative test framework: ready for review

2014-01-17 Thread Koderer, Marc
Hi all,

first part of the negative test framework is ready for review:

https://review.openstack.org/#/c/64733/

Please have a look.

Regards
Marc and David

DEUTSCHE TELEKOM AG
Digital Business Unit, Cloud Services (PI)
Marc Koderer
Cloud Technology Software Developer
T-Online-Allee 1, 64211 Darmstadt
www.telekom.com  

LIFE IS FOR SHARING.

DEUTSCHE TELEKOM AG
Supervisory Board: Prof. Dr. Ulrich Lehner (Chairman)
Board of Management: René Obermann (Chairman),
Reinhard Clemens, Niek Jan van Damme, Timotheus Höttges,
Dr. Thomas Kremer, Claudia Nemat, Prof. Dr. Marion Schick
Commercial register: Amtsgericht Bonn HRB 6794
Registered office: Bonn

BIG CHANGES START SMALL – CONSERVE RESOURCES BY NOT PRINTING EVERY E-MAIL.
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [OSTF][Ceilometer] ceilometer meters and samples delete

2014-01-17 Thread Nadya Privalova
Hi guys,

I would ask in another way.
Ceilometer has a mechanism to add a sample through POST, so it seems
inconsistent not to allow the user to delete a sample as well.
IMHO, insertion and deletion through REST both look a little bit hacky: the user
always has the ability to fake data collected from the OpenStack services. But
maybe I'm just not seeing the valuable use cases.
Anyway, it seems reasonable to either have both add_sample and delete_sample in
the API, or to have neither.
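
For clarity, the POST I mean is the one on /v2/meters/<meter_name>; a rough
sketch (the endpoint, token handling and field names are from memory and only
meant to show the asymmetry -- there is no corresponding delete):

    # Rough sketch of posting a user-defined sample; endpoint, token and
    # field names are assumptions used only to illustrate the asymmetry.
    import json
    import requests


    def post_sample(ceilometer_url, token, meter='my.meter', resource='res-1'):
        body = [{'counter_name': meter,
                 'counter_type': 'gauge',
                 'counter_unit': 'instance',
                 'counter_volume': 1,
                 'resource_id': resource}]
        requests.post('%s/v2/meters/%s' % (ceilometer_url, meter),
                      headers={'X-Auth-Token': token,
                               'Content-Type': 'application/json'},
                      data=json.dumps(body))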

Thanks,
Nadya


On Fri, Jan 17, 2014 at 3:50 PM, Julien Danjou jul...@danjou.info wrote:

 On Fri, Jan 17 2014, Dmitry Iakunchikov wrote:

  Ceilometer's tests creates some data which could not be deleted.
  For example: after creation new instance, ceilometer creates new meters
 for
  this instance and there  is no possibility to delete them. Even after
  instance deletion they would not be deleted.
 
  Should we take attention to this? And find some way to delete all data,
 for
  example database wipe or patch to ceilometer client...
  Or just to skip this?

 Why would you delete this?
 There's a time-to-live mechanism in Ceilometer to delete old samples.

 --
 Julien Danjou
 // Free Software hacker / independent consultant
 // http://julien.danjou.info

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Evil Firmware

2014-01-17 Thread Ian Wells
On 17 January 2014 01:16, Chris Friesen chris.frie...@windriver.com wrote:

 On 01/16/2014 05:12 PM, CARVER, PAUL wrote:

  Jumping back to an earlier part of the discussion, it occurs to me
 that this has broader implications. There's some discussion going on
 under the heading of Neutron with regard to PCI passthrough. I
 imagine it's under Neutron because of a desire to provide passthrough
 access to NICs, but given some of the activity around GPU based
 computing it seems like sooner or later someone is going to try to
 offer multi-tenant cloud servers with the ability to do GPU based
 computing if they haven't already.


 I'd expect that the situation with PCI passthrough may be a bit different,
 at least in the common case.

 The usual scenario is to use SR-IOV to have a single physical device
 expose a bunch of virtual functions, and then a virtual function is passed
 through into a guest.


That entirely depends on the card in question.  Some cards support SRIOV
and some don't (you wouldn't normally use SRIOV on a GPU, as I understand
it, though you might reasonably expect it on a modern network card).  Even
on cards that do support SRIOV there's nothing stopping you assigning the
whole card.

But from the discussion here it seems that (whole card passthrough) +
(reprogrammable firmware) would be the danger, and programmatically
there's no way to tell from the passthrough code in Nova whether any given
card has programmable firmware.  It's a fairly safe bet you can't reprogram
firmware permanently from a VF, agreed.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Evil Firmware

2014-01-17 Thread Ian Wells
On 17 January 2014 09:12, Robert Collins robe...@robertcollins.net wrote:

  The physical function is the one with the real PCI config space, so as
  long as the host controls it then there should be minimal risk from the
  guests since they have limited access via the virtual
 functions--typically
  mostly just message-passing to the physical function.

 As long as it's a whitelist of audited message handlers, that's fine. Of
 course, if the message handlers haven't been audited, who knows what's
 lurking in there.


The description doesn't quite gel with my understanding - SRIOV VFs *do*
have a PCI space that you can map in, and a DMA as well, typically (which
is virtualised via the page table for the VM).  However, some functions of
the card may not be controllable in that space (e.g., for network devices,
VLAN encapsulation, promiscuity, and so on) and you may have to make a
request from the VF in the VM to the PF in the host kernel.

The message channels in question are implemented in the PF and VF drivers
in the Linux kernel code (the PF end being the one where security matters,
since a sufficiently malicious VM can try it on at the VF end and see what
happens).  I don't know whether you consider that audited enough.
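
(As an aside, the PF/VF relationship is at least visible from the host's sysfs,
which is roughly all the passthrough code has to go on; a small sketch using
the standard Linux layout, with a placeholder PCI address:

    # Sketch: list the virtual functions exposed by a physical function via
    # the standard Linux sysfs layout; the PCI address is a placeholder.
    import glob
    import os


    def list_vfs(pf_pci_addr='0000:04:00.0'):
        base = '/sys/bus/pci/devices/%s' % pf_pci_addr
        vfs = []
        for link in sorted(glob.glob(os.path.join(base, 'virtfn*'))):
            # each virtfnN symlink points at the VF's own PCI device entry
            vfs.append(os.path.basename(os.readlink(link)))
        return vfs
)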
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [OSTF][Ceilometer] ceilometer meters and samples delete

2014-01-17 Thread Julien Danjou
On Fri, Jan 17 2014, Nadya Privalova wrote:

 I would ask in another way.
 Ceilometer has a mechanism to add a sample through POST. So it looks not
 consistent not to allow user to delete a sample.
 IMHO, insertion and deletion through REST looks a little bit hacky: user
 always has an ability to fake data collected from OpenStack services. But
 maybe I don't see any valuable usecases.
 Anyway, it seems reasonable to have both add_sample and delete_sample in
 API or not to have neither.

From the user PoV, that totally makes sense, agreed.

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [OSTF][Ceilometer] ceilometer meters and samples delete

2014-01-17 Thread Ildikó Váncsa
Hi,

In my opinion it would be better to let the user define a TTL value for 
his/her own samples. I do not see the use case where deleting samples is 
needed; also, if the user is able to randomly delete some data, then the 
statistics functions will not generate valid output for the data set affected 
by the deletion. As the original question was about testing, I do not have 
deeper knowledge of how testing currently works with regard to generating and 
storing samples. Maybe, if there is a need for deleting data there, we can 
consider how to handle that particular case.

How would we identify the samples that should be deleted? Allowing arbitrary 
samples to be deleted from the system via the API does not sound good either. 
So if it is an all-or-nothing situation, I would say nothing.

Best Regards,
Ildiko

-Original Message-
From: Julien Danjou [mailto:jul...@danjou.info] 
Sent: Friday, January 17, 2014 1:35 PM
To: Nadya Privalova
Cc: fuel-...@lists.launchpad.net; OpenStack Development Mailing List (not for 
usage questions); Vladimir Kuklin
Subject: Re: [openstack-dev] [Fuel-dev] [OSTF][Ceilometer] ceilometer meters 
and samples delete

On Fri, Jan 17 2014, Nadya Privalova wrote:

 I would ask in another way.
 Ceilometer has a mechanism to add a sample through POST. So it looks 
 not consistent not to allow user to delete a sample.
 IMHO, insertion and deletion through REST looks a little bit hacky: 
 user always has an ability to fake data collected from OpenStack 
 services. But maybe I don't see any valuable usecases.
 Anyway, it seems reasonable to have both add_sample and delete_sample 
 in API or not to have neither.

From the user PoV, that totally makes sense, agreed.

--
Julien Danjou
# Free Software hacker # independent consultant # http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel-dev] [OSTF][Ceilometer] ceilometer meters and samples delete

2014-01-17 Thread Dmitry Iakunchikov
For now in Fuel we keep samples forever.

If we use time_to_live, how long should we keep this data?


2014/1/17 Julien Danjou jul...@danjou.info

 On Fri, Jan 17 2014, Nadya Privalova wrote:

  I would ask in another way.
  Ceilometer has a mechanism to add a sample through POST. So it looks not
  consistent not to allow user to delete a sample.
  IMHO, insertion and deletion through REST looks a little bit hacky: user
  always has an ability to fake data collected from OpenStack services. But
  maybe I don't see any valuable usecases.
  Anyway, it seems reasonable to have both add_sample and delete_sample in
  API or not to have neither.

 From the user PoV, that totally makes sense, agreed.

 --
 Julien Danjou
 # Free Software hacker # independent consultant
 # http://julien.danjou.info




-- 
With Best Regards
QA engineer Dmitry Iakunchikov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-01-17 Thread James Slagle
On Thu, Jan 16, 2014 at 7:29 PM, Clint Byrum cl...@fewbar.com wrote:
 Note that tripleo-incubator is special and should not be released. It
 is intentionally kept unfrozen and unreleased to make sure there is no
 illusion of stability.

I think it would be nice if we could point people at a devtest that
they could use with our other released stuff. Without that, we might
make a change to devtest, such as showing the use of a new heat
parameter in our templates, and if they're trying to follow along with
a released tripleo-heat-templates then they would have a problem.

Without a branch of incubator, there's no story or documentation
around using any of our released stuff.  You could follow along with
devtest to get an idea of how it's supposed to work and indeed it
might even work, but I don't think that's good enough. There is
tooling in incubator that has proved its usefulness. Take an example
like setup-endpoints: what we're effectively saying, if we don't allow
people to use it, is that there is a useful tool that will set up
endpoints for you, but don't use it with our released stuff because
it's not guaranteed to work; instead, make these 10-ish calls to
keystone via some other method. Then you'd also end up with a
different but parallel set of instructions for using our released
stuff vs. not.

This is prohibitive to someone who may want to setup a tripleo CI/CD
cloud deploying stable icehouse or from milestone branches. I think
people would just create their own fork of tripleo-incubator and use
that.

 If there are components in it that need releasing, they should be moved
 into relevant projects or forked into their own projects.

I'd be fine with that approach, except that's pretty much everything
in incubator, the scripts, templates, generated docs, etc. Instead of
creating a new forked repo, why don't we just rename tripleo-incubator
to tripleo-deployment and have some stable branches that people could
use with our releases?

I don't feel like that precludes tripleo from having no stability on
the master branch at all.

 Excerpts from Ryan Brady's message of 2014-01-16 07:42:33 -0800:
 +1 for releases.

 In the past I requested a tag for tripleo-incubator to make it easier to 
 build a package and test.

 In my case a common tag would be easier to track than trying to gather all 
 of the commit hashes where
 the projects are compatible.

 Ryan

 - Original Message -
 From: James Slagle james.sla...@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Thursday, January 16, 2014 10:13:58 AM
 Subject: [openstack-dev] [TripleO] milestone-proposed branches

 At last summit, we talked about doing stable branches and releases for
 the TripleO projects for Icehouse.

 I'd like to propose doing a milestone-proposed branch[1] and tagged
 release for icehouse milestones 2 and 3. Sort of as dry run and
 practice, as I think it could help tease out some things we might not
 have considered when we do try to do icehouse stable branches.

 The icehouse milestone 2 date is January 23rd [2]. So, if there is
 consensus to do this, we probably need to get the branches created
 soon, and then do any bugfixes in the branches (master too of course)
 up until the 23rd.

 I think it'd be nice if we had a working devtest to use with the
 released tarballs.  This raises a couple of points:
  - We probably need a way in devtest to let people use a different
 branch (or tarball) of the stuff that is git cloned.
 - What about tripleo-incubator itself? We've said in the past we don't
 want to attempt to stabilize or release that due to its incubator
 nature.  But, if we don't have a stable set of devtest instructions
 (and accompanying scripts like setup-endpoints, etc), then using an
 ever changing devtest with the branches/tarballs is not likely to work
 for very long.

 And yes, I'm volunteering to do the work to support the above, and the
 release work :).

 Thoughts?

 [1] https://wiki.openstack.org/wiki/BranchModel
 [2] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule

 --
 -- James Slagle
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-01-17 Thread David Kranz

On 01/16/2014 10:56 PM, Matthew Treinish wrote:

Hi everyone,

With some recent changes made to Tempest, compatibility with nosetests is going
away. We've started using newer features that nose just doesn't support. One
example of this is that we've started using testscenarios and we're planning to
do this in more places moving forward.

So at Icehouse-3 I'm planning to push the patch out to remove nosetests from the
requirements list and all the workarounds and references to nose will be pulled
out of the tree. Tempest will also start raising an unsupported exception when
you try to run it with nose so that there isn't any confusion on this moving
forward. We talked about doing this at summit briefly and I've brought it up a
couple of times before, but I believe it is time to do this now. I feel for
tempest to move forward we need to do this now so that there isn't any ambiguity
as we add even more features and new types of testing.

I'm with you up to here.


Now, this will have implications for people running tempest with python 2.6
since up until now we've set nosetests. There is a workaround for getting
tempest to run with python 2.6 and testr see:

https://review.openstack.org/#/c/59007/1/README.rst

but essentially this means that when nose is marked as unsupported on tempest
python 2.6 will also be unsupported by Tempest (which, honestly, it basically has
been for a while now; we've just gone without making it official).
The way we handle different runners/OSes can be categorized as "tested in 
gate", "unsupported" (should work, possibly with some hacks needed), and 
"hostile". At present, I would say both nose and py2.6 are in the 
"unsupported" category. The title of this message and the content up to 
here say we are moving nose to the "hostile" category. With only two months 
to feature freeze, I see no justification for moving py2.6 to the "hostile" 
category as well. I don't see what new testing features scheduled for the next 
two months would be enabled by saying that Tempest cannot and will not 
run on 2.6. It has been agreed, I think, by all projects that py2.6 will 
be dropped in J. It is OK that py2.6 requires some hacks to work, and if it 
needs a few more in the next few months, that is OK too. If I am missing 
another connection between the py2.6 and nose issues, please explain.


 -David




-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-17 Thread Thomas Herve
Hi all,

I've been looking at Neutron default LBaaS provider using haproxy, and while 
it's working nicely, it seems to have several shortcomings in terms of 
scalability and high availability. The Libra project seems to offer a more 
robust alternative, at least for scaling. The haproxy implementation in Neutron 
seems to continue to evolve (like with 
https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy), but I'm 
wondering why we can't leverage Libra. The APIs are a bit different, but the 
goals look very similar, and there is a waste of effort with 2 different 
implementations. Maybe we could see a Libra driver for Neutron LBaaS for 
example?

Thanks,

-- 
Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on ssh_floating.py and improve it

2014-01-17 Thread Koderer, Marc
Hi Julien,

most of the cases in tempest/stress are already covered by existing tests in /api
or /scenario. The only thing that is missing is the decorator on them.

BTW here is the Etherpad from the summit talk that we had:
https://etherpad.openstack.org/p/icehouse-summit-qa-stress-tests

It may help to understand the current state. I haven't managed to work on the
action items that are left.

Your suggestions sound good, so I'd be happy to see some patches :)

Regards
Marc

From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Friday, January 17, 2014 11:52 AM
To: Koderer, Marc
Cc: openstack-dev@lists.openstack.org
Subject: RE: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection 
on ssh_floating.py and improve it

Hello Marc,

Thanks for your answer.

At the moment I'm willing to spend some time on this kind of scenario so I will 
see how to use the stress decorator inside a scenario test.
Does this means that all stress tests available in tempest/stress should be 
ported as scenario tests with this decorator ?

I do have some ideas about features on stress test that I find useful for my 
own use case : like adding more statistics on stress test runs in order to use 
them as benchmarks.
I don't know if this kind of feature was already discussed in the OpenStack 
community but since stress tests are a bit deprecated now, maybe there is some 
room for this kind of improvement on fresh stress tests.

Best Regards,

Julien LELOUP

-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Friday, January 17, 2014 9:45 AM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on 
ssh_floating.py and improve it

Hello Julien,

I forwarded your mail to the correct mailing list. Please do not use the qa 
list any longer.

I am happy that you are interested in stress tests. All the tests in 
tempest/stress/actions are more or less deprecated. So what you should use 
instead is the stress decorator (e.g. 
https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_actions.py#L55).
Unfortunately it's not yet used for scenarios like you describe. I'd suggest to 
build a scenario test in tempest/scenario and use this decorator on it.

Any patch like that is welcome on gerrit. If you are planning to work in that 
area for more than just a patch, a blueprint would be nice. A good way to 
coordinate your efforts is also in the QA meeting
(https://wiki.openstack.org/wiki/Meetings/QATeamMeeting)

Regards
Marc


From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Wednesday, January 15, 2014 5:57 PM
To: openstack...@lists.openstack.org
Subject: [openstack-qa] [Tempest - Stress Test] : implement a full SSH 
connection on ssh_floating.py and improve it

Hi everyone,

I’m quite new on OpenStack / Tempest and I’m actually working on stress tests. 
I want to suggest a new feature in a currently available stress test.
Not sure if this email should be posted on the QA mailing list or the dev 
mailing list, but I give it a try here since it is about a Tempest stress test ☺

At the moment the “ssh_floating.py” stress test seems really interesting but I 
would like to improve it a bit.

By now this script is simulating an SSH connection by binding a TCP socket on 
the newly created instance. But this test don’t allow us to check if this 
instance is really available. I’m mostly thinking about the metadata service 
unable to provide the SSH key pair to the instance, but surely other scenarios 
can lead to an instance considered “ACTIVE” but actually unusable.

So I’m implementing a full SSH connection test using the “paramiko” SSH library 
and a key pair generated in the same way the other test resources are managed 
in this script: either one SSH key pair for all test runs or a new key pair 
for each run (depending on the JSON configuration file).
I don't plan to remove the old test (TCP socket binding), but rather to move it 
to a separate test function and put the full SSH connection test code in its place.
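
For illustration, the check I have in mind is roughly the following (a minimal 
sketch only, assuming paramiko and an already generated RSA key; the names are 
illustrative, not the final code):

import paramiko

def check_ssh_alive(ip, username, private_key, timeout=60):
    # A real SSH login plus a trivial command is a much stronger check
    # than just binding a TCP socket on the instance.
    client = paramiko.SSHClient()
    # The test instance's host key is not known in advance.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(ip, username=username, pkey=private_key,
                       timeout=timeout, allow_agent=False,
                       look_for_keys=False)
        stdin, stdout, stderr = client.exec_command('hostname')
        return stdout.channel.recv_exit_status() == 0
    except (paramiko.SSHException, EnvironmentError):
        return False
    finally:
        client.close()

The key pair itself can be produced once per run (or once per test) with 
paramiko.RSAKey.generate(2048), matching the way the other resources are 
already managed in the script.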

Is this feature interesting for the OpenStack community ?
Should I create a blueprint on the Tempest project on Launchpad in order to 
provide my code through Gerrit ?

As a second step, I plan to improve this “ssh_floating.py” script overall by 
cleaning the code a little bit, and add more cleanup code in order to avoid 
leaving instances/security groups/floating IPs behind: I do see this kind of 
behavior right now and I have already improved the teardown() in this way.

Should I consider this code as a new functionality (thus create a blueprint) or 
should I create a defect and assign it to myself ?

Cordialement / Best Regards,

Julien LELOUP

R&D 3DExperience Platform IaaS Factory Technology Engineer





julien.lel...@3ds.com


3DS.COM <http://www.3ds.com/>


Dassault 

Re: [openstack-dev] [TripleO][Tuskar] Domain Model Locations

2014-01-17 Thread Jay Pipes
On Thu, 2014-01-16 at 20:59 -0800, Clint Byrum wrote:
 Excerpts from Jay Pipes's message of 2014-01-12 11:40:41 -0800:
  On Fri, 2014-01-10 at 10:28 -0500, Jay Dobies wrote:
So, it's not as simple as it may initially seem :)
   
Ah, I should have been clearer in my statement - my understanding is 
that
we're scrapping concepts like Rack entirely.
   
   That was my understanding as well. The existing Tuskar domain model was 
   largely placeholder/proof of concept and didn't necessarily reflect 
   exactly what was desired/expected.
  
  Hmm, so this is a bit disappointing, though I may be less disappointed
  if I knew that Ironic (or something else?) planned to account for
  datacenter inventory in a more robust way than is currently modeled.
  
  If Triple-O/Ironic/Tuskar are indeed meant to be the deployment tooling
  that an enterprise would use to deploy bare-metal hardware in a
  continuous fashion, then the modeling of racks, and the attributes of
  those racks -- location, power supply, etc -- are a critical part of the
  overall picture.
  
 
 To be clear, the goal first is to have them be the deployment tooling that
 _somebody_ would use in production. Enterprise is pretty amorphous. If
 I'm running a start-up but it is a start-up that puts all of its money
 into a 5000 node public cloud, am I enterprise?

I, too, hate the term enterprise. Sorry for using it.

In my post, you can replace enterprise with business that has legacy
hardware inventory or datacenter deployment tooling/practices already in
place.

 Nothing in the direction that has been laid out precludes Tuskar and
 Ironic from consuming one of the _many_ data center inventory management
 solutions and CMDB's that exist now.

OK, cool.

 If there is a need for OpenStack to grow one, I think we will. Lord
 knows we've reinvented half the rest of the things we needed. ;-)

LOL, touché.

 For now I think Tuskar should focus on feeding multiple groups into Nova,
 and Nova and Ironic should focus on making sure they can handle multiple
 group memberships for compute resources and schedule appropriately. Do
 that and it will be relatively straight forward to adapt to racks, pods,
 power supplies, or cooling towers.

Completely agreed, which is why I say in my post have it somewhere on
the roadmap and doesn't have to be done tomorrow.

  As an example of why something like power supply is important... inside
  AT&T, we had both 8kW and 16kW power supplies in our datacenters. For a
  42U or 44U rack, deployments would be limited to a certain number of
  compute nodes, based on that power supply.
  
  The average power draw for a particular vendor model of compute worker
  would be used in determining the level of compute node packing that
  could occur for that rack type within a particular datacenter. This was
  a fundamental part of datacenter deployment and planning. If the tooling
  intended to do bare-metal deployment of OpenStack in a continual manner
  does not plan to account for these kinds of things, then the chances
  that tooling will be used in enterprise deployments is diminished.
 
 Right, the math can be done in advance and racks/PSUs/boxes grouped
 appropriately. Packing is one of those things that we need a holistic
 scheduler for to be fully automated. I'm not convinced that is even a
 mid-term win, when there are so many big use-cases that can be handled
 with so much less complexity.

No disagreement from me at all.

  And, as we all know, when something isn't used, it withers. That's the
  last thing I want to happen here. I want all of this to be the
  bare-metal deployment tooling that is used *by default* in enterprise
  OpenStack deployments, because the tooling fits the expectations of
  datacenter deployers.
  
  It doesn't have to be done tomorrow :) It just needs to be on the map
  somewhere. I'm not sure if Ironic is the place to put this kind of
  modeling -- I thought Tuskar was going to be that thing. But really,
  IMO, it should be on the roadmap somewhere.
 
 I agree, however I think the primitive capabilities, informed by helpful
 use-cases such as the one you describe above, need to be understood
 before we go off and try to model a UI around them.

Yup, totally. I'll keep a-lurking and following along various threads,
offering insight where I can.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-17 Thread Clark, Robert Graham
On 17/01/2014 08:19, Robert Collins wrote:
 On 16 January 2014 03:31, Alan Kavanagh alan.kavan...@ericsson.com wrote:
 Hi fellow OpenStackers



 Does anyone have any recommendations on open source tools for disk
 erasure/data destruction software. I have so far looked at DBAN and disk
 scrubber and was wondering if ironic team have some better recommendations?

 So for Ironic this is a moderately low priority thing right now - and
 certainly I think it should be optional (what the default is is a
 different discussion).

 It's low priority because there are -so- many other concerns about
 sharing bare metal machines between tenants that don't have
 comprehensive mutual trust, that it's really not viable today (even on
 relatively recent platforms IMNSHO).

 -Rob



For all but the most paranoid of applications a single pass overwrite is 
enough to ensure that all data is securely removed from a magnetic disk.

There is some truth to the claim that data can still be read after a 
re-write; the technique is known as magnetic force microscopy 
(https://www.usenix.org/legacy/publications/library/proceedings/sec96/full_papers/gutmann/index.html).
It's an incredibly expensive method of data recovery, used only by a few 
organisations, most of which I assume are intelligence agencies.

A single pass overwrite is fine for wiping the contents of a disk beyond 
all reasonable means of recovery. If you're trying to protect your data 
from recovery by intelligence agencies with access to the hardware, 
there are probably a lot of more important things you need to do to 
secure your data before you try to work out how many DBAN re-writes you 
want.
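
To make the single pass concrete, it really is just one stream of zeros (or 
random data) across the whole device. A minimal Python sketch (my own 
illustration, not an Ironic feature), assuming a Linux block device path and 
exclusive access to the device:

import os

def zero_fill(device_path, chunk_size=4 * 1024 * 1024):
    # Single-pass zero overwrite of a whole block device.
    # WARNING: destroys all data on device_path.
    fd = os.open(device_path, os.O_WRONLY)
    try:
        # Find the device size so we cover it exactly once.
        size = os.lseek(fd, 0, os.SEEK_END)
        os.lseek(fd, 0, os.SEEK_SET)
        chunk = b'\x00' * chunk_size
        written = 0
        while written < size:
            to_write = min(chunk_size, size - written)
            written += os.write(fd, chunk[:to_write])
        os.fsync(fd)
    finally:
        os.close(fd)

The pattern and the number of passes matter far less than simply covering 
every sector once.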

SSD's are more complicated because they have wear-leveling controllers 
that spread data out in ways that mean you can't necessarily guarantee 
that every block will get written during an overwrite.

If you'd like a more detailed answer I'm sure the folks in the OSSG 
would be happy to help: openstack-secur...@lists.openstack.org

Cheers
-Rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Neutron] auto configration of local_ip

2014-01-17 Thread balaj...@freescale.com
Hi,

Once we configure the interface of the compute node for local_ip, it can be used 
for both IPv4 and IPv6 based on the deployment network.

Regards,
Balaji.P

From: Nir Yechiel [mailto:nyech...@redhat.com]
Sent: Thursday, January 16, 2014 9:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack General Mailing List
Subject: Re: [openstack-dev] [Openstack] [Neutron] auto configration of local_ip



From: Martinx - ジェームズ thiagocmarti...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) openstack-dev@lists.openstack.org
Cc: OpenStack General Mailing List openst...@lists.openstack.org
Sent: Thursday, January 16, 2014 5:41:09 PM
Subject: Re: [openstack-dev] [Openstack] [Neutron] auto configration of
local_ip

Guys,

Let me ask something about this...

Apparently, VXLAN can be easier to implement/maintain when using it with IPv6 
(read about it here: http://www.nephos6.com/pdf/OpenStack-on-IPv6.pdf), so I'm 
wondering if local_ip can be an IPv6 address (for Icehouse-3 / Ubuntu 14.04) and, 
of course, if it is better at the end of the day.

Thoughts?!

Cheers!
Thiago

AFAIK the VXLAN draft addresses only IPv4, but IPv6 should be there in the 
future.


On 16 January 2014 12:58, balaj...@freescale.com wrote:
 2014/1/16 NOTSU Arata no...@virtualtech.jp:
  The question is, what criteria is appropriate for the purpose. The
 criteria being mentioned so far in the review are:
 
  1. assigned to the interface attached to default gateway
  2. being in the specified network (CIDR)
  3. assigned to the specified interface (1 can be considered a special case of 3)
 

 For a certain deployment scenario, local_ip is totally different among
 those nodes, but if we consider local_ip as local_interface, it may
 match most of the nodes. I think it is more convenient to switch from
 ip to interface parameter.

[P Balaji-B37839] We implemented this and are using it in our test setup. We are glad 
to share this through a blueprint/bug if anybody is interested.
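
To illustrate the idea (this is not the actual patch), resolving a configured 
local_interface to the address the agent should use can be a very small helper, 
assuming the netifaces library is available on the node:

import netifaces

def resolve_local_ip(local_interface, want_ipv6=False):
    # Return the first address configured on local_interface, so one
    # config file can be shared by nodes whose addresses all differ.
    family = netifaces.AF_INET6 if want_ipv6 else netifaces.AF_INET
    addresses = netifaces.ifaddresses(local_interface).get(family)
    if not addresses:
        raise ValueError('No usable address on %s' % local_interface)
    # Strip a possible zone index such as 'fe80::1%eth1' on IPv6.
    return addresses[0]['addr'].split('%')[0]

Whether such an option replaces local_ip or complements it is exactly the kind 
of detail the blueprint/bug should spell out.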

Regards,
Balaji.P



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ ceilometer] The Pagination patches

2014-01-17 Thread Jay Pipes
On Fri, 2014-01-17 at 06:59 +, Gao, Fengqian wrote:
 Hi, Jay,
 
 Thanks for your reply.
 According to your review comments, I rebased my code; please see my comments 
 for your questions.
 For the issue of ensuring there is at least one unique column in the sort 
 keys if pagination is used, I have an idea:
 how about adding the primary key at the end of the sort keys list passed from users?

Thanks, Fengqian,

Yes, that is exactly what I was thinking... make a helper method that
wraps paginate_query() and looks at the sort_keys and, if there is no
column that is unique, it will append the primary key column(s) to the
sort keys.
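
Roughly what I have in mind is the sketch below. It assumes the oslo-synced 
paginate_query() and that the model exposes its table via SQLAlchemy's 
declarative __table__; the import path and helper name are illustrative only:

# NOTE: adjust the import to wherever the project syncs
# openstack/common/db/sqlalchemy/utils.py from oslo-incubator.
from ceilometer.openstack.common.db.sqlalchemy import utils as db_utils

def paginate_query_unique(query, model, limit, sort_keys,
                          marker=None, sort_dir=None):
    # Guarantee at least one unique column in sort_keys before calling
    # paginate_query(), so pagination cannot loop through values.
    sort_keys = list(sort_keys or [])
    pk_names = model.__table__.primary_key.columns.keys()
    unique_names = set(pk_names)
    for column in model.__table__.columns:
        if column.unique:
            unique_names.add(column.name)
    if not any(key in unique_names for key in sort_keys):
        sort_keys.extend(name for name in pk_names if name not in sort_keys)
    return db_utils.paginate_query(query, model, limit, sort_keys,
                                   marker=marker, sort_dir=sort_dir)

That keeps the caller's sort order intact and only adds the primary key as a 
tie-breaker when none of the requested keys is unique.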

I'll review your latest patches later today (currently up in Montreal at
the Neutron/Tempest QA code sprint, so a little delayed...)

Best,
-jay

 --fengqian
 
 -Original Message-
 From: Jay Pipes [mailto:jaypi...@gmail.com] 
 Sent: Monday, January 13, 2014 5:10 AM
 To: Gao, Fengqian
 Cc: Julien Danjou (jul...@danjou.info); Doug Hellmann 
 (doug.hellm...@dreamhost.com); Lu, Lianhao; chu...@ca.ibm.com; 
 dper...@linux.vnet.ibm.com; akornie...@mirantis.com; 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev][ ceilometer] The Pagination patches
 
 On Fri, 2014-01-10 at 03:31 +, Gao, Fengqian wrote:
  Since we still have no new conclusion for pagination for now, shall we 
  still go on with the current pagination solution?
  
  Please help to review patches:
  
  https://review.openstack.org/#/c/41869/
  https://review.openstack.org/#/c/35454/
 
 Hi Fengqian,
 
 Please see my review on the first of the above patches. You need to ensure 
 that there is at least one unique column in the sort keys if pagination is 
 used.
 
 There is a similar issue in the second patch (reviewing it now) that uses the 
 sqlalchemy_utils (oslo.db) paginate_query() method...
 
 The sqlalchemy_utils.paginate_query() method **requires** that sort_keys 
 contains a unique column [1]. The docstring for that method says:
 
 Pagination works by requiring a unique sort_key, specified by sort_keys. (If 
 sort_keys is not unique, then we risk looping through values.)
 
 So, care must be taken to pass sort_keys parameter to paginate_query() that 
 already contains at least one unique column in the sort keys.
 
 Best,
 -jay
 
 [1]
 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/utils.py#L66
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest - Stress Test] : implement a full SSH connection on ssh_floating.py and improve it

2014-01-17 Thread LELOUP Julien
Hi Marc,

The Etherpad you provided was helpful to know the current state of the stress 
tests.

I admit that I have some difficulty understanding how I can run a single test 
built with the @stresstest decorator (even though I'm not a beginner in Python, I 
still have things to learn about this technology and a lot more about 
OpenStack/Tempest :) ).
I used to run my test using ./run_stress.py -t <JSON configuration pointing at 
my action/<name>.py script> -d <duration>, which allowed me to run only one test 
with a dedicated configuration (number of threads, ...)

From what I understand now in Tempest, I have only managed to run all tests, using 
./run_tests.sh, and the only configuration I found related to stress tests was 
the [stress] section in tempest.conf.

For example: let's say I ported my SSH stress test as a scenario test with the 
@stresstest decorator.
How can I launch this test (and only this one) and use a dedicated 
configuration file like the ones we can find in tempest/stress/etc?

Another question I have now: if I have to use run_tests.sh and 
not run_stress.py anymore, how do I get the test run statistics I used to 
have, and where should I put some code to improve them?

Once I have cleared up all these kinds of practical details, 
maybe I should add some content on the Wiki about stress tests in Tempest.

Best Regards,

Julien LELOUP
julien.lel...@3ds.com

-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Friday, January 17, 2014 3:07 PM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: RE: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection 
on ssh_floating.py and improve it

Hi Julien,

most of the cases in tempest/stress are already covered by existing tests in 
/api or /scenario. The only thing that is missing is the decorator on them.
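
To make that concrete, exposing a test to the stress runner is essentially just 
the decorator. A rough sketch modeled on test_volumes_actions.py (the base 
class and decorator arguments here are illustrative and may differ from the 
current tree):

from tempest.scenario import manager
from tempest import test


class TestServerBasicOpsStress(manager.OfficialClientTest):

    @test.stresstest(class_setup_per='process')
    @test.services('compute', 'network')
    def test_boot_and_ssh(self):
        # Ordinary scenario logic goes here: boot a server, associate a
        # floating IP, then open a real SSH session with the keypair.
        pass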

BTW here is the Etherpad from the summit talk that we had:
https://etherpad.openstack.org/p/icehouse-summit-qa-stress-tests

It possibly helps to understand the state. I didn't manage to work on the 
action items that are left.

Your suggestions sound good, so I'd be happy to see some patches :)

Regards
Marc

From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Friday, January 17, 2014 11:52 AM
To: Koderer, Marc
Cc: openstack-dev@lists.openstack.org
Subject: RE: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection 
on ssh_floating.py and improve it

Hello Marc,

Thanks for your answer.

At the moment I'm willing to spend some time on this kind of scenario so I will 
see how to use the stress decorator inside a scenario test.
Does this mean that all stress tests available in tempest/stress should be 
ported as scenario tests with this decorator?

I do have some ideas about features on stress test that I find useful for my 
own use case : like adding more statistics on stress test runs in order to use 
them as benchmarks.
I don't know if this kind of feature was already discussed in the OpenStack 
community but since stress tests are a bit deprecated now, maybe there is some 
room for this kind of improvement on fresh stress tests.

Best Regards,

Julien LELOUP

-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Friday, January 17, 2014 9:45 AM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on 
ssh_floating.py and improve it

Hello Julien,

I forwarded your mail to the correct mailing list. Please do not use the qa 
list any longer.

I am happy that you are interested in stress tests. All the tests in 
tempest/stress/actions are more or less deprecated. So what you should use 
instead is the stress decorator (e.g. 
https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_actions.py#L55).
Unfortunately it's not yet used for scenarios like you describe. I'd suggest to 
build a scenario test in tempest/scenario and use this decorator on it.

Any patch like that is welcome on gerrit. If you are planning to work in that 
area for more than just a patch, a blueprint would be nice. A good way to 
coordinate your efforts is also in the QA meeting
(https://wiki.openstack.org/wiki/Meetings/QATeamMeeting)

Regards
Marc


From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Wednesday, January 15, 2014 5:57 PM
To: openstack...@lists.openstack.org
Subject: [openstack-qa] [Tempest - Stress Test] : implement a full SSH 
connection on ssh_floating.py and improve it

Hi everyone,

I’m quite new on OpenStack / Tempest and I’m actually working on stress tests. 
I want to suggest a new feature in a currently available stress test.
Not sure if this email should be posted on the QA mailing list or the dev 
mailing list, but I give it a try here since it is about a Tempest stress test ☺

At the moment the “ssh_floating.py” stress test seems really interesting but I 
would like to improve it 

Re: [openstack-dev] [nova] how is resource tracking supposed to work for live migration and evacuation?

2014-01-17 Thread Murray, Paul (HP Cloud Services)
To be clear - the changes that Yunhong describes below are not part of the 
extensible-resource-tracking blueprint. Extensible-resource-tracking has the 
more modest aim of providing plugins to track additional resource data.

Paul.

-Original Message-
From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com] 
Sent: 17 January 2014 05:54
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] how is resource tracking supposed to work 
for live migration and evacuation?

There was some related discussion on this before. 

There is a BP at 
https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking which 
tries to support more resources.

And I have documentation at 
https://docs.google.com/document/d/1gI_GE0-H637lTRIyn2UPfQVebfk5QjDi6ohObt6MIc0 
. My idea is to keep the claim as an object which can be invoked remotely, and 
the claim result is kept in DB as the instance's usage. I'm working on it now.

Thanks
--jyh

 -Original Message-
 From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
 Sent: Thursday, January 16, 2014 2:27 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] how is resource tracking supposed 
 to work for live migration and evacuation?
 
 
 On Jan 16, 2014, at 1:12 PM, Chris Friesen 
 chris.frie...@windriver.com
 wrote:
 
  Hi,
 
  I'm trying to figure out how resource tracking is intended to work 
  for live
 migration and evacuation.
 
  For a while I thought that maybe we were relying on the call to
 ComputeManager._instance_update() in
 ComputeManager.post_live_migration_at_destination().  However, in
  ResourceTracker.update_usage() we see that on a live migration the
 instance that has just migrated over isn't listed in 
 self.tracked_instances and so we don't actually update its usage.
 
  As far as I can see, the current code will just wait for the audit 
  to run at
 some unknown time in the future and call update_available_resource(), 
 which will add the newly-migrated instance to self.tracked_instances 
 and update the resource usage.
 
  From my poking around so far the same thing holds true for 
  evacuation
 as well.
 
  In either case, just waiting for the audit seems somewhat haphazard.
 
  Would it make sense to do something like
 ResourceTracker.instance_claim() during the migration/evacuate and 
 properly track the resources rather than wait for the audit?
 
 Yes that makes sense to me. Live migration was around before we had a 
 resource tracker so it probably was just never updated.
 
 Vish
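
For reference, the shape of the change being suggested is roughly the 
following (a sketch only; the method names follow the existing resource 
tracker / instance_claim() API, but the exact hook point in the live-migration 
and evacuate flows still needs to be worked out):

# Inside ComputeManager on the destination host (illustrative only).
def _claim_for_incoming_instance(self, context, instance, node, limits=None):
    rt = self._get_resource_tracker(node)
    # instance_claim() returns a Claim usable as a context manager; if
    # the host cannot fit the instance it raises
    # ComputeResourcesUnavailable, which would let us abort the
    # migration or evacuation before anything lands on this host.
    return rt.instance_claim(context, instance, limits)

The claim would then be aborted on failure, or simply superseded once the 
audit picks the instance up as a normally tracked one.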
 
 
  Chris
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Tuskar] [UX] Infrastructure Management UI - Icehouse scoped wireframes

2014-01-17 Thread Jaromir Coufal

Sure Steve, that would be awesome!

The only blocker for now is that some changes are still happening based on 
feedback about what is doable and what is not. So as soon as we get more 
confident in a stable(-ish) version (or at least once I sort out which 
widgets should stay as they are), it will be very valuable input. 
I'll definitely let you know!


-- Jarda

On 2014/17/01 01:16, Steve Doll wrote:

Looking good, let me know if I can be of help to make some high-fidelity
mockups.


On Thu, Jan 16, 2014 at 6:30 AM, Jay Dobies jason.dob...@redhat.com wrote:

This is a really good evolution. I'm glad the wireframes are getting
closer to what we're doing for Icehouse.

A few notes...

On page 6, what does the Provisioning Status chart reflect? The math
doesn't add up if that's supposed to reflect the free v. deployed.
That might just be a sample data thing, but the term Provisioning
Status makes it sound like this could be tracking some sort of
ongoing provisioning operation.

What's the distinction between the config shown on the first
deployment page and the ones under more options? Is the idea that
the ones on the overview page must be specified before the first
deployment but the rest can be left to the defaults?

The Roles (Resource Category) subtab disappeared but the edit role
dialog is still there. How do you get to it?

Super happy to see the progress stuff represented. I think it's a
good first start towards handling the long running changes.

I like the addition of the Undeploy button, but since it's largely a
dev utility it feels a bit weird being so prominent. Perhaps
consider moving it under scale deployment; it's a variation of
scaling, just scaling back to nothing  :)

You locked the controller count to 1 (good call for Icehouse) but
still have incrementers on the scale page. That should also be
disabled and hardcoded to 1, right?




On 01/16/2014 08:41 AM, Hugh O. Brock wrote:

On Thu, Jan 16, 2014 at 01:50:00AM +0100, Jaromir Coufal wrote:

Hi folks,

thanks everybody for feedback. Based on that I updated
wireframes
and tried to provide a minimum scope for Icehouse timeframe.


http://people.redhat.com/~jcoufal/openstack/tripleo/2014-01-16_tripleo-ui-icehouse.pdf

Hopefully we are able to deliver described set of features.
But if
you find something what is missing which is critical for the
first
release (or that we are implementing a feature which should
not have
such high priority), please speak up now.

The wireframes are very close to implementation. In time, more views will 
appear and we will see if we can get them in as well.

Thanks all for participation
-- Jarda


These look great Jarda, I feel like things are coming together here.

--Hugh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

*Steve Doll*
Art Director, Mirantis Inc.
sd...@mirantis.com
Mobile: +1-408-893-0525
Skype: sdoll-mirantis



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-17 Thread Robert Li (baoli)
Yunhong,

Thank you for bringing that up on the live migration support. In addition
to the two solutions you mentioned, Irena has a different solution. Let me
put all of them here again:
1. network xml/group based solution.
   In this solution, each host that supports a provider net/physical
net can define a SRIOV group (it's hard to avoid the term as you can see
from the suggestion you made based on the PCI flavor proposal). For each
SRIOV group supported on a compute node, a network XML will be created the
first time the nova compute service runs on that node.
* nova will conduct scheduling, but not PCI device allocation
* it's a simple and clean solution, documented in libvirt as the
way to support live migration with SRIOV. In addition, a network xml is
nicely mapped into a provider net.
2. network xml per PCI device based solution
   This is the solution you brought up in this email, and Ian
mentioned this to me as well. In this solution, a network xml is created
when a VM is created. The network xml needs to be removed once the VM is
removed. This hasn't been tried out as far as I know.
3. interface xml/interface rename based solution
   Irena brought this up. In this solution, the ethernet interface
name corresponding to the PCI device attached to the VM needs to be
renamed. One way to do so without requiring system reboot is to change the
udev rules file for interface renaming, followed by a udev reload.

Now, with the first solution, Nova doesn't seem to have control over or
visibility of the PCI device allocated for the VM before the VM is
launched. This needs to be confirmed with the libvirt support and see if
such capability can be provided. This may be a potential drawback if a
neutron plugin requires detailed PCI device information for operation.
Irena may provide more insight into this. Ideally, neutron shouldn't need
this information because the device configuration can be done by libvirt
invoking the PCI device driver.

The other two solutions are similar. For example, you can view the second
solution as one way to rename an interface, or camouflage an interface
under a network name. They all require additional work before the VM is
created and after the VM is removed.

I also agree with you that we should take a look at XenAPI on this.


With regard to your suggestion on how to implement the first solution with
some predefined group attribute, I think it definitely can be done. As I
have pointed out earlier, the PCI flavor proposal is actually a
generalized version of the PCI group. In other words, in the PCI group
proposal, we have one predefined attribute called PCI group, and
everything else works on top of that. In the PCI flavor proposal,
attributes are arbitrary. So certainly we can define a particular attribute
for networking, which let's temporarily call sriov_group. But I can see
with this idea of predefined attributes, more of them will be required by
different types of devices in the future. I'm sure it will keep us busy,
although I'm not sure in a good way.
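
Purely to illustrate the difference (none of this is an implemented API; the 
attribute names below are made up for this discussion):

# PCI-group style: a single predefined attribute that everything keys off.
pci_group_example = {
    'pci_group': 'sriov_physnet1',
}

# PCI-flavor style: arbitrary attributes, one of which could be the
# networking attribute we are tentatively calling sriov_group.
pci_flavor_example = {
    'name': 'fast-nic-physnet1',
    'vendor_id': '8086',
    'product_id': '10ed',
    'sriov_group': 'physnet1',
}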

I was expecting you or someone else could provide a practical deployment
scenario that would justify the flexibility and the complexity.
Although I'd prefer to keep it simple and generalize it later once a
particular requirement is clearly identified, I'm fine with going that way if
that's what most of the folks want to do.

--Robert



On 1/16/14 8:36 PM, yunhong jiang yunhong.ji...@linux.intel.com wrote:

On Thu, 2014-01-16 at 01:28 +0100, Ian Wells wrote:
 To clarify a couple of Robert's points, since we had a conversation
 earlier:
 On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote:
   ---  do we agree that BDF address (or device id, whatever
 you call it), and node id shouldn't be used as attributes in
 defining a PCI flavor?
 
 
 Note that the current spec doesn't actually exclude it as an option.
 It's just an unwise thing to do.  In theory, you could elect to define
 your flavors using the BDF attribute but determining 'the card in this
 slot is equivalent to all the other cards in the same slot in other
 machines' is probably not the best idea...  We could lock it out as an
 option or we could just assume that administrators wouldn't be daft
 enough to try.
 
 
 * the compute node needs to know the PCI flavor.
 [...] 
   - to support live migration, we need to use
 it to create network xml
 
 
 I didn't understand this at first and it took me a while to get what
 Robert meant here.
 
 This is based on Robert's current code for macvtap based live
 migration.  The issue is that if you wish to migrate a VM and it's
 tied to a physical interface, you can't guarantee that the same
 physical interface is going to be used on the target machine, but at
 the same time you can't change the libvirt.xml as it comes over with
 the migrating machine.  The answer is to define a network 

Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-01-17 Thread Matthew Treinish
On Fri, Jan 17, 2014 at 08:32:19AM -0500, David Kranz wrote:
 On 01/16/2014 10:56 PM, Matthew Treinish wrote:
 Hi everyone,
 
 With some recent changes made to Tempest, compatibility with nosetests is 
 going
 away. We've started using newer features that nose just doesn't support. One
 example of this is that we've started using testscenarios and we're planning 
 to
 do this in more places moving forward.
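
(For anyone unfamiliar with it, a minimal testscenarios-style test looks like 
the sketch below -- an illustration, not code from the tempest tree. testtools 
and testr multiply the test out per scenario, which is exactly the kind of 
collection nose does not understand.)

import testscenarios
import testtools

# Hook testscenarios into load_tests so each scenario becomes a test.
load_tests = testscenarios.load_tests_apply_scenarios


class TestAddressFormat(testtools.TestCase):

    scenarios = [
        ('ipv4', {'version': 4, 'sample': '192.0.2.1'}),
        ('ipv6', {'version': 6, 'sample': '2001:db8::1'}),
    ]

    def test_version_matches_sample(self):
        separators = {4: '.', 6: ':'}
        self.assertIn(separators[self.version], self.sample)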
 
 So at Icehouse-3 I'm planning to push the patch out to remove nosetests from 
 the
 requirements list and all the workarounds and references to nose will be 
 pulled
 out of the tree. Tempest will also start raising an unsupported exception 
 when
 you try to run it with nose so that there isn't any confusion on this moving
 forward. We talked about doing this at summit briefly and I've brought it up 
 a
 couple of times before, but I believe it is time to do this now. I feel for
 tempest to move forward we need to do this now so that there isn't any 
 ambiguity
 as we add even more features and new types of testing.
 I'm with you up to here.
 
 Now, this will have implications for people running tempest with python 2.6
 since up until now we've set nosetests. There is a workaround for getting
 tempest to run with python 2.6 and testr see:
 
 https://review.openstack.org/#/c/59007/1/README.rst
 
 but essentially this means that when nose is marked as unsupported on tempest
 python 2.6 will also be unsupported by Tempest. (which honestly it basically 
 has
 been for while now just we've gone without making it official)
 The way we handle different runners/os can be categorized as tested
 in gate, unsupported (should work, possibly some hacks needed),
 and hostile. At present, both nose and py2.6 I would say are in
 the unsupported category. The title of this message and the content
 up to here says we are moving nose to the hostile category. With
 only 2 months to feature freeze I see no justification in moving
 py2.6 to the hostile category. I don't see what new testing features
 scheduled for the next two months will be enabled by saying that
 tempest cannot and will not run on 2.6. It has been agreed I think
 by all projects that py2.6 will be dropped in J. It is OK that py2.6
 will require some hacks to work and if in the next few months it
 needs a few more then that is ok. If I am missing another connection
 between the py2.6 and nose issues, please explain.
 

So honestly we're already at this point in tempest. Nose really just doesn't
work with tempest, and we're adding more features to tempest, your negative test
generator being one of them, that interfere further with nose. I've seen several
patches this cycle that attempted to introduce incorrect behavior while trying
to fix compatibility with nose. That's why I think we need a clear message on
this sooner rather than later, which is why I'm proposing actively raising an error
up front when things are run with nose, so there isn't any illusion that things
are expected to work.

This doesn't necessarily mean we're moving python 2.6 to the hostile category.
Nose support is independent of python 2.6 support. Py26 I would still consider
to be unsupported, the issue is that the hack to make py26 work is outside of
tempest. This is why we've recommended that people using python 2.6 run with
nose, which really is no longer an option. Attila's abandoned patch that I
linked above documents points to this bug with a patch to discover which is
need to get python 2.6 working with tempest and testr:

https://code.google.com/p/unittest-ext/issues/detail?id=79


-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Editing Nodes

2014-01-17 Thread Jay Dobies



On 01/17/2014 03:28 AM, mar...@redhat.com wrote:

On 16/01/14 00:28, Clint Byrum wrote:

Excerpts from James Slagle's message of 2014-01-15 05:07:08 -0800:

I'll start by laying out how I see editing or updating nodes working
in TripleO without Tuskar:

To do my initial deployment:
1.  I build a set of images for my deployment for different roles. The
images are different based on their role, and only contain the software
components needed to accomplish the role they are intended for.
2.  I load the images into glance
3.  I create the Heat template for my deployment, likely from
fragments that are already available. Set quantities, indicate which
images (via image uuid) are for which resources in heat.
4.  heat stack-create with my template to do the deployment

To update my deployment:
1.  If I need to edit a role (or create a new one), I create a new image.
2.  I load the new image(s) into glance
3.  I edit my Heat template, update any quantities, update any image uuids, etc.
4.  heat stack-update my deployment

In both cases above, I see the role of Tuskar being around steps 3 and 4.



Agreed!



+1 ...


review  /#/c/52045/ is about generating the overcloud template using
merge.py **. Having recently picked this up again and following latest
wireframes and UI design, it seems like most of the current Tuskar code is
going away. After initial panic I saw Jay has actually already started
that with /#/c/66062/

Jay: I think at some point (asap) my /#/c/52045/ will be rebased on your
  /#/c/66062/. Currently my code creates templates from the Tuskar
representations, i.e. ResourceClasses. For now I will assume that I'll
be getting something along the lines of:

{
'resource_categories': { 'controller': 1, 'compute': 4, 'object': 1,
'block' 2}
}

i.e. just resource categories and number of instances for each (plus any
other user supplied config/auth info). Will there be controllers (do we
need them, apart from a way to create, update, delete)? Let's talk some
more on irc later. I'll update the commit message on my review to point
to yours for now,

thanks! marios

** merge.py is going to be binned but it is the best thing we've got
_today_ and within the Icehouse timeframe.


My stuff got merged in today. You should be able to use db's api.py to 
grab everything you need. Ping me (jdob) if you have any questions on it 
or need some different queries.





Steps 1 and 2 are really CI's responsibility in a CD cloud. The end of
the testing phase is new images in glance! For a stable release cloud,
a tool for pulling new released images from elsewhere into Glance would
be really useful, but worst case an admin downloads the new images and
loads them manually.


I may be misinterpreting, but let me say that I don't think Tuskar
should be building images. There's been a fair amount of discussion
around a Nova native image building service [1][2]. I'm actually not
sure what the status/consensus on that is, but maybe longer term,
Tuskar might call an API to kick off an image build.



Tuskar should just deploy what it has available. I definitely could
see value in having an image updating service separate from Tuskar,
but I think there are many different answers for "how do images arrive
in Glance?".


Ok, so given that frame of reference, I'll reply inline:

On Mon, Jan 13, 2014 at 11:18 AM, Jay Dobies jason.dob...@redhat.com wrote:

I'm pulling this particular discussion point out of the Wireframes thread so
it doesn't get lost in the replies.

= Background =

It started with my first bulletpoint:

- When a role is edited, if it has existing nodes deployed with the old
version, are the automatically/immediately updated? If not, how do we
reflect that there's a difference between how the role is currently
configured and the nodes that were previously created from it?


I would think Roles need to be versioned, and the deployed version
recorded as Heat metadata/attribute. When you make a change to a Role,
it's a new version. That way you could easily see what's been
deployed, and if there's a newer version of the Role to deploy.



Could Tuskar version the whole deployment, but only when you want to
make it so ? If it gets too granular, it becomes pervasive to try and
keep track of or to roll back. I think that would work well with a goal
I've always hoped Tuskar would work toward which would be to mostly just
maintain deployment as a Heat stack that nests the real stack with the
parameters realized. With Glance growing Heat template storage capability,
you could just store these versions in Glance.


Replies:


I know you quoted the below, but I'll reply here since we're in a new thread.


I would expect any Role change to be applied immediately. If there is some
change where I want to keep older nodes as they are set up and apply new
settings only to newly added nodes, I would create a new Role instead.


-1 to applying immediately.

When you edit a Role, it gets a new version. But nodes that are
deployed with the 

Re: [openstack-dev] [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on ssh_floating.py and improve it

2014-01-17 Thread David Kranz

On 01/17/2014 09:06 AM, Koderer, Marc wrote:

Hi Julien,

most of the cases in tempest/stress are already covered by existing tests in /api
or /scenario. The only thing that is missing is the decorator on them.

BTW here is the Etherpad from the summit talk that we had:
https://etherpad.openstack.org/p/icehouse-summit-qa-stress-tests

It possibly helps to understand the state. I didn't manage to work on the action
items that are left.

Your suggestions sound good, so I'd be happy to see some patches :)

Regards
Marc
To clarify, it is still possible to have code for a stress test in 
tempest that is not a decorated scenario test. But such a stress test 
case would probably only be accepted if it were a good stress case that 
would, for some reason, be rejected as a regular scenario test.


 -David



From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Friday, January 17, 2014 11:52 AM
To: Koderer, Marc
Cc: openstack-dev@lists.openstack.org
Subject: RE: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on 
ssh_floating.py and improve it

Hello Marc,

Thanks for your answer.

At the moment I'm willing to spend some time on this kind of scenario so I will 
see how to use the stress decorator inside a scenario test.
Does this means that all stress tests available in tempest/stress should be 
ported as scenario tests with this decorator ?

I do have some ideas about features on stress test that I find useful for my 
own use case : like adding more statistics on stress test runs in order to use 
them as benchmarks.
I don't know if this kind of feature was already discussed in the OpenStack community but 
since stress tests are a bit deprecated now, maybe there is some room for this kind of 
improvement on fresh stress tests.

Best Regards,

Julien LELOUP

-Original Message-
From: Koderer, Marc [mailto:m.kode...@telekom.de]
Sent: Friday, January 17, 2014 9:45 AM
To: LELOUP Julien
Cc: openstack-dev@lists.openstack.org
Subject: [qa] RE: [Tempest - Stress Test] : implement a full SSH connection on 
ssh_floating.py and improve it

Hello Julien,

I forwarded your mail to the correct mailing list. Please do not use the qa 
list any longer.

I am happy that you are interested in stress tests. All the tests in 
tempest/stress/actions are more or less deprecated. So what you should use 
instead is the stress decorator (e.g. 
https://github.com/openstack/tempest/blob/master/tempest/api/volume/test_volumes_actions.py#L55).
Unfortunately it's not yet used for scenarios like you describe. I'd suggest to 
build a scenario test in tempest/scenario and use this decorator on it.

Any patch like that is welcome on gerrit. If you are planning to work in that 
area for more than just a patch, a blueprint would be nice. A good way to 
coordinate your efforts is also in the QA meeting
(https://wiki.openstack.org/wiki/Meetings/QATeamMeeting)

Regards
Marc


From: LELOUP Julien [julien.lel...@3ds.com]
Sent: Wednesday, January 15, 2014 5:57 PM
To: openstack...@lists.openstack.org
Subject: [openstack-qa] [Tempest - Stress Test] : implement a full SSH connection on 
ssh_floating.py and improve it

Hi everyone,

I’m quite new on OpenStack / Tempest and I’m actually working on stress tests. 
I want to suggest a new feature in a currently available stress test.
Not sure if this email should be posted on the QA mailing list or the dev 
mailing list, but I give it a try here since it is about a Tempest stress test ☺

At the moment the “ssh_floating.py” stress test seems really interesting but I 
would like to improve it a bit.

By now this script is simulating an SSH connection by binding a TCP socket on 
the newly created instance, but this test doesn’t allow us to check whether the 
instance is really available. I’m mostly thinking about the metadata service 
being unable to provide the SSH key pair to the instance, but surely other scenarios 
can lead to an instance considered “ACTIVE” but actually unusable.

So I’m implementing a full SSH connection test using the “paramiko” SSH library 
and a key pair generated in the same way the other test resources are managed 
in this script: either one SSH key pair for all test runs or a new key pair 
for each run (depending on the JSON configuration file).
I don’t plan to remove the old test (TCP socket binding), but rather to move it 
to a separate test function and put the full SSH connection test code in its place.

Is this feature interesting for the OpenStack community ?
Should I create a blueprint on the Tempest project on Launchpad in order to 
provide my code through Gerrit ?

As a second step, I plan to improve this “ssh_floating.py” script overall by 
cleaning the code a little bit, and add more cleanup code in order to avoid 
leaving instances/security groups/floating IPs behind: I do see this kind of 
behavior right now and I have already improved the teardown() in this way.

Should I consider this code as a new 

Re: [openstack-dev] a common client library

2014-01-17 Thread John Dennis
 Keeping them separate is awesome for *us* but really, really, really
 sucks for users trying to use the system. 
 
 I agree. Keeping them separate trades user usability for developer
 usability, I think user usability is a better thing to strive for.

I don't understand how multiple independent code bases with a lot of
overlapping code/logic is a win for developers. The more we can move to
single shared code the easier code comprehension and maintenance
becomes. From a software engineering perspective the amount of
duplicated code/logic in OpenStack is worrisome. Iterating towards
common code seems like a huge developer win as well as greatly enhancing
robustness in the process.

-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-17 Thread Jay Pipes
On Fri, 2014-01-17 at 14:34 +0100, Thomas Herve wrote:
 Hi all,
 
 I've been looking at Neutron default LBaaS provider using haproxy, and while 
 it's working nicely, it seems to have several shortcomings in terms of 
 scalability and high availability. The Libra project seems to offer a more 
 robust alternative, at least for scaling. The haproxy implementation in 
 Neutron seems to continue to evolve (like with 
 https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy), but I'm 
 wondering why we can't leverage Libra. The APIs are a bit different, but the 
 goals look very similar, and there is a waste of effort with 2 different 
 implementations. Maybe we could see a Libra driver for Neutron LBaaS for 
 example?

Yep, it's a completely duplicative and wasteful effort.

It would be great for Libra developers to contribute to Neutron LBaaS.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [climate] Meeting minutes

2014-01-17 Thread Dina Belova
Thanks everyone for attending our weekly meeting :)
Meeting minutes are:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-17-15.00.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-17-15.00.txt

Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-01-17-15.00.log.html


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread Matthew Farina
It seems we have two target audiences here: developers who work on
OpenStack and developers who write apps to use it. In the long run I think
we need to optimize for both of these groups.

If we want developers to write applications to use OpenStack in python we
likely need a common python SDK.

Note, I'm not a fan of the term client because it's not the common language
for this group of developers.


On Fri, Jan 17, 2014 at 10:26 AM, John Dennis jden...@redhat.com wrote:

  Keeping them separate is awesome for *us* but really, really, really
  sucks for users trying to use the system.
 
  I agree. Keeping them separate trades user usability for developer
  usability, I think user usability is a better thing to strive for.

 I don't understand how multiple independent code bases with a lot of
 overlapping code/logic is a win for developers. The more we can move to
 single shared code the easier code comprehension and maintenance
 becomes. From a software engineering perspective the amount of
 duplicated code/logic in OpenStack is worrisome. Iterating towards
 common code seems like a huge developer win as well as greatly enhancing
 robustness in the process.

 --
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-17 Thread Andrew Hutchings

On 17 Jan 2014, at 16:10, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-01-17 at 14:34 +0100, Thomas Herve wrote:
 Hi all,
 
 I've been looking at Neutron default LBaaS provider using haproxy, and while 
 it's working nicely, it seems to have several shortcomings in terms of 
 scalability and high availability. The Libra project seems to offer a more 
 robust alternative, at least for scaling. The haproxy implementation in 
 Neutron seems to continue to evolve (like with 
 https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy), but I'm 
 wondering why we can't leverage Libra. The APIs are a bit different, but the 
 goals look very similar, and there is a waste of effort with 2 different 
 implementations. Maybe we could see a Libra driver for Neutron LBaaS for 
 example?
 
 Yep, it's a completely duplicative and wasteful effort.
 
 It would be great for Libra developers to contribute to Neutron LBaaS.


Hi Jay and Thomas,

I am the outgoing technical lead of Libra for HP, but I will reply whilst the 
new technical lead (Marc Pilon) gets subscribed to this list.

I wouldn’t go as far as duplicative or wasteful.  Libra existed before Neutron 
LBaaS and is originally based on the Atlas API specifications.  Neutron LBaaS 
has started duplicating some of our features recently, which we find quite 
flattering.

After the 5.x release of Libra has been stabilised we will be working towards 
integration with Neutron.  It is a very important thing on our roadmap and we 
are already working with 2 other large companies in Openstack to figure that 
piece out.

If anyone else wants to get involved or just wants to play with Libra I’m sure 
the HP team would be happy to hear about it and help where they can.

Hope this helps

Kind Regards
Andrew
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Cherry picking commit from oslo-incubator

2014-01-17 Thread John Griffith
On Fri, Jan 17, 2014 at 1:15 AM, Flavio Percoco fla...@redhat.com wrote:
 On 16/01/14 17:32 -0500, Doug Hellmann wrote:

 On Thu, Jan 16, 2014 at 3:19 PM, Ben Nemec openst...@nemebean.com wrote:

On 2014-01-16 13:48, John Griffith wrote:

Hey Everyone,

A review came up today that cherry-picked a specific commit to OSLO
Incubator, without updating the rest of the files in the module.  I
rejected that patch, because my philosophy has been that when you
update/pull from oslo-incubator it should be done as a full sync of
the entire module, not a cherry pick of the bits and pieces that
 you
may or may not be interested in.

As it turns out I've received a bit of push back on this, so it
 seems
maybe I'm being unreasonable, or that I'm mistaken in my
 understanding
of the process here.  To me it seems like a complete and total
 waste
to have an oslo-incubator and common libs if you're going to turn
around and just cherry pick changes, but maybe I'm completely out
 of
line.

Thoughts??


I suppose there might be exceptions, but in general I'm with you.  For
 one
thing, if someone tries to pull out a specific change in the Oslo code,
there's no guarantee that code even works.  Depending on how the sync
 was
done it's possible the code they're syncing never passed the Oslo unit
tests in the form being synced, and since unit tests aren't synced to
 the
target projects it's conceivable that completely broken code could get
through Jenkins.

Obviously it's possible to do a successful partial sync, but for the
 sake
of reviewer sanity I'm -1 on partial syncs without a _very_ good reason
(like it's blocking the gate and there's some reason the full module
 can't
be synced).


 I agree. Cherry picking a single (or even partial) commit really should be
 avoided.

 The update tool does allow syncing just a single module, but that should
 be
 used very VERY carefully, especially because some of the changes we're
 making
 as we work on graduating some more libraries will include cross-dependent
 changes between oslo modules.


 Agreed. Syncing on master should be a complete synchronization from Oslo
 incubator. IMHO, the only case where cherry-picking from oslo should
 be allowed is when backporting patches to stable branches. Master
 branches should try to keep up-to-date with Oslo and sync everything
 every time.

 With that in mind, I'd like to request that project members do
 periodic syncs from Oslo incubator. Yes, it is tedious, painful and
 sometimes requires more than just syncing, but we should all try to
 keep up-to-date with Oslo. The main reason why I'm asking this is
 precisely stable branches. If the project stays way behind the

Fully agree here; it's something we started in Cinder but it sort of died
off and met some push-back (some of that admittedly from myself at
the beginning). It is something that we need to look at again though,
if nothing else to avoid falling so far behind that when we do need
a fix/update it becomes a monumental undertaking to make it happen.

 oslo-incubator, it'll be really painful to backport patches to stable
 branches in case of failures.

 Unfortunately, there are projects that are quite far behind
 oslo-incubator master.

 One last comment. FWIW, backwards compatibility is always considered
 in all Oslo reviews, and if there's a crazy-breaking change, it is
 always announced.

 Thankfully, this all will be alleviated with the libs that are being
 pulled out from the incubator. The syncs will contain fewer modules
 and will be smaller.


 I'm happy you brought this up now. I was meaning to do it.

 Cheers,
 FF


 --
 @flaper87
 Flavio Percoco

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-01-17 Thread Robert Collins
Hey, so I think my criteria would be:
 - low chance of user confusion
 - little or no overhead for dev until we're more broadly ready for
long term support

So - if you want to make an incubator release branch that's fine with me, but:
 - please make sure the docs in the branch and trunk explain the situation:
- no guarantee of release - release stability
- no guarantee of upgrade stability
- it's a point-in-time snapshot of a moving story

-Rob

On 18 January 2014 02:18, James Slagle james.sla...@gmail.com wrote:
 On Thu, Jan 16, 2014 at 7:29 PM, Clint Byrum cl...@fewbar.com wrote:
 Note that tripleo-incubator is special and should not be released. It
 is intentionally kept unfrozen and unreleased to make sure there is no
 illusion of stability.

 I think it would be nice if we could point people at a devtest that
 they could use with our other released stuff. Without that, we might
 make a change to devtest, such as showing the use of a new heat
 parameter in our templates, and if they're trying to follow along with
 a released tripleo-heat-templates then they would have a problem.

 Without a branch of incubator, there's no story or documentation
 around using any of our released stuff.  You could follow along with
 devtest to get an idea of how it's supposed to work and indeed it
 might even work, but I don't think that's good enough. There is
 tooling in incubator that has proved its usefulness. Take an example
 like setup-endpoints, what we're effectively saying without allowing
 people to use that is that there is a useful tool that will setup
 endpoints for you, but don't use it with our released stuff because
 it's not guaranteed to work, and instead make these 10-ish calls to
 keystone via some other method. Then you'd also end up with a
 different but parallel set of instructions for using our released
 stuff vs. not.

 This is prohibitive to someone who may want to setup a tripleo CI/CD
 cloud deploying stable icehouse or from milestone branches. I think
 people would just create their own fork of tripleo-incubator and use
 that.

 If there are components in it that need releasing, they should be moved
 into relevant projects or forked into their own projects.

 I'd be fine with that approach, except that's pretty much everything
 in incubator, the scripts, templates, generated docs, etc. Instead of
 creating a new forked repo, why don't we just rename tripleo-incubator
 to tripleo-deployment and have some stable branches that people could
 use with our releases?

 I don't feel like that precludes tripleo from continuing to have no
 stability at all on the master branch.

 Excerpts from Ryan Brady's message of 2014-01-16 07:42:33 -0800:
 +1 for releases.

 In the past I requested a tag for tripleo-incubator to make it easier to 
 build a package and test.

 In my case a common tag would be easier to track than trying to gather all 
 of the commit hashes where
 the projects are compatible.

 Ryan

 - Original Message -
 From: James Slagle james.sla...@gmail.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Thursday, January 16, 2014 10:13:58 AM
 Subject: [openstack-dev] [TripleO] milestone-proposed branches

 At last summit, we talked about doing stable branches and releases for
 the TripleO projects for Icehouse.

 I'd like to propose doing a milestone-proposed branch[1] and tagged
 release for icehouse milestones 2 and 3. Sort of as a dry run and
 practice, as I think it could help tease out some things we might not
 have considered when we do try to do icehouse stable branches.

 The icehouse milestone 2 date is January 23rd [2]. So, if there is
 consensus to do this, we probably need to get the branches created
 soon, and then do any bugfixes in the branches (master too of course)
 up until the 23rd.

 I think it'd be nice if we had a working devtest to use with the
 released tarballs.  This raises a couple of points:
  - We probably need a way in devtest to let people use a different
 branch (or tarball) of the stuff that is git cloned.
 - What about tripleo-incubator itself? We've said in the past we don't
 want to attempt to stabilize or release that due to its incubator
 nature.  But, if we don't have a stable set of devtest instructions
 (and accompanying scripts like setup-endpoints, etc), then using an
 ever changing devtest with the branches/tarballs is not likely to work
 for very long.

 And yes, I'm volunteering to do the work to support the above, and the
 release work :).

 Thoughts?

 [1] https://wiki.openstack.org/wiki/BranchModel
 [2] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule

 --
 -- James Slagle
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [Nova] why don't we deal with claims when live migrating an instance?

2014-01-17 Thread Vishvananda Ishaya

On Jan 16, 2014, at 9:41 PM, Jiang, Yunhong yunhong.ji...@intel.com wrote:

 I noticed the BP has been approved, but I really want to understand more on 
 the reason, can anyone provide me some hints?
  
 In the BP, it states that “For resize, we need to confirm, as we want to give 
 end user an opportunity to rollback”. But why do we want to give the end user
 an opportunity to roll back a resize? And why does that reason not apply to
 cold migration and live migration?

The confirm is so the user can verify that the instance is still functional in 
the new state. We leave the old instance around so they can abort and return to 
the old instance if something goes wrong. This could apply to cold migration as 
well since it uses the same code paths, but it definitely does not make sense 
in the case of live-migration, because there is no old vm to revert to.

In the case of cold migration, the state is quite confusing as “RESIZE_VERIFY”, 
and the need to confirm is not immediately obvious so I think that is driving 
the change.

Vish

  
 Thanks
 --jyh
  
 From: Jay Lau [mailto:jay.lau@gmail.com] 
 Sent: Thursday, January 16, 2014 3:27 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] why don't we deal with claims when live 
 migrating an instance?
  
 Hi Scott,
 
 I'm now trying to fix this issue at 
 https://blueprints.launchpad.net/nova/+spec/auto-confirm-cold-migration
 
 After the fix, we do not need to confirm the cold migration.
 
 http://lists.openstack.org/pipermail/openstack-dev/2014-January/023726.html
 
 Thanks,
 
 Jay
  
 
 2014/1/17 Scott Devoid dev...@anl.gov
 Related question: Why does resize get called (and the VM put in RESIZE 
 VERIFY state) when migrating from one machine to another, keeping the same 
 flavor?
  
 
 On Thu, Jan 16, 2014 at 9:54 AM, Brian Elliott bdelli...@gmail.com wrote:
 
 On Jan 15, 2014, at 4:34 PM, Clint Byrum cl...@fewbar.com wrote:
 
  Hi Chris. Your thread may have gone unnoticed as it lacked the Nova tag.
  I've added it to the subject of this reply... that might attract them.  :)
 
  Excerpts from Chris Friesen's message of 2014-01-15 12:32:36 -0800:
  When we create a new instance via _build_instance() or
  _build_and_run_instance(), in both cases we call instance_claim() to
  reserve and test for resources.
 
  During a cold migration I see us calling prep_resize() which calls
  resize_claim().
 
  How come we don't need to do something like this when we live migrate an
  instance?  Do we track the hypervisor overhead somewhere in the instance?
 
  Chris
 
 
 It is a good point and it should be done.  It is effectively a bug.
 
 Brian
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread John Griffith
On Fri, Jan 17, 2014 at 1:15 AM, Robert Collins
robe...@robertcollins.net wrote:
 On 16 January 2014 14:51, John Griffith john.griff...@solidfire.com wrote:
 On Wed, Jan 15, 2014 at 6:41 PM, Michael Still mi...@stillhq.com wrote:
 John -- I agree with you entirely here. My concern is more that I
 think the CI tests need to run more frequently than weekly.

 Completely agree, but I guess in essence to start these aren't really
 CI tests.  Instead it's just a public health report for the various
 drivers vendors provide.  I'd love to see a higher frequency, but some
 of us don't have the infrastructure to try and run a test against
 every commit.  Anyway, I think there's HUGE potential for growth and
 adjustment as we go along.  I'd like to get something in place to
 solve the immediate problem first though.

 You say you don't have the infrastructure - whats missing? What if you
 only ran against commits in the cinder trees?

Maybe this is going a bit sideways, but my point was that making a
first step of getting periodic runs on vendor gear and publicly
submitting those results would be a good starting point and a
SIGNIFICANT improvement over what we have today.

It seems to me that requiring every vendor to have a deployment in
house dedicated and reserved 24/7 might be a tough order right out of
the gate.  That being said, of course I'm willing and able to do that
for my employer, but feedback from others hasn't been quite so
amiable.

The feedback here seems significant enough that maybe gating every
change is the way to go though.  I'm certainly willing to opt in to
that model and get things off the ground.  I do have a couple of
concerns (number 3 being the most significant):

1. I don't want ANY commit/patch waiting for a vendor's infrastructure
to run a test.  We would definitely need a timeout mechanism or
something along those lines to ensure none of this disrupts the gate

2. Isolating this to changes in Cinder seems fine, the intent was
mostly a compatibility / features check.  This takes it up a notch and
allows us to detect when something breaks right away which is
certainly a good thing.

3. Support and maintenance is a concern here.  We have a first rate
community that ALL pull together to make our gating and infrastructure
work in OpenStack.  Even with that it's still hard for everybody to
keep up due to the number of projects and simply the volume of patches that
go in on a daily basis.  There's no way I could do my regular jobs
that I'm already doing AND maintain my own fork/install of the
OpenStack gating infrastructure.

4. Despite all of the heavyweight corporations throwing resource after
resource at OpenStack, keep in mind that it is still an Open Source
community.  I don't want to do ANYTHING that would make it somehow
unfriendly to folks who would like to commit.  Keep in mind that
vendors here aren't necessarily all large corporations, or even all
paid for proprietary products.  There are open source storage drivers
for example in Cinder and they may or may not have any of the
resources to make this happen but that doesn't mean they should not be
allowed to have code in OpenStack.

The fact is that the problem I see is that there are drivers/devices
that flat out don't work and end users (heck even some vendors that
choose not to test) don't know this until they've purchased a bunch of
gear and tried to deploy their cloud.  What I was initially proposing
here was just a more formal public and community representation of
whether a device works as it's advertised or not.

Please keep in mind that my proposal here was a first step sort of
test case.  Rather than start with something HUGE like deploying the
OpenStack CI in every vendor's lab to test every commit (and I'm sorry
for those that don't agree, but that does seem like a SIGNIFICANT
undertaking), why not take incremental steps to make things better and
learn as we go along?



 To be honest I'd even be thrilled just to see every vendor publish a
 passing run against each milestone cut.  That in and of itself would
 be a huge step in the right direction in my opinion.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum][Keystone] Best practices for storing keystone trusts information

2014-01-17 Thread Georgy Okrokvertskhov
Hi,

In Solum project we want to use Keystone trusts to work with other
OpenStack services on behalf of user. Trusts are long term entities and a
service should keep them for a long time.

I want to understand what the best practices are for working with trusts and
storing them in a service.

What are the options for keeping a trust? I see obvious approaches like keeping
them in a service DB or in memory. Are there any other approaches?

Is there a proper way to renew a trust? For example, if I have a long-term
task which is waiting for an external event, how do I keep the trust fresh for
a long and unpredictable period?

Thanks
Georgy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread Jonathan LaCour
On Thu, Jan 16, 2014 at 1:23 PM, Donald Stufft don...@stufft.io wrote:


 On Jan 16, 2014, at 4:06 PM, Jesse Noller jesse.nol...@rackspace.com
 wrote:


 On Jan 16, 2014, at 2:22 PM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

 Since it’s pretty easy to get lost among all the opinions I’d like to
 clarify/ask a couple of things:


- Keeping all the clients physically separate/combining them into a
single library. Two things here:
   - In case of combining them, what exact project are we considering?
   If this list is limited to core projects like nova and keystone, what
   policy could we have for other projects to join this list? (Incubation,
   graduation, something else?)
   - In terms of granularity and easiness of development I’m for
   keeping them separate but have them use the same boilerplate code,
   basically we need a OpenStack Rest Client Framework which is flexible
   enough to address all the needs in an abstract domain agnostic manner. I
   would assume that combining them would be an additional organizational
   burden that every stakeholder would have to deal with.


 Keeping them separate is awesome for *us* but really, really, really sucks
 for users trying to use the system.


 I agree. Keeping them separate trades user usability for developer
 usability; I think user usability is a better thing to strive for.


100% agree with this. In order for OpenStack to be its most successful, I
believe firmly that a focus on end-users and deployers needs to take the
forefront. That means making OpenStack clouds as easy to consume/leverage
as possible for users and tool builders, and simplifying/streamlining as
much as possible.

I think that a single, common client project, based upon package
namespaces, with a unified, cohesive feel is a big step in this direction.
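
To make the contrast concrete, here is a rough sketch -- the per-service client
interfaces below are the ones that exist today (early 2014), while the unified
import at the end is purely hypothetical:

  # Today: one client package (plus its dependency tree) per service.
  from keystoneclient.v2_0 import client as keystone_client
  from novaclient.v1_1 import client as nova_client

  AUTH_URL = 'http://controller:5000/v2.0'   # placeholder endpoint

  keystone = keystone_client.Client(username='demo', password='secret',
                                    tenant_name='demo', auth_url=AUTH_URL)
  nova = nova_client.Client('demo', 'secret', 'demo', AUTH_URL)
  # ...and another import/Client pair for glance, cinder, neutron, swift, ...

  print([s.name for s in nova.servers.list()])

  # A hypothetical unified alternative -- nothing below exists today:
  # from openstack import compute, identity
  # cloud = identity.connect(auth_url=AUTH_URL, user='demo', password='secret')
  # print([s.name for s in compute.servers(cloud).list()])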

--
Jonathan LaCour
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread Jonathan LaCour
On Thu, Jan 16, 2014 at 5:42 PM, Jesse Noller jesse.nol...@rackspace.com wrote:



 On Jan 16, 2014, at 4:59 PM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

  On 16 Jan 2014, at 13:06, Jesse Noller jesse.nol...@rackspace.com
 wrote:

   Since it’s pretty easy to get lost among all the opinions I’d like to
 clarify/ask a couple of things:


- Keeping all the clients physically separate/combining them into a
single library. Two things here:
   - In case of combining them, what exact project are we considering?
   If this list is limited to core projects like nova and keystone, what
   policy could we have for other projects to join this list? (Incubation,
   graduation, something else?)
   - In terms of granularity and easiness of development I’m for
   keeping them separate but have them use the same boilerplate code,
   basically we need a OpenStack Rest Client Framework which is flexible
   enough to address all the needs in an abstract domain agnostic manner. I
   would assume that combining them would be an additional organizational
   burden that every stakeholder would have to deal with.


  Keeping them separate is awesome for *us* but really, really, really
 sucks for users trying to use the system.


 You may be right but I'm not sure that adding another line into
 requirements.txt is a huge loss of usability.


  It is when that 1 dependency pulls in 6 others that pull in 10 more -
 every little barrier or potential failure from the inability to make a
 static binary to how each tool acts differently is a paper cut of frustration
 to an end user.



Most of the time the clients don't even properly install because of
 dependencies on setuptools plugins and other things. For developers (as
 I've said) the story is worse: you have potentially 22+ individual packages
 and their dependencies to deal with if they want to use a complete
 openstack install from their code.

  So it doesn't boil down to just 1 dependency: it's a long laundry list
 of things that make consumers' lives more difficult and painful.

  This doesn't even touch on the fact there aren't blessed SDKs or tools
 pointing users to consume openstack in their preferred programming language.

  Shipping an API isn't enough - but it can be fixed easily enough.


+100

OpenStack must be attractive to our end users (developers and deployers),
as I mentioned earlier. Let's make it as simple as pip install openstack
if at all possible!

--
Jonathan LaCour
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [Metadatarepository] Metadata repository initiative status

2014-01-17 Thread Tripp, Travis S
Hello All,

I just took a look at this blueprint and see that it doesn't have any priority. 
 Was there a discussion on priority?  Any idea what, if any, of this will make 
it into Icehouse?  Also, are there going to be any further design sessions on 
it?

Thanks,
Travis

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: Friday, December 20, 2013 3:43 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Glance] [Metadatarepository] Metadata repository 
initiative status


Hi,



The metadata repository meeting occurred this Tuesday in the #openstack-glance
channel. The main item discussed was an API for the new metadata functions and
where this API should appear. During the discussion it was decided that the
main functionality will be a store for different objects and the metadata
associated with them. Initially all objects will have a specific type which
defines specific attributes in the metadata. There will also be a common set of
attributes for all objects stored in Glance.


During the discussion there was input from different projects (Heat, Murano,
Solum) on what kind of objects should be stored for each project and what kind
of functionality is minimally required.


Here is a list of potential objects:

Heat:

* HOT template

Potential Attributes: version, tag, keywords, etc.

  Required Features:

* Object and metadata versioning

* Search by specific attribute\attributes value


Murano

* Murano files

o  UI definition

o  workflow definition

o  HOT templates

o  Scripts

Required Features:

* Object and metadata versioning

* Search by specific attribute

Solum

* Solum Language Packs

  Potential Attributes: name, build_toolchain, OS, language platform, 
versions



Required Features:

* Object and metadata versioning

* Search by specific attribute


After a discussion it was concluded that the best way will be to add a new API
endpoint, /artifacts. This endpoint will be used to work with objects' common
attributes, while type-specific attributes and methods will be accessible
through the /artifact/object-type endpoint. The /artifacts endpoint will be used
for filtering objects by searching for specific attribute values. A search on
type-specific attributes should also be possible via the /artifacts endpoint.

For each object type there will be a separate table for attributes in a 
database.
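
A sketch of what the request shapes could look like -- purely hypothetical,
since none of this is implemented in Glance yet; the paths and parameters below
only illustrate the proposal:

  GET  /v2/artifacts?type=heat_template&keyword=lbaas   (search on common attributes)
  GET  /v2/artifacts?tag=network                        (cross-type filtering)
  GET  /v2/artifact/heat_template/<artifact-id>         (type-specific attributes)
  POST /v2/artifact/heat_template                       (upload artifact + metadata)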


Currently the assumption is that the metadata repository API will be implemented
inside Glance within the v2 API, without changing the existing API for images.
In the future, the v3 Glance API can fold the image-related API into the common
artifacts API.

The new artifacts API will reuse as much as possible of the existing Glance
functionality. Most of the stored objects will be non-binary, so it is
necessary to check how the Glance code handles this.

AI

All project teams should start submitting BPs for new functionality in Glance.
These BPs will be discussed on the ML and in the Glance weekly meetings.

Related Resources:

Etherpad for Artifacts API design: 
https://etherpad.openstack.org/p/MetadataRepository-ArtifactRepositoryAPI


Heat templates repo BP for Heat:

https://blueprints.launchpad.net/heat/+spec/heat-template-repo


Initial API discussion Etherpad:

https://etherpad.openstack.org/p/MetadataRepository-API



Thanks
Georgy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Keystone] Best practices for storing keystone trusts information

2014-01-17 Thread Lance D Bragstad

Hi Georgy,

The following might help with some of the trust questions you have, if you
haven't looked at it already:
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md


As far as the storage implementation goes, trusts use sql and kvs backends. Trusts
can be given an expiration but if an expiration is not given the trust is
valid until it is explicitly revoked (taken from the link above):

  Optionally, the trust may only be valid for a specified time period, as
defined by expires_at. If no expires_at is specified, then the trust is
valid until it is explicitly revoked.

Trusts can also be given 'uses' so that you can set a limit to how many
times a trust will issue a token to the trustee. That functionality hasn't
landed yet but it is up for review: https://review.openstack.org/#/c/56243/
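
If it's useful, here is a rough sketch of creating a trust through the v3
OS-TRUST API described in that document. The request body follows the spec;
the endpoint, token and IDs are placeholders, not working values:

  import json
  import requests

  KEYSTONE = 'http://keystone:5000/v3'      # placeholder endpoint
  TOKEN = '<trustor-scoped-token>'          # placeholder token

  body = {
      "trust": {
          "trustor_user_id": "<trustor-user-id>",
          "trustee_user_id": "<service-user-id>",
          "project_id": "<project-id>",
          "impersonation": True,
          "roles": [{"name": "Member"}],
          # omit "expires_at" entirely for a trust that lasts until revoked
      }
  }

  resp = requests.post(KEYSTONE + '/OS-TRUST/trusts',
                       headers={'X-Auth-Token': TOKEN,
                                'Content-Type': 'application/json'},
                       data=json.dumps(body))
  trust_id = resp.json()['trust']['id']
  # Only the trust_id needs to be persisted; the trustee later asks Keystone
  # for a token scoped to this trust in order to act on the user's behalf.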

Hope this helps!


Best Regards,

Lance Bragstad




From:   Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
To: OpenStack Development Mailing List
openstack-dev@lists.openstack.org,
Date:   01/17/2014 12:11 PM
Subject:[openstack-dev] [Solum][Keystone] Best practices for storing
keystone trusts information



Hi,

In Solum project we want to use Keystone trusts to work with other
OpenStack services on behalf of user. Trusts are long term entities and a
service should keep them for a long time.

I want to understand what are best practices for working with trusts and
storing them in a service?

What are the options to keep trust? I see obvious approaches like keep them
in a service DB or keep them in memory. Are there any other approaches?

Is there a proper way to renew trust? For example if I have a long term
task which is waiting for external event, how to keep trust fresh for a
long and unpredicted period?

Thanks
Georgy___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] stable/havana currently blocked - do not approve or recheck stable/* patches

2014-01-17 Thread Sean Dague
Because of a pip issue with netaddr on stable/grizzly devstack, all the
stable/havana changes for jobs that include grenade are currently
blocked. This is because of stevedore's version enforcement of netaddr
versions inside the cinder scheduler.

John, Chmouel, Dean, and I have got eyes on it, waiting for a check to
return to figure out if we've gotten it sorted this time. Hopefully it
will be resolved soon, but until then havana jobs are just killing the gate.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how is resource tracking supposed to work for live migration and evacuation?

2014-01-17 Thread Jiang, Yunhong
Paul, thanks for the clarification.

--jyh

 -Original Message-
 From: Murray, Paul (HP Cloud Services) [mailto:pmur...@hp.com]
 Sent: Friday, January 17, 2014 7:02 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] how is resource tracking supposed to
 work for live migration and evacuation?
 
 To be clear - the changes that Yunhong describes below are not part of the
 extensible-resource-tracking blueprint. Extensible-resource-tracking has
 the more modest aim to provide plugins to track additional resource data.
 
 Paul.
 
 -Original Message-
 From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
 Sent: 17 January 2014 05:54
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] how is resource tracking supposed to
 work for live migration and evacuation?
 
 There was some related discussion on this before.
 
 There is a BP at
 https://blueprints.launchpad.net/nova/+spec/extensible-resource-tracking
 which tries to support more resources.
 
 And I have a document at
 https://docs.google.com/document/d/1gI_GE0-H637lTRIyn2UPfQVebfk5QjDi6ohObt6MIc0 .
 My idea is to keep the claim as an object which can be invoked remotely, and
 the claim result is kept in the DB as the instance's usage. I'm working on it
 now.
 
 Thanks
 --jyh
 
  -Original Message-
  From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
  Sent: Thursday, January 16, 2014 2:27 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova] how is resource tracking supposed
  to work for live migration and evacuation?
 
 
  On Jan 16, 2014, at 1:12 PM, Chris Friesen
  chris.frie...@windriver.com
  wrote:
 
   Hi,
  
   I'm trying to figure out how resource tracking is intended to work
   for live
  migration and evacuation.
  
   For a while I thought that maybe we were relying on the call to
  ComputeManager._instance_update() in
  ComputeManager.post_live_migration_at_destination().  However, in
   ResourceTracker.update_usage() we see that on a live migration the
  instance that has just migrated over isn't listed in
  self.tracked_instances and so we don't actually update its usage.
  
   As far as I can see, the current code will just wait for the audit
   to run at
  some unknown time in the future and call update_available_resource(),
  which will add the newly-migrated instance to self.tracked_instances
  and update the resource usage.
  
   From my poking around so far the same thing holds true for
   evacuation
  as well.
  
   In either case, just waiting for the audit seems somewhat haphazard.
  
   Would it make sense to do something like
  ResourceTracker.instance_claim() during the migration/evacuate and
  properly track the resources rather than wait for the audit?
 
  Yes that makes sense to me. Live migration was around before we had a
  resource tracker so it probably was just never updated.
 
  Vish
 
  
   Chris
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-17 Thread Jiang, Yunhong
Robert, thanks for your long reply. Personally I'd prefer option 2/3 as it keeps
Nova the only entity for PCI management.

Glad you are ok with Ian's proposal and that we have a solution to resolve the
libvirt network scenario in that framework.

Thanks
--jyh

 -Original Message-
 From: Robert Li (baoli) [mailto:ba...@cisco.com]
 Sent: Friday, January 17, 2014 7:08 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
 support
 
 Yunhong,
 
 Thank you for bringing that up on the live migration support. In addition
 to the two solutions you mentioned, Irena has a different solution. Let me
 put all of them here again:
 1. network xml/group based solution.
In this solution, each host that supports a provider net/physical
 net can define a SRIOV group (it's hard to avoid the term as you can see
 from the suggestion you made based on the PCI flavor proposal). For each
 SRIOV group supported on a compute node, a network XML will be created the
 first time the nova compute service runs on that node.
 * nova will conduct scheduling, but not PCI device allocation
 * it's a simple and clean solution, documented in libvirt as the
 way to support live migration with SRIOV. In addition, a network xml is
 nicely mapped into a provider net.
 2. network xml per PCI device based solution
This is the solution you brought up in this email, and Ian
 mentioned this to me as well. In this solution, a network xml is created
 when a VM is created. The network xml needs to be removed once the VM is
 removed. This hasn't been tried out as far as I know.
 3. interface xml/interface rename based solution
Irena brought this up. In this solution, the ethernet interface
 name corresponding to the PCI device attached to the VM needs to be
 renamed. One way to do so without requiring a system reboot is to change the
 udev rules file for interface renaming, followed by a udev reload.
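 
 For concreteness, the kind of network definition libvirt documents for the
 first approach looks roughly like this (the network name and PF device below
 are illustrative):
 
   <network>
     <name>sriov-physnet1</name>
     <forward mode='hostdev' managed='yes'>
       <pf dev='eth2'/>
     </forward>
   </network>
 
 A guest interface then just references the pool, and libvirt picks a free VF
 for it:
 
   <interface type='network'>
     <source network='sriov-physnet1'/>
   </interface>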
 
 Now, with the first solution, Nova doesn't seem to have control over or
 visibility of the PCI device allocated for the VM before the VM is
 launched. This needs to be confirmed with the libvirt support and see if
 such capability can be provided. This may be a potential drawback if a
 neutron plugin requires detailed PCI device information for operation.
 Irena may provide more insight into this. Ideally, neutron shouldn't need
 this information because the device configuration can be done by libvirt
 invoking the PCI device driver.
 
 The other two solutions are similar. For example, you can view the second
 solution as one way to rename an interface, or camouflage an interface
 under a network name. They all require additional works before the VM is
 created and after the VM is removed.
 
 I also agree with you that we should take a look at XenAPI on this.
 
 
 With regard to your suggestion on how to implement the first solution with
 some predefined group attribute, I think it definitely can be done. As I
 have pointed it out earlier, the PCI flavor proposal is actually a
 generalized version of the PCI group. In other words, in the PCI group
 proposal, we have one predefined attribute called PCI group, and
 everything else works on top of that. In the PCI flavor proposal,
 attribute is arbitrary. So certainly we can define a particular attribute
 for networking, which let's temporarily call sriov_group. But I can see
 with this idea of predefined attributes, more of them will be required by
 different types of devices in the future. I'm sure it will keep us busy
 although I'm not sure it's in a good way.
 
 I was expecting you or someone else can provide a practical deployment
 scenario that would justify the flexibilities and the complexities.
 Although I'd prefer to keep it simple and generalize it later once a
 particular requirement is clearly identified, I'm fine to go with it if
 that's most of the folks want to do.
 
 --Robert
 
 
 
 On 1/16/14 8:36 PM, yunhong jiang yunhong.ji...@linux.intel.com
 wrote:
 
 On Thu, 2014-01-16 at 01:28 +0100, Ian Wells wrote:
  To clarify a couple of Robert's points, since we had a conversation
  earlier:
  On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote:
---  do we agree that BDF address (or device id, whatever
  you call it), and node id shouldn't be used as attributes in
  defining a PCI flavor?
 
 
  Note that the current spec doesn't actually exclude it as an option.
  It's just an unwise thing to do.  In theory, you could elect to define
  your flavors using the BDF attribute but determining 'the card in this
  slot is equivalent to all the other cards in the same slot in other
  machines' is probably not the best idea...  We could lock it out as an
  option or we could just assume that administrators wouldn't be daft
  enough to try.
 
 
  * the compute node needs to know the PCI flavor.
   

Re: [openstack-dev] [horizon] Blueprint decrypt-and-display-vm-generated-password

2014-01-17 Thread Alessandro Pilotti
+1

Nova's get-password is currently the only safe way from a security perspective 
to handle guest passwords.

This feature needs to be mirrored in Horizon, otherwise most users will 
continue to resort to unsafe solutions like the clear text admin_pass due to 
lack of practical alternatives.

The design and implementation proposed by Ala is IMO a good one. It provides a 
UX quite similar to what other cloud environments like AWS offer with the 
additional bonus of keeping any sensitive data on the client side.
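
For reference, the client-side flow today is simply (instance name and key
path are illustrative):

  nova get-password my-windows-vm ~/.ssh/id_rsa

The private key is only read locally by novaclient to decrypt the value the
API returns, so nothing sensitive ever leaves the client.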

Alessandro

P.S.: Sorry for replying only now, I somehow skipped the original email in the 
ML!




On 17 Dec 2013, at 13:58 , Ala Rezmerita ala.rezmer...@cloudwatt.com wrote:

 Hi all,
 
 I would like to get your opinion/feedback about the implementation of the 
 blueprint Decrypt and display VM generated password[1]
 
 Our use case is primarily targeting Windows instances with cloudbase-init, 
 but the functionality can be also used on Linux instances.
 The general idea of this blueprint is to give the user the ability to 
 retrieve, through Horizon, the administrative password for his Windows session.
 
 There are two ways for the user to set/get his password on cloudbase-init 
 Windows instances:
 - The user sets the desired password as admin_pass key/value as metadata of 
 the new server. Example : https://gist.github.com/arezmerita/8001673. In this 
 case the password is visible in the instance description, in the metadata section. 
 - The user does not set his password. In this case cloudbase-init will 
 generate a random password, encrypt it with user provided public key, and 
 will send the result to the metadata server. The only way to get the clear 
 password is to use API/nova client and provide the private key. Example:  
 nova get-password  . The novaclient will retrieve encrypted password from 
 Nova and will use locally the private key in order to decrypt the password. 
 
 Now about our blueprint implementation: 
 - We add a new action Retrieve password on an instance, that shows a form 
 displaying the key pair name used to boot the instance and the encrypted 
 password. The user can provide its private key, that will be used ONLY on the 
 client side for password decryption using JSEncrypt library[2]. 
 - We choose to not send the private key over the network (for decryption on 
 server side), because we consider that the user should not be forced to share 
 this information with the cloud operator.
 Some may argue that the connection is protected, and we are already passing 
 sensitive data over the network. However, openstack user password/tokens are 
 openstack sensitive data, they are related to the openstack user. User's 
 private key on the other hand, is something personal to the user, 
 not-openstack related.
 
 What do you think?  
 
 Note: On the whiteboard of the blueprint[1] I provided two demos and some 
 instructions of how to test this functionality with Linux instances.
 
 Thanks,
 
 References: 
 [1] 
 https://blueprints.launchpad.net/horizon/+spec/decrypt-and-display-vm-generated-password
 [2] JSEncrypt library http://travistidwell.com/jsencrypt/
 
 
 Ala Rezmerita
 Software Engineer || Cloudwatt
 M: (+33) 06 77 43 23 91
 Immeuble Etik
 892 rue Yves Kermen
 92100 Boulogne-Billancourt – France
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FW: OpenStack Cloud Virtualization Implementation Strategies

2014-01-17 Thread Alan Kavanagh
FYI

From: Light Reading [mailto:sen...@responses.lightreading.com]
Sent: January-17-14 2:20 PM
To: Alan Kavanagh
Subject: OpenStack Cloud Virtualization Implementation Strategies








OpenStack Cloud Virtualization Implementation Strategies

Dear Alan,

Join us on Wednesday, January 29, 2014, 12:00 pm New York for this live radio 
show sponsored by Wind River.


In only a few years, OpenStack has emerged as a leading approach for meeting 
the massive cloud compute, networking, and storage challenges that datacenters 
and network infrastructures now face. However, since implementation 
requirements differ based on a number of factors, including type of cloud 
architecture supported (public, private, hybrid), no single implementation 
strategy exists. Rather, operators and equipment providers must create 
individualized cloud virtualization strategies to best meet unique service 
agility and monetization requirements.

Accordingly, join Davide Ricci, Product Line Manager at Wind River, and Jim 
Hodges, Heavy Reading Analyst, for a technical discussion addressing the 
factors and design attributes to be considered in creating a cohesive 
OpenStack-based cloud virtualization implementation strategy. Topics they will 
discuss include:

  *   The impact of OpenStack on existing open-source embedded software 
ecosystem projects
  *   The factors and timeline that must be considered in the creation of a 
virtualized cloud network (including cloud-based hardware and software clusters)
  *   Best-practices for OpenStack component integration (including OpenStack 
Neutron and Swift)
  *   The value proposition of existing and new OpenStack release capabilities
  *   OpenStack Cloud implementation challenges, including the need to provide 
carrier-grade reliability ensuring maximum possible uptime
  *   Wind River's OpenStack roadmap and how it fits into its network functions 
virtualization product and solution portfolio








___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Keystone] Best practices for storing keystone trusts information

2014-01-17 Thread Adrian Otto
Georgy,

For Solum, let's refrain from storing any secrets, whether they be passwords or 
trusts, or tokens. I definitely don't want to be in the business of managing 
how to secure them in an SQL database. I don't even want admin password 
values to appear in the configuration files. I'd prefer to take a hard 
dependency on barbican[1], and store them in there, where they can be centrally 
fortified with encryption and access controls, accesses can be logged, they can 
be revoked, and we have a real auditing story for enterprises who have strict 
security requirements.

Thanks,

Adrian

[1] https://github.com/stackforge/barbican

On Jan 17, 2014, at 11:26 AM, Georgy Okrokvertskhov
gokrokvertsk...@mirantis.com wrote:

Hi Lance,

Thank you for the documentation link. It really solves the problem with trust 
expiration. I really like an idea to restrict trust to specific roles. This is 
great.

As you mentioned, you use sql to store trusts information. Do you use any 
encryption for that? I am thinking from security perspective, if you have trust 
information in DB it might be not safe as this trust is a long term 
authentication.

Thanks
Georgy


On Fri, Jan 17, 2014 at 10:31 AM, Lance D Bragstad
ldbra...@us.ibm.com wrote:

Hi Georgy,

The following might help with some of the trust questions you have, if you 
haven't looked at it already:
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md

As far as storage implementation, trust uses sql and kvs backends. Trusts can 
be given an expiration but if an expiration is not given the trust is valid 
until it is explicitly revoked (taken from the link above):

  Optionally, the trust may only be valid for a specified time period, as 
defined by expires_at. If no expires_at is specified, then the trust is valid 
until it is explicitly revoked.

Trusts can also be given 'uses' so that you can set a limit to how many times a 
trust will issue a token to the trustee. That functionality hasn't landed yet 
but it is up for review: https://review.openstack.org/#/c/56243/

Hope this helps!


Best Regards,

Lance Bragstad


Georgy Okrokvertskhov ---01/17/2014 12:11:46 PM---Hi, In Solum
project we want to use Keystone trusts to work with other

From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
Date: 01/17/2014 12:11 PM
Subject: [openstack-dev] [Solum][Keystone] Best practices for storing keystone 
trusts information





Hi,

In Solum project we want to use Keystone trusts to work with other OpenStack 
services on behalf of user. Trusts are long term entities and a service should 
keep them for a long time.

I want to understand what are best practices for working with trusts and 
storing them in a service?

What are the options to keep trust? I see obvious approaches like keep them in 
a service DB or keep them in memory. Are there any other approaches?

Is there a proper way to renew trust? For example if I have a long term task 
which is waiting for external event, how to keep trust fresh for a long and 
unpredicted period?

Thanks
Georgy___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [Metadatarepository] Metadata repository initiative status

2014-01-17 Thread Georgy Okrokvertskhov
Hi Travis,

I think it will be discussed on the mini-summit which will be on Jan
27th-28th in Washington DC.
Here is an etherpad with the summit agenda:
https://etherpad.openstack.org/p/glance-mini-summit-agenda

I hope that after F2F discussion all BPs will have priority and assignment.

Thanks
Georgy


On Fri, Jan 17, 2014 at 10:11 AM, Tripp, Travis S travis.tr...@hp.com wrote:

  Hello All,



 I just took a look at this blueprint and see that it doesn’t have any
 priority.  Was there a discussion on priority?  Any idea what, if any of
 this will make it into Icehouse?  Also, are there going to be any further
 design sessions on it?



 Thanks,
 Travis



 *From:* Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
 *Sent:* Friday, December 20, 2013 3:43 PM

 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Glance] [Metadatarepository] Metadata
 repository initiative status



 Hi,



 Metadata repository meeting occurred this Tuesday in #openstack-glance
 channel. Main item that was discussed was an API for a new metadata
 functions and where this API should appear. During discussion it was
 defined that the main functionality will be  a storage for different
 objects and metadata associated with them. Initially all objects will have
 a specific type which defines specific attributes in metadata. There will
 be also a common set of attributes for all objects stored in Glance.



 During the discussion there was input from different projects (Heat,
 Murano, Solum) on what kind of objects should be stored for each project and
 what kind of functionality is minimally required.



 Here is a list of potential objects:
  Heat:

 · HOT template

 Potential Attributes: version, tag, keywords, etc.

   *Required Features:*

 · Object and metadata versioning

 · Search by specific attribute\attributes value



 *Murano*

 · *Murano files*

 o  UI definition

 o  workflow definition

 o  HOT templates

 o  Scripts

 *Required Features:*

 · Object and metadata versioning

 · Search by specific attribute


  Solum

 · Solum Language Packs

   *Potential Attributes:* name, build_toolchain, OS, language
 platform, versions



 *Required Features:*

 · Object and metadata versioning

 · Search by specific attribute



 After a discussion it was concluded that the best way will be to add a new
 API endpoint /artifacts. This endpoint will be used to work with object’s
 common attributes while type specific attributes and methods will be
 accessible through /artifact/object-type endpoint. The endpoint /artifacts
 will be used for filtering objects by searching for specific attributes
 value. Type specific attributes search should also be possible via
 /artifacts endpoint.

 For each object type there will be a separate table for attributes in a
 database.



 Currently it is supposed that metadata repository API will be implemented
 inside Glance within v2 version without changing existing API for images.
 In the future, v3 Glance API can fold images related API to the common
 artifacts API.

 New artifact’s API will reuse as much as possible from existing Glance
 functionality. Most of the stored objects will be non-binary, so it is
 necessary to check how Glance code handle this.


  AI

 All projects teams should start submit BPs for new functionality in
 Glance. These BPs will be discussed in ML and on Glance weekly meetings.


  Related Resources:

 Etherpad for Artifacts API design:
 https://etherpad.openstack.org/p/MetadataRepository-ArtifactRepositoryAPI



 Heat templates repo BP for Heat:

 https://blueprints.launchpad.net/heat/+spec/heat-template-repo



 Initial API discussion Etherpad:

 https://etherpad.openstack.org/p/MetadataRepository-API




 Thanks

 Georgy

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [wsme] Undefined attributes in WSME

2014-01-17 Thread Kurt Griffiths
FWIW, I believe Nova is looking at using JSON Schema as well, since they
need to handle API extensions. This came up during a design session at the
HK summit.
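
For anyone who hasn't tried it, validating a request body against a JSON
schema is only a few lines with the jsonschema library -- a minimal,
illustrative sketch (the schema below is made up, not an actual API
definition):

  import jsonschema

  server_schema = {
      'type': 'object',
      'properties': {
          'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
          'imageRef': {'type': 'string'},
          'flavorRef': {'type': 'string'},
      },
      'required': ['name', 'flavorRef'],
      'additionalProperties': True,   # leaves room for API extensions
  }

  body = {'name': 'test-vm', 'flavorRef': '1', 'OS-EXT:foo': 'bar'}
  jsonschema.validate(body, server_schema)   # raises ValidationError if invalid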

On 1/12/14, 5:33 PM, Jamie Lennox jamielen...@redhat.com wrote:

I would prefer not to have keystone using yet another framework from the
rest of openstack


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-17 Thread Jay Pipes
On Fri, 2014-01-17 at 17:03 +, Andrew Hutchings wrote:
 On 17 Jan 2014, at 16:10, Jay Pipes jaypi...@gmail.com wrote:
 
  On Fri, 2014-01-17 at 14:34 +0100, Thomas Herve wrote:
  Hi all,
  
  I've been looking at Neutron default LBaaS provider using haproxy, and 
  while it's working nicely, it seems to have several shortcomings in terms 
  of scalability and high availability. The Libra project seems to offer a 
  more robust alternative, at least for scaling. The haproxy implementation 
  in Neutron seems to continue to evolve (like with 
  https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy), but I'm 
  wondering why we can't leverage Libra. The APIs are a bit different, but 
  the goals look very similar, and there is a waste of effort with 2 
  different implementations. Maybe we could see a Libra driver for Neutron 
  LBaaS for example?
  
  Yep, it's a completely duplicative and wasteful effort.
  
  It would be great for Libra developers to contribute to Neutron LBaaS.
 
 Hi Jay and Thomas,
 
 I am the outgoing technical lead of Libra for HP.  But will reply whilst the 
 new technical lead (Marc Pilon) gets subscribed to this.

:( I had no idea, Andrew!

 I would go as far as duplicative or wasteful. Libra existed before Neutron 
 LBaaS and is originally based on the Atlas API specifications.  Neutron LBaaS 
 has started duplicating some of our features recently which we find quite 
 flattering.

I presume you meant you would *not* go as far as duplicative or
wasteful :)

So, please don't take this the wrong way... but does anyone other than
HP run Libra? Likewise, does anyone other than Rackspace run Atlas?

I find it a little difficult to comprehend why, if Libra preceded work
on Neutron LBaaS, that it wasn't used as the basis of Neutron's LBaaS
work. I can understand this for Atlas, since it's Java, but Libra is
Python code... so it's even more confusing to me.

Granted, I don't know the history of Neutron LBaaS, but it just seems to
be that this particular area (LBaaS) has such blatantly overlapping
codebases with separate contributor teams. Just baffling really.

Any background or history you can give me (however opinionated!) would
be very much appreciated :)

 After the 5.x release of Libra has been stabilised we will be working towards 
 integration with Neutron.  It is a very important thing on our roadmap and we 
 are already working with 2 other large companies in Openstack to figure that 
 piece out.

Which large OpenStack companies? Are these companies currently deploying
Libra?

Thanks,
-jay

 If anyone else wants to get involved or just wants to play with Libra I’m 
 sure the HP team would be happy to hear about it and help where they can.
 
 Hope this helps
 
 Kind Regards
 Andrew
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum][Keystone] Best practices for storing keystone trusts information

2014-01-17 Thread Georgy Okrokvertskhov
Hi Adrian,

Barbican looks good for this purpose. I will do a prototype with it.

Thanks
Georgy


On Fri, Jan 17, 2014 at 11:43 AM, Adrian Otto adrian.o...@rackspace.com wrote:

  Georgy,

  For Solum, let's refrain from storing any secrets, whether they be
 passwords or trusts, or tokens. I definitely don't want to be in the
 business of managing how to secure them in an SQL database. I don't even
 want admin password values to appear in the configuration files. I'd
 prefer to take a hard dependency on barbican[1], and store them in there,
 where they can be centrally fortified with encryption and access controls,
 accesses can be logged, they can be revoked, and we have a real auditing
 story for enterprises who have strict security requirements.

  Thanks,

  Adrian

  [1] https://github.com/stackforge/barbican

  On Jan 17, 2014, at 11:26 AM, Georgy Okrokvertskhov 
 gokrokvertsk...@mirantis.com
  wrote:

  Hi Lance,

  Thank you for the documentation link. It really solves the problem with
 trust expiration. I really like an idea to restrict trust to specific
 roles. This is great.

  As you mentioned, you use sql to store trusts information. Do you use
 any encryption for that? I am thinking from security perspective, if you
 have trust information in DB it might be not safe as this trust is a long
 term authentication.

  Thanks
 Georgy


 On Fri, Jan 17, 2014 at 10:31 AM, Lance D Bragstad ldbra...@us.ibm.com wrote:

  Hi Georgy,

 The following might help with some of the trust questions you have, if
 you haven't looked at it already:

 https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-trust-ext.md


 As far as storage implementation, trust uses sql and kvs backends. Trusts
 can be given an expiration but if an expiration is not given the trust is
 valid until it is explicitly revoked (taken from the link above):

   Optionally, the trust may only be valid for a specified time period, as
 defined by expires_at. If no expires_at is specified, then the trust is
 valid until it is explicitly revoked.

 Trusts can also be given 'uses' so that you can set a limit to how many
 times a trust will issue a token to the trustee. That functionality hasn't
 landed yet but it is up for review:
 https://review.openstack.org/#/c/56243/


 Hope this helps!


 Best Regards,

 Lance Bragstad


 Georgy Okrokvertskhov ---01/17/2014 12:11:46 PM---Hi, In
 Solum project we want to use Keystone trusts to work with other


 From: Georgy Okrokvertskhov gokrokvertsk...@mirantis.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,

 Date: 01/17/2014 12:11 PM
 Subject: [openstack-dev] [Solum][Keystone] Best practices for storing
 keystone trusts information

 --



 Hi,

 In Solum project we want to use Keystone trusts to work with other
 OpenStack services on behalf of user. Trusts are long term entities and a
 service should keep them for a long time.

 I want to understand what are best practices for working with trusts and
 storing them in a service?

 What are the options to keep trust? I see obvious approaches like keep
 them in a service DB or keep them in memory. Are there any other approaches?

 Is there a proper way to renew trust? For example if I have a long term
 task which is waiting for external event, how to keep trust fresh for a
 long and unpredicted period?

 Thanks
  Georgy___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Technical Program Manager,
Cloud and Infrastructure Services,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-01-17 Thread Clint Byrum
tl;dr: You're right, it would be useful. Points on what is blocking it
below:

Excerpts from James Slagle's message of 2014-01-17 05:18:01 -0800:
 On Thu, Jan 16, 2014 at 7:29 PM, Clint Byrum cl...@fewbar.com wrote:
  Note that tripleo-incubator is special and should not be released. It
  is intentionally kept unfrozen and unreleased to make sure there is no
  illusion of stability.
 
 I think it would be nice if we could point people at a devtest that
 they could use with our other released stuff. Without that, we might
 make a change to devtest, such as showing the use of a new heat
 parameter in our templates, and if they're trying to follow along with
 a released tripleo-heat-templates then they would have a problem.
 
 Without a branch of incubator, there's no story or documentation
 around using any of our released stuff.  You could follow along with
 devtest to get an idea of how it's supposed to work and indeed it
 might even work, but I don't think that's good enough. There is
 tooling in incubator that has proved its usefulness. Take an example
 like setup-endpoints, what we're effectively saying without allowing
 people to use that is that there is a useful tool that will setup
 endpoints for you, but don't use it with our released stuff because
 it's not guaranteed to work and instead make these 10-ish calls to
 keystone via some other method. Then you'd also end up with a
 different but parallel set of instructions for using our released
 stuff vs. not.
 

I'll address the bigger points here below, but for the record, I think
setup-endpoints and register-endpoint are stable enough now that they
should just be included with keystoneclient or keystone. Perhaps rewritten
as subcommands to the keystone cli, but even as-is they would be useful
in keystoneclient's bin dir IMO.

 This is prohibitive to someone who may want to setup a tripleo CI/CD
 cloud deploying stable icehouse or from milestone branches. I think
 people would just create their own fork of tripleo-incubator and use
 that.
 
  If there are components in it that need releasing, they should be moved
  into relevant projects or forked into their own projects.
 
 I'd be fine with that approach, except that's pretty much everything
 in incubator, the scripts, templates, generated docs, etc. Instead of
 creating a new forked repo, why don't we just rename tripleo-incubator
 to tripleo-deployment and have some stable branches that people could
 use with our releases?
 
 I don't feel like that precludes tripleo from having no stability on
 the master branch at all.

If we are prepared to make basic release-to-release stability guarantees
for everything in incubator (or kick the few things we aren't prepared
to do that for out to a new incubator) then huzzah! Let's do what you
suggest above. :)

I just don't think we're there yet, and I'd rather see us fork off the
things that are ready as they get to that point rather than try to make
a giant push to freeze the whole thing. I'm afraid we'd leave users in a
bad position if they expect the icehouse version of assert-user to still
be there and keep working in Juno.

To put some analysis where my rhetoric is, I've taken a look at all
of the scripts. By my count there are 43 scripts, 10 of which are not
really part of TripleO. By lines of code, there are about 1954 lines
of actual code and 600 lines in the parts that are not TripleO. So between 20
and 30 percent of the code shouldn't be part of TripleO and should be
pushed into their own releasable places. I've included the
classifications below.

Also before any of it gets released we need:

* Gating
* Commitment to real documentation
* Thierry's blessing. :)

==Script classifications==

CD cloud related - never freezes
assert-users
assert-admin-users
assert-user -- Perhaps merge with os-adduser

--Belongs in keystoneclient
setup-endpoints
register-endpoint
os-adduser -- Might need a rename?

--Belongs in glanceclient:
load-image

--Could be forked into a tripleo-devtest project:
boot-seed-vm
setup-network
configure-vm
create-nodes
set-os-type
devtest_setup.sh
devtest_seed.sh
devtest_variables.sh
devtest.sh
refresh-env
cleanup-env
devtest_ramdisk.sh
extract-docs.awk
devtest_undercloud.sh
devtest_overcloud.sh
write-tripleorc
install-dependencies
devtest_end.sh
register-nodes
extract-docs
devtest_testenv.sh
takeovernode

--Not devtest only, releasable, but specific to TripleO:
init-keystone
setup-overcloud-passwords
pull-tools
setup-seed-vm
stack-ready
user-config
get-vm-mac
setup-clienttools
setup-neutron
setup-undercloud-passwords
setup-baremetal

--Useful, but too tiny for their own repo:
os-make-password -- arguably we should just use pwgen
wait_for
send-irc

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-17 Thread Alan Kavanagh
Hi Rob

Then apart from the disk eraser, reinstalling the blade from scratch
every time it is returned from lease, and ensuring network isolation, what are the
many other concerns you are worried about for sharing the bare metal?
I would really like to know what the other major issues are that you see.

/Alan

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: January-17-14 3:15 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ironic] Disk Eraser

On 16 January 2014 03:31, Alan Kavanagh alan.kavan...@ericsson.com wrote:
 Hi fellow OpenStackers



 Does anyone have any recommendations on open source tools for disk 
 erasure/data destruction software. I have so far looked at DBAN and 
 disk scrubber and was wondering if ironic team have some better 
 recommendations?

So for Ironic this is a moderately low priority thing right now - and certainly 
I think it should be optional (what the default is is a different discussion).

It's low priority because there are -so- many other concerns about sharing bare 
metal machines between tenants that don't have comprehensive mutual trust, that 
it's really not viable today (even on relatively recent platforms IMNSHO).

-Rob


--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-17 Thread Alex Freedland
Andrew, Jay and all,

Thank you for bringing this topic up. Incidentally, just a month ago at
OpenStack Israel I spoke to Monty and other HP folks about getting the
Libra initiatives integrated into LBaaS.  I am happy that this discussion
is now happening on the mailing list.

I remember the history of how this got started. Mirantis was working with a
number of customers (GAP, PayPal, and a few others) who were asking for an
LBaaS feature. At that time, Atlas was the default choice in the community,
but its Java-based implementation did not agree with the rest of OpenStack.

There was no Libra anywhere in the OpenStack sandbox, so we proposed a
set of blueprints, and Eugene Nikonorov and the team started moving ahead
with the implementation. Even before the code was accepted into Quantum, a
number of customers started to use it and a number of vendors (F5, Radware,
etc.) joined the community to add their own plugins.

Consequently, the decision was made to add LBaaS to Quantum (aka Neutron).

We would love to see the Libra developers join the Neutron team and
collaborate on the ways to bring the two initiatives together.


Alex Freedland
Community Team
Mirantis, Inc.




On Fri, Jan 17, 2014 at 11:53 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-01-17 at 17:03 +, Andrew Hutchings wrote:
  On 17 Jan 2014, at 16:10, Jay Pipes jaypi...@gmail.com wrote:
 
   On Fri, 2014-01-17 at 14:34 +0100, Thomas Herve wrote:
   Hi all,
  
   I've been looking at Neutron default LBaaS provider using haproxy,
 and while it's working nicely, it seems to have several shortcomings in
 terms of scalability and high availability. The Libra project seems to
 offer a more robust alternative, at least for scaling. The haproxy
 implementation in Neutron seems to continue to evolve (like with
 https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy), but I'm
 wondering why we can't leverage Libra. The APIs are a bit different, but
 the goals look very similar, and there is a waste of effort with 2
 different implementations. Maybe we could see a Libra driver for Neutron
 LBaaS for example?
  
   Yep, it's a completely duplicative and wasteful effort.
  
   It would be great for Libra developers to contribute to Neutron LBaaS.
 
  Hi Jay and Thomas,
 
  I am the outgoing technical lead of Libra for HP.  But will reply whilst
 the new technical lead (Marc Pilon) gets subscribed to this.

 :( I had no idea, Andrew!

  I would go as far as duplicative or wasteful. Libra existed before
 Neutron LBaaS and is originally based on the Atlas API specifications.
  Neutron LBaaS has started duplicating some of our features recently which
 we find quite flattering.

 I presume you meant you would *not* go as far as duplicative or
 wasteful :)

 So, please don't take this the wrong way... but does anyone other than
 HP run Libra? Likewise, does anyone other than Rackspace run Atlas?

 I find it a little difficult to comprehend why, if Libra preceded work
 on Neutron LBaaS, that it wasn't used as the basis of Neutron's LBaaS
 work. I can understand this for Atlas, since it's Java, but Libra is
 Python code... so it's even more confusing to me.

 Granted, I don't know the history of Neutron LBaaS, but it just seems to
 be that this particular area (LBaaS) has such blatantly overlapping
 codebases with separate contributor teams. Just baffling really.

 Any background or history you can give me (however opinionated!) would
 be very much appreciated :)

  After the 5.x release of Libra has been stabilised we will be working
 towards integration with Neutron.  It is a very important thing on our
 roadmap and we are already working with 2 other large companies in
 Openstack to figure that piece out.

 Which large OpenStack companies? Are these companies currently deploying
 Libra?

 Thanks,
 -jay

  If anyone else wants to get involved or just wants to play with Libra
 I’m sure the HP team would be happy to hear about it and help where they
 can.
 
  Hope this helps
 
  Kind Regards
  Andrew
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Sharing information

2014-01-17 Thread Jay Pipes
On Thu, 2014-01-16 at 15:37 +, Sullivan, Jon Paul wrote:
  From: Jay Pipes [mailto:jaypi...@gmail.com]
  On Thu, 2014-01-16 at 10:39 +, Sullivan, Jon Paul wrote:
From: Kyle Mestery [mailto:mest...@siliconloons.com]

FYI, here [1] are the meeting logs from today’s meeting.
   
A couple of things have become apparent here:
   
1. No one has a working Neutron 3rd party testing rig yet which is
voting
consistently. If I’ve missed something, please, someone correct
  me.
2. People are still hung on issues around Jenkins/gerrit
  integration.
  
   This issue can be very easily resolved if people were to use Jenkins
  Job Builder [2] for the creation of their Jenkins testing jobs.  This
  would allow the reuse of simple macros already in existence to guarantee
  correct configuration of Jenkins jobs at 3rd party sites.  This would
  also allow simple reuse of the code used by the infra team to create the
  openstack review and gate jobs, ensuring 3rd party testers can generate
  the correct code from the gerrit change and also publish results back in
  a standard way.
  
   I can't recommend Jenkins Job Builder highly enough if you use
  Jenkins.
  
   [2] https://github.com/openstack-infra/jenkins-job-builder
  
  ++ It's a life-saver. We used it heavily in ATT with our
  Gerrit/Jenkins/Zuul CI system.
  
  -jay
 
 It seems to me that shared JJB macros could be the most concise and simple way
 of describing 3rd party testing integration requirements.
 
 So the follow-on questions are:
 1. Can the 3rd party testing blueprint enforce, or at least link to,
use of specific JJB macros for integration to the openstack gerrit?
   1a. Where should shared JJB code be stored?

Well, technically, this already exists. The openstack-infra/config
project already has pretty much everything a 3rd party would ever need
to setup an OpenStack environment, execute Tempest (or other) tests
against the environment, save and publish artifacts, and send
notifications of test results upstream.

 2. Is it appropriate for 3rd party testers to share their tests as
JJB code, if they are willing?
   2a. Would this live in the same location as (1a)?

Why would 3rd party testers be using anything other than Tempest for
integration testing? Put another way... if a 3rd party *is* using
something other than Tempest, why not put it in Tempest :)

 For those unfamiliar with JJB, here is a little example of what you might do:
 
 Example of (untested) JJB macro describing how to configure Jenkins to
 trigger from gerrit:
 snip

As much as JJB is total awesomesauce -- as it prevents people needing to
manually update Jenkins job config.xml files -- any 3rd party that is
attempting to put together a test environment/platform for which you
intend to interact with the upstream CI system should go check out
devstack-gate [1], read the scripts, and grok it.

I'm working on some instructions to assist admins in 3rd party testing
labs in setting all of their platform up using the upstream tools like
devstack-gate and JJB, and this documentation should be done around
middle of next week. I'll post to the ML with links to that
documentation when it's done.

Best,
-jay

[1] https://github.com/openstack-infra/devstack-gate


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] why don't we deal with claims when live migrating an instance?

2014-01-17 Thread yunhong jiang
On Fri, 2014-01-17 at 09:39 -0800, Vishvananda Ishaya wrote:
 
 On Jan 16, 2014, at 9:41 PM, Jiang, Yunhong yunhong.ji...@intel.com
 wrote:
 
  I noticed the BP has been approved, but I really want to understand
  more on the reason, can anyone provide me some hints?
   
  In the BP, it states that “For resize, we need to confirm, as we
  want to give end user an opportunity to rollback”. But why do we
  want to give user an opportunity to rollback to resize? And why that
  reason does not apply to cold migration and live migration?
 
 
 The confirm is so the user can verify that the instance is still
 functional in the new state. We leave the old instance around so they
 can abort and return to the old instance if something goes wrong. This
 could apply to cold migration as well since it uses the same code
 paths, but it definitely does not make sense in the case of
 live-migration, because there is no old vm to revert to.

Thanks for the clarification.
 
 In the case of cold migration, the state is quite confusing as
 “RESIZE_VERIFY”, and the need to confirm is not immediately obvious so
 I think that is driving the change.
 
I didn't see a patch to change the state in that BP, so possibly it's
still on the way.

So basically the idea is that, while we keep the implementation code path
combined for resize/cold migration as much as possible, we will keep
them different from the user's point of view, with different configuration
options, different states, etc., right?
 
--jyh





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][compute] Why prune all compute node stats when sync up compute nodes

2014-01-17 Thread yunhong jiang
On Thu, 2014-01-16 at 00:22 +0800, Jay Lau wrote:
 Greeting,
 
 In compute/manager.py, there is a periodic task named as
 update_available_resource(), it will update resource for each compute
 periodically.
 
     @periodic_task.periodic_task
     def update_available_resource(self, context):
         """See driver.get_available_resource()

         Periodic process that keeps the compute host's understanding of
         resource availability and usage in sync with the underlying
         hypervisor.

         :param context: security context
         """
         new_resource_tracker_dict = {}
         nodenames = set(self.driver.get_available_nodes())
         for nodename in nodenames:
             rt = self._get_resource_tracker(nodename)
             rt.update_available_resource(context)  # <== Update here
             new_resource_tracker_dict[nodename] = rt
 
 In resource_tracker.py,
 https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L384
 
 self._update(context, resources, prune_stats=True)
 
 It always sets prune_stats to True, which caused some problems for me.
 I'm now putting some metrics into the compute_node_stats table; those
 metrics do not change frequently, so I do not update them frequently.
 But the periodic task always prunes the new metrics that I added.

 
IIUC, it's because the host resources may change dynamically, at least in
the original design?

 What about adding a configuration parameter in nova.conf to make
 prune_stats configurable?

Instead of making prune_stats configurable, would it make more sense
to do a lazy update, i.e. not update the DB if nothing has changed?
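
Something like the following hypothetical sketch (not the actual resource_tracker code; _last_written_stats is an invented attribute) illustrates the lazy-update idea:

    import copy

    def _update(self, context, values, prune_stats=False):
        # Hypothetical lazy update: skip the DB write when the values to be
        # recorded are identical to what was written last time.
        if values == getattr(self, '_last_written_stats', None):
            return
        self._last_written_stats = copy.deepcopy(values)
        # ... fall through to the existing compute_node DB update ...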
 
 Thanks,
 
 
 Jay
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-17 Thread Devananda van der Veen
On Fri, Jan 17, 2014 at 12:35 PM, Alan Kavanagh
alan.kavan...@ericsson.comwrote:

 Hi Rob

 Then apart from the disk eraser, reinstalling the blade from scratch
 every time it is returned from lease, and ensuring network isolation, what are
 the many other concerns you are worried about for sharing the bare metal?
 I would really like to know what the other major issues are that you see.

 /Alan


Alan,

Disk erasure is, in my opinion, more suitable to policy compliance, for
instance wiping HIPAA / protected information from a machine before
returning it to the pool of available machines within a trusted
organization. It's not just about security. We discussed it briefly at the
HKG summit, and it fits within the long-tail of this blueprint:

https://blueprints.launchpad.net/ironic/+spec/utility-ramdisk

To answer your other question, the security implications of putting
untrusted tenants on bare metal today are numerous. The really big attack
vector which, AFAIK, no one has completely solved is firmware. Even though
we can use UEFI (in hardware which supports it) to validate the main
firmware and the OS's chain of trust, there are still many
microcontrollers, PCI devices, storage controllers, etc., whose firmware can't be
validated out-of-band and thus cannot be trusted. The risk is that a prior
tenant maliciously flashed a new firmware which will lie about its status
and remain a persistent infection even if you attempt to re-flash said
device. There are other issues which are easier to solve (eg, network
isolation during boot, IPMI security, a race condition if the data center
power cycles and the node boots before the control plane is online, etc)
but these are, ultimately, not enough as long as the firmware attack vector
still exists.

tl;dr, We should not be recycling bare metal nodes between untrusted
tenants at this time. There's a broader discussion about firmware security
going on, which, I think, will take a while for the hardware vendors to
really address. Fixing the other security issues around it, while good,
isn't a high priority for Ironic at this time.

-Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-17 Thread Robert Li (baoli)
Yunhong,

I'm hoping that these comments can be directly addressed:
  a practical deployment scenario that requires arbitrary attributes.
  detailed design on the following (that also takes into account the
introduction of predefined attributes):
* PCI stats report since the scheduler is stats based
* the scheduler in support of PCI flavors with arbitrary
attributes and potential overlapping.
  networking requirements to support multiple provider nets/physical
nets

I guess that the above will become clear as the discussion goes on. And we
also need to define the deliverables.
 
Thanks,
Robert

On 1/17/14 2:02 PM, Jiang, Yunhong yunhong.ji...@intel.com wrote:

Robert, thanks for your long reply. Personally I'd prefer option 2/3 as
it keeps Nova the only entity for PCI management.

Glad you are ok with Ian's proposal and we have solution to resolve the
libvirt network scenario in that framework.

Thanks
--jyh

 -Original Message-
 From: Robert Li (baoli) [mailto:ba...@cisco.com]
 Sent: Friday, January 17, 2014 7:08 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [neutron] PCI pass-through network
 support
 
 Yunhong,
 
 Thank you for bringing that up on the live migration support. In addition
 to the two solutions you mentioned, Irena has a different solution. Let me
 put all of them here again:
 1. network xml/group based solution.
In this solution, each host that supports a provider net/physical
 net can define a SRIOV group (it's hard to avoid the term as you can see
 from the suggestion you made based on the PCI flavor proposal). For each
 SRIOV group supported on a compute node, A network XML will be
 created the
 first time the nova compute service is running on that node.
 * nova will conduct scheduling, but not PCI device allocation
 * it's a simple and clean solution, documented in libvirt as the
 way to support live migration with SRIOV. In addition, a network xml is
 nicely mapped into a provider net.
 2. network xml per PCI device based solution
    This is the solution you brought up in this email, and Ian
 mentioned this to me as well. In this solution, a network xml is created
 when a VM is created. The network xml needs to be removed once the VM is
 removed. This hasn't been tried out as far as I know.
 3. interface xml/interface rename based solution
Irena brought this up. In this solution, the ethernet interface
 name corresponding to the PCI device attached to the VM needs to be
 renamed. One way to do so without requiring system reboot is to change
 the
 udev rule's file for interface renaming, followed by a udev reload.
 
 Now, with the first solution, Nova doesn't seem to have control over or
 visibility of the PCI device allocated for the VM before the VM is
 launched. This needs to be confirmed with the libvirt support and see if
 such capability can be provided. This may be a potential drawback if a
 neutron plugin requires detailed PCI device information for operation.
 Irena may provide more insight into this. Ideally, neutron shouldn't
need
 this information because the device configuration can be done by libvirt
 invoking the PCI device driver.
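
As a rough illustration of solution 1 (not code from any of the proposals), defining such a per-physical-network VF pool with the libvirt python binding might look like this; the network name physnet1-sriov and the PF device eth2 are made-up examples:

    import libvirt

    SRIOV_NET_XML = """<network>
      <name>physnet1-sriov</name>
      <forward mode='hostdev' managed='yes'>
        <pf dev='eth2'/>
      </forward>
    </network>"""

    conn = libvirt.open('qemu:///system')
    net = conn.networkDefineXML(SRIOV_NET_XML)  # persistent network definition
    net.setAutostart(True)
    net.create()   # start it now; a guest interface of type 'network' with
                   # <source network='physnet1-sriov'/> then draws a VF from it
    conn.close()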
 
 The other two solutions are similar. For example, you can view the second
 solution as one way to rename an interface, or camouflage an interface
 under a network name. They all require additional work before the VM is
 created and after the VM is removed.
 
 I also agree with you that we should take a look at XenAPI on this.
 
 
 With regard to your suggestion on how to implement the first solution
with
 some predefined group attribute, I think it definitely can be done. As I
 have pointed it out earlier, the PCI flavor proposal is actually a
 generalized version of the PCI group. In other words, in the PCI group
 proposal, we have one predefined attribute called PCI group, and
 everything else works on top of that. In the PCI flavor proposal,
 attribute is arbitrary. So certainly we can define a particular
attribute
 for networking, which we can temporarily call sriov_group. But I can see
 that with this idea of predefined attributes, more of them will be required by
 different types of devices in the future. I'm sure it will keep us busy,
 although I'm not sure in a good way.
 
 I was expecting you or someone else could provide a practical deployment
 scenario that would justify the flexibility and the complexity.
 Although I'd prefer to keep it simple and generalize it later once a
 particular requirement is clearly identified, I'm fine going with it if
 that's what most of the folks want to do.
 
 --Robert
 
 
 
 On 1/16/14 8:36 PM, yunhong jiang yunhong.ji...@linux.intel.com
 wrote:
 
 On Thu, 2014-01-16 at 01:28 +0100, Ian Wells wrote:
  To clarify a couple of Robert's points, since we had a conversation
  earlier:
  On 15 January 2014 23:47, Robert Li (baoli) ba...@cisco.com wrote:

Re: [openstack-dev] [ironic] [QA] some notes on ironic functional testing

2014-01-17 Thread Chris K
Hi Alexander,

Reading your post got me to thinking. What if we modified the ssh driver so
that it used the libvirt API? Just off the top of my head, something along the
lines of changing the ssh driver to issue python-libvirt commands would
work. As an example:



  ssh user@host python -c "import libvirt; conn = libvirt.openReadOnly(None); dom0 = conn.lookupByName('seed'); print 'Seed: id %d running %s' % (dom0.ID(), dom0.OSType())"



 Seed: id 2 running hvm


This seems like a straightforward improvement to the driver, and should
improve overall performance.
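
Going a little further, a hypothetical sketch (not the current ssh driver code) of how one long-lived libvirt connection could serve all power operations, instead of a virsh invocation per call:

    import libvirt

    class LibvirtPowerDriver(object):
        """Hypothetical sketch: reuse a single libvirt connection for all
        calls instead of shelling out to virsh, which opens and tears down
        a new connection on every invocation."""

        def __init__(self, uri='qemu:///system'):
            self.conn = libvirt.open(uri)

        def power_on(self, name):
            dom = self.conn.lookupByName(name)
            if not dom.isActive():
                dom.create()

        def power_off(self, name):
            dom = self.conn.lookupByName(name)
            if dom.isActive():
                dom.destroy()

        def power_state(self, name):
            return self.conn.lookupByName(name).isActive()

    # e.g. cycle a handful of test VMs over the same connection
    # (the VM names below are made-up examples)
    driver = LibvirtPowerDriver()
    for node in ('seed', 'undercloud', 'overcloud-0'):
        driver.power_off(node)
        driver.power_on(node)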


Chris Krelle
NobodyCam


On Wed, Jan 15, 2014 at 2:22 AM, Alexander Gordeev agord...@mirantis.comwrote:

 Hi, Devananda


 On Wed, Jan 15, 2014 at 8:19 AM, Devananda van der Veen 
 devananda@gmail.com wrote:


 On Tue, Jan 14, 2014 at 6:28 AM, Alexander Gordeev agord...@mirantis.com
  wrote:


- Secondly, virsh has some performance issues if you deal with 30
VMs (it is not our case for now but who knows).

 This is a reason why you want to use python libvirt api instead of virsh
 CLI, correct? I don't see a problem, but I will defer to the tempest devs
 on whether that's OK.


 Yes, that's correct. In short, using the python API binding makes it possible
 to execute all operations inside just one open libvirt connection. The virsh
 CLI opens a new connection every time you call it. Every new connection
 produces a fork of the libvirt daemon. When you're going to spawn/create/modify
 a few dozen VMs in a short period of time this performance issue becomes
 very noticeable.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party-testing] Sharing information

2014-01-17 Thread Mohammad Banikazemi


Jay Pipes jaypi...@gmail.com wrote on 01/17/2014 04:32:55 PM:

 From: Jay Pipes jaypi...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 01/17/2014 04:37 PM
 Subject: Re: [openstack-dev] [neutron] [third-party-testing] Sharing
 information

 On Thu, 2014-01-16 at 15:37 +, Sullivan, Jon Paul wrote:
   From: Jay Pipes [mailto:jaypi...@gmail.com]
   On Thu, 2014-01-16 at 10:39 +, Sullivan, Jon Paul wrote:
 From: Kyle Mestery [mailto:mest...@siliconloons.com]

 FYI, here [1] are the meeting logs from today’s meeting.

 A couple of things have become apparent here:

 1. No one has a working Neutron 3rd party testing rig yet which
is
 voting
 consistently. If I’ve missed something, please, someone
correct
   me.
 2. People are still hung on issues around Jenkins/gerrit
   integration.
   
This issue can be very easily resolved if people were to use
Jenkins
   Job Builder [2] for the creation of their Jenkins testing jobs.  This
   would allow the reuse of simple macros already in existence to
guarantee
   correct configuration of Jenkins jobs at 3rd party sites.  This would
   also allow simple reuse of the code used by the infra team to create
the
   openstack review and gate jobs, ensuring 3rd party testers can
generate
   the correct code from the gerrit change and also publish results back
in
   a standard way.
   
I can't recommend Jenkins Job Builder highly enough if you use
   Jenkins.
   
[2] https://github.com/openstack-infra/jenkins-job-builder
  
   ++ It's a life-saver. We used it heavily in ATT with our
   Gerrit/Jenkins/Zuul CI system.
  
   -jay
 
  It seems to me that shared JJB macros could be the most concise
 and simple way
  of describing 3rd party testing integration requirements.
 
  So the follow-on questions are:
  1. Can the 3rd party testing blueprint enforce, or at least link to,
 use of specific JJB macros for integration to the openstack gerrit?
1a. Where should shared JJB code be stored?

 Well, technically, this already exists. The openstack-infra/config
 project already has pretty much everything a 3rd party would ever need
 to setup an OpenStack environment, execute Tempest (or other) tests
 against the environment, save and publish artifacts, and send
 notifications of test results upstream.

  2. Is it appropriate for 3rd party testers to share their tests as
 JJB code, if they are willing?
2a. Would this live in the same location as (1a)?

 Why would 3rd party testers be using anything other than Tempest for
 integration testing? Put another way... if a 3rd party *is* using
 something other than Tempest, why not put it in Tempest :)

  For those unfamiliar with JJB, here is a little example of what
 you might do:
 
  Example of (untested) JJB macro describing how to configure Jenkins to
  trigger from gerrit:
  snip

 As much as JJB is total awesomesauce -- as it prevents people needing to
 manually update Jenkins job config.xml files -- any 3rd party that is
 attempting to put together a test environment/platform for which you
 intend to interact with the upstream CI system should go check out
 devstack-gate [1], read the scripts, and grok it.

 I'm working on some instructions to assist admins in 3rd party testing
 labs in setting all of their platform up using the upstream tools like
 devstack-gate and JJB, and this documentation should be done around
 middle of next week. I'll post to the ML with links to that
 documentation when it's done.


That would be great. Thanks. Please note that Icehouse-2 is the deadline
for Neutron 3rd party test setups to be operational.

 Best,
 -jay

 [1] https://github.com/openstack-infra/devstack-gate


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] ML2 vlan type driver does not honor network_vlan_ranges

2014-01-17 Thread Paul Ward

Henry, thank you very much for your reply.  To try to tie together our
discussion here with what's in the launchpad bug report I opened
(https://bugs.launchpad.net/neutron/+bug/1269926), here is the method used
to create the network.  I'm creating the network via a UI, which does a
rest api POST to https://ip/powervc/openstack/network/v2.0//networks with
the following payload:

name: test4094
provider:network_type: vlan
provider:physical_network: default
provider:segmentation_id: 4094
Per the documentation, I assume the tenant_id is obtained via keystone.

Also interesting, I see this in /var/log/neutron/server.log:

2014-01-17 17:43:05.688 62718 DEBUG neutron.plugins.ml2.drivers.type_vlan
[req-484c7ddd-7f83-443b-9427-f7ac327dd99d 0
26e92528a0bc4d84ac0777b2d2b93a83] NT-E59BA3F Reserving specific vlan 4094
on physical network default outside pool
reserve_provider_segment 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/drivers/type_vlan.py:212

This indicates OpenStack realizes the vlan is outside the range yet still
allowed it, lending even more credence to the idea that I'm incorrect in
my thinking that this should have been prevented.  Further information to
help me understand why this is not being enforced would be greatly
appreciated.

Thanks!

- Paul

Henry Gessau ges...@cisco.com wrote on 01/16/2014 03:31:44 PM:

 Date: Thu, 16 Jan 2014 16:31:44 -0500
 From: Henry Gessau ges...@cisco.com
 To: OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [neutron] ML2 vlan type driver does not
honor network_vlan_ranges
 Message-ID: 52d84fc0.8020...@cisco.com
 Content-Type: text/plain; charset=ISO-8859-1

 network_vlan_ranges is a 'pool' of vlans from which to pick vlans for
 tenant networks. Provider networks are not confined to this pool. In
fact, I
 believe it is a more common use-case that provider vlans are outside the
 pool so that they do not conflict with tenant vlan allocation.

 -- Henry

 On Thu, Jan 16, at 3:45 pm, Paul Ward wpw...@us.ibm.com wrote:

  In testing some new function I've written, I've surfaced the problem that
  the ML2 vlan type driver does not enforce the vlan range specified in the
  network_vlan_ranges option in the ml2_conf.ini file.  It is properly enforcing
  the physical network name, and even checking to be sure the segmentation_id
  is valid in the sense that it's not outside the range of ALL valid vlan ids.
  But it does not actually enforce that segmentation_id is within the vlan
  range specified for the given physical network in network_vlan_ranges.
 
  The fix I propose is simple.  Add the following check to
  /neutron/plugins/ml2/drivers/type_vlan.py
  (TypeVlanDriver.validate_provider_segment()):
 
          range_min, range_max = self.network_vlan_ranges[physical_network][0]
          if not range_min <= segmentation_id <= range_max:
              msg = (_("segmentation_id out of range (%(min)s through "
                       "%(max)s)") %
                     {'min': range_min,
                      'max': range_max})
              raise exc.InvalidInput(error_message=msg)
 
  This would go near line 182 in
  https://github.com/openstack/neutron/blob/master/neutron/plugins/
 ml2/drivers/type_vlan.py.
 
  One question I have is whether self.network_vlan_ranges
[physical_network]
  could actually be an empty list rather than a tuple representing the
vlan
  range.  I believe that should always exist, but the documentation is
not
  clear on this.  For reference, the corresponding line in
 ml2_conf.ini is this:
 
  [ml2_type_vlan]
  network_vlan_ranges = default:1:4093
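
If it turns out that the entry can be missing or empty, a slightly more defensive variant of the check (a hypothetical sketch, which also handles multiple ranges per physical network) could look like:

          ranges = self.network_vlan_ranges.get(physical_network) or []
          if not any(r_min <= segmentation_id <= r_max
                     for r_min, r_max in ranges):
              msg = (_("segmentation_id %(vlan)s is not in any configured "
                       "range for physical network %(net)s") %
                     {'vlan': segmentation_id, 'net': physical_network})
              raise exc.InvalidInput(error_message=msg)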
 
  Thanks in advance to any that choose to provide some insight here!
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Relationship between Neutron LBaaS and Libra

2014-01-17 Thread Georgy Okrokvertskhov
Hi,


Here are the e-mail threads which keep the history of the LBaaS decisions:
LBaaS IRC meeting minutes:
http://lists.openstack.org/pipermail/openstack-dev/2012-August/000390.html
LBaaS e-mail discussion:
http://lists.openstack.org/pipermail/openstack-dev/2012-August/000785.html

As you can see, there was a comparison of the LBaaS solutions that existed at that moment:
 * Atlas-LB
 * Mirantis LBaaS
 * eBay LBaaS

Git history shows that the initial commit for Libra was on September 10th,
2012. This commit contains a few files without any LBaaS functionality.

I think it is quite fair to say that the OpenStack community did a great job of
carefully evaluating the existing and working LBaaS projects and made a
decision to add some of the existing functionality to Quantum.

Thanks
Georgy


On Fri, Jan 17, 2014 at 1:12 PM, Alex Freedland afreedl...@mirantis.comwrote:

 Andrew, Jay and all,

 Thank you for bringing this topic up. Incidentally, just a month ago at
 OpenStack Israel I spoke to Monty and other HP folks about getting the
 Libra initiatives integrated into LBaaS.  I am happy that this discussion
 is now happening on the mailing list.

 I remember the history of how this got started. Mirantis was working with
 a number of customers (GAP, PayPal, and a few others) who were asking for
 an LBaaS feature. At that time, Atlas was the default choice in the community,
 but its Java-based implementation did not agree with the rest of OpenStack.

 There was no Libra anywhere in the OpenStack sandbox, so we proposed
 a set of blueprints, and Eugene Nikonorov and the team started moving ahead
 with the implementation. Even before the code was accepted into Quantum, a
 number of customers started to use it and a number of vendors (F5, Radware,
 etc.) joined the community to add their own plugins.

 Consequently, the decision was made to add LBaaS to Quantum (aka Neutron).

 We would love to see the Libra developers join the Neutron team and
 collaborate on the ways to bring the two initiatives together.


 Alex Freedland
 Community Team
 Mirantis, Inc.




 On Fri, Jan 17, 2014 at 11:53 AM, Jay Pipes jaypi...@gmail.com wrote:

 On Fri, 2014-01-17 at 17:03 +, Andrew Hutchings wrote:
  On 17 Jan 2014, at 16:10, Jay Pipes jaypi...@gmail.com wrote:
 
   On Fri, 2014-01-17 at 14:34 +0100, Thomas Herve wrote:
   Hi all,
  
   I've been looking at Neutron default LBaaS provider using haproxy,
 and while it's working nicely, it seems to have several shortcomings in
 terms of scalability and high availability. The Libra project seems to
 offer a more robust alternative, at least for scaling. The haproxy
 implementation in Neutron seems to continue to evolve (like with
 https://blueprints.launchpad.net/neutron/+spec/lbaas-ha-haproxy), but
 I'm wondering why we can't leverage Libra. The APIs are a bit different,
 but the goals look very similar, and there is a waste of effort with 2
 different implementations. Maybe we could see a Libra driver for Neutron
 LBaaS for example?
  
   Yep, it's a completely duplicative and wasteful effort.
  
   It would be great for Libra developers to contribute to Neutron LBaaS.
 
  Hi Jay and Thomas,
 
  I am the outgoing technical lead of Libra for HP.  But will reply
 whilst the new technical lead (Marc Pilon) gets subscribed to this.

 :( I had no idea, Andrew!

  I would go as far as duplicative or wasteful. Libra existed before
 Neutron LBaaS and is originally based on the Atlas API specifications.
  Neutron LBaaS has started duplicating some of our features recently which
 we find quite flattering.

 I presume you meant you would *not* go as far as duplicative or
 wasteful :)

 So, please don't take this the wrong way... but does anyone other than
 HP run Libra? Likewise, does anyone other than Rackspace run Atlas?

 I find it a little difficult to comprehend why, if Libra preceded work
 on Neutron LBaaS, that it wasn't used as the basis of Neutron's LBaaS
 work. I can understand this for Atlas, since it's Java, but Libra is
 Python code... so it's even more confusing to me.

 Granted, I don't know the history of Neutron LBaaS, but it just seems to
 be that this particular area (LBaaS) has such blatantly overlapping
 codebases with separate contributor teams. Just baffling really.

 Any background or history you can give me (however opinionated!) would
 be very much appreciated :)

  After the 5.x release of Libra has been stabilised we will be working
 towards integration with Neutron.  It is a very important thing on our
 roadmap and we are already working with 2 other large companies in
 Openstack to figure that piece out.

 Which large OpenStack companies? Are these companies currently deploying
 Libra?

 Thanks,
 -jay

  If anyone else wants to get involved or just wants to play with Libra
 I’m sure the HP team would be happy to hear about it and help where they
 can.
 
  Hope this helps
 
  Kind Regards
  Andrew

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-17 Thread yunhong jiang
On Fri, 2014-01-17 at 22:30 +, Robert Li (baoli) wrote:
 Yunhong,
 
 I'm hoping that these comments can be directly addressed:
   a practical deployment scenario that requires arbitrary
 attributes.

I'm just strongly against supporting only one attribute (your PCI
group) for scheduling and management; that's really TOO limited.

A simple scenario is, I have 3 encryption cards:
Card 1 (vendor_id is V1, device_id=0xa)
Card 2 (vendor_id is V1, device_id=0xb)
Card 3 (vendor_id is V2, device_id=0xb)

I have two images. One image only supports Card 1 and another image
supports Cards 1/3 (or any other combination of the 3 card types). I don't
think only one attribute will meet such a requirement.
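
A toy illustration of that point (hypothetical code, not the proposed Nova implementation): selecting "Card 1 or Card 3" needs more than a single attribute value, for example a flavor expressed as a list of alternative attribute constraints:

    def device_matches(device, flavor):
        # A flavor here is a list of alternative attribute constraints; a
        # device matches if it satisfies any one alternative completely.
        return any(all(device.get(attr) == value for attr, value in spec.items())
                   for spec in flavor)

    card1 = {'vendor_id': 'V1', 'device_id': '0xa'}
    card2 = {'vendor_id': 'V1', 'device_id': '0xb'}
    card3 = {'vendor_id': 'V2', 'device_id': '0xb'}

    flavor_image_1 = [{'vendor_id': 'V1', 'device_id': '0xa'}]   # Card 1 only
    flavor_image_2 = [{'vendor_id': 'V1', 'device_id': '0xa'},   # Card 1 ...
                      {'vendor_id': 'V2', 'device_id': '0xb'}]   # ... or Card 3

    assert device_matches(card1, flavor_image_2)
    assert not device_matches(card2, flavor_image_2)
    assert device_matches(card3, flavor_image_2)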

As to arbitrary attributes versus a limited list of attributes, my opinion is
that, since there are so many types of PCI devices and so many potential PCI
device usages, supporting arbitrary attributes will make our effort more
flexible, provided we can push the implementation into the tree.

    detailed design on the following (that also takes into account the
  introduction of predefined attributes):
 * PCI stats report since the scheduler is stats based

I don't think there is much difference from the current implementation.

 * the scheduler in support of PCI flavors with arbitrary
 attributes and potential overlapping.

As Ian said, we need to make sure the pci_stats and the PCI flavor have the
same set of attributes, so I don't think there is much difference from the
current implementation.

   networking requirements to support multiple provider
 nets/physical
 nets

Can't the extra info resolve this issue? Can you elaborate on the issue?

Thanks
--jyh
 
 I guess that the above will become clear as the discussion goes on. And we
 also need to define the deliverables.
  
 Thanks,
 Robert 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-17 Thread Chris Friesen

On 01/17/2014 04:20 PM, Devananda van der Veen wrote:


tl;dr, We should not be recycling bare metal nodes between untrusted
tenants at this time. There's a broader discussion about firmware
security going on, which, I think, will take a while for the hardware
vendors to really address.


What can the hardware vendors do?  Has anyone proposed a meaningful 
solution for the firmware issue?


Given the number of devices (NIC, GPU, storage controllers, etc.) that 
could potentially have firmware update capabilities it's not clear to me 
how this could be reliably solved.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread Renat Akhmerov

On 17 Jan 2014, at 10:04, Jonathan LaCour jonathan-li...@cleverdevil.org 
wrote:

 pip install openstack


That would be awesome :)

Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] milestone-proposed branches

2014-01-17 Thread Robert Collins
On 18 January 2014 09:09, Clint Byrum cl...@fewbar.com wrote:
 tl;dr: You're right, it would be useful. Points on what is blocking it
 below:

 I'll address the bigger points here below, but for the record, I think
 setup-endpoints and register-endpoint are stable enough now that they
 should just be included with keystoneclient or keystone. Perhaps rewritten
 as subcommands to the keystone cli, but even as-is they would be useful
 in keystoneclient's bin dir IMO.

They aren't really - I had to edit them yesterday :). We need to
finish a full production deployment I think to properly assess that.
And... with tuskar coming along we may not need these tools at all :).


 If we are prepared to make basic release-to-release stability guarantees
 for everything in incubator (or kick the few things we aren't prepared
  to do that for out to a new incubator) then huzzah! Let's do what you
 suggest above. :)

 I just don't think we're there yet, and I'd rather see us fork off the
 things that are ready as they get to that point rather than try to make
  a giant push to freeze the whole thing. I'm afraid we'd leave users in a
  bad position if they expect the icehouse version of assert-user to still
  be there and keep working in Juno.

"A giant push to freeze the whole thing" is quite different from "I want
to maintain a stable release of what we have now" - if someone wants
to do that, I think we should focus on enabling them, not on creating
more work.

That said, I think stable releases of CD'd tooling is an odd concept
in itself, but that's a different discussion.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread Robert Collins
On 18 January 2014 06:42, John Griffith john.griff...@solidfire.com wrote:
 On Fri, Jan 17, 2014 at 1:15 AM, Robert Collins
 robe...@robertcollins.net wrote:

 Maybe this is going a bit sideways, but my point was that making a
 first step of getting periodic runs on vendor gear and publicly
 submitting those results would be a good starting point and a
 SIGNIFICANT improvement over what we have today.

 It seems to me that requiring every vendor to have a deployment in
 house dedicated and reserved 24/7 might be a tough order right out of
 the gate.  That being said, of course I'm willing and able to do that
 for my employer, but feedback from others hasn't been quite so
 amiable.

 The feedback here seems significant enough that maybe gating every
 change is the way to go though.  I'm certainly willing to opt in to
 that model and get things off the ground.  I do have a couple of
 concerns (number 3 being the most significant):

 1. I don't want ANY commit/patch waiting for a Vendor's infrastructure
 to run a test.  We would definitely need a timeout mechanism or
 something along those lines to ensure none of this disrupts the gate.

 2. Isolating this to changes in Cinder seems fine, the intent was
 mostly a compatibility / features check.  This takes it up a notch and
 allows us to detect when something breaks right away, which is
 certainly a good thing.

 3. Support and maintenance is a concern here.  We have a first-rate
 community that ALL pull together to make our gating and infrastructure
 work in OpenStack.  Even with that it's still hard for everybody to
 keep up due to the number of projects and simply the volume of patches that
 go in on a daily basis.  There's no way I could do my regular jobs
 that I'm already doing AND maintain my own fork/install of the
 OpenStack gating infrastructure.

 4. Despite all of the heavy weight corporation throwing resource after
 resource at OpenStack, keep in mind that it is an Open Source
 community still.  I don't want to do ANYTHING that would make it some
 unfriendly to folks who would like to commit.  Keep in mind that
 vendors here aren't necessarily all large corporations, or even all
 paid for proprietary products.  There are open source storage drivers
 for example in Cinder and they may or may not have any of the
 resources to make this happen but that doesn't mean they should not be
 allowed to have code in OpenStack.

 The fact is that the problem I see is that there are drivers/devices
 that flat out don't work and end users (heck even some vendors that
 choose not to test) don't know this until they've purchased a bunch of
 gear and tried to deploy their cloud.  What I was initially proposing
 here was just a more formal public and community representation of
 whether a device works as it's advertised or not.

 Please keep in mind that my proposal here was a first step sort of
 test case.  Rather than start with something HUGE like deploying the
 OpenStack CI in every vendors lab to test every commit (and Im sorry
 for those that don't agree but that does seem like a SIGNIFICANT
 undertaking), why not take incremental steps to make things better and
 learn as we go along?

Certainly - I totally agree that anything > nothing. I was asking
about your statement of not having enough infra to get a handle on
what would block things. As you know, tripleo is running up a
production quality test cloud to test tripleo, Ironic and once we get
everything in place - multinode gating jobs. We're *super* interested
in making the bar to increased validation as low as possible.

I broadly agree with your points 1 through 4, of course!

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-17 Thread Devananda van der Veen
On Fri, Jan 17, 2014 at 3:21 PM, Chris Friesen
chris.frie...@windriver.comwrote:

 On 01/17/2014 04:20 PM, Devananda van der Veen wrote:

  tl;dr, We should not be recycling bare metal nodes between untrusted
 tenants at this time. There's a broader discussion about firmware
 security going on, which, I think, will take a while for the hardware
 vendors to really address.


 What can the hardware vendors do?  Has anyone proposed a meaningful
 solution for the firmware issue?

 Given the number of devices (NIC, GPU, storage controllers, etc.) that
 could potentially have firmware update capabilities it's not clear to me
 how this could be reliably solved.

 Chris


Precisely.

That's what I mean by "there's a broader discussion". We can encourage
hardware vendors to take firmware security more seriously and add
out-of-band validation mechanisms to their devices. From my perspective,
the industry is moving in that direction already, though raising awareness
directly with your preferred vendors can't hurt ;)

-Deva
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread John Griffith
On Fri, Jan 17, 2014 at 6:24 PM, Robert Collins
robe...@robertcollins.net wrote:
 On 18 January 2014 06:42, John Griffith john.griff...@solidfire.com wrote:
 On Fri, Jan 17, 2014 at 1:15 AM, Robert Collins
 robe...@robertcollins.net wrote:

 Maybe this is going a bit sideways, but my point was that making a
 first step of getting periodic runs on vendor gear and publicly
 submitting those results would be a good starting point and a
 SIGNIFICANT improvement over what we have today.

 It seems to me that requiring every vendor to have a deployment in
 house dedicated and reserved 24/7 might be a tough order right out of
 the gate.  That being said, of course I'm willing and able to do that
 for my employer, but feedback from others hasn't been quite so
 amiable.

 The feedback here seems significant enough that maybe gating every
 change is the way to go though.  I'm certainly willing to opt in to
 that model and get things off the ground.  I do have a couple of
 concerns (number 3 being the most significant):

 1. I don't want ANY commit/patch waiting for a Vendor's infrastructure
 to run a test.  We would definitely need a timeout mechanism or
 something along those lines to ensure none of this disrupts the gate.

 2. Isolating this to changes in Cinder seems fine, the intent was
 mostly a compatibility / features check.  This takes it up a notch and
 allows us to detect when something breaks right away, which is
 certainly a good thing.

 3. Support and maintenance is a concern here.  We have a first-rate
 community that ALL pull together to make our gating and infrastructure
 work in OpenStack.  Even with that it's still hard for everybody to
 keep up due to the number of projects and simply the volume of patches that
 go in on a daily basis.  There's no way I could do my regular jobs
 that I'm already doing AND maintain my own fork/install of the
 OpenStack gating infrastructure.

 4. Despite all of the heavy weight corporation throwing resource after
 resource at OpenStack, keep in mind that it is an Open Source
 community still.  I don't want to do ANYTHING that would make it some
 unfriendly to folks who would like to commit.  Keep in mind that
 vendors here aren't necessarily all large corporations, or even all
 paid for proprietary products.  There are open source storage drivers
 for example in Cinder and they may or may not have any of the
 resources to make this happen but that doesn't mean they should not be
 allowed to have code in OpenStack.

 The fact is that the problem I see is that there are drivers/devices
 that flat out don't work and end users (heck even some vendors that
 choose not to test) don't know this until they've purchased a bunch of
 gear and tried to deploy their cloud.  What I was initially proposing
 here was just a more formal public and community representation of
 whether a device works as it's advertised or not.

 Please keep in mind that my proposal here was a first step sort of
 test case.  Rather than start with something HUGE like deploying the
 OpenStack CI in every vendors lab to test every commit (and Im sorry
 for those that don't agree but that does seem like a SIGNIFICANT
 undertaking), why not take incremental steps to make things better and
 learn as we go along?

 Certainly - I totally agree that anything > nothing. I was asking
 about your statement of not having enough infra to get a handle on
 what would block things. As you know, tripleo is running up a

Sorry, got carried away and didn't really answer your question about
resources clearly.  My point about resources was in terms of
man-power, dedicated hardware, networking and all of the things that
go along with spinning up tests on every commit and archiving the
results.  I would definitely like to do this, but first I'd like to
see something that every backend driver maintainer can do at least at
each milestone.

 production quality test cloud to test tripleo, Ironic and once we get
 everything in place - multinode gating jobs. We're *super* interested
 in making the bar to increased validation as low as possible.

We should chat in IRC about approaches here and see if we can align.
For the record, HP's resources are vastly different from those of, say, a small
start-up storage vendor or an open-source storage software stack.

By the way, maybe you can point me to what tripleo is doing, looking
in gerrit I see the jenkins gate noop and the docs job but that's
all I'm seeing?


 I broadly agree with your points 1 through 4, of course!

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Bottom line I appreciate your feedback and comments, it's generated
some new thoughts for me to ponder over the week-end on this subject.

Thanks,
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Diesel] Proposal for new project

2014-01-17 Thread Raymond, Rob
I would like to gauge interest in a new project named Diesel.

https://wiki.openstack.org/wiki/Diesel

If you are already familiar with Savanna, the best way to describe it is:
Savanna is to map reduce applications as Diesel is to web applications.

The mission of Diesel is to allow OpenStack clouds to run applications.
The cloud administrator can control the non-functional aspects, freeing up
the application developer to focus on their application and its
functionality.

In the spirit of Google App Engine, Heroku, Engine Yard and others, Diesel
runs web applications in the cloud. It can be used by cloud administrators
to define the application types that they support. They are also
responsible for defining through Diesel how these applications run on top
of their cloud infrastructure. Diesel will control the availability and
scalability of the web application deployment.

Please send me email if you would like to collaborate on this and I can
set up an IRC meeting.

Rob Raymond


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Availability of external testing logs

2014-01-17 Thread Collins, Sean
Yeah - it appears that even if you clear a -1 the countdown clock still keeps
ticking; one of my reviews[0] just expired this morning. I'll bring it
up at the IPv6 meeting to restore any patches in our topic that are currently
abandoned so that you can clear them.

[0] https://review.openstack.org/#/c/56381/9

Sean M. Collins


From: Torbjorn Tornkvist [kruska...@gmail.com]
Sent: Wednesday, January 15, 2014 6:40 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] Availability of external testing logs

On 2014-01-14 16:04, Collins, Sean wrote:
 Can we get the -1 from Tail-F cleared from this review?

 https://review.openstack.org/#/c/56184/17

I've been trying to do this but it fails.
Possibly because it is 'Abandoned' ?
See below:

$ ssh -p 29418 ncsopenstack gerrit review -m 'Clearing out wrong -1 vote '
  --verified=0 26a6c6
X11 forwarding request failed on channel 0
error: Change is closed
one or more approvals failed; review output above


Cheers, Tobbe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Diesel] Proposal for new project

2014-01-17 Thread Rajesh Ramchandani
Hi Rob - there seems to be overlap with project Solum. Can you please outline the
high-level differences between Diesel and Solum?

Raj

Sent from my iPad

 On Jan 18, 2014, at 9:06 AM, Raymond, Rob rob.raym...@hp.com wrote:
 
 I would like to gauge interest in a new project named Diesel.
 
 https://wiki.openstack.org/wiki/Diesel
 
 If you are already familiar with Savanna, the best way to describe it is:
 Savanna is to map reduce applications as Diesel is to web applications.
 
 The mission of Diesel is to allow OpenStack clouds to run applications.
 The cloud administrator can control the non functional aspects, freeing up
 the application developer to focus on their application and its
 functionality.
 
 In the spirit of Google App Engine, Heroku, Engine Yard and others, Diesel
 runs web applications in the cloud. It can be used by cloud administrators
 to define the application types that they support. They are also
 responsible for defining through Diesel how these applications run on top
 of their cloud infrastructure. Diesel will control the availability and
 scalability of the web application deployment.
 
 Please send me email if you would like to collaborate on this and I can
 set up an IRC meeting.
 
 Rob Raymond
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread John Utz
Outlook Web MUA, forgive the top post. :-(

While a single import line that brings in all the good stuff in one shot is 
very convenient for the creation of an application, it would muddy the security 
picture *substantially* for the exact type of developer/customer that you would 
be targeting with this sort of syntactic sugar.

As Jesse alludes to below, the expanding tree of dependencies would be masked 
by the aggregation.

So, most likely, they would be pulling in vast numbers of things that they 
don't require to get their simple app done (there's an idea! an eclipse plugin 
that helpfully points out all the things that you are *not* using and offers to 
redo your imports for you :-) ).

As a result, when a security defect is published concerning one of those hidden 
dependencies, they will not have any reason to think that it affects them.

just my us$0.02;

johnu 

From: Jesse Noller [jesse.nol...@rackspace.com]
Sent: Thursday, January 16, 2014 5:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] a common client library

On Jan 16, 2014, at 4:59 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

On 16 Jan 2014, at 13:06, Jesse Noller jesse.nol...@rackspace.com wrote:

Since it’s pretty easy to get lost among all the opinions I’d like to 
clarify/ask a couple of things:


  *   Keeping all the clients physically separate/combining them in to a single 
library. Two things here:
 *   In case of combining them, what exact project are we considering? If 
this list is limited to core projects like nova and keystone what policy could 
we have for other projects to join this list? (Incubation, graduation, 
something else?)
 *   In terms of granularity and easiness of development I’m for keeping 
them separate but have them use the same boilerplate code, basically we need a 
OpenStack Rest Client Framework which is flexible enough to address all the 
needs in an abstract domain agnostic manner. I would assume that combining them 
would be an additional organizational burden that every stakeholder would have 
to deal with.

Keeping them separate is awesome for *us* but really, really, really sucks for 
users trying to use the system.

You may be right, but I'm not sure that adding another line into requirements.txt is 
a huge loss of usability.


It is when that 1 dependency pulls in 6 others that pull in 10 more - every 
little barrier or potential failure, from the inability to make a static binary 
to how each tool acts differently, is a paper cut of frustration for an end user.

Most of the time the clients don't even install properly because of 
dependencies on setuptools plugins and other things. For developers (as I've 
said) the story is worse: you have potentially 22+ individual packages and 
their dependencies to deal with if you want to use a complete openstack 
install from your code.

So it doesn't boil down to just 1 dependency: it's a long laundry list of 
things that make consumers' lives more difficult and painful.

This doesn't even touch on the fact that there aren't blessed SDKs or tools pointing 
users to consume openstack in their preferred programming language.

Shipping an API isn't enough - but it can be fixed easily enough.

Renat Akhmerov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Diesel] Proposal for new project

2014-01-17 Thread Adrian Otto
Please discuss this with the Solum team before proceeding. This sounds like a 
complete overlap with the app deployment portion of Solum. It would make much 
more sense to combine efforts than to run this as two projects.

--
Adrian


 Original message 
From: Rajesh Ramchandani
Date:01/17/2014 8:04 PM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Diesel] Proposal for new project

Hi Rob - there seems to be overlap with project Solum. Can you please outline the 
high-level differences between Diesel and Solum?

Raj

Sent from my iPad

 On Jan 18, 2014, at 9:06 AM, Raymond, Rob rob.raym...@hp.com wrote:

 I would like to gauge interest in a new project named Diesel.

 https://wiki.openstack.org/wiki/Diesel

 If you are already familiar with Savanna, the best way to describe it is:
 Savanna is to map reduce applications as Diesel is to web applications.

 The mission of Diesel is to allow OpenStack clouds to run applications.
 The cloud administrator can control the non functional aspects, freeing up
 the application developer to focus on their application and its
 functionality.

 In the spirit of Google App Engine, Heroku, Engine Yard and others, Diesel
 runs web applications in the cloud. It can be used by cloud administrators
 to define the application types that they support. They are also
 responsible for defining through Diesel how these applications run on top
 of their cloud infrastructure. Diesel will control the availability and
 scalability of the web application deployment.

 Please send me email if you would like to collaborate on this and I can
 set up an IRC meeting.

 Rob Raymond


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Diesel] Proposal for new project

2014-01-17 Thread Raymond, Rob

Hi Raj

As I see it, Solum is a set of utilities aimed at developers using
OpenStack clouds, but it will not be part of OpenStack proper, while
Diesel is meant to be a service provided by an OpenStack cloud (and at
some point to become part of OpenStack itself). It defines a contract
and division of responsibility between developer and cloud.

Rob

 Original message 
From: Rajesh Ramchandani
Date:01/17/2014 8:04 PM (GMT-08:00)
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Diesel] Proposal for new project

Hi Rob - there seems to be overlap with project Solum. Can you please outline
the high-level differences between Diesel and Solum?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Disk Eraser

2014-01-17 Thread Robert Collins
On 18 January 2014 12:21, Chris Friesen chris.frie...@windriver.com wrote:
 On 01/17/2014 04:20 PM, Devananda van der Veen wrote:

 tl;dr, We should not be recycling bare metal nodes between untrusted
 tenants at this time. There's a broader discussion about firmware
 security going on, which, I think, will take a while for the hardware
 vendors to really address.


 What can the hardware vendors do?  Has anyone proposed a meaningful solution
 for the firmware issue?


So historically, for 99% of users of new machines, it's been
considered a super low risk, right - they don't boot off of unknown
devices, and they aren't reusing machines across different users.
Second-hand users had risks, but vendors aren't designing for the
second-hand purchaser.

However, more and more viruses are targeting lower and lower parts of
the boot stack (see why UEFI is so important), and there are now
multiple confirmations of hostile payloads that can live in hard disk
drive microcontrollers - and some of the NSA payloads look like they
inhabit system management BIOSes. It's become clear that this is a
genuine risk for all users: new users from viruses and other malware,
second-hand users from the original user.

So, industry-wise, I think over the next few years folk will finish
auditing their supply chains to determine which devices are at risk and
then start implementing defenses. The basic problem, though, is that our
entire machine architecture assumes the rest of the machine is
trusted (e.g. devices can DMA anything they want... - one previous
class of attack was compromised FireWire devices: plug one in, and it
would disable your screen saver without password entry.)

 Given the number of devices (NIC, GPU, storage controllers, etc.) that could
 potentially have firmware update capabilities it's not clear to me how this
 could be reliably solved.

Slowly and carefully :)

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread Robert Collins
On 17 January 2014 06:39, Mark Washenberger
mark.washenber...@markwash.net wrote:


 There's a few more items here that are needed for glance to be able to work
 with requests (which we really really want).
 1) Support for 100-expect-continue is probably going to be required in
 glance as well as swift

Is this currently supported? If not, frankly, I wouldn't bother. The
semantics in HTTP/2 are much better - I'd aim straight at that.

 2) Support for turning off tls/ssl compression (our streams are already
 compressed)

Out of interest - what's the overhead of running TLS compression
against compressed data? Is it really noticeable?
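
For illustration, a minimal client-side sketch of turning TLS compression off in
Python - assuming an ssl module new enough to expose OP_NO_COMPRESSION; the host,
port and path below are made up, not glance's real endpoints:

    # Sketch only: opt out of zlib at the TLS layer so already-compressed
    # image data isn't recompressed on the wire.
    import ssl
    from http.client import HTTPSConnection

    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ctx.options |= ssl.OP_NO_COMPRESSION

    conn = HTTPSConnection('glance.example.com', 9292, context=ctx)  # hypothetical endpoint
    conn.request('GET', '/v2/images', headers={'X-Auth-Token': 'TOKEN'})
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))

Timing a large image transfer with and without that option set would answer the
overhead question above.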

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread Jamie Lennox
I can't see any reason why all of these needs can't be met. 

We can finally take the openstack pypi namespace, move keystoneclient -> 
openstack.keystone, and do similar for the other projects. Have them all based upon 
openstack.base and probably an openstack.transport for transport.

For the all-in-one users we can then just have openstack.client, which depends 
on all of the openstack.x projects. This would satisfy the requirement of 
keeping projects separate, but having the one entry point for newer users. 
Similar to the OSC project (which could actually rely on the new all-in-one).

This would also satisfy a lot of the clients who I know are looking to 
move to a version 2 and break compatibility with some of the crap from the 
early days.

I think what is most important here is deciding what we want from our clients 
and discussing a common base that we are happy to support - not just renaming 
the existing ones.

(I don't buy the problem with large numbers of dependencies; if you have a 
meta-package you just have one line in requirements and pip will figure the 
rest out.)
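
To make the meta-package idea concrete, a hypothetical setup.py could be as
small as the sketch below - the distribution names are invented here, not an
agreed layout:

    # Hypothetical "openstack.client" meta-package: each service client ships
    # as its own distribution but shares a pkg_resources-style "openstack"
    # namespace package.
    from setuptools import setup

    setup(
        name='openstack.client',
        version='0.1.0',
        namespace_packages=['openstack'],
        packages=['openstack', 'openstack.client'],
        install_requires=[
            'openstack.keystone',   # invented names for the per-service clients
            'openstack.nova',
            'openstack.glance',
        ],
    )

A user's requirements.txt then really is the single line openstack.client, and
pip resolves the rest.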

Jamie

- Original Message -
 From: Jonathan LaCour jonathan-li...@cleverdevil.org
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Saturday, 18 January, 2014 4:00:58 AM
 Subject: Re: [openstack-dev] a common client library
 
 On Thu, Jan 16, 2014 at 1:23 PM, Donald Stufft  don...@stufft.io  wrote:
 
 
 
 
 On Jan 16, 2014, at 4:06 PM, Jesse Noller  jesse.nol...@rackspace.com 
 wrote:
 
 
 
 
 
 On Jan 16, 2014, at 2:22 PM, Renat Akhmerov  rakhme...@mirantis.com  wrote:
 
 
 
 
 Since it’s pretty easy to get lost among all the opinions I’d like to
 clarify/ask a couple of things:
 
 
 
 * Keeping all the clients physically separate/combining them in to a
 single library. Two things here:
 * In case of combining them, what exact project are we considering?
 If this list is limited to core projects like nova and keystone what
 policy could we have for other projects to join this list?
 (Incubation, graduation, something else?)
 * In terms of granularity and easiness of development I’m for keeping
 them separate but have them use the same boilerplate code, basically
 we need a OpenStack Rest Client Framework which is flexible enough
 to address all the needs in an abstract domain agnostic manner. I
 would assume that combining them would be an additional
 organizational burden that every stakeholder would have to deal
 with.
 
 Keeping them separate is awesome for *us* but really, really, really sucks
 for users trying to use the system.
 
 I agree. Keeping them separate trades user usability for developer usability,
 I think user usability is a better thing to strive for.
 100% agree with this. In order for OpenStack to be its most successful, I
 believe firmly that a focus on end-users and deployers needs to take the
 forefront. That means making OpenStack clouds as easy to consume/leverage as
 possible for users and tool builders, and simplifying/streamlining as much
 as possible.
 
 I think that a single, common client project, based upon package namespaces,
 with a unified, cohesive feel is a big step in this direction.
 
 --
 Jonathan LaCour
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread Robert Collins
On 17 January 2014 06:57, Mark Washenberger
mark.washenber...@markwash.net wrote:

 Just throwing this out there because it seems relevant to client design.

 As we've been looking at porting clients to using v2 of the Images API, it
 seems more and more to me that including the *server* version in the main
 import path is a real obstacle.

 IMO any future client libs should write library interfaces based on the
 peculiarities of user needs, not based on the vagaries of the server
 version. So as a user of this library I would do something like:

   1 from openstack.api import images
   2 client = images.make_me_a_client(auth_url, etcetera)  # all version negotiation is happening here
   3 client.list_images()  # works more or less same no matter who I'm talking to

 Now, there would still likely be hidden implementation code that is
 different per server version and which is instantiated in line 2 above, and
 maybe that's the library path stuff you are talking about.

That design seems guaranteed to behave somewhat poorly (e.g. fail to
upgrade) when servers are upgraded - for short-lived processes like
'nova boot' that doesn't matter, but for software running in a daemon
- e.g. nova-api talking to neutron - that seems much more likely to
be a problem.

I think the pseudocode is fine, but the client shouldn't be a concrete
version-locked client, rather a proxy object that can revalidate the
version every {sensible time period} and/or observe HTTP headers to
detect when upgrades are possible (or downgrades are required).
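
Something along these lines, purely as a sketch - none of these names exist
today, and make_me_a_client is the hypothetical constructor from the pseudocode
in line 2 above:

    # Illustrative only: a thin proxy that redoes version negotiation
    # periodically instead of locking the version in at construction time.
    import time

    class RevalidatingClient(object):
        def __init__(self, make_client, auth_url, recheck_every=300):
            self._make_client = make_client   # e.g. images.make_me_a_client
            self._auth_url = auth_url
            self._recheck_every = recheck_every
            self._client = None
            self._checked_at = 0.0

        def _current(self):
            now = time.time()
            if self._client is None or now - self._checked_at > self._recheck_every:
                self._client = self._make_client(self._auth_url)  # renegotiate
                self._checked_at = now
            return self._client

        def __getattr__(self, name):
            return getattr(self._current(), name)

A daemon keeps calling list_images() on the proxy and picks up a server upgrade
within recheck_every seconds, without a restart.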

-Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread Robert Collins
On 17 January 2014 08:03, Alexei Kornienko alexei.kornie...@gmail.com wrote:
 Hello Joe,

 2) Another option would be to follow a waterfall process and create a solid
 library interface before including it in all client projects. However, such
 an approach can take an unknown amount of time and can easily fail during
 the integration stage because requirements change or for some other reason.

 Please let me know what you think.

Given how fast OpenStack moves, I think waterfall is a non-starter.
I'd do a library from the start, and just do backwards compat - it's
really not that hard  :)
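
To make "backwards compat" concrete, the kind of shim I mean is just the
following - the function names are invented for illustration:

    # Invented example: the old entry point stays alive as a thin, warning
    # wrapper around the new one, so existing callers keep working.
    import warnings

    def create_client(auth_url, **kwargs):
        """New, supported entry point (construction details omitted)."""

    def make_client(auth_url, **kwargs):
        """Deprecated alias kept for old callers."""
        warnings.warn('make_client() is deprecated; use create_client()',
                      DeprecationWarning, stacklevel=2)
        return create_client(auth_url, **kwargs)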

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] a common client library

2014-01-17 Thread Robert Collins
On 17 January 2014 09:22, Renat Akhmerov rakhme...@mirantis.com wrote:
 Since it’s pretty easy to get lost among all the opinions I’d like to
 clarify/ask a couple of things:

 Keeping all the clients physically separate/combining them in to a single
 library. Two things here:

 In case of combining them, what exact project are we considering? If this
 list is limited to core projects like nova and keystone what policy could we
 have for other projects to join this list? (Incubation, graduation,
 something else?)
 In terms of granularity and easiness of development I’m for keeping them
 separate but have them use the same boilerplate code, basically we need a
 OpenStack Rest Client Framework which is flexible enough to address all the
 needs in an abstract domain agnostic manner. I would assume that combining
 them would be an additional organizational burden that every stakeholder
 would have to deal with.

 Has anyone ever considered an idea of generating a fully functional REST
 client automatically based on an API specification (WADL could be used for
 that)? Not sure how convenient it would be, it really depends on a
 particular implementation, but as an idea it could be at least thought of.
 Sounds a little bit crazy though, I recognize it :).

Launchpadlib, which builds on wadllib, did *exactly* that. It worked
fairly well, with the one caveat that it fell into the ORM trap - just-in-time
lookups for everything, with crippling roundtrips.
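
For anyone who hasn't hit it, the trap looks roughly like this - made-up code,
not launchpadlib's actual implementation:

    # Made-up illustration of the "ORM trap": each attribute access on a lazily
    # loaded resource silently becomes an HTTP round trip, so touching one
    # field on each of N objects costs N extra requests.
    class LazyResource(object):
        def __init__(self, session, url):
            self._session = session
            self._url = url
            self._data = None

        def __getattr__(self, name):
            if self._data is None:
                self._data = self._session.get(self._url).json()  # just-in-time fetch
            return self._data[name]

    # servers = client.servers.list()      # 1 request, returning bare links
    # names = [s.name for s in servers]    # + N requests, one per server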

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] Third party testing

2014-01-17 Thread Robert Collins
On 18 January 2014 16:31, John Griffith john.griff...@solidfire.com wrote:
 On Fri, Jan 17, 2014 at 6:24 PM, Robert Collins
 robe...@robertcollins.net wrote:

 Certainly - I totally agree that anything > nothing. I was asking
 about your statement of not having enough infra to get a handle on
 what would block things. As you know, tripleo is running up a

 Sorry, got carried away and didn't really answer your question about
 resources clearly.

LOL, np.

  My point about resources was in terms of
 man-power, dedicated hardware, networking and all of the things that
 go along with spinning up tests on every commit and archiving the
 results.  I would definitely like to do this, but first I'd like to
 see something that every backend driver maintainer can do at least at
 each milestone.

 production quality test cloud to test tripleo, Ironic and once we get
 everything in place - multinode gating jobs. We're *super* interested
 in making the bar to increased validation as low as possible.

 We should chat in IRC about approaches here and see if we can align.
 For the record HP's resources are vastly different than say a small
 start up storage vendor or an open-source storage software stack.

Yeah, I'm aware :/. For open source stacks, my hope is that the
contributed hardware from HP, Red Hat etc. will permit us to test open
source stacks in the set of permutations we end up testing.

 By the way, maybe you can point me to what tripleo is doing, looking
 in gerrit I see the jenkins gate noop and the docs job but that's
 all I'm seeing?

https://wiki.openstack.org/wiki/TripleO/TripleOCloud and
https://wiki.openstack.org/wiki/TripleO/TripleOCloud/Regions
and https://etherpad.openstack.org/p/tripleo-test-cluster

Cheers,
Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev