[openstack-dev] [Nova] Question about thread safety of key-pair and security rules quota

2014-07-24 Thread Chen CH Ji

According to bug [1], there are some possibilities that concurrent
operations on keypairs/security group rules can exceed the quota.
I found that we have 3 kinds of resources in quotas.py:
ReservableResource/AbsoluteResource/CountableResource

I am curious about CountableResource because it can't be thread safe due to
its logic:

    count = QUOTAS.count(context, 'security_group_rules', id)
    try:
        projected = count + len(vals)
        QUOTAS.limit_check(context, security_group_rules=projected)
    except exception.OverQuota:
        ...  # two concurrent callers can both pass limit_check() here

Was it designed on purpose to be different from ReservableResource? If we set
it to ReservableResource just like RAM/CPU, what kind of side effects might
that lead to?

Also, is it possible to consider a solution like 'hold a write lock in the db
layer, check the count of the resource and raise an exception if it exceeds
the quota'?
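For illustration only, here is a minimal sketch of that idea, assuming a
SQLAlchemy session and made-up model/helper names (this is not the actual
Nova DB API):

# Hypothetical: count and check inside one transaction, serialized by a
# row lock on the parent security group. All names are illustrative.
def create_rules_checked(session, group_id, new_rules, limit):
    with session.begin():
        # Lock the parent row so concurrent creators serialize here.
        (session.query(SecurityGroup)
                .filter_by(id=group_id)
                .with_for_update()
                .one())
        count = (session.query(SecurityGroupRule)
                        .filter_by(parent_group_id=group_id)
                        .count())
        if count + len(new_rules) > limit:
            raise ValueError('security_group_rules quota exceeded')
        session.add_all(new_rules)

The lock turns the check-then-insert into a serialized critical section, at
the cost of contention on the parent row.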

Thanks


[1] https://bugs.launchpad.net/nova/+bug/1301532

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Denis Makogon
On Thu, Jul 24, 2014 at 6:10 AM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load name known
 options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section names in the
 configuration file), but want to read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

Then I want to automatically load group1.key1 and group1.key2,
 without knowing the name of group1 first.


That's actually a good question, but it seems a bit complicated. You
have to tell the option loader the type of each configuration item.
I was also thinking about this type of feature, and here is what came to
my mind.

I find the JSON or YAML formats very useful for describing options. Here's a
simple example of a configuration file that describes dynamic configuration.

options.yaml

- groups:
  - DEFAULT
  - NOT_DEFAULT
  - ANOTHER_ONE

- list:
  - option_a:
- group: DEFAULT
- value: [a, b, c]
- description: description

- dict:
  - option_b:
- group: DEFAULT
- value: {a: b, c: d}
- description: description

and so on ...

Explanation:

`- groups` - attribute that defines which groups oslo.config should register.
`- list` - option type.
`- option_b` - option descriptor; each descriptor is a dict (string: list)
where the key is an option name and the attributes inside it describe which
group it belongs to, its value, and its description.

oslo.config would just need to parse the YAML file and register all options;
in the end you'd receive a set of registered options per group.
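For illustration, a rough sketch of what that could look like (this is not an
existing oslo.config feature; the YAML layout is simplified here to
{group: {option_name: {type, value, description}}} and all names are
assumptions):

import yaml
from oslo.config import cfg

# Map the declared option type onto the oslo.config option classes.
OPT_TYPES = {
    'str': cfg.StrOpt,
    'list': cfg.ListOpt,
    'dict': cfg.DictOpt,
}

def register_from_yaml(conf, path):
    # Parse the YAML description and register every option at runtime.
    with open(path) as f:
        spec = yaml.safe_load(f)
    for group, options in spec.items():
        target = None if group == 'DEFAULT' else group
        for name, attrs in options.items():
            opt_cls = OPT_TYPES[attrs.get('type', 'str')]
            opt = opt_cls(name,
                          default=attrs.get('value'),
                          help=attrs.get('description'))
            conf.register_opt(opt, group=target)

# Usage: register_from_yaml(cfg.CONF, 'options.yaml')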

I think this is the best variant of dynamic option loading, but I'm open to
discussion.


Best regards,
Denis Makogon

 Thanks a lot!

 --
 Best wishes!
 Baohua

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] DB layer - Specific exceptions when a resource can't be found. Do they have any added value?

2014-07-24 Thread Avishay Balderman
Hi
In the LBaaS DB layer there is a utility method (‘_get_resource’) that takes an
entity type and an entity id and fetches the entity from the DB.
If the entity can't be found in the DB, the method throws a specific exception.

Example:
If we were looking for a Pool and it was not found in the DB -- throw PoolNotFound.
If we were looking for a HealthMonitor and it was not found in the DB -- throw
HealthMonitorNotFound.
etc…

I can see very little value (if any…) in having those specific exceptions.

Why not have a base “NotFound” exception that takes the entity type and its id?
This exception would tell the method caller the entity type and the id.
Isn't that good enough?
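For illustration, a sketch of what such a generic exception could look like
(not actual Neutron code):

class NotFound(Exception):
    """Raised when an LBaaS entity of any type cannot be found in the DB."""

    def __init__(self, resource_type, resource_id):
        self.resource_type = resource_type
        self.resource_id = resource_id
        super(NotFound, self).__init__(
            "%s %s could not be found" % (resource_type, resource_id))

# _get_resource() could then raise NotFound(model.__name__, id) for every
# entity type instead of mapping each model to a dedicated subclass.

The caller still gets the type and the id, without one subclass per model.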

On the other hand, this code has to be mainlined and reviewed… So not only does
it add little value, the dev team also has to maintain it.

Here is the code (v1) 
https://github.com/openstack/neutron/blob/master/neutron/db/loadbalancer/loadbalancer_db.py#L210
I prefer not to have it in v2

Thanks

Avishay







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [InstanceGroup] Why does the instance group API extension not support setting metadata?

2014-07-24 Thread Jay Lau
Hi,

I see that the instance_group object already supports instance group
metadata, so why do we filter out metadata in the instance group API extension?
Can we enable it?

-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]resize

2014-07-24 Thread fdsafdsafd
In resize, we convert the disk and peel off its backing file. Should we check
whether we are on shared storage? If we are on shared storage, for example
NFS, then we can use the image in _base as the backing file, and the time
cost of the resize will be lower.


The processing is at line 5132 of
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py




Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]resize

2014-07-24 Thread Tian, Shuangtai
Don't we already do that here?
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5156

From: fdsafdsafd [mailto:jaze...@163.com]
Sent: Thursday, July 24, 2014 4:30 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova]resize

In resize, we convert the disk and peel off its backing file. Should we check
whether we are on shared storage? If we are on shared storage, for example
NFS, then we can use the image in _base as the backing file, and the time
cost of the resize will be lower.

The processing is at line 5132 of
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py


Thanks

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova]resize

2014-07-24 Thread fdsafdsafd

No.
Before L5156, we convert it from qcow2 to qcow2, which strips the backing
file.
I think we should write it like this here:
 
if info['type'] == 'qcow2' and info['backing_file']:
    if shared_storage:
        utils.execute('cp', from_path, img_path)
    else:
        tmp_path = from_path + '_rbase'
        # merge backing file
        utils.execute('qemu-img', 'convert', '-f', 'qcow2',
                      '-O', 'qcow2', from_path, tmp_path)
        libvirt_utils.copy_image(tmp_path, img_path, host=dest)
        utils.execute('rm', '-f', tmp_path)
else:  # raw or qcow2 with no backing file
    libvirt_utils.copy_image(from_path, img_path, host=dest)



At 2014-07-24 05:02:39, Tian, Shuangtai shuangtai.t...@intel.com wrote:

Don't we already do that here?
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5156

From: fdsafdsafd [mailto:jaze...@163.com]
Sent: Thursday, July 24, 2014 4:30 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova]resize

In resize, we convert the disk and peel off its backing file. Should we check
whether we are on shared storage? If we are on shared storage, for example
NFS, then we can use the image in _base as the backing file, and the time
cost of the resize will be lower.

The processing is at line 5132 of
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py

Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Neutron integration test job

2014-07-24 Thread Nikhil Manchanda

 On Wed, Jul 23, 2014 at 7:28 AM, Denis Makogon dmako...@mirantis.com wrote:
 [...]

 Add Neutron-based configuration for DevStack to let folks try it

This makes sense to tackle now that the neutron integration pieces have
merged in Trove (yahoo!).

However, it looks like the changes you propose in your DevStack patchset
[1] have been copied directly from the trove-integration scripts at
[2]. I have two primary concerns with this:

a. Most of these values are only required for the trove functional
tests to pass -- they aren't required for a user install of trove with
Neutron. For such values, the trove-integration scripts seem like a
better place for this configuration.

b. Since the trove functional tests run based on the trove-integration
scripts, this means that if this change is merged, this configuration
code will run twice: once in devstack, and once again as part of the
test-init script from trove-integration.

[1] https://review.openstack.org/#/c/108966
[2] 
https://github.com/openstack/trove-integration/blob/master/scripts/redstack#L406-427



 Implementing/providing a new type of test job that will run all Trove tests
 with Neutron enabled on a regular basis, to verify that all our networking
 preparations for instances are fine.

 The last thing is the most interesting one, and I'd like to discuss it with all
 of you, folks.
 So, I've written an initial job template taking into account the specific
 configuration required by DevStack and Trove-integration, see [4], and I'd
 like to receive all possible feedback as soon as possible.


So it looks like the test job you propose [3] is based on a current
experimental job template: gate-trove-functional-dsvm-{datastore}
[4]. Since most of it is an exact copy (except for the
NEUTRON_ENABLED bit), I'd suggest working that in as a parameter to the
current job template instead of duplicating the exact same code as part
of another job.

[3] https://gist.github.com/denismakogon/76d9bd3181781097c39b
[4] 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/trove.yaml#L30-63


Thanks,
Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Chmouel Boudjnah
Hello,

Thanks for writing this summary. I like all those ideas, and thanks for working
hard on fixing this.

   * For all non gold standard configurations, we'll dedicate a part of
 our infrastructure to running them in a continuous background loop,
 as well as making these configs available as experimental jobs. The
 idea here is that we'll actually be able to provide more
 configurations that are operating in a more traditional CI (post
 merge) context. People that are interested in keeping these bits
 functional can monitor those jobs and help with fixes when needed.
 The experimental jobs mean that if developers are concerned about
 the effect of a particular change on one of these configs, it's easy
 to request a pre-merge test run.  In the near term we might imagine
 this would allow for things like ceph, mongodb, docker, and possibly
 very new libvirt to be validated in some way upstream.

What about external CI? Would external CI need to be post-merge or
stay as is? What would be the difference between external CI
plugging into review changes and post-merge CI?

Chmouel

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vhost-scsi support in Nova

2014-07-24 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 10:32:44PM -0700, Nicholas A. Bellinger wrote:
 *) vhost-scsi doesn't support migration
 
 Since its initial merge in QEMU v1.5, vhost-scsi has a migration blocker
 set.  This is primarily due to requiring some external orchestration in
 order to setup the necessary vhost-scsi endpoints on the migration
 destination to match what's running on the migration source.
 
 Here are a couple of points that Stefan detailed some time ago about what's
 involved for properly supporting live migration with vhost-scsi:
 
 (1) vhost-scsi needs to tell QEMU when it dirties memory pages, either by
 DMAing to guest memory buffers or by modifying the virtio vring (which also
 lives in guest memory).  This should be straightforward since the
 infrastructure is already present in vhost (it's called the log) and used
 by drivers/vhost/net.c.
 
 (2) The harder part is seamless target handover to the destination host.
 vhost-scsi needs to serialize any SCSI target state from the source machine
 and load it on the destination machine.  We could be in the middle of
 emulating a SCSI command.
 
 An obvious solution is to only support active-passive or active-active HA
 setups where tcm already knows how to fail over.  This typically requires
 shared storage and maybe some communication for the clustering mechanism.
 There are more sophisticated approaches, so this straightforward one is just
 an example.
 
 That said, we do intend to support live migration for vhost-scsi using
 iSCSI/iSER/FC shared storage.
 
 *) vhost-scsi doesn't support qcow2
 
 Given all other cinder drivers do not use QEMU qcow2 to access storage
 blocks, with the exception of the Netapp and Gluster driver, this argument
 is not particularly relevant here.
 
 However, this doesn't mean that vhost-scsi (and target-core itself) cannot
 support qcow2 images.  There is currently an effort to add a userspace
 backend driver for the upstream target (tcm_core_user [3]), that will allow
 for supporting various disk formats in userspace.
 
 The important part for vhost-scsi is that regardless of what type of target
 backend driver is put behind the fabric LUNs (raw block devices using
 IBLOCK, qcow2 images using target_core_user, etc) the changes required in
 Nova and libvirt to support vhost-scsi remain the same.  They do not change
 based on the backend driver.
 
 *) vhost-scsi is not intended for production
 
 vhost-scsi has been included in the upstream kernel since the v3.6 release, and
 in QEMU since v1.5.  vhost-scsi runs unmodified out of the box on a
 number of popular distributions including Fedora, Ubuntu, and OpenSuse.  It
 also works as a QEMU boot device with Seabios, and even with the Windows
 virtio-scsi mini-port driver.
 
 There is at least one vendor who has already posted libvirt patches to
 support vhost-scsi, so vhost-scsi is already being pushed beyond a debugging
 and development tool.
 
 For instance, here are a few specific use cases where vhost-scsi is
 currently the only option for virtio-scsi guests:
 
   - Low (sub 100 usec) latencies for AIO reads/writes with small iodepth
 workloads
   - 1M+ small-block IOPS workloads at low CPU utilization with large
 iodepth workloads.
   - End-to-end data integrity using T10 protection information (DIF)

IIUC, there is also missing support for block jobs like drive-mirror
which is needed by Nova.

From a functionality POV, migration and drive-mirror support are the two
core roadblocks to including vhost-scsi in Nova (as well as libvirt
support for it, of course). Realistically it doesn't sound like these
are likely to be solved soon enough to give us confidence in taking
this for the Juno release cycle.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Hide CI comments in Gerrit

2014-07-24 Thread Mike Kolesnik
Great script!

I have a fork that I made and improved it a bit:
https://gist.github.com/mkolesni/92076378d45c7b5e692b

This fork supports:
1. The button/link is integrated nicely into the gerrit UI (it appears in the
comment titles, just like the other ones).
2. Auto-hide hides CI comments by default (can be turned off).
3. Regex-based bot detection, which requires a shorter list of unique
bot names and less maintenance of the script.
4. oVirt support (for those interested).

Regards,
Mike

- Original Message -
 Hi,
 
 I created a small userscript that allows you to hide CI comments in Gerrit.
 That way you can read only comments written by humans and hide everything
 else. I’ve been struggling for a long time to follow discussions on changes
 with many patch sets because of the CI noise. So I came up with this
 userscript:
 
 https://gist.github.com/rgerganov/35382752557cb975354a
 
 It adds a “Toggle CI” button at the bottom of the page that hides/shows CI
 comments. Right now it is configured for Nova CIs, as I contribute mostly
 there, but you can easily make it work for other projects as well. It
 supports both the “old” and “new” screens that we have.
 
 How to install on Chrome: open chrome://extensions and drag & drop the script
 there
 How to install on Firefox: install Greasemonkey first and then open the
 script
 
 Known issues:
  - you may need to reload the page to get the new button
  - I tried to add the button somewhere close to the collapse/expand links but
  it didn’t work for some reason
 
 Hope you will find it useful. Any feedback is welcome :)
 
 Thanks,
 Rado
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Sean Dague
On 07/24/2014 06:06 AM, Chmouel Boudjnah wrote:
 Hello,
 
 Thanks for writing this summary. I like all those ideas, and thanks for working
 hard on fixing this.
 
   * For all non gold standard configurations, we'll dedicate a part of
 our infrastructure to running them in a continuous background loop,
 as well as making these configs available as experimental jobs. The
 idea here is that we'll actually be able to provide more
 configurations that are operating in a more traditional CI (post
 merge) context. People that are interested in keeping these bits
 functional can monitor those jobs and help with fixes when needed.
 The experimental jobs mean that if developers are concerned about
 the effect of a particular change on one of these configs, it's easy
 to request a pre-merge test run.  In the near term we might imagine
 this would allow for things like ceph, mongodb, docker, and possibly
 very new libvirt to be validated in some way upstream.
 
 What about external CI? Would external CI need to be post-merge or
 stay as is? What would be the difference between external CI
 plugging into review changes and post-merge CI?

External CI is *really* supposed to be for things that Infrastructure
can't or won't run (for technical or policy reasons). VMWare isn't open
source, so that would always need to be outside of infra. Xen is
something for which technical challenges remain to get it working in
infra, but I think everyone would like to see it there eventually.

Overall capacity and randomness issues mean we can't do all these
configs in a pre-merge context. But moving to a fixed-capacity post-merge
world means we could create a ton of test data for these
configurations.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Approval Deadline (SAD) has passed, next steps

2014-07-24 Thread Livnat Peer
On 07/21/2014 04:16 PM, Kyle Mestery wrote:
 Hi all!
 
 A quick note that SAD has passed. We briskly approved a pile of BPs
 over the weekend, most of them vendor related as low priority, best
 effort attempts for Juno-3. At this point, we're hugely oversubscribed
 for Juno-3, so it's unlikely we'll make exceptions for things into
 Juno-3 now.
 
 I don't plan to open a Kilo directory in the specs repository quite
 yet. I'd like to first let things settle down a bit with Juno-3 before
 going there. Once I do, specs which were not approved should be moved
 to that directory where they can be reviewed with the idea they are
 targeting Kilo instead of Juno.
 
 Also, just a note that we have a handful of bugs and BPs we're trying
 to land in Juno-3 yet today, so core reviewers, please focus on those
 today.
 
 Thanks!
 Kyle
 
 [1] https://launchpad.net/neutron/+milestone/juno-2
 


Hi Kyle,

Do we have guidelines for what can/should qualify as an exception?
I see some exception requests and I would like to understand what
criteria they should meet in order to qualify as an exception.

Thanks, Livnat


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Neutron integration test job

2014-07-24 Thread Denis Makogon
On Thu, Jul 24, 2014 at 12:32 PM, Nikhil Manchanda nik...@manchanda.me
wrote:


  On Wed, Jul 23, 2014 at 7:28 AM, Denis Makogon dmako...@mirantis.com
 wrote:
  [...]
 
  Add Neutron-based configuration for DevStack to let folks try it

 This makes sense to tackle now that the neutron integration pieces have
 merged in Trove (yahoo!).

 However, it looks like the changes you propose in your DevStack patchset
 [1] have been copied directly from the trove-integration scripts at
 [2]. I have two primary concerns with this:

 a. Most of these values are only required for the trove functional
 tests to pass -- they aren't required for a user install of trove with
 Neutron. For such values, the trove-integration scripts seem like a
 better place for this configuration.

 b. Since the trove functional tests run based on the trove-integration
 scripts, this means that if this change is merged, this configuration
 code will run twice: once in devstack, and once again as part of the
 test-init script from trove-integration.

 [1] https://review.openstack.org/#/c/108966
 [2]
 https://github.com/openstack/trove-integration/blob/master/scripts/redstack#L406-427



  Implementing/providing a new type of test job that will run all Trove tests
  with Neutron enabled on a regular basis, to verify that all our networking
  preparations for instances are fine.

  The last thing is the most interesting one, and I'd like to discuss it with
  all of you, folks.
  So, I've written an initial job template taking into account the specific
  configuration required by DevStack and Trove-integration, see [4], and I'd
  like to receive all possible feedback as soon as possible.
 

 So it looks like the test job you propose [3] is based on a current
 experimental job template: gate-trove-functional-dsvm-{datastore}
 [4]. Since most of it is an exact copy (except for the
 NEUTRON_ENABLED bit), I'd suggest working that in as a parameter to the
 current job template instead of duplicating the exact same code as part
 of another job.

Nikhil, I already did lots of refactoring inside trove.yaml (see patchset
[1] and its dependent patchset), and I'm going to do the same thing here.
I know that there's lots of duplication; I just wanted to describe the complete
template.

The actual question is: is the given template correct? Would it work with
trove-integration and with pure devstack in the near future?

P.S. I've got only basic knowledge about jenkins jobs inside infra.

[1] https://review.openstack.org/#/c/100601/



 [3] https://gist.github.com/denismakogon/76d9bd3181781097c39b
 [4]
 https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/jenkins_job_builder/config/trove.yaml#L30-63


 Thanks,
 Nikhil

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Upgrade of netchecker/mcagents during OpenStack patching procedure

2014-07-24 Thread Mike Scherbakov
Hi,
for #1, I think we can wait for the upgrades feature to be able to upgrade
netchecker and mcagents. The patching feature uses puppet, and as these
packages are not installed by puppet, we would need some workarounds for now.
I don't think it's critical enough to justify those workarounds, so let's
skip it.



On Thu, Jul 24, 2014 at 2:58 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 I want to discuss here several bugs

 1. Do we want to upgrade (deliver new packages) for netchecker and
 mcagents? [1]

 If yes, then we have to add a list of the packages which are
 installed at the provisioning stage (e.g. netchecker/mcagent/something else)
 in puppet, to run patching for these packages.

 2. After a master node upgrade (from 5.0 to 5.0.1/5.1) the Murano test is still
 disabled [2] for old clusters, despite the fact that the test was fixed in
 5.0.1/5.1 and works on clusters which were created after the upgrade.

 I'm not an expert in OSTF; are there any suggestions on how to fix it? Who
 can we assign this bug to?

 [1] https://bugs.launchpad.net/fuel/+bug/1343139
 [2] https://bugs.launchpad.net/fuel/+bug/1337823

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Upgrade of netchecker/mcagents during OpenStack patching procedure

2014-07-24 Thread Dmitriy Shulyak
Hi,

1. There are several incompatibilities between the network checker in 5.0 and
5.1, mainly caused by the introduction of multicast verification.
One issue is with the additional release information, which is easy to resolve
by excluding multicast on 5.0 environments:
[1] https://bugs.launchpad.net/fuel/+bug/1342814
Another issue is with running network verification on an old bootstrap and a
newly created 5.1 environment:
[2] https://bugs.launchpad.net/fuel/+bug/1348130
There is no easy way to fix it, so I will probably disable multicast for
now.

The other issue is about bugs that were fixed in mcagents, the network checker
and the nailgun agent.

2. It can be done with some hacks in OSTF.



On Thu, Jul 24, 2014 at 1:58 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 I want to discuss here several bugs

 1. Do we want to upgrade (deliver new packages) for netchecker and
 mcagents? [1]

 If yes, then we have to add a list of the packages which are
 installed at the provisioning stage (e.g. netchecker/mcagent/something else)
 in puppet, to run patching for these packages.

 2. After a master node upgrade (from 5.0 to 5.0.1/5.1) the Murano test is still
 disabled [2] for old clusters, despite the fact that the test was fixed in
 5.0.1/5.1 and works on clusters which were created after the upgrade.

 I'm not an expert in OSTF; are there any suggestions on how to fix it? Who
 can we assign this bug to?

 [1] https://bugs.launchpad.net/fuel/+bug/1343139
 [2] https://bugs.launchpad.net/fuel/+bug/1337823

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mentor program?

2014-07-24 Thread Ajaya Agrawal
That is a very good suggestion.

I started contributing to OpenStack three months back. IMO it is not that
difficult to get started, and there are many blogs which can help you get
started. There is plenty of low-hanging fruit which could be fixed by newbies.
The real problem comes when you are past that phase. There are too many
projects, and it is difficult to select one and work on a small feature from
that project. I think we should be concentrating on this part more. The
projects which could get the most benefit from this are the ones which are not
incubated yet or only just incubated.

Cheers,
Ajaya



On Thu, Jul 24, 2014 at 4:12 AM, Joshua Harlow harlo...@outlook.com wrote:

 Awesome,

 When I start to see emails on the ML that say anyone need any help for XYZ ...
 (which is great btw), it makes me feel like there should be a more
 appropriate avenue for those inspirational folks looking to get involved (a
 ML isn't really the best place for this kind of guidance and direction).

 And in general mentoring will help all involved if we all do more of it :-)

 Let me know if anything is needed that I can possibly help with to get
 more of it going.

 -Josh

 On Jul 23, 2014, at 2:44 PM, Jay Bryant jsbry...@electronicjungle.net
 wrote:

 Great question Josh!

 Have been doing a lot of mentoring within IBM for OpenStack and have now
 been asked to formalize some of that work.  Not surprised there is an
 external need as well.

 Anne and Stefano: let me know if there is anything I can do to help.

 Jay
 Hi all,

 I was reading over an IMHO insightful Hacker News thread last night:

 https://news.ycombinator.com/item?id=8068547

 Labeled/titled: 'I made a patch for Mozilla, and you can do it too'

 It made me wonder what kind of mentoring support we as a community are
 offering to newbies (a random google search for 'openstack mentoring' shows
 mentors for GSoC, mentors for interns, outreach for women... but no mention
 of mentors as a way for everyone to get involved).

 Looking at the comments in that Hacker News thread and the article itself, it
 seems like mentoring is stressed over and over as the way to get involved.

 Have there been ongoing efforts to establish such a program (I know there
 is training work that has been done, but that's not exactly the same)?

 Thoughts, comments...?

 -Josh
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove][stevedore] Datastore configuration opts refactoring. Stevedore integration.

2014-07-24 Thread Denis Makogon
Hello, Stackers.

The Trove wiki and Launchpad pages state that it is a scalable database
service that allows users to quickly and easily utilize the features of a
relational database without the burden of handling complex administrative
tasks. Trove can already provision single instances of certain databases.

How can developers integrate new datastores?

Each datastore requires its own option group. What does that mean?

For each datastore, the developer has to define an oslo option group
(https://wiki.openstack.org/wiki/Oslo/Config); see
https://github.com/openstack/trove/blob/master/trove/common/cfg.py#L270-L436
for the existing ones. It contains a set of required configuration parameters
that are used at various stages of provisioning/management.

Group content

Each group contains options that are used by the regular Trove services,
Trove-api and Trove-taskmanager, and by one very specific service,
Trove-guestagent.

Options that are required by the API and Taskmanager services:

   - tcp_ports - a list of TCP ports used as the basis for building rules for
     the security group assigned to the instance.
   - udp_ports - a list of UDP ports used as the basis for building rules for
     the security group assigned to the instance.
   - root_on_create - enable the automatic creation of the root user for the
     service during instance-create. The generated password for the root user
     is immediately returned in the response of instance-create as the
     'password' field.
   - usage_timeout - timeout to wait for a guest to become active.
   - backup_strategy - the specific class responsible for backup creation.

Options that are required by Trove-guestagent:

   - backup_strategy - the specific class responsible for backup creation.
   - mount_point - the filesystem path at which to mount the block storage
     volume.
   - backup_namespace - the namespace where the backup strategy is defined.
   - restore_namespace - the namespace where the restore strategy is defined.

For now, all these options (for all services) are defined within one
configuration group.

So here comes the first suggestion: we need to split each datastore group into
two:

   - server-side datastore options
   - guest-side datastore options

Benefits: the refactoring would make it possible to split out the guest
per-datastore options and extract the guest from the codebase completely.
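For illustration, a rough sketch of what the split could look like with
oslo.config groups (the group and option names here are made up, not existing
Trove code):

from oslo.config import cfg

# Server-side options, used by Trove-api and Trove-taskmanager.
mysql_group = cfg.OptGroup('mysql', title='MySQL server-side options')
server_side_opts = [
    cfg.ListOpt('tcp_ports', default=['3306'],
                help='TCP ports opened in the instance security group.'),
    cfg.BoolOpt('root_on_create', default=False,
                help='Create the root user during instance-create.'),
]

# Guest-side options, shipped only with the guestagent.
mysql_guest_group = cfg.OptGroup('mysql_guestagent',
                                 title='MySQL guest-side options')
guest_side_opts = [
    cfg.StrOpt('mount_point', default='/var/lib/mysql',
               help='Filesystem path to mount the block storage volume.'),
    cfg.StrOpt('backup_strategy', default='InnoBackupEx',
               help='Class responsible for backup creation.'),
]

CONF = cfg.CONF
CONF.register_group(mysql_group)
CONF.register_opts(server_side_opts, group=mysql_group)
CONF.register_group(mysql_guest_group)
CONF.register_opts(guest_side_opts, group=mysql_guest_group)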

Now let's take a look at how Trove-guestagent works.

Trove-guestagent is an RPC service with a per-datastore manager. Each time
instance provisioning is initiated, the server side injects configuration
files, one of which contains a significant option for the guest: the
datastore_manager option. It is used to load the specific datastore manager
implementation.

This is how it works:

Type of configuration attribute: dictionary

Name: datastore_registry_ext

Example:

datastore_registry_ext = {
    'percona': 'trove.guestagent.datastore.mysql.manager.Manager',
    'mysql': 'trove.guestagent.datastore.mysql.manager.Manager',
    'cassandra': 'trove.guestagent.datastore.cassandra.manager.Manager',
}

Here comes an issue: each guest contains tons of files specific to every
datastore. For development purposes that's totally fine, but for production
requirements it's not good at all. The guest should be lightweight, it should
be small, etc.


How can we simplify datastore integration?

I'd like to propose integrating stevedore
(http://stevedore.readthedocs.org/en/latest/) into the Trove services.

From the Trove-guestagent perspective, according to the description above, we
have two separate entities that need to be injected during guest deployment
(while preparing an image for a specific datastore):

   - datastore configuration options
   - manager implementation

Basically, those entities can be merged into one, since the manager
implementation relies on the datastore configuration options.

From the Trove-API and Trove-taskmanager perspective, we need to inject the
per-datastore attributes mentioned above.

Implementation details

This topic mostly touches the guestagent. Stevedore integration will force us
to define a new abstraction layer for datastore managers: a BaseDatastore
manager.

Benefits? For now, only the mysql datastore manager can handle 100% of the API
calls (see [1]), while other datastores do not support certain API calls (see
[2] - [5]). So an abstraction layer would reduce the size of certain manager
implementations (they would not contain calls like [6]). Another benefit: we
don't need to maintain a manager registry, as each new datastore manager can
be included from third-party packages through the guest's setup.cfg.
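For illustration, a rough sketch of how stevedore could replace the
datastore_registry_ext dictionary (the entry point namespace and names below
are hypothetical):

from stevedore import driver

def load_datastore_manager(datastore_manager):
    # datastore_manager comes from the injected guest configuration,
    # e.g. 'mysql' or 'cassandra'.
    mgr = driver.DriverManager(
        namespace='trove.guestagent.datastore.managers',
        name=datastore_manager,
        invoke_on_load=True)
    return mgr.driver

# A datastore package would then advertise its manager in setup.cfg:
#
# [entry_points]
# trove.guestagent.datastore.managers =
#     mysql = trove.guestagent.datastore.mysql.manager:Manager
#     cassandra = trove.guestagent.datastore.cassandra.manager:Manager

With that, adding a datastore becomes a packaging concern rather than a change
to the guestagent codebase.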

So I'd like to initiate a discussion with all of you folks before submitting a
BP and requesting review.


Thoughts?

[1]
https://github.com/openstack/trove/blob/master/trove/guestagent/datastore/mysql/manager.py

[2]
https://github.com/openstack/trove/blob/master/trove/guestagent/datastore/mongodb/manager.py

[3]

Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

2014-07-24 Thread Evgeny Fedoruk
Hi Doug,
I agree with Brandon: since there is no flavors framework yet, each driver not
supporting TLS is in charge of throwing the unsupported exception.
The driver can do it once it gets a listener with the TERMINATED_HTTPS protocol.
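For illustration only, a sketch of how a non-TLS driver could do that (the
names are made up; this is not the actual LBaaS v2 driver interface):

class NoTlsListenerDriver(object):
    """Example driver that rejects TLS-terminated listeners."""

    def create_listener(self, context, listener):
        if listener.protocol == 'TERMINATED_HTTPS':
            # Hypothetical way to signal "unsupported" back to the caller.
            raise NotImplementedError(
                'TLS termination is not supported by this driver')
        # ... continue with the normal create path ...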

Evg


-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Wednesday, July 23, 2014 9:09 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

@Evgeny: Did you intend to add another patchset to the reviews I've been
working on? If so, I don't really see any changes, so if there are some
changes you need in there, let me know.

@Doug: I think if the drivers see the TERMINATED_HTTPS protocol then they can 
throw an exception.  I don't think a driver interface change is needed.

Thanks,
Brandon


On Wed, 2014-07-23 at 17:02 +, Doug Wiegley wrote:
 Do we want any driver interface changes for this?  At one level, with 
 the current interface, conforming drivers could just reference 
 listener.sni_containers, with no changes.  But, do we want something 
 in place so that the API can return an unsupported error for non-TLS 
 v2 drivers?  Or must all v2 drivers support TLS?
 
 doug
 
 
 
 On 7/23/14, 10:54 AM, Evgeny Fedoruk evge...@radware.com wrote:
 
 My code is here:
 https://review.openstack.org/#/c/109035/1
 
 
 
 -Original Message-
 From: Evgeny Fedoruk
 Sent: Wednesday, July 23, 2014 6:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work 
 division
 
 Hi Carlos,
 
 As I understand you are working on common module for Barbican 
 interactions.
 I will commit my code later today and I will appreciate if you and 
 anybody else  who is interested will review this change.
 There is one specific spot for the common Barbican interactions 
 module API integration.
 After the IRC meeting tomorrow, we can discuss the work items and 
 decide who is interested/available to do them.
 Does it make sense?
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Wednesday, July 23, 2014 6:15 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work 
 division
 
 Do you have any idea as to how we can split up the work?
 
 On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
  wrote:
 
  Hi,
  
  I'm working on TLS integration with loadbalancer v2 extension and db.
  Based on Brandon's patches
 https://review.openstack.org/#/c/105609 , 
 https://review.openstack.org/#/c/105331/  , 
 https://review.openstack.org/#/c/105610/
  I will abandon previous 2 patches for TLS which are 
 https://review.openstack.org/#/c/74031/ and 
 https://review.openstack.org/#/c/102837/
  I am aiming to submit my change later today. It will include the lbaas
 extension v2 modifications, lbaas db v2 modifications, an alembic
 migration for the schema changes, and new unit tests for the lbaas db v2.
  
  Thanks,
  Evg
  
  -Original Message-
  From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
  Sent: Wednesday, July 23, 2014 3:54 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work 
  division
  
    Since it looks like the TLS blueprint was approved, I'm sure we're
 all eager to start coding, so how should we divide up the work on the source code.
 I have pull requests in pyopenssl
 (https://github.com/pyca/pyopenssl/pull/143) and a few one-liners
 in pyca/cryptography to expose the needed low-level bits, which I'm hoping
 will be added pretty soon so that the PR 143 tests can pass. In case it
 doesn't happen, we will fall back to using pyasn1_modules, as it already
 has a means to fetch what we want at a lower level.
  I'm just hoping that we can split the work up so that we can
 collaborate together on this without over-serializing the work, where
 people become dependent on waiting for someone else to complete
 their work, or worse, one person ends up doing all the work.
  
  
Carlos D. Garza 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 

Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Doug Hellmann

On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load options/groups with
 known names from the configuration files.
   I am wondering if there's a possible solution to dynamically load them?
 
   For example, I do not know the group names (section names in the
 configuration file), but want to read the configuration file and detect the
 definitions inside it.
 
 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2
 
Then I want to automatically load group1.key1 and group1.key2,
 without knowing the name of group1 first.

If you don’t know the group name, how would you know where to look in the 
parsed configuration for the resulting options?

Doug

 
 Thanks a lot!
 
 -- 
 Best wishes!
 Baohua
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] overuse of 'except Exception'

2014-07-24 Thread Chris Dent

On Wed, 23 Jul 2014, Doug Hellmann wrote:

That's bad enough, but much worse, this will catch all sorts of
exceptions, even ones that are completely unexpected and ought to
cause a more drastic (and thus immediately informative) failure
than 'something failed’.


In most cases, we chose to handle errors this way to keep the service
running even in the face of “bad” data, since we are trying to
collect an audit stream and we don’t want to miss good data if we
encounter bad data.


a) I acknowledge that you're actually one of the elders to whom I
   referred earlier so I hesitate to disagree with you here, so feel
   free to shoot me down, but...

b) keep the service running in the face of bad is exactly the
   sort of reason why I don't like this idiom. I think those
   exceptions which we can enumerate as causes of bad should be
   explicitly caught and explicitly logged and the rest of them
   should explicitly cause death exactly because we don't know
   what happened and the situation is _actually_ exceptional and we
   ought to know now, not later, that it happened, and not some
   number of minutes or hours or even days later when we notice that
   some process, though still running, hasn't done any real work.

   That kind of keep it alive rationale often leads to far more
   complex debugging situations than otherwise.

In other words there are two kinds of bad: The bad that we know
and can expect (even though we don't want it) and the bad that we
don't know and shouldn't expect. These should be handled
differently.

A compromise position (if one is needed) would be something akin to,
but not exactly like:

    except (TheVarious, ExceptionsIKnow) as exc:
        LOG.warning('shame, no workie, but you know, it happens: %s', exc)
    except Exception:
        LOG.exception('crisis!')

This makes it easier to distinguish between the noise and the nasty,
which I've found to be quite challenging thus far.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] overuse of 'except Exception'

2014-07-24 Thread Doug Hellmann

On Jul 24, 2014, at 8:23 AM, Chris Dent chd...@redhat.com wrote:

 On Wed, 23 Jul 2014, Doug Hellmann wrote:
 That's bad enough, but much worse, this will catch all sorts of
 exceptions, even ones that are completely unexpected and ought to
 cause a more drastic (and thus immediately informative) failure
 than 'something failed’.
 
 In most cases, we chose to handle errors this way to keep the service
 running even in the face of “bad” data, since we are trying to
 collect an audit stream and we don’t want to miss good data if we
 encounter bad data.
 
 a) I acknowledge that you're actually one of the elders to whom I
   referred earlier so I hesitate to disagree with you here, so feel
   free to shoot me down, but…

I don’t claim any special status except that I was there and am trying to 
provide background on why things are as they are. :-)

 
 b) keep the service running in the face of bad is exactly the
   sort of reason why I don't like this idiom. I think those
   exceptions which we can enumerate as causes of bad should be
   explicitly caught and explicitly logged and the rest of them
   should explicitly cause death exactly because we don't know
   what happened and the situation is _actually_ exceptional and we
   ought to know now, not later, that it happened, and not some
   number of minutes or hours or even days later when we notice that
   some process, though still running, hasn't done any real work.
 
   That kind of keep it alive rationale often leads to far more
   complex debugging situations than otherwise.
 
 In other words there are two kinds of bad: The bad that we know
 and can expect (even though we don't want it) and the bad that we
 don't know and shouldn't expect. These should be handled
 differently.
 
 A compromise position (if one is needed) would be something akin to,
 but not exactly like:
 
    except (TheVarious, ExceptionsIKnow) as exc:
        LOG.warning('shame, no workie, but you know, it happens: %s', exc)
    except Exception:
        LOG.exception('crisis!')
 
 This makes it easier to distinguish between the noise and the nasty,
 which I've found to be quite challenging thus far.

There are not, now, any particular guarantees about notification payloads [1], 
though, and so while we have collector plugins tailored to the types of 
notifications we see from other services today, the number, type, and content 
of those notifications are all subject to change. When new services come online 
and start sending ceilometer data (or an existing service defines a new event 
type), the event collection code can usually handle it but work needs to be 
done to convert events to samples for metering. If the payload of a known event 
type changes, existing code to convert the event to samples may fail to find a 
field. 

Having a hard-fail error handler is useful in situations where continuing 
operation would make the problem worse *and* the deployer can fix the problem. 
Being unable to connect to a database might be an example of this. However, we 
don’t want the ceilometer collector to shutdown if it receives a notification 
it doesn’t understand because discarding a malformed message isn’t going to 
make other ceilometer operations any worse, and seeing such a message isn’t a 
situation the deployer can fix. Catching KeyError, IndexError, AttributeError, 
etc. for those cases would be useful if we were going to treat those exception 
types differently somehow, but we don’t.

That said, it’s possible we could tighten up some of the error handling outside 
of event processing loops, so as I said, if you have a more specific proposal 
for places you think we can predict the exception type reliably, we should talk 
about those directly instead of the general case. You mention distinguishing 
between “the noise and the nasty” — do you have some logs you can share?

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/039858.html


 
 -- 
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Instance OS Support

2014-07-24 Thread Ihar Hrachyshka

On 24/07/14 04:22, Quang Long wrote:
 Hi guys, I have a question: if we use OpenStack Havana with the QEMU/KVM
 hypervisor on Ubuntu 12.04, what OSes can we use for instances when
 launching them?
 
 I found a link related to this issue, for reference:
 http://www.linux-kvm.org/page/Guest_Support_Status#Windows_Family
 
 But with Red Hat Enterprise Linux OpenStack Platform, I notice that
 fewer OSes are supported:
 https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Installation_and_Configuration_Guide/Supported_Virtual_Machine_Operating_Systems.html

 
I think the difference is more about which systems Red Hat considers worth
supporting with its support service. KVM may still run other systems; it's
just that the support provided by Red Hat may be limited on platforms that
haven't got the official status.

 All of your answers would be appreciated. Many thanks.
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Spec Freeze Exception] [Gantt] Scheduler Isolate DB spec

2014-07-24 Thread Sylvain Bauza
On 24/07/2014 02:11, Michael Still wrote:
 In that case this exception is approved. The exception is in the form
 of another week to get the spec merged, so quick iterations are the
 key.

 Cheers,
 Michael

Thanks Michael, greatly appreciated.

-Sylvain

 On Wed, Jul 23, 2014 at 5:31 PM, Sylvain Bauza sba...@redhat.com wrote:
 On 23/07/2014 01:11, Michael Still wrote:
 This spec freeze exception only has one core signed up. Are there any
 other cores interested in working with Sylvain on this one?

 Michael
 By looking at
 https://etherpad.openstack.org/p/nova-juno-spec-priorities, I can see
 ndipanov as volunteer for sponsoring this blueprint.

 -Sylvain

 On Mon, Jul 21, 2014 at 7:59 PM, John Garbutt j...@johngarbutt.com wrote:
 On 18 July 2014 09:10, Sylvain Bauza sba...@redhat.com wrote:
 Hi team,

 I would like to put your attention on https://review.openstack.org/89893
  This spec aims to isolate access within the filters to only Scheduler
  bits. It is a prerequisite for a possible split of the scheduler
  into a separate project named Gantt, as it's necessary to remove direct
  access to other Nova objects (like aggregates and instances).

  This spec is one of the oldest specs so far, but its approval has been
  delayed because there were other concerns to discuss first about how we
  split the scheduler. Now that these concerns have been addressed, it is
  time to go back to that blueprint and iterate on it.

  I understand the exception is for a window of 7 days. In my opinion,
  this objective is achievable, as all the pieces are now there for reaching
  a consensus.

  The change by itself is only a refactoring of the existing code with no
  impact on APIs nor on the DB schema, so IMHO this blueprint is a good
  opportunity to stay on track with the objective of a split by the
  beginning of Kilo.

  Cores, I leave it to you to judge the urgency; I'm available on IRC or
  by email to answer questions.
  Regardless of Gantt, tidying up the data dependencies here makes sense.

 I feel we need to consider how the above works with upgrades.

 I am happy to sponsor this blueprint. Although I worry we might not
 get agreement in time.

 Thanks,
 John

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Spec freeze exception] Controlled shutdown of GuestOS

2014-07-24 Thread Day, Phil
According to https://etherpad.openstack.org/p/nova-juno-spec-priorities,
alaski has also signed up for this if I drop the point of contention - which
I've done.

 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: 24 July 2014 00:50
 To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [Nova][Spec freeze exception] Controlled
 shutdown of GuestOS
 
 Another core sponsor would be nice on this one. Any takers?
 
 Michael
 
 On Thu, Jul 24, 2014 at 4:14 AM, Daniel P. Berrange berra...@redhat.com
 wrote:
  On Wed, Jul 23, 2014 at 06:08:52PM +, Day, Phil wrote:
  Hi Folks,
 
   I'd like to propose the following as an exception to the spec freeze, on the
  basis that it addresses a potential data corruption issue in the Guest.
 
  https://review.openstack.org/#/c/89650
 
  We were pretty close to getting acceptance on this before, apart from a
 debate over whether one additional config value could be allowed to be set
 via image metadata - so I've given in for now on wanting that feature from a
 deployer perspective, and said that we'll hard code it as requested.
 
  Initial parts of the implementation are here:
  https://review.openstack.org/#/c/68942/
  https://review.openstack.org/#/c/99916/
 
  Per my comments already, I think this is important for Juno and will
  sponsor it.
 
  Regards,
  Daniel
  --
  |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
  |: http://libvirt.org  -o- http://virt-manager.org 
  :|
  |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ 
  :|
  |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc 
  :|
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 --
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [specs] how to continue spec discussion

2014-07-24 Thread Ben Nemec
On 2014-07-17 09:37, Russell Bryant wrote:
 On 07/16/2014 10:30 AM, John Garbutt wrote:
 On 16 July 2014 14:07, Thierry Carrez thie...@openstack.org wrote:
 Daniel P. Berrange wrote:
 On Wed, Jul 16, 2014 at 11:57:33AM +, Tim Bell wrote:
 It seems a pity to archive the comments and reviewer lists along
 with losing a place to continue the discussions even if we are not
 expecting to see code in Juno.

 Agreed we should keep those comments.

 Agreed, that is sub-optimal to say the least.

 The spec documents themselves are in a release specific directory
 though. Any which are to be postponed to Kxxx would need to move
 into a specs/k directory instead of specs/juno, but we don't
 know what the k directory needs to be called yet :-(

 The poll ends in 18 hours, so that should no longer be a blocker :)

 Aww, there goes our lame excuse for punting making a decision on this.

 I think what we don't really want is to abandon those specs and lose
 comments and history... but we do want to shelve them in a place where they
 do not interrupt core developers' workflow as they concentrate on Juno
 work. It will be difficult to efficiently ignore them if they are filed
 in a next or a kxxx directory, as they would still clutter /most/ Gerrit
 views.

 +1

 My intention was that once the specific project is open for K specs,
 people will restore their original patch set, and move the spec to the
 K directory, thus keeping all the history.

 For Nova, the open reviews, with a -2, are ones that are on the
 potential exception list, and so still might need some reviews. If
 they gain an exception, the -2 will be removed. The list of possible
 exceptions is currently included in bottom of this etherpad:
 https://etherpad.openstack.org/p/nova-juno-spec-priorities
 
 I think we can track potential exceptions without abandoning patches.  I
 think having them still open helps retain a dashboard of outstanding
 specs.  I'm also worried about how contributors feel having their spec
 abandoned when it's not especially clear why in the review.  Anyway, I'd
 prefer just leaving them all open (with a -2 is fine) unless we think
 it's a dead end.

+1.  This is basically how we handle code changes that don't make
feature freeze, so I don't see why it wouldn't work for specs too.

 
 For exceptions, I think we should require a core review sponsor for any
 exception, similar to how we've handled feature freeze exceptions in the
 past.  I don't think it makes much sense to provide an exception unless
 we're confident we can get it in.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Approval Deadline (SAD) has passed, next steps

2014-07-24 Thread Kyle Mestery
On Thu, Jul 24, 2014 at 5:38 AM, Livnat Peer lp...@redhat.com wrote:
 On 07/21/2014 04:16 PM, Kyle Mestery wrote:
 Hi all!

 A quick note that SAD has passed. We briskly approved a pile of BPs
 over the weekend, most of them vendor related as low priority, best
 effort attempts for Juno-3. At this point, we're hugely oversubscribed
 for Juno-3, so it's unlikely we'll make exceptions for things into
 Juno-3 now.

 I don't plan to open a Kilo directory in the specs repository quite
 yet. I'd like to first let things settle down a bit with Juno-3 before
 going there. Once I do, specs which were not approved should be moved
 to that directory where they can be reviewed with the idea they are
 targeting Kilo instead of Juno.

 Also, just a note that we have a handful of bugs and BPs we're trying
 to land in Juno-3 yet today, so core reviewers, please focus on those
 today.

 Thanks!
 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-2



 Hi Kyle,

 Do we have guidelines for what can/should qualify as an exception?
 I see some exception requests and I would like to understand what
 criteria they should meet in order to qualify as an exception.

 Thanks ,Livnat

Salvatore sent an email to the list [1] which perfectly describes the
types of things we'll allow exceptions for. To be honest, we're
already 4x oversubscribed for Juno-3 [2] as it is, and it's unlikely
even 2/3 of the existing approved BPs will land code. That's one
reason it's hard for me to think of approving existing ones,
especially considering how close we are to feature freeze and the end
of Juno [3].

Thanks,
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/040969.html
[2] https://launchpad.net/neutron/+milestone/juno-3
[3] https://wiki.openstack.org/wiki/Juno_Release_Schedule


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-24 Thread gordon chung
 Gordon, could you prepare a version of the repository that stops with the 
 export and whatever changes are needed to make the test jobs for the new  
 library run? If removing some of those tests is part of making the suite run, 
 we can talk about that on the list here, but if you can make the job run 
 without  that commit we should review it in gerrit after the repository is 
 imported.

from what i recall, the stray tests commit was because running graduate.sh put 
the unit tests under tests/unit/middleware/xyz.py and added a few base test 
files that weren't used for anything. the commit removed the unused base test 
files and kept the test files directly under tests directory.
cheers,
gord  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Denis Makogon
Hello, Stackers.

 I’d like to discuss the future of the Trove metadata API. But first, some
small history info (mostly taken from the Trove metadata spec, see [1]):
Instance metadata is a feature that has been requested frequently by our
users. They need a way to store critical information for their instances
and have that be associated with the instance so that it is displayed
whenever that instance is listed via the API. This also becomes very usable
from a testing perspective when doing integration/ci. We can utilize the
metadata to store things like what process created the instance, what the
instance is being used for, etc... The design for this feature is modeled
heavily on the Nova metadata API with a few tweaks in how it works
internally.

And here comes the conflict. Glance devs are working on the “Glance Metadata
Catalog” feature (see [2]). And as for me, we don’t have to “reinvent the
wheel” for Trove. It seems that we would be able to use the Glance API to
interact with the Metadata Catalog, and it seems redundant to write our own
API for metadata CRUD operations.



From the Trove perspective, we need to define a list of concrete use cases
for metadata usage (e.g., the goals given at [1] that are out of scope of the
Database program, etc.).

From the development and cross-project integration perspective, we need to
delegate all development to the Glance devs. But we are still able to help
the Glance devs with this feature by taking an active part in polishing the
proposed spec (see [2]).



Unfortunately, we (Trove devs) are already halfway to metadata - the patch
for python-troveclient has already merged. So, we need to consider
deprecating/reverting the merged patchset and blocking the merge of the
proposed one (see [3]) in favor of the Glance Metadata Catalog.


Thoughts?

[1] https://wiki.openstack.org/wiki/Trove-Instance-Metadata

[2] https://review.openstack.org/#/c/98554/11

[3] https://review.openstack.org/#/c/82123/


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] overuse of 'except Exception'

2014-07-24 Thread Chris Dent

On Thu, 24 Jul 2014, Doug Hellmann wrote:


I don’t claim any special status except that I was there and am
trying to provide background on why things are as they are. :-)


I think that counts and I very much appreciate the responses.


Having a hard-fail error handler is useful in situations where
continuing operation would make the problem worse *and* the deployer
can fix the problem. Being unable to connect to a database might be an
example of this. However, we don’t want the ceilometer collector to
shutdown if it receives a notification it doesn’t understand because
discarding a malformed message isn’t going to make other ceilometer
operations any worse, and seeing such a message isn’t a situation
the deployer can fix. Catching KeyError, IndexError, AttributeError,
etc. for those cases would be useful if we were going to treat those
exception types differently somehow, but we don’t.


I guess what I'm asking is: shouldn't we treat them differently? If
I've got a dict coming in over the bus and it is missing a key, big
deal, the bus is still working. I _do_ want to know about it, but it
isn't a disaster, so I can (and should) catch KeyError and log a
short message (without a traceback) that makes it clear the
notification payload was messed up.
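
To make that concrete, here's a rough sketch of the shape I mean (the
handler and the payload keys are made up, not real Ceilometer code):

    import logging

    LOG = logging.getLogger(__name__)

    def handle_sample(payload):
        # Hypothetical notification handler; the field names are invented.
        try:
            resource_id = payload['resource_id']
            volume = payload['counter_volume']
        except KeyError as missing:
            # The bus is fine, this one message is just malformed: note it
            # briefly at WARNING, without a traceback, and move on.
            LOG.warning("Dropping malformed notification, missing key %s",
                        missing)
            return None
        # ... normal processing carries on from here ...
        return resource_id, volume

versus wrapping the whole thing in except Exception plus LOG.exception(),
which turns every malformed payload into a full traceback that looks far
more alarming than it is.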

Maybe such a thing is elsewhere in the stack, if so, great. In that
case the thing code I pointed out as a bit of a compromise is in
place, just not in the same place.

What I don't want is a thing logged as _exceptional_ when it isn't.


That said, it’s possible we could tighten up some of the error
handling outside of event processing loops, so as I said, if you have
a more specific proposal for places you think we can predict the
exception type reliably, we should talk about those directly instead
of the general case. You mention distinguishing between “the noise
and the nasty” — do you have some logs you can share?


Only vaguely at this point, based on my experiences in the past few days
trying to chase down failures in the gate. There's just so much logged,
a lot of which doesn't help, but at the same time a fair bit which looks
like it ought to be a traceback and handled more aggressively. That
experience drove me into the Ceilometer code in an effort to check the
hygiene there and see if there was something I could do in that small
environment (rather than the overwhelming context of the The Entire
Project™).

I'll pay a bit closer attention to the specific relationship
between the ceilometer exceptions (on the loops) and the logs and
when I find something that particularly annoys me, I'll submit a
patch for review and we'll see how it goes.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Authentication is turned on - Fuel API and UI

2014-07-24 Thread Kamil Sambor
Hi folks,

All parts of the code related to stages I and II from the blueprint
http://docs-draft.openstack.org/29/96429/11/gate/gate-fuel-specs-docs/2807f30/doc/build/html/specs/5.1/access-control-master-node.html
are merged. As a result, Fuel (API and UI) now has authentication via
Keystone, and it is required by default. Keystone is installed in a new
container during master installation. The password can be configured via
fuelmenu during installation (the default user:password is admin:admin).
The password is saved in astute.yaml; the admin_token is stored there as well.
Almost all endpoints in Fuel are protected and require an authentication
token. We made an exception for a few endpoints, which are defined in
nailgun/middleware/keystone.py in public_url.
The default password can be changed via the UI or via fuel-cli. When the
password is changed via the UI or fuel-cli it is not stored in any file, only
in Keystone, so if you forget the password you can change it using the
keystone client from the master node and the admin_token from astute.yaml,
using the command: keystone --os-endpoint=http://10.20.0.2:35357/v2.0
--os-token=admin_token password-update .
The Fuel client now authenticates with the user and password stored in
/etc/fuel/client/config.yaml. The password in this file is not updated when
the password is changed via fuel-cli or the UI; the user must change it
manually. If the user does not want to use the config file, the user and
password can be passed to fuel-cli via flags: --os-username=admin
--os-password=test. We also added the possibility to change the password via
fuel-cli; to do this, execute: fuel user --change-password --new-pass=new .
To enable or disable authentication, change /etc/nailgun/settings.yaml
(AUTHENTICATION_METHOD) in the nailgun container.

Best regards,
Kamil S.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Spec freeze exception] Controlled shutdown of GuestOS

2014-07-24 Thread John Garbutt
OK, I think this is important as well, so that's 2-3 cores signed up.

Lets assume the exception is granted I guess, or at least, lets clear
it up in the nova-meeting.

Thanks,
John

On 24 July 2014 14:20, Day, Phil philip@hp.com wrote:
 According to: https://etherpad.openstack.org/p/nova-juno-spec-priorities   
 alaski has also signed up for this if I drop the point of contention - which 
 I've done

 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: 24 July 2014 00:50
 To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [Nova][Spec freeze exception] Controlled
 shutdown of GuestOS

 Another core sponsor would be nice on this one. Any takers?

 Michael

 On Thu, Jul 24, 2014 at 4:14 AM, Daniel P. Berrange berra...@redhat.com
 wrote:
  On Wed, Jul 23, 2014 at 06:08:52PM +, Day, Phil wrote:
  Hi Folks,
 
  I'd like to propose the following as an exception to the spec freeze, on 
  the
 basis that it addresses a potential data corruption issues in the Guest.
 
  https://review.openstack.org/#/c/89650
 
  We were pretty close to getting acceptance on this before, apart from a
 debate over whether one additional config value could be allowed to be set
 via image metadata - so I've given in for now on wanting that feature from a
 deployer perspective, and said that we'll hard code it as requested.
 
  Initial parts of the implementation are here:
  https://review.openstack.org/#/c/68942/
  https://review.openstack.org/#/c/99916/
 
  Per my comments already, I think this is important for Juno and will
  sponsor it.
 
  Regards,
  Daniel
  --
  |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
  |: http://libvirt.org  -o- http://virt-manager.org 
  :|
  |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ 
  :|
  |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc 
  :|
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Authentication is turned on - Fuel API and UI

2014-07-24 Thread Mike Scherbakov
Kamil,
thank you for the detailed information.

Meg, do we have anything documented about authx yet? I think Kamil's email
can be used as a source to prepare user and operation guides for Fuel 5.1.

Thanks,


On Thu, Jul 24, 2014 at 5:45 PM, Kamil Sambor ksam...@mirantis.com wrote:

 Hi folks,

 All parts of code related to stage I and II from blueprint
 http://docs-draft.openstack.org/29/96429/11/gate/gate-fuel-specs-docs/2807f30/doc/build/html/specs/5.1/access-control-master-node.htm
 http://docs-draft.openstack.org/29/96429/11/gate/gate-fuel-specs-docs/2807f30/doc/build/html/specs/5.1/access-control-master-node.html
  are
 merged. In result of that, fuel (api and UI)  we now have authentication
 via keystone and now is required as default. Keystone is installed in new
 container during master installation. We can configure password via
 fuelmenu during installation (default user:password - admin:admin).
 Password is saved in astute.yaml, also admin_token is stored here.
 Almost all endpoints in fuel are protected and they required
 authentication token. We made exception for few endpoints and they are
 defined in nailgun/middleware/keystone.py in public_url .
 Default password can be changed via UI or via fuel-cli. In case of
 changing password via UI or fuel-cli password is not stored in any file
 only in keystone, so if you forgot password you can change it using
 keystone client from master node and admin_token from astute.yaml using
 command: keystone --os-endpoint=http://10.20.0.2:35357/v2.0 
 --os-token=admin_token
 password-update .
 Fuel client now use for authentication user and passwords which are stored
 in /etc/fuel/client/config.yaml. Password in this file is not changed
 during changing via fuel-cli or UI, user must change this password manualy.
 If user don't want use config file can provide user and password to
 fuel-cli by flags: --os-username=admin --os-password=test. We added also
 possibilities to change password via fuel-cli, to do this we should
 execute: fuel user --change-password --new-pass=new .
 To run or disable authentication we should change
 /etc/nailgun/settings.yaml (AUTHENTICATION_METHOD) in nailgun container.

 Best regards,
 Kamil S.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-24 Thread CARVER, PAUL

Alan Kavanagh wrote:

If we have more work being put on the table, then more Core members would
definitely go a long way with assisting this; we can't wait for folks to be
reviewing stuff as an excuse to not get features landed in a given release.

Stability is absolutely essential so we can't force things through without
adequate review. The automated CI testing in OpenStack is impressive, but
it is far from flawless and even if it worked perfectly it's still just
CI, not AI. There's a large class of problems that it just can't catch.

I agree with Alan that if there's a discrepancy between the amount of code
that folks would like to land in a release and the number of core members'
working hours in a six-month period, then that is something the board needs
to take an interest in.

I think a friendly adversarial approach is healthy for OpenStack. Specs and
code should need to be defended, not just rubber stamped. Having core
reviewers critiquing code written by their competitors, suppliers, or vendors
is healthy for the overall code quality. However, simply having specs and
code not get reviewed at all due to a shortage of core reviewers is not
healthy and will limit the success of OpenStack.

I don't really follow Linux kernel development, but a quick search turned
up [1] which seems to indicate at least one additional level between
developer and core (depending on whether we consider Linus and Andrew levels
unto themselves and whether we consider OpenStack projects as full systems
or as subsystems of OpenStack).

Speaking only for myself and not ATT, I'm disappointed that my employer
doesn't have more developers actively writing code. We ought to (in my
personal opinion) be supplying core reviewers to at least a couple of
OpenStack projects. But one way or another we need to get more capabilities
reviewed and merged. My personal top disappointments are with the current
state of IPv6, HA, and QoS, but I'm sure other folks can list lots of other
capabilities that they're really going to be frustrated to find lacking in Juno.

[1] 
http://techblog.aasisvinayak.com/linux-kernel-development-process-how-it-works/



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Authentication is turned on - Fuel API and UI

2014-07-24 Thread Lukasz Oles
Hi all,

one more thing. You do not need to install keystone in your development
environment. By default it runs there in fake mode. Keystone mode is
enabled only on iso. If you want to test it locally you have to install
keystone and configure nailgun as Kamil explained.

Regards,


On Thu, Jul 24, 2014 at 3:57 PM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Kamil,
 thank you for the detailed information.

 Meg, do we have anything documented about authx yet? I think Kamil's email
 can be used as a source to prepare user and operation guides for Fuel 5.1.

 Thanks,


 On Thu, Jul 24, 2014 at 5:45 PM, Kamil Sambor ksam...@mirantis.com
 wrote:

 Hi folks,

 All parts of code related to stage I and II from blueprint
 http://docs-draft.openstack.org/29/96429/11/gate/gate-fuel-specs-docs/2807f30/doc/build/html/specs/5.1/access-control-master-node.htm
 http://docs-draft.openstack.org/29/96429/11/gate/gate-fuel-specs-docs/2807f30/doc/build/html/specs/5.1/access-control-master-node.html
  are
 merged. In result of that, fuel (api and UI)  we now have authentication
 via keystone and now is required as default. Keystone is installed in new
 container during master installation. We can configure password via
 fuelmenu during installation (default user:password - admin:admin).
 Password is saved in astute.yaml, also admin_token is stored here.
 Almost all endpoints in fuel are protected and they required
 authentication token. We made exception for few endpoints and they are
 defined in nailgun/middleware/keystone.py in public_url .
 Default password can be changed via UI or via fuel-cli. In case of
 changing password via UI or fuel-cli password is not stored in any file
 only in keystone, so if you forgot password you can change it using
 keystone client from master node and admin_token from astute.yaml using
 command: keystone --os-endpoint=http://10.20.0.2:35357/v2.0 
 --os-token=admin_token
 password-update .
 Fuel client now use for authentication user and passwords which are
 stored in /etc/fuel/client/config.yaml. Password in this file is not
 changed during changing via fuel-cli or UI, user must change this password
 manualy. If user don't want use config file can provide user and password
 to fuel-cli by flags: --os-username=admin --os-password=test. We added also
 possibilities to change password via fuel-cli, to do this we should
 execute: fuel user --change-password --new-pass=new .
 To run or disable authentication we should change
 /etc/nailgun/settings.yaml (AUTHENTICATION_METHOD) in nailgun container.

 Best regards,
 Kamil S.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Mike Scherbakov
 #mihgen


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Łukasz Oleś
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Arnaud Legendre
Hi Denis,

I think this is a perfect time for you to review the spec for the glance 
metadata catalog https://review.openstack.org/#/c/98554/ and see if it fits 
your use case.
Also, we have a session tomorrow at 9:00am PST at the Glance meetup to discuss 
this topic. I think it would be useful if you could join (in person or 
remotely). Please see the details: 
https://wiki.openstack.org/wiki/Glance/JunoCycleMeetup

Thank you,
Arnaud

On Jul 24, 2014, at 6:32 AM, Denis Makogon 
dmako...@mirantis.commailto:dmako...@mirantis.com wrote:

Hello, Stackers.

 I’d like to discuss the future of Trove metadata API. But first small 
history info (mostly taken for Trove medata spec, see [1]):
Instance metadata is a feature that has been requested frequently by our users. 
They need a way to store critical information for their instances and have that 
be associated with the instance so that it is displayed whenever that instance 
is listed via the API. This also becomes very usable from a testing perspective 
when doing integration/ci. We can utilize the metadata to store things like 
what process created the instance, what the instance is being used for, etc... 
The design for this feature is modeled heavily on the Nova metadata API with a 
few tweaks in how it works internally.

And here comes conflict. Glance devs are working on “Glance Metadata 
Catalog” feature (see [2]). And as for me, we don’t have to “reinvent the 
wheel” for Trove. It seems that we would be able
to use Glance API to interact with   Metadata Catalog. And it seems to be 
redundant to write our own API for metadata CRUD operations.



From Trove perspective, we need to define a list concrete use cases for 
metadata usage (eg given goals at [1] are out of scope of Database program, 
etc.).
From development and cross-project integration perspective, we need to 
delegate all development to Glance devs. But we still able to help Glance devs 
with this feature by taking active part in polishing proposed spec (see [2]).



Unfortunately, we’re(Trove devs) are on half way to metadata - patch for 
python-troveclient already merged. So, we need to consider 
deprecation/reverting of merged and block
merging of proposed ( see [3]) patchsets in favor of Glance Metadata Catalog.


Thoughts?

[1] https://wiki.openstack.org/wiki/Trove-Instance-Metadata
[2] https://review.openstack.org/#/c/98554/11
[3] https://review.openstack.org/#/c/82123/


Best regards,
Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.orgmailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Instance OS Support

2014-07-24 Thread Steve Gordon
- Original Message -
 From: Ihar Hrachyshka ihrac...@redhat.com
 To: openstack-dev@lists.openstack.org
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512
 On 24/07/14 04:22, Quang Long wrote:
  Hi guys, I have a question, if we use OpenStack Havana using
  QEMU/KVM Hypervisor and based on Ubuntu 12.04 OS, what OS for
  instance can we use when lauch?
  
  I found a link related to this issue, for refrence?
  http://www.linux-kvm.org/page/Guest_Support_Status#Windows_Family
  
  But with Red Hat Enterprise Linux OpenStack Platform, I realize
  less OS supported
  https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Installation_and_Configuration_Guide/Supported_Virtual_Machine_Operating_Systems.html
 
  
 I think the difference is more about which systems are considered by
 Red Hat worth being supported with their support service. KVM may
 still run other systems, it's just that support provided by Red Hat
 may be limited on platforms that haven't got the official status.

Indeed, the operating systems listed in the RHELOSP documentation are those 
that were explicitly tested with the versions of KVM shipped in RHEL and 
RHELOSP. It is likely (and expected) that many other guests will function just 
fine as reflected in the user submissions on the KVM site. If you run into any 
bugs with running your guests on KVM then please report them even if they are 
not formally supported per the RHELOSP list!

NB: In future please direct user queries to the openst...@lists.openstack.org 
list. This list is intended for development discussion.

Thanks,

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Approval Deadline (SAD) has passed, next steps

2014-07-24 Thread Kyle Mestery
On Thu, Jul 24, 2014 at 9:52 AM, Alan Kavanagh
alan.kavan...@ericsson.com wrote:
 Hi Kyle

 I do sympathise and know its not easy to accommodate all BP's, and I know 
 it's a difficult job to take these decisions.

 May I therefore suggest that anything that gets punted from Juno is ensured 
 for high priority and acceptance for Kilo Release ? This means we will have 
 those that are not approved ensure they take high priority and guaranteed for 
 the next release (Kilo in this case) and send an email to confirm this.

 That way we don't have people feeling constantly disappointed and being 
 pushed further down the pecking order, and ensure they are not being demoted 
 and support on progressing their work at the next release.

 Interested to hear your thoughts on this Kyle and others and feel free to 
 suggest an alternatives, I know this is a difficult topic to address but I 
 feel it is necessary for us to have this discussion.

It's hard to say guaranteed for Kilo at this point. The best I can
say is that once we open up the neutron-specs repository for Kilo, we
start the process of adding them there. One other thing I can propose
is that we allocate 10 minutes each week at the Neutron meeting to
allow people to propose new ideas. This would let people socialize
their ideas with the broader Neutron team. We could likely accommodate
one or two per week. If people think this would be helpful, we can try
this.

I have another email on this subject, but I'll send it separately to
attract more attention.

Thanks,
Kyle

 Alan

 -Original Message-
 From: Kyle Mestery [mailto:mest...@mestery.com]
 Sent: July-24-14 9:14 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Cc: Kyle Mestery
 Subject: Re: [openstack-dev] [neutron] Spec Approval Deadline (SAD) has 
 passed, next steps

 On Thu, Jul 24, 2014 at 5:38 AM, Livnat Peer lp...@redhat.com wrote:
 On 07/21/2014 04:16 PM, Kyle Mestery wrote:
 Hi all!

 A quick note that SAD has passed. We briskly approved a pile of BPs
 over the weekend, most of them vendor related as low priority, best
 effort attempts for Juno-3. At this point, we're hugely
 oversubscribed for Juno-3, so it's unlikely we'll make exceptions for
 things into
 Juno-3 now.

 I don't plan to open a Kilo directory in the specs repository quite
 yet. I'd like to first let things settle down a bit with Juno-3
 before going there. Once I do, specs which were not approved should
 be moved to that directory where they can be reviewed with the idea
 they are targeting Kilo instead of Juno.

 Also, just a note that we have a handful of bugs and BPs we're trying
 to land in Juno-3 yet today, so core reviewers, please focus on those
 today.

 Thanks!
 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-2



 Hi Kyle,

 Do we have guidelines for what can/should qualify as an exception?
 I see some exception requests and I would like to understand what
 criteria they should meet in order to qualify as an exception.

 Thanks ,Livnat

 Salvatore sent an email to the list [1] which perfectly describes the types 
 of things we'll allow exceptions for. To be honest, we're already 4x 
 oversubscribed for Juno-3 [2] as it is, and it's unlikely even 2/3 of the 
 existing approved BPs will land code. That's one reason it's hard for me to 
 think of approving existing ones, especially considering how close we are to 
 feature freeze and the end of Juno [3].

 Thanks,
 Kyle

 [1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/040969.html
 [2] https://launchpad.net/neutron/+milestone/juno-3
 [3] https://wiki.openstack.org/wiki/Juno_Release_Schedule


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Denis Makogon
On Thu, Jul 24, 2014 at 5:46 PM, Arnaud Legendre alegen...@vmware.com
wrote:

  Hi Denis,

  I think this is a perfect time for you to review the spec for the glance
 metadata catalog https://review.openstack.org/#/c/98554/ and see if it
 fits your use case.
 Also, we have a session tomorrow at 9:00am PST at the Glance meetup to
 discuss this topic. I think it would be useful if you could join (in person
 or remotely). Please see the details:
 https://wiki.openstack.org/wiki/Glance/JunoCycleMeetup

 I will try to take part, unfortunately remotely. Also, I'm reviewing the
metadata spec right now. If there is some kind of gap or missing ability, I
will leave comments. But at a cursory glance it looks like exactly what we
need, apart from our own (Trove) specific things.

Thanks,
Denis M.


  Thank you,
 Arnaud

  On Jul 24, 2014, at 6:32 AM, Denis Makogon dmako...@mirantis.com wrote:

   Hello, Stackers.

  I’d like to discuss the future of Trove metadata API. But first small
 history info (mostly taken for Trove medata spec, see [1]):
  Instance metadata is a feature that has been requested frequently by our
 users. They need a way to store critical information for their instances
 and have that be associated with the instance so that it is displayed
 whenever that instance is listed via the API. This also becomes very usable
 from a testing perspective when doing integration/ci. We can utilize the
 metadata to store things like what process created the instance, what the
 instance is being used for, etc... The design for this feature is modeled
 heavily on the Nova metadata API with a few tweaks in how it works
 internally.

  And here comes conflict. Glance devs are working on “Glance Metadata
 Catalog” feature (see [2]). And as for me, we don’t have to “reinvent the
 wheel” for Trove. It seems that we would be able
  to use Glance API to interact with   Metadata Catalog. And it seems to
 be redundant to write our own API for metadata CRUD operations.


  From Trove perspective, we need to define a list concrete use cases
 for metadata usage (eg given goals at [1] are out of scope of Database
 program, etc.).
  From development and cross-project integration perspective, we need to
 delegate all development to Glance devs. But we still able to help Glance
 devs with this feature by taking active part in polishing proposed spec
 (see [2]).


  Unfortunately, we’re(Trove devs) are on half way to metadata - patch
 for python-troveclient already merged. So, we need to consider
 deprecation/reverting of merged and block
  merging of proposed ( see [3]) patchsets in favor of Glance Metadata
 Catalog.


 Thoughts?

  [1] https://wiki.openstack.org/wiki/Trove-Instance-Metadata
  [2] https://review.openstack.org/#/c/98554/11
  [3] https://review.openstack.org/#/c/82123/


  Best regards,
  Denis Makogon
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Spec Approval Deadline (SAD) has passed, next steps

2014-07-24 Thread Alan Kavanagh
Hi Kyle

I do sympathise and know it's not easy to accommodate all BPs, and I know it's 
a difficult job to take these decisions.

May I therefore suggest that anything that gets punted from Juno is ensured 
high priority and acceptance for the Kilo release? This means that those that 
are not approved take high priority and are guaranteed for the next release 
(Kilo in this case), and we send an email to confirm this.

That way we don't have people feeling constantly disappointed and being pushed 
further down the pecking order, and we ensure they are not being demoted and 
are supported in progressing their work in the next release.

I'm interested to hear your thoughts on this, Kyle and others, and feel free to 
suggest alternatives. I know this is a difficult topic to address, but I feel 
it is necessary for us to have this discussion.

Alan

-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: July-24-14 9:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Kyle Mestery
Subject: Re: [openstack-dev] [neutron] Spec Approval Deadline (SAD) has passed, 
next steps

On Thu, Jul 24, 2014 at 5:38 AM, Livnat Peer lp...@redhat.com wrote:
 On 07/21/2014 04:16 PM, Kyle Mestery wrote:
 Hi all!

 A quick note that SAD has passed. We briskly approved a pile of BPs 
 over the weekend, most of them vendor related as low priority, best 
 effort attempts for Juno-3. At this point, we're hugely 
 oversubscribed for Juno-3, so it's unlikely we'll make exceptions for 
 things into
 Juno-3 now.

 I don't plan to open a Kilo directory in the specs repository quite 
 yet. I'd like to first let things settle down a bit with Juno-3 
 before going there. Once I do, specs which were not approved should 
 be moved to that directory where they can be reviewed with the idea 
 they are targeting Kilo instead of Juno.

 Also, just a note that we have a handful of bugs and BPs we're trying 
 to land in Juno-3 yet today, so core reviewers, please focus on those 
 today.

 Thanks!
 Kyle

 [1] https://launchpad.net/neutron/+milestone/juno-2



 Hi Kyle,

 Do we have guidelines for what can/should qualify as an exception?
 I see some exception requests and I would like to understand what 
 criteria they should meet in order to qualify as an exception.

 Thanks ,Livnat

Salvatore sent an email to the list [1] which perfectly describes the types of 
things we'll allow exceptions for. To be honest, we're already 4x 
oversubscribed for Juno-3 [2] as it is, and it's unlikely even 2/3 of the 
existing approved BPs will land code. That's one reason it's hard for me to 
think of approving existing ones, especially considering how close we are to 
feature freeze and the end of Juno [3].

Thanks,
Kyle

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-July/040969.html
[2] https://launchpad.net/neutron/+milestone/juno-3
[3] https://wiki.openstack.org/wiki/Juno_Release_Schedule


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [not-only-neutron] How to Contribute upstream in OpenStack Neutron

2014-07-24 Thread Kyle Mestery
I've received a lot of emails lately, mostly private, from people who
feel they are being left out of the Neutron process. I'm unsure if
other projects have people who feel this way, thus the uniquely worded
subject above. I wanted to broadly address these concerns with this
email.

One thing I'd like to reiterate for people here, publicly on the list,
is that there are no hidden agendas in Neutron and no political machines
working in the background. As PTL, I've tried to be as transparent as
possible. The honest reality is that if you want to have influence in
Neutron or even in OpenStack in general, get involved upstream. Start
committing small patches. Start looking at bugs. Participate in the
weekly meetings. Build relationships upstream. Work across projects,
not just Neutron. None of this is specific to Neutron or even
OpenStack, but in fact is how you work in any upstream Open Source
community.

I'd also like to address the "add more core reviewers to solve all
these problems" angle. While adding more core reviewers is a good
thing, we need to groom core reviewers and meld them into the team.
This takes time, and it's something we in Neutron actively work on.
The process we use is documented here [1].

I'd also like to point out that one of the things I've tried to do in
Neutron as PTL during the Juno cycle is document as much of the
process around working in Neutron. That is all documented on this wiki
page here [2]. Feedback on this is most certainly welcome.

I'm willing to help work with anyone who wants to contribute more to
Neutron. I spend about half of my time doing just this already,
between reviews, emails, and IRC conversations. So please feel free to
find me on IRC (mestery on Freenode), on the ML, or even just use this
email address.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/NeutronCore
[2] https://wiki.openstack.org/wiki/NeutronPolicies

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-24 Thread Carlos Garza
Are you close to adding the stub modules for the X509 parsing and Barbican 
integration, etc.?

On Jul 24, 2014, at 6:38 AM, Evgeny Fedoruk evge...@radware.com wrote:

 Hi Doug,
 I agree with Brandon, since there is no flavors framework yet, each driver 
 not supporting TLS is in charge of throwing the unsupported exception.
 The driver can do it once getting a listener with TERMINATED-HTTPS protocol.
 
 Evg
 
 
 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
 Sent: Wednesday, July 23, 2014 9:09 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work 
 division
 
 @Evgeny: Did you intend on adding another patchset in the reviews I've been 
 working on? If so I don't really see any changes, so if there are some 
 changes you needed in there let me know.
 
 @Doug: I think if the drivers see the TERMINATED_HTTPS protocol then they can 
 throw an exception.  I don't think a driver interface change is needed.
 
 Thanks,
 Brandon
 
 
 On Wed, 2014-07-23 at 17:02 +, Doug Wiegley wrote:
 Do we want any driver interface changes for this?  At one level, with 
 the current interface, conforming drivers could just reference 
 listener.sni_containers, with no changes.  But, do we want something 
 in place so that the API can return an unsupported error for non-TLS 
 v2 drivers?  Or must all v2 drivers support TLS?
 
 doug
 
 
 
 On 7/23/14, 10:54 AM, Evgeny Fedoruk evge...@radware.com wrote:
 
 My code is here:
 https://review.openstack.org/#/c/109035/1
 
 
 
 -Original Message-
 From: Evgeny Fedoruk
 Sent: Wednesday, July 23, 2014 6:54 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work 
 division
 
 Hi Carlos,
 
 As I understand you are working on common module for Barbican 
 interactions.
 I will commit my code later today and I would appreciate it if you and 
 anybody else who is interested would review this change.
 There is one specific spot for the common Barbican interactions 
 module API integration.
 After the IRC meeting tomorrow, we can discuss the work items and 
 decide who is interested/available to do them.
 Does it make sense?
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Wednesday, July 23, 2014 6:15 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work 
 division
 
   Do you have any idea as to how we can split up the work?
 
 On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
 
 Hi,
 
 I'm working on TLS integration with loadbalancer v2 extension and db.
 Basing on Brandon's  patches 
 https://review.openstack.org/#/c/105609 , 
 https://review.openstack.org/#/c/105331/  , 
 https://review.openstack.org/#/c/105610/
 I will abandon previous 2 patches for TLS which are 
 https://review.openstack.org/#/c/74031/ and 
 https://review.openstack.org/#/c/102837/
 Managing to submit my change later today. It will include lbaas 
 extension v2 modification, lbaas db v2 modifications, alembic 
 migration for schema changes and new tests in unit testing for lbaas db v2.
 
 Thanks,
 Evg
 
 -Original Message-
 From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
 Sent: Wednesday, July 23, 2014 3:54 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work 
 division
 
 Since it looks like the TLS blueprint was approved, I'm sure we're 
 all eager to start coding, so how should we divide up the work on the 
 source code?
 I have pull requests in pyopenssl
 (https://github.com/pyca/pyopenssl/pull/143) and a few one-liners 
 in pyca/cryptography to expose the needed low-level pieces, which I'm 
 hoping will be added pretty soon so that PR 143's tests can pass. In 
 case it doesn't happen, we will fall back to using pyasn1_modules, as 
 it already has a means to fetch what we want at a lower level.
 I'm just hoping that we can split the work up so that we can 
 collaborate together on this without over-serializing the work, where 
 people become dependent on waiting for someone else to complete 
 their work or, worse, one person ends up doing all the work.
 
 
  Carlos D. Garza 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list

[openstack-dev] [Neutron][LBaaS] new common module for Barbican TLS containers interaction

2014-07-24 Thread Evgeny Fedoruk
Hi,

Following our talk on TLS work items split,
We need to decide how we will validate/extract certificates from Barbican TLS 
containers.
As we agreed on IRC, the first priority should be certificates fetching.

The TLS RST describes a new common module that will be used by the LBaaS API 
and LBaaS drivers.
Its proposed front-end API is currently:
1. Ensuring Barbican TLS container existence (used by LBaaS API)
2. Validating Barbican TLS container (used by LBaaS API)
   This API will also register LBaaS as a container's consumer in Barbican's 
repository.
   POST request:
   http://admin-api/v1/containers/{container-uuid}/consumers
   {
type: LBaaS,
URL: https://lbaas.myurl.net/loadbalancers/lbaas_loadbalancer_id/
   }

3. Extracting SubjectCommonName and SubjectAltName information
from the certificates' X509 data (used by LBaaS front-end API)
   For now, only dNSName (and optionally directoryName) types will be
extracted from the SubjectAltName sequence.

4. Fetching certificate's data from Barbican TLS container
(used by provider/driver code)

5. Unregistering LBaaS as a consumer of the container when container is not
 used by any listener any more (used by LBaaS front-end API)

So this new module's front-end is used by the LBaaS API/drivers and its 
back-end faces the Barbican API.
Please give your feedback on the module API: should we merge 1 and 2?

I will be able to start working on the new module skeleton on Sunday morning. 
It will include API functions.
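
To make the discussion easier, here is a very rough sketch of what the
module's front-end could look like (function and parameter names below are
placeholders I made up for illustration, not a final proposal):

    # Sketch only: names and signatures are placeholders for discussion.

    class NonexistentTLSContainer(Exception):
        # Raised when the referenced Barbican container cannot be found.
        pass

    def ensure_container_exists(auth_context, container_id):
        # Item 1: raise NonexistentTLSContainer if Barbican has no such
        # container (used by the LBaaS front-end API).
        raise NotImplementedError

    def validate_container(auth_context, container_id, loadbalancer_id):
        # Item 2: check the container holds a usable certificate/key pair
        # and register LBaaS as a consumer of the container via
        # POST /v1/containers/{container-uuid}/consumers.
        raise NotImplementedError

    def get_certificate_hostnames(certificate_pem):
        # Item 3: return SubjectCommonName plus the dNSName (and optionally
        # directoryName) entries from the SubjectAltName extension.
        raise NotImplementedError

    def fetch_container_data(auth_context, container_id):
        # Item 4: return the certificate, private key and any intermediates
        # for provider/driver code.
        raise NotImplementedError

    def unregister_consumer(auth_context, container_id, loadbalancer_id):
        # Item 5: remove the LBaaS consumer registration once no listener
        # references the container any more.
        raise NotImplementedError

If we decide to merge items 1 and 2, ensure_container_exists would simply
fold into validate_container.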

TLS implementation patch has a spot where container validation should happen: 
https://review.openstack.org/#/c/109035/3/neutron/db/loadbalancer/loadbalancer_dbv2.py
 line 
518https://review.openstack.org/#/c/109035/3/neutron/db/loadbalancer/loadbalancer_dbv2.py%20line%20518
After submitting the module skeleton I can make the TLS implementation patch to 
depend on that module patch and use its API.

As an alternative, we might leave this job to the drivers, if the common 
module is not implemented.

What are your thoughts/suggestions/plans?

Thanks,
Evg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] new common module for Barbican TLS containers interaction

2014-07-24 Thread Carlos Garza
I'Just park a madule with a stub call that I can populate with 
pyasn1.
On Jul 24, 2014, at 10:38 AM, Evgeny Fedoruk evge...@radware.com
 wrote:

 Hi,
  
 Following our talk on TLS work items split,
 We need to decide how will we validate/extract certificates Barbican TLS 
 containers.
 As we agreed on IRC, the first priority should be certificates fetching.
  
 TLS RST describes a new common module that will be used by LBaaS API and 
 LBaaS drivers.
 It’s proposed front-end API is currently:
 1. Ensuring Barbican TLS container existence (used by LBaaS API)
 2. Validating Barbican TLS container (used by LBaaS API)
This API will also register LBaaS as a container's consumer in 
 Barbican's repository.
POST request:
http://admin-api/v1/containers/{container-uuid}/consumers
{
 type: LBaaS,
 URL: https://lbaas.myurl.net/loadbalancers/lbaas_loadbalancer_id/
}
  
 3. Extracting SubjectCommonName and SubjectAltName information
 from certificates’ X509 (used by LBaaS front-end API)
As for now, only dNSName (and optionally directoryName) types will be 
 extracted from
 SubjectAltName sequence,
  
 4. Fetching certificate’s data from Barbican TLS container
 (used by provider/driver code)
  
 5. Unregistering LBaaS as a consumer of the container when container is not
  used by any listener any more (used by LBaaS front-end API)
  
 So this new module’s front-end is used by LBaaS API/drivers and its back-end 
 is facing Barbican API.
 Please give your feedback on module API, should we merge 1 and 2?
  
 I will be able to start working on the new module skeleton on Sunday morning. 
 It will include API functions.
  
 TLS implementation patch has a spot where container validation should 
 happen:https://review.openstack.org/#/c/109035/3/neutron/db/loadbalancer/loadbalancer_dbv2.py
  line 518
 After submitting the module skeleton I can make the TLS implementation patch 
 to depend on that module patch and use its API.
  
 As an alternative we might leave this job to drivers, if common module will 
 be not implemented
  
 What are your thoughts/suggestions/plans?
  
 Thanks,
 Evg
  
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] new common module for Barbican TLS containers interaction

2014-07-24 Thread Carlos Garza
Sorry, I meant to say I'm pretty agreeable - just park a stub module so I can 
populate it.
On Jul 24, 2014, at 11:06 AM, Carlos Garza carlos.ga...@rackspace.com
 wrote:

 I'Just park a module with a stub call that I can populate with 
 pyasn1.
 On Jul 24, 2014, at 10:38 AM, Evgeny Fedoruk evge...@radware.com
 wrote:
 
 Hi,
 
 Following our talk on TLS work items split,
 We need to decide how will we validate/extract certificates Barbican TLS 
 containers.
 As we agreed on IRC, the first priority should be certificates fetching.
 
 TLS RST describes a new common module that will be used by LBaaS API and 
 LBaaS drivers.
 It’s proposed front-end API is currently:
 1. Ensuring Barbican TLS container existence (used by LBaaS API)
 2. Validating Barbican TLS container (used by LBaaS API)
   This API will also register LBaaS as a container's consumer in 
 Barbican's repository.
   POST request:
   http://admin-api/v1/containers/{container-uuid}/consumers
   {
type: LBaaS,
URL: https://lbaas.myurl.net/loadbalancers/lbaas_loadbalancer_id/
   }
 
 3. Extracting SubjectCommonName and SubjectAltName information
from certificates’ X509 (used by LBaaS front-end API)
   As for now, only dNSName (and optionally directoryName) types will be 
 extracted from
SubjectAltName sequence,
 
 4. Fetching certificate’s data from Barbican TLS container
(used by provider/driver code)
 
 5. Unregistering LBaaS as a consumer of the container when container is not
 used by any listener any more (used by LBaaS front-end API)
 
 So this new module’s front-end is used by LBaaS API/drivers and its back-end 
 is facing Barbican API.
 Please give your feedback on module API, should we merge 1 and 2?
 
 I will be able to start working on the new module skeleton on Sunday 
 morning. It will include API functions.
 
 TLS implementation patch has a spot where container validation should 
 happen:https://review.openstack.org/#/c/109035/3/neutron/db/loadbalancer/loadbalancer_dbv2.py
  line 518
 After submitting the module skeleton I can make the TLS implementation patch 
 to depend on that module patch and use its API.
 
 As an alternative we might leave this job to drivers, if common module will 
 be not implemented
 
 What are your thoughts/suggestions/plans?
 
 Thanks,
 Evg
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [not-only-neutron] How to Contribute upstream in OpenStack Neutron

2014-07-24 Thread Alan Kavanagh
Hi Kyle

Thanks for taking the time to write this note; I know this is not an easy 
discussion and not something that is a matter of waving hands or fingers. 
I believe what you have stated is well understood, though the main points I 
raised seem to be missing from the Neutron Policies wiki (I'm interested to see 
if other projects have addressed and documented this), such as (1) how to handle 
a contribution that gets punted, (2) how to handle BPs that are not 
progressing, and (3) how to handle the case where a given BP/patch/etc. gets no 
reviews at all. I feel this is more about governance than just 
about the procedures and processes.

Also, having more core reviewers from different companies would go a long 
way to helping ensure that the different views and expectations are 
addressed community-wide. I agree on the need to groom core reviewers; I guess 
what I miss here is the time it takes and how large the core team would grow - 
are there limits?

Kyle, you are doing an amazing job - I fully commend you on that and believe 
you are definitely going above and beyond here to help out, and it's most 
appreciated. It would be good to get these points ironed out, as they are 
lingering, and having them addressed will help us going forward.

BR
Alan

-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: July-24-14 11:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] [not-only-neutron] How to Contribute 
upstream in OpenStack Neutron

I've received a lot of emails lately, mostly private, from people who feel they 
are being left out of the Neutron process. I'm unsure if other projects have 
people who feel this way, thus the uniquely worded subject above. I wanted to 
broadly address these concerns with this email.

One thing I'd like to reiterate for people here, publicly on the list, is that 
there is no hidden agendas in Neutron, no political machines in the background 
working. As PTL, I've tried to be as transparent as possible. The honest 
reality is that if you want to have influence in Neutron or even in OpenStack 
in general, get involved upstream. Start committing small patches. Start 
looking at bugs. Participate in the weekly meetings. Build relationships 
upstream. Work across projects, not just Neutron. None of this is specific to 
Neutron or even OpenStack, but in fact is how you work in any upstream Open 
Source community.

I'd also like to address the add more core reviewers to solve all these 
problems angle. While adding more core reviewers is a good thing, we need to 
groom core reviewers and meld them into the team.
This takes time, and it's something we in Neutron actively work on.
The process we use is documented here [1].

I'd also like to point out that one of the things I've tried to do in Neutron 
as PTL during the Juno cycle is document as much of the process around working 
in Neutron. That is all documented on this wiki page here [2]. Feedback on this 
is most certainly welcome.

I'm willing to help work with anyone who wants to contribute more to Neutron. I 
spend about half of my time doing just this already, between reviews, emails, 
and IRC conversations. So please feel free to find me on IRC (mestery on 
Freenode), on the ML, or even just use this email address.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/NeutronCore
[2] https://wiki.openstack.org/wiki/NeutronPolicies

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Test Ceilometer polling in tempest

2014-07-24 Thread Matthew Treinish
On Wed, Jul 16, 2014 at 07:44:38PM +0400, Dina Belova wrote:
 Ildiko, thanks for starting this discussion.
 
 Really, that is quite painful problem for Ceilometer and QA team. As far as
 I know, currently there is some kind of tendency of making integration
 Tempest tests quicker and less resource consuming - that's quite logical
 IMHO. Polling as a way of information collecting from different services
 and projects is quite consuming speaking about load on Nova API, etc. -
 that's why I completely understand the wish of QA team to get rid of it,
 although polling still makes lots work inside Ceilometer, and that's why
 integration testing for this feature is really important for me as
 Ceilometer contributor - without pollsters testing we have no way to check
 its workability.
 
 That's why I'll be really glad if Ildiko's (or whatever other) solution
 that will allow polling testing in the gate will be found and accepted.
 
 The problem is that the solution described above requires some change in what
 we call environment preparation for the integration testing - and we
 really need the QA crew's help here. AFAIR, deprecating polling (in favour of
 notifications only) was suggested in some of the IRC discussions, but that's
 not a solution we can just use right now - we need a way to verify that
 Ceilometer's polling works right now in order to continue improving it.
 
 So any suggestions and comments are welcome here :)
 
 Thanks!
 Dina
 
 
 On Wed, Jul 16, 2014 at 7:06 PM, Ildikó Váncsa ildiko.van...@ericsson.com
 wrote:
 
   Hi Folks,
 
 
 
  We’ve faced with some problems during running Ceilometer integration tests
  on the gate. The main issue is that we cannot test the polling mechanism,
  as if we use a small polling interval, like 1 min, then it puts a high
  pressure on Nova API. If we use a longer interval, like 10 mins, then we
  will not be able to execute any tests successfully, because it would run
  too long.
 
 
 
  The idea, to solve this issue,  is to reconfigure Ceilometer, when the
  polling is tested. Which would mean to change the polling interval from the
  default 10 mins to 1 min at the beginning of the test, restart the service
  and when the test is finished, the polling interval should be changed back
  to 10 mins, which will require one more service restart. The downside of
  this idea is, that it needs service restart today. It is on the list of
  plans to support dynamic re-configuration of Ceilometer, which would mean
  the ability to change the polling interval without restarting the service.
 
 
 
  I know that this idea isn’t ideal from the PoV that the system
  configuration is changed during running the tests, but this is an expected
  scenario even in a production environment. We would change a parameter that
  can be changed by a user any time in a way as users do it too. Later on,
  when we can reconfigure the polling interval without restarting the
  service, this approach will be even simpler.

So you're saying that you expect users to manually reconfigure Ceilometer on
the fly to be able to use polling? That seems far from ideal.

 
 
 
  This idea would make it possible to test the polling mechanism of
  Ceilometer without any radical change in the ordering of test cases or any
  other things that would be strange in integration tests. We couldn’t find
  any better way to solve the issue of the load on the APIs caused by polling.
 
 
 
  What’s your opinion about this scenario? Do you think it could be a viable
  solution to the above described problem?
 
 
 

Umm, so frankly this approach seems kind of crazy to me, aside from the
project-level implications of telling users that they can't rely on polling
data unless they adjust Ceilometer's polling frequency. The bigger issue
is that you're not necessarily solving the problem. The test will still have an
inherent race condition because there is still no guarantee on the polling
happening during the test window.

So assume this were implemented and you decrease the polling interval to 1 min.
and restart ceilometer during the test setUp(). You're still dependent on an
internal ceilometer event occurring during the wait period of the test. There's
actually no guarantee that everything will happen in the timeout interval for
the test; you're just hoping it will. It's still just a best guess that will
probably work in the general case, but will cause race bugs in the gate
when things get slow for random reasons (which increasing the polling
frequency, even temporarily, will contribute to).
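
Just to make the race concrete, any such test basically ends up with a wait
loop like the sketch below (the client call is illustrative, not the real
tempest telemetry client API), and whatever timeout you pick is still only a
guess about when the next polling cycle will actually run:

    import time

    def wait_for_samples(telemetry_client, meter, resource_id,
                         timeout=600, interval=10):
        # keep asking Ceilometer until the pollster has run, or give up;
        # there is no event we can synchronize on
        deadline = time.time() + timeout
        while time.time() < deadline:
            samples = telemetry_client.list_samples(meter, resource_id)
            if samples:
                return samples
            time.sleep(interval)
        raise AssertionError('no %s samples for %s within %ss'
                             % (meter, resource_id, timeout))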

The other thing to consider is how this would be implemented: changing a config
and restarting a service is *way* outside the scope of tempest. I'd be -2 on
anything proposed to tempest that mucks around with a projects config file like
this or anything that restarts a service. If it were exposed through a rest API
command that'd be a different story, but then you'd still have the race issue I
described 

Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Russell Bryant
On 07/23/2014 05:39 PM, James E. Blair wrote:
 ==Final thoughts==
 
 The current rate of test failures and subsequent rechecks is not
 sustainable in the long term.  It's not good for contributors,
 reviewers, or the overall project quality.  While these bugs do need to
 be addressed, it's unlikely that the current process will cause that to
 happen.  Instead, we want to push more substantial testing into the
 projects themselves with functional and interface testing, and depend
 less on devstack-gate integration tests to catch all bugs.  This should
 help us catch bugs closer to the source and in an environment where
 debugging is easier.  We also want to reduce the scope of devstack gate
 tests to a gold standard while running tests of other configurations in
 a traditional CI process so that people interested in those
 configurations can focus on ensuring they work.

Very nice writeup.  I think these steps sound like a positive way forward.

Thanks!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] Automatic elastic rechecks

2014-07-24 Thread Jeremy Stanley
On 2014-07-18 15:09:34 +0100 (+0100), Daniel P. Berrange wrote:
[...]
 If there were multiple failures and only some were identified, it would
 be reasonable to *not* automatically recheck.
[...]

Another major blocker is that we often add signatures for failures
which occur 100% of the time, and while those tend to get fixed a
bit faster than 1% failures, automatic rechecks would mean that for
some period while we're investigating the gate would just be
spinning over and over running jobs which had no chance of passing.

I suppose it could be argued that elastic-recheck needs a
categorization mechanism so that it also won't recommend rechecking
for those sorts of scenarios (all discussion of automated rechecks
aside).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [not-only-neutron] How to Contribute upstream in OpenStack Neutron

2014-07-24 Thread Anita Kuno
On 07/24/2014 11:50 AM, Alan Kavanagh wrote:
 Hi Kyle
 
 Thanks for taking the time for writing this note also I know this is not an 
 easy discussion and not something being a matter of waving hands or 
 fingers. I believe what you have stated is well understood, though the main 
 points I raised seems to be missing from this Neutron Policies wiki 
 (interested to see if other projects have addressed and document this) such 
 as (1) How to address when a contribution gets punted, (2) how to address 
 BP's that are not progressing, (3)how to ensure that in the even a given 
 BP/patch/etc gets no reviews how to address these. I feel this is around the 
 Governance than just about the procedures and processes.
 
Hi Alan,

Let me add some thoughts here.

The listed items mostly boil down to communication.
Are those contributors with punted contributions in IRC channels? Do the
attend the program weekly meeting? Many punted contributions, be they
patches or blueprints, have the status they do because noone can find
who the owners of these contributions are to ask them to fill in the
gaps so reviewers even know what is being proposed.

Now if folks don't know what channels to use or how to use IRC, then that
is an issue that we need to address, but having people available so
reviewers can ask them about their offerings is a great first step, and
no, I personally don't think that this is a governance issue.

If you want to find out what items are governance issues, do attend the
TC weekly meeting:
https://wiki.openstack.org/wiki/Governance/TechnicalCommittee and be
sure to read the logs of past meetings:
http://eavesdrop.openstack.org/meetings/tc/

 Also, having more Core reviewers from different companies would also go a 
 long way to helping to ensure that the different views and expectations are 
 addressed community wide. I agree on the need to groom core reviewers, I 
 guess what I miss here is the time it takes and how large would the Core Team 
 grow, are their limits?
 
I agree that having diversity in core reviewers is very important. Core
reviewers are those reviewers that have put in the time to do the
reviews to demonstrate their commitment to the program. They also have
enough experience with the program to gain the trust of other core
reviewers. How long it takes is based on the individual reviewer. As for
how large the team can grow, it is based on how many people want to do
the work that it takes to gain that knowledge and experience.

In short, it is up to the potential core reviewer.

Thanks Alan,
Anita.

 Kyle you are doing an amazing job, full commend you on that and believe you 
 are definitely going beyond here to help out and its most appreciated. It 
 would be good to get these points ironed out as they are lingering and having 
 them addressed will help us going forward.
 
 BR
 Alan
 
 -Original Message-
 From: Kyle Mestery [mailto:mest...@mestery.com] 
 Sent: July-24-14 11:10 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [neutron] [not-only-neutron] How to Contribute 
 upstream in OpenStack Neutron
 
 I've received a lot of emails lately, mostly private, from people who feel 
 they are being left out of the Neutron process. I'm unsure if other projects 
 have people who feel this way, thus the uniquely worded subject above. I 
 wanted to broadly address these concerns with this email.
 
 One thing I'd like to reiterate for people here, publicly on the list, is 
 that there is no hidden agendas in Neutron, no political machines in the 
 background working. As PTL, I've tried to be as transparent as possible. The 
 honest reality is that if you want to have influence in Neutron or even in 
 OpenStack in general, get involved upstream. Start committing small patches. 
 Start looking at bugs. Participate in the weekly meetings. Build 
 relationships upstream. Work across projects, not just Neutron. None of this 
 is specific to Neutron or even OpenStack, but in fact is how you work in any 
 upstream Open Source community.
 
 I'd also like to address the add more core reviewers to solve all these 
 problems angle. While adding more core reviewers is a good thing, we need to 
 groom core reviewers and meld them into the team.
 This takes time, and it's something we in Neutron actively work on.
 The process we use is documented here [1].
 
 I'd also like to point out that one of the things I've tried to do in Neutron 
 as PTL during the Juno cycle is document as much of the process around 
 working in Neutron. That is all documented on this wiki page here [2]. 
 Feedback on this is most certainly welcome.
 
 I'm willing to help work with anyone who wants to contribute more to Neutron. 
 I spend about half of my time doing just this already, between reviews, 
 emails, and IRC conversations. So please feel free to find me on IRC (mestery 
 on Freenode), on the ML, or even just use this email address.
 
 Thanks,
 Kyle
 
 [1] 

Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-24 Thread Stefano Maffulli
On 07/23/2014 06:22 AM, Kyle Mestery wrote:
 Thanks for sending this out Salvatore. We are way oversubscribed,
 and at this point, I'm in agreement on not letting any new
 exceptions which do not fall under the above guidelines. Given how
 much is already packed in there, this makes the most sense.

The increasing time to merge patches and the increasing backlog were topics
that came up during the Board meeting on Tuesday. Signals seem to
point at not enough core reviewers in many projects as one of the causes
of these issues.  I have signed up to analyze this more carefully so
that the board can come up with suggestions/requirements for member
organizations. Stay tuned for more.

For the short term, though, a careful analysis and exercise in
prioritization together with extra efforts for reviews from the parties
involved would be great.

On 07/24/2014 07:05 AM, CARVER, PAUL wrote:
 I don't really follow Linux kernel development, but a quick search 
 turned up [1] which seems to indicate at least one additional level 
 between

It's hard to draw parallels across such different projects. I would
consider Andrew and Linus our release managers (stable vs current) and
subsystem maintainers the equivalent of our PTLs, the driver maintainers
as our 'core reviewers'. I don't think there are more layers on the
kernel. BTW, I heard that in April OpenStack may have surpassed the
kernel in terms of monthly commits, so we're comparable in size.

 Speaking only for myself and not ATT, I'm disappointed that my 
 employer doesn't have more developers actively writing code. We ought
 to (in my personal opinion) be supplying core reviewers to at least a
 couple of OpenStack projects.

Yes, I would expect any company the size of ATT to be providing at least 1
developer upstream for every 10 developers downstream. I'll be looking at some
numbers to check if there is a general behavior around this, and maybe come
up with recommendations. Stay tuned.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Spec freeze exception] Controlled shutdown of GuestOS

2014-07-24 Thread Andrew Laski


From: Day, Phil [philip@hp.com]
Sent: Thursday, July 24, 2014 9:20 AM
To: OpenStack Development Mailing List (not for usage questions); Daniel P. 
Berrange
Subject: Re: [openstack-dev] [Nova][Spec freeze exception] Controlled shutdown 
of GuestOS

According to https://etherpad.openstack.org/p/nova-juno-spec-priorities, 
alaski has also signed up for this if I drop the point of contention - which 
I've done.


Yes, I will sponsor this one as well.  This is more a bug fix than a feature 
IMO and would be really nice to get into Juno.



 -Original Message-
 From: Michael Still [mailto:mi...@stillhq.com]
 Sent: 24 July 2014 00:50
 To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [Nova][Spec freeze exception] Controlled
 shutdown of GuestOS

 Another core sponsor would be nice on this one. Any takers?

 Michael

 On Thu, Jul 24, 2014 at 4:14 AM, Daniel P. Berrange berra...@redhat.com
 wrote:
  On Wed, Jul 23, 2014 at 06:08:52PM +, Day, Phil wrote:
  Hi Folks,
 
  I'd like to propose the following as an exception to the spec freeze, on 
  the
 basis that it addresses a potential data corruption issue in the Guest.
 
  https://review.openstack.org/#/c/89650
 
  We were pretty close to getting acceptance on this before, apart from a
 debate over whether one additional config value could be allowed to be set
 via image metadata - so I've given in for now on wanting that feature from a
 deployer perspective, and said that we'll hard code it as requested.
 
  Initial parts of the implementation are here:
  https://review.openstack.org/#/c/68942/
  https://review.openstack.org/#/c/99916/
 
  Per my comments already, I think this is important for Juno and will
  sponsor it.
 
  Regards,
  Daniel
  --
  |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
 :|
  |: http://libvirt.org  -o- http://virt-manager.org 
  :|
  |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ 
  :|
  |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc 
  :|
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:

 ==Future changes==

 ===Fixing Faster===
 
 We introduce bugs to OpenStack at some constant rate, which piles up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people time, and takes getting all the right
 people together at once to promote changes. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.
 
 [3] https://etherpad.openstack.org/p/gatetriage-june2014
 
 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found in the queues it will be given higher priority, including bubbling
 up to the top of the gate queue automatically. The manual promote
 process should no longer be needed, and instead bugs fixing elastic
 recheck tracked issues will be promoted automatically.
 
 At the same time we'll also promote review on critical gate bugs through
 making them visible in a number of different channels (like on elastic
 recheck pages, review day, and in the gerrit dashboards). The idea here
 again is to make the reviews that fix key bugs pop to the top of
 everyone's views.

In some of the harder gate bugs I've looked at (especially the infamous
'live snapshot' timeout bug), it has been damn hard to actually figure
out what's wrong. AFAIK, no one has ever been able to reproduce it
outside of the gate infrastructure. I've even gone as far as setting up
identical Ubuntu VMs to the ones used in the gate on a local cloud, and
running the tempest tests multiple times, but still can't reproduce what
happens on the gate machines themselves :-( As such we're relying on
code inspection and the collected log messages to try and figure out
what might be wrong.

The gate collects a lot of info and publishes it, but in this case I
have found the published logs to be insufficient - I needed to get
the more verbose libvirtd.log file. devstack has the ability to turn
this on via an environment variable, but it is disabled by default
because it would add 3% to the total size of logs collected per gate
job.

There's no way for me to get that environment variable for devstack
turned on for a specific review I want to test with. In the end I
uploaded a change to nova which abused rootwrap to elevate privileges,
install extra deb packages, reconfigure libvirtd logging and restart
the libvirtd daemon.

  https://review.openstack.org/#/c/103066/11/etc/nova/rootwrap.d/compute.filters
  https://review.openstack.org/#/c/103066/11/nova/virt/libvirt/driver.py
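
For reference, the libvirtd logging reconfiguration itself comes down to a
couple of lines in /etc/libvirt/libvirtd.conf followed by a daemon restart;
the filter list below is only an example, not what the gate jobs would need:

  # /etc/libvirt/libvirtd.conf -- example verbose logging setup
  log_filters="1:libvirt 1:qemu 1:util"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"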

This let me get further, but still not resolve it. My next attack is
to build a custom QEMU binary and hack nova further so that it can
download my custom QEMU binary from a website onto the gate machine
and run the test with it. Failing that I'm going to be hacking things
to try to attach to QEMU in the gate with GDB and get stack traces.
Anything is doable thanks to rootwrap giving us a way to elevate
privileges from Nova, but it is a somewhat tedious approach.

I'd like us to think about whether there is anything we can do to make
life easier in these kinds of hard debugging scenarios where the regular
logs are not sufficient.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Question about thread safe of key-pair and securtiy rules quota

2014-07-24 Thread Kevin L. Mitchell
On Thu, 2014-07-24 at 14:04 +0800, Chen CH Ji wrote:
 According to bug [1], there are some possibilities that concurrent
 operations on keypair/security rules can exceed quota
 Found that we have 3 kinds of resources in quotas.py:
  ReservableResource/AbsoluteResource/CountableResource
 
 curious about CountableResource because it's can't be thread safe due
 to its logic:
 
 count = QUOTAS.count(context, 'security_group_rules', id)
 try:
 projected = count + len(vals)
 QUOTAS.limit_check(context,
 security_group_rules=projected)
 
 was it designed by purpose to be different to ReservableResource? If
 set it to ReservableResource just like RAM/CPU, what kind of side
 effect it might lead to ?
 
 Also, is it possible to consider a solution like 'hold a write lock in
 db layer, check the count of resource and raise exception if it exceed
 quota'? 

First, some background: the difference between a ReservableResource and
a CountableResource comes down to the method used to count the in-use
resources.  For ReservableResource, the count is based on the number of
database objects matching the project ID, whereas with
CountableResource, a counting function has to be specified; otherwise,
the classes are identical and designed to be used identically—indeed,
CountableResource extends ReservableResource and only overrides the
constructor.  The only two CountableResource quota objects declared in
Nova are security_group_rules, where the counting function counts the
rules per group; and key_pairs, where the counting function counts the
key pairs per user.

Now, the code you paste in your email is the wrong code to use with a
ReservableResource or CountableResource; indeed, the limit_check()
docstring indicates that it's for those quotas for which there is no
usage synchronization function…meaning an AbsoluteResource.  For
ReservableResource or CountableResource, the code should be using the
reserve() method.  The reserve() method performs its checks safely, and
creates a reservation to prevent cross-process allocation that would
result in quota exceeded.  The downside is that reserve() must create a
reservation that must later be committed or rolled back, which is
probably why that code is using the limit_check() inappropriately.
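
For illustration, the reserve()-based pattern looks roughly like this (a
sketch only; the exact deltas and call sites depend on the resource being
created). reserve() raises OverQuota when the projected usage would exceed
the limit, so the caller gets the same failure mode without the race:

    reservations = QUOTAS.reserve(context,
                                  security_group_rules=len(vals))
    try:
        # ... actually create the security group rules ...
        QUOTAS.commit(context, reservations)
    except Exception:
        with excutils.save_and_reraise_exception():
            QUOTAS.rollback(context, reservations)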

It may be worthwhile to add some sanity-checking to limit_check() and
reserve() that ensure that they only work with the correct resource
type(s), in order to prevent this sort of problem from occurring in the
future.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gate] Automatic elastic rechecks

2014-07-24 Thread Daniel P. Berrange
On Thu, Jul 24, 2014 at 04:31:05PM +, Jeremy Stanley wrote:
 On 2014-07-18 15:09:34 +0100 (+0100), Daniel P. Berrange wrote:
 [...]
  If there were multiple failures and only some were identified, it would
  be reasonable to *not* automatically recheck.
 [...]
 
 Another major blocker is that we often add signatures for failures
 which occur 100% of the time, and while those tend to get fixed a
 bit faster than 1% failures, automatic rechecks would mean that for
 some period while we're investigating the gate would just be
 spinning over and over running jobs which had no chance of passing.
 
 I suppose it could be argued that elastic-recheck needs a
 categorization mechanism so that it also won't recommend rechecking
 for those sorts of scenarios (all discussion of automated rechecks
 aside).

Yep, if there's a bug which is known to hit 90%+ of the time due
to some known problem, launchpad could be tagged with NoRecheck
and e-r taught to avoid re-queuing such failures.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Anyone using owner_is_tenant = False with image members?

2014-07-24 Thread Scott Devoid
So it turns out that fixing this issue is not very simple. There are stubbed-out
openstack.common.policy checks in the glance-api
code, which are pretty much useless because they do not use the image as a
target. [1] Then there's a chain of API / client calls where it's unclear
who is responsible for validating ownership: python-glanceclient ->
glance-api -> glance-registry-client -> glance-registry-api ->
glance.db.sqlalchemy.api. Add to that the fact that request IDs are not
consistently captured along the logging path [2], and it's a holy mess.

I am wondering...
1. Has anyone actually set owner_is_tenant to false? Has this ever been
tested?
2. From glance developers, what kind of permissions / policy scenarios do
you actually expect to work?

Right now we have one user who consistently gets an empty 404 back from
nova image-list because glance-api barfs on a single image and gives up
on the entire API request...and there are no non-INFO/DEBUG messages in
glance logs for this. :-/

~ Scott

[1] https://bugs.launchpad.net/glance/+bug/1346648
[2] https://bugs.launchpad.net/glance/+bug/1336958
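
For reference, the owner/member check described in the quoted message below
has roughly the following shape (it leans on the existing image_member_find
helper; treat the details as a sketch of the idea, not the exact diff):

    def is_image_visible(context, image, status=None):
        # rough shape of the check with owner_is_tenant = False
        if context.is_admin:
            return True
        if image['owner'] is None:
            return True
        if image['owner'] in (context.owner, context.tenant):
            return True
        # fall back to image membership (members may be users or tenants)
        members = image_member_find(context, image_id=image['id'],
                                    status=status)
        return any(m['member'] in (context.user, context.tenant)
                   for m in members)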

On Fri, Jul 11, 2014 at 12:26 PM, Scott Devoid dev...@anl.gov wrote:

 Hi Alexander,

 I read through the artifact spec. Based on my reading it does not fix this
 issue at all. [1] Furthermore, I do not understand why the glance
 developers are focused on adding features like artifacts or signed images
 when there are significant usability problems with glance as it currently
 stands. This is echoing Sean Dague's comment that bugs are filed against
 glance but never addressed.

 [1] See the **Sharing Artifact** section, which indicates that sharing may
 only be done between projects and that the tenant owns the image.


 On Thu, Jul 3, 2014 at 4:55 AM, Alexander Tivelkov ativel...@mirantis.com
  wrote:

 Thanks Scott, that is a nice topic

 In theory, I would prefer to have both owner_tenant and owner_user to be
 persisted with an image, and to have a policy rule which allows to specify
 if the users of a tenant have access to images owned by or shared with
 other users of their tenant. But this will require too much changes to the
 current object model, and I am not sure if we need to introduce such
 changes now.

 However, this is the approach I would like to use in Artifacts. At least
 the current version of the spec assumes that both these fields to be
 maintained ([0])

 [0]
 https://review.openstack.org/#/c/100968/4/specs/juno/artifact-repository.rst

 --
 Regards,
 Alexander Tivelkov


 On Thu, Jul 3, 2014 at 3:44 AM, Scott Devoid dev...@anl.gov wrote:

  Hi folks,

 Background:

 Among all services, I think glance is unique in only having a single
 'owner' field for each image. Most other services include a 'user_id' and a
 'tenant_id' for things that are scoped this way. Glance provides a way to
 change this behavior by setting owner_is_tenant to false, which implies
 that owner is user_id. This works great: new images are owned by the user
 that created them.

 Why do we want this?

 We would like to make sure that the only person who can delete an image
 (besides admins) is the person who uploaded said image. This achieves that
 goal nicely. Images are private to the user, who may share them with other
 users using the image-member API.

 However, one problem is that we'd like to allow users to share with
 entire projects / tenants. Additionally, we have a number of images (~400)
 migrated over from a different OpenStack deployment, that are owned by the
 tenant and we would like to make sure that users in that tenant can see
 those images.

 Solution?

 I've implemented a small patch to the is_image_visible API call [1]
 which checks the image.owner and image.members against context.owner and
 context.tenant. This appears to work well, at least in my testing.

 I am wondering if this is something folks would like to see integrated?
 Also for glance developers, if there is a cleaner way to go about solving
 this problem? [2]

 ~ Scott

 [1]
 https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/api.py#L209
 [2] https://review.openstack.org/104377

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

2014-07-24 Thread Collins, Sean
On Wed, Jul 23, 2014 at 11:19:13AM EDT, Luke Gorrie wrote:
 Tail-f NCS: I want to keep this feature well maintained and compliant with
 all the rules. I am the person who wrote this driver originally, I have
 been the responsible person for 90% of its lifetime, I am the person who
 setup the current CI, and I am the one responsible for smooth operation of
 that CI. I am reviewing its results with my morning coffee and have been
 doing so for the past 6 weeks. I would like to have it start voting and I
 believe that it and I are ready for that. I am responsive to email, I am
 usually on IRC (lukego), and in case of emergency you can SMS/call my
 mobile on +41 79 244 32 17.
 
 So... Let's be friends again? (and do ever cooler stuff in Kilo?)



Luke was kind enough to reach out to me, and we had a discussion in
order to bury the hatchet. Posting his contact details and being
available to discuss things has put my mind at ease, I am ready to move
forward.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

2014-07-24 Thread Kyle Mestery
On Thu, Jul 24, 2014 at 12:03 PM, Collins, Sean
sean_colli...@cable.comcast.com wrote:
 On Wed, Jul 23, 2014 at 11:19:13AM EDT, Luke Gorrie wrote:
 Tail-f NCS: I want to keep this feature well maintained and compliant with
 all the rules. I am the person who wrote this driver originally, I have
 been the responsible person for 90% of its lifetime, I am the person who
 setup the current CI, and I am the one responsible for smooth operation of
 that CI. I am reviewing its results with my morning coffee and have been
 doing so for the past 6 weeks. I would like to have it start voting and I
 believe that it and I are ready for that. I am responsive to email, I am
 usually on IRC (lukego), and in case of emergency you can SMS/call my
 mobile on +41 79 244 32 17.

 So... Let's be friends again? (and do ever cooler stuff in Kilo?)



 Luke was kind enough to reach out to me, and we had a discussion in
 order to bury the hatchet. Posting his contact details and being
 available to discuss things has put my mind at ease, I am ready to move
 forward.

+1

He also reached out to me, so I'm also happy to add this back and move
forward with burying the hatchet. I'm all for second chances in
general, and Luke's gone out of his way to work with people upstream
in a much more efficient and effective manner.

Thanks,
Kyle

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Amrith Kumar
Speaking as a ‘database guy’ and a ‘Trove guy’, I’ll say this: “Metadata” is a 
very generic term, and the meaning of “metadata” in a database context is very 
different from the meaning of “metadata” in the context that Glance is 
providing. 

 

Furthermore the usage and access pattern for this metadata, the frequency of 
change, and above all the frequency of access are fundamentally different 
between Trove and what Glance appears to be offering, and we should probably 
not get too caught up in the project “title”.

 

We would not be “reinventing the wheel” if we implemented an independent 
metadata scheme for Trove; we would be implementing the right kind of wheel for 
the vehicle that we are operating. Therefore I do not agree with your 
characterization that concludes that:

 

 given goals at [1] are out of scope of Database program, etc

 

Just to be clear, when you write:

 

 Unfortunately, we’re(Trove devs) are on half way to metadata …

 

it is vital to understand that our view of “metadata” is very different from, 
for example, a file system’s view of metadata, or potentially Glance’s view of 
metadata. For that reason, I believe that your comments on 
https://review.openstack.org/#/c/82123/16 are also somewhat extreme.

 

Before postulating a solution (or “delegating development to Glance devs”), it 
would be more useful to fully describe the problem being solved by Glance and 
the problem(s) we are looking to solve in Trove, and then we could have a 
meaningful discussion about the right solution. 

 

I submit to you that we will come away concluding that there is a round peg, 
and a square hole. Yes, one will fit in the other but the final product will 
leave neither party particularly happy with the end result.

 

-amrith

 

From: Denis Makogon [mailto:dmako...@mirantis.com] 
Sent: Thursday, July 24, 2014 9:33 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Glance][Trove] Metadata Catalog

 

Hello, Stackers.


 I’d like to discuss the future of Trove metadata API. But first small 
history info (mostly taken for Trove medata spec, see [1]):

Instance metadata is a feature that has been requested frequently by our users. 
They need a way to store critical information for their instances and have that 
be associated with the instance so that it is displayed whenever that instance 
is listed via the API. This also becomes very usable from a testing perspective 
when doing integration/ci. We can utilize the metadata to store things like 
what process created the instance, what the instance is being used for, etc... 
The design for this feature is modeled heavily on the Nova metadata API with a 
few tweaks in how it works internally.

And here comes conflict. Glance devs are working on “Glance Metadata 
Catalog” feature (see [2]). And as for me, we don’t have to “reinvent the 
wheel” for Trove. It seems that we would be able 

to use Glance API to interact with   Metadata Catalog. And it seems to be 
redundant to write our own API for metadata CRUD operations.



From Trove perspective, we need to define a list concrete use cases for 
metadata usage (eg given goals at [1] are out of scope of Database program, 
etc.). 

From development and cross-project integration perspective, we need to 
delegate all development to Glance devs. But we still able to help Glance devs 
with this feature by taking active part in polishing proposed spec (see [2]).



Unfortunately, we’re(Trove devs) are on half way to metadata - patch for 
python-troveclient already merged. So, we need to consider 
deprecation/reverting of merged and block 

merging of proposed ( see [3]) patchsets in favor of Glance Metadata Catalog.



Thoughts?

[1]  https://wiki.openstack.org/wiki/Trove-Instance-Metadata 
https://wiki.openstack.org/wiki/Trove-Instance-Metadata

[2]  https://review.openstack.org/#/c/98554/11 
https://review.openstack.org/#/c/98554/11

[3]  https://review.openstack.org/#/c/82123/ 
https://review.openstack.org/#/c/82123/

 

Best regards,

Denis Makogon



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Neutron] Request voting for Tail-f CI account

2014-07-24 Thread Anita Kuno
On 07/24/2014 01:18 PM, Kyle Mestery wrote:
 On Thu, Jul 24, 2014 at 12:03 PM, Collins, Sean
 sean_colli...@cable.comcast.com wrote:
 On Wed, Jul 23, 2014 at 11:19:13AM EDT, Luke Gorrie wrote:
 Tail-f NCS: I want to keep this feature well maintained and compliant with
 all the rules. I am the person who wrote this driver originally, I have
 been the responsible person for 90% of its lifetime, I am the person who
 setup the current CI, and I am the one responsible for smooth operation of
 that CI. I am reviewing its results with my morning coffee and have been
 doing so for the past 6 weeks. I would like to have it start voting and I
 believe that it and I are ready for that. I am responsive to email, I am
 usually on IRC (lukego), and in case of emergency you can SMS/call my
 mobile on +41 79 244 32 17.

 So... Let's be friends again? (and do ever cooler stuff in Kilo?)



 Luke was kind enough to reach out to me, and we had a discussion in
 order to bury the hatchet. Posting his contact details and being
 available to discuss things has put my mind at ease, I am ready to move
 forward.

 +1
 
 He also reached out to me, so I'm also happy to add this back and move
 forward with burying the hatchet. I'm all for second chances in
 general, and Luke's gone out of his way to work with people upstream
 in a much more efficient and effective manner.
 
 Thanks,
 Kyle
 
Well done, Luke. It takes a lot of work to dig oneself out of a hole and
create good relationships where there need to be some. It is a tough job
and not everyone chooses to do it.

You chose to and you succeeded. I commend your work.

I'm glad we have a good resolution in this space.

Thanks to all involved for their persistence and hard work. Well done,
Anita.

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load name known
 options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

Then I want to automatically load the group1. key1 and group2.
 key2, without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in the
 parsed configuration for the resulting options?


I can imagine something like this:
1. iterate over undefined groups in config;
2. select groups of interest (e.g. by prefix or some regular expression);
3. register options in them;
4. use those options.

Registered group can be passed to a plugin/library that would register its
options in it.

So the only thing that oslo.config lacks in its interface here is some way
to allow the first step. The rest can be overcome with some sugar.
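
A rough sketch of how that could look today, doing the discovery step with a
plain ConfigParser since oslo.config doesn't expose undefined groups yet (the
file path, group prefix and option names below are made up for the example):

    from six.moves import configparser

    from oslo.config import cfg

    CONF = cfg.CONF

    # steps 1-2: discover the interesting sections ourselves
    parser = configparser.ConfigParser()
    parser.read('/etc/myservice/myservice.conf')
    groups = [s for s in parser.sections() if s.startswith('dynamic_')]

    # step 3: register the same option set in every discovered group
    opts = [cfg.StrOpt('key1'), cfg.StrOpt('key2')]
    for group in groups:
        CONF.register_opts(opts, group=group)

    # step 4: use the options as usual
    CONF(['--config-file', '/etc/myservice/myservice.conf'])
    for group in groups:
        print(CONF[group].key1)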

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-24 Thread Julio Carlos Barrera Juez
Hi again.

With the previous days' code we don't get any errors in our logs, but we
don't see any of our logs in q-svc or q-vpn either. When we execute any
Neutron VPN command like neutron vpn-ikepolicy-list we receive:

404 Not Found

The resource could not be found.


 And in q-svc logs we see:

2014-07-24 19:50:37.587 DEBUG routes.middleware
[req-8efb06d9-36fb-44e4-ab94-2221daadd2a5 demo
4af34184cec14e70a15dee0508f16e7e] No route matched for GET
/vpn/ikepolicies.json from (pid=4998) __call__
/usr/lib/python2.7/dist-packages/routes/middleware.py:97
2014-07-24 19:50:37.588 DEBUG routes.middleware
[req-8efb06d9-36fb-44e4-ab94-2221daadd2a5 demo
4af34184cec14e70a15dee0508f16e7e] No route matched for GET
/vpn/ikepolicies.json from (pid=4998) __call__
/usr/lib/python2.7/dist-packages/routes/middleware.py:97

Why are the logs in our plugin not printed?
Why is /usr/lib/python2.7/dist-packages/routes/middleware.py not finding
our service driver?

Thanks.


http://dana.i2cat.net   http://www.i2cat.net/en
Julio C. Barrera Juez  [image: View my profile on LinkedIn]
http://es.linkedin.com/in/jcbarrera/en
Office phone: (+34) 93 357 99 27 (ext. 527)
Office mobile phone: (+34) 625 66 77 26
Distributed Applications and Networks Area (DANA)
i2CAT Foundation, Barcelona


On 18 July 2014 12:56, Paul Michali (pcm) p...@cisco.com wrote:

 No docs, it’s an internal API between service and device driver (so you
 can implement it however you desire. You can look at the reference and
 Cisco ones for examples (they are currently both the same, although the
 Cisco one will likely change in the future).  You’ll need to define a
 “topic” for the RPC between the two drivers that is unique to your
 implementation. Again, look at the existing ones and look for “topic”
 variable to see what strings they map to.

 From service driver to device driver, there is only one API,
 vpnservice_updated(), and in the other direction there are
 two, get_vpn_services_on_host() and udpate_status().

 Regards,


 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



 On Jul 18, 2014, at 2:30 AM, Julio Carlos Barrera Juez 
 juliocarlos.barr...@i2cat.net wrote:

 Is there any documentation about these RPC messages? Or de we need to use
 examples as guide?

 Once again, thank you Paul.

  http://dana.i2cat.net/   http://www.i2cat.net/en
 Julio C. Barrera Juez  [image: View my profile on LinkedIn]
 http://es.linkedin.com/in/jcbarrera/en
 Office phone: (+34) 93 357 99 27 (ext. 527)
 Office mobile phone: (+34) 625 66 77 26
 Distributed Applications and Networks Area (DANA)
 i2CAT Foundation, Barcelona


 On 17 July 2014 20:37, Paul Michali (pcm) p...@cisco.com wrote:

 So you have your driver loading… great!

 The service driver will log in screen-q-*svc*.log, provided you have the
 service driver called out in neutron.conf (as the only one for VPN).

 Later, you’ll need the supporting RPC classes to send messages from
 service driver to device driver…


 Regards,


 PCM (Paul Michali)

 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



 On Jul 17, 2014, at 2:18 PM, Julio Carlos Barrera Juez 
 juliocarlos.barr...@i2cat.net wrote:

 We have followed your advices:

 - We created our fake device driver located in the same level as other
 device drivers
 (/opt/stack/neutron/neutron/services/vpn//device_drivers/fake_device_driver.py):

 import abc
 import six

 from neutron.openstack.common import log
 from neutron.services.vpn import device_drivers


 LOG = log.getLogger(__name__)

 @six.add_metaclass(abc.ABCMeta)
 class FakeDeviceDriver(device_drivers.DeviceDriver):
 '''
 classdocs
 '''

 def __init__(self, agent, host):
 pass

 def sync(self, context, processes):
 pass

 def create_router(self, process_id):
 pass

 def destroy_router(self, process_id):
 pass


 - Our service driver located in
 /opt/stack/neutron/neutron/services/vpn/service_drivers/fake_service_driver.py:

 from neutron.openstack.common import log

 LOG = log.getLogger(__name__)

 class FakeServiceDriver():
 '''
 classdocs
 '''

 def get_vpnservices(self, context, filters=None, fields=None):
 LOG.info('XX Calling method: ' + __name__)
 pass

 def get_vpnservice(self, context, vpnservice_id, fields=None):
 LOG.info('XX Calling method: ' + __name__)
 pass

 def create_vpnservice(self, context, vpnservice):
 LOG.info('XX Calling method: ' + __name__)
 pass

 def update_vpnservice(self, context, vpnservice_id, vpnservice):
 LOG.info('XX Calling method: ' + __name__)
 pass

 def delete_vpnservice(self, context, vpnservice_id):
 

Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Doug Hellmann

On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:

 
 
 
 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:
 
 Hi, all
  The current oslo.cfg module provides an easy way to load name known 
 options/groups from he configuration files.
   I am wondering if there's a possible solution to dynamically load them?
 
   For example, I do not know the group names (section name in the 
 configuration file), but we read the configuration file and detect the 
 definitions inside it.
 
 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2
 
Then I want to automatically load the group1. key1 and group2. key2, 
 without knowing the name of group1 first.
 
 If you don’t know the group name, how would you know where to look in the 
 parsed configuration for the resulting options?
 
 I can imagine something like this:
 1. iterate over undefined groups in config;
 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.
 
 Registered group can be passed to a plugin/library that would register its 
 options in it.

If the options are related to the plugin, could the plugin just register them 
before it tries to use them?

I guess it’s not clear what problem you’re actually trying to solve by 
proposing this change to the way the config files are parsed. That doesn’t mean 
your idea is wrong, just that I can’t evaluate it or point out another 
solution. So what is it that you’re trying to do that has led to this 
suggestion?

Doug

 
 So the only thing that oslo.config lacks in its interface here is some way to 
 allow the first step. The rest can be overcomed with some sugar.
 
 -- 
 
 Kind regards, Yuriy.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal device driver for VPN

2014-07-24 Thread Paul Michali (pcm)
Check /etc/neutron/neutron.conf and see if your service driver is correctly 
specified for VPN. You can also check the q-svc and q-vpn logs at the beginning 
to see if the service and device drivers were actually loaded by the plugin and 
agent, respectively. You can also check vpn_agent.ini in the same area to see if your 
device driver is called out.
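
For example, for a driver like the FakeServiceDriver from earlier in this
thread, the neutron.conf entry would look something like this (the provider
name and class path here are illustrative):

  [service_providers]
  service_provider = VPN:fake:neutron.services.vpn.service_drivers.fake_service_driver.FakeServiceDriver:default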

Regards,

PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Jul 24, 2014, at 2:11 PM, Julio Carlos Barrera Juez 
juliocarlos.barr...@i2cat.net wrote:

 Hi again.
 
 With previous days code, we don't experience any error in our logs, but we 
 don't see any logs in q-svc nor q-vpn. When we execute any Neutron VPN 
 command like neutron vpn-ikepolicy-list we receive:
 
 404 Not Found
 
 The resource could not be found.
 
  And in q-svc logs we see:
 
 2014-07-24 19:50:37.587 DEBUG routes.middleware 
 [req-8efb06d9-36fb-44e4-ab94-2221daadd2a5 demo 
 4af34184cec14e70a15dee0508f16e7e] No route matched for GET 
 /vpn/ikepolicies.json from (pid=4998) __call__ 
 /usr/lib/python2.7/dist-packages/routes/middleware.py:97
 2014-07-24 19:50:37.588 DEBUG routes.middleware 
 [req-8efb06d9-36fb-44e4-ab94-2221daadd2a5 demo 
 4af34184cec14e70a15dee0508f16e7e] No route matched for GET 
 /vpn/ikepolicies.json from (pid=4998) __call__ 
 /usr/lib/python2.7/dist-packages/routes/middleware.py:97
 
 Why logs in our plugin are not printed? Why 
 /usr/lib/python2.7/dist-packages/routes/middleware.py is not finding our 
 service driver?
 
 Thanks.
 
 
   
 Julio C. Barrera Juez  
 Office phone: (+34) 93 357 99 27 (ext. 527)
 Office mobile phone: (+34) 625 66 77 26
 Distributed Applications and Networks Area (DANA)
 i2CAT Foundation, Barcelona
 
 
 On 18 July 2014 12:56, Paul Michali (pcm) p...@cisco.com wrote:
 No docs, it’s an internal API between service and device driver (so you can 
 implement it however you desire. You can look at the reference and Cisco ones 
 for examples (they are currently both the same, although the Cisco one will 
 likely change in the future).  You’ll need to define a “topic” for the RPC 
 between the two drivers that is unique to your implementation. Again, look at 
 the existing ones and look for “topic” variable to see what strings they map 
 to.
 
 From service driver to device driver, there is only one API, 
 vpnservice_updated(), and in the other direction there are two, 
 get_vpn_services_on_host() and udpate_status().
 
 Regards,
 
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 On Jul 18, 2014, at 2:30 AM, Julio Carlos Barrera Juez 
 juliocarlos.barr...@i2cat.net wrote:
 
 Is there any documentation about these RPC messages? Or de we need to use 
 examples as guide?
 
 Once again, thank you Paul.
 
   
 Julio C. Barrera Juez  
 Office phone: (+34) 93 357 99 27 (ext. 527)
 Office mobile phone: (+34) 625 66 77 26
 Distributed Applications and Networks Area (DANA)
 i2CAT Foundation, Barcelona
 
 
 On 17 July 2014 20:37, Paul Michali (pcm) p...@cisco.com wrote:
 So you have your driver loading… great!
 
 The service driver will log in screen-q-svc.log, provided you have the 
 service driver called out in neutron.conf (as the only one for VPN).
 
 Later, you’ll need the supporting RPC classes to send messages from service 
 driver to device driver…
 
 
 Regards,
 
 
 PCM (Paul Michali)
 
 MAIL …..…. p...@cisco.com
 IRC ……..… pcm_ (irc.freenode.com)
 TW ………... @pmichali
 GPG Key … 4525ECC253E31A83
 Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
 
 
 
 On Jul 17, 2014, at 2:18 PM, Julio Carlos Barrera Juez 
 juliocarlos.barr...@i2cat.net wrote:
 
 We have followed your advices:
 
 - We created our fake device driver located in the same level as other 
 device drivers 
 (/opt/stack/neutron/neutron/services/vpn//device_drivers/fake_device_driver.py):
 
 import abc
 import six
 
 from neutron.openstack.common import log
 from neutron.services.vpn import device_drivers
 
 
 LOG = log.getLogger(__name__)
 
 @six.add_metaclass(abc.ABCMeta)
 class FakeDeviceDriver(device_drivers.DeviceDriver):
 '''
 classdocs
 '''
 
 def __init__(self, agent, host):
 pass
 
 def sync(self, context, processes):
 pass
 
 def create_router(self, process_id):
 pass
 
 def destroy_router(self, process_id):
 pass
 
 - Our service driver located in 
 /opt/stack/neutron/neutron/services/vpn/service_drivers/fake_service_driver.py:
 
 from neutron.openstack.common import log
 
 LOG = log.getLogger(__name__)
  
 class FakeServiceDriver():
 '''
 classdocs
 '''
  
 def get_vpnservices(self, context, filters=None, fields=None):
 LOG.info('XX Calling method: ' + __name__)
   

Re: [openstack-dev] [nova]resize

2014-07-24 Thread Vishvananda Ishaya
The resize code as written originally did the simplest possible thing. It
converts and copies the whole file so that it doesn’t have to figure out how
to sync backing files etc. This could definitely be improved, especially now 
that
there is code in _create_images_and_backing that can ensure that backing files 
are
downloaded/created if they are not there.

Additionally the resize code should be using something other than ssh/rsync. I’m
a fan of using glance to store the file during transfer, but others have 
suggested
using the live migrate code or libvirt to transfer the disks.

Vish

On Jul 24, 2014, at 2:26 AM, fdsafdsafd jaze...@163.com wrote:

 
 No.
 Before L5156, we convert it from qcow2 to qcow2, which strips the backing
 file. I think here we should write it like this:
 
 if info['type'] == 'qcow2' and info['backing_file']:
     if shared_storage:
         utils.execute('cp', from_path, img_path)
     else:
         tmp_path = from_path + '_rbase'
         # merge backing file
         utils.execute('qemu-img', 'convert', '-f', 'qcow2',
                       '-O', 'qcow2', from_path, tmp_path)
         libvirt_utils.copy_image(tmp_path, img_path, host=dest)
         utils.execute('rm', '-f', tmp_path)
 else:  # raw or qcow2 with no backing file
     libvirt_utils.copy_image(from_path, img_path, host=dest)
 
 
 
 At 2014-07-24 05:02:39, Tian, Shuangtai shuangtai.t...@intel.com wrote:
 
 Don't we already do it like that?
 
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5156
 
  
 
 From: fdsafdsafd [mailto:jaze...@163.com]
 
 
 Sent: Thursday, July 24, 2014 4:30 PM
 
 To: openstack-dev@lists.openstack.org
 
 Subject: [openstack-dev] [nova]resize
 
  
 
 
 
 
 
 In resize, we convert the disk and drop the backing file. Should we check
 whether we are on shared storage? If we are on shared storage, for example
 nfs, then we can use the image in _base as the backing file, and the resize
 will be faster.
 
 
 
 
  
 
 
 
 
 The processing in line 5132
 
 
 
 
 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py
 
 
 
 
  
 
 
 
 
  
 
 
 
 
 Thanks
 
 
 
  
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] vhost-scsi support in Nova

2014-07-24 Thread Vishvananda Ishaya

On Jul 24, 2014, at 3:06 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Jul 23, 2014 at 10:32:44PM -0700, Nicholas A. Bellinger wrote:
 *) vhost-scsi doesn't support migration
 
 Since its initial merge in QEMU v1.5, vhost-scsi has a migration blocker
 set.  This is primarily due to requiring some external orchestration in
 order to setup the necessary vhost-scsi endpoints on the migration
 destination to match what's running on the migration source.
 
 Here are a couple of points that Stefan detailed some time ago about what's
 involved for properly supporting live migration with vhost-scsi:
 
 (1) vhost-scsi needs to tell QEMU when it dirties memory pages, either by
 DMAing to guest memory buffers or by modifying the virtio vring (which also
 lives in guest memory).  This should be straightforward since the
 infrastructure is already present in vhost (it's called the log) and used
 by drivers/vhost/net.c.
 
 (2) The harder part is seamless target handover to the destination host.
 vhost-scsi needs to serialize any SCSI target state from the source machine
 and load it on the destination machine.  We could be in the middle of
 emulating a SCSI command.
 
 An obvious solution is to only support active-passive or active-active HA
 setups where tcm already knows how to fail over.  This typically requires
 shared storage and maybe some communication for the clustering mechanism.
 There are more sophisticated approaches, so this straightforward one is just
 an example.
 
 That said, we do intend to support live migration for vhost-scsi using
 iSCSI/iSER/FC shared storage.
 
 *) vhost-scsi doesn't support qcow2
 
 Given all other cinder drivers do not use QEMU qcow2 to access storage
 blocks, with the exception of the Netapp and Gluster driver, this argument
 is not particularly relevant here.
 
 However, this doesn't mean that vhost-scsi (and target-core itself) cannot
 support qcow2 images.  There is currently an effort to add a userspace
 backend driver for the upstream target (tcm_core_user [3]), that will allow
 for supporting various disk formats in userspace.
 
 The important part for vhost-scsi is that regardless of what type of target
 backend driver is put behind the fabric LUNs (raw block devices using
 IBLOCK, qcow2 images using target_core_user, etc) the changes required in
 Nova and libvirt to support vhost-scsi remain the same.  They do not change
 based on the backend driver.
 
 *) vhost-scsi is not intended for production
 
 vhost-scsi has been included in the upstream kernel since the v3.6 release, and
 included in QEMU since v1.5.  vhost-scsi runs unmodified out of the box on a
 number of popular distributions including Fedora, Ubuntu, and OpenSuse.  It
 also works as a QEMU boot device with Seabios, and even with the Windows
 virtio-scsi mini-port driver.
 
 There is at least one vendor who has already posted libvirt patches to
 support vhost-scsi, so vhost-scsi is already being pushed beyond a debugging
 and development tool.
 
 For instance, here are a few specific use cases where vhost-scsi is
 currently the only option for virtio-scsi guests:
 
  - Low (sub 100 usec) latencies for AIO reads/writes with small iodepth
workloads
  - 1M+ small block IOPs workloads at low CPU utilization with large
 iodepth workloads.
  - End-to-end data integrity using T10 protection information (DIF)
 
 IIUC, there is also missing support for block jobs like drive-mirror
 which is needed by Nova.
 
 From a functionality POV migration & drive-mirror support are the two
 core roadblocks to including vhost-scsi in Nova (as well as libvirt
 support for it of course). Realistically it doesn't sound like these
 are likely to be solved soon enough to give us confidence in taking
 this for the Juno release cycle.


As I understand this work, vhost-scsi provides massive perf improvements
over virtio, which makes it seem like a very valuable addition. I’m ok
with telling customers that it means that migration and snapshotting are
not supported as long as the feature is protected by a flavor type or
image metadata (i.e. not on by default). I know plenty of customers that
would gladly trade some of the friendly management features for better
i/o performance.

Therefore I think it is acceptable to take it with some documentation that
it is experimental. Maybe I’m unique but I deal with people pushing for
better performance all the time.

Vish

 
 Regards,
 Daniel
 -- 
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] Mentor program?

2014-07-24 Thread Joshua Harlow
Awesome,

When I start to see emails on the ML that say "anyone need any help for XYZ?" ... 
(which is great btw) it makes me feel like there should be a more appropriate 
avenue for those inspired folks looking to get involved (a ML isn't really 
the best place for this kind of guidance and direction). 

And in general mentoring will help all involved if we all do more of it :-)

Let me know if anything is needed that I can possibly help with to get more of 
it going.

-Josh

On Jul 23, 2014, at 2:44 PM, Jay Bryant jsbry...@electronicjungle.net wrote:

 Great question Josh!
 
 Have been doing a lot of mentoring within IBM for OpenStack and have now been 
 asked to formalize some of that work.  Not surprised there is an external 
 need as well.
 
 Anne and Stefano, let me know if there is anything I can do to help.
 
 Jay
 
 Hi all,
 
 I was reading over an IMHO insightful hacker news thread last night:
 
 https://news.ycombinator.com/item?id=8068547
 
 Labeled/titled: 'I made a patch for Mozilla, and you can do it too'
 
 It made me wonder what kind of mentoring support we as a community are 
 offering to newbies (a random google search for 'openstack mentoring' shows 
 mentors for GSoC, mentors for interns, outreach for women... but no mention 
 of mentors as a way for everyone to get involved)?
 
 Looking at the comments in that hacker news thread and the article itself, it 
 seems like mentoring is stressed over and over as the way to get involved.
 
 Have there been ongoing efforts to establish such a program (I know there is 
 training work that has been worked on, but that's not exactly the same).
 
 Thoughts, comments...?
 
 -Josh
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Joshua Harlow
A potentially brilliant idea ;-)

Aren't all the machines the gate runs tests on VMs running via OpenStack APIs?

OpenStack supports snapshotting (last time I checked). So instead of providing 
back a whole bunch of log files, provide back a snapshot of the machine/s that 
ran the tests; let the person who wants to download that snapshot download it (and 
then they can boot it up into virtualbox, qemu, their own OpenStack cloud...) 
and investigate all the log files they desire. 

Are we really being so conservative on space that we couldn't do this? I find 
it hard to believe that space is a concern for anything anymore (if it really 
matters store the snapshots in ceph, or glusterfs, swift, or something else... 
which should dedup the blocks). This is pretty common with how people use 
snapshots and what they back them with anyway so it would be nice if infra 
exposed the same thing...

Would something like that be possible? I'm not so familiar with all the inner 
workings of the infra project; but if it eventually boots VMs using an 
OpenStack cloud, it would seem reasonable that it could provide the same 
mechanisms we are all already used to using...

Thoughts?

On Jul 24, 2014, at 9:40 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
 
 ==Future changes==
 
 ===Fixing Faster===
 
 We introduce bugs to OpenStack at some constant rate, which piles up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people time, and takes getting all the right
 people together at once to promote changes. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.
 
 [3] https://etherpad.openstack.org/p/gatetriage-june2014
 
 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found in the queues it will be given higher priority, including bubbling
 up to the top of the gate queue automatically. The manual promote
 process should no longer be needed, and instead bugs fixing elastic
 recheck tracked issues will be promoted automatically.
 
 At the same time we'll also promote review on critical gate bugs through
 making them visible in a number of different channels (like on elastic
 recheck pages, review day, and in the gerrit dashboards). The idea here
 again is to make the reviews that fix key bugs pop to the top of
 everyone's views.
 
 In some of the harder gate bugs I've looked at (especially the infamous
 'live snapshot' timeout bug), it has been damn hard to actually figure
 out what's wrong. AFAIK, no one has ever been able to reproduce it
 outside of the gate infrastructure. I've even gone as far as setting up
 identical Ubuntu VMs to the ones used in the gate on a local cloud, and
 running the tempest tests multiple times, but still can't reproduce what
 happens on the gate machines themselves :-( As such we're relying on
 code inspection and the collected log messages to try and figure out
 what might be wrong.
 
 The gate collects a lot of info and publishes it, but in this case I
 have found the published logs to be insufficient - I needed to get
 the more verbose libvirtd.log file. devstack has the ability to turn
 this on via an environment variable, but it is disabled by default
 because it would add 3% to the total size of logs collected per gate
 job.
 
 There's no way for me to get that environment variable for devstack
 turned on for a specific review I want to test with. In the end I
 uploaded a change to nova which abused rootwrap to elevate privileges,
 install extra deb packages, reconfigure libvirtd logging and restart
 the libvirtd daemon.
 
  
 https://review.openstack.org/#/c/103066/11/etc/nova/rootwrap.d/compute.filters
  https://review.openstack.org/#/c/103066/11/nova/virt/libvirt/driver.py
 
 This let me get further, but still not resolve it. My next attack is
 to build a custom QEMU binary and hack nova further so that it can
 download my custom QEMU binary from a website onto the gate machine
 and run the test with it. Failing that I'm going to be hacking things
 to try to attach to QEMU in the gate with GDB and get stack traces.
 Anything is doable thanks to rootwrap giving us a way to elevate
 privileges from Nova, but it is a somewhat tedious approach.
 
 I'd like us to think about whether there is anything we can do to make
 life easier in these kind of hard debugging scenarios where the regular
 logs are not sufficient.
 
 Regards,
 Daniel
 -- 
 |: http://berrange.com  -o-

Re: [openstack-dev] [heat]Heat Db Model updates

2014-07-24 Thread Zane Bitter

On 17/07/14 07:51, Ryan Brown wrote:

On 07/17/2014 03:33 AM, Steven Hardy wrote:

On Thu, Jul 17, 2014 at 12:31:05AM -0400, Zane Bitter wrote:

On 16/07/14 23:48, Manickam, Kanagaraj wrote:

SNIP
*Resource*

Status  action should be enum of predefined status


+1


Rsrc_metadata - make full name resource_metadata


-0. I don't see any benefit here.


Agreed



I'd actually be in favor of the change from rsrc -> resource, I feel like
rsrc is a pretty opaque abbreviation.


I'd just like to remind everyone that these changes are not free. 
Database migrations are a pain to manage, and every new one slows down 
our unit tests.


We now support multiple heat-engines connected to the same database and 
people want to upgrade their installations, so that means we have to be 
able to handle different versions talking to the same database. Unless 
somebody has a bright idea I haven't thought of, I assume that means 
carrying code to handle both versions for 6 months before actually being 
able to implement the migration. Or are we saying that you have to 
completely shut down all instances of Heat to do an upgrade?


The name of the nova_instance column is so egregiously misleading that 
it's probably worth the pain. Using an enumeration for the states will 
save a lot of space in the database (though it would be a much more 
obvious win if we were querying on those columns). Changing a random 
prefix that was added to avoid a namespace conflict to a slightly 
different random prefix is well below the cost-benefit line IMO.
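
For illustration, the enum columns might look roughly like this at the model 
level (a sketch only -- the action/status lists below are abbreviated 
examples, not the full set we would actually use):

from sqlalchemy import Column, Enum, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

# Example state lists only -- trimmed for brevity.
RESOURCE_ACTIONS = ('INIT', 'CREATE', 'DELETE', 'UPDATE', 'ROLLBACK',
                    'SUSPEND', 'RESUME', 'ADOPT')
RESOURCE_STATUSES = ('IN_PROGRESS', 'COMPLETE', 'FAILED')


class Resource(Base):
    __tablename__ = 'resource'

    id = Column(Integer, primary_key=True)
    # Enum columns instead of free-form strings: smaller rows, and the
    # database rejects unknown states outright.
    action = Column(Enum(*RESOURCE_ACTIONS, name='resource_action'))
    status = Column(Enum(*RESOURCE_STATUSES, name='resource_status'))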


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load known
 options/groups from the configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

   Then I want to automatically load group1.key1 and group1.key2,
 without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in the
 parsed configuration for the resulting options?


 I can imagine something like this:
 1. iterate over undefined groups in config;

 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.

 Registered group can be passed to a plugin/library that would register its
 options in it.


 If the options are related to the plugin, could the plugin just register
 them before it tries to use them?


Plugin would have to register its options under a fixed group. But what if
we want a number of plugin instances?
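
As a rough sketch of what I mean (illustrative only: the 'plugin_' prefix,
option names and file path are made up, and section discovery here just uses
plain ConfigParser rather than anything oslo.cfg exposes today):

from oslo.config import cfg
from six.moves import configparser

CONF = cfg.CONF

# The same options, registered once per discovered group.
PLUGIN_OPTS = [
    cfg.StrOpt('key1'),
    cfg.StrOpt('key2'),
]


def register_plugin_groups(config_file, prefix='plugin_'):
    parser = configparser.ConfigParser()
    parser.read([config_file])
    groups = [s for s in parser.sections() if s.startswith(prefix)]
    for group in groups:
        CONF.register_opts(PLUGIN_OPTS, group=group)
    return groups


# After CONF(...) has parsed its files:
# for group in register_plugin_groups('/etc/myservice/myservice.conf'):
#     print(getattr(CONF, group).key1)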



 I guess it’s not clear what problem you’re actually trying to solve by
 proposing this change to the way the config files are parsed. That doesn’t
 mean your idea is wrong, just that I can’t evaluate it or point out another
 solution. So what is it that you’re trying to do that has led to this
 suggestion?


I don't exactly know what the original author's intention is but I don't
generally like the fact that all libraries and plugins wanting to use
config have to influence the global CONF instance.

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Anita Kuno
On 07/24/2014 12:40 PM, Daniel P. Berrange wrote:
 On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
 
 ==Future changes==
 
 ===Fixing Faster===

 We introduce bugs to OpenStack at some constant rate, which piles up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people time, and takes getting all the right
 people together at once to promote changes. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.

 [3] https://etherpad.openstack.org/p/gatetriage-june2014

 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found in the queues it will be given higher priority, including bubbling
 up to the top of the gate queue automatically. The manual promote
 process should no longer be needed, and instead bugs fixing elastic
 recheck tracked issues will be promoted automatically.

 At the same time we'll also promote review on critical gate bugs through
 making them visible in a number of different channels (like on elastic
 recheck pages, review day, and in the gerrit dashboards). The idea here
 again is to make the reviews that fix key bugs pop to the top of
 everyone's views.
 
 In some of the harder gate bugs I've looked at (especially the infamous
 'live snapshot' timeout bug), it has been damn hard to actually figure
 out what's wrong. AFAIK, no one has ever been able to reproduce it
 outside of the gate infrastructure. I've even gone as far as setting up
 identical Ubuntu VMs to the ones used in the gate on a local cloud, and
 running the tempest tests multiple times, but still can't reproduce what
 happens on the gate machines themselves :-( As such we're relying on
 code inspection and the collected log messages to try and figure out
 what might be wrong.
 
 The gate collects a lot of info and publishes it, but in this case I
 have found the published logs to be insufficient - I needed to get
 the more verbose libvirtd.log file. devstack has the ability to turn
 this on via an environment variable, but it is disabled by default
 because it would add 3% to the total size of logs collected per gate
 job.
 
 There's no way for me to get that environment variable for devstack
 turned on for a specific review I want to test with. In the end I
 uploaded a change to nova which abused rootwrap to elevate privileges,
 install extra deb packages, reconfigure libvirtd logging and restart
 the libvirtd daemon.
 
   
 https://review.openstack.org/#/c/103066/11/etc/nova/rootwrap.d/compute.filters
   https://review.openstack.org/#/c/103066/11/nova/virt/libvirt/driver.py
 
 This let me get further, but still not resolve it. My next attack is
 to build a custom QEMU binary and hack nova further so that it can
 download my custom QEMU binary from a website onto the gate machine
 and run the test with it. Failing that I'm going to be hacking things
 to try to attach to QEMU in the gate with GDB and get stack traces.
 Anything is doable thanks to rootwrap giving us a way to elevate
 privileges from Nova, but it is a somewhat tedious approach.
 
 I'd like us to think about whether there is anything we can do to make
 life easier in these kind of hard debugging scenarios where the regular
 logs are not sufficient.
 
 Regards,
 Daniel
 
For really really difficult bugs that can't be reproduced outside the
gate, we do have the ability to hold vms if we know they are
displaying the bug, if they are caught before the vm in question is
scheduled for deletion. In this case, make your intentions known in a
discussion with a member of infra-root. A conversation will ensue
involving what to do to get you what you need to continue debugging.

It doesn't work in all cases, but some have found it helpful. Keep in
mind you will be asked to demonstrate you have tried all other avenues
before this one is exercised.

Thanks,
Anita.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mentor program?

2014-07-24 Thread Joshua Harlow
Sorry about repeatedly blasting this out. I blame mail.app or outlook.com...

Hopefully the glitch has been fixed...

-Josh

On Jul 23, 2014, at 3:42 PM, Joshua Harlow harlo...@outlook.com wrote:

 Awesome,
 
 When I start to see emails on the ML that say "anyone need any help for XYZ?" ... 
 (which is great btw) it makes me feel like there should be a more appropriate 
 avenue for those inspired folks looking to get involved (a ML isn't 
 really the best place for this kind of guidance and direction). 
 
 And in general mentoring will help all involved if we all do more of it :-)
 
 Let me know if anything is needed that I can possibly help with to get more 
 of it going.
 
 -Josh
 
 On Jul 23, 2014, at 2:44 PM, Jay Bryant jsbry...@electronicjungle.net wrote:
 
 Great question Josh!
 
 Have been doing a lot of mentoring within IBM for OpenStack and have now 
 been asked to formalize some of that work.  Not surprised there is an 
 external need as well.
 
  Anne and Stefano, let me know if there is anything I can do to help.
 
 Jay
 
 Hi all,
 
  I was reading over an IMHO insightful hacker news thread last night:
 
 https://news.ycombinator.com/item?id=8068547
 
 Labeled/titled: 'I made a patch for Mozilla, and you can do it too'
 
  It made me wonder what kind of mentoring support we as a community are 
 offering to newbies (a random google search for 'openstack mentoring' shows 
 mentors for GSoC, mentors for interns, outreach for women... but no mention 
 of mentors as a way for everyone to get involved)?
 
  Looking at the comments in that hacker news thread and the article itself, it 
 seems like mentoring is stressed over and over as the way to get involved.
 
  Have there been ongoing efforts to establish such a program (I know there is 
 training work that has been worked on, but that's not exactly the same).
 
 Thoughts, comments...?
 
 -Josh
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Joshua Harlow

On Jul 24, 2014, at 12:08 PM, Anita Kuno ante...@anteaya.info wrote:

 On 07/24/2014 12:40 PM, Daniel P. Berrange wrote:
 On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
 
 ==Future changes==
 
 ===Fixing Faster===
 
 We introduce bugs to OpenStack at some constant rate, which piles up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people time, and takes getting all the right
 people together at once to promote changes. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.
 
 [3] https://etherpad.openstack.org/p/gatetriage-june2014
 
 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found in the queues it will be given higher priority, including bubbling
 up to the top of the gate queue automatically. The manual promote
 process should no longer be needed, and instead bugs fixing elastic
 recheck tracked issues will be promoted automatically.
 
 At the same time we'll also promote review on critical gate bugs through
 making them visible in a number of different channels (like on elastic
 recheck pages, review day, and in the gerrit dashboards). The idea here
 again is to make the reviews that fix key bugs pop to the top of
 everyone's views.
 
 In some of the harder gate bugs I've looked at (especially the infamous
 'live snapshot' timeout bug), it has been damn hard to actually figure
 out what's wrong. AFAIK, no one has ever been able to reproduce it
 outside of the gate infrastructure. I've even gone as far as setting up
 identical Ubuntu VMs to the ones used in the gate on a local cloud, and
 running the tempest tests multiple times, but still can't reproduce what
 happens on the gate machines themselves :-( As such we're relying on
 code inspection and the collected log messages to try and figure out
 what might be wrong.
 
  The gate collects a lot of info and publishes it, but in this case I
 have found the published logs to be insufficient - I needed to get
 the more verbose libvirtd.log file. devstack has the ability to turn
 this on via an environment variable, but it is disabled by default
 because it would add 3% to the total size of logs collected per gate
 job.
 
 There's no way for me to get that environment variable for devstack
 turned on for a specific review I want to test with. In the end I
 uploaded a change to nova which abused rootwrap to elevate privileges,
 install extra deb packages, reconfigure libvirtd logging and restart
 the libvirtd daemon.
 
  
 https://review.openstack.org/#/c/103066/11/etc/nova/rootwrap.d/compute.filters
  https://review.openstack.org/#/c/103066/11/nova/virt/libvirt/driver.py
 
 This let me get further, but still not resolve it. My next attack is
 to build a custom QEMU binary and hack nova further so that it can
 download my custom QEMU binary from a website onto the gate machine
 and run the test with it. Failing that I'm going to be hacking things
 to try to attach to QEMU in the gate with GDB and get stack traces.
 Anything is doable thanks to rootwrap giving us a way to elevate
 privileges from Nova, but it is a somewhat tedious approach.
 
  I'd like us to think about whether there is anything we can do to make
 life easier in these kind of hard debugging scenarios where the regular
 logs are not sufficient.
 
 Regards,
 Daniel
 
 For really really difficult bugs that can't be reproduced outside the
  gate, we do have the ability to hold vms if we know they are
 displaying the bug, if they are caught before the vm in question is
 scheduled for deletion. In this case, make your intentions known in a
 discussion with a member of infra-root. A conversation will ensue
 involving what to do to get you what you need to continue debugging.
 

Why? Is space really that expensive? It boggles my mind a little that we have a 
well-financed foundation (afaik, correct me if I am wrong...) and yet can't 
save 'all' the things in a smart manner (saving all the VM snapshots doesn't 
mean saving hundreds/thousands of gigabytes when you are using de-duping 
cinder/glance... backends). Expire those VMs after a week if that helps, but it 
feels like we shouldn't be so conservative about developers' need to have 
access to all the VMs that the gate used/created...; it's not like developers 
are trying to 'harm' openstack by investigating root issues that raw access to 
the VM images can provide (in fact it's quite the contrary).

 It doesn't work in all cases, but some have found it helpful. Keep in
 mind you 

Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Sean Dague
On 07/24/2014 02:51 PM, Joshua Harlow wrote:
 A potentially brilliant idea ;-)
 
 Aren't all the machines the gate runs tests on VMs running via OpenStack APIs?
 
 OpenStack supports snapshotting (last time I checked). So instead of 
 providing back a whole bunch of log files, provide back a snapshot of the 
 machine/s that ran the tests; let the person who wants to download that snapshot 
 download it (and then they can boot it up into virtualbox, qemu, their own 
 OpenStack cloud...) and investigate all the log files they desire. 
 
 Are we really being so conservative on space that we couldn't do this? I find 
 it hard to believe that space is a concern for anything anymore (if it really 
 matters store the snapshots in ceph, or glusterfs, swift, or something 
 else... which should dedup the blocks). This is pretty common with how people 
 use snapshots and what they back them with anyway so it would be nice if 
 infra exposed the same thing...
 
 Would something like that be possible? I'm not so familiar with all the inner 
 workings of the infra project; but if it eventually boots VMs using an 
 OpenStack cloud, it would seem reasonable that it could provide the same 
 mechanisms we are all already used to using...
 
 Thoughts?

There are actual space concerns. Especially when we're talking about 20k
runs / week. At which point snapshots are probably in the neighborhood
of 10G, so we're talking about 200 TB / week of storage. Plus there are
actual technical details of the fact that glance end points are really
quite beta in the clouds we use. Remember our tests runs aren't pets,
they are cattle, we need to figure out the right distillation of data
and move on, as there isn't enough space or time to keep everything around.

Also portability of system images is... limited between hypervisors.

If this is something you'd like to see if you could figure out the hard
parts of, I invite you to dive in on the infra side. It's very easy to
say it's easy. :) Actually coming up with a workable solution requires a
ton more time and energy.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Sean Dague
On 07/24/2014 12:40 PM, Daniel P. Berrange wrote:
 On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
 
 ==Future changes==
 
 ===Fixing Faster===

 We introduce bugs to OpenStack at some constant rate, which piles up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people time, and takes getting all the right
 people together at once to promote changes. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.

 [3] https://etherpad.openstack.org/p/gatetriage-june2014

 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found in the queues it will be given higher priority, including bubbling
 up to the top of the gate queue automatically. The manual promote
 process should no longer be needed, and instead bugs fixing elastic
 recheck tracked issues will be promoted automatically.

 At the same time we'll also promote review on critical gate bugs through
 making them visible in a number of different channels (like on elastic
 recheck pages, review day, and in the gerrit dashboards). The idea here
 again is to make the reviews that fix key bugs pop to the top of
 everyone's views.
 
 In some of the harder gate bugs I've looked at (especially the infamous
 'live snapshot' timeout bug), it has been damn hard to actually figure
 out what's wrong. AFAIK, no one has ever been able to reproduce it
 outside of the gate infrastructure. I've even gone as far as setting up
 identical Ubuntu VMs to the ones used in the gate on a local cloud, and
 running the tempest tests multiple times, but still can't reproduce what
 happens on the gate machines themselves :-( As such we're relying on
 code inspection and the collected log messages to try and figure out
 what might be wrong.
 
 The gate collects a lot of info and publishes it, but in this case I
 have found the published logs to be insufficient - I needed to get
 the more verbose libvirtd.log file. devstack has the ability to turn
 this on via an environment variable, but it is disabled by default
 because it would add 3% to the total size of logs collected per gate
 job.

Right now we're at 95% full on 14 TB (which is the max # of volumes you
can attach to a single system in RAX), so every gig is sacred. There has
been a big push, which included the sprint last week in Darmstadt, to
get log data into swift, at which point our available storage goes way up.

So for right now, we're a little squashed. Hopefully within a month
we'll have the full solution.

As soon as we get those kinks out, I'd say we're in a position to flip
on that logging in devstack by default.

 There's no way for me to get that environment variable for devstack
 turned on for a specific review I want to test with. In the end I
 uploaded a change to nova which abused rootwrap to elevate privileges,
 install extra deb packages, reconfigure libvirtd logging and restart
 the libvirtd daemon.
 
   
 https://review.openstack.org/#/c/103066/11/etc/nova/rootwrap.d/compute.filters
   https://review.openstack.org/#/c/103066/11/nova/virt/libvirt/driver.py
 
 This let me get further, but still not resolve it. My next attack is
 to build a custom QEMU binary and hack nova further so that it can
 download my custom QEMU binary from a website onto the gate machine
 and run the test with it. Failing that I'm going to be hacking things
 to try to attach to QEMU in the gate with GDB and get stack traces.
 Anything is doable thanks to rootwrap giving us a way to elevate
 privileges from Nova, but it is a somewhat tedious approach.
 
 I'd like us to think about whether there is anything we can do to make
 life easier in these kind of hard debugging scenarios where the regular
 logs are not sufficient.

Agreed. Honestly, though we do also need to figure out first fail
detection on our logs as well. Because realistically if we can't debug
failures from those, then I really don't understand how we're ever going
to expect large users to.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Craig Vyvial
Denis,

The scope of the metadata api goes beyond just using the glance metadata.
The metadata can be used for instances and other objects to add extra
data like tags or something else that a UI might want to use. We need
this feature either way.

-Craig


On Thu, Jul 24, 2014 at 12:17 PM, Amrith Kumar amr...@tesora.com wrote:

 Speaking as a ‘database guy’ and a ‘Trove guy’, I’ll say this; “Metadata”
 is a very generic term and the meaning of “metadata” in a database context
 is very different from the meaning of “metadata” in the context that Glance
 is providing.



 Furthermore the usage and access pattern for this metadata, the frequency
 of change, and above all the frequency of access are fundamentally
 different between Trove and what Glance appears to be offering, and we
 should probably not get too caught up in the project “title”.



 We would not be “reinventing the wheel” if we implemented an independent
 metadata scheme for Trove; we would be implementing the right kind of wheel
 for the vehicle that we are operating. Therefore I do not agree with your
 characterization that concludes that:



  given goals at [1] are out of scope of Database program, etc



 Just to be clear, when you write:



  Unfortunately, we’re(Trove devs) are on half way to metadata …



 it is vital to understand that our view of “metadata” is very different
 from (for example, a file system’s view of metadata, or potentially
 Glance’s view of metadata). For that reason, I believe that your comments
 on https://review.openstack.org/#/c/82123/16 are also somewhat extreme.



 Before postulating a solution (or “delegating development to Glance
 devs”), it would be more useful to fully describe the problem being solved
 by Glance and the problem(s) we are looking to solve in Trove, and then we
 could have a meaningful discussion about the right solution.



 I submit to you that we will come away concluding that there is a round
 peg, and a square hole. Yes, one will fit in the other but the final
 product will leave neither party particularly happy with the end result.



 -amrith



 *From:* Denis Makogon [mailto:dmako...@mirantis.com]
 *Sent:* Thursday, July 24, 2014 9:33 AM
 *To:* OpenStack Development Mailing List
 *Subject:* [openstack-dev] [Glance][Trove] Metadata Catalog



 Hello, Stackers.


  I’d like to discuss the future of Trove metadata API. But first small
 history info (mostly taken for Trove medata spec, see [1]):

 *Instance metadata is a feature that has been requested frequently by our
 users. They need a way to store critical information for their instances
 and have that be associated with the instance so that it is displayed
 whenever that instance is listed via the API. This also becomes very usable
 from a testing perspective when doing integration/ci. We can utilize the
 metadata to store things like what process created the instance, what the
 instance is being used for, etc... The design for this feature is modeled
 heavily on the Nova metadata API with a few tweaks in how it works
 internally.*

 And here comes the conflict. Glance devs are working on the “Glance Metadata
 Catalog” feature (see [2]). And as for me, we don’t have to “reinvent
 the wheel” for Trove. It seems that we would be able

 to use the Glance API to interact with the Metadata Catalog. And it seems
 redundant to write our own API for metadata CRUD operations.



 From the Trove perspective, we need to define a list of concrete use cases
 for metadata usage (e.g. given goals at [1] are out of scope of the Database
 program, etc.).

 From the development and cross-project integration perspective, we need to
 delegate all development to Glance devs. But we are still able to help Glance
 devs with this feature by taking an active part in polishing the proposed spec
 (see [2]).



 Unfortunately, we (Trove devs) are halfway to metadata - the patch
 for python-troveclient is already merged. So, we need to consider
 deprecating/reverting the merged patch and blocking

 the merging of the proposed patchsets (see [3]) in favor of the Glance Metadata
 Catalog.



 Thoughts?

 [1] https://wiki.openstack.org/wiki/Trove-Instance-Metadata

 [2] https://review.openstack.org/#/c/98554/11

 [3] https://review.openstack.org/#/c/82123/



 Best regards,

 Denis Makogon

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Doug Hellmann

On Jul 24, 2014, at 3:08 PM, Yuriy Taraday yorik@gmail.com wrote:

 
 
 
 On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:
 
 
 
 
 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com wrote:
 
 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:
 
 Hi, all
  The current oslo.cfg module provides an easy way to load known 
 options/groups from the configuration files.
   I am wondering if there's a possible solution to dynamically load 
 them?
 
   For example, I do not know the group names (section name in the 
 configuration file), but we read the configuration file and detect the 
 definitions inside it.
 
 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2
 
Then I want to automatically load group1.key1 and group1.key2, 
 without knowing the name of group1 first.
 
 If you don’t know the group name, how would you know where to look in the 
 parsed configuration for the resulting options?
 
 I can imagine something like this:
 1. iterate over undefined groups in config;
 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.
 
 Registered group can be passed to a plugin/library that would register its 
 options in it.
 
 If the options are related to the plugin, could the plugin just register them 
 before it tries to use them?
 
 Plugin would have to register its options under a fixed group. But what if we 
 want a number of plugin instances? 

Presumably something would know a name associated with each instance and could 
pass it to the plugin to use when registering its options.
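
Something along these lines, for instance (names made up; the point is only 
that the plugin takes a ConfigOpts and a group name instead of reaching for a 
global CONF):

from oslo.config import cfg

_OPTS = [
    cfg.StrOpt('endpoint'),
    cfg.IntOpt('timeout', default=30),
]


def register_opts(conf, group):
    # Register this plugin's options under the group the caller chose.
    conf.register_opts(_OPTS, group=group)


def load(conf, group):
    register_opts(conf, group)
    return getattr(conf, group).endpoint, getattr(conf, group).timeout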

  
 
 I guess it’s not clear what problem you’re actually trying to solve by 
 proposing this change to the way the config files are parsed. That doesn’t 
 mean your idea is wrong, just that I can’t evaluate it or point out another 
 solution. So what is it that you’re trying to do that has led to this 
 suggestion?
 
 I don't exactly know what the original author's intention is but I don't 
 generally like the fact that all libraries and plugins wanting to use config 
 have to influence the global CONF instance.

That is a common misconception. The use of a global configuration object is an 
application developer choice. The config library does not require it. Some of 
the other modules in the oslo incubator expect a global config object because 
they started life in applications with that pattern, but as we move them to 
libraries we are updating the APIs to take a ConfigOpts instance as an argument 
(see oslo.messaging and oslo.db for examples).

Doug

 
 -- 
 
 Kind regards, Yuriy.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][qa] cinder client versions and tempest

2014-07-24 Thread David Kranz
I noticed that the cinder list-extensions url suffix is underneath the 
v1/v2 in the GET url but the returned result is the same either way. 
Some of the returned items have v1 in the namespace, and others v2.

Also, in tempest, there is a single config section for cinder and only a 
single extensions client even though we run cinder
tests for v1 and v2 through separate volume clients. I would have 
expected that listing extensions would be separate calls for v1
and v2 and that the results might be different, implying that tempest 
conf should have a separate section (and service enabled) for volumes v2
rather than treating the presence of v1 and v2 as flags in 
volume-feature-enabled. Am I missing something here?


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Joshua Harlow

On Jul 24, 2014, at 12:54 PM, Sean Dague s...@dague.net wrote:

 On 07/24/2014 02:51 PM, Joshua Harlow wrote:
 A potentially brilliant idea ;-)
 
 Aren't all the machines the gate runs tests on VMs running via OpenStack 
 APIs?
 
 OpenStack supports snapshotting (last time I checked). So instead of 
 providing back a whole bunch of log files, provide back a snapshot of the 
 machine/s that ran the tests; let the person who wants to download that snapshot 
 download it (and then they can boot it up into virtualbox, qemu, their own 
 OpenStack cloud...) and investigate all the log files they desire. 
 
 Are we really being so conservative on space that we couldn't do this? I 
 find it hard to believe that space is a concern for anything anymore (if it 
 really matters store the snapshots in ceph, or glusterfs, swift, or 
 something else... which should dedup the blocks). This is pretty common with 
 how people use snapshots and what they back them with anyway so it would be 
 nice if infra exposed the same thing...
 
 Would something like that be possible? I'm not so familiar with all the 
 inner workings of the infra project; but if it eventually boots VMs using an 
 OpenStack cloud, it would seem reasonable that it could provide the same 
 mechanisms we are all already used to using...
 
 Thoughts?
 
 There are actual space concerns. Especially when we're talking about 20k
 runs / week. At which point snapshots are probably in the neighborhood
 of 10G, so we're talking about 200 TB / week of storage. Plus there are
 actual technical details of the fact that glance end points are really
 quite beta in the clouds we use. Remember our tests runs aren't pets,
 they are cattle, we need to figure out the right distillation of data
 and move on, as there isn't enough space or time to keep everything around.

Sure not pets..., save only the failing ones then (the broken cattle)?

Is 200TB/week really how much is actually stored when ceph or other uses 
data-deduping? Does rackspace or HP (the VM providers for infra afaik) do 
this/or use a similar deduping technology for storing snapshots?

I agree with right distillation and maybe it's not always needed, but it 
would/could be nice to have a button on gerrit that u could activate within a 
certain amount of time after the run to get all the images that the VMs used 
during the tests (yes the download would be likely be huge) if you really want 
to setup the exact same environment that the test failed with. Maybe have that 
button expire after a week (then u only need 200 TB of *expiring* space).

 Also portability of system images is... limited between hypervisors.
 
 If this is something you'd like to see if you could figure out the hard
 parts of, I invite you to dive in on the infra side. It's very easy to
 say it's easy. :) Actually coming up with a workable solution requires a
 ton more time and energy.

Of course, that goes without saying,

I guess I thought this is a ML for discussions and thoughts (in part the 
'thought' part of this subject) and need not be a solution off the bat.

Just an idea anyway...

 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Volume replication - driver support walk through

2014-07-24 Thread Ronen Kat
Hello,

The initial code for managing volume replication in Cinder is now 
available as work-in-progress - see 
https://review.openstack.org/#/c/106718
I expect to remove the work-in-progress early next week.

I would like to hold a walk-through of the replication feature for Cinder 
driver owners who are interested in implementing replication - I plan to hold 
it on Wednesday July 30 17:00 UTC, just after the Cinder meeting.
I will make available a phone call-in number and access details, as I 
don't think Google Hangouts can support enough video connections (ten, to 
the best of my knowledge).
Alternative suggestions are welcome

For those who cannot attend the 17:00 UTC walk through (due to time zone 
issues), I can hold another one on July 31, 08:00 UTC - please let me know 
if there is interest for this time slot.

Regards,

Ronen,

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] team meeting minutes July 24

2014-07-24 Thread Sergey Lukjanov
Thanks everyone who have joined Sahara meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-07-24-18.03.html
Logs: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-07-24-18.03.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] overuse of 'except Exception'

2014-07-24 Thread Jay Pipes

On 07/24/2014 08:23 AM, Chris Dent wrote:

In other words there are two kinds of bad: The bad that we know
and can expect (even though we don't want it) and the bad that we
don't know and shouldn't expect. These should be handled
differently.


I like to call this The Rumsfeld.

-jay
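
To make the distinction concrete, a minimal sketch (the pollster names and the 
"known" exception types here are hypothetical):

import logging

LOG = logging.getLogger(__name__)


def poll_all(pollsters):
    for pollster in pollsters:
        try:
            pollster.poll()
        except (IOError, ValueError) as err:
            # The bad we know: expected failure modes we can recover from,
            # so log and carry on with the remaining pollsters.
            LOG.warning('pollster %s failed: %s', pollster, err)
        except Exception:
            # The bad we don't know: record it loudly and re-raise rather
            # than silently swallowing a programming error.
            LOG.exception('unexpected error in pollster %s', pollster)
            raise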

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Iccha Sethi
+1

We are unsure when these changes will get into glance.
IMO we should go ahead with our instance metadata patch for now and when things 
are ready in glance land we can consider migrating to using that as a generic 
metadata repository.

Thanks,
Iccha

From: Craig Vyvial cp16...@gmail.commailto:cp16...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Date: Thursday, July 24, 2014 at 3:04 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance][Trove] Metadata Catalog

Denis,

The scope of the metadata api goes beyond just using the glance metadata. The 
metadata can be used for instances and other objects to add extra data like 
tags or something else that a UI might want to use. We need this feature 
either way.

-Craig


On Thu, Jul 24, 2014 at 12:17 PM, Amrith Kumar 
amr...@tesora.commailto:amr...@tesora.com wrote:
Speaking as a ‘database guy’ and a ‘Trove guy’, I’ll say this; “Metadata” is a 
very generic term and the meaning of “metadata” in a database context is very 
different from the meaning of “metadata” in the context that Glance is 
providing.

Furthermore the usage and access pattern for this metadata, the frequency of 
change, and above all the frequency of access are fundamentally different 
between Trove and what Glance appears to be offering, and we should probably 
not get too caught up in the project “title”.

We would not be “reinventing the wheel” if we implemented an independent 
metadata scheme for Trove; we would be implementing the right kind of wheel for 
the vehicle that we are operating. Therefore I do not agree with your 
characterization that concludes that:

 given goals at [1] are out of scope of Database program, etc

Just to be clear, when you write:

 Unfortunately, we’re(Trove devs) are on half way to metadata …

it is vital to understand that our view of “metadata” is very different from 
(for example, a file system’s view of metadata, or potentially Glance’s view of 
metadata). For that reason, I believe that your comments on 
https://review.openstack.org/#/c/82123/16 are also somewhat extreme.

Before postulating a solution (or “delegating development to Glance devs”), it 
would be more useful to fully describe the problem being solved by Glance and 
the problem(s) we are looking to solve in Trove, and then we could have a 
meaningful discussion about the right solution.

I submit to you that we will come away concluding that there is a round peg, 
and a square hole. Yes, one will fit in the other but the final product will 
leave neither party particularly happy with the end result.

-amrith

From: Denis Makogon [mailto:dmako...@mirantis.commailto:dmako...@mirantis.com]
Sent: Thursday, July 24, 2014 9:33 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Glance][Trove] Metadata Catalog


Hello, Stackers.

 I’d like to discuss the future of Trove metadata API. But first small 
history info (mostly taken for Trove medata spec, see [1]):
Instance metadata is a feature that has been requested frequently by our users. 
They need a way to store critical information for their instances and have that 
be associated with the instance so that it is displayed whenever that instance 
is listed via the API. This also becomes very usable from a testing perspective 
when doing integration/ci. We can utilize the metadata to store things like 
what process created the instance, what the instance is being used for, etc... 
The design for this feature is modeled heavily on the Nova metadata API with a 
few tweaks in how it works internally.

And here comes the conflict. Glance devs are working on the “Glance Metadata 
Catalog” feature (see [2]). And as for me, we don’t have to “reinvent the 
wheel” for Trove. It seems that we would be able

to use the Glance API to interact with the Metadata Catalog. And it seems 
redundant to write our own API for metadata CRUD operations.



From the Trove perspective, we need to define a list of concrete use cases for 
metadata usage (e.g. given goals at [1] are out of scope of the Database program, 
etc.).

From the development and cross-project integration perspective, we need to 
delegate all development to Glance devs. But we are still able to help Glance devs 
with this feature by taking an active part in polishing the proposed spec (see [2]).



Unfortunately, we (Trove devs) are halfway to metadata - the patch for 
python-troveclient is already merged. So, we need to consider 
deprecating/reverting the merged patch and blocking

the merging of the proposed patchsets (see [3]) in favor of the Glance Metadata Catalog.


Thoughts?

[1] https://wiki.openstack.org/wiki/Trove-Instance-Metadata

[2] https://review.openstack.org/#/c/98554/11

[3] https://review.openstack.org/#/c/82123/


Best regards,

Denis Makogon


Re: [openstack-dev] [neutron] Add static routes on neutron router to devices in the external network

2014-07-24 Thread Kevin Benton
I think external gateway routes are accepted now.
The code just checks against the CIDRs of all ports belonging to the
router. [1]


1.
https://github.com/openstack/neutron/blob/a2fff6ee728db57f0e862548aac9296899ef0fc7/neutron/db/extraroute_db.py#L106
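
Paraphrased (not the exact neutron code), the check amounts to roughly:

import netaddr


def nexthop_is_connected(router_port_cidrs, nexthop):
    # The nexthop must fall inside one of the subnets attached to the
    # router's ports, whichever network those ports belong to.
    ip = netaddr.IPAddress(nexthop)
    return any(ip in netaddr.IPNetwork(cidr) for cidr in router_port_cidrs)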


On Wed, Jul 23, 2014 at 8:12 PM, Carl Baldwin c...@ecbaldwin.net wrote:

 I wondered the same as Kevin.  Could you confirm that the vpn gateway is
 directly connected to the external subnet or not?  The diagram isn't quite
 clear.

 Assuming it is directly connected then it is probable that routes through
 the external gateway are not considered, hence the error you received.  It
 seems reasonable to me to consider a proposal that would allow this.  It
 should be an admin only capability by default since it would be over the
 external (shared) network and not a tenant network.  This seems like a new
 feature rather than a bug to me.

 As an alternative, could you try configuring your router with the static
 route so that it would send an icmp redirect to the neutron router?

 Carl
 On Jul 22, 2014 11:23 AM, Kevin Benton blak...@gmail.com wrote:

 The issue (if I understand your diagram correctly) is that the VPN GW
 address is on the other side of your home router from the neutron router.
 The nexthop address has to be an address on one of the subnets directly
 attached to the router. In this topology, the static route should be on
 your home router.

 --
 Kevin Benton


 On Tue, Jul 22, 2014 at 6:55 AM, Ricardo Carrillo Cruz 
 ricardo.carrillo.c...@gmail.com wrote:

 Hello guys

 I have the following network setup at home:

 [openstack instances] - [neutron router] - [  [home router] [vpn gw]
 ]
  TENANT NETWORK  EXTERNAL NETWORK

 I need my instances to connect to machines that are connected thru the
 vpn gw server.
 By default, all traffic that comes from openstack instances go thru the
 neutron router, and then hop onto the home router.

 I've seen there's an extra routes extension for neutron routers that
 would allow me to do that, but apparently I can't add extra routes to
 destinations in the external network, only subnets known by neutron.
 This can be seen from the neutron CLI command:

 <snip>
 neutron router-update <router name> --routes type=dict list=true
 destination=<network connected by VPN in CIDR>,nexthop=<vpn gw IP>
 Invalid format for routes: [{u'nexthop': u'<vpn gw IP>', u'destination':
 u'<network connected by VPN in CIDR>'}], the nexthop is not connected with
 router
 </snip>

 Is this use case not being possible to do at all?

 P.S.
 I found Heat BP
 https://blueprints.launchpad.net/heat/+spec/router-properties-object
 that in the description reads this can be done on Neutron, but can't figure
 out how.

 Regards

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Kevin Benton

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][SR-IOV]: RE: ML2 mechanism driver for SR-IOV capable NIC based switching, ...

2014-07-24 Thread Robert Li (baoli)
Hi Kyle, 

Sorry I missed your queries on the IRC channel today. I was thinking about
this whole BP. After chatting with Irena this morning, I think that I
understand what this BP is trying to achieve overall. I also had a chat
with Sandhya afterwards. I'd like to discuss a few things in here:
  
  - Sandhya's MD is going to support Cisco's VMFEX. Overall her code's
structure would look very similar to Irena's patch in part 1.
However, she cannot simply inherit from SriovNicSwitchMechanismDriver. The
differences for her code are: 1) get_vif_details() would populate
profileid (rather than vlanid), 2) she'd need to do vmfex-specific
processing in try_to_bind(). We're thinking that with a little
generalization, SriovNicSwitchMechanismDriver() (with a changed name such
as SriovMechanismDriver()) can be used both for NIC switch and vmfex. It
would look like this in terms of class hierarchy:
 SriovMechanismDriver
SriovNicSwitchMechanismDriver
SriovQBRMechanismDriver
 SriovCiscoVmfexMechanismDriver

Code duplication would be reduced significantly. The change would be:
   - make get_vif_details an abstract method in SriovMechanismDriver
   - make an abstract method to perform specific bind action required
by a particular adaptor indicated in the PCI vendor info
   - vif type and agent type should be set based on the PCI vendor
info

A little change to patch part 1 would achieve the above (a rough sketch of the
hierarchy follows at the end of these points).

  - Originally I thought that the SR-IOV port's status would depend on
the Sriov Agent (patch part 2). After chatting with Irena, this is not the
case. So all the SR-IOV ports will be active once created or bound
according to the try_to_bind() method. In addition, the current Sriov
Agent (patch part 2) only supports port admin status change for mlnx
adaptor. I think these caveats need to be spelled out explicitly to avoid
any confusion or misunderstanding, at least in the documentation.

  - Sandhya has planned to support both Intel and VMFEX in her MD. This
requires a hybrid SR-IOV mech driver that populates vif details based on
the PCI vendor info in the port. Another way to do this is to run two MDs
at the same time, one supporting Intel, the other VMFEX. This would work
well with the above classes. But it requires changing the two config
options (in Irena's patch part one) so that per-MD config options can be
specified. I'm not sure if this is practical in a real deployment (meaning
use of SR-IOV adaptors from different vendors in the same deployment), but
I think it's doable within the existing ML2 framework.
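
To make the refactoring in the first point concrete, here is a rough Python
sketch of that hierarchy (the class names are from the discussion above, but
the method signatures and return values are only illustrative, not the actual
patch):

  import abc

  import six


  @six.add_metaclass(abc.ABCMeta)
  class SriovMechanismDriver(object):
      """Common SR-IOV logic shared by the NIC-switch and VM-FEX drivers."""

      @abc.abstractmethod
      def get_vif_details(self, segment):
          """Return vif_details for the segment (vlanid vs. profileid)."""

      @abc.abstractmethod
      def try_to_bind(self, context, segment):
          """Adaptor-specific bind action, keyed off the PCI vendor info."""


  class SriovNicSwitchMechanismDriver(SriovMechanismDriver):
      def get_vif_details(self, segment):
          return {'vlan': segment.get('segmentation_id')}

      def try_to_bind(self, context, segment):
          return True  # NIC-switch specific processing would go here


  class SriovCiscoVmfexMechanismDriver(SriovMechanismDriver):
      def get_vif_details(self, segment):
          return {'profileid': 'port-profile-placeholder'}

      def try_to_bind(self, context, segment):
          return True  # VM-FEX specific processing would go here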

We'll go over the above in the next SR-IOV IRC meeting as well.

Thanks,
Robert









On 7/24/14, 1:55 PM, Kyle Mestery (Code Review) rev...@openstack.org
wrote:

Kyle Mestery has posted comments on this change.

Change subject: ML2 mechanism driver for SR-IOV capable NIC based
switching, Part 2
..


Patch Set 3: Code-Review+2 Workflow+1

I believe Irena has answered all of Robert's questions. Any subsequent
issues can be handled as a followup.

-- 
To view, visit https://review.openstack.org/107651
To unsubscribe, visit https://review.openstack.org/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: I533ccee067935326d5837f90ba321a962e8dc2a6
Gerrit-PatchSet: 3
Gerrit-Project: openstack/neutron
Gerrit-Branch: master
Gerrit-Owner: Berezovsky Irena ire...@mellanox.com
Gerrit-Reviewer: Akihiro Motoki mot...@da.jp.nec.com
Gerrit-Reviewer: Arista Testing arista-openstack-t...@aristanetworks.com
Gerrit-Reviewer: Baodong (Robert) Li ba...@cisco.com
Gerrit-Reviewer: Berezovsky Irena ire...@mellanox.com
Gerrit-Reviewer: Big Switch CI openstack...@bigswitch.com
Gerrit-Reviewer: Brocade CI openstack_ger...@brocade.com
Gerrit-Reviewer: Brocade OSS CI dl-grp-vyatta-...@brocade.com
Gerrit-Reviewer: Cisco Neutron CI cisco-openstack-neutron...@cisco.com
Gerrit-Reviewer: Freescale CI fslo...@freescale.com
Gerrit-Reviewer: Hyper-V CI hyper-v...@microsoft.com
Gerrit-Reviewer: Jenkins
Gerrit-Reviewer: Kyle Mestery mest...@mestery.com
Gerrit-Reviewer: Mellanox External Testing
mlnx-openstack...@dev.mellanox.co.il
Gerrit-Reviewer: Metaplugin CI Test metaplugint...@gmail.com
Gerrit-Reviewer: Midokura CI Bot lu...@midokura.com
Gerrit-Reviewer: NEC OpenStack CI nec-openstack...@iaas.jp.nec.com
Gerrit-Reviewer: Neutron Ryu ryu-openstack-rev...@lists.sourceforge.net
Gerrit-Reviewer: One Convergence CI oc-neutron-t...@oneconvergence.com
Gerrit-Reviewer: PLUMgrid CI plumgrid-ci...@plumgrid.com
Gerrit-Reviewer: Tail-f NCS Jenkins to...@tail-f.com
Gerrit-Reviewer: vArmour CI Test openstack-ci-t...@varmour.com
Gerrit-HasComments: No


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Yuriy Taraday
On Fri, Jul 25, 2014 at 12:05 AM, Doug Hellmann d...@doughellmann.com
wrote:


 On Jul 24, 2014, at 3:08 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:




 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com
 wrote:


 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:

 Hi, all
  The current oslo.cfg module provides an easy way to load options/groups
 with known names from the configuration files.
   I am wondering if there's a possible solution to dynamically load
 them?

   For example, I do not know the group names (section name in the
 configuration file), but we read the configuration file and detect the
 definitions inside it.

 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2

Then I want to automatically load the group1.key1 and
 group1.key2, without knowing the name of group1 first.


 If you don’t know the group name, how would you know where to look in
 the parsed configuration for the resulting options?


 I can imagine something like this:
 1. iterate over undefined groups in config;

 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.

 Registered group can be passed to a plugin/library that would register
 its options in it.


 If the options are related to the plugin, could the plugin just register
 them before it tries to use them?


 Plugin would have to register its options under a fixed group. But what if
 we want a number of plugin instances?


 Presumably something would know a name associated with each instance and
 could pass it to the plugin to use when registering its options.




 I guess it’s not clear what problem you’re actually trying to solve by
 proposing this change to the way the config files are parsed. That doesn’t
 mean your idea is wrong, just that I can’t evaluate it or point out another
 solution. So what is it that you’re trying to do that has led to this
 suggestion?


 I don't exactly know what the original author's intention is but I don't
 generally like the fact that all libraries and plugins wanting to use
 config have to influence global CONF instance.


 That is a common misconception. The use of a global configuration option
 is an application developer choice. The config library does not require it.
 Some of the other modules in the oslo incubator expect a global config
 object because they started life in applications with that pattern, but as
 we move them to libraries we are updating the APIs to take a ConfigObj as
 argument (see oslo.messaging and oslo.db for examples).


What I mean is that instead of passing a ConfigObj and a section name as
arguments to some plugin/lib, it would be cleaner to receive an object that
represents one section of config, not the whole config at once.
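
For illustration, a rough sketch of the "discover sections, then register
options" idea from the steps above (the file, section and option names are
only placeholders; newer releases spell the import oslo_config):

  from oslo.config import cfg
  from six.moves import configparser

  CONF = cfg.CONF
  CONF([], default_config_files=['my.conf'])

  parser = configparser.ConfigParser()
  parser.read('my.conf')

  for section in parser.sections():      # step 2: filter by prefix/regex here
      opts = [cfg.StrOpt(name) for name in parser.options(section)]
      CONF.register_opts(opts, group=section)   # step 3: register in the group

  print(CONF.group1.key1)                # step 4: 'value1' for the example file

The registered group (or a filtered view of it) is then what I would want to
hand to the plugin/lib instead of the whole CONF.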

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Matthew Treinish
On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
 OpenStack has a substantial CI system that is core to its development
 process.  The goals of the system are to facilitate merging good code,
 prevent regressions, and ensure that there is at least one configuration
 of upstream OpenStack that we know works as a whole.  The project
 gating technique that we use is effective at preventing many kinds of
 regressions from landing, however more subtle, non-deterministic bugs
 can still get through, and these are the bugs that are currently
 plaguing developers with seemingly random test failures.
 
 Most of these bugs are not failures of the test system; they are real
 bugs.  Many of them have even been in OpenStack for a long time, but are
 only becoming visible now due to improvements in our tests.  That's not
 much help to developers whose patches are being hit with negative test
 results from unrelated failures.  We need to find a way to address the
 non-deterministic bugs that are lurking in OpenStack without making it
 easier for new bugs to creep in.
 
 The CI system and project infrastructure are not static.  They have
 evolved with the project to get to where they are today, and the
 challenge now is to continue to evolve them to address the problems
 we're seeing now.  The QA and Infrastructure teams recently hosted a
 sprint where we discussed some of these issues in depth.  This post from
 Sean Dague goes into a bit of the background: [1].  The rest of this
 email outlines the medium and long-term changes we would like to make to
 address these problems.
 
 [1] https://dague.net/2014/07/22/openstack-failures/
 
 ==Things we're already doing==
 
 The elastic-recheck tool[2] is used to identify random failures in
 test runs.  It tries to match failures to known bugs using signatures
 created from log messages.  It helps developers prioritize bugs by how
 frequently they manifest as test failures.  It also collects information
 on unclassified errors -- we can see how many (and which) test runs
 failed for an unknown reason and our overall progress on finding
 fingerprints for random failures.
 
 [2] http://status.openstack.org/elastic-recheck/
 
 We added a feature to Zuul that lets us manually promote changes to
 the top of the Gate pipeline.  When the QA team identifies a change that
 fixes a bug that is affecting overall gate stability, we can move that
 change to the top of the queue so that it may merge more quickly.
 
 We added the clean check facility in reaction to the January gate break
 down. While it does mean that any individual patch might see more tests
 run on it, it's now largely kept the gate queue at a countable number of
 hours, instead of regularly growing to more than a work day in
 length. It also means that a developer can Approve a code merge before
 tests have returned, and not ruin it for everyone else if there turned
 out to be a bug that the tests could catch.
 
 ==Future changes==
 
 ===Communication===
 We used to be better at communicating about the CI system.  As it and
 the project grew, we incrementally added to our institutional knowledge,
 but we haven't been good about maintaining that information in a form
 that new or existing contributors can consume to understand what's going
 on and why.
 
 We have started on a major effort in that direction that we call the
 infra-manual project -- it's designed to be a comprehensive user
 manual for the project infrastructure, including the CI process.  Even
 before that project is complete, we will write a document that
 summarizes the CI system and ensure it is included in new developer
 documentation and linked to from test results.
 
 There are also a number of ways for people to get involved in the CI
 system, whether focused on Infrastructure or QA, but it is not always
 clear how to do so.  We will improve our documentation to highlight how
 to contribute.
 
 ===Fixing Faster===
 
 We introduce bugs to OpenStack at some constant rate, which piles up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people time, and takes getting all the right
 people together at once to promote changes. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.
 
 [3] https://etherpad.openstack.org/p/gatetriage-june2014
 
 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found in the queues it will be given higher priority, including bubbling
 up to the top of the gate queue 

[openstack-dev] debug logs and defaults was (Thoughts on the patch test failure rate and moving forward)

2014-07-24 Thread Robert Collins
On 25 July 2014 08:01, Sean Dague s...@dague.net wrote:

  I'd like us to think about whether there is anything we can do to make
  life easier in these kinds of hard debugging scenarios where the regular
 logs are not sufficient.

 Agreed. Honestly, though we do also need to figure out first fail
 detection on our logs as well. Because realistically if we can't debug
 failures from those, then I really don't understand how we're ever going
 to expect large users to.


I'm so glad you said that :). In conversations with our users, and
existing large deployers of OpenStack, one thing has come through very
consistently: our default logs are insufficient.

We had an extensive discussion about this in the TripleO mid-cycle
meetup, and I think we reached broad consensus on the following:
 - the defaults should be what folk are running in production
 - we don't want to lead on changing defaults - it's a big enough thing
we want to drive the discussion but not work around it by changing our
defaults
 - large clouds are *today* running debug (with a few tweaks to remove
the most egregious log spammers and known security issues [like
dumping tokens into logs])
 - AFAICT productised clouds (push-button deploy etc) are running
something very similar
 - we would love it if developers *also* saw what users will see by
default, since that will tend to both stop things getting too spammy,
and too sparse.

So - I know that's brief - what we'd like to do is to poll a slightly
wider set of deployers - e.g. via a spec, perhaps some help from Tom
with the users and ops groups - and get a baseline of things that
there is consensus on and things that aren't, and then just change the
defaults to match. Further, to achieve the 'developers see the same
thing as users' bit, we'd like to make devstack do what TripleO does -
use defaults for logging levels, particularly in the gate.

It's totally true that we have a good policy about logging and we're
changing things to fit it, but that's the long-term play: short term,
making the default meet our deployments seems relatively easy and
immensely sane.

-Rob
-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-24 Thread Jay Pipes

On 07/24/2014 10:05 AM, CARVER, PAUL wrote:

Alan Kavanagh wrote:


If we have more work being put on the table, then more Core
members would definitely go a long way with assisting this, we cant
wait for folks to be reviewing stuff as an excuse to not get
features landed in a given release.


We absolutely can and should wait for folks to be reviewing stuff
properly. A large number of problems in OpenStack code and flawed design
can be attributed to impatience and pushing through code that wasn't ready.

I've said this many times, but the best way to get core reviews on
patches that you submit is to put the effort into reviewing others'
code. Core reviewers are more willing to do reviews for someone who is
clearly trying to help the project in more ways than just pushing their 
own code. Note that, Alan, I'm not trying to imply that you are guilty 
of the above! :) I'm just recommending techniques for the general

contributor community who are not on a core team (including myself!).


Stability is absolutely essential so we can't force things through
without adequate review. The automated CI testing in OpenStack is
impressive, but it is far from flawless and even if it worked
perfectly it's still just CI, not AI. There's a large class of
problems that it just can't catch.


Yes, exactly.


I agree with Alan that if there's a discrepancy between the amount
of code that folks would like to land in a release and the number of
core member working hours in a six month period then that is
something the board needs to take an interest in.


Well, technically this is not at all what the OpenStack board is
responsible for. This is likely the purview of the PTLs, the individual
project core teams, and possibly the Technical Committee. The board is
really about company-to-company interests, legal and product marketing
topics, and such.


I think a friendly adversarial approach is healthy for OpenStack.
Specs and code should need to be defended, not just rubber stamped.


++


Having core reviewers critiquing code written by their competitors,
suppliers, or vendors is healthy for the overall code quality.


++


However, simply having specs and code not get reviewed at all due to
a shortage of core reviewers is not healthy and will limit the
success of OpenStack.


Agreed.


I don't really follow Linux kernel development, but a quick search
turned up [1] which seems to indicate at least one additional level
between developer and core (depending on whether we consider Linus
and Andrew levels unto themselves and whether we consider OpenStack
projects as full systems or as subsystems of OpenStack).

Speaking only for myself and not ATT, I'm disappointed that my
employer doesn't have more developers actively writing code.


As an ex-ATT-er, I would agree with that sentiment.


We ought to (in my personal opinion) be supplying core reviewers to
at least a couple of OpenStack projects. But one way or another we
need to get more capabilities reviewed and merged. My personal top
disappointments are with the current state of IPv6, HA, and QoS, but
I'm sure other folks can list lots of other capabilities that
they're really going to be frustrated to find lacking in Juno.


I agree with you. It's not something that is fixable overnight, or by a 
small group of people, IMO. It's something that needs to be addressed by 
the core project teams, acting as a group in order to reduce review wait 
times and ensure that there is responsiveness, transparency and 
thoroughness to the review (code as well as spec) process.


I put together some slides recently that have some insights and 
(hopefully) some helpful suggestions for both doing and receiving code 
reviews, as well as staying sane in the era of corporate agendas. 
Perhaps folks will find it useful:


http://bit.ly/navigating-openstack-community

Best,
-jay


[1]
http://techblog.aasisvinayak.com/linux-kernel-development-process-how-it-works/





___ OpenStack-dev
mailing list OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Trove] Metadata Catalog

2014-07-24 Thread Tim Simpson
I agree as well.

I think we should spend less time worrying about what other projects in 
OpenStack might do in the future and spend more time on adding the features we 
need today to Trove. I understand that it's better to work together but too 
often we stop progress on something in Trove to wait on a feature in another 
project that is either incomplete or merely being planned.

While this stems from our strong desire to be part of the community, which is a 
good thing, it hasn't actually led many of us to do work for these other 
projects. At the same time, it's negatively impacted Trove. I also think it 
leads us to over-design or incorrectly design features as we plan for 
functionality in other projects that may never materialize in the forms we 
expect.

So my vote is we merge our own metadata feature and not fret over how metadata 
may end up working in Glance.

Thanks,

Tim


From: Iccha Sethi [iccha.se...@rackspace.com]
Sent: Thursday, July 24, 2014 4:02 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][Trove] Metadata Catalog

+1

We are unsure when these changes will get into glance.
IMO we should go ahead with our instance metadata patch for now and when things 
are ready in glance land we can consider migrating to using that as a generic 
metadata repository.

Thanks,
Iccha

From: Craig Vyvial cp16...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, July 24, 2014 at 3:04 PM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Glance][Trove] Metadata Catalog

Denis,

The scope of the metadata api goes beyond just using the glance metadata. The 
metadata can be used for instances and other objects to add extra data like 
tags or something else that a UI might want to use. We need this feature 
either way.

-Craig


On Thu, Jul 24, 2014 at 12:17 PM, Amrith Kumar amr...@tesora.com wrote:
Speaking as a ‘database guy’ and a ‘Trove guy’, I’ll say this: “Metadata” is a 
very generic term and the meaning of “metadata” in a database context is very 
different from the meaning of “metadata” in the context that Glance is 
providing.

Furthermore the usage and access pattern for this metadata, the frequency of 
change, and above all the frequency of access are fundamentally different 
between Trove and what Glance appears to be offering, and we should probably 
not get too caught up in the project “title”.

We would not be “reinventing the wheel” if we implemented an independent 
metadata scheme for Trove; we would be implementing the right kind of wheel for 
the vehicle that we are operating. Therefore I do not agree with your 
characterization that concludes that:

 given goals at [1] are out of scope of Database program, etc

Just to be clear, when you write:

 Unfortunately, we (Trove devs) are halfway to metadata …

it is vital to understand that our view of “metadata” is very different from 
(for example, a file system’s view of metadata, or potentially Glance’s view of 
metadata). For that reason, I believe that your comments on 
https://review.openstack.org/#/c/82123/16 are also somewhat extreme.

Before postulating a solution (or “delegating development to Glance devs”), it 
would be more useful to fully describe the problem being solved by Glance and 
the problem(s) we are looking to solve in Trove, and then we could have a 
meaningful discussion about the right solution.

I submit to you that we will come away concluding that there is a round peg, 
and a square hole. Yes, one will fit in the other but the final product will 
leave neither party particularly happy with the end result.

-amrith

From: Denis Makogon [mailto:dmako...@mirantis.com]
Sent: Thursday, July 24, 2014 9:33 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Glance][Trove] Metadata Catalog


Hello, Stackers.

 I’d like to discuss the future of the Trove metadata API. But first, some small
history info (mostly taken from the Trove metadata spec, see [1]):
Instance metadata is a feature that has been requested frequently by our users. 
They need a way to store critical information for their instances and have that 
be associated with the instance so that it is displayed whenever that instance 
is listed via the API. This also becomes very usable from a testing perspective 
when doing integration/ci. We can utilize the metadata to store things like 
what process created the instance, what the instance is being used for, etc... 
The design for this feature is modeled heavily on the Nova metadata API with a 
few tweaks in how it works internally.

And here comes conflict. Glance devs are working on “Glance 

Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Angus Salkeld
On Wed, 2014-07-23 at 14:39 -0700, James E. Blair wrote:
 OpenStack has a substantial CI system that is core to its development
 process.  The goals of the system are to facilitate merging good code,
 prevent regressions, and ensure that there is at least one configuration
 of upstream OpenStack that we know works as a whole.  The project
 gating technique that we use is effective at preventing many kinds of
 regressions from landing, however more subtle, non-deterministic bugs
 can still get through, and these are the bugs that are currently
 plaguing developers with seemingly random test failures.
 
 Most of these bugs are not failures of the test system; they are real
 bugs.  Many of them have even been in OpenStack for a long time, but are
 only becoming visible now due to improvements in our tests.  That's not
 much help to developers whose patches are being hit with negative test
 results from unrelated failures.  We need to find a way to address the
 non-deterministic bugs that are lurking in OpenStack without making it
 easier for new bugs to creep in.
 
 The CI system and project infrastructure are not static.  They have
 evolved with the project to get to where they are today, and the
 challenge now is to continue to evolve them to address the problems
 we're seeing now.  The QA and Infrastructure teams recently hosted a
 sprint where we discussed some of these issues in depth.  This post from
 Sean Dague goes into a bit of the background: [1].  The rest of this
 email outlines the medium and long-term changes we would like to make to
 address these problems.
 
 [1] https://dague.net/2014/07/22/openstack-failures/
 
 ==Things we're already doing==
 
 The elastic-recheck tool[2] is used to identify random failures in
 test runs.  It tries to match failures to known bugs using signatures
 created from log messages.  It helps developers prioritize bugs by how
 frequently they manifest as test failures.  It also collects information
 on unclassified errors -- we can see how many (and which) test runs
 failed for an unknown reason and our overall progress on finding
 fingerprints for random failures.
 
 [2] http://status.openstack.org/elastic-recheck/
 
 We added a feature to Zuul that lets us manually promote changes to
 the top of the Gate pipeline.  When the QA team identifies a change that
 fixes a bug that is affecting overall gate stability, we can move that
 change to the top of the queue so that it may merge more quickly.
 
 We added the clean check facility in reaction to the January gate break
 down. While it does mean that any individual patch might see more tests
 run on it, it's now largely kept the gate queue at a countable number of
 hours, instead of regularly growing to more than a work day in
 length. It also means that a developer can Approve a code merge before
 tests have returned, and not ruin it for everyone else if there turned
 out to be a bug that the tests could catch.
 
 ==Future changes==
 
 ===Communication===
 We used to be better at communicating about the CI system.  As it and
 the project grew, we incrementally added to our institutional knowledge,
 but we haven't been good about maintaining that information in a form
 that new or existing contributors can consume to understand what's going
 on and why.
 
 We have started on a major effort in that direction that we call the
 infra-manual project -- it's designed to be a comprehensive user
 manual for the project infrastructure, including the CI process.  Even
 before that project is complete, we will write a document that
 summarizes the CI system and ensure it is included in new developer
 documentation and linked to from test results.
 
 There are also a number of ways for people to get involved in the CI
 system, whether focused on Infrastructure or QA, but it is not always
 clear how to do so.  We will improve our documentation to highlight how
 to contribute.
 
 ===Fixing Faster===
 
 We introduce bugs to OpenStack at some constant rate, which piles up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people time, and takes getting all the right
 people together at once to promote changes. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.
 
 [3] https://etherpad.openstack.org/p/gatetriage-june2014
 
 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found in the queues it will be given higher priority, including bubbling
 up to the top of the gate queue 

Re: [openstack-dev] [Murano]

2014-07-24 Thread Alexander Tivelkov
Hi Steve,

Sorry I've missed this discussion for a while, but it looks like I have to
add my 5 cents here now.

Initially our intention was to make each Murano component self-deployable,
i.e. to encapsulate within its deploy method all the necessary actions to
create the component, including generation of a Heat snippet, merging it into
the environment's template, pushing this template to Heat and doing any
post-Heat configuration, if needed, via the Murano Agent.

That's why the deploy method of the NeutronNetwork class is
doing $.environment.stack.push() - to make sure that the network is created
when this method is called, regardless of how this network is used in
other components of the Environment. If you remove it from there, the call
to network.deploy() will simply update the template in the
environment.stack, but the actual update will not happen. So, the deploy
method will not actually deploy anything - it will just prepare some
snippet for future pushing.

I understand your concerns though. But probably the solution should be more
complex - and I like the idea of an event-based workflow proposed by
Stan above.
I don't even think that we really need the deploy() methods in the Apps
or Components.
Instead, I suggest having more fine-grained workflow steps which are
executed by a higher-level entity, such as Environment.

For example, Heat-based components may have createHeatSnippet() methods
which just return the part of the Heat template corresponding to the
component. The deploy method of the environment may iteratively process all
its components (and their nested components as well, of course), call these
createHeatSnippet() methods, merge the results into a single template - and
then push this template to Heat in a single call. Then a post-Heat config
phase may be executed, if something needs to be run with the Murano Agent (as
Heat Software Config is now the recommended way to deploy software,
there should not be too many such needs - only for Windows-based
deployments and other legacy stuff).
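
As a rough illustration of that flow (plain Python, not MuranoPL, and none of
these classes exist in Murano - the names are invented):

  class Network(object):
      def create_heat_snippet(self):
          return {'my_net': {'type': 'OS::Neutron::Net'}}

      def post_heat_configure(self):
          pass  # nothing to run via the agent for a network


  class Environment(object):
      def __init__(self, components, heat_stack):
          self.components = components
          self.heat_stack = heat_stack

      def deploy(self):
          template = {'resources': {}}
          # 1. collect snippets from every component (nested ones included)
          for component in self.components:
              template['resources'].update(component.create_heat_snippet())
          # 2. push the merged template to Heat with a single call
          self.heat_stack.update(template)
          # 3. post-Heat configuration via the Murano Agent, only where needed
          for component in self.components:
              component.post_heat_configure()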


--
Regards,
Alexander Tivelkov


On Tue, Jul 22, 2014 at 2:59 PM, Lee Calcote (lecalcot) lecal...@cisco.com
wrote:

  Gents,

  For what it’s worth - We’ve long accounted for “extension points”
 within our VM and physical server provisioning flows, where developers may
 drop in code to augment OOTB behavior with customer/solution-specific
 needs.  While there are many extension points laced throughout different
 points in the provisioning flow, we pervasively injected “pre” and “post”
 provisioning extension points to allow for easy customization (like the one
 being attempted by Steve).

  The notions of prepareDeploy and finishDeploy resonate well.

  Regards,
 Lee

 *Lee Calcote*


 * Sr. Software Engineering Manager Cloud and Virtualization Group *
 Phone: 512-378-8835
 Mail/Jabber/Video: *lecal...@cisco.com*

 United States
 www.cisco.com

   From: Stan Lagun sla...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Tuesday, July 22, 2014 at 4:37 PM
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Murano]

   Hi Steve,

  1. There are no objections whatsoever if you know how to do it without
 breaking the entire concept
 2. I think that the deployment workflow needs to be broken into more fine-grained
 steps. Maybe instead of a single deploy method have prepareDeploy (which
 doesn't push the changes to Heat), deploy and finishDeploy. Maybe
 more/other methods need to be defined. This will make the whole process
 more customizable
 3. If you want to have single-instance applications based on a fixed
 prebuilt image then maybe what you need is to have your apps inherit both
 the Application and Instance classes and then override Instance's deploy method
 and add the HOT snippet before VM instantiation. This may also require the ability
 for a child class to bind fixed values to parent class properties (narrowing the
 class public contract, hiding those properties from the user). This is not yet
 supported in MuranoPL but can be done in the UI form as a temporary workaround
 4. I didn't get why you mentioned the object model. The object model is mostly user
 input. Do you suggest passing HOT snippets as part of user input? If so,
 that would be something I would oppose
 5. I guess image tagging would be a better solution for image-based deployment
 6. Personally I believe that the problem can be efficiently solved by Murano
 today or in the near future without resorting to pure HOT packages. This
 is not against the Murano design and is perfectly aligned with it


  Sincerely yours,
 Stan Lagun
 Principal Software Engineer @ Mirantis

  sla...@mirantis.com


 On Tue, Jul 22, 2014 at 8:05 PM, McLellan, Steven steve.mclel...@hp.com
 wrote:

  Hi,



 This is a little rambling, so I’ll put this summary here and some
 discussion below. I would like to be able to add heat template 

Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Sean Dague
On 07/24/2014 06:15 PM, Angus Salkeld wrote:
 On Wed, 2014-07-23 at 14:39 -0700, James E. Blair wrote:
 OpenStack has a substantial CI system that is core to its development
 process.  The goals of the system are to facilitate merging good code,
 prevent regressions, and ensure that there is at least one configuration
 of upstream OpenStack that we know works as a whole.  The project
 gating technique that we use is effective at preventing many kinds of
 regressions from landing, however more subtle, non-deterministic bugs
 can still get through, and these are the bugs that are currently
 plaguing developers with seemingly random test failures.

 Most of these bugs are not failures of the test system; they are real
 bugs.  Many of them have even been in OpenStack for a long time, but are
 only becoming visible now due to improvements in our tests.  That's not
 much help to developers whose patches are being hit with negative test
 results from unrelated failures.  We need to find a way to address the
 non-deterministic bugs that are lurking in OpenStack without making it
 easier for new bugs to creep in.

 The CI system and project infrastructure are not static.  They have
 evolved with the project to get to where they are today, and the
 challenge now is to continue to evolve them to address the problems
 we're seeing now.  The QA and Infrastructure teams recently hosted a
 sprint where we discussed some of these issues in depth.  This post from
 Sean Dague goes into a bit of the background: [1].  The rest of this
 email outlines the medium and long-term changes we would like to make to
 address these problems.

 [1] https://dague.net/2014/07/22/openstack-failures/

 ==Things we're already doing==

 The elastic-recheck tool[2] is used to identify random failures in
 test runs.  It tries to match failures to known bugs using signatures
 created from log messages.  It helps developers prioritize bugs by how
 frequently they manifest as test failures.  It also collects information
 on unclassified errors -- we can see how many (and which) test runs
 failed for an unknown reason and our overall progress on finding
 fingerprints for random failures.

 [2] http://status.openstack.org/elastic-recheck/

 We added a feature to Zuul that lets us manually promote changes to
 the top of the Gate pipeline.  When the QA team identifies a change that
 fixes a bug that is affecting overall gate stability, we can move that
 change to the top of the queue so that it may merge more quickly.

 We added the clean check facility in reaction to the January gate break
 down. While it does mean that any individual patch might see more tests
 run on it, it's now largely kept the gate queue at a countable number of
 hours, instead of regularly growing to more than a work day in
 length. It also means that a developer can Approve a code merge before
 tests have returned, and not ruin it for everyone else if there turned
 out to be a bug that the tests could catch.

 ==Future changes==

 ===Communication===
 We used to be better at communicating about the CI system.  As it and
 the project grew, we incrementally added to our institutional knowledge,
 but we haven't been good about maintaining that information in a form
 that new or existing contributors can consume to understand what's going
 on and why.

 We have started on a major effort in that direction that we call the
 infra-manual project -- it's designed to be a comprehensive user
 manual for the project infrastructure, including the CI process.  Even
 before that project is complete, we will write a document that
 summarizes the CI system and ensure it is included in new developer
 documentation and linked to from test results.

 There are also a number of ways for people to get involved in the CI
 system, whether focused on Infrastructure or QA, but it is not always
 clear how to do so.  We will improve our documentation to highlight how
 to contribute.

 ===Fixing Faster===

 We introduce bugs to OpenStack at some constant rate, which piles up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people time, and takes getting all the right
 people together at once to promote changes. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.

 [3] https://etherpad.openstack.org/p/gatetriage-june2014

 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found in the queues it will be given higher priority, including bubbling
 up to the 

Re: [openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-24 Thread Doug Hellmann

On Jul 24, 2014, at 5:43 PM, Yuriy Taraday yorik@gmail.com wrote:

 
 
 
 On Fri, Jul 25, 2014 at 12:05 AM, Doug Hellmann d...@doughellmann.com wrote:
 
 On Jul 24, 2014, at 3:08 PM, Yuriy Taraday yorik@gmail.com wrote:
 
 
 
 
 On Thu, Jul 24, 2014 at 10:31 PM, Doug Hellmann d...@doughellmann.com 
 wrote:
 
 On Jul 24, 2014, at 1:58 PM, Yuriy Taraday yorik@gmail.com wrote:
 
 
 
 
 On Thu, Jul 24, 2014 at 4:14 PM, Doug Hellmann d...@doughellmann.com 
 wrote:
 
 On Jul 23, 2014, at 11:10 PM, Baohua Yang yangbao...@gmail.com wrote:
 
 Hi, all
   The current oslo.cfg module provides an easy way to load options/groups 
  with known names from the configuration files.
   I am wondering if there's a possible solution to dynamically load 
 them?
 
   For example, I do not know the group names (section name in the 
 configuration file), but we read the configuration file and detect the 
 definitions inside it.
 
 #Configuration file:
 [group1]
 key1 = value1
 key2 = value2
 
Then I want to automatically load the group1.key1 and 
  group1.key2, without knowing the name of group1 first.
 
 If you don’t know the group name, how would you know where to look in the 
 parsed configuration for the resulting options?
 
 I can imagine something like this:
 1. iterate over undefined groups in config;
 2. select groups of interest (e.g. by prefix or some regular expression);
 3. register options in them;
 4. use those options.
 
 Registered group can be passed to a plugin/library that would register its 
 options in it.
 
 If the options are related to the plugin, could the plugin just register 
 them before it tries to use them?
 
 Plugin would have to register its options under a fixed group. But what if 
 we want a number of plugin instances? 
 
 Presumably something would know a name associated with each instance and 
 could pass it to the plugin to use when registering its options.
 
  
 
 I guess it’s not clear what problem you’re actually trying to solve by 
 proposing this change to the way the config files are parsed. That doesn’t 
 mean your idea is wrong, just that I can’t evaluate it or point out another 
 solution. So what is it that you’re trying to do that has led to this 
 suggestion?
 
 I don't exactly know what the original author's intention is but I don't 
 generally like the fact that all libraries and plugins wanting to use config 
 have to influence global CONF instance.
 
 That is a common misconception. The use of a global configuration option is 
 an application developer choice. The config library does not require it. Some 
 of the other modules in the oslo incubator expect a global config object 
 because they started life in applications with that pattern, but as we move 
 them to libraries we are updating the APIs to take a ConfigObj as argument 
 (see oslo.messaging and oslo.db for examples).
 
 What I mean is that instead of passing ConfigObj and a section name in 
 arguments for some plugin/lib it would be cleaner to receive an object that 
 represents one section of config, not the whole config at once.

The new ConfigFilter class lets you do something like what you want [1]. The 
options are visible only in the filtered view created by the plugin, so the 
application can’t see them. That provides better data separation, and prevents 
the options used by the plugin or library from becoming part of its API.

Doug

[1] http://docs.openstack.org/developer/oslo.config/cfgfilter.html
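
A minimal sketch of that pattern (the option name and its default below are
made up; on releases where the library is still packaged as oslo.config the
imports read as shown, newer ones use oslo_config):

  from oslo.config import cfg
  from oslo.config import cfgfilter

  CONF = cfg.CONF
  CONF([])                    # parse; no config files needed for this demo

  def load_plugin(conf):
      view = cfgfilter.ConfigFilter(conf)
      view.register_opt(cfg.StrOpt('plugin_opt', default='x'))
      return view

  view = load_plugin(CONF)
  print(view.plugin_opt)      # 'x' - visible through the filtered view
  # CONF.plugin_opt would raise NoSuchOptError - the app never sees the option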

 
 -- 
 
 Kind regards, Yuriy.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread John Dickinson

On Jul 24, 2014, at 3:25 PM, Sean Dague s...@dague.net wrote:

 On 07/24/2014 06:15 PM, Angus Salkeld wrote:
 On Wed, 2014-07-23 at 14:39 -0700, James E. Blair wrote:
 OpenStack has a substantial CI system that is core to its development
 process.  The goals of the system are to facilitate merging good code,
 prevent regressions, and ensure that there is at least one configuration
 of upstream OpenStack that we know works as a whole.  The project
 gating technique that we use is effective at preventing many kinds of
 regressions from landing, however more subtle, non-deterministic bugs
 can still get through, and these are the bugs that are currently
 plaguing developers with seemingly random test failures.
 
 Most of these bugs are not failures of the test system; they are real
 bugs.  Many of them have even been in OpenStack for a long time, but are
 only becoming visible now due to improvements in our tests.  That's not
 much help to developers whose patches are being hit with negative test
 results from unrelated failures.  We need to find a way to address the
 non-deterministic bugs that are lurking in OpenStack without making it
 easier for new bugs to creep in.
 
 The CI system and project infrastructure are not static.  They have
 evolved with the project to get to where they are today, and the
 challenge now is to continue to evolve them to address the problems
 we're seeing now.  The QA and Infrastructure teams recently hosted a
 sprint where we discussed some of these issues in depth.  This post from
 Sean Dague goes into a bit of the background: [1].  The rest of this
 email outlines the medium and long-term changes we would like to make to
 address these problems.
 
 [1] https://dague.net/2014/07/22/openstack-failures/
 
 ==Things we're already doing==
 
 The elastic-recheck tool[2] is used to identify random failures in
 test runs.  It tries to match failures to known bugs using signatures
 created from log messages.  It helps developers prioritize bugs by how
 frequently they manifest as test failures.  It also collects information
 on unclassified errors -- we can see how many (and which) test runs
 failed for an unknown reason and our overall progress on finding
 fingerprints for random failures.
 
 [2] http://status.openstack.org/elastic-recheck/
 
 We added a feature to Zuul that lets us manually promote changes to
 the top of the Gate pipeline.  When the QA team identifies a change that
 fixes a bug that is affecting overall gate stability, we can move that
 change to the top of the queue so that it may merge more quickly.
 
 We added the clean check facility in reaction to the January gate break
 down. While it does mean that any individual patch might see more tests
 run on it, it's now largely kept the gate queue at a countable number of
 hours, instead of regularly growing to more than a work day in
 length. It also means that a developer can Approve a code merge before
 tests have returned, and not ruin it for everyone else if there turned
 out to be a bug that the tests could catch.
 
 ==Future changes==
 
 ===Communication===
 We used to be better at communicating about the CI system.  As it and
 the project grew, we incrementally added to our institutional knowledge,
 but we haven't been good about maintaining that information in a form
 that new or existing contributors can consume to understand what's going
 on and why.
 
 We have started on a major effort in that direction that we call the
 infra-manual project -- it's designed to be a comprehensive user
 manual for the project infrastructure, including the CI process.  Even
 before that project is complete, we will write a document that
 summarizes the CI system and ensure it is included in new developer
 documentation and linked to from test results.
 
 There are also a number of ways for people to get involved in the CI
 system, whether focused on Infrastructure or QA, but it is not always
 clear how to do so.  We will improve our documentation to highlight how
 to contribute.
 
 ===Fixing Faster===
 
 We introduce bugs to OpenStack at some constant rate, which piles up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people time, and takes getting all the right
 people together at once to promote changes. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.
 
 [3] https://etherpad.openstack.org/p/gatetriage-june2014
 
 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found 

[openstack-dev] [Keystone] Removed admin role from admin user/tenant, can't add back

2014-07-24 Thread Pendergrass, Eric
In an effort to test ceilometer roles I removed the admin role from the
admin tenant and user.  Now I can't add it back since I don't have a
user/tenant combo with the admin role:

 

keystone user-role-add --role e4252b63c308470b8cb7f77c37d27632 --user
8c678720fb5b4e3bb18dee222d7d7933 --tenant 9229d9ffed3d4fe2aa00d7575acd7ada

You are not authorized to perform the requested action: admin_required
(Disable debug mode to suppress these details.) (HTTP 403)

 

Is there a way to do this in the mysql database if I know the
user/tenant/role IDs?  Or, is there another way with keystone client?

 

Thanks,

Eric



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] debug logs and defaults was (Thoughts on the patch test failure rate and moving forward)

2014-07-24 Thread Doug Hellmann

On Jul 24, 2014, at 6:05 PM, Robert Collins robe...@robertcollins.net wrote:

 On 25 July 2014 08:01, Sean Dague s...@dague.net wrote:
 
  I'd like us to think about whether there is anything we can do to make
  life easier in these kinds of hard debugging scenarios where the regular
 logs are not sufficient.
 
 Agreed. Honestly, though we do also need to figure out first fail
 detection on our logs as well. Because realistically if we can't debug
 failures from those, then I really don't understand how we're ever going
 to expect large users to.
 
 
 I'm so glad you said that :). In conversations with our users, and
 existing large deployers of Openstack, one thing has come through very
 consistently: our default logs are insufficient.
 
 We had an extensive discussion about this in the TripleO mid-cycle
 meetup, and I think we reached broad consensus on the following:
 - the defaults should be what folk are running in production
 - we don't want to lead on changing defaults - it's a big enough thing
 we want to drive the discussion but not work around it by changing our
 defaults
 - large clouds are *today* running debug (with a few tweaks to remove
 the most egregious log spammers and known security issues [like
 dumping tokens into logs])
 - AFAICT productised clouds (push-button deploy etc) are running
 something very similar
 - we would love it if developers *also* saw what users will see by
 default, since that will tend to both stop things getting too spammy,
 and too sparse.
 
 So - I know that's brief - what we'd like to do is to poll a slightly
 wider set of deployers - e.g. via a spec, perhaps some help from Tom

This one would be a good place for that conversation to start: 
https://review.openstack.org/#/c/91446/

 with the users and ops groups - and get a baseline of things that
 there is consensus on and things that aren't, and then just change the
 defaults to match. Further, to achieve the 'developers see the same
 thing as users' bit, we'd like to make devstack do what TripleO does -
 use defaults for logging levels, particularly in the gate.
 
 It's totally true that we have a good policy about logging and we're
 changing things to fit it, but that's the long-term play: short term,
 making the default meet our deployments seems relatively easy and
 immensely sane.
 
 -Rob
 -- 
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-24 Thread Sean Dague
On 07/24/2014 05:57 PM, Matthew Treinish wrote:
 On Wed, Jul 23, 2014 at 02:39:47PM -0700, James E. Blair wrote:
 OpenStack has a substantial CI system that is core to its development
 process.  The goals of the system are to facilitate merging good code,
 prevent regressions, and ensure that there is at least one configuration
 of upstream OpenStack that we know works as a whole.  The project
 gating technique that we use is effective at preventing many kinds of
 regressions from landing, however more subtle, non-deterministic bugs
 can still get through, and these are the bugs that are currently
 plaguing developers with seemingly random test failures.

 Most of these bugs are not failures of the test system; they are real
 bugs.  Many of them have even been in OpenStack for a long time, but are
 only becoming visible now due to improvements in our tests.  That's not
 much help to developers whose patches are being hit with negative test
 results from unrelated failures.  We need to find a way to address the
 non-deterministic bugs that are lurking in OpenStack without making it
 easier for new bugs to creep in.

 The CI system and project infrastructure are not static.  They have
 evolved with the project to get to where they are today, and the
 challenge now is to continue to evolve them to address the problems
 we're seeing now.  The QA and Infrastructure teams recently hosted a
 sprint where we discussed some of these issues in depth.  This post from
 Sean Dague goes into a bit of the background: [1].  The rest of this
 email outlines the medium and long-term changes we would like to make to
 address these problems.

 [1] https://dague.net/2014/07/22/openstack-failures/

 ==Things we're already doing==

 The elastic-recheck tool[2] is used to identify random failures in
 test runs.  It tries to match failures to known bugs using signatures
 created from log messages.  It helps developers prioritize bugs by how
 frequently they manifest as test failures.  It also collects information
 on unclassified errors -- we can see how many (and which) test runs
 failed for an unknown reason and our overall progress on finding
 fingerprints for random failures.

 [2] http://status.openstack.org/elastic-recheck/
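
 As a concrete illustration of what such a fingerprint looks like: each one
 is a small YAML file holding an Elasticsearch query that matches the log
 signature of a known bug (the bug number and message text below are
 invented for the example):

     # queries/1234567.yaml
     query: >
       message:"Timed out waiting for server to become ACTIVE"
       AND tags:"console.html"
       AND build_status:"FAILURE"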

 We added a feature to Zuul that lets us manually promote changes to
 the top of the Gate pipeline.  When the QA team identifies a change that
 fixes a bug that is affecting overall gate stability, we can move that
 change to the top of the queue so that it may merge more quickly.
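
 In practice that promotion is done by infra operators with the Zuul RPC
 client, roughly like this (the change numbers are invented and the exact
 flags are from memory, so treat it as a sketch):

     zuul promote --pipeline gate --changes 104771,2 104772,1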

 We added the clean check facility in reaction to the January gate
 breakdown. While it does mean that any individual patch might see more
 tests run on it, it has largely kept the gate queue at a countable number
 of hours, instead of regularly growing to more than a work day in
 length. It also means that a developer can Approve a code merge before
 tests have returned without ruining it for everyone else if there turns
 out to be a bug that the tests could catch.

 ==Future changes==

 ===Communication===
 We used to be better at communicating about the CI system.  As it and
 the project grew, we incrementally added to our institutional knowledge,
 but we haven't been good about maintaining that information in a form
 that new or existing contributors can consume to understand what's going
 on and why.

 We have started on a major effort in that direction that we call the
 infra-manual project -- it's designed to be a comprehensive user
 manual for the project infrastructure, including the CI process.  Even
 before that project is complete, we will write a document that
 summarizes the CI system and ensure it is included in new developer
 documentation and linked to from test results.

 There are also a number of ways for people to get involved in the CI
 system, whether focused on Infrastructure or QA, but it is not always
 clear how to do so.  We will improve our documentation to highlight how
 to contribute.

 ===Fixing Faster===

 We introduce bugs to OpenStack at some constant rate, and they pile up
 over time. Our systems currently treat all changes as equally risky and
 important to the health of the system, which makes landing code changes
 to fix key bugs slow when we're at a high reset rate. We've got a manual
 process of promoting changes today to get around this, but that's
 actually quite costly in people's time, and requires getting all the
 right people together at once. You can see a number of the
 changes we promoted during the gate storm in June [3], and it was no
 small number of fixes to get us back to a reasonably passing gate. We
 think that optimizing this system will help us land fixes to critical
 bugs faster.

 [3] https://etherpad.openstack.org/p/gatetriage-june2014

 The basic idea is to use the data from elastic recheck to identify that
 a patch is fixing a critical gate related bug. When one of these is
 found in the queues it will be given higher priority, including bubbling
 up 

Re: [openstack-dev] debug logs and defaults was (Thoughts on the patch test failure rate and moving forward)

2014-07-24 Thread Sean Dague
On 07/24/2014 06:44 PM, Doug Hellmann wrote:
 
 On Jul 24, 2014, at 6:05 PM, Robert Collins robe...@robertcollins.net wrote:
 
 On 25 July 2014 08:01, Sean Dague s...@dague.net wrote:

 I'd like us to think about whether there is anything we can do to make
 life easier in these kinds of hard debugging scenarios, where the regular
 logs are not sufficient.

 Agreed. Honestly, though, we also need to figure out first-fail
 detection from our logs. Because realistically, if we can't debug
 failures from those, then I really don't understand how we're ever going
 to expect large users to.


 I'm so glad you said that :). In conversations with our users and
 existing large deployers of OpenStack, one thing has come through very
 consistently: our default logs are insufficient.

 We had an extensive discussion about this in the TripleO mid-cycle
 meetup, and I think we reached broad consensus on the following:
 - the defaults should be what folk are running in production
 - we don't want to lead on changing defaults - it's a big enough thing
 that we want to drive the discussion rather than work around it by
 changing our defaults
 - large clouds are *today* running debug (with a few tweaks to remove
 the most egregious log spammers and known security issues, like
 dumping tokens into logs)
 - AFAICT productised clouds (push-button deploy etc) are running
 something very similar
 - we would love it if developers *also* saw what users will see by
 default, since that will tend to stop things getting both too spammy
 and too sparse.

 So - I know that's brief - what we'd like to do is to poll a slightly
 wider set of deployers - e.g. via a spec, perhaps with some help from Tom
 
 This one would be a good place for that conversation to start: 
 https://review.openstack.org/#/c/91446/

Right, we've kind of already been doing that for the last few months. :)

Assistance moving the ball forward is appreciated. I think we really need
to just land this stuff in phases, as even getting through the minor
adjustments in that spec (like the AUDIT change) is going to take a while.
A bunch of people have been working on it preemptively, which is good.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] - rebasing patches for CI

2014-07-24 Thread Kevin Benton
Cherry-picking onto the target branch requires an extra step and custom
code that I wanted to avoid.
Right now I can just pass the gerrit ref into devstack's local.conf as the
branch and everything works.
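
For concreteness, that looks roughly like the following in local.conf (the
project variables and the review ref here are invented examples):

    [[local|localrc]]
    NEUTRON_REPO=https://review.openstack.org/openstack/neutron
    NEUTRON_BRANCH=refs/changes/43/104443/2
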
 If there was a way to get that Zuul ref, I could just use that instead and
no new code would be required.

Is exposing that ref in a known format/location something the infra team
might consider?

Thanks


On Tue, Jul 22, 2014 at 4:16 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-07-21 11:36:43 -0700 (-0700), Kevin Benton wrote:
  I see. So then back to my other question, is it possible to get
  access to the same branch that is being passed to the OpenStack CI
  devstack tests?
 
  For example, in the console output I can see it uses a ref
  like refs/zuul/master/Z75ac747d605b4eb28d4add7fa5b99890.[1] Is
  that visible somewhere (other than in the logs, of course) that could be
  used in a third-party system?

 Right now, no. It's information passed from Zuul to a Jenkins master
 via Gearman, but as far as I know is currently only discoverable
 within the logs and the job parameters displayed in Jenkins. There
 has been some discussion in the past of Zuul providing some more
 detailed information to third-party systems (perhaps the capability
 to add them as additional Gearman workers) but that has never been
 fully fleshed out.
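
 For reference, upstream jobs consume those parameters in roughly this
 pattern (the variable names are the usual ZUUL_* job parameters; this is
 only a sketch, not something a third-party system can currently reach):

     git clone git://git.openstack.org/$ZUUL_PROJECT
     cd ${ZUUL_PROJECT##*/}
     git fetch $ZUUL_URL/$ZUUL_PROJECT $ZUUL_REF
     git checkout FETCH_HEAD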

 For the case of independent pipelines (which is all I would expect a
 third-party CI to have any interest in running for the purpose of
 testing a proposed change) it should be entirely sufficient to
 cherry-pick a patch/series from our Gerrit onto the target branch.
 Only _dependent_ pipelines currently make use of Zuul's capability
 to provide a common ref representing a set of different changes
 across multiple projects, since independent pipelines will only ever
 have an available ZUUL_REF on a single project (the same project for
 which the change is being proposed).
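
 Concretely, that cherry-pick approach needs nothing beyond git and the
 public Gerrit; a sketch, with the project and change ref invented for
 illustration (repeat the fetch/cherry-pick per patch in a series, oldest
 first):

     git clone git://git.openstack.org/openstack/neutron
     cd neutron
     git checkout master
     git fetch https://review.openstack.org/openstack/neutron refs/changes/43/104443/2
     git cherry-pick FETCH_HEAD
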
 --
 Jeremy Stanley

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

