Re: [openstack-dev] [Neutron][LBaaS] Failure when trying to rebase to latest v2 patches

2014-08-18 Thread Brandon Logan
Hey Vijay,
The reason that didn't work is that the netscaler-lbaas-driver-v2 branch had an 
older patch set of the 105610 change.  That means it was an entirely different 
commit, so after you rebased you ended up having two commits with a duplicate 
commit message (which means duplicate Change-Ids).  Git doesn't know that they 
are duplicates because they have different commit hashes, so to git they look 
like two totally different commits.

I pretty much do exactly what Doug said, except I don't reclone.  When you do 
git review -d it essentially just does a git checkout of that review and 
creates a new local branch.  If you just copied the checkout link in gerrit and 
ran that it would just put you in a detached head state.  Either one works, 
because you can still cherry-pick commits from either state and git review will 
still work as well.
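
For illustration, the two starting points look roughly like this (the patch set
number and the commit id below are placeholders, not taken from your review):

  # option 1: git-review creates a local branch for the dependency
  git review -d 105610
  # option 2: the checkout link from gerrit leaves you on a detached HEAD
  git fetch https://review.openstack.org/openstack/neutron refs/changes/10/105610/PATCHSET
  git checkout FETCH_HEAD
  # either way, re-apply your own change on top and push it back up
  git cherry-pick YOUR_COMMIT_ID
  git review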

The gerrit UI's rebase button is easy to use, but you should probably only use 
it if you have nothing to change.  Otherwise, use the git review -d and 
cherry-pick method.

Thanks,
Brandon

From: Vijay Venkatachalam [vijay.venkatacha...@citrix.com]
Sent: Sunday, August 17, 2014 11:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Failure when trying to rebase to 
latest v2 patches

This worked. Thanks!

But this procedure requires cloning again.

I wonder why rebase didn’t work but cherry-pick did. Maybe I should have tried 
the gerrit UI.

As said in the earlier mail to Brandon, this is what I did.

# netscaler-lbaas-driver-v2  == the topic branch where I had my original 
changes.
git checkout netscaler-lbaas-driver-v2
git review -d 105610
# the above command pulls the latest dependent changes into 
review/brandon_logan/bp/lbaas-api-and-objmodel-improvement
git checkout netscaler-lbaas-driver-v2
git rebase -i review/brandon_logan/bp/lbaas-api-and-objmodel-improvement
# At this point the rebase didn’t succeed, I resolved the conflicts
git add files_that_are_resolved
git commit -a --amend
git review
#At this point I got the errors.

Thanks,
Vijay V.
-Original Message-
From: Doug Wiegley [mailto:do...@a10networks.com]
Sent: 18 August 2014 08:54
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Failure when trying to rebase to 
latest v2 patches

From the looks of your error, you at least have a problem with more than one 
commit in your topic branch.

Here's the process that I use.  I'm not claiming it's the best, but it works 
without rewriting Brandon's commits.  Watch the git log at the end, and make 
sure the dependent hashes match what's in gerrit, before the 'git
review':

git clone https://review.openstack.org/openstack/neutron neutron-juno-update1
cd neutron-juno-update1/
git review -d 105610
git checkout -b bp/a10-lbaas-driver
*cherry-pick your commit from gerrit*
  (e.g. git fetch https://review.openstack.org/openstack/neutron refs/changes/37/106937/26
   git cherry-pick FETCH_HEAD)
*resolve conflicts*
git cherry-pick --continue
*make changes*
git commit -a --amend
git log -n5 --decorate --pretty=oneline
git review
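
As a sanity check before that final 'git review' (the commit ids here are
placeholders), the parent of your commit should be the latest patch set of the
dependency:

  git log -n2 --decorate --pretty=oneline
  # <your-commit-id>  (HEAD, bp/a10-lbaas-driver) Your change
  # <dependency-id>   Brandon's 105610 change -- compare this hash with the
  #                   current patch set shown on the 105610 review in gerrit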

If you're not making any changes, then you can just hit the 'rebase' button in 
the gerrit ui.

Thanks,
doug



On 8/17/14, 8:19 PM, Vijay Venkatachalam
vijay.venkatacha...@citrix.com wrote:

Hi Brandon,

I am trying to rebase Netscaler driver to the latest v2 patches as
mentioned in https://wiki.openstack.org/wiki/GerritWorkflow
But it failed during review submit

It failed with the following error

remote: Processing changes: refs: 1, done
To ssh://vijayvenkatacha...@review.openstack.org:29418/openstack/neutron.git
 ! [remote rejected] HEAD -> refs/publish/master/bp/netscaler-lbass-v2-driver (squash commits first)
error: failed to push some refs to
'ssh://vijayvenkatacha...@review.openstack.org:29418/openstack/neutron.git'


Any clues on how to proceed?

Thanks,
Vijay V.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Time to Samba! :-)

2014-08-18 Thread Alessandro Pilotti
Hi Thiago,

Like for the Windows case, where we have Heat templates for AD DC and other 
MSFT related workloads (Exchange, SQL Server, SharePoint, etc.) [1], the best 
place in OpenStack for a Samba 4 DC is a dedicated Heat template.

Heat is the de facto workload orchestration standard for OpenStack, so I'd 
definitely start from there.
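
For example, once such a template exists, bringing up a Samba 4 DC would just be
a stack creation; the template name and parameters below are made up for
illustration:

heat stack-create samba4-dc -f samba4-dc-template.yaml \
    -P "domain_name=cloud.example.com;dns_forwarder=8.8.8.8"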

That said, Keystone has AD support via LDAP. It'd be great to see some 
documentation for using a Samba 4 DC in place of a Windows DC.

Another area of interaction for Samba 4 is Cinder: we have code under review 
for exporting volumes over SMB, useful for Hyper-V compute nodes and other 
scenarios. [2]

Talking about Nova, in large deployments using Hyper-V compute nodes it's 
common to manage credentials with domain membership, which is quite useful for 
live migration in particular. I'd like to document the usage of a Samba 4 AD DC 
in this context, although the last time I tried I had issues with the Kerberos 
delegation required for live migration. Quite some time has passed, so it's 
definitely worth giving it another try.

Slightly outside of OpenStack territory (but still correlated to it), I'd also 
consider Ubuntu Juju, since it's possible to create relationships based on a 
Samba 4 DC charm and any other charm that needs domain membership. We have 
charms for Windows AD; it'd be great to add Samba 4 as an alternative.

Thanks,

Alessandro

[1] https://github.com/cloudbase/windows-heat-templates

[2] https://blueprints.launchpad.net/cinder/+spec/smbfs-volume-driver

On 16.08.2014, at 22:12, Martinx - ジェームズ 
thiagocmarti...@gmail.com wrote:

Hey Stackers,

 I'm wondering here... Samba4 is pretty solid (the upcoming 4.2 rocks), I'm using 
it on a daily basis as an AD DC controller, for both Windows and Linux 
Instances! With replication, file system ACLs - cifs, built-in LDAP, dynamic 
DNS with Bind9 as a backend (no netbios) and etc... Pretty cool!

 In the OpenStack ecosystem, there are awesome solutions like Trove, Solum, 
Designate and etc... Amazing times BTW! So, why not try to integrate Samba4, 
working as an AD DC, within OpenStack itself?!

 If yes, then, what is the best way/approach to achieve this?!

 I mean, for SQL, we have Trove, for iSCSI, Cinder, Nova uses Libvirt... Don't 
you guys think that it is time to have an OpenStack project for LDAP too? And 
since Samba4 comes with it, plus DNS, AD, Kerberos, etc., I think that it will 
be huge if we manage to integrate it with OpenStack.

 I think that it would be nice to have, for example: domains, users and groups 
management in Horizon, with each tenant having its own Administrator (not the 
Keystone global admin) to manage its Samba4 domains; that way they will be able 
to fully manage their own accounts, while allowing Keystone to authenticate 
against these users...

 Also, maybe Designate can have support for it too! I don't know for sure...

 Today, I'm doing this Samba integration manually: I have a Samba4 server that 
is external from OpenStack's point of view; each tenant/project has its own 
DNS domains, and when an instance boots up, I just need to do something like 
this (bootstrap):

--
echo 127.0.1.1 instance-1.tenant-1.domain-1.com instance-1 >> /etc/hosts
net ads join -U administrator
--

 To make this work, the instance just needs to use the Samba4 AD DC as its name 
server, configured in its /etc/resolv.conf and delivered by the DHCP Agent. The 
packages `samba-common-bin` and `krb5-user` are also required, along with a 
ready-to-use smb.conf file.
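
 As a user-data sketch, the whole bootstrap is roughly the following (the domain 
name is the same example as above, and the password handling is just a 
placeholder):

--
#!/bin/bash
# resolv.conf already points at the Samba4 AD DC via the DHCP agent;
# samba-common-bin, krb5-user and a ready-to-use smb.conf are in the image.
echo 127.0.1.1 $(hostname).tenant-1.domain-1.com $(hostname) >> /etc/hosts
net ads join -U administrator%CHANGE_ME
--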

 Then, ping instance-1.tenant-1.domain-1.com worldwide! It works for both IPv4 
and IPv6!!

 Also, Samba4 works okay with Disjoint Namespaces 
(http://technet.microsoft.com/en-us/library/cc731929(v=ws.10).aspx), so each 
tenant can have one or more domains and subdomains! Like *.realm.domain.com, 
*.domain.com, *.cloud-net-1.domain.com, *.domain2.com... All dynamically 
managed by Samba4 and Bind9!

 What about that?!

Cheers!
Thiago
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] publisher metering_secret in ceilometer.conf

2014-08-18 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Hi Folks,

I created a new pollster plugin for ceilometer by:

- adding a new item under the ceilometer.poll.central entry point in the 
setup.cfg file
- adding the implementation code inheriting plugin.CentralPollster
- adding a new source to pipeline.yaml as below:
---
- name: service_source
  interval: 600
  meters:
      - service.stat
  sinks:
      - meter_sink
---
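
For reference, a rough sequence to make sure the new pollster is actually loaded 
(the paths and service names are assumptions for a devstack-style setup):

cd /opt/stack/ceilometer
sudo python setup.py develop   # re-register the ceilometer.poll.central entry point
# restart ceilometer-agent-central and ceilometer-collector so the setup.cfg
# and pipeline.yaml changes take effect, then check:
ceilometer meter-list | grep service.stat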

But the new meter doesn't show up in the output of ceilometer meter-list.
Looking at the log of ceilometer-agent-central, it might be caused by an 
incorrect signature during dispatching:
--
2014-08-18 03:07:13.170 16528 DEBUG ceilometer.dispatcher.database [-] metering 
data service.stat for 7398ae3f-c866-4484-b975-19d121acb2b1 @ 
2014-08-18T03:07:13.137888: 0 record_metering_data 
/opt/stack/ceilometer/ceilometer/dispatcher/database.py:55
2014-08-18 03:07:13.171 16528 WARNING ceilometer.dispatcher.database [-] 
message signature invalid, discarding message: {u'counter_name': ...
---
Looking at the source code 
(ceilometer/dispatcher/database.py:record_metering_data()), it fails in 
verify_signature(). So I added a new item to ceilometer.conf:
-
[publisher_rpc]
metering_secret = 
-


And then the above warning log (message signature invalid) disappears, but it 
seems that record_metering_data() is NOT invoked at all, because the above 
debug log is not there either.

And when I then remove the metering_secret from ceilometer.conf, 
record_metering_data() is still not invoked.

My questions are:

-  For the issue of message signature invalid when invoking 
record_metering_data(), is adding the metering_secret item to ceilometer.conf 
the correct solution?

-  Is there anything wrong after adding metering_secret, such that 
record_metering_data() cannot be invoked?

-  What other config/source files do I need to modify if I want my 
new meter to show up in the ceilometer meter-list output?

-  Any other suggestions/comments?

Thanks in advance!
-Gary


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Implement RBD snapshots instead of QEMU snapshots

2014-08-18 Thread kerwin
I found a BP here.
https://blueprints.launchpad.net/nova/+spec/implement-rbd-snapshots-instead-of-qemu-snapshots

We have the same need for doing snapshots of rbd-backed instances (not a full 
copy, only a snapshot).

Does anyone have thoughts about this?
Are there any updates / patches on the BP, or has somebody already done this for OpenStack?

--
kerwin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Not able to Access Nova Rest Services

2014-08-18 Thread Soumya Acharya
HI All,
I am pretty new to OpenStack. I am trying to access the REST
services to create single/multiple images using the Nova REST API.


Somehow I am not able to access them. Can you please point me to some
documentation or an example?

-- 
Regards and Thanks
Soumya Kanti Acharya
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Live Migration Bug in vmware VCDriver

2014-08-18 Thread 한승진
Is there anybody working on below bug?

https://bugs.launchpad.net/nova/+bug/1192192

The comments end at 2014-03-26.

I guess we should fix the VCDriver source code.

If someone is working on this now, can you share how to solve the problem?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Rotating the weekly Neutron meeting

2014-08-18 Thread Oleg Bondarev
+1, thanks!


On Wed, Aug 13, 2014 at 6:05 PM, Kyle Mestery mest...@mestery.com wrote:

 Per this week's Neutron meeting [1], it was decided that offering a
 rotating meeting slot for the weekly Neutron meeting would be a good
 thing. This will allow for a much easier time for people in
 Asia/Pacific timezones, as well as for people in Europe.

 So, I'd like to propose we rotate the weekly as follows:

 Monday 2100UTC
 Tuesday 1400UTC

 If people are ok with these time slots, I'll set this up and we'll
 likely start with this new schedule in September, after the FPF.

 Thanks!
 Kyle

 [1]
 http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-08-11-21.00.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] publisher metering_secret in ceilometer.conf

2014-08-18 Thread Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Btw, there is no service.stat record/row in the meter table in the ceilometer 
database.

Regards,
Gary

From: Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Sent: Monday, August 18, 2014 3:45 PM
To: openstack-dev@lists.openstack.org
Cc: Duan, Li-Gong (Gary@HPServers-Core-OE-PSC)
Subject: [Ceilometer] publisher metering_secret in ceilometer.conf

Hi Folks,

I created a new pollster plugin for ceilometer by:

- adding a new item under the ceilometer.poll.central entry point in the 
setup.cfg file
- adding the implementation code inheriting plugin.CentralPollster
- adding a new source to pipeline.yaml as below:
---
- name: service_source
  interval: 600
  meters:
      - service.stat
  sinks:
      - meter_sink
---

But the new meter doesn't show up in the output of ceilometer meter-list.
Looking at the log of ceilometer-agent-central, it might be caused by an 
incorrect signature during dispatching:
--
2014-08-18 03:07:13.170 16528 DEBUG ceilometer.dispatcher.database [-] metering 
data service.stat for 7398ae3f-c866-4484-b975-19d121acb2b1 @ 
2014-08-18T03:07:13.137888: 0 record_metering_data 
/opt/stack/ceilometer/ceilometer/dispatcher/database.py:55
2014-08-18 03:07:13.171 16528 WARNING ceilometer.dispatcher.database [-] 
message signature invalid, discarding message: {u'counter_name': ...
---
Looking at the source code 
(ceilometer/dispatcher/database.py:record_metering_data()), it fails in 
verify_signature(). So I added a new item to ceilometer.conf:
-
[publisher_rpc]
metering_secret = 
-


And then the above warning log (message signature invalid) disappears, but it 
seems that record_metering_data() is NOT invoked at all, because the above 
debug log is not there either.

And when I then remove the metering_secret from ceilometer.conf, 
record_metering_data() is still not invoked.

My questions are:

-  For the issue of message signature invalid when invoking 
record_metering_data(), is adding the metering_secret item to ceilometer.conf 
the correct solution?

-  Is there anything wrong after adding metering_secret, such that 
record_metering_data() cannot be invoked?

-  What other config/source files do I need to modify if I want my 
new meter to show up in the ceilometer meter-list output?

-  Any other suggestions/comments?

Thanks in advance!
-Gary


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Philip Cheong
I think it's a very interesting test for docker. I too have been thinking
about this for some time, to try and dockerise OpenStack services, but as
the usual story goes, I have plenty of things I'd love to try, and there are
only so many hours in a day...

Would definitely be interested to hear if anyone has attempted this and
what the outcome was.

Any suggestions on what the most appropriate service would be to begin with?


On 14 August 2014 14:54, Jay Lau jay.lau@gmail.com wrote:

 I see a few mentions of OpenStack services themselves being containerized
 in Docker. Is this a serious trend in the community?

 http://allthingsopen.com/2014/02/12/why-containers-for-openstack-services/

 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
*Philip Cheong*
*Elastx *| Public and Private PaaS
email: philip.che...@elastx.se
office: +46 8 557 728 10
mobile: +46 702 8170 814
twitter: @Elastx https://twitter.com/Elastx
http://elastx.se
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-08-18 Thread Ihar Hrachyshka

On 17/08/14 02:09, Angus Lees wrote:
 
 On 16 Aug 2014 06:09, Doug Hellmann d...@doughellmann.com wrote:
 
 
 On Aug 15, 2014, at 9:29 AM, Ihar Hrachyshka
  ihrac...@redhat.com wrote:
 
  Some updates on the matter:
 
 - oslo-spec was approved with narrowed scope which is now
 'enabled mysqlconnector as an alternative in gate' instead of
 'switch the default db driver to mysqlconnector'. We'll revisit
 the switch part the next cycle once we have the new driver
 running in gate and real benchmarking is heavy-lifted.
 
 - there are several patches that are needed to make devstack
 and tempest passing deployment and testing. Those are collected
 under the hood of: https://review.openstack.org/#/c/114207/ Not
 much of them.
 
 - we'll need a new oslo.db release to bump versions (this is
 needed to set raise_on_warnings=False for the new driver, which
 was incorrectly set to True in sqlalchemy till very recently).
 This is expected to be released this month (as per Roman
 Podoliaka).
 
 This release is currently blocked on landing some changes in
 projects
 using the library so they don't break when the new version starts
 using different exception classes. We're tracking that work in 
 https://etherpad.openstack.org/p/sqla_exceptions_caught
 
 It looks like we?re down to 2 patches, one for cinder
 (https://review.openstack.org/#/c/111760/) and one for glance 
 (https://review.openstack.org/#/c/109655). Roman, can you verify
 that those are the only two projects that need changes for the
 exception issue?
 
 
 - once the corresponding patch for sqlalchemy-migrate is
 merged, we'll also need a new version released for this.
 
 So we're going for a new version of sqlalchemy?  (We have a
 separate workaround for raise_on_warnings that doesn't require the
 new sqlalchemy release if this brings too many other issues)

Wrong. We're going for a new version of *sqlalchemy-migrate*. Which is
the code that we inherited from Mike and currently track in stackforge.

 
 - on PyPI side, no news for now. The last time I've heard from
 Geert (the maintainer of MySQL Connector for Python), he was
 working on this. I suspect there are some legal considerations
 running inside Oracle. I'll update once I know more about
 that.
 
 If we don?t have the new package on PyPI, how do we plan to
 include it
 in the gate? Are there options to allow an exception, or to make
 the mirroring software download it anyway?
 
 We can test via devstack without waiting for pypi, since devstack
 will install via rpms/debs.

I expect that it will be settled. I have no indication that the issue
is unsolvable, it will just take a bit more time than we're accustomed
to. :)

At the moment, we install MySQLdb from distro packages for devstack.
Same applies to new driver. It will be still great to see the package
published on PyPI so that we can track its version requirements
instead of relying on distros to package it properly. But I don't see
it as a blocker.

Also, we will probably be able to run with other drivers supported by
SQLAlchemy once all the work is done.

 
 - Gus
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-08-18 Thread Thierry Carrez
Mark McLoughlin wrote:
 [...]
 I don't see how any self-respecting open-source project can throw a
 release over the wall and have no ability to address critical bugs with
 that release until the next release 6 months later which will also
 include a bunch of new feature work with new bugs. That's not a distro
 maintainer point of view.

I agree that the job changed a bit since those early days, and now apart
from a very small group of stable specialists (mostly the stable release
managers), everyone else on stable-core is actually specialized in a
given project. It would make sense for each project to have a set of
dedicated stable liaisons that would work together with stable release
managers in getting critical bugfixes to stable branch point releases.
Relying on the same small group of people now that we have 10+ projects
to cover is unreasonable.

There are two issues to solve before we do that, though. The projects
have to be OK with taking on that extra burden (it becomes their
responsibility to dedicate resources to stable branches). And we need to
make sure the stable branch guidelines are well communicated.

 [...]
 That's quite a leap to say that -core team members will be so incapable
 of the appropriate level of conservatism that the branch will be killed.
 
 The idea behind not automatically granting +2 on stable/* to -core
 members was simply we didn't want people diving in and approving
 unsuitable stuff out of ignorance.
 
 I could totally see an argument now that everyone is a lot more familiar
 with gerrit, the concept of stable releases is well established and we
 should be able to trust -core team members to learn how to make the risk
 vs benefit tradeoffs needed for stable reviews.

The question here is whether every -core member on a project should
automatically be on stable-core (and we can reuse the same group) or do
we have to maintain two groups.

From my experience reviewing stable branch changes, I see two types of
issues with regular core doing stable reviews. There is the accidental
stable/* review, where the person thinks he is reviewing a master
review. Gerrit makes it look extremely similar, so more often than not,
we have -core members wondering why they don't have +2 on review X, or
core members doing a code review (criticizing the code again) rather
than a stable review (checking the type of change and the backport).
Maybe we could solve that by making gerrit visually different on stable
reviews ?

And then there is the opportunistic review, where a core member bends
the stable rules because he wants a specific fix backported. So we get
database migrations, new configuration options, or default behavior
changes +1ed or +2ed in stable. When we discuss those on the review, the
-core person generally disagrees with the stable rules and would like
them relaxed.

This is why I tend to prefer an opt-in system where the core member
signs up to do stable review. He clearly says he agrees with the stable
rules and will follow them. He also signs up to do enough of them to
limit the opportunistic and accidental issues.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-08-18 Thread Victor Sergeyev
Hello Doug, All.

 This release is currently blocked on landing some changes in projects
using the library so they don’t break when the new version starts using
different exception classes. We’re tracking that work in
https://etherpad.openstack.org/p/sqla_exceptions_caught

 It looks like we’re down to 2 patches, one for cinder (
https://review.openstack.org/#/c/111760/) and one for glance (
https://review.openstack.org/#/c/109655).

At the moment these patches are merged, so the exception issue was fixed in
all core OS projects.

But unfortunately, there is another blocker for the oslo.db release - Heat
uses BaseMigrationTestCase class, which was removed from oslo.db in patch
https://review.openstack.org/#/c/93424/ , so the new oslo.db release will
break unittests in Heat. Here is the patch, which should fix this issue -
https://review.openstack.org/#/c/109658/

I really hope that this patch is the last release blocker :)
Roman, folks - please correct me if I'm missing something



On Fri, Aug 15, 2014 at 11:07 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Aug 15, 2014, at 9:29 AM, Ihar Hrachyshka ihrac...@redhat.com wrote:

  Some updates on the matter:
 
  - oslo-spec was approved with narrowed scope which is now 'enabled
  mysqlconnector as an alternative in gate' instead of 'switch the
  default db driver to mysqlconnector'. We'll revisit the switch part
  the next cycle once we have the new driver running in gate and real
  benchmarking is heavy-lifted.
 
  - there are several patches that are needed to make devstack and
  tempest passing deployment and testing. Those are collected under the
  hood of: https://review.openstack.org/#/c/114207/ Not much of them.
 
  - we'll need a new oslo.db release to bump versions (this is needed to
  set raise_on_warnings=False for the new driver, which was incorrectly
  set to True in sqlalchemy till very recently). This is expected to be
  released this month (as per Roman Podoliaka).

 This release is currently blocked on landing some changes in projects
 using the library so they don’t break when the new version starts using
 different exception classes. We’re tracking that work in
 https://etherpad.openstack.org/p/sqla_exceptions_caught

 It looks like we’re down to 2 patches, one for cinder (
 https://review.openstack.org/#/c/111760/) and one for glance (
 https://review.openstack.org/#/c/109655). Roman, can you verify that
 those are the only two projects that need changes for the exception issue?

 
  - once the corresponding patch for sqlalchemy-migrate is merged, we'll
  also need a new version released for this.
 
  - on PyPI side, no news for now. The last time I've heard from Geert
  (the maintainer of MySQL Connector for Python), he was working on
  this. I suspect there are some legal considerations running inside
  Oracle. I'll update once I know more about that.

 If we don’t have the new package on PyPI, how do we plan to include it in
 the gate? Are there options to allow an exception, or to make the mirroring
 software download it anyway?

 Doug

 
  - once all the relevant patches land in affected projects and
  devstack, I'm going to introduce a separate gate job to run against
  mysqlconnector.
 
  Cheers,
  /Ihar
 
  On 22/07/14 15:03, Ihar Hrachyshka wrote:
   FYI: I've moved the spec to oslo space since the switch is not
   really limited to neutron, and most of coding is to be done in
   oslo.db (though not much anyway).
  
   New spec: https://review.openstack.org/#/c/108355/
  
   On 09/07/14 13:17, Ihar Hrachyshka wrote:
   Hi all,
  
   Multiple projects are suffering from db lock timeouts due to
   deadlocks deep in mysqldb library that we use to interact with
   mysql servers. In essence, the problem is due to missing
   eventlet support in mysqldb module, meaning when a db lock is
   encountered, the library does not yield to the next green thread,
   allowing other threads to eventually unlock the grabbed lock, and
   instead it just blocks the main thread, that eventually raises
   timeout exception (OperationalError).
  
   The failed operation is not retried, leaving failing request not
served. In Nova, there is a special retry mechanism for
   deadlocks, though I think it's more a hack than a proper fix.
  
   Neutron is one of the projects that suffer from those timeout
   errors a lot. Partly it's due to lack of discipline in how we do
   nested calls in l3_db and ml2_plugin code, but that's not
   something to change in foreseeable future, so we need to find
   another solution that is applicable for Juno. Ideally, the
   solution should be applicable for Icehouse too to allow
   distributors to resolve existing deadlocks without waiting for
   Juno.
  
   We've had several discussions and attempts to introduce a
   solution to the problem. Thanks to oslo.db guys, we now have more
   or less clear view on the cause of the failures and how to easily
   fix them. The solution is to switch mysqldb to 

Re: [openstack-dev] Live Migration Bug in vmware VCDriver

2014-08-18 Thread Jay Lau
It seems that VCDriver does not support live migration as of now.

I recall that at the ATL summit, the VMware team was going to do some
enhancements to enable live migration:
1) Make sure one nova compute can only manage one cluster or resource pool;
this ensures VMs in different clusters/resource pools can migrate to
each other.
2) As of now, I see that VCDriver does not implement
check_can_live_migrate_destination(), which causes live migration to
always fail when using VCDriver.

Thanks.


2014-08-18 16:14 GMT+08:00 한승진 yongi...@gmail.com:

 Is there anybody working on below bug?

 https://bugs.launchpad.net/nova/+bug/1192192

 The comments end at 2014-03-26.

 I guess we should fix the VCDriver source code.

 If someone is working on this now, can you share how to solve the problem?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Jay Lau
I see that there are some OpenStack docker images in the public docker repo;
perhaps you can check them on github to see how to use them (a quick pull/run
example follows the listing below).

[root@db03b04 ~]# docker search openstack
NAME
DESCRIPTION STARS OFFICIAL
AUTOMATED
ewindisch/dockenstackOpenStack development environment
(using D...   6[OK]
jyidiego/openstack-clientAn ubuntu 12.10 LTS image that has
nova, s...   1
dkuffner/docker-openstack-stress A docker container for openstack
which pro...   0[OK]
garland/docker-openstack-keystone
0[OK]
mpaone/openstack
0
nirmata/openstack-base
0
balle/openstack-ipython2-client  Features Python 2.7.5, Ipython
2.1.0 and H...   0
booleancandy/openstack_clients
0[OK]
leseb/openstack-keystone
0
raxcloud/openstack-client
0
paulczar/openstack-agent
0
booleancandy/openstack-clients
0
jyidiego/openstack-client-rumm-ansible
0
bodenr/jumpgate  SoftLayer Jumpgate WSGi OpenStack
REST API...   0[OK]
sebasmagri/docker-marconiDocker images for the Marconi
Message Queu...   0[OK]
chamerling/openstack-client
0[OK]
centurylink/openstack-cli-wetty  This image provides a Wetty
terminal with ...   0[OK]
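
For a quick experiment, pulling and entering one of those images is enough to
poke around -- the run flags below are assumptions, so check the image's README
first:

docker pull ewindisch/dockenstack
docker run -it --privileged ewindisch/dockenstack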


2014-08-18 16:47 GMT+08:00 Philip Cheong philip.che...@elastx.se:

 I think it's a very interesting test for docker. I too have been thinking
 about this for some time, to try and dockerise OpenStack services, but as
 the usual story goes, I have plenty of things I'd love to try, and there are
 only so many hours in a day...

 Would definitely be interested to hear if anyone has attempted this and
 what the outcome was.

 Any suggestions on what the most appropriate service would be to begin
 with?


 On 14 August 2014 14:54, Jay Lau jay.lau@gmail.com wrote:

 I see a few mentions of OpenStack services themselves being containerized
 in Docker. Is this a serious trend in the community?

 http://allthingsopen.com/2014/02/12/why-containers-for-openstack-services/

 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 *Philip Cheong*
 *Elastx *| Public and Private PaaS
 email: philip.che...@elastx.se
 office: +46 8 557 728 10
 mobile: +46 702 8170 814
 twitter: @Elastx https://twitter.com/Elastx
 http://elastx.se

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Thierry Carrez
Doug Hellmann wrote:
 On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
 Let me try to say it another way.  You seemed to say that it wasn't much
 to ask given the rate at which things happen in OpenStack.  I would
 argue that given the rate, we should not try to ask more of individuals
 (like this proposal) and risk burnout.  Instead, we should be doing our
  best to be more open and inclusive to give the project the best chance to
 grow, as that's the best way to get more done.

 I think an increased travel expectation is a raised bar that will hinder
 team growth, not help it.
 
 +1, well said.

Sorry, I was away for a few days. This is a topic I have a few strong
opinions on :)

There is no denial that the meetup format is working well, comparatively
better than the design summit format. There is also no denial that
requiring 4 trips per year for a core dev is unreasonable. Where is
the limit ? Wouldn't we be more productive and aligned if we did one per
month ? No, the question is how to reach a sufficient level of focus and
alignment while keeping the number of mandatory travel at 2 per year.

I don't think our issue comes from not having enough F2F time. Our issue
is that the design summit no longer reaches its objectives of aligning
key contributors on a common plan, and we need to fix it.

We established the design summit as the once-per-cycle opportunity to
have face-to-face time and get alignment across the main contributors to
a project. That used to be completely sufficient, but now it doesn't
work as well... which resulted in alignment and team discussions to be
discussed at mid-cycle meetups instead. Why ? And what could we change
to have those alignment discussions at the design summit again ?

Why are design summits less productive than mid-cycle meetups these days?
Is it because there are too many non-contributors in the design summit
rooms ? Is it the 40-min format ? Is it the distractions (having talks
to give somewhere else, booths to attend, parties and dinners to be at)
? Is it that beginning of cycle is not the best moment ? Once we know
WHY the design summit fails its main objective, maybe we can fix it.

My gut feeling is that having a restricted audience and a smaller group
lets people get to the bottom of an issue and reach consensus. And that
you need at least half a day or a full day of open discussion to reach
such alignment. And that it's not particularly great to get such
alignment in the middle of the cycle, getting it at the start is still
the right way to align with the release cycle.

Nothing prevents us from changing part of the design summit format (even
the Paris one!), and restricting attendance to some of the sessions. And if
the main issue is the distraction from the conference colocation, we
might have to discuss the future of co-location again. In that 2 events
per year objective, we could make the conference the optional cycle
thing, and a developer-oriented specific event the mandatory one.

If we manage to have alignment at the design summit, then it doesn't
spell the end of the mid-cycle things. But then, ideally the extra
mid-cycle gatherings should be focused on getting specific stuff done,
rather than general team alignment. Think workshop/hackathon rather than
private gathering. The goal of the workshop would be published in
advance, and people could opt to join that. It would be totally optional.

Cheers,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [Horizon] [Heat] Merlin project (formerly known as cross-project UI library for Heat/Mistral/Murano/Solum) plans for PoC and more

2014-08-18 Thread Timur Sufiev
David,

I'm happy to hear that :)! After thinking a bit, I came up with the
following strategy for further Merlin development: make all the
commits into a separate repository (stackforge/merlin) at least until
the PoC is ready. This will allow keeping the project history more
granular instead of updating one large commit inside openstack/horizon
gerrit (thus also lessening the burden on Horizon reviewers). Once
Merlin proceeds from the experimental/PoC phase to implementing
a more elaborate spec, it will be just the right time for it to join
Horizon.

On Wed, Aug 13, 2014 at 2:48 AM, Lyle, David david.l...@hp.com wrote:
 On 8/6/14, 1:41 PM, Timur Sufiev tsuf...@mirantis.com wrote:

Hi, folks!

Two months ago there was an announcement in ML about gathering the
requirements for cross-project UI library for
Heat/Mistral/Murano/Solum [1]. The positive feedback in related
googledoc [2] and some IRC chats and emails that followed convinced me
that I'm not the only person interested in it :), so I'm happy to make
the next announcement.

The project finally has got its name - 'Merlin' (making complex UIs is
a kind of magic), Openstack wiki page [3] and all other stuff like
stackforge repo, launchpad page and IRC channel (they are all
referenced in [3]). For those who don't like clicking the links, here
is quick summary.

Merlin aims to provide a convenient client side framework for building
rich UIs for Openstack projects dealing with complex input data with
lot of dependencies and constraints (usually encoded in YAML format
via some DSL) - projects like Heat, Murano, Mistral or Solum. The
ultimate goal for such UI is to save users from reading comprehensive
documentation just in order to provide correct input data, thus making
the UI of these projects more user-friendly. If things go well for
Merlin, it could be eventually merged into Horizon library (I'll spare
another option for the end of this letter).

The framework trying to solve this ambitious task is facing at least 2
challenges:
(1) enabling the proper UX patterns and
(2) dealing with complexities of different projects' DSLs.

Having worked on DSL things in Murano project before, I'm planning at
first to deal with the challenge (2) in the upcoming Merlin PoC. So,
here is the initial plan: design an in-framework object model (OM)
that could be translated back and forth into the target project's DSL. This
OM is meant to be synchronised with the visual elements shown on the browser
canvas. The target project is Heat with its HOT templates - it has the
most well-established syntax among other projects and comprehensive
documentation.

Considering the challenge (1), not being a dedicated UX engineer, I'm
planning to start with some rough UI concepts [4] and gradually
improve them relying on community feedback, and especially, Openstack
UX group. If anybody from the UX team (or any other team!) is willing
to be involved to a greater degree than just giving some feedback,
you're are enormously welcome! Join Merlin, it will be fun :)!

Finally, with this announcement I'd like to start a discussion with
Horizon community. As far as I know, Horizon in its current state
lacks such UI toolkit as Merlin aims to provide. Would it be by any
chance possible for the Merlin project to be developed from the very
beginning as part of Horizon library? This choice has its pros and
cons I'm aware of, but I'd like to hear the opinions of Horizon
developers on that matter.

 I would like to see this toolset built into Horizon. That will make it
 accessible to integrated projects like Heat that Horizon already supports,
 but will also allow other projects to use the horizon library as a
 building block to providing managing project specific DSLs.

 David


[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-June/037054.html
[2]
https://docs.google.com/a/mirantis.com/document/d/19Q9JwoO77724RyOp7XkpYmA
Lwmdb7JjoQHcDv4ffZ-I/edit#
[3] https://wiki.openstack.org/wiki/Merlin
[4] https://wiki.openstack.org/wiki/Merlin/SampleUI

--
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift global cluster replication latency...

2014-08-18 Thread Shyam Prasad N
Hi,

Went through the following link:
https://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/

I'm trying to simulate the 2-region 3-replica scenario. The document says
that the 3rd replica will be asynchronously moved to the remote location
with a 2-region setup.

What I want to understand is whether the latency of this asynchronous
copy can be tweaked/monitored? I couldn't find any configuration parameters
to tweak this. Do we have such an option? Or is it done on a best-effort
basis?

Thanks in advance...

-- 
-Shyam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] stable branches failure to handle review backlog

2014-08-18 Thread Ihar Hrachyshka

On 18/08/14 11:00, Thierry Carrez wrote:
 Mark McLoughlin wrote:
 [...] I don't see how any self-respecting open-source project can
 throw a release over the wall and have no ability to address
 critical bugs with that release until the next release 6 months
 later which will also include a bunch of new feature work with
 new bugs. That's not a distro maintainer point of view.
 
 I agree that the job changed a bit since those early days, and now
 apart from a very small group of stable specialists (mostly the
 stable release managers), everyone else on stable-core is actually
 specialized in a given project. It would make sense for each
 project to have a set of dedicated stable liaisons that would
 work together with stable release managers in getting critical
 bugfixes to stable branch point releases. Relying on the same small
 group of people now that we have 10+ projects to cover is
 unreasonable.

Indeed, not everyone feels safe when reviewing stable backports for
projects he is not really involved in. For example, I'm more or less
fine with Neutron and Oslo, and some Nova patches, but once it gets to
e.g. Horizon or Keystone, I feel very worried that I even have +2 for
those branches, and generally don't apply it unless I'm completely
sure the fix is obvious, and the bug is clear for outsider (=me).

The problem is that some projects lack people that are both confident
in the code base and are dedicated to support stable maint effort. So
while Neutron or Nova do not feel great deficit in the number of
maintainers who care about their stable branches, for other projects
it may be really hard to find anyone to review the code.

Dedicated liaisons would solve that.

 
 There are two issues to solve before we do that, though. The
 projects have to be OK with taking on that extra burden (it becomes
 their responsibility to dedicate resources to stable branches). And
 we need to make sure the stable branch guidelines are well
 communicated.
 
 [...] That's quite a leap to say that -core team members will be
 so incapable of the appropriate level of conservatism that the
 branch will be killed.
 
 The idea behind not automatically granting +2 on stable/* to
 -core members was simply we didn't want people diving in and
 approving unsuitable stuff out of ignorance.
 
 I could totally see an argument now that everyone is a lot more
 familiar with gerrit, the concept of stable releases is well
 established and we should be able to trust -core team members to
 learn how to make the risk vs benefit tradeoffs needed for stable
 reviews.
 
 The question here is whether every -core member on a project
 should automatically be on stable-core (and we can reuse the same
 group) or do we have to maintain two groups.
 
 From my experience reviewing stable branch changes, I see two types
 of issues with regular core doing stable reviews. There is the
 accidental stable/* review, where the person thinks he is reviewing
 a master review. Gerrit makes it look extremely similar, so more
 often than not, we have -core members wondering why they don't have
 +2 on review X, or core members doing a code review (criticizing
 the code again) rather than a stable review (checking the type of
 change and the backport). Maybe we could solve that by making
 gerrit visually different on stable reviews ?

I would be glad if stable patches are not shown by default in queries
unless explicitly requested. That would solve the issue of people
reviewing the code of backports, and others.

I would also like to have some way to explicitly mark a stable branch
patch as being 'not a trivial cherry-pick'. Sometimes stable branch
patches are not really similar to their original master counterparts. Or
we even need to push a stable-branch-only patch because the issue or
the affected code is not in master. In that case, stable maintainers
are not the ones to do review, because the code should be checked by
the real cores of the affected projects, and checking technical
details and general applicability of the patch for stable branch is
not enough. Something like a button to mark the backport as requiring
special attention from cores.

Both the split and a way to drive cores' attention to specific
backport would remove burden from both project cores and stable
maintainers that sometimes struggle to review non-trivial bug fixes.

 
 And then there is the opportunistic review, where a core member
 bends the stable rules because he wants a specific fix backported.
 So we get database migrations, new configuration options, or
 default behavior changes +1ed or +2ed in stable. When we discuss
 those on the review, the -core person generally disagrees with the
 stable rules and would like them relaxed.
 
 This is why I tend to prefer an opt-in system where the core
 member signs up to do stable review. He clearly says he agrees with
 the stable rules and will follow them. He also signs up to do
 enough of them to 

[openstack-dev] [mistral] Team meeting reminder - 08/18/2014

2014-08-18 Thread Renat Akhmerov
Hi mistral folks,

I’d like to remind everyone that we’ll have a team meeting today, as usual at 16.00 UTC, 
at #openstack-meeting.

Agenda:

* Review action items
* Current status (progress, issues, roadblocks)
* Further plans
* Release 0.1 scope (BPs and bugs)
* Open discussion

Please also look at https://wiki.openstack.org/wiki/Meetings/MistralAgenda if 
you need to find meeting archive.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Contact information required for ICLA -- getting a server error

2014-08-18 Thread Udi Kalifon
Hi.

I am trying to update my contact information in order to submit my first gerrit 
review. I go to review.openstack.org and log in, then go to my account settings 
and click on Contact Information. I provide my address and click Save 
Changes and get:

Code Review - Error
Server Error
Cannot store contact information

This has been going on for 2 days already... Who is the admin responsible?

Thanks in advance,
Udi.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Contact information required for ICLA -- getting a server error

2014-08-18 Thread Tom Fifield
On 18/08/14 19:01, Udi Kalifon wrote:
 Hi.
 
 I am trying to update my contact information in order to submit my first 
 gerrit review. I go to review.openstack.org and log in, then go to my account 
 settings and click on Contact Information. I provide my address and click 
 Save Changes and get:
 
 Code Review - Error
 Server Error
 Cannot store contact information
 
 This has been going on for 2 days already... Who is the admin responsible?
 
 Thanks in advance,
 Udi.

Hi Udi,

Sorry to hear you're having troubles!

One of the most common causes of this error is that the email address
you entered in your OpenStack Foundation profile does not match the
Preferred Email you set in Gerrit.

Can you double check this and get back to us if it works/doesn't work?



FAQ Link:
https://wiki.openstack.org/wiki/CLA-FAQ#When_trying_to_sign_the_new_ICLA_and_include_contact_information.2C_why_am_I.27m_getting_an_error_message_saying_that_my_E-mail_address_doesn.27t_correspond_to_a_Foundation_membership.3F



Regards,


Tom


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Picking a Name for the Tempest Library

2014-08-18 Thread Christopher Yeoh
On Sat, 16 Aug 2014 18:27:19 +0200
Marc Koderer m...@koderer.com wrote:

 Hi all,
 
 On 15.08.2014 at 23:31, Jay Pipes jaypi...@gmail.com wrote:
  
  I suggest that tempest should be the name of the import'able
  library, and that the integration tests themselves should be what
  is pulled out of the current Tempest repository, into their own
   repo called openstack-integration-tests or os-integration-tests.
 
 why not keeping it simple:
 
 tempest: importable test library
 tempest-tests: all the test cases
 
 Simple, obvious and clear ;)

+1

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Live Migration Bug in vmware VCDriver

2014-08-18 Thread 한승진
Thanks for reply Jay~!

From Icehouse, one nova-compute can manage multiple clusters, I think.

In this case, how should we proceed in order to achieve the live migration
functionality?

Thanks.

John Haan.


2014-08-18 19:00 GMT+09:00 Jay Lau jay.lau@gmail.com:

 It seems that VCDriver does not support live migration as of now.

 I recall that at the ATL summit, the VMware team was going to do some
 enhancements to enable live migration:
 1) Make sure one nova compute can only manage one cluster or resource
 pool; this ensures VMs in different clusters/resource pools can migrate
 to each other.
 2) As of now, I see that VCDriver does not implement
 check_can_live_migrate_destination(), which causes live migration to
 always fail when using VCDriver.

 Thanks.


 2014-08-18 16:14 GMT+08:00 한승진 yongi...@gmail.com:

 Is there anybody working on below bug?

 https://bugs.launchpad.net/nova/+bug/1192192

 The comments are ends 2014-03-26

 I guess we should fix the VCDriver source codes.

 If someone is doing now, can you share how to solve the problem?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] running pep8 tests much faster

2014-08-18 Thread Daniel P. Berrange
We recently had a change merged to the run_tests.sh script that Nova
developers would really benefit from knowing about:

   https://review.openstack.org/#/c/110746/

Basically, it provides a way to run the pep8 tests *only* against the files
which you have actually touched in your patch. For most patches this has a
huge benefit in running time since instead of checking 3000+ files it only
has to check a handful.

Old way, checking all of nova codebase at once:

$ time ./run_tests.sh -p
  Running flake8 ...

  real  2m4.410s
  user  2m3.530s
  sys   0m0.637s


New way, checking only changed files:

  $ time ./run_tests.sh -8
  Running flake8 on nova/tests/virt/libvirt/test_driver.py 
nova/virt/libvirt/driver.py 

  real  0m8.117s
  user  0m7.785s
  sys   0m0.287s

I'm guessing I know which most people will prefer :)


NB, this only checks files in the most recent patch in your checkout. ie
if you are sitting on a 10-patch series it is only validating the last
patch in that series. Probably not an issue for most people since you
need to explicitly check each patch individually during rebase regardless.

In summary, you can change 'run_tests.sh -p' to 'run_tests.sh -8' and be
generally much happier :-)

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder][qa] Volume attachment not visible on the guest

2014-08-18 Thread Attila Fazekas
Hi All,

I have a `little` trouble with the volume attachment stability.

The test_stamp_pattern test has been skipped for a long time; you
can see what would happen if it were enabled now [1].

There is a workaround-style way of enabling that test [2].

I suspected the ACPI hot-plug event is not detected by the kernel
during some phases of boot, for example after the first PCI scan
but before PCI hot-plug is initialized.

Does the above blind spot really exist?

If yes, is it something that needs to be handled by the init system, or
does the kernel need to ensure all devices are discovered before calling init?

A long time ago I had trouble reproducing the above issue,
but now I was able to confirm that a PCI rescan can solve it:
'echo 1 > /sys/bus/pci/rescan' (via ssh to the guest)
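
A minimal way to confirm that from inside the guest (device names here are just
examples):

ls /dev/vd*                     # the attached volume is not visible yet
echo 1 > /sys/bus/pci/rescan    # force a PCI rescan
dmesg | tail                    # hotplug messages for the new device appear
ls /dev/vd*                     # the new disk (e.g. /dev/vdb) now shows up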

Recently we found `another type` of volume attachment issue,
when booting from volume. [3]

Here I would expect the PCI device to be ready before
the VM actually starts, but according to the
console log, the disk device is missing.

When I am booting from an iSCSI volume, is the virtual device
guaranteed by nova/cinder/libvirt/qemu/whatever
to be present at the first PCI scan?

Is there anything that can delay the device/disk appearance?

Best Regards,
Attila

[1] https://review.openstack.org/#/c/52740/
[2] https://review.openstack.org/#/c/62886/
[3] https://bugs.launchpad.net/nova/+bug/1357677

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Daniel P. Berrange
On Mon, Aug 18, 2014 at 12:18:16PM +0200, Thierry Carrez wrote:
 Doug Hellmann wrote:
  On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
  Let me try to say it another way.  You seemed to say that it wasn't much
  to ask given the rate at which things happen in OpenStack.  I would
  argue that given the rate, we should not try to ask more of individuals
  (like this proposal) and risk burnout.  Instead, we should be doing our
  best to be more open an inclusive to give the project the best chance to
  grow, as that's the best way to get more done.
 
  I think an increased travel expectation is a raised bar that will hinder
  team growth, not help it.
  
  +1, well said.
 
 Sorry, I was away for a few days. This is a topic I have a few strong
 opinions on :)
 
 There is no denial that the meetup format is working well, comparatively
 better than the design summit format. There is also no denial that that
 requiring 4 travels per year for a core dev is unreasonable. Where is
 the limit ? Wouldn't we be more productive and aligned if we did one per
 month ? No, the question is how to reach a sufficient level of focus and
 alignment while keeping the number of mandatory travel at 2 per year.
 
 I don't think our issue comes from not having enough F2F time. Our issue
 is that the design summit no longer reaches its objectives of aligning
 key contributors on a common plan, and we need to fix it.
 
 We established the design summit as the once-per-cycle opportunity to
 have face-to-face time and get alignment across the main contributors to
 a project. That used to be completely sufficient, but now it doesn't
 work as well... which resulted in alignment and team discussions to be
 discussed at mid-cycle meetups instead. Why ? And what could we change
 to have those alignment discussions at the design summit again ?
 
 Why are design summits less productive that mid-cycle meetups those days
 ? Is it because there are too many non-contributors in the design summit
 rooms ? Is it the 40-min format ? Is it the distractions (having talks
 to give somewhere else, booths to attend, parties and dinners to be at)
 ? Is it that beginning of cycle is not the best moment ? Once we know
 WHY the design summit fails its main objective, maybe we can fix it.

 My gut feeling is that having a restricted audience and a smaller group
 lets people get to the bottom of an issue and reach consensus. And that
 you need at least half a day or a full day of open discussion to reach
 such alignment. And that it's not particularly great to get such
 alignment in the middle of the cycle, getting it at the start is still
 the right way to align with the release cycle.
 
 Nothing prevents us from changing part of the design summit format (even
 the Paris one!), and restrict attendance to some of the sessions. And if
 the main issue is the distraction from the conference colocation, we
 might have to discuss the future of co-location again. In that 2 events
 per year objective, we could make the conference the optional cycle
 thing, and a developer-oriented specific event the mandatory one.
 
 If we manage to have alignment at the design summit, then it doesn't
 spell the end of the mid-cycle things. But then, ideally the extra
 mid-cycle gatherings should be focused on getting specific stuff done,
 rather than general team alignment. Think workshop/hackathon rather than
 private gathering. The goal of the workshop would be published in
 advance, and people could opt to join that. It would be totally optional.

This pretty much all aligns with my thoughts on the matter. The key point
is that the design summit is the right place from a cycle timing POV to
have the critical f2f discussions & debates, and we need to figure out
what we can do to make it a more effective venue than it currently is.

IME I'd probably say the design summit sessions I've been to fall into
two broad camps. 

 - Information dissemination - just talk through proposal(s) to let
   everyone know what's being planned / thought. Some questions and
   debate, but mostly a one-way presentation.

 - Technical debates - the topic is just a high level hook, around
   which, a lively argument & debate was planned & took place.

I think that the number of the information dissemination sessions could
be cut back on by encouraging people to take advantage of other equally
as effective methods of communication. In many cases it would suffice to
just have a more extensive blueprint / spec created, or a detailed wiki
page or similar doc to outline the problem space. If we had some regular
slot where people could do online presentations (technical talks), that
could be a good way to push the information out of band from the main
summits. If those online talks led to significant questions, then those
questions could then justify design summit sessions for f2f debate.

As an example, much as it is nice that we give every hypervisor driver
in Nova a slot at the design 

Re: [openstack-dev] [all] The future of the integrated release

2014-08-18 Thread Thierry Carrez
Clint Byrum wrote:
 Here's why folk are questioning Ceilometer:
 
 Nova is a set of tools to abstract virtualization implementations.
 Neutron is a set of tools to abstract SDN/NFV implementations.
 Cinder is a set of tools to abstract block-device implementations.
 Trove is a set of tools to simplify consumption of existing databases.
 Sahara is a set of tools to simplify Hadoop consumption.
 Swift is a feature-complete implementation of object storage, none of
 which existed when it was started.
 Keystone supports all of the above, unifying their auth.
 Horizon supports all of the above, unifying their GUI.
 
 Ceilometer is a complete implementation of data collection and alerting.
 There is no shortage of implementations that exist already.
 
 I'm also core on two projects that are getting some push back these
 days:
 
 Heat is a complete implementation of orchestration. There are at least a
 few of these already in existence, though not as many as their are data
 collection and alerting systems.
 
 TripleO is an attempt to deploy OpenStack using tools that OpenStack
 provides. There are already quite a few other tools that _can_ deploy
 OpenStack, so it stands to reason that people will question why we
 don't just use those. It is my hope we'll push more into the unifying
 the implementations space and withdraw a bit from the implementing
 stuff space.
 
 So, you see, people are happy to unify around a single abstraction, but
 not so much around a brand new implementation of things that already
 exist.

Right, most projects focus on providing abstraction above
implementations, and that abstraction is where the real domain
expertise of OpenStack should be (because no one else is going to do it
for us). Every time we reinvent something, we are at larger risk because
we are out of our common specialty, and we just may not be as good as
the domain specialists. That doesn't mean we should never reinvent
something, but we need to be damn sure it's a good idea before we do.
It's sometimes less fun to piggyback on existing implementations, but if
they exist that's probably what we should do.

While Ceilometer is far from alone in that space, what sets it apart is
that even after it was blessed by the TC as the one we should all
converge on, we keep on seeing competing implementations for some (if
not all) of its scope. Convergence did not happen, and without
convergence we struggle in adoption. We need to understand why, and if
this is fixable.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Salvatore Orlando
As the conversation has drifted away from a discussion pertaining to the nova
core team, I have some comments inline as well.


On 18 August 2014 12:18, Thierry Carrez thie...@openstack.org wrote:

 Doug Hellmann wrote:
  On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
  Let me try to say it another way.  You seemed to say that it wasn't much
  to ask given the rate at which things happen in OpenStack.  I would
  argue that given the rate, we should not try to ask more of individuals
  (like this proposal) and risk burnout.  Instead, we should be doing our
  best to be more open an inclusive to give the project the best chance to
  grow, as that's the best way to get more done.
 
  I think an increased travel expectation is a raised bar that will hinder
  team growth, not help it.
 
  +1, well said.

 Sorry, I was away for a few days. This is a topic I have a few strong
 opinions on :)

 There is no denial that the meetup format is working well, comparatively
 better than the design summit format. There is also no denial that that
 requiring 4 travels per year for a core dev is unreasonable. Where is
 the limit ? Wouldn't we be more productive and aligned if we did one per
 month ? No, the question is how to reach a sufficient level of focus and
 alignment while keeping the number of mandatory travel at 2 per year.


I honestly think that it is simply not possible to require a minimum amount
of travel from core team members.
This might sound naive, but I reckon the various core teams could just use
a bit of common sense here.
A core member who goes to every summit and meetup just to do a pub
crawl in yet another city is definitely less productive than another
team member who tries to collaborate remotely via
etherpad/IRC/gerrit/etc. (OK, this example was a bit extreme, but I hope it
clarifies my thoughts).



 I don't think our issue comes from not having enough F2F time. Our issue
 is that the design summit no longer reaches its objectives of aligning
 key contributors on a common plan, and we need to fix it.


I totally agree on this point. I would be surprised if I were the first one
who, in conversation (off and on the record), has mentioned that it is very
hard to achieve any form of consensus on anything at the design summit. And
it is pretty much impossible to move from declarations of intent to the
definition of an architecture and/or actionable work items for the
subsequent release cycle.
Disclaimer: I spend 90% of my design summit time in the networking room, so
my judgment might be skewed.


 We established the design summit as the once-per-cycle opportunity to
 have face-to-face time and get alignment across the main contributors to
 a project. That used to be completely sufficient, but now it doesn't
 work as well... which resulted in alignment and team discussions to be
 discussed at mid-cycle meetups instead. Why ? And what could we change
 to have those alignment discussions at the design summit again ?


I suggested in the past to decouple the summit from the main conference.
This alone, in my opinion, would allow us to do the design summit at a
point where it's best for the upcoming release cycle, and reduce the
inevitable increased noise from rooms filled with over 150 people.



 Why are design summits less productive that mid-cycle meetups those days
 ? Is it because there are too many non-contributors in the design summit
 rooms ? Is it the 40-min format ? Is it the distractions (having talks
 to give somewhere else, booths to attend, parties and dinners to be at)
 ? Is it that beginning of cycle is not the best moment ? Once we know
 WHY the design summit fails its main objective, maybe we can fix it.


I think all of them apply, and possibly other reasons too, but probably we are
going a bit off-topic.


 My gut feeling is that having a restricted audience and a smaller group
 lets people get to the bottom of an issue and reach consensus. And that
 you need at least half a day or a full day of open discussion to reach
 such alignment. And that it's not particularly great to get such
 alignment in the middle of the cycle, getting it at the start is still
 the right way to align with the release cycle.

 Nothing prevents us from changing part of the design summit format (even
 the Paris one!), and restrict attendance to some of the sessions. And if
 the main issue is the distraction from the conference colocation, we
 might have to discuss the future of co-location again. In that 2 events
 per year objective, we could make the conference the optional cycle
 thing, and a developer-oriented specific event the mandatory one.


While I agree that restricted attendance would increase productivity, it
might also be perceived as a barrier for new contributors and a reduction
of the overall democracy of the project. Maybe this could be achieved
naturally by decoupling conference and summit.



 If we manage to have alignment at the design summit, then it doesn't
 spell the end 

Re: [openstack-dev] Live Migration Bug in vmware VCDriver

2014-08-18 Thread Jay Lau
As of now, live migration is not supported by the VCDriver in either Juno or
Icehouse.

For Icehouse, yes, one nova-compute can manage multiple clusters, but live
migration will fail in such a case, as the target host and source host will
be considered the same host (only one nova-compute).
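
To make the gap concrete: the VCDriver never overrides the base driver's
pre-live-migration check, so the operation fails before anything is attempted.
A rough sketch of the base behaviour (the signature is paraphrased from the
nova virt driver interface of that time, not copied exactly):

  class ComputeDriver(object):
      def check_can_live_migrate_destination(self, context, instance,
                                             src_compute_info, dst_compute_info,
                                             block_migration=False,
                                             disk_over_commit=False):
          # Drivers that do not override this, as the VMware VCDriver
          # currently does not, cannot live migrate.
          raise NotImplementedError()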


2014-08-18 19:32 GMT+08:00 한승진 yongi...@gmail.com:

 Thanks for reply Jay~!

 From icehouse, one nova-compute can manage multi clusters I think.

 In this case, how should we progress in order to archive the live
 migration functions.

 Thanks.

 John Haan.


 2014-08-18 19:00 GMT+09:00 Jay Lau jay.lau@gmail.com:

 It seems that VCDriver do not support live migration till now.

 I recalled in ATL summit, the VMWare team is going to do some enhancement
 to enable live migration:
 1) Make sure one nova compute can only manage one cluster or resource
 pool, this can make sure VMs in different cluster/resource pool can migrate
 to each other.
 2) Till now, I see that VCDriver did not implement
 check_can_live_migrate_destination(), this caused live migration will
 always be failed when using VCDriver.

 Thanks.


 2014-08-18 16:14 GMT+08:00 한승진 yongi...@gmail.com:

  Is there anybody working on below bug?

 https://bugs.launchpad.net/nova/+bug/1192192

 The comments are ends 2014-03-26

 I guess we should fix the VCDriver source codes.

 If someone is doing now, can you share how to solve the problem?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Jyoti Ranjan
I believe that not everything can go into a Docker container. For example:

1. compute nodes
2. baremetal provisioning
3. L3 router etc


My understanding is that a container is a good mechanism for deploying the
API controller and scheduler of many services. For the backend components of
services (like nova-compute, or cinder-volume if LVM is used), I think using
bare metal is more appropriate (except for backend components like cinder-volume
for external devices, the nova-compute proxy, etc.).


I just wanted to check your opinion about my understanding. Your views are welcome!


On Mon, Aug 18, 2014 at 3:34 PM, Jay Lau jay.lau@gmail.com wrote:

 I see that there are some openstack docker images in public docker repo,
 perhaps you can check them on github to see how to use them.

 [root@db03b04 ~]# docker search openstack
 NAME                                     DESCRIPTION                                     STARS  OFFICIAL  AUTOMATED
 ewindisch/dockenstack                    OpenStack development environment (using D...      6            [OK]
 jyidiego/openstack-client                An ubuntu 12.10 LTS image that has nova, s...      1
 dkuffner/docker-openstack-stress         A docker container for openstack which pro...      0            [OK]
 garland/docker-openstack-keystone                                                           0            [OK]
 mpaone/openstack                                                                            0
 nirmata/openstack-base                                                                      0
 balle/openstack-ipython2-client          Features Python 2.7.5, Ipython 2.1.0 and H...      0
 booleancandy/openstack_clients                                                              0            [OK]
 leseb/openstack-keystone                                                                    0
 raxcloud/openstack-client                                                                   0
 paulczar/openstack-agent                                                                    0
 booleancandy/openstack-clients                                                              0
 jyidiego/openstack-client-rumm-ansible                                                      0
 bodenr/jumpgate                          SoftLayer Jumpgate WSGi OpenStack REST API...      0            [OK]
 sebasmagri/docker-marconi                Docker images for the Marconi Message Queu...      0            [OK]
 chamerling/openstack-client                                                                 0            [OK]
 centurylink/openstack-cli-wetty          This image provides a Wetty terminal with ...      0            [OK]


 2014-08-18 16:47 GMT+08:00 Philip Cheong philip.che...@elastx.se:

 I think it's a very interesting test for docker. I too have been think
 about this for some time to try and dockerise OpenStack services, but as
 the usual story goes, I have plenty things I'd love to try, but there are
 only so many hours in a day...

 Would definitely be interested to hear if anyone has attempted this and
 what the outcome was.

 Any suggestions on what the most appropriate service would be to begin
 with?


 On 14 August 2014 14:54, Jay Lau jay.lau@gmail.com wrote:

 I see a few mentions of OpenStack services themselves being
 containerized in Docker. Is this a serious trend in the community?


 http://allthingsopen.com/2014/02/12/why-containers-for-openstack-services/

 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 *Philip Cheong*
 *Elastx *| Public and Private PaaS
 email: philip.che...@elastx.se
 office: +46 8 557 728 10
 mobile: +46 702 8170 814
 twitter: @Elastx https://twitter.com/Elastx
 http://elastx.se

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Sylvain Bauza
On 18 August 2014 14:36, Salvatore Orlando sorla...@nicira.com wrote:

 As the conversation has drifted away from a discussion pertaining the
nova core team, I have some comments inline as well.


 On 18 August 2014 12:18, Thierry Carrez thie...@openstack.org wrote:

 Doug Hellmann wrote:
  On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
  Let me try to say it another way.  You seemed to say that it wasn't
much
  to ask given the rate at which things happen in OpenStack.  I would
  argue that given the rate, we should not try to ask more of
individuals
  (like this proposal) and risk burnout.  Instead, we should be doing
our
  best to be more open an inclusive to give the project the best chance
to
  grow, as that's the best way to get more done.
 
  I think an increased travel expectation is a raised bar that will
hinder
  team growth, not help it.
 
  +1, well said.

 Sorry, I was away for a few days. This is a topic I have a few strong
 opinions on :)

 There is no denial that the meetup format is working well, comparatively
 better than the design summit format. There is also no denial that that
 requiring 4 travels per year for a core dev is unreasonable. Where is
 the limit ? Wouldn't we be more productive and aligned if we did one per
 month ? No, the question is how to reach a sufficient level of focus and
 alignment while keeping the number of mandatory travel at 2 per year.


 I honestly think that it is simply not possible to require a minimum
travel from core team members.
 This might sound naive, but I reckon the various core teams could just
use a bit of common sense here.
 A core member that goes to every summit and meetup just for doing pub
crawling in yet another city is definetely less productive than another
team members which tries to collaborate remotely via
etherpad/IRC/gerrit/etc. (ok this example was a bit extreme but I hope it
clarifies my thoughts).



 I don't think our issue comes from not having enough F2F time. Our issue
 is that the design summit no longer reaches its objectives of aligning
 key contributors on a common plan, and we need to fix it.


 I totally agree on this point. I would be suprised if I were the first
one that in conversation (off and on the record) has mentioned that it is
very hard to achieve any form of consensus on anything at the design
summit. And it is pretty much impossible to move from a declarations of
intent to the definition of an architecture and/or actionable work items
for the subsequent release cycle.
 Disclaimer: I spend 90% of my design summit time in the networking room,
so my judgment might be skewed.


 We established the design summit as the once-per-cycle opportunity to
 have face-to-face time and get alignment across the main contributors to
 a project. That used to be completely sufficient, but now it doesn't
 work as well... which resulted in alignment and team discussions to be
 discussed at mid-cycle meetups instead. Why ? And what could we change
 to have those alignment discussions at the design summit again ?


 I suggested in the past to decouple the summit from the main conference.
 This alone, in my opinion, would allow us to do the design summit at a
point where it's best for the upcoming release cycle, and reduce the
inevitable increased noise from rooms filled with over 150 people.

Strong -1 here.
Having design sessions happen at the same time as the conference is good
for having:
- a mix of operators, developers and users in the same area, enjoying the
atmosphere and creating a shared view and team spirit within OpenStack
- newcomers able to join their first developer sessions (we lower the bar)
- a single travel budget for contributors who also have good proposals for the
regular conference

I had the chance to attend both Icehouse and Juno summits. IIRC, Icehouse
design summit was restricted to ATCs while Juno one was open to everyone.
That, plus the fact that the audio quality of the Juno sessions was
poor (cf. the last design session about feedback), makes me think that the
problem is more one of running good sessions than an overall structural
problem.

I'm in favour of restricting access to ATCs only and allowing entry to rooms
only before the session starts; that would dramatically improve the quality
without sending the community a bad signal that design discussions are now a
matter for engineers only.

-Sylvain




 Why are design summits less productive that mid-cycle meetups those days
 ? Is it because there are too many non-contributors in the design summit
 rooms ? Is it the 40-min format ? Is it the distractions (having talks
 to give somewhere else, booths to attend, parties and dinners to be at)
 ? Is it that beginning of cycle is not the best moment ? Once we know
 WHY the design summit fails its main objective, maybe we can fix it.


 I think all of them apply and possibly other reasons, but probably we are
a going a bit off-topic.


 My gut feeling is that having a restricted audience and a 

[openstack-dev] [sahara] migration to oslo.db

2014-08-18 Thread lonely Feb
I found that sahara.openstack.common.db has been replaced by oslo.db, and I
wonder about the reason for this replacement. Is there any performance problem
with the original sahara.openstack.common.db?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] requirements.txt: explicit vs. implicit

2014-08-18 Thread Ihar Hrachyshka

On 14/08/14 18:33, Ben Nemec wrote:
 On 08/14/2014 08:37 AM, Ihar Hrachyshka wrote:
 Hi all,
 
 some plugins depend on modules that are not mentioned in 
 requirements.txt. Among them, Cisco Nexus (ncclient), Brocade 
 (ncclient), Embrane (heleosapi)... Some other plugins put their 
 dependencies in requirements.txt though (like Arista depending on
  jsonrpclib).
 
 There are pros and cons in both cases. The obvious issue with not
  putting those requirements in the file is that packagers are
 left uninformed about those implicit requirements existing,
 meaning plugins are shipped to users with broken dependencies. It
 also means we ship code that depends on unknown modules grabbed
 from random places in the internet instead of relying on what's 
 available on pypi, which is a bit scary.
 
 With my packager hat on, I would like to suggest to make those 
 dependencies explicit by filling in requirements.txt. This will 
 make packaging a bit easier. Of course, runtime dependencies
 being set correctly do not mean plugins are working and tested,
 but at least we give them chance to be tested and used.
 
 But, maybe there are valid concerns against doing so. In that
 case, I would be glad to know how packagers are expected to track
 those implicit dependencies.
 
 I would like to ask community to decide what's the right way to 
 handle those cases.
 
 So I raised a similar issue about six months ago and completely
 failed to follow up on the direction everyone seemed to be onboard
 with: 
 http://lists.openstack.org/pipermail/openstack-dev/2014-February/026976.html

  I did add support to pbr for using nested requirements files, and
 I had posted a PoC for oslo.messaging to allow requirements files
 for different backends, but some of our CI jobs don't know how to
 handle that and I never got around to addressing the limitation.
 
 - From the packaging perspective, I think you could do a
 requirements file that basically rolls up requirements.d/*.txt
 minus test.txt and get all the runtime dependencies that the
 project knows about, assuming we finished the implementation for
 this and started using it in projects.
 
 I don't really anticipate having time to pursue this in the near 
 future, so if you wanted to pick up the ball and run with it that 
 would be great! :-)

Thanks for the back reference!

Though I think it's overkill in Neutron's case. There, we are not
interested in the Qpid vs. RabbitMQ issue, since we depend on
oslo.messaging, which is meant to solve that specific dependency hell.

The case of Neutron plugins depending on specific code that is not
mentioned in requirements.txt is also a bit different. It's not about
alternative implementations of the same library for users to choose
from (as it is in the case of Qpid vs. RabbitMQ). Instead, it's the plugins
here that are really optional, but their dependencies are still strict
(=no alternative dependency implementations).

So what we need to make optional here are plugins. I've heard that
Kyle Mestery was going to propose splitting Neutron codebase in parts,
with core plugins sitting in the tree, and vendor specific stuff
maintained in separate repos. That would effectively make plugins
optional.

Until that time, I would make dependencies straight and explicit by
putting them in the global requirements list. If distributions are
interested in not installing optional code for all plugins, they would
just separate core from plugins and attach those dependencies to
plugin-specific packages (as they already do btw, at least in Red Hat
world).

As for pip and devstack installations (which are not meant to be the
proper way to deploy openstack anyway), I don't see much value in
separating requirements. It's ok to have optional libraries installed
in your development environment, if only to be able to write unit
tests without mocking out the whole underlying library and praying
that your assumptions about its API are correct.

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] 5.0.1 is out

2014-08-18 Thread Mike Scherbakov
Hi all,
the maintenance release of Fuel is finally out. 5.0.1 is primarily a bugfix
release with a ton of fixes backported from master (5.1) - 192 bugs were
processed [1].
This is the first release where we produce not only the Fuel ISO, but also a
so-called upgrade tarball. It is a bundle which allows you to upgrade an already
installed Fuel 5.0 master node to 5.0.1. Thanks to the fanatical work of the
whole team, we finally made upgrades of Fuel! It is not yet OpenStack
patching / upgrades, but those features are coming. Patching of OpenStack
(2014.1 -> 2014.1.1 -> 2014.1.2) will be available in 5.1, which is under
development now and expected to be released at the end of August. Release
notes are available at [2].
Anyone can build 5.0.1 by fetching code [3] and using tag 5.0.1.

For master branch, we have nightly builds [4], so it is easier to try out
pre-5.1: just download an ISO and use VirtualBox scripts [5] to run it.

In my estimation, and I hope the QA team can confirm it, this should be
the most stable and reliable version so far. Thanks for the hard work,
Fuelers! Let's keep improving the quality!

[1] https://launchpad.net/fuel/+milestone/5.0.1
[2] http://docs.mirantis.com/openstack/fuel/fuel-5.0/release-notes.html
[3] https://wiki.openstack.org/wiki/Fuel#Source_code
[4] https://wiki.openstack.org/wiki/Fuel#Nightly_builds
[5] https://github.com/stackforge/fuel-main/tree/master/virtualbox
-- 
Mike Scherbakov
#mihgen
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] Enabling full neutron Job

2014-08-18 Thread Salvatore Orlando
Thanks Mark.
As usual I had to fetch your message from the spam folder!

Anyway, I received a sensible request to avoid running neutron tests for
advanced services (load balancing, firewall, VPN) in the integrated gate.
Therefore patches [1] and [2] will no longer run service plugins in
the 'standard' neutron job. Instead, they introduce a new
'extended' neutron job - which is exactly the same as the neutron full job
we have now - that will run only on the neutron gate.

This is good because neutron services and other openstack projects such as
nova or keystone are pretty much orthogonal, and this will avoid causing
failures there should any of these services become unstable. On the other
hand it's good to point out that if changes in oslo libraries should break
the service plugin, we won't be able to detect that anymore as oslo
libraries use the integrated gate.

Also, some services such as load balancing and firewall also run
smoke tests. Considering the current job structure, this means they have
already been executed for months in the integrated gate. They will keep
being executed, as the postgresql smoke job will keep running in place of
the full one until bug [3] is fixed.

Obviously if you disagree with this approach, please speak up. And note
that the patches [1] and [2] are WIPs at the moment. I'm aware that they
don't work ;)

Salvatore

[1] https://review.openstack.org/#/c/114933/
[2] https://review.openstack.org/#/c/114932/
[3] https://bugs.launchpad.net/nova/+bug/1305892

On 16 August 2014 01:13, Mark McClain mmccl...@yahoo-inc.com wrote:


  On Aug 15, 2014, at 6:20 PM, Salvatore Orlando sorla...@nicira.com
 wrote:

  The neutron full job is finally voting, and the first patch [1] has
 already passed it in gate checks!
 I've collected a few data points before it was switched to voting, and we
 should probably expect a failure rate around 4%. This is not bad, but
 neither great, and everybody's contribution will be appreciated in
 reporting and assessing the nature gate failures, which, needless to say,
 are mostly races.

  Note: we've also added the postgresql version of the same job, but that
 is not voting yet as we never executed it before.

  Salvatore

  [1] https://review.openstack.org/#/c/105694/


  Thanks to Salvatore for driving this effort and for everyone who
 contributed patches and reviews.  It is exciting to see it enabled.

  mark



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] migration to oslo.db

2014-08-18 Thread Sergey Lukjanov
Hey lonely,

oslo.db is a graduated version of the code from the oslo-incubator
(which was periodically synced to sahara.openstack.common.db). So, the
only reason is to switch to the graduated lib.
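
The change is mostly mechanical. As an illustration only (the module paths
below show the pattern, not an exact diff from the Sahara tree):

  # before: code synced from oslo-incubator into the project tree
  from sahara.openstack.common.db.sqlalchemy import session

  # after: the graduated library
  from oslo.db.sqlalchemy import session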

On Mon, Aug 18, 2014 at 5:04 PM, lonely Feb lonely8...@gmail.com wrote:
 I found sahara.openstack.common.db has replaced by olso.db, i wonder the
 reason of this replacement. Is the any performance problem of the original
 sahara.openstack.common.db ?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] running pep8 tests much faster

2014-08-18 Thread Matthew Booth
On 18/08/14 12:51, Daniel P. Berrange wrote:
 We recently had a change merged to the run_tests.sh script that Nova
 developers would really benefit from knowing about:
 
https://review.openstack.org/#/c/110746/
 
 Basically, it provides a way to run the pep8 tests *only* against the files
 which you have actually touched in your patch. For most patches this has a
 huge benefit in running time since instead of checking 3000+ files it only
 has to check a handful.
 
 Old way, checking all of nova codebase at once:
 
 $ time ./run_tests.sh -p
   Running flake8 ...
 
   real  2m4.410s
   user  2m3.530s
   sys   0m0.637s
 
 
 New way, checking only changed files:
 
   $ time ./run_tests.sh -8
   Running flake8 on nova/tests/virt/libvirt/test_driver.py 
 nova/virt/libvirt/driver.py 
 
   real  0m8.117s
   user  0m7.785s
   sys   0m0.287s
 
 I'm guessing I know which most people will prefer :)
 
 
 NB, this only checks files in the most recent patch in your checkout. ie
 if you are sitting on a 10-patch series it is only validating the last
 patch in that series. Probably not an issue for most people since you
 need to explicitly check each patch individually during rebase regardless.

Incidentally, in case people aren't familiar with it, git rebase -x is
an excellent tool for running this kind of thing against a 10-patch series:

git rebase -i -x './run_tests.sh -8' -x './run_tests.sh vmwareapi' base

This will give you an interactive rebase (-x requires -i) on to base.
After applying each patch it will run each of the 2 given commands. If
either fails it will pause. After resolving any issues you can continue
with 'git rebase --continue'.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] RE: Juno Home Stretch

2014-08-18 Thread John Wood
(kicking this thread to the dev mailing list)

Thanks for starting this discussion on Juno work efforts, Nate.  This would be 
a good discussion to pick up at the 3pm CDT IRC meeting today as well. I've 
added some thoughts below as well.

I believe the plan is to finalize features and API changes in 
openstack/barbican for M3 (so Sept 4th). My understanding is that corresponding 
updates to the client library to accommodate these features could continue 
after that date. 

So getting finalized KMIP, Dogtag and HSM secret storage options shored up 
seems like a good goal. I think the last KMIP CR is Kaitlin's (kfarr) [1]. 
Paul's (reaperhulk) CR [2] fixes an issue with HSM interactions (and stevedore 
interaction in general...we should discuss at the IRC meeting today).

Removing the tenant-ID from the URLs seems good as well, per this Venkat's 
(tsv) CR [3].

Regarding the transport key feature, I would hope that we could get the client 
portion available before the final Juno release.

Adam (rm_work) is working on the Containers portion of the client library in CR 
[4].

Regarding certificate generation...

I'm hopeful we can get the framework for certificate generation (based on 
blueprint [5]) in place by M3, but that might be aggressive. I'm hopeful the 
feature can include the ability to invoke a certificate plugin to generate a 
certificate via the orders interface for happy path flows (i.e. data is 
correct). This could be Ade's Dogtag plugin, and the Symantec plugin. 

Work to retrieve which certificate plugins are available would probably need to 
be done in K, as well as Ade's sub-CA feature.

Per that blueprint, it would also be nice to have a simple certificate event 
plugin (that just logs, I'm working on that now) and a simple retry loop using 
periodic tasks.

As Ade (alee) pointed out, Arvind's (atiwari) CR to add type/meta to the orders 
resource [6] is needed for follow-on asymmetric and certificate generation 
workflows.  Some initial work on asymmetric generation is happening in Arvind's 
CR here: https://review.openstack.org/#/c/111412/


Misc...

I'm thinking Arun's (arunkant) Keystone eventing CR [7], and my (woodster) 
json-home Version resource CR [8] might slip to K, but again we should discuss 
this today.

Ade, I had thought the Dogtag Chef work was getting fleshed out, so it might be 
close now?

I'd like to remove the 'context' argument from the secret_store.py plugin 
interface before M3. 

We might want to consider adding plugin validation to our current hard-coded 
validators.py classes as well?



Is there anything else out there?

Thanks,
John

[1] = https://review.openstack.org/#/c/101582/
[2] = https://review.openstack.org/#/c/114341/
[3] = https://review.openstack.org/#/c/112149/
[4] = https://review.openstack.org/#/c/113393/
[5] = 
https://github.com/openstack/barbican-specs/blob/master/specs/juno/add-ssl-ca-support.rst
[6] = https://review.openstack.org/#/c/87405/
[7] = https://review.openstack.org/#/c/110817/
[8] = https://review.openstack.org/#/c/108163/


From: Ade Lee [a...@redhat.com]
Sent: Friday, August 15, 2014 12:16 PM
To: Reller, Nathan S.
Cc: Jarret Raim; John Wood; Coffman, Joel M.; Farr, Kaitlin M.; 
nkin...@redhat.com; akon...@redhat.com
Subject: Re: Juno Home Stretch

On Fri, 2014-08-15 at 12:12 -0400, Reller, Nathan S. wrote:
 We are getting into the final weeks of Juno development, and I was
 wondering where we stand with status. I want to make sure we get in
 everything that we can, and I want to make sure I prioritize reviews or
 any other help that we can give appropriately.

 We have several Dogtag patches that were accepted. Is there anything left
 for the Dogtag secret store?


The key wrapping feature is basically completed on the server side.  On
the client side, though, nothing has yet been done.  I suspect that this
will be something that will end up being tackled in K.

Right now, I am focused on several Dogtag patches:
1. Patch to extend Dogtag plugin to issue certificates through the
backend CA.
2. Patch to extend Dogtag plugin to support asymmetric generation.

So, whats missing?  Once Arvind's CR lands, we still need a follow-on CR
to extend the orders API to be able to both generate and issue
certificates.  Central to this is the decision to not attempt to design
a common set of parameters for all CA's, but rather to require the
client to specify vendor specific metadata.  That means that we will
need to provide some kind of identifier that the client will use to
identify the relevant CA.

Note that a particular plugin may support multiple subCA's.  We plan in
the near future to add a feature to Dogtag that would allow the creation
of lightweight subCA's within a dogtag instance.  This would allow, for
instance, projects to issue certificates scoped to that particular
project.  Possibly adding an admin interface to Barbican to configure
such subCA's, or CA's in general is something that 

Re: [openstack-dev] [All] LOG.warning/LOG.warn

2014-08-18 Thread Doug Hellmann
warn() and warning() are synonyms (literally the same method, aliased). We had 
to add a similar alias in the oslo ContextAdapter to support existing code that 
uses both forms. If the documented form is warning(), then I agree we 
should stick with that, although I don’t think the churn caused by making that 
change is a good idea at this point in the schedule (Michael’s heuristic below 
makes sense).
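
If anyone wants to see the aliasing for themselves, here is a quick snippet
(Python 2.7, which is what we target; this is just a demonstration, not code
we would ship):

  import logging

  logging.basicConfig(level=logging.WARNING)
  LOG = logging.getLogger(__name__)

  # In the Python 2.7 stdlib the Logger class body literally does
  # "warn = warning", so these two calls emit identical records.
  LOG.warning("documented spelling")
  LOG.warn("aliased spelling, same effect")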

Doug

On Aug 17, 2014, at 5:57 PM, Michael Still mi...@stillhq.com wrote:

 My recollection is that this was a request from the oslo team, but it
 was so long ago that I don't recall the details.
 
 I think the change is low value, so should only be done when someone
 is changing the logging in a file already (the log hinting for
 example).
 
 Michael
 
 On Sun, Aug 17, 2014 at 4:26 PM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 Over the last few weeks I have seen a number of patches where LOG.warn is
 replacing LOG.warning. I think that if we do something it should be the
 opposite as warning is the documented one in python 2 and 3
 https://docs.python.org/3/howto/logging.html.
 Any thoughts?
 Thanks
 Gary
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] gettext question about oslo.i18n library

2014-08-18 Thread Doug Hellmann
Yes, that would be a good next step.

Doug

On Aug 17, 2014, at 10:03 PM, Peng Wu peng.e...@gmail.com wrote:

  Yes, I am interested in adding these missing gettext functions to
 oslo.i18n library.
 
  Guess the next step is to create a blueprint for Kilo?
 
 Thanks,
  Peng Wu
 
 
 On Fri, 2014-08-15 at 16:02 -0400, Doug Hellmann wrote:
 On Aug 15, 2014, at 3:18 AM, Peng Wu peng.e...@gmail.com wrote:
 
 Hi,
 
 Recently I just read the code of oslo.i18n library,
 The lazy translation idea is great!
 
 But I found a question about gettext contextual markers
 and plural form, such as pgettext and ungettext functions,
 see [3].
 
 It seems the two gettext functions are missing in the oslo.i18n
 library.
 Is it correct? or will support it?
 
 Thanks,
 Peng Wu
 
 You’re right, those are not present.
 
 We apparently haven’t used them anywhere, yet, because they weren’t exposed 
 via the old gettextutils module in the incubator. We should add them. Are 
 you interested in working on a blueprint for Kilo to do that?
 
 Doug
 
 
 Refer URL:
 1. https://github.com/openstack/oslo.i18n
 2.
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/039217.html
 3. https://wiki.openstack.org/wiki/I18n/TranslatableStrings
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-18 Thread Mark McLoughlin
On Mon, 2014-08-18 at 14:23 +0200, Thierry Carrez wrote:
 Clint Byrum wrote:
  Here's why folk are questioning Ceilometer:
  
  Nova is a set of tools to abstract virtualization implementations.
  Neutron is a set of tools to abstract SDN/NFV implementations.
  Cinder is a set of tools to abstract block-device implementations.
  Trove is a set of tools to simplify consumption of existing databases.
  Sahara is a set of tools to simplify Hadoop consumption.
  Swift is a feature-complete implementation of object storage, none of
  which existed when it was started.
  Keystone supports all of the above, unifying their auth.
  Horizon supports all of the above, unifying their GUI.
  
  Ceilometer is a complete implementation of data collection and alerting.
  There is no shortage of implementations that exist already.
  
  I'm also core on two projects that are getting some push back these
  days:
  
  Heat is a complete implementation of orchestration. There are at least a
  few of these already in existence, though not as many as their are data
  collection and alerting systems.
  
  TripleO is an attempt to deploy OpenStack using tools that OpenStack
  provides. There are already quite a few other tools that _can_ deploy
  OpenStack, so it stands to reason that people will question why we
  don't just use those. It is my hope we'll push more into the unifying
  the implementations space and withdraw a bit from the implementing
  stuff space.
  
  So, you see, people are happy to unify around a single abstraction, but
  not so much around a brand new implementation of things that already
  exist.
 
 Right, most projects focus on providing abstraction above
 implementations, and that abstraction is where the real domain
 expertise of OpenStack should be (because no one else is going to do it
 for us). Every time we reinvent something, we are at larger risk because
 we are out of our common specialty, and we just may not be as good as
 the domain specialists. That doesn't mean we should never reinvent
 something, but we need to be damn sure it's a good idea before we do.
 It's sometimes less fun to piggyback on existing implementations, but if
 they exist that's probably what we should do.

It's certainly a valid angle to evaluate projects on, but it's also easy
to be overly reductive about it - e.g. that rather than re-implement
virtualization management, Nova should just be a thin abstraction over
vSphere, XenServer and oVirt.

To take that example, I don't think we as a project should be afraid of
having such discussions but it wouldn't be productive to frame that
conversation as the sky is falling, Nova re-implements the wheel, we
should de-integrate it.

 While Ceilometer is far from alone in that space, what sets it apart is
 that even after it was blessed by the TC as the one we should all
 converge on, we keep on seeing competing implementations for some (if
 not all) of its scope. Convergence did not happen, and without
 convergence we struggle in adoption. We need to understand why, and if
 this is fixable.

"Convergence did not happen" is a little unfair. It's certainly a busy
space, and things like Monasca and InfluxDB are new developments. I'm
impressed at how hard the Ceilometer team works to embrace such
developments and patiently talks through possibilities for convergence.
This attitude is something we should be applauding in an integrated
project.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-18 Thread Julien Danjou
On Thu, Aug 14 2014, Yuriy Taraday wrote:

Hi Yuriy,

[…]

 Looking forward to your opinions.

This looks like a good summary of the situation.

I've added a solution E based on pthread, but didn't get very far with
it for now.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] LOG.warning/LOG.warn

2014-08-18 Thread Daniel P. Berrange
On Mon, Aug 18, 2014 at 07:57:28AM +1000, Michael Still wrote:
 My recollection is that this was a request from the oslo team, but it
 was so long ago that I don't recall the details.
 
 I think the change is low value, so should only be done when someone
 is changing the logging in a file already (the log hinting for
 example).

The lazy conversion approach really encourages bad practice and is very
wasteful for developers/reviewers. In the git commit guidelines we explicitly
tell people not to make code cleanups that are unrelated to the
feature/bug being addressed. When we have done lazy conversion for this
kind of thing, I've seen it waste a hell of a lot of developer time.
People are never entirely clear which is the preferred style, so they
end up just making a guess, which will often be wrong. So now we consume
scarce reviewer time pointing this out to people over & over & over again,
and waste developer time having them re-post their patches again.

If we want to change LOG.warning to LOG.warn, we should do a single
patch with a global search & replace to get the pain over & done with
as soon as possible, then enforce it with a hacking rule. No reviewer
time gets wasted and developers will see their mistake right away.
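
For illustration, the hacking rule could be as small as the sketch below (the
check name and error code are placeholders; whichever spelling we standardize
on, the rule just flags the other one):

  import re

  LOG_WARN_RE = re.compile(r"\bLOG\.warn\(")

  def check_log_warn_usage(logical_line):
      # hacking/flake8 checks yield an (offset, message) pair per violation
      if LOG_WARN_RE.search(logical_line):
          yield (0, "Nxxx: use LOG.warning() instead of LOG.warn()")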

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-18 Thread Anne Gentle
On Wed, Aug 13, 2014 at 3:29 PM, Doug Hellmann d...@doughellmann.com
wrote:


 On Aug 13, 2014, at 4:08 PM, Matthew Treinish mtrein...@kortar.org
 wrote:

  On Wed, Aug 13, 2014 at 03:43:21PM -0400, Eoghan Glynn wrote:
 
 
  Divert all cross project efforts from the following projects so we
 can
  focus
  our cross project resources. Once we are in a better place we can
 expand
  our
  cross project resources to cover these again. This doesn't mean
 removing
  anything.
  * Sahara
  * Trove
  * Tripleo
 
  You write as if cross-project efforts are both of fixed size and
  amenable to centralized command  control.
 
  Neither of which is actually the case, IMO.
 
  Additional cross-project resources can be ponied up by the large
  contributor companies, and existing cross-project resources are not
  necessarily divertable on command.
 
  What “cross-project efforts” are we talking about? The liaison program
 in
  Oslo has been a qualified success so far. Would it make sense to
 extend that
  to other programs and say that each project needs at least one
 designated
  QA, Infra, Doc, etc. contact?
 
  Well my working assumption was that we were talking about people with
  the appropriate domain knowledge who are focused primarily on standing
  up the QA infrastructure.
 
  (as opposed to designated points-of-contact within the individual
  project teams who would be the first port of call for the QA/infra/doc
  folks if they needed a project-specific perspective on some live issue)
 
  That said however, I agree that it would be useful for the QA/infra/doc
  teams to know who in each project is most domain-knowledgeable when they
  need to reach out about a project-specific issue.
 
 
  I actually hadn't considered doing a formal liaison program, like Oslo,
 in QA
  before. Mostly, because at least myself and most of the QA cores have a
 decent
  grasp on who to ping about certain topics or reviews. That being said, I
 realize
  that probably is only disseminating information in a single direction.
 So maybe
  having a formal liaison makes sense.
 
  I'll talk to Doug and others about this and see whether adopting
 something
  similar for QA makes sense.
 
 
  -Matt Treinish

 The Oslo liaison program started out as a pure communication channel, but
 many of the liaisons have stepped up to take on the task of merging changes
 into their “home” projects. That has allowed adoption of libraries this
 cycle at a rate far higher than we could have achieved if the Oslo team had
 been responsible for submitting those changes ourselves. They’ve helped us
 identify API issues in the process, which benefits the projects that have
 been slower to adopt. So I really think the liaisons are key to library
 graduation being successful at our current scale.



Yes, I was going to say that we use doc liaisons with varying success per
project, but it has definitely helped me keep sane (mostly). We originally
thought of it as a communication channel (you attend my meetings, I'll
attend yours), but it's also great to have a point person whom I can reach out
to as PTL, or point others to when they have questions.

Anne




 Doug

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Swift authentication and backwards compatibility

2014-08-18 Thread Dmitry Mescheryakov
Hello people,

 I think backward compatibility is a good idea.  We can make the
 user/pass inputs for data objects optional (they are required
 currently), maybe even gray them out in the UI with a checkbox to turn
 them on, or something like that.


 This is similar to what I was thinking. We would allow the username and
 password inputs to accept a blank input.

I like the idea of keeping backward compatibility by supporting
username/password. I also really dislike making one more config option (the
domain for temp users) mandatory. So supporting the old behaviour here also
simplifies deployment, which is especially good for new users.
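
Something along these lines is what I have in mind (purely a sketch; the
option and function names are made up and not the actual Sahara code):

  def swift_auth_for_data_source(data_source, conf):
      # Old behaviour: explicit username/password stored with the data source.
      creds = data_source.get('credentials') or {}
      if creds.get('user') and creds.get('password'):
          return creds

      # New behaviour: use a proxy user in the configured (optional) domain.
      domain = conf.get('proxy_user_domain_name')
      if domain:
          # placeholder for the real proxy-user creation logic
          return {'user': 'proxy-user', 'password': 'generated', 'domain': domain}

      raise ValueError('data source has no credentials and no proxy domain '
                       'is configured in sahara.conf')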

Thanks,

Dmitry


2014-08-15 18:04 GMT+04:00 mike mccune mimcc...@redhat.com:
 thanks for the thoughts Trevor,



 On 08/15/2014 09:32 AM, Trevor McKay wrote:

 I think backward compatibility is a good idea.  We can make the
 user/pass inputs for data objects optional (they are required
 currently), maybe even gray them out in the UI with a checkbox to turn
 them on, or something like that.


 This is similar to what I was thinking. We would allow the username and
 password inputs to accept a blank input.

 I also like the idea of giving some sort of visual reference, like graying
 out the fields.


 Sahara can detect whether or not the proxy domain is there, and whether
 or not it can be created.  If Sahara ends up in a situation where it
 thinks user/pass are required, but the data objects don't have them,
 we can return a meaningful error.


 I think it sounds like we are going to avoid having Sahara attempt to create
 a domain. It will be the duty of a stack administrator to create the domain
 and give it's name in the sahara.conf file.

 Agreed about meaning errors.


 The job manager can key off of the values supplied for the data source
 objects (no user/pass? must be proxy) and/or cluster configs (for
 instance, a new cluster config could be added -- if it's absent we
 assume old cluster and therefore old hadoop swfit plugin).  Workflow
 can be generated accordingly.


 This sounds good. If there is some way to determine the version of the
 hadoop-swiftfs on the cluster that would be ideal.


 The hadoop swift plugin can look at the config values provided, as you
 noted yesterday, and get auth tokens in either manor.


 exactly.



 mike


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] The future of the integrated release

2014-08-18 Thread Anne Gentle
On Fri, Aug 15, 2014 at 3:01 PM, Joe Gordon joe.gord...@gmail.com wrote:




 On Thu, Aug 14, 2014 at 4:02 PM, Eoghan Glynn egl...@redhat.com wrote:


   Additional cross-project resources can be ponied up by the large
   contributor companies, and existing cross-project resources are not
   necessarily divertable on command.
 
  Sure additional cross-project resources can and need to be ponied up,
 but I
  am doubtful that will be enough.

 OK, so what exactly do you suspect wouldn't be enough, for what
 exactly?


 I am not sure what would be enough to get OpenStack back in a position
 where more developers/users are happy with the current state of affairs,
 which is why I think we may want to try several things.



 Is it the likely number of such new resources, or the level of domain
 expertise that they can realistically be expected to bring to the
 table, or the period of time to on-board them, or something else?


 Yes, all of the above.


 And which cross-project concern do you think is most strained by the
 current set of projects in the integrated release? Is it:

  * QA
  * infra
  * release management
  * oslo
  * documentation
  * stable-maint

 or something else?


 Good question.

 IMHO QA, Infra and release management are probably the most strained. But
 I also think there is something missing from this list. Many of the
 projects are hitting similar issues and end up solving them in different
 ways, which just leads to more confusion for the end user. Today we have a
 decent model for rolling out cross-project libraries (Oslo) but we don't
 have a good way of having broader cross project discussions such as: API
 standards (such as discoverability of features), logging standards,
 aligning on concepts (different projects have different terms and concepts
 for scaling and isolating failure domains), and an overall better user
 experience. So I think we have a whole class of cross project issues that
 we have not even begun addressing.


Docs are very, very strained. We scope docs to integrated projects only and
we're still lacking in quality, completeness, and speed of reviews.

At this week's extra TC meeting [1] we discussed only the difficulties with
integration and growth. I also want us to think about the cost of
integration with the current definitions and metrics we have. We discussed
whether the difficulties lie in the sheer number of projects or in the
complexity of cross-integration.

For docs I can point to the sheer number of projects, which is why we scope to
integrated projects only. But even that definition is becoming difficult for
cross-project work, so I want to explore the cross-project implications before
the sheer-number implications.

One of the metrics I'd like to see is a metric of most cross-project drag
for all programs. The measures might be:
- number of infrastructure nodes used to test
- number of infrastructure jobs needed
- most failing tests
- incompleteness of test suite
- incompleteness of docs
- difficulty for users to use (API, CLI, or configuration) due to lack of
docs or hard-to-understand complexities
- most bugs affecting more than one project (cross-project bugs would count
against both projects)
- performance in production environments due to interlocking project needs
- any others?

We know nova/neutron carries a lot of this integration drag. We know
there's not an easy button -- but should we focus on the hard problems
before integrating many more projects? For now I think our answer is no,
but I want to hear what others think about that consideration as an
additional metric before moving projects through our incubator.

Thanks,
Anne


1.
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-08-14-19.03.log.html





 Each of those teams has quite different prerequisite skill-sets, and
 the on-ramp for someone jumping in seeking to make a positive impact
 will vary from team to team.

 Different approaches have been tried on different teams, ranging from
 dedicated project-liaisons (Oslo) to shared cores (Sahara/Infra) to
 newly assigned dedicated resources (QA/Infra). Which of these models
 might work in your opinion? Which are doomed to failure, and why?

 So can you be more specific here on why you think adding more cross-
 project resources won't be enough to address an identified shortage
 of cross-project resources, while de-integrating projects would be?

 And, please, can we put the proverbial strawman back in its box on
 this thread? It's all well and good as a polemic device, but doesn't
 really move the discussion forward in a constructive way, IMO.

 Thanks,
 Eoghan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-18 Thread Edgar Magana
Neutron Ci Folks,

I have received answers from almost all the CI contacts and I want to
thank you all.
Every case is different and I will review each one of your answers and
questions.

I do understand that every CI is different and this is why I would suggest
two things:

1) At today's Neutron IRC meeting we can discuss the current status and what
we want to achieve by the end of the Juno release.
2) Have a short session during the Kilo summit to have an agreement on the
requirements for the plugins and drivers under Neutron tree.

I will answer each one of you and again I thank you all for your responses.

Thanks,

Edgar


On 8/15/14, 3:35 PM, Edgar Magana edgar.mag...@workday.com wrote:

Team,

I did a quick audit on the Neutron CI. Very sad results. Only a few plugins
and drivers are running properly and testing all Neutron commits.
I created a report here:
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers


We will discuss the actions to take at the next Neutron IRC meeting. So
please reach out to me to clarify the status of your CI.
I had two commits to quickly verify the CI reliability:

https://review.openstack.org/#/c/114393/

https://review.openstack.org/#/c/40296/


I would expect all plugins and drivers to pass on the first one and fail on
the second, but I got so many surprises.

Neutron code quality and reliability are a top priority; if you ignore this
report, that plugin/driver will be a candidate for removal from the Neutron
tree.

Cheers,

Edgar

P.S. I hate to be the inquisitor here… but someone has to do the dirty
job!


On 8/14/14, 8:30 AM, Kyle Mestery mest...@mestery.com wrote:

Folks, I'm not sure if all CI accounts are running sufficient tests.
Per the requirements wiki page here [1], everyone needs to be running
more than just Tempest API tests, which I still see most neutron
third-party CI setups doing. I'd like to ask everyone who operates a
third-party CI account for Neutron to please look at the link below
and make sure you are running appropriate tests. If you have
questions, the weekly third-party meeting [2] is a great place to ask
questions.

Thanks,
Kyle

[1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
[2] https://wiki.openstack.org/wiki/Meetings/ThirdParty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Russell Bryant
On 08/18/2014 06:18 AM, Thierry Carrez wrote:
 Doug Hellmann wrote:
 On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
 Let me try to say it another way.  You seemed to say that it wasn't much
 to ask given the rate at which things happen in OpenStack.  I would
 argue that given the rate, we should not try to ask more of individuals
 (like this proposal) and risk burnout.  Instead, we should be doing our
 best to be more open an inclusive to give the project the best chance to
 grow, as that's the best way to get more done.

 I think an increased travel expectation is a raised bar that will hinder
 team growth, not help it.

 +1, well said.
 
 Sorry, I was away for a few days. This is a topic I have a few strong
 opinions on :)
 
 There is no denial that the meetup format is working well, comparatively
 better than the design summit format. There is also no denial that
 requiring four trips per year for a core dev is unreasonable. Where is
 the limit? Wouldn't we be more productive and aligned if we did one per
 month? No, the question is how to reach a sufficient level of focus and
 alignment while keeping the number of mandatory trips at 2 per year.
 
 I don't think our issue comes from not having enough F2F time. Our issue
 is that the design summit no longer reaches its objectives of aligning
 key contributors on a common plan, and we need to fix it.
 
 We established the design summit as the once-per-cycle opportunity to
 have face-to-face time and get alignment across the main contributors to
 a project. That used to be completely sufficient, but now it doesn't
 work as well... which has resulted in alignment and team discussions
 happening at mid-cycle meetups instead. Why? And what could we change
 to have those alignment discussions at the design summit again?
 
 Why are design summits less productive than mid-cycle meetups these days?
 Is it because there are too many non-contributors in the design summit
 rooms? Is it the 40-min format? Is it the distractions (having talks
 to give somewhere else, booths to attend, parties and dinners to be at)?
 Is it that the beginning of the cycle is not the best moment? Once we know
 WHY the design summit fails its main objective, maybe we can fix it.
 
 My gut feeling is that having a restricted audience and a smaller group
 lets people get to the bottom of an issue and reach consensus. And that
 you need at least half a day or a full day of open discussion to reach
 such alignment. And that it's not particularly great to get such
 alignment in the middle of the cycle, getting it at the start is still
 the right way to align with the release cycle.
 
 Nothing prevents us from changing part of the design summit format (even
 the Paris one!), and restricting attendance to some of the sessions. And if
 the main issue is the distraction from the conference colocation, we
 might have to discuss the future of co-location again. In that 2 events
 per year objective, we could make the conference the optional cycle
 thing, and a developer-oriented specific event the mandatory one.
 
 If we manage to have alignment at the design summit, then it doesn't
 spell the end of the mid-cycle things. But then, ideally the extra
 mid-cycle gatherings should be focused on getting specific stuff done,
 rather than general team alignment. Think workshop/hackathon rather than
 private gathering. The goal of the workshop would be published in
 advance, and people could opt to join that. It would be totally optional.

Great response ... I agree with everything you've said here.  Let's
figure out how to improve the design summit to better achieve team
alignment.

Of the things you mentioned, I think the biggest limit to alignment has
been the 40 minute format.  There are some topics that need more time.
It may be that we just need to take more advantage of the ability to
give a single topic multiple time slots to ensure enough time is
available.  As Dan discussed, there are some topics that we could stand
to turn down and distribute information another way that is just as
effective.

I would also say that the number of things going on at one time is also
problematic.  Not only are there several design summit sessions going on at
once, but there are conference sessions and customer meetings.  The
rapid rate of jumping around and context switching is exhausting.  It
also makes it a bit harder to get critical mass for an extended period
of time around a topic.  In mid-cycle meetups, there is one track and no
other things competing for time and attention.

I don't have a good suggestion for fixing this issue with so many things
competing for time and attention.  I used to be a big proponent of
splitting the event out completely, but I don't feel the same way
anymore.  In theory we could call the conference the optional event, but
in practice it's going to be required for many folks anyway.  I can't
speak for everyone, but I suspect if you're a senior engineer at your

Re: [openstack-dev] [All] LOG.warning/LOG.warn

2014-08-18 Thread Doug Hellmann

On Aug 18, 2014, at 10:15 AM, Daniel P. Berrange berra...@redhat.com wrote:

 On Mon, Aug 18, 2014 at 07:57:28AM +1000, Michael Still wrote:
 My recollection is that this was a request from the oslo team, but it
 was so long ago that I don't recall the details.
 
 I think the change is low value, so should only be done when someone
 is changing the logging in a file already (the log hinting for
 example).
 
 The lazy conversion approach really encourages bad practice and is very
 wasteful for developers/reviewers. In GIT commit guidelines we explicitly
 say not to make code cleanups in their code that are unrelated to the
 feature/bug being addressed. When we have done lazy conversion for this
 kind of thing, I've seen it waste a hell of a lot of time for developers.
 People are never entirely clear which is the preferred style, so they
 end up just making a guess which will often be wrong. So now we consume
 scarce reviewer time pointing this out to people over & over & over again,
 and waste developer time having them re-post their patches again.
 
 If we want to change LOG.warning to LOG.warn, we should do a single
 patch with a global search & replace to get the pain over & done with
 as soon as possible, then enforce it with a hacking rule. No reviewer
 time gets wasted and developers will see their mistake right away.
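
 As a rough sketch of the kind of hacking rule being suggested (the check
 number, message, regex and preferred spelling below are placeholders for
 whatever the team actually agrees on, not an existing rule):

    import re

    _log_warning_re = re.compile(r'\bLOG\.warning\(')

    def check_log_warn_spelling(logical_line):
        # Flag whichever spelling the project decides against; here we assume
        # LOG.warning is the one being replaced, per the suggestion above.
        if _log_warning_re.search(logical_line):
            yield (0, 'Nxxx: use LOG.warn instead of LOG.warning')

    def factory(register):
        # Local checks are typically wired up through a factory like this,
        # referenced from the project's flake8/hacking configuration.
        register(check_log_warn_spelling)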
 
 Regards,
 Daniel

The only issue I can see with that approach is causing existing patches to have 
to be rebased. That can be mitigated by making these sorts of cleanups at a 
time when other feature changes aren’t going to be under development, though.

Doug

 -- 
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Accessing environment information in javelin2

2014-08-18 Thread Chris Dent


To make some time oriented comparisons in javelin2 I'd like to be
able to access the timestamps on the data dumps in the $SAVE_DIR.

In my experiments I've done this by pushing SAVE_DIR and
BASE_RELEASE into the subshell that calls javelin2 -m create in
grenade.sh.

Is there:

* A better way to get those two chunks of info (that is, without
  changing grenade.sh)?
* Some other way to get a timestamp that is a time shortly before
  the services in the TARGET_RELEASE have started?

The reason for doing this? I want to be able to confirm that some
sample data retrieved in a query against the ceilometer API has
samples that span the upgrade.

Thoughts?

Thanks.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-18 Thread Edgar Magana
Thank you Akihiro.

I will propose a better organization for this section. Stay tuned!

Edgar

On 8/17/14, 10:53 PM, Akihiro Motoki mot...@da.jp.nec.com wrote:


On 2014/08/18 0:12, Kyle Mestery wrote:
 On Fri, Aug 15, 2014 at 5:35 PM, Edgar Magana
edgar.mag...@workday.com wrote:
 Team,

 I did a quick audit on the Neutron CI. Very sad results. Only a few plugins
 and drivers are running properly and testing all Neutron commits.
 I created a report here:
 
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers

 Can you link this and/or move it to this page:

 https://wiki.openstack.org/wiki/NeutronPlugins

 This is under the NeutronPolicies wiki page which I did at the start
 of Juno. This tracks all policies and procedures for Neutron, and
 there's a Plugins page (which I linked to above) where this should
 land.

I just added the link Neutron_Plugins_and_Drivers#Existing_Plugin
to NeutronPlugins wiki.

The wiki pages NeutronPlugins and Neutron_Plugins_and_Drivers
cover similar content. According to the history of the page,
the latter one was created by Mark in Nov 2013 (at the beginning of the
Icehouse cycle).
It seems better to merge these two pages to avoid the confusion.

Akihiro



 We will discuss the actions to take on the next Neutron IRC meeting. So
 please, reach me out to clarify what is the status of your CI.
 I had two commits to quickly verify the CI reliability:

 https://review.openstack.org/#/c/114393/

 https://review.openstack.org/#/c/40296/


 I would expect all plugins and drivers passing on the first one and
 failing for the second but I got so many surprises.

 Neutron code quality and reliability is a top priority, if you ignore
this
 report that plugin/driver will be candidate to be remove from Neutron
tree.

 Cheers,

 Edgar

 P.S. I hate to be the inquisitor here… but someone has to do the dirty
job!

 Thanks for sending this out Edgar and doing this analysis! Can you
 please put an agenda item on Monday's meeting to discuss this? I won't
 be at the meeting as I'm on PTO (Mark is running the meeting in my
 absence), but I'd like the team to discuss this and allow all
 third-party people a chance to be there and share their feelings here.

 Thanks,
 Kyle


 On 8/14/14, 8:30 AM, Kyle Mestery mest...@mestery.com wrote:

 Folks, I'm not sure if all CI accounts are running sufficient tests.
 Per the requirements wiki page here [1], everyone needs to be running
 more than just Tempest API tests, which I still see most neutron
 third-party CI setups doing. I'd like to ask everyone who operates a
 third-party CI account for Neutron to please look at the link below
 and make sure you are running appropriate tests. If you have
 questions, the weekly third-party meeting [2] is a great place to ask
 questions.

 Thanks,
 Kyle

 [1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
 [2] https://wiki.openstack.org/wiki/Meetings/ThirdParty

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread John Griffith
On Mon, Aug 18, 2014 at 9:18 AM, Russell Bryant rbry...@redhat.com wrote:

 On 08/18/2014 06:18 AM, Thierry Carrez wrote:
  Doug Hellmann wrote:
  On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
  Let me try to say it another way.  You seemed to say that it wasn't
 much
  to ask given the rate at which things happen in OpenStack.  I would
  argue that given the rate, we should not try to ask more of individuals
  (like this proposal) and risk burnout.  Instead, we should be doing our
  best to be more open an inclusive to give the project the best chance
 to
  grow, as that's the best way to get more done.
 
  I think an increased travel expectation is a raised bar that will
 hinder
  team growth, not help it.
 
  +1, well said.
 
  Sorry, I was away for a few days. This is a topic I have a few strong
  opinions on :)
 
  There is no denial that the meetup format is working well, comparatively
   better than the design summit format. There is also no denial that
   requiring four trips per year for a core dev is unreasonable. Where is
  the limit ? Wouldn't we be more productive and aligned if we did one per
  month ? No, the question is how to reach a sufficient level of focus and
  alignment while keeping the number of mandatory travel at 2 per year.
 
  I don't think our issue comes from not having enough F2F time. Our issue
  is that the design summit no longer reaches its objectives of aligning
  key contributors on a common plan, and we need to fix it.
 
  We established the design summit as the once-per-cycle opportunity to
  have face-to-face time and get alignment across the main contributors to
  a project. That used to be completely sufficient, but now it doesn't
  work as well... which resulted in alignment and team discussions to be
  discussed at mid-cycle meetups instead. Why ? And what could we change
  to have those alignment discussions at the design summit again ?
 
   Why are design summits less productive than mid-cycle meetups these days
  ? Is it because there are too many non-contributors in the design summit
  rooms ? Is it the 40-min format ? Is it the distractions (having talks
  to give somewhere else, booths to attend, parties and dinners to be at)
  ? Is it that beginning of cycle is not the best moment ? Once we know
  WHY the design summit fails its main objective, maybe we can fix it.
 
  My gut feeling is that having a restricted audience and a smaller group
  lets people get to the bottom of an issue and reach consensus. And that
  you need at least half a day or a full day of open discussion to reach
  such alignment. And that it's not particularly great to get such
  alignment in the middle of the cycle, getting it at the start is still
  the right way to align with the release cycle.
 
  Nothing prevents us from changing part of the design summit format (even
  the Paris one!), and restrict attendance to some of the sessions. And if
  the main issue is the distraction from the conference colocation, we
  might have to discuss the future of co-location again. In that 2 events
  per year objective, we could make the conference the optional cycle
  thing, and a developer-oriented specific event the mandatory one.
 
  If we manage to have alignment at the design summit, then it doesn't
  spell the end of the mid-cycle things. But then, ideally the extra
  mid-cycle gatherings should be focused on getting specific stuff done,
  rather than general team alignment. Think workshop/hackathon rather than
  private gathering. The goal of the workshop would be published in
  advance, and people could opt to join that. It would be totally optional.

 Great response ... I agree with everything you've said here.  Let's
 figure out how to improve the design summit to better achieve team
 alignment.

 Of the things you mentioned, I think the biggest limit to alignment has
 been the 40 minute format.  There are some topics that need more time.
 It may be that we just need to take more advantage of the ability to
 give a single topic multiple time slots to ensure enough time is
 available.  As Dan discussed, there are some topics that we could stand
 to turn down and distribute information another way that is just as
 effective.

 I would also say that the number of things going on at one time is also
 problematic.  Not only are there several design summit sessions going
 once, but there are conference sessions and customer meetings.  The
 rapid rate of jumping around and context switching is exhausting.  It
 also makes it a bit harder to get critical mass for an extended period
 of time around a topic.  In mid-cycle meetups, there is one track and no
 other things competing for time and attention.

 I don't have a good suggestion for fixing this issue with so many things
 competing for time and attention.  I used to be a big proponent of
 splitting the event out completely, but I don't feel the same way
 anymore.  In theory we could call the conference the 

[openstack-dev] [neutron] [third-party] Mellanox CI Third party system is going down for Maintenance

2014-08-18 Thread Omri Marcovitch

Hi,

Mellanox CI is going down for maintenance.
We will notify as soon as the system is up and ready.

Sorry for the inconvenience,
Omri

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Mellanox CI Third party system is going down for Maintenance

2014-08-18 Thread Anita Kuno
On 08/18/2014 09:43 AM, Omri Marcovitch wrote:
 
 Hi,
 
 Mellanox CI is going down for maintenance.
 We will notify as soon as the system is up and ready.
 
 Sorry for the inconvenience,
 Omri
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Hi Omri:

Please attend the third party meeting today or send a delegate:
https://wiki.openstack.org/wiki/Meetings/ThirdParty

We need to discuss how to inform people of ci status changes and updates
without spamming the -dev mailing list. I have some thoughts and welcome
your input.

Thanks Omri,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Daniel P. Berrange
On Mon, Aug 18, 2014 at 11:18:52AM -0400, Russell Bryant wrote:
 On 08/18/2014 06:18 AM, Thierry Carrez wrote:
  Doug Hellmann wrote:
  On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
  Let me try to say it another way.  You seemed to say that it wasn't much
  to ask given the rate at which things happen in OpenStack.  I would
  argue that given the rate, we should not try to ask more of individuals
  (like this proposal) and risk burnout.  Instead, we should be doing our
  best to be more open an inclusive to give the project the best chance to
  grow, as that's the best way to get more done.
 
  I think an increased travel expectation is a raised bar that will hinder
  team growth, not help it.
 
  +1, well said.
  
  Sorry, I was away for a few days. This is a topic I have a few strong
  opinions on :)
  
  There is no denial that the meetup format is working well, comparatively
   better than the design summit format. There is also no denial that
   requiring four trips per year for a core dev is unreasonable. Where is
  the limit ? Wouldn't we be more productive and aligned if we did one per
  month ? No, the question is how to reach a sufficient level of focus and
  alignment while keeping the number of mandatory travel at 2 per year.
  
  I don't think our issue comes from not having enough F2F time. Our issue
  is that the design summit no longer reaches its objectives of aligning
  key contributors on a common plan, and we need to fix it.
  
  We established the design summit as the once-per-cycle opportunity to
  have face-to-face time and get alignment across the main contributors to
  a project. That used to be completely sufficient, but now it doesn't
  work as well... which resulted in alignment and team discussions to be
  discussed at mid-cycle meetups instead. Why ? And what could we change
  to have those alignment discussions at the design summit again ?
  
   Why are design summits less productive than mid-cycle meetups these days
  ? Is it because there are too many non-contributors in the design summit
  rooms ? Is it the 40-min format ? Is it the distractions (having talks
  to give somewhere else, booths to attend, parties and dinners to be at)
  ? Is it that beginning of cycle is not the best moment ? Once we know
  WHY the design summit fails its main objective, maybe we can fix it.
  
  My gut feeling is that having a restricted audience and a smaller group
  lets people get to the bottom of an issue and reach consensus. And that
  you need at least half a day or a full day of open discussion to reach
  such alignment. And that it's not particularly great to get such
  alignment in the middle of the cycle, getting it at the start is still
  the right way to align with the release cycle.
  
  Nothing prevents us from changing part of the design summit format (even
  the Paris one!), and restrict attendance to some of the sessions. And if
  the main issue is the distraction from the conference colocation, we
  might have to discuss the future of co-location again. In that 2 events
  per year objective, we could make the conference the optional cycle
  thing, and a developer-oriented specific event the mandatory one.
  
  If we manage to have alignment at the design summit, then it doesn't
  spell the end of the mid-cycle things. But then, ideally the extra
  mid-cycle gatherings should be focused on getting specific stuff done,
  rather than general team alignment. Think workshop/hackathon rather than
  private gathering. The goal of the workshop would be published in
  advance, and people could opt to join that. It would be totally optional.
 
 Great response ... I agree with everything you've said here.  Let's
 figure out how to improve the design summit to better achieve team
 alignment.
 
 Of the things you mentioned, I think the biggest limit to alignment has
 been the 40 minute format.  There are some topics that need more time.
 It may be that we just need to take more advantage of the ability to
 give a single topic multiple time slots to ensure enough time is
 available.  As Dan discussed, there are some topics that we could stand
 to turn down and distribute information another way that is just as
 effective.
 
 I would also say that the number of things going on at one time is also
 problematic.  Not only are there several design summit sessions going
 once, but there are conference sessions and customer meetings.  The
 rapid rate of jumping around and context switching is exhausting.  It
 also makes it a bit harder to get critical mass for an extended period
 of time around a topic.  In mid-cycle meetups, there is one track and no
 other things competing for time and attention.
 
 I don't have a good suggestion for fixing this issue with so many things
 competing for time and attention.  I used to be a big proponent of
 splitting the event out completely, but I don't feel the same way
 anymore.  In theory we could call the conference the optional 

Re: [openstack-dev] [qa] Accessing environment information in javelin2

2014-08-18 Thread Chris Dent

On Mon, 18 Aug 2014, Chris Dent wrote:


The reason for doing this? I want to be able to confirm that some
sample data retrieved in a query against the ceilometer API has
samples that span the upgrade.


The associated change is here:

https://review.openstack.org/#/c/102354

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] LOG.warning/LOG.warn

2014-08-18 Thread Daniel P. Berrange
On Mon, Aug 18, 2014 at 11:27:39AM -0400, Doug Hellmann wrote:
 
 On Aug 18, 2014, at 10:15 AM, Daniel P. Berrange berra...@redhat.com wrote:
 
  On Mon, Aug 18, 2014 at 07:57:28AM +1000, Michael Still wrote:
  My recollection is that this was a request from the oslo team, but it
  was so long ago that I don't recall the details.
  
  I think the change is low value, so should only be done when someone
  is changing the logging in a file already (the log hinting for
  example).
  
  The lazy conversion approach really encourages bad practice and is very
  wasteful for developers/reviewers. In GIT commit guidelines we explicitly
  say not to make code cleanups in their code that are unrelated to the
  feature/bug being addressed. When we have done lazy conversion for this
   kind of thing, I've seen it waste a hell of a lot of time for developers.
  People are never entirely clear which is the preferred style, so they
  end up just making a guess which will often be wrong. So now we consume
   scarce reviewer time pointing this out to people over & over & over again,
  and waste developer time having them re-post their patches again.
  
  If we want to change LOG.warning to LOG.warn, we should do a single
   patch with a global search & replace to get the pain over & done with
  as soon as possible, then enforce it with a hacking rule. No reviewer
  time gets wasted and developers will see their mistake right away.
 
 The only issue I can see with that approach is causing existing patches
 to have to be rebased. That can be mitigated by making these sorts of
 cleanups at a time when other feature changes aren’t going to be under
 development, though.

The pessimist in me says that the majority of existing patches are going to
have to be rebased many more times for unrelated reasons already, so this
won't be as bad as you might fear. It would make sense to avoid doing
these kind of cleanups immediately before any release milestones though,
since there is no sense in intentionally inflicting this pain at the
worst possible time :-)

So IMHO there is a tiny window for someone to propose this for Juno
right now if reviewers commit to merging it quickly. Otherwise any
conversion should wait until Kilo opens, and don't attempt to do a
piece-by-piece conversion in the meantime.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Mellanox CI Third party system is going down for Maintenance

2014-08-18 Thread Omri Marcovitch
Mellanox CI is up and ready.

Thanks

From: Omri Marcovitch [mailto:om...@mellanox.com]
Sent: Monday, August 18, 2014 6:44 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] [third-party] Mellanox CI Third party system 
is going down for Maintenance


Hi,

Mellanox CI is going down for maintenance.
We will notify as soon as the system is up and ready.

Sorry for the inconvenience,
Omri

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra][Neutron] tempest requirements errors while fetching oslo.i18n=0.1.0

2014-08-18 Thread Jeremy Stanley
On 2014-08-17 23:53:12 -0700 (-0700), daya kamath wrote:
[...]
 openstack-infra does not get updated as part of the gate jobs
[...]

Right, we use puppet to continuously apply that configuration to our
durable workers and nodepool templates.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] Mellanox CI Third party system is going down for Maintenance

2014-08-18 Thread Anita Kuno
On 08/18/2014 10:18 AM, Omri Marcovitch wrote:
 Mellanox CI is up and ready.
 
 Thanks
 
 From: Omri Marcovitch [mailto:om...@mellanox.com]
 Sent: Monday, August 18, 2014 6:44 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [neutron] [third-party] Mellanox CI Third party 
 system is going down for Maintenance
 
 
 Hi,
 
 Mellanox CI is going down for maintenance.
 We will notify as soon as the system is up and ready.
 
 Sorry for the inconvenience,
 Omri
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Omri:

I sent an email to you as part of this thread asking you to attend or
send someone to the third party meeting to address the spamming of the
mailing list for ci status.

Please acknowledge my email. Please additionally stop spamming the
development mailing list with information which needs to be communicated
in a more efficient fashion.

I await your reply,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Devstack] q-svc fails to start in devstack.

2014-08-18 Thread Parikshit Manur
Hi All,
Start of q-svc in devstack fails with the error message "No type
driver for tenant network_type: vxlan. Service terminated!". I have not chosen
vxlan as the ML2 type driver in localrc. I have added the contents of the
localrc file for my setup below for reference. Can you please point out if I am
missing any config, or whether there is a workaround to fix the issue? Could
you also point me to a separate mailing list for devstack-related queries, if
there is one.

localrc file contents:

RECLONE=yes
DEST=/opt/stack
SCREEN_LOGDIR=/opt/stack/new/screen-logs
LOGFILE=/opt/stack/new/devstacklog.txt
DATABASE_PASSWORD=password
RABBIT_PASSWORD= password
SERVICE_TOKEN= password
SERVICE_PASSWORD= password
ADMIN_PASSWORD= password
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-lbaas
enable_service neutron
# Optional, to enable tempest configuration as part of devstack
enable_service tempest
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat
ML2_VLAN_RANGES=physnet1:1500:1600
ENABLE_TENANT_VLANS=True
PHYSICAL_NETWORK=physnet1
OVS_PHYSICAL_BRIDGE=br-eth1

Thanks,
Parikshit Manur

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which program for Rally

2014-08-18 Thread Matthew Treinish
On Fri, Aug 15, 2014 at 01:57:29AM +0400, Boris Pavlovic wrote:
 Matt,
 
 One thing did just occur to me while writing this though it's probably worth
  investigating splitting out the stress test framework as an external
  tool/project after we start work on the tempest library. [3]
 
 
 
 I fully agree with the fact that stress testing doesn't belong to Tempest.
 
 This current thread is all about this aspect and all my arguments, related
 to splitting Rally and merging it to tempest are related to this.
 
 
 Could you please elaborate on why, instead of a ready solution (Rally) that
 has a community and is aligned with OpenStack and OpenStack processes, you
 are going to create a similar solution from scratch?

This is the same issue which was brought up on your subunit2sql thread [1] and
has come up several times already in this thread. Rally doesn't work as
individual components, you need to use the whole tool chain to do anything with
it. There are pieces of Rally which would be very useful in conjunction with all
the other projects we have in the gating workflow. But, because Rally has
decided to duplicate much of what we already have and then tie everything
together in a monolithic toolchain we can't use these pieces which are useful by
itself. This doesn't work with the workflow we already have, it also ignores the
split in functionality we currently have and are working on improving. Instead
of rewriting all the details again just look at [2][3] they elaborate on all of
this already.

You're also ignoring that by splitting out the stress test framework we'd be
doing the exact thing you're so opposed to doing in rally. Which is splitting
out existing functionality from a larger more complex project into smaller more
purpose built consumable chunks that work together to build a modular pipeline,
which can be used in different configurations to suit the application. It's also
not being created from scratch, the stress framework already exists and has for
quite some time. (it pre-dates Rally) We would just be breaking it off into a
separate repo to make the split in functionality more clear. It is essentially
a separate tool already, it just lives in the tempest tree which I feel is a
source of some confusion around it.

 
 I really don't see any reason why we need to duplicate an existing and
 working solution and can't just work together on Rally?

So I have to point out the double standard with this statement. You're ignoring
all the functionality that Rally has already duplicated. For example, the fact
that tempest exists as a battery of tests which is being slowly duplicated in
Rally, or the stress test framework, which is functionality that was more or
less completely duplicated in Rally. You seem to be under the mistaken
impression that by continuing to improve things on QA program projects we're
duplicating Rally. Which is honestly something I'm having a hard time
understanding. I especially do not see the value in working on improvements to
tempest and the existing workflow by replacing everything with Rally.



So I think there's been continued confusion around exactly how all the projects
work together in the QA program. So I wrote a blog post giving a high level
overview here: http://blog.kortar.org/?p=4

Now if Rally is willing to work with us it would really be awesome since a lot
of the work has already been done by them. But we would need to rework Rally so
the bits which don't already exist in the QA program are interoperable with what
we have now and can be used independently. Duplicated functionality would need
to be removed, and we would need to add the improvements in Rally on existing
functionality back into the projects, where they really belong. However, as I've
been continually saying in this thread, Rally in its current form doesn't work
with our current model, nor does it work with the vision, at least I have, for
how things should be in the future.

Now I'm sure someone is going to see the flow diagrams on my blog and see it as
a challenge to explain how Rally does it better today, or something along those
lines. Please don't, because honestly it's irrelevant, these are just ideas in
my head, and where I want to help steer the QA program as long as I'm working on
it. (at least as of today) I fully expect things to evolve and grow more
organically resulting in something that will be completely different from that.

Also, I'm really done with this thread, I've outlined my stance repeatedly and
tried my best to explain my position as clearly as I can. At this point I have
nothing else to add. I view the burden as being fully on the Rally team to
decide whether they want to start working with the QA program towards
integrating Rally into the QA program, (the steps Sean outlined in [2] are a
good start) or remain a separate external project. (or I guess unless the TC
mandates something else)

One last comment, I do want to apologize in advance if any of my words seem
harsh or 

Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Maru Newby

On Aug 14, 2014, at 8:52 AM, Russell Bryant rbry...@redhat.com wrote:

 On 08/14/2014 11:40 AM, David Kranz wrote:
 On 08/14/2014 10:54 AM, Matt Riedemann wrote:
 
 
 On 8/14/2014 3:47 AM, Daniel P. Berrange wrote:
 On Thu, Aug 14, 2014 at 09:24:36AM +1000, Michael Still wrote:
 On Thu, Aug 14, 2014 at 3:09 AM, Dan Smith d...@danplanet.com wrote:
 I'm not questioning the value of f2f - I'm questioning the idea of
 doing f2f meetings sooo many times a year. OpenStack is very much
 the outlier here among open source projects - the vast majority of
 projects get along very well with much less f2f time and a far
 smaller % of their contributors attend those f2f meetings that do
 happen. So I really do question what is missing from OpenStack's
 community interaction that makes us believe that having 4 f2f
 meetings a year is critical to our success.
 
 How many is too many? So far, I have found the midcycles to be
 extremely
 productive -- productive in a way that we don't see at the summits,
 and
 I think other attendees agree. Obviously if budgets start limiting
 them,
 then we'll have to deal with it, but I don't want to stop meeting
 preemptively.
 
 I agree they're very productive. Let's pick on the nova v3 API case as
 an example... We had failed as a community to reach a consensus using
 our existing discussion mechanisms (hundreds of emails, at least three
 specs, phone calls between the various parties, et cetera), yet at the
 summit and then a midcycle meetup we managed to nail down an agreement
 on a very contentious and complicated topic.
 
 We thought we had agreement on v3 API after Atlanta f2f summit and
 after Hong Kong f2f too. So I wouldn't neccessarily say that we
 needed another f2f meeting to resolve that, but rather than this is
 a very complex topic that takes a long time to resolve no matter
 how we discuss it and the discussions had just happened to reach
 a natural conclusion this time around. But lets see if this agreement
 actually sticks this time
 
 I can see the argument that travel cost is an issue, but I think its
 also not a very strong argument. We have companies spending millions
 of dollars on OpenStack -- surely spending a relatively small amount
 on travel to keep the development team as efficient as possible isn't
 a big deal? I wouldn't be at all surprised if the financial costs of
 the v3 API debate (staff time mainly) were much higher than the travel
 costs of those involved in the summit and midcycle discussions which
 sorted it out.
 
 I think the travel cost really is a big issue. Due to the number of
 people who had to travel to the many mid-cycle meetups, a good number
 of people I work with no longer have the ability to go to the Paris
 design summit. This is going to make it harder for them to feel a
 proper engaged part of our community. I can only see this situation
 get worse over time if greater emphasis is placed on attending the
 mid-cycle meetups.
 
 Travelling to places to talk to people isn't a great solution, but it
 is the most effective one we've found so far. We should continue to
 experiment with other options, but until we find something that works
 as well as meetups, I think we need to keep having them.
 
 IMHO, the reasons to cut back would be:
 
  - People leaving with a "well, that was useless..." feeling
 - Not enough people able to travel to make it worthwhile
 
 So far, neither of those have been outcomes of the midcycles we've
 had,
 so I think we're doing okay.
 
 The design summits are structured differently, where we see a lot more
 diverse attendance because of the colocation with the user summit. It
 doesn't lend itself well to long and in-depth discussions about
 specific
 things, but it's very useful for what it gives us in the way of
 exposure. We could try to have less of that at the summit and more
 midcycle-ish time, but I think it's unlikely to achieve the same level
 of usefulness in that environment.
 
 Specifically, the lack of colocation with too many other projects has
  been a benefit. This time, Mark and Maru were there from Neutron.
 Last
 time, Mark from Neutron and the other Mark from Glance were there. If
 they were having meetups in other rooms (like at summit) they wouldn't
 have been there exposed to discussions that didn't seem like they'd
 have
 a component for their participation, but did after all (re: nova and
 glance and who should own flavors).
 
 I agree. The ability to focus on the issues that were blocking nova
 was very important. That's hard to do at a design summit when there is
 so much happening at the same time.
 
 Maybe we should change the way we structure the design summit to
 improve that. If there are critical issues blocking nova, it feels
 like it is better to be able to discuss and resolve as much as possible
 at the start of the dev cycle rather than in the middle of the dev
 cycle because I feel that means we are causing ourselves pain during
 milestone 1/2.
 
 Just speaking from 

Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Maru Newby

On Aug 13, 2014, at 10:32 PM, Michael Still mi...@stillhq.com wrote:

 On Thu, Aug 14, 2014 at 2:48 PM, Joe Gordon joe.gord...@gmail.com wrote:
 On Wed, Aug 13, 2014 at 8:31 PM, Michael Still mi...@stillhq.com wrote:
 On Thu, Aug 14, 2014 at 1:24 PM, Jay Pipes jaypi...@gmail.com wrote:
 
 Just wanted to quickly weigh in with my thoughts on this important
 topic. I
 very much valued the face-to-face interaction that came from the
 mid-cycle
 meetup in Beaverton (it was the only one I've ever been to).
 
 That said, I do not believe it should be a requirement that cores make
 it to
 the face-to-face meetings in-person. A number of folks have brought up
 very
 valid concerns about personal/family time, travel costs and burnout.
 
 I'm not proposing they be a requirement. I am proposing that they be
 strongly encouraged.
 
 I believe that the issue raised about furthering the divide between core
 and
 non-core folks is actually the biggest reason I don't support a mandate
 to
 have cores at the face-to-face meetings, and I think we should make our
 best
 efforts to support quality virtual meetings that can be done on a more
 frequent basis than the face-to-face meetings that would be optional.
 
 I am all for online meetings, but we don't have a practical way to do
 them at the moment apart from IRC. Until someone has a concrete
  proposal that's been shown to work, I feel it's a straw man argument.
 
 What about making it easier for remote people to participate at the
 mid-cycle meetups? Set up some microphones and a Google hangout?  At least
 that way attending the mid-cycle is not all or nothing.
 
 We did something like this last cycle (IIRC we didn't have enough mics) and
 it worked pretty well.
 
 As I said, I'm open to experimenting, but I need someone other than me
 to own that. I'm simply too busy to get to it.
 
 However, I don't think we should throw away the thing that works for
 best for us now, until we have a working replacement. I'm very much in
 favour of work being done on a replacement though.

+1

I agree that mid-cycles may not be sustainable over the long term due to 
issues of travel cost (financial and otherwise) and a lack of inclusiveness, 
but I don't think they should stop happening until a suitably productive 
alternative has been found.


Maru



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes/log - 08/18/2014

2014-08-18 Thread Renat Akhmerov
Folks, 

Thanks for joining the meeting today.

As usual,
Meeting minutes:  
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-08-18-16.00.html
Meeting log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-08-18-16.00.log.html

Meeting agenda/archive: https://wiki.openstack.org/wiki/Meetings/MistralAgenda

Next Monday on Aug 25 we’ll meet again.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron router and nf_conntrack performance problems

2014-08-18 Thread Brian Haley
Stuart,

I also can't say I've seen this, but I am curious now.  I did have a few
questions for you though.

1. When you say you set nf_conntrack_max/nf_conntrack_hash to 256k, did you
really set the hash size that large?  Typically the hash is 1/8 of the max,
meaning you'd have 8 entries per hashbucket.

2. Does /sys/module/nf_conntrack/parameters/hashsize look correct?

3. Are you seeing any messages such as "nf_conntrack: table full, dropping
packet"?

4. How many entries are in the conntrack table?  'sudo conntrack -C'

5. Have you been able to drill down any further into what's taking all the time
in nf_conntrack_tuple_taken() ?  I can't imagine you have a single bucket with
tons of entries and you're spinning looking at each, but it could be that 
simple.

Thanks,

-Brian

On 08/16/2014 12:12 PM, Stuart Fox wrote:
 Hey neutron dev!
 
 I'm having a serious problem with my neutron router getting spin-locked in
 nf_conntrack_tuple_taken.
 Has anybody else experienced it?
 perf top shows nf_conntrack_tuple_taken at 75%
 As the incoming request rate goes up, so nf_conntrack_tuple_taken runs very 
 hot
 on CPU0 causing ksoftirqd/0 to run at 100%. At that point internal pings on 
 the
 GRE network go sky high and it's game over. Pinging from a vm to the subnet
 default gateway on the neutron router goes from 0.2ms to 11s! Pinging from the same 
 vm
 to another vm in the same subnet stays constant at 0.2ms.
 
 Very much indicates to me that the neutron router is having serious problems.
 No other part of the system seems under pressure.
 
 ipv6 is disabled, and nf_conntrack_max/nf_conntrack_hash are set to 256k.
 We've tried the default 3.13 and the utopic 3.16 kernel (3.16 has lots of work
 on removing spinlocks around nf_conntrack). 3.16 survives a little longer but
 still gets in the same state
 
 Neutron router
 1 x Ubuntu 14.04/Icehouse 2014.1.1 on an ibm x3550 with 4 10G intel nics.
 eth0 - Mgt
 eth1 - GRE
 eth2 - Public
 eth3 - unused 
 
 Compute/controller nodes
 43 x Ubuntu 14.04/Icehouse 2014.1.1 ibm x240 flex blades with 4 emulex nics
 eth0 Mgt
 eth2 GRE
 
 Any help very much appreciated!
 Replacing the l2/l3 functions with hardware is very much an option if that's a
 better solution.
 I'm running out of time before my client decides to stay on AWS.
 
 
 
 BR,
 Stuart
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack] q-svc fails to start in devstack.

2014-08-18 Thread Brian Haley
When you don't specify it, the default network type is:

(from lib/neutron_plugins/ml2)
Q_ML2_TENANT_NETWORK_TYPE=${Q_ML2_TENANT_NETWORK_TYPE:-vxlan}

You can try specifying that as vlan in your local.conf file and see what 
happens.
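
For example, something along these lines in localrc/local.conf should select
the vlan type driver for tenant networks (untested; adjust for your setup):

    Q_ML2_TENANT_NETWORK_TYPE=vlan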

-Brian

BTW, this probably should have just gone to openst...@lists.openstack.org, not
the -dev list.

On 08/18/2014 12:35 PM, Parikshit Manur wrote:
 Hi All,
 
 Start of q-svc  in devstack fails with error message “No type
 driver for tenant network_type: vxlan. Service terminated!”. I have not 
 choosen
 vxlan as ML2 type driver in localrc. I have added the details of localrc file
 for my setup below for reference. Can you please point out if I am missing any
 config or there is any workaround to fix the issue. Could you also point me 
 to a
 separate mailing list for devstack related queries if there is any.
 
  
 
 localrc file contents:
 
  
 
 RECLONE=yes
 
 DEST=/opt/stack
 
 SCREEN_LOGDIR=/opt/stack/new/screen-logs
 
 LOGFILE=/opt/stack/new/devstacklog.txt
 
 DATABASE_PASSWORD=password
 
 RABBIT_PASSWORD= password
 
 SERVICE_TOKEN= password
 
 SERVICE_PASSWORD= password
 
 ADMIN_PASSWORD= password
 
 disable_service n-net
 
 enable_service q-svc
 
 enable_service q-agt
 
 enable_service q-dhcp
 
 enable_service q-l3
 
 enable_service q-meta
 
 enable_service q-lbaas
 
 enable_service neutron
 
 # Optional, to enable tempest configuration as part of devstack
 
 enable_service tempest
 
 Q_PLUGIN=ml2
 
 Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
 
 Q_ML2_PLUGIN_TYPE_DRIVERS=*vlan,flat*
 
 ML2_VLAN_RANGES=physnet1:1500:1600
 
 ENABLE_TENANT_VLANS=True
 
 PHYSICAL_NETWORK=physnet1
 
 OVS_PHYSICAL_BRIDGE=br-eth1
 
  
 
 Thanks,
 
 Parikshit Manur
 
  
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Eichberger, German
Hi,

My 2 cents for the multiple listeners per load balancer discussion: We have 
customers who like to have a listener on port 80 and one on port 443 on the 
same VIP (we had to patch libra to allow two listeners in one single haproxy) 
- so having that would be great.

I like the proposed status :-)

Thanks,
German

-Original Message-
From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
Sent: Sunday, August 17, 2014 8:57 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure

Oh hello again!

You know the drill!

On Sat, 2014-08-16 at 11:42 -0700, Stephen Balukoff wrote:
 Hi Brandon,
 
 
 Responses in-line:
 
 On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:
 Comments in-line
 
 On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
  Hi folks,
 
 
  I'm OK with going with no shareable child entities
 (Listeners, Pools,
  Members, TLS-related objects, L7-related objects, etc.).
 This will
  simplify a lot of things (like status reporting), and we can
 probably
  safely work under the assumption that any user who has a use
 case in
  which a shared entity is useful is probably also technically
 savvy
  enough to not only be able to manage consistency problems
 themselves,
  but is also likely to want to have that level of control.
 
 
  Also, an haproxy instance should map to a single listener.
 This makes
  management of the configuration template simpler and the
 behavior of a
  single haproxy instance more predictable. Also, when it
 comes to
  configuration updates (as will happen, say, when a new
 member gets
  added to a pool), it's less risky and error prone to restart
 the
  haproxy instance for just the affected listener, and not for
 all
  listeners on the Octavia VM. The only down-sides I see are
 that we
  consume slightly more memory, we don't have the advantage of
 a shared
  SSL session cache (probably doesn't matter for 99.99% of
 sites using
  TLS anyway), and certain types of persistence wouldn't carry
 over
  between different listeners if they're implemented poorly by
 the
  user. :/  (In other words, negligible down-sides to this.)
 
 
 This is fine by me for now, but I think this might be
 something we can
 revisit later after we have the advantage of hindsight.  Maybe
 a
 configurable option.
 
 
 Sounds good, as long as we agree on a path forward. In the mean time, 
 is there anything I'm missing which would be a significant advantage 
 of having multiple Listeners configured in a single haproxy instance?
 (Or rather, where a single haproxy instance maps to a loadbalancer
 object?)

No particular reason as of now.  Just feel like that could be something that 
could hinder a particular feature or even performance in the future.  It's not 
rooted in any fact or past experience.

  
 I have no problem with this. However, one thing I often do
 think about
 is that it's not really ever going to be load balancing
 anything with
 just a load balancer and listener.  It has to have a pool and
 members as
 well.  So having ACTIVE on the load balancer and listener, and
 still not
 really load balancing anything is a bit odd.  Which is why I'm
 in favor
 of only doing creates by specifying the entire tree in one
 call
 (loadbalancer-listeners-pool-members).  Feel free to
 disagree with me
 on this because I know this not something everyone likes.  I'm
 sure I am
 forgetting something that makes this a hard thing to do.  But
 if this
 were the case, then I think only having the provisioning
 status on the
 load balancer makes sense again.  The reason I am advocating
 for the
 provisioning status on the load balancer is because it still
 simpler,
 and only one place to look to see if everything were
 successful or if
 there was an issue.
 
 
 Actually, there is one case where it makes sense to have an ACTIVE 
 Listener when that listener has no pools or members:  Probably the 2nd 
 or 3rd most common type of load balancing service we deploy is just 
 an HTTP listener on port 80 that redirects all requests to the HTTPS 
 listener on port 443. While this can be done using a (small) pool of 
 back-end servers responding to the port 80 requests, there's really no 
 point in not having the haproxy instance do this redirect directly for 
 sites that want all access to happen over 

Re: [openstack-dev] [Devstack] q-svc fails to start in devstack.

2014-08-18 Thread Kevin Benton
I'm not sure why, but the default tenant network type was changed to vxlan.
[1]

You now need to specify Q_ML2_TENANT_NETWORK_TYPE=vlan

1.
https://github.com/openstack-dev/devstack/commit/8feaf6c9516094df58df84479d73779e87a79264


On Mon, Aug 18, 2014 at 9:35 AM, Parikshit Manur parikshit.ma...@citrix.com
 wrote:

  Hi All,

 Start of q-svc  in devstack fails with error message “No
 type driver for tenant network_type: vxlan. Service terminated!”. I have
 not chosen vxlan as ML2 type driver in localrc. I have added the details
 of localrc file for my setup below for reference. Can you please point out
 if I am missing any config or there is any workaround to fix the issue.
 Could you also point me to a separate mailing list for devstack related
 queries if there is any.



 localrc file contents:



 RECLONE=yes

 DEST=/opt/stack

 SCREEN_LOGDIR=/opt/stack/new/screen-logs

 LOGFILE=/opt/stack/new/devstacklog.txt

 DATABASE_PASSWORD=password

 RABBIT_PASSWORD= password

 SERVICE_TOKEN= password

 SERVICE_PASSWORD= password

 ADMIN_PASSWORD= password

 disable_service n-net

 enable_service q-svc

 enable_service q-agt

 enable_service q-dhcp

 enable_service q-l3

 enable_service q-meta

 enable_service q-lbaas

 enable_service neutron

 # Optional, to enable tempest configuration as part of devstack

 enable_service tempest

 Q_PLUGIN=ml2

 Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch

 Q_ML2_PLUGIN_TYPE_DRIVERS=vlan,flat

 ML2_VLAN_RANGES=physnet1:1500:1600

 ENABLE_TENANT_VLANS=True

 PHYSICAL_NETWORK=physnet1

 OVS_PHYSICAL_BRIDGE=br-eth1



 Thanks,

 Parikshit Manur



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party] One CI for several OpenStack projects

2014-08-18 Thread Ivan Kolodyazhny
Hi All,

I'm working on Third Party CI for Cinder and I've got several issues with
Zuul configuration.


Third Party CI should run dsvm-tempest-full job to test Cinder driver in my
case. It means that all components should work well, not only Cinder.

E.g.: I'm working on Cinder + Ceph integration CI. It requires that
RBD-related code in Nova works well.
https://bugs.launchpad.net/nova/+bug/1352595 breaks my Cinder CI with Ceph
backend last week.

So, it looks like I need to setup Cinder Third Party CI with Ceph backend
for Cinder and Nova projects. But there are no needs to test Nova with
Cinder and Ceph for every Nova commit.

I'm looking for something like following:

1) run my Third Party CI for all patch-sets in Cinder
2) run my Third Party CI (Cinder + Ceph backend) for Nova only if it
changes nova/virt/libvirt/rbd.py module.

Is such an approach acceptable for Third Party CI? If yes, can Zuul
handle this kind of trigger?



Regards,
Ivan Kolodyazhny,
Software Engineer,
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Adrian Otto
If you want to run OpenStack services in Docker, I suggest having a look at 
Dockenstack:

https://github.com/ewindisch/dockenstack

Adrian

On Aug 18, 2014, at 3:04 AM, Jay Lau 
jay.lau@gmail.com wrote:

I see that there are some openstack docker images in public docker repo, 
perhaps you can check them on github to see how to use them.

[root@db03b04 ~]# docker search openstack
NAME DESCRIPTION
 STARS OFFICIAL   AUTOMATED
ewindisch/dockenstackOpenStack development environment 
(using D...   6[OK]
jyidiego/openstack-clientAn ubuntu 12.10 LTS image that has 
nova, s...   1
dkuffner/docker-openstack-stress A docker container for openstack which 
pro...   0[OK]
garland/docker-openstack-keystone   
 0[OK]
mpaone/openstack
 0
nirmata/openstack-base  
 0
balle/openstack-ipython2-client  Features Python 2.7.5, Ipython 2.1.0 
and H...   0
booleancandy/openstack_clients  
 0[OK]
leseb/openstack-keystone
 0
raxcloud/openstack-client   
 0
paulczar/openstack-agent
 0
booleancandy/openstack-clients  
 0
jyidiego/openstack-client-rumm-ansible  
 0
bodenr/jumpgate  SoftLayer Jumpgate WSGi OpenStack REST 
API...   0[OK]
sebasmagri/docker-marconiDocker images for the Marconi Message 
Queu...   0[OK]
chamerling/openstack-client 
 0[OK]
centurylink/openstack-cli-wetty  This image provides a Wetty terminal 
with ...   0[OK]


2014-08-18 16:47 GMT+08:00 Philip Cheong 
philip.che...@elastx.se:
I think it's a very interesting test for docker. I too have been thinking about
this for some time, to try and dockerise OpenStack services, but as the usual
story goes, I have plenty of things I'd love to try, but there are only so many
hours in a day...

Would definitely be interested to hear if anyone has attempted this and what 
the outcome was.

Any suggestions on what the most appropriate service would be to begin with?


On 14 August 2014 14:54, Jay Lau 
jay.lau@gmail.com wrote:
I see a few mentions of OpenStack services themselves being containerized in 
Docker. Is this a serious trend in the community?

http://allthingsopen.com/2014/02/12/why-containers-for-openstack-services/

--
Thanks,

Jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Philip Cheong
Elastx | Public and Private PaaS
email: philip.che...@elastx.se
office: +46 8 557 728 10
mobile: +46 702 8170 814
twitter: @Elastx https://twitter.com/Elastx
http://elastx.se/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday August 19th at 19:00 UTC

2014-08-18 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday August 19th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Joe Gordon
On Mon, Aug 18, 2014 at 3:18 AM, Thierry Carrez thie...@openstack.org
wrote:

 Doug Hellmann wrote:
  On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
  Let me try to say it another way.  You seemed to say that it wasn't much
  to ask given the rate at which things happen in OpenStack.  I would
  argue that given the rate, we should not try to ask more of individuals
  (like this proposal) and risk burnout.  Instead, we should be doing our
  best to be more open and inclusive to give the project the best chance to
  grow, as that's the best way to get more done.
 
  I think an increased travel expectation is a raised bar that will hinder
  team growth, not help it.
 
  +1, well said.

 Sorry, I was away for a few days. This is a topic I have a few strong
 opinions on :)

 There is no denial that the meetup format is working well, comparatively
 better than the design summit format. There is also no denial that
 requiring 4 travels per year for a core dev is unreasonable. Where is
 the limit ? Wouldn't we be more productive and aligned if we did one per
 month ? No, the question is how to reach a sufficient level of focus and
 alignment while keeping the number of mandatory travel at 2 per year.

 I don't think our issue comes from not having enough F2F time. Our issue
 is that the design summit no longer reaches its objectives of aligning
 key contributors on a common plan, and we need to fix it.

 We established the design summit as the once-per-cycle opportunity to
 have face-to-face time and get alignment across the main contributors to
 a project. That used to be completely sufficient, but now it doesn't
 work as well... which resulted in alignment and team discussions to be
 discussed at mid-cycle meetups instead. Why ? And what could we change
 to have those alignment discussions at the design summit again ?

 Why are design summits less productive than mid-cycle meetups these days
 ? Is it because there are too many non-contributors in the design summit
 rooms ? Is it the 40-min format ? Is it the distractions (having talks
 to give somewhere else, booths to attend, parties and dinners to be at)
 ? Is it that beginning of cycle is not the best moment ? Once we know
 WHY the design summit fails its main objective, maybe we can fix it.


For my self, the issue with the design summits have been around the
duration and the number of sessions that I would like to attend that are
scheduled at the same time. I have rarely seen the issue be too many
non-contributors in the room, if they don't have anything to add they
usually just listen. The 40 minute format is a bit too restrictive, but if
all design summit tracks dropped the 40-min format it would be even harder
for me to attend sessions across tracks, one of the main benefits of the
design summit IMHO.


 My gut feeling is that having a restricted audience and a smaller group
 lets people get to the bottom of an issue and reach consensus. And that
 you need at least half a day or a full day of open discussion to reach
 such alignment. And that it's not particularly great to get such
 alignment in the middle of the cycle, getting it at the start is still
 the right way to align with the release cycle.

 Nothing prevents us from changing part of the design summit format (even
 the Paris one!), and restrict attendance to some of the sessions. And if
 the main issue is the distraction from the conference colocation, we
 might have to discuss the future of co-location again. In that 2 events
 per year objective, we could make the conference the optional cycle
 thing, and a developer-oriented specific event the mandatory one.

 If we manage to have alignment at the design summit, then it doesn't
 spell the end of the mid-cycle things. But then, ideally the extra
 mid-cycle gatherings should be focused on getting specific stuff done,
 rather than general team alignment. Think workshop/hackathon rather than
 private gathering. The goal of the workshop would be published in
 advance, and people could opt to join that. It would be totally optional.

 Cheers,

 --
 Thierry Carrez (ttx)

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Joe Gordon
On Mon, Aug 18, 2014 at 8:18 AM, Russell Bryant rbry...@redhat.com wrote:

 On 08/18/2014 06:18 AM, Thierry Carrez wrote:
  Doug Hellmann wrote:
  On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com wrote:
  Let me try to say it another way.  You seemed to say that it wasn't
 much
  to ask given the rate at which things happen in OpenStack.  I would
  argue that given the rate, we should not try to ask more of individuals
  (like this proposal) and risk burnout.  Instead, we should be doing our
  best to be more open and inclusive to give the project the best chance
 to
  grow, as that's the best way to get more done.
 
  I think an increased travel expectation is a raised bar that will
 hinder
  team growth, not help it.
 
  +1, well said.
 
  Sorry, I was away for a few days. This is a topic I have a few strong
  opinions on :)
 
  There is no denial that the meetup format is working well, comparatively
  better than the design summit format. There is also no denial that
  requiring 4 travels per year for a core dev is unreasonable. Where is
  the limit ? Wouldn't we be more productive and aligned if we did one per
  month ? No, the question is how to reach a sufficient level of focus and
  alignment while keeping the number of mandatory travel at 2 per year.
 
  I don't think our issue comes from not having enough F2F time. Our issue
  is that the design summit no longer reaches its objectives of aligning
  key contributors on a common plan, and we need to fix it.
 
  We established the design summit as the once-per-cycle opportunity to
  have face-to-face time and get alignment across the main contributors to
  a project. That used to be completely sufficient, but now it doesn't
  work as well... which resulted in alignment and team discussions to be
  discussed at mid-cycle meetups instead. Why ? And what could we change
  to have those alignment discussions at the design summit again ?
 
  Why are design summits less productive than mid-cycle meetups these days
  ? Is it because there are too many non-contributors in the design summit
  rooms ? Is it the 40-min format ? Is it the distractions (having talks
  to give somewhere else, booths to attend, parties and dinners to be at)
  ? Is it that beginning of cycle is not the best moment ? Once we know
  WHY the design summit fails its main objective, maybe we can fix it.
 
  My gut feeling is that having a restricted audience and a smaller group
  lets people get to the bottom of an issue and reach consensus. And that
  you need at least half a day or a full day of open discussion to reach
  such alignment. And that it's not particularly great to get such
  alignment in the middle of the cycle, getting it at the start is still
  the right way to align with the release cycle.
 
  Nothing prevents us from changing part of the design summit format (even
  the Paris one!), and restrict attendance to some of the sessions. And if
  the main issue is the distraction from the conference colocation, we
  might have to discuss the future of co-location again. In that 2 events
  per year objective, we could make the conference the optional cycle
  thing, and a developer-oriented specific event the mandatory one.
 
  If we manage to have alignment at the design summit, then it doesn't
  spell the end of the mid-cycle things. But then, ideally the extra
  mid-cycle gatherings should be focused on getting specific stuff done,
  rather than general team alignment. Think workshop/hackathon rather than
  private gathering. The goal of the workshop would be published in
  advance, and people could opt to join that. It would be totally optional.

 Great response ... I agree with everything you've said here.  Let's
 figure out how to improve the design summit to better achieve team
 alignment.

 Of the things you mentioned, I think the biggest limit to alignment has
 been the 40 minute format.  There are some topics that need more time.
 It may be that we just need to take more advantage of the ability to
 give a single topic multiple time slots to ensure enough time is
 available.  As Dan discussed, there are some topics that we could stand
 to turn down and distribute information another way that is just as
 effective.

 I would also say that the number of things going on at one time is also
 problematic.  Not only are there several design summit sessions going
 once, but there are conference sessions and customer meetings.  The
 rapid rate of jumping around and context switching is exhausting.  It
 also makes it a bit harder to get critical mass for an extended period
 of time around a topic.  In mid-cycle meetups, there is one track and no
 other things competing for time and attention.

 I don't have a good suggestion for fixing this issue with so many things
 competing for time and attention.  I used to be a big proponent of
 splitting the event out completely, but I don't feel the same way
 anymore.  In theory we could call the conference the 

Re: [openstack-dev] [nova][core] Expectations of core reviewers

2014-08-18 Thread Joe Gordon
On Mon, Aug 18, 2014 at 5:22 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Mon, Aug 18, 2014 at 12:18:16PM +0200, Thierry Carrez wrote:
  Doug Hellmann wrote:
   On Aug 13, 2014, at 4:42 PM, Russell Bryant rbry...@redhat.com
 wrote:
   Let me try to say it another way.  You seemed to say that it wasn't
 much
   to ask given the rate at which things happen in OpenStack.  I would
   argue that given the rate, we should not try to ask more of
 individuals
   (like this proposal) and risk burnout.  Instead, we should be doing
 our
   best to be more open and inclusive to give the project the best chance
 to
   grow, as that's the best way to get more done.
  
   I think an increased travel expectation is a raised bar that will
 hinder
   team growth, not help it.
  
   +1, well said.
 
  Sorry, I was away for a few days. This is a topic I have a few strong
  opinions on :)
 
  There is no denial that the meetup format is working well, comparatively
  better than the design summit format. There is also no denial that
  requiring 4 travels per year for a core dev is unreasonable. Where is
  the limit ? Wouldn't we be more productive and aligned if we did one per
  month ? No, the question is how to reach a sufficient level of focus and
  alignment while keeping the number of mandatory travel at 2 per year.
 
  I don't think our issue comes from not having enough F2F time. Our issue
  is that the design summit no longer reaches its objectives of aligning
  key contributors on a common plan, and we need to fix it.
 
  We established the design summit as the once-per-cycle opportunity to
  have face-to-face time and get alignment across the main contributors to
  a project. That used to be completely sufficient, but now it doesn't
  work as well... which resulted in alignment and team discussions to be
  discussed at mid-cycle meetups instead. Why ? And what could we change
  to have those alignment discussions at the design summit again ?
 
  Why are design summits less productive than mid-cycle meetups these days
  ? Is it because there are too many non-contributors in the design summit
  rooms ? Is it the 40-min format ? Is it the distractions (having talks
  to give somewhere else, booths to attend, parties and dinners to be at)
  ? Is it that beginning of cycle is not the best moment ? Once we know
  WHY the design summit fails its main objective, maybe we can fix it.
 
  My gut feeling is that having a restricted audience and a smaller group
  lets people get to the bottom of an issue and reach consensus. And that
  you need at least half a day or a full day of open discussion to reach
  such alignment. And that it's not particularly great to get such
  alignment in the middle of the cycle, getting it at the start is still
  the right way to align with the release cycle.
 
  Nothing prevents us from changing part of the design summit format (even
  the Paris one!), and restrict attendance to some of the sessions. And if
  the main issue is the distraction from the conference colocation, we
  might have to discuss the future of co-location again. In that 2 events
  per year objective, we could make the conference the optional cycle
  thing, and a developer-oriented specific event the mandatory one.
 
  If we manage to have alignment at the design summit, then it doesn't
  spell the end of the mid-cycle things. But then, ideally the extra
  mid-cycle gatherings should be focused on getting specific stuff done,
  rather than general team alignment. Think workshop/hackathon rather than
  private gathering. The goal of the workshop would be published in
  advance, and people could opt to join that. It would be totally optional.

 This pretty much all aligns with my thoughts on the matter. The key point
 is that the design summit is the right place from a cycle timing POV to
 have the critical f2f discussions  debates, and we need to figure out
 what we can do to make it a more effective venue than it currently is.

 IME I'd probably say the design summit sessions I've been to fall into
 two broad camps.

  - Information dissemination - just talk through proposal(s) to let
everyone know what's being planned / thought. Some questions and
debate, but mostly a one-way presentation.

  - Technical debates - the topic is just a high level hook, around
which, a lively argument  debate was planned  took place.

 I think that the number of the information dissemination sessions could
 be cut back on by encouraging people to take advantage of other equally
 as effective methods of communication. In many cases it would suffice to
 just have a more extensive blueprint / spec created, or a detailed wiki
 page or similar doc to outline the problem space. If we had some regular
 slot where people could do online presentations (Technical talks) that
 could be a good way to push the information, out of band from the main
 summits. If those online talks led to significant questions, then those
 questions 

Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Brandon Logan
Hi German,
I don't think it is a requirement that those two frontend sections (or
listen sections) have to live in the same config.  I thought if they
were listening on the same IP but different ports it could be in two
different haproxy instances.  I could be wrong though.

Thanks,
Brandon

On Mon, 2014-08-18 at 17:21 +, Eichberger, German wrote:
 Hi,
 
 My 2 cents for the multiple listeners per load balancer discussion: We have 
 customers who like to have a listener on port 80 and one on port 443 on the 
 same VIP (we had to patch libra to allow two listeners in one single 
 haproxy) - so having that would be great.
 
 I like the proposed status :-)
 
 Thanks,
 German
 
 -Original Message-
 From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
 Sent: Sunday, August 17, 2014 8:57 PM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure
 
 Oh hello again!
 
 You know the drill!
 
 On Sat, 2014-08-16 at 11:42 -0700, Stephen Balukoff wrote:
  Hi Brandon,
  
  
  Responses in-line:
  
  On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan 
  brandon.lo...@rackspace.com wrote:
  Comments in-line
  
  On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
   Hi folks,
  
  
   I'm OK with going with no shareable child entities
  (Listeners, Pools,
   Members, TLS-related objects, L7-related objects, etc.).
  This will
   simplify a lot of things (like status reporting), and we can
  probably
   safely work under the assumption that any user who has a use
  case in
   which a shared entity is useful is probably also technically
  savvy
   enough to not only be able to manage consistency problems
  themselves,
   but is also likely to want to have that level of control.
  
  
   Also, an haproxy instance should map to a single listener.
  This makes
   management of the configuration template simpler and the
  behavior of a
   single haproxy instance more predictable. Also, when it
  comes to
   configuration updates (as will happen, say, when a new
  member gets
   added to a pool), it's less risky and error prone to restart
  the
   haproxy instance for just the affected listener, and not for
  all
   listeners on the Octavia VM. The only down-sides I see are
  that we
   consume slightly more memory, we don't have the advantage of
  a shared
   SSL session cache (probably doesn't matter for 99.99% of
  sites using
   TLS anyway), and certain types of persistence wouldn't carry
  over
   between different listeners if they're implemented poorly by
  the
   user. :/  (In other words, negligible down-sides to this.)
  
  
  This is fine by me for now, but I think this might be
  something we can
  revisit later after we have the advantage of hindsight.  Maybe
  a
  configurable option.
  
  
  Sounds good, as long as we agree on a path forward. In the mean time, 
  is there anything I'm missing which would be a significant advantage 
  of having multiple Listeners configured in a single haproxy instance?
  (Or rather, where a single haproxy instance maps to a loadbalancer
  object?)
 
 No particular reason as of now.  Just feel like that could be something that 
 could hinder a particular feature or even performance in the future.  It's 
 not rooted in any fact or past experience.
 
   
  I have no problem with this. However, one thing I often do
  think about
  is that it's not really ever going to be load balancing
  anything with
  just a load balancer and listener.  It has to have a pool and
  members as
  well.  So having ACTIVE on the load balancer and listener, and
  still not
  really load balancing anything is a bit odd.  Which is why I'm
  in favor
  of only doing creates by specifying the entire tree in one
  call
  (loadbalancer-listeners-pool-members).  Feel free to
  disagree with me
  on this because I know this not something everyone likes.  I'm
  sure I am
  forgetting something that makes this a hard thing to do.  But
  if this
  were the case, then I think only having the provisioning
  status on the
  load balancer makes sense again.  The reason I am advocating
  for the
  provisioning status on the load balancer is because it still
  simpler,
  and only one place to look to see if everything were
  successful or if
  there was an issue.
  
  
  Actually, there is one case where it makes sense to have an ACTIVE 

[openstack-dev] I've published parallels SDK

2014-08-18 Thread Dmitry Guryanov
Hello!

I've published parallels-sdk:

https://github.com/Parallels/parallels-sdk

-- 
Dmitry Guryanov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] One CI for several OpenStack projects

2014-08-18 Thread Jeremy Stanley
On 2014-08-18 20:40:48 +0300 (+0300), Ivan Kolodyazhny wrote:
[...]
 I'm looking for something like following:
 
 1) run my Third Party CI for all patch-sets in Cinder
 2) run my Third Party CI (Cinder + Ceph backend) for Nova only if it
 changes nova/virt/libvirt/rbd.py module.
 
 Is such an approach acceptable for Third Party CI? If yes, can
 Zuul handle this kind of trigger?

Zuul has the option to run certain jobs only when it sees specific
files mentioned as altered in a Gerrit event. See the files option
documented at http://ci.openstack.org/zuul/zuul.html#jobs
specifically. This does mean you'd need different job names for
those separate cases, but they could effectively perform the same
activities (for example if you're also using jenkins-job-builder you
could use a single job-template to generate configuration for both
and embed a parameter expansion in the template name which is unique
per project).
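
For illustration, a rough Zuul (v2) layout.yaml sketch of that arrangement
(job, project and pipeline names here are placeholders, not anything real):

    jobs:
      # only trigger the Nova flavour of the job when the rbd driver changes
      - name: ^ceph-dsvm-tempest-nova$
        files:
          - '^nova/virt/libvirt/rbd\.py$'

    projects:
      - name: openstack/cinder
        check:
          - ceph-dsvm-tempest-cinder
      - name: openstack/nova
        check:
          - ceph-dsvm-tempest-nova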
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift global cluster replication latency...

2014-08-18 Thread Clay Gerrard
Correct, best-effort.  There is no guarantee or time boxing on cross-region
replication.  The best way to manage cross site replication is by tuning
your replica count to ensure you have primary copies in each region -
eventually.  Possibly evaluate if you need write_affinity at all (you can
always just stream across the WAN on PUT directly into the primary
locations).  Global replication is a great feature, but still ripe for
optimization and tuning:

https://review.openstack.org/#/c/99824/

With storage policies you now also have the ability to have a local policy
and a global policy, giving operators and users even more control over where
they need their objects.  For example you might upload to local policy and
then manage geo-distribution with a COPY request to the global policy.
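
For reference, the write_affinity settings live in the proxy configuration and
look roughly like this (region names and values purely illustrative):

    [app:proxy-server]
    sorting_method = affinity
    read_affinity = r1=100
    # prefer local region r1 on PUT; the remote copies are then
    # populated asynchronously by the replicators
    write_affinity = r1
    write_affinity_node_count = 2 * replicas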

Do you have a specific use case for geo-distributed objects that you could
share or are you just trying to understand the implementation?

-Clay


On Mon, Aug 18, 2014 at 3:32 AM, Shyam Prasad N nspmangal...@gmail.com
wrote:

 Hi,

 Went through the following link:

 https://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/

 I'm trying to simulate the 2-region 3-replica scenario. The document says
 that the 3rd replica will be asynchronously moved to the remote location
 with a 2-region setup.

 What I want to understand is whether the latency of this asynchronous
 copy can be tweaked/monitored? I couldn't find any configuration parameters
 to tweak this. Do we have such an option? Or is it done on a best-effort
 basis?

 Thanks in advance...

 --
 -Shyam

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] I've published parallels SDK

2014-08-18 Thread Dmitry Guryanov
On Monday 18 August 2014 22:45:17 Dmitry Guryanov wrote:
 Hello!
 
 I've published parallels-sdk:
 
 https://github.com/Parallels/parallels-sdk

Sorry, I've sent this mail to the wrong list :(, please, ignore.

-- 
Dmitry Guryanov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [third-party] What tests are required to be run

2014-08-18 Thread Hemanth Ravi
Edgar,

Our CI is running the tests (non-voting), but I don't see it listed on the
review for any patch. Is this due to missing logs? I would like to confirm
this is the issue and will resolve it.

Thanks,
-hemanth


On Mon, Aug 18, 2014 at 8:35 AM, Edgar Magana edgar.mag...@workday.com
wrote:

 Thank you Akihiro.

 I will propose a better organization for this section. Stay tuned!

 Edgar

 On 8/17/14, 10:53 PM, Akihiro Motoki mot...@da.jp.nec.com wrote:

 
 On 2014/08/18 0:12, Kyle Mestery wrote:
  On Fri, Aug 15, 2014 at 5:35 PM, Edgar Magana
 edgar.mag...@workday.com wrote:
  Team,
 
  I did a quick audit on the Neutron CI. Very sad results. Only few
 plugins
  and drivers are running properly and testing all Neutron commits.
  I created a report here:
 
 
 https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plu
 gin
  _and_Drivers
 
  Can you link this and/or move it to this page:
 
  https://wiki.openstack.org/wiki/NeutronPlugins
 
  This is under the NeutronPolicies wiki page which I did at the start
  of Juno. This tracks all policies and procedures for Neutron, and
  there's a Plugins page (which I linked to above) where this should
  land.
 
 I just added the link Neutron_Plugins_and_Drivers#Existing_Plugin
 to NeutronPlugins wiki.
 
 The wiki pages NeutronPlugins and Neutron_Plugins_and_Drivers
 cover the similar contents. According to the history of the page,
 the latter one was created by Mark at Nov 2013 (beginning of Icehouse
 cycle).
 It seems better to merge these two pages to avoid the confusion.
 
 Akihiro
 
 
 
  We will discuss the actions to take on the next Neutron IRC meeting. So
  please, reach me out to clarify what is the status of your CI.
  I had two commits to quickly verify the CI reliability:
 
  https://review.openstack.org/#/c/114393/
 
  https://review.openstack.org/#/c/40296/
 
 
  I would expect all plugins and drivers passing on the first one and
  failing for the second but I got so many surprises.
 
  Neutron code quality and reliability is a top priority, if you ignore
 this
  report that plugin/driver will be candidate to be remove from Neutron
 tree.
 
  Cheers,
 
  Edgar
 
  P.s. I hate to be the inquisitor hereŠ but someone has to do the dirty
 job!
 
  Thanks for sending this out Edgar and doing this analysis! Can you
  please put an agenda item on Monday's meeting to discuss this? I won't
  be at the meeting as I'm on PTO (Mark is running the meeting in my
  absence), but I'd like the team to discuss this and allow all
  third-party people a chance to be there and share their feelings here.
 
  Thanks,
  Kyle
 
 
  On 8/14/14, 8:30 AM, Kyle Mestery mest...@mestery.com wrote:
 
  Folks, I'm not sure if all CI accounts are running sufficient tests.
  Per the requirements wiki page here [1], everyone needs to be running
  more than just Tempest API tests, which I still see most neutron
  third-party CI setups doing. I'd like to ask everyone who operates a
  third-party CI account for Neutron to please look at the link below
  and make sure you are running appropriate tests. If you have
  questions, the weekly third-party meeting [2] is a great place to ask
  questions.
 
  Thanks,
  Kyle
 
  [1] https://wiki.openstack.org/wiki/NeutronThirdPartyTesting
  [2] https://wiki.openstack.org/wiki/Meetings/ThirdParty
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][Cinder] Cinder Core nomination

2014-08-18 Thread Eric Harney
On 08/14/2014 02:55 AM, Boring, Walter wrote:
 Hey guys,
 I wanted to propose a nomination for Cinder core.
 
 Xing Yang.
 She has been active in the cinder community for many releases and has worked 
 on several drivers as well as other features for cinder itself.   She has 
 been doing an awesome job doing reviews and helping folks out in the 
 #openstack-cinder irc channel for a long time.   I think she would be a good 
 addition to the core team.
 

+1 from me.

Thanks Xing!

 
 Walt


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Eric Windisch
On Mon, Aug 18, 2014 at 1:42 PM, Adrian Otto adrian.o...@rackspace.com
wrote:

  If you want to run OpenStack services in Docker, I suggest having a look
 at Dockenstack:

  https://github.com/ewindisch/dockenstack


Note, this is for simplifying and speeding-up the use of devstack. It
provides an environment similar to openstack-infra that can consistently
and reliably run on one's laptop, while bringing a devstack-managed
OpenStack installation online in 5-8 minutes.

Like other devstack-based installs, this is not for running production
OpenStack deployments.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Eric Windisch
On Mon, Aug 18, 2014 at 8:49 AM, Jyoti Ranjan jran...@gmail.com wrote:

 I believe that not everything can go into a Docker container. For e.g.

 1. compute nodes
 2. baremetal provisioning
 3. L3 router etc


Containers are a good solution for all of the above, for some value of
container. There is some terminology overloading here, however.

There are Linux namespaces, capability sets, and cgroups which may not be
appropriate for using around some workloads. These, however, are granular.
For instance, one may run a container without networking namespaces,
allowing the container to directly manipulate host networking. Such a
container would still see nothing outside its own chrooted filesystem, PID
namespace, etc.
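
As a concrete (illustrative) example, Docker's host networking mode drops only
the network namespace while keeping the rest of the isolation:

    # the container sees and can manipulate the host's network interfaces,
    # but still runs in its own root filesystem and PID namespace
    docker run --net=host ubuntu:14.04 cat /proc/net/dev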

Docker in particular offers a number of useful features around filesystem
management, images, etc. These features make it easier to deploy and manage
systems, even if many of the Linux container features are disabled for
one reason or another.

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Minutes from 8/13/2014 meeting

2014-08-18 Thread Trevor Vardeman
Agenda items are numbered, and topics, as discussed, are described beneath in 
list format.

1) Discuss future of Octavia in light of Neutron-incubator project proposal.
a) There are many problems with Neutron-Incubator as currently described
b) The political happenings in Neutron leave our LBaaS patches under review 
unlikely to land in Juno
c) The Incubator proposal doesn't affect Octavia development direction, 
with inclination to distance ourselves from Neutron proper
d) With the Neutron Incubator proposal in current scope, efforts of people 
pushing forward Neutron LBaaS patches should be re-focused into Octavia.

2) Discuss operator networking requirements (carry-over from last week)
a) Both HP and Rackspace seem to agree that as long as Octavia uses 
Neutron-like floating IPs, their networks should be able to work with proposed 
Octavia topologies
b) (Blue Box) also wanted to meet with Rackspace's networking team during 
the operator summit a few weeks from now to thoroughly discuss network concerns

3) Discuss v0.5 component design proposal  
[https://review.openstack.org/#/c/113458/]
a) Notification for back-end node health (aka being offline) isn't required 
for 0.5, but is a must have later
b) Notification of LB health (HA Proxy, etc) is definitely a requirement in 
0.5
c) Still looking for more feedback on the proposal itself

4) Discuss timeline on moving these meetings to IRC.
a) Most members were in favor of keeping the webex meetings for the time being
b) One major point was that other openstack/stackforge projects use video meetings as
their primary meeting format as well


Sorry for the lack of density.  I forgot to have the meeting recorded, but 
I hope I included some major points.  Feel free to respond in line with any 
more information anyone can recall concerning the meeting information.  Thanks!

-Trevor
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Stephen Balukoff
Yes, I'm advocating keeping each listener in a separate haproxy
configuration (and separate running instance). This includes the example I
mentioned: One that listens on port 80 for HTTP requests and redirects
everything to the HTTPS listener on port 443.  (The port 80 listener is a
simple configuration with no pool or members, and it doesn't take much to
have it run on the same host as the port 443 listener.)

I've not explored haproxy's new redirect scheme capabilities in 1.5 yet.
Though I doubt it would have a significant impact on the operational model
where each listener is a separate haproxy configuration and instance.

German: Are you saying that the port 80 listener and port 443 listener
would have the exact same back-end configuration? If so, then what we're
discussing here with no sharing of child entities, would mean that the
customer has to set up and manage these duplicate pools and members. If
that's not acceptable, now is the time to register that opinion, eh!

Stephen


On Mon, Aug 18, 2014 at 11:37 AM, Brandon Logan brandon.lo...@rackspace.com
 wrote:

 Hi German,
 I don't think it is a requirement that those two frontend sections (or
 listen sections) have to live in the same config.  I thought if they
 were listening on the same IP but different ports it could be in two
 different haproxy instances.  I could be wrong though.

 Thanks,
 Brandon

 On Mon, 2014-08-18 at 17:21 +, Eichberger, German wrote:
  Hi,
 
  My 2 cents for the multiple listeners per load balancer discussion: We
 have customers who like to have a listener on port 80 and one on port 443
 on the same VIP (we had to patch libra to allow two listeners in one
 single haproxy) - so having that would be great.
 
  I like the proposed status :-)
 
  Thanks,
  German
 
  -Original Message-
  From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
  Sent: Sunday, August 17, 2014 8:57 PM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure
 
  Oh hello again!
 
  You know the drill!
 
  On Sat, 2014-08-16 at 11:42 -0700, Stephen Balukoff wrote:
   Hi Brandon,
  
  
   Responses in-line:
  
   On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan
   brandon.lo...@rackspace.com wrote:
   Comments in-line
  
   On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
Hi folks,
   
   
I'm OK with going with no shareable child entities
   (Listeners, Pools,
Members, TLS-related objects, L7-related objects, etc.).
   This will
simplify a lot of things (like status reporting), and we can
   probably
safely work under the assumption that any user who has a use
   case in
which a shared entity is useful is probably also technically
   savvy
enough to not only be able to manage consistency problems
   themselves,
but is also likely to want to have that level of control.
   
   
Also, an haproxy instance should map to a single listener.
   This makes
management of the configuration template simpler and the
   behavior of a
single haproxy instance more predictable. Also, when it
   comes to
configuration updates (as will happen, say, when a new
   member gets
added to a pool), it's less risky and error prone to restart
   the
haproxy instance for just the affected listener, and not for
   all
listeners on the Octavia VM. The only down-sides I see are
   that we
consume slightly more memory, we don't have the advantage of
   a shared
SSL session cache (probably doesn't matter for 99.99% of
   sites using
TLS anyway), and certain types of persistence wouldn't carry
   over
between different listeners if they're implemented poorly by
   the
user. :/  (In other words, negligible down-sides to this.)
  
  
   This is fine by me for now, but I think this might be
   something we can
   revisit later after we have the advantage of hindsight.  Maybe
   a
   configurable option.
  
  
   Sounds good, as long as we agree on a path forward. In the mean time,
   is there anything I'm missing which would be a significant advantage
   of having multiple Listeners configured in a single haproxy instance?
   (Or rather, where a single haproxy instance maps to a loadbalancer
   object?)
 
  No particular reason as of now.  Just feel like that could be something
 that could hinder a particular feature or even performance in the future.
 It's not rooted in any fact or past experience.
 
  
   I have no problem with this. However, one thing I often do
   think about
   is that it's not really ever going to be load 

[openstack-dev] oslo.i18n 0.2.0 released

2014-08-18 Thread Doug Hellmann
The Oslo team is pleased to announce release 0.2.0 of oslo.i18n, the library 
for managing translated messages in OpenStack.

This release includes a new test fixture for writing tests for classes that 
need to use both lazily and immediately translated strings.

Please report bugs on the Oslo project page on launchpad: 
https://launchpad.net/oslo

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Picking a Name for the Tempest Library

2014-08-18 Thread Matthew Treinish
On Sat, Aug 16, 2014 at 06:27:19PM +0200, Marc Koderer wrote:
 Hi all,
 
 Am 15.08.2014 um 23:31 schrieb Jay Pipes jaypi...@gmail.com:
  
  I suggest that tempest should be the name of the import'able library, and 
  that the integration tests themselves should be what is pulled out of the 
  current Tempest repository, into their own repo called 
  openstack-integration-tests or os-integration-tests“.
 
 why not keep it simple:
 
 tempest: importable test library
 tempest-tests: all the test cases
 
 Simple, obvious and clear ;)
 

While I agree that I like how this looks, and that it keeps things simple, I
don't think it's too feasible. The problem is the tempest namespace is already
kind of large and established. The libification effort, while reducing some of
that, doesn't eliminate it completely. So what this ends up meaning is that we'll
have to do a rename for a large project in order to split certain functionality
out into a smaller library. Which really doesn't seem like the best way to do
it, because a rename is a considerable effort.

Another wrinkle to consider is that the tempest namespace on pypi is already in
use: https://pypi.python.org/pypi/Tempest so if we wanted to publish the library
as tempest we'd need to figure out what to do about that.

-Matt Treinish


pgpvUChCvCCKH.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] LOG.warning/LOG.warn

2014-08-18 Thread Joe Gordon
On Sun, Aug 17, 2014 at 9:24 AM, Jay Bryant jsbry...@electronicjungle.net
wrote:

 +2

 I prefer the LOG.warning format and support that given the documentation
 you shared.

 If there is agreement I would create a hacking check.


I think a better approach is to just not care which is used here. IMHO
mixing up LOG.warn and LOG.warning doesn't hurt readability in any
meaningful way. And not caring will result in less churn for very little
benefit.
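
(For reference, the kind of local hacking check Jay mentions is only a few
lines. A rough sketch, with a made-up check code and the registration with
hacking's local-check machinery omitted:

    import re

    log_warn_re = re.compile(r"\bLOG\.warn\(")

    def check_no_log_warn(logical_line):
        """Nxxx - use LOG.warning() instead of the deprecated LOG.warn()."""
        if log_warn_re.search(logical_line):
            yield 0, "Nxxx: use LOG.warning() instead of LOG.warn()"

Whether that is worth the churn is exactly the question above.)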


  Jay
 On Aug 17, 2014 1:28 AM, Gary Kotton gkot...@vmware.com wrote:

  Hi,
 Over the last few weeks I have seen a number of patches where LOG.warn is
 replacing LOG.warning. I think that if we do something it should be the
 opposite as warning is the documented one in python 2 and 3
 https://docs.python.org/3/howto/logging.html.
 Any thoughts?
 Thanks
 Gary

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Issues with POSIX semaphores and other locks in lockutils

2014-08-18 Thread Pádraig Brady
On 08/18/2014 03:38 PM, Julien Danjou wrote:
 On Thu, Aug 14 2014, Yuriy Taraday wrote:
 
 Hi Yuriy,
 
 […]
 
 Looking forward to your opinions.
 
 This looks like a good summary of the situation.
 
 I've added a solution E based on pthread, but didn't get very far about
 it for now.

In my experience I would just go with the fcntl locks.
They're auto unlocked and well supported, and importantly,
supported for distributed processes.

I'm not sure how problematic the lock_path config is TBH.
That is adjusted automatically in certain cases where needed anyway.
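
For reference, the fcntl-based file lock is only a few lines. A minimal sketch
(not the actual lockutils code):

    import fcntl
    import os

    def lock_file(path):
        # the kernel releases the lock automatically if the process dies,
        # so there is no stale-lock cleanup to worry about
        fd = os.open(path, os.O_CREAT | os.O_RDWR)
        fcntl.lockf(fd, fcntl.LOCK_EX)   # blocks until the lock is free
        return fd

    def unlock_file(fd):
        fcntl.lockf(fd, fcntl.LOCK_UN)
        os.close(fd)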

Pádraig.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Minutes from 8/13/2014 meeting

2014-08-18 Thread Salvatore Orlando
Hi Trevor,

thanks for sharing this minutes!
I would like to contribute a bit to this project's development, possibly
without ending up being just deadweight.

To this aim I have some comments inline.

Salvatore


On 18 August 2014 22:25, Trevor Vardeman trevor.varde...@rackspace.com
wrote:

  Agenda items are numbered, and topics, as discussed, are described
 beneath in list format.

  1) Discuss future of Octavia in light of Neutron-incubator project
 proposal.
 a) There are many problems with Neutron-Incubator as currently
 described


 Have you listed your concerns somewhere? AFAICT the incubator definition
is in progress and feedback is very valuable at this stage.

b) The political happenings in Neutron leave our LBaaS patches under
 review unlikely to land in Juno


I am a bit disappointed if you feel like  you have been victims of
political discussions.
The truth in my opinion is much simpler, and is that most of the neutron
core team has prioritised features for achieving parity with nova-network
or increasing neutron scalability and reliability.
In my opinion the incubator proposal will improve this situation by making
the lbaas team a lot less dependent on the neutron core team.
Considering the level of attention the load balancing team has received I
would not be surprised if neutron cores are topping your most-hated list!


  c) The Incubator proposal doesn't affect Octavia development
 direction, with inclination to distance ourselves from Neutron proper


It depends on what do you mean by proper here. If you're into a neutron
incubator, your ultimate path ideally should be integration with neutron.
Instead, if you're planning on total independence, then it might be the case of
considering the typical paths new projects follow. I'm not an expert here,
but I think that usually starts from stackforge.


 d) With the Neutron Incubator proposal in current scope, efforts of
 people pushing forward Neutron LBaaS patches should be re-focused into
 Octavia.


Which probably sounds a reasonable thing to do (and a lot less effort for
you as well)


  2) Discuss operator networking requirements (carry-over from last week)
 a) Both HP and Rackspace seem to agree that as long as Octavia uses
 Neutron-like floating IPs, their networks should be able to work with
 proposed Octavia topologies
 b) (Blue Box) also wanted to meet with Rackspace's networking team
 during the operator summit a few weeks from now to thoroughly discuss
 network concerns

  3) Discuss v0.5 component design proposal  [
 https://review.openstack.org/#/c/113458/]
 a) Notification for back-end node health (aka being offline) isn't
 required for 0.5, but is a must have later
 b) Notification of LB health (HA Proxy, etc) is definitely a
 requirement in 0.5
 c) Still looking for more feedback on the proposal itself


I'll try and find some time to review it.


  4) Discuss timeline on moving these meetings to IRC.
 a) Most members in favor of keeping the webex meetings for the time
 being
 b) One major point was other openstack/stackforge use video meetings
 as their primary source as well


This is one of the reasons for which I don't attend load balancing meetings.
I find IRC much simpler and more effective - and it is also fairer to people for
whom English is not their first language.
Also, perusing IRC logs is much easier than watching/listening to webex
recordings.
Moreover, you'd get minutes for free - and you can control the density you
want them to have during the meeting!



  Sorry for the lack of density.  I forgot to have the meeting
 recorded, but I hope I included some major points.  Feel free to respond in
 line with any more information anyone can recall concerning the meeting
 information.  Thanks!

  -Trevor

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Eichberger, German
Hi Steven,

In my example we don’t share anything except the VIP ☺ So my motivation is to
allow two listeners to share the same VIP. Hope that makes sense.

German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Monday, August 18, 2014 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure

Yes, I'm advocating keeping each listener in a separate haproxy configuration 
(and separate running instance). This includes the example I mentioned: One 
that listens on port 80 for HTTP requests and redirects everything to the HTTPS 
listener on port 443.  (The port 80 listener is a simple configuration with no 
pool or members, and it doesn't take much to have it run on the same host as 
the port 443 listener.)
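
For illustration, a rough sketch of what the two per-listener haproxy
configurations might look like (the IP address, certificate path, and
pool/member names are placeholders of mine, not anything agreed in this
thread):

  # listener-80.cfg -- redirect-only listener, no pool or members
  frontend listener_80
      bind 203.0.113.10:80
      mode http
      redirect scheme https code 301   # the 1.5-style redirect mentioned below

  # listener-443.cfg -- run as its own haproxy process on the same host
  frontend listener_443
      bind 203.0.113.10:443 ssl crt /etc/haproxy/site.pem
      mode http
      default_backend pool_web

  backend pool_web
      server member1 10.0.0.11:80 check
      server member2 10.0.0.12:80 check

Each file would be handed to its own haproxy process, so reloading or
restarting one listener never touches the other.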

I've not explored haproxy's new redirect scheme capabilities in 1.5 yet. Though 
I doubt it would have a significant impact on the operational model where each 
listener is a separate haproxy configuration and instance.

German: Are you saying that the port 80 listener and port 443 listener would 
have the exact same back-end configuration? If so, then what we're discussing 
here with no sharing of child entities, would mean that the customer has to set 
up and manage these duplicate pools and members. If that's not acceptable, now 
is the time to register that opinion, eh!

Stephen

On Mon, Aug 18, 2014 at 11:37 AM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
Hi German,
I don't think it is a requirement that those two frontend sections (or
listen sections) have to live in the same config.  I thought if they
were listening on the same IP but different ports it could be in two
different haproxy instances.  I could be wrong though.

Thanks,
Brandon

On Mon, 2014-08-18 at 17:21 +, Eichberger, German wrote:
 Hi,

 My 2 cents for the multiple listeners per load balancer discussion: We have 
 customers who like to have a listener on port 80 and one on port 443 on the 
 same VIP (we had to patch libra to allow two listeners in one single 
 haproxy) - so having that would be great.

 I like the proposed status :-)

 Thanks,
 German

 -Original Message-
 From: Brandon Logan 
 [mailto:brandon.lo...@rackspace.com]
 Sent: Sunday, August 17, 2014 8:57 PM
 To: 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure

 Oh hello again!

 You know the drill!

 On Sat, 2014-08-16 at 11:42 -0700, Stephen Balukoff wrote:
  Hi Brandon,
 
 
  Responses in-line:
 
  On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan
  brandon.lo...@rackspace.com wrote:
  Comments in-line
 
  On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
   Hi folks,
  
  
   I'm OK with going with no shareable child entities
  (Listeners, Pools,
   Members, TLS-related objects, L7-related objects, etc.).
  This will
   simplify a lot of things (like status reporting), and we can
  probably
   safely work under the assumption that any user who has a use
  case in
   which a shared entity is useful is probably also technically
  savvy
   enough to not only be able to manage consistency problems
  themselves,
   but is also likely to want to have that level of control.
  
  
   Also, an haproxy instance should map to a single listener.
  This makes
   management of the configuration template simpler and the
  behavior of a
   single haproxy instance more predictable. Also, when it
  comes to
   configuration updates (as will happen, say, when a new
  member gets
   added to a pool), it's less risky and error prone to restart
  the
   haproxy instance for just the affected listener, and not for
  all
   listeners on the Octavia VM. The only down-sides I see are
  that we
   consume slightly more memory, we don't have the advantage of
  a shared
   SSL session cache (probably doesn't matter for 99.99% of
  sites using
   TLS anyway), and certain types of persistence wouldn't carry
  over
   between different listeners if they're implemented poorly by
  the
   user. :/  (In other words, negligible down-sides to this.)
 
 
  This is fine by me for now, but I think this might be
  something we can
  revisit later after we have the advantage of hindsight.  Maybe
  a
  configurable option.
 
 
  Sounds good, as long as we agree on a path forward. In the mean time,
  is there anything I'm missing which would be a significant advantage
  of having multiple Listeners configured in a 

Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Stephen Balukoff
German--

By 'VIP' do you mean something roughly equivalent to 'loadbalancer' in the
Neutron LBaaS object model (as we've discussed in the past)?  That is to
say, is this thingy a parent object to the Listener in the hierarchy? If
so, then what we're describing definitely accommodates that.

(And yes, we commonly see deployments with listeners on port 80 and port
443 on the same virtual IP address.)

Stephen


On Mon, Aug 18, 2014 at 2:16 PM, Eichberger, German 
german.eichber...@hp.com wrote:

  Hi Steven,



 In my example we don’t share anything except the VIP ☺ So my motivation
 is to see if we can have two listeners share the same VIP. Hope that makes sense.



 German



 *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
 *Sent:* Monday, August 18, 2014 1:39 PM
 *To:* OpenStack Development Mailing List (not for usage questions)

 *Subject:* Re: [openstack-dev] [Octavia] Object Model and DB Structure



 Yes, I'm advocating keeping each listener in a separate haproxy
 configuration (and separate running instance). This includes the example I
 mentioned: One that listens on port 80 for HTTP requests and redirects
 everything to the HTTPS listener on port 443.  (The port 80 listener is a
 simple configuration with no pool or members, and it doesn't take much to
 have it run on the same host as the port 443 listener.)



 I've not explored haproxy's new redirect scheme capabilities in 1.5 yet.
 Though I doubt it would have a significant impact on the operational model
 where each listener is a separate haproxy configuration and instance.



 German: Are you saying that the port 80 listener and port 443 listener
 would have the exact same back-end configuration? If so, then what we're
 discussing here with no sharing of child entities, would mean that the
 customer has to set up and manage these duplicate pools and members. If
 that's not acceptable, now is the time to register that opinion, eh!



 Stephen



 On Mon, Aug 18, 2014 at 11:37 AM, Brandon Logan 
 brandon.lo...@rackspace.com wrote:

 Hi German,
 I don't think it is a requirement that those two frontend sections (or
 listen sections) have to live in the same config.  I thought if they
 were listening on the same IP but different ports it could be in two
 different haproxy instances.  I could be wrong though.

 Thanks,
 Brandon


 On Mon, 2014-08-18 at 17:21 +, Eichberger, German wrote:
  Hi,
 
  My 2 cents for the multiple listeners per load balancer discussion: We
 have customers who like to have a listener on port 80 and one on port 443
 on the same VIP (we had to patch libra to allow two listeners in one
 single haproxy) - so having that would be great.
 
  I like the proposed status :-)
 
  Thanks,
  German
 
  -Original Message-
  From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
  Sent: Sunday, August 17, 2014 8:57 PM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure
 
  Oh hello again!
 
  You know the drill!
 
  On Sat, 2014-08-16 at 11:42 -0700, Stephen Balukoff wrote:
   Hi Brandon,
  
  
   Responses in-line:
  
   On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan
   brandon.lo...@rackspace.com wrote:
   Comments in-line
  
   On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
Hi folks,
   
   
I'm OK with going with no shareable child entities
   (Listeners, Pools,
Members, TLS-related objects, L7-related objects, etc.).
   This will
simplify a lot of things (like status reporting), and we can
   probably
safely work under the assumption that any user who has a use
   case in
which a shared entity is useful is probably also technically
   savvy
enough to not only be able to manage consistency problems
   themselves,
but is also likely to want to have that level of control.
   
   
Also, an haproxy instance should map to a single listener.
   This makes
management of the configuration template simpler and the
   behavior of a
single haproxy instance more predictable. Also, when it
   comes to
configuration updates (as will happen, say, when a new
   member gets
added to a pool), it's less risky and error prone to restart
   the
haproxy instance for just the affected listener, and not for
   all
listeners on the Octavia VM. The only down-sides I see are
   that we
consume slightly more memory, we don't have the advantage of
   a shared
SSL session cache (probably doesn't matter for 99.99% of
   sites using
TLS anyway), and certain types of persistence wouldn't carry
   over
between different listeners if they're implemented poorly by
   the
  

Re: [openstack-dev] [Octavia] Minutes from 8/13/2014 meeting

2014-08-18 Thread Brandon Logan
Hi Salvatore,
It'd be great to get your contributions on this! Even if you only bring
your knowledge and experience with Neutron to the table, that'd be very
beneficial.  Looking forward to it.

Comments in-line

On Mon, 2014-08-18 at 23:06 +0200, Salvatore Orlando wrote:
 Hi Trevor,
 
 
 thanks for sharing these minutes!
 I would like to contribute a bit to this project's development,
 possibly without ending up being just deadweight.
 
 
 To this aim I have some comments inline.
 
 
 Salvatore
 
 
 On 18 August 2014 22:25, Trevor Vardeman
 trevor.varde...@rackspace.com wrote:
 Agenda items are numbered, and topics, as discussed, are
 described beneath in list format.
 
 
 1) Discuss future of Octavia in light of Neutron-incubator
 project proposal.
 a) There are many problems with Neutron-Incubator as
 currently described
 
 
  Have you listed your concerns somewhere? AFAICT the incubator
 definition is in progress and feedback is very valuable at this stage.

We have, many times.  In etherpad, to Kyle and Mark in IRC meetings.
Not sure how it will shape up in the end yet though.

 
 
 b) The political happenings in Neutron leave our LBaaS
 patches under review unlikely to land in Juno
 
 
 I am a bit disappointed if you feel like  you have been victims of
 political discussions.
 The truth in my opinion is much simpler, and is that most of the
 neutron core team has prioritised features for achieving parity with
 nova-network or increasing neutron scalability and reliability.
 In my opinion the incubator proposal will improve this situation by
 making the lbaas team a lot less dependent on the neutron core team.
 Considering the level of attention the load balancing team has
 received I would not be surprised if neutron cores are topping your
 most-hated list!

I think the main issue everyone has had was that there was an
expectation that if we got our code in and went through many iterations
of reviews then we'd get into Juno.  We did everything asked of us and
then the incubator came in very late.  However, I (and I'm pretty sure
most others) absolutely agree that the incubator should exist, and that
the lbaas v2 belongs in there (so does v1).  It was just the timing of
it based on the expectations that were set forth at the summit.

Neutron cores are not topping our most-hated list at all.  I think we
all see the cores have a ton of reviews and work to do.  I think it's
more a symptom of two issues: 1) Neutron's scope is too large; 2) if
Neutron's scope wants to be that large, then adding more core reviewers
seems to be the logical solution.

Now the incubator can definitely solve for #1 because it has a path for
spinning out.  However, the fear is that the incubator will become an
afterthought for neutron cores.  I hope this is not the case, and I
have high hopes for it, but it's still a fear even from the most
optimistic of people.

  
 c) The Incubator proposal doesn't affect Octavia
 development direction, with inclination to distance ourselves
 from Neutron proper
 
 
 It depends on what you mean by proper here. If you're into a
 neutron incubator, your ultimate path ideally should be integration
 with neutron.
 Instead, if you're planning on total independence, then it might be the
 case of considering the typical paths new projects follow. I'm not an
 expert here, but I think that usually starts from stackforge.

The incubator does have a spin out path and I believe that may be the
way forward on this.  I don't think spinning out is something we
should focus on too much right now until Octavia is in a good state.
Either way, I'm sure we have the blessing of Kyle and Mark to spin out
at some point and become a project under the Networking Program.  This
could be accomplished in a number of ways.

  
 d) With the Neutron Incubator proposal in current scope,
 efforts of people pushing forward Neutron LBaaS patches should
 be re-focused into Octavia.
 
 
 Which probably sounds like a reasonable thing to do (and a lot less effort
 for you as well) 
 
 
 2) Discuss operator networking requirements (carry-over from
 last week)
 a) Both HP and Rackspace seem to agree that as long as
 Octavia uses Neutron-like floating IPs, their networks should
 be able to work with proposed Octavia topologies
 b) (Blue Box) also wanted to meet with Rackspace's
 networking team during the operator summit a few weeks from
 now to thoroughly discuss network concerns
 
 
 3) Discuss v0.5 component design proposal
  [https://review.openstack.org/#/c/113458/]
 a) Notification for back-end node health (aka being
 offline) isn't required for 0.5, but is a must have later
 b) Notification of LB health (HA Proxy, etc) is definitely

Re: [openstack-dev] [Octavia] Object Model and DB Structure

2014-08-18 Thread Eichberger, German
No, by VIP I mean the original meaning, more akin to a Floating IP…

German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Monday, August 18, 2014 2:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure

German--

By 'VIP' do you mean something roughly equivalent to 'loadbalancer' in the 
Neutron LBaaS object model (as we've discussed in the past)?  That is to say, 
is this thingy a parent object to the Listener in the hierarchy? If so, then 
what we're describing definitely accommodates that.

(And yes, we commonly see deployments with listeners on port 80 and port 443 on 
the same virtual IP address.)

Stephen

On Mon, Aug 18, 2014 at 2:16 PM, Eichberger, German 
german.eichber...@hp.com wrote:
Hi Steven,

In my example we don’t share anything except the VIP ☺ So my motivation is to see if 
we can have two listeners share the same VIP. Hope that makes sense.

German

From: Stephen Balukoff 
[mailto:sbaluk...@bluebox.net]
Sent: Monday, August 18, 2014 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure

Yes, I'm advocating keeping each listener in a separate haproxy configuration 
(and separate running instance). This includes the example I mentioned: One 
that listens on port 80 for HTTP requests and redirects everything to the HTTPS 
listener on port 443.  (The port 80 listener is a simple configuration with no 
pool or members, and it doesn't take much to have it run on the same host as 
the port 443 listener.)

I've not explored haproxy's new redirect scheme capabilities in 1.5 yet. Though 
I doubt it would have a significant impact on the operational model where each 
listener is a separate haproxy configuration and instance.

German: Are you saying that the port 80 listener and port 443 listener would 
have the exact same back-end configuration? If so, then what we're discussing 
here with no sharing of child entities, would mean that the customer has to set 
up and manage these duplicate pools and members. If that's not acceptable, now 
is the time to register that opinion, eh!

Stephen

On Mon, Aug 18, 2014 at 11:37 AM, Brandon Logan 
brandon.lo...@rackspace.com wrote:
Hi German,
I don't think it is a requirement that those two frontend sections (or
listen sections) have to live in the same config.  I thought if they
were listening on the same IP but different ports it could be in two
different haproxy instances.  I could be wrong though.

Thanks,
Brandon

On Mon, 2014-08-18 at 17:21 +, Eichberger, German wrote:
 Hi,

 My 2 cents for the multiple listeners per load balancer discussion: We have 
 customers who like to have a listener on port 80 and one on port 443 on the 
 same VIP (we had to patch libra to allow two listeners in one single 
 haproxy) - so having that would be great.

 I like the proposed status :-)

 Thanks,
 German

 -Original Message-
 From: Brandon Logan 
 [mailto:brandon.lo...@rackspace.com]
 Sent: Sunday, August 17, 2014 8:57 PM
 To: 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Octavia] Object Model and DB Structure

 Oh hello again!

 You know the drill!

 On Sat, 2014-08-16 at 11:42 -0700, Stephen Balukoff wrote:
  Hi Brandon,
 
 
  Responses in-line:
 
  On Fri, Aug 15, 2014 at 9:43 PM, Brandon Logan
  brandon.lo...@rackspace.com wrote:
  Comments in-line
 
  On Fri, 2014-08-15 at 17:18 -0700, Stephen Balukoff wrote:
   Hi folks,
  
  
   I'm OK with going with no shareable child entities
  (Listeners, Pools,
   Members, TLS-related objects, L7-related objects, etc.).
  This will
   simplify a lot of things (like status reporting), and we can
  probably
   safely work under the assumption that any user who has a use
  case in
   which a shared entity is useful is probably also technically
  savvy
   enough to not only be able to manage consistency problems
  themselves,
   but is also likely to want to have that level of control.
  
  
   Also, an haproxy instance should map to a single listener.
  This makes
   management of the configuration template simpler and the
  behavior of a
   single haproxy instance more predictable. Also, when it
  comes to
   configuration updates (as will happen, say, when a new
  member gets
   added to a pool), it's less risky and error prone to restart
  the
   haproxy instance for just the affected listener, and not for
  all
   listeners on the Octavia VM. The only 

Re: [openstack-dev] [Octavia] Minutes from 8/13/2014 meeting

2014-08-18 Thread Doug Wiegley
 a) Most members in favor of keeping the webex meetings for the time being

Correction: most of the people that like to talk over each other in a
large voice conference voiced their approval of voice.  Those that prefer
to wait for pauses to speak were unsurprisingly silent, or tried and
failed to break into the cries of love for webex.

I also prefer IRC, with voice used tactically where it's useful.

Doug



On 8/18/14, 3:06 PM, Salvatore Orlando sorla...@nicira.com wrote:

Hi Trevor,


thanks for sharing these minutes!
I would like to contribute a bit to this project's development, possibly
without ending up being just deadweight.


To this aim I have some comments inline.


Salvatore


On 18 August 2014 22:25, Trevor Vardeman
trevor.varde...@rackspace.com wrote:

Agenda items are numbered, and topics, as discussed, are described
beneath in list format.


1) Discuss future of Octavia in light of Neutron-incubator project
proposal.
a) There are many problems with Neutron-Incubator as currently
described






 Have you listed your concerns somewhere? AFAICT the incubator definition
is in progress and feedback is very valuable at this stage.



b) The political happenings in Neutron leave our LBaaS patches under
review unlikely to land in Juno






I am a bit disappointed if you feel like  you have been victims of
political discussions.
The truth in my opinion is much simpler, and is that most of the neutron
core team has prioritised features for achieving parity with nova-network
or increasing neutron scalability and reliability.
In my opinion the incubator proposal will improve this situation by
making the lbaas team a lot less dependent on the neutron core team.
Considering the level of attention the load balancing team has received I
would not be surprised if neutron cores are topping your most-hated list!
 

c) The Incubator proposal doesn't affect Octavia development
direction, with inclination to distance ourselves from Neutron proper






It depends on what you mean by proper here. If you're into a neutron
incubator, your ultimate path ideally should be integration with neutron.
Instead, if you're planning on total independence, then it might be the case
of considering the typical paths new projects follow. I'm not an expert
here, but I think that usually starts from stackforge.
 

d) With the Neutron Incubator proposal in current scope, efforts of
people pushing forward Neutron LBaaS patches should be re-focused into
Octavia.






Which probably sounds like a reasonable thing to do (and a lot less effort for
you as well) 



2) Discuss operator networking requirements (carry-over from last week)
a) Both HP and Rackspace seem to agree that as long as Octavia uses
Neutron-like floating IPs, their networks should be able to work with
proposed Octavia topologies
b) (Blue Box) also wanted to meet with Rackspace's networking team
during the operator summit a few weeks from now to thoroughly discuss
network concerns


3) Discuss v0.5 component design proposal
[https://review.openstack.org/#/c/113458/]
a) Notification for back-end node health (aka being offline) isn't
required for 0.5, but is a must have later
b) Notification of LB health (HA Proxy, etc) is definitely a
requirement in 0.5
c) Still looking for more feedback on the proposal itself






I'll try and find some time to review it.



4) Discuss timeline on moving these meetings to IRC.
a) Most members in favor of keeping the webex meetings for the time
being
b) One major point was other openstack/stackforge use video meetings
as their primary source as well






This is one of the reasons why I don't attend load balancing
meetings.
I find IRC much simpler and more effective - and it is also fairer to people for
whom English is not their first language.
Also, perusing IRC logs is much easier than watching or listening to webex
recordings.
Moreover, you'd get minutes for free - and you can control the density
you want them to have during the meeting!







Sorry for the lack of density.  I forgot to have the meeting
recorded, but I hope I included some major points.  Feel free to respond
in line with any more information anyone can recall concerning the
meeting information.  Thanks!


-Trevor



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev










___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L3] HA Router Review Help

2014-08-18 Thread Carl Baldwin
Hi all,

This is intended for those readers interested in reviewing and soon
merging the HA routers implementation for Juno.  Assaf Muller has
written a blog [1] about this new feature which serves as a good
overview.  It will be useful for reviewers to get up to speed and I
recommend reading it before getting started.  He and I have
collaborated to create sort of a map [2] which lists the relevant
reviews.  The map groups the patches by project and area of focus.
Under each heading, it shows the order in which the patches should be
reviewed.

I hope that this information will help to ease that overwhelming
feeling you might have when faced with the list of patches under this
topic.

Carl

[1] http://assafmuller.wordpress.com/2014/08/16/layer-3-high-availability/
[2] 
https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam#Blueprint:_l3-high-availability_.28safchain.2C_amuller.29

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Minutes from 8/13/2014 meeting

2014-08-18 Thread Doug Wiegley
I agree almost completely with Brandon's comments on the incubator.

For Octavia, I think we need to not stress neutron vs incubator vs
spin-out, and just focus on writing some load-balancing code.  We've spent
far too much time in Juno working on processes, glue, and APIs, and
precious little on moving packets around or adding LB features.  And if we
re-focus now on processes, glue, and APIs, I think we'll be missing the
mark.

I certainly don't hate the neutron cores; quite the opposite.  But what I
absolutely *NEED* is a fairly predictable process for where/how code will
land, and what sorts of resources it will take to get there. It is very
difficult right now to plan/evangelize company resources, when the rules
are almost constantly in flux. Or appear to be so, from an outside
perspective.

Review cycles are a huge issue.  Of course, adding a bunch of reviewers is
not a recipe for an increase in stability.

I'm not going to focus overly much on the incubator process needing to be
perfect.  It lives or dies with people, their judgement, and their good
faith effort to see all of this stuff succeed.  If others are not bringing
good faith to the table, then it's not something that any process is going
to fix.

Thanks,
doug

On 8/18/14, 3:44 PM, Brandon Logan brandon.lo...@rackspace.com wrote:

Hi Salvatore,
It'd be great to get your contributions on this! Even if you only bring
your knowledge and experience with Neutron to the table, that'd be very
beneficial.  Looking forward to it.

Comments in-line

On Mon, 2014-08-18 at 23:06 +0200, Salvatore Orlando wrote:
 Hi Trevor,
 
 
 thanks for sharing these minutes!
 I would like to contribute a bit to this project's development,
 possibly without ending up being just deadweight.
 
 
 To this aim I have some comments inline.
 
 
 Salvatore
 
 
 On 18 August 2014 22:25, Trevor Vardeman
 trevor.varde...@rackspace.com wrote:
 Agenda items are numbered, and topics, as discussed, are
 described beneath in list format.
 
 
 1) Discuss future of Octavia in light of Neutron-incubator
 project proposal.
 a) There are many problems with Neutron-Incubator as
 currently described
 
 
  Have you listed your concerns somewhere? AFAICT the incubator
 definition is in progress and feedback is very valuable at this stage.

We have, many times.  In etherpad, to Kyle and Mark in IRC meetings.
Not sure how it will shape up in the end yet though.

 
 
 b) The political happenings in Neutron leave our LBaaS
 patches under review unlikely to land in Juno
 
 
 I am a bit disappointed if you feel like  you have been victims of
 political discussions.
 The truth in my opinion is much simpler, and is that most of the
 neutron core team has prioritised features for achieving parity with
 nova-network or increasing neutron scalability and reliability.
 In my opinion the incubator proposal will improve this situation by
 making the lbaas team a lot less dependent on the neutron core team.
 Considering the level of attention the load balancing team has
 received I would not be surprised if neutron cores are topping your
 most-hated list!

I think the main issue everyone has had was that there was an
expectation that if we got our code in and went through many iterations
of reviews then we'd get into Juno.  We did everything asked of us and
then the incubator came in very late.  However, I (and I'm pretty sure
most others) absolutely agree that the incubator should exist, and that
the lbaas v2 belongs in there (so does v1).  It was just the timing of
it based on the expectations that were set forth at the summit.

Neutron cores are not topping our most-hated list at all.  I think we
all see the cores have a ton of reviews and work to do.  I think it's
more a symptom of two issues: 1) Neutron's scope is too large; 2) if
Neutron's scope wants to be that large, then adding more core reviewers
seems to be the logical solution.

Now the incubator can definitely solve for #1 because it has a path for
spinning out.  However, the fear is that the incubator will become an
afterthought for neutron cores.  I hope this is not the case, and I
have high hopes for it, but it's still a fear even from the most
optimistic of people.

  
 c) The Incubator proposal doesn't affect Octavia
 development direction, with inclination to distance ourselves
 from Neutron proper
 
 
 It depends on what you mean by proper here. If you're into a
 neutron incubator, your ultimate path ideally should be integration
 with neutron.
 Instead, if you're planning on total independence, then it might be the
 case of considering the typical paths new projects follow. I'm not an
 expert here, but I think that usually starts from stackforge.

The incubator does have a spin out path and I believe that may be the
way forward on this.  I don't think spinning out should be something we
should focus on too much right now 

Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Jay Lau
2014-08-19 4:11 GMT+08:00 Eric Windisch ewindi...@docker.com:




 On Mon, Aug 18, 2014 at 8:49 AM, Jyoti Ranjan jran...@gmail.com wrote:

 I believe that not everything can go into a docker container. For example:

 1. compute nodes
 2. baremetal provisioning
 3. L3 router etc


 Containers are a good solution for all of the above, for some value of
 container. There is some terminology overloading here, however.


Hi Eric, one more question: I don't quite understand what you mean by
Containers are a good solution for all of the above. Do you mean a docker
container can manage all three of the above? How? Can you please show more
details? Thanks!


 There are Linux namespaces, capability sets, and cgroups which may not be
 appropriate for use with some workloads. These, however, are granular.
 For instance, one may run a container without networking namespaces,
 allowing the container to directly manipulate host networking. Such a
 container would still see nothing outside its own chrooted filesystem, PID
 namespace, etc.

 Docker in particular offers a number of useful features around filesystem
 management, images, etc. These features make it easier to deploy and manage
 systems, even if many of the Linux containers features are disabled for
 one reason or another.

 --
 Regards,
 Eric Windisch

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Docker] Run OpenStack Service in Docker Container

2014-08-18 Thread Eric Windisch


 On Mon, Aug 18, 2014 at 8:49 AM, Jyoti Ranjan jran...@gmail.com wrote:

 I believe that not everything can go into a docker container. For example:

 1. compute nodes
 2. baremetal provisioning
 3. L3 router etc


 Containers are a good solution for all of the above, for some value of
 container. There is some terminology overloading here, however.


 Hi Eric, one more question: I don't quite understand what you mean by
 Containers are a good solution for all of the above. Do you mean a docker
 container can manage all three of the above? How? Can you please show more
 details? Thanks!


I'm not sure this is the right forum for a nuanced explanation of every
use-case and every available option, but I can give some examples. Keep in
mind, again, that even in the absence of the security constraints offered by
Docker, Docker provides imaging facilities and server management
solutions that are highly useful. For instance, there are use-cases of
Docker that might leverage it simply for attestation or runtime artifact
management.

First, in the case of an L3 router or baremetal provisioning
where host networking is required, one might specify 'docker run -net
host' to allow the process(es) running inside of the container to operate
as if running on the host, but only as it pertains to networking.
Essentially, it would uncontain the networking aspect of the process(es).
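
As a concrete (if hypothetical) example -- the image name below is made up,
only the flags are real Docker options:

  docker run -d --net=host --name neutron-l3-agent example/neutron-l3-agent

The container keeps its own filesystem and PID namespace, but manipulates the
host's interfaces and routing tables directly.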

As of Docker 1.2, to be released this week, one may also specify docker
run --cap-add to provide granular control of the addition of Linux
capabilities that might be needed by processes (see
http://linux.die.net/man/7/capabilities). This allows granular loosening of
restrictions which might allow container-breakout, without fully opening
the gates.  From a security perspective, I'd rather provide some
restrictions than none at all.
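
A sketch of that more granular style (again, the image name is hypothetical
and NET_ADMIN is just one plausible capability such an agent might need):

  docker run -d --net=host --cap-add=NET_ADMIN example/neutron-l3-agent

which is still far preferable to reaching straight for --privileged.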

On compute nodes, it should be possible to run qemu/kvm inside of a
container. The nova-compute program does many things on a host and it may
be difficult to provide a simplified set of restrictions for it without
running a privileged container (or one with many --cap-add statements,
--net host, etc). Again, while containment might be minimized, the
deployment facilities of Docker are still very useful.  That said, all of
the really interesting things done by Nova that require privileges are
done by rootwrap... a rootwrap which leveraged Docker would make
containerization of Nova more meaningful and would be a boon for Nova
security overall.
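
For completeness, a nova-compute container under the assumptions above might
look roughly like this (image name and bind-mount paths are guesses of mine,
not a tested recipe):

  docker run -d --privileged --net=host \
      -v /var/run/libvirt:/var/run/libvirt \
      -v /var/lib/nova:/var/lib/nova \
      example/nova-compute

i.e. most of the containment is switched off, but the imaging and deployment
benefits remain.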

-- 
Regards,
Eric Windisch
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

