Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Adam Gandelman
The stable-maint team has been more active over the last couple of months in
keeping on top of stable-branch-specific gate breakage (usually identified
by periodic job failures).  We managed to flush a bunch of reviews through
the gate over the last couple of weeks [1]. Yeah, many required rechecks, but
the biggest bugs I hit were http://pad.lv/1323658 and
http://pad.lv/1374175 which, according to elastic-recheck, are
project-wide, affecting master and not specific to the stable branches.

mikal's right, code review has indeed been lagging over the last cycle,
though in the last month or two a number of new faces have shown up and are
actively helping get things reviewed in a timely manner.

I'm curious what else is failing that is specific to the stable trees?  I
spent time over the weekend babysitting many stable merges and found it to
be no more / no less painful than trying to get a Tempest patch merged.

Cheers,
-Adam

[1]
https://review.openstack.org/#/q/status:merged+branch:stable/icehouse,n,z

On Wed, Oct 1, 2014 at 9:42 AM, Sean Dague s...@dague.net wrote:

 As stable branches got discussed recently, I'm kind of curious who is
 actually stepping up to make icehouse able to pass tests in any real
 way. Because right now I've been trying to fix devstack icehouse so that
 icehouse requirements can be unblocked (and to land code that will
 reduce grenade failures)

 I'm on retry #7 of modifying the tox.ini file in devstack.

 During the last summit people said they wanted to support icehouse for
 15 months. Right now we're at 6 months and the tree is basically unable
 to merge code.

 So who is actually standing up to fix these things, or are we going to
 just leave it broken and shoot icehouse in the head early?

 -Sean

 --
 Sean Dague
 http://dague.net

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] [I18N] compiling translation message catalogs

2014-10-02 Thread Łukasz Jernaś
On Wed, Oct 1, 2014 at 6:04 PM, Akihiro Motoki amot...@gmail.com wrote:
 Hi,

Hi Akihiro!

 To display localized strings, we need to compile translated message
 catalogs (PO files) into compiled ones (MO files).
 I would like to discuss and get a consensus on who should generate
 compiled message catalogs, and when.
 Input from packagers is really appreciated.

 [The current status]
 * Horizon contains compiled message catalogs in the git repo. (It is
 just history and there seems to be no strong reason to keep compiled ones
 in the repo. There is a bug report on it.)
 * All other projects do not contain compiled message catalogs and have
 only PO files.

 [Possible choices]
 I think there are several options. (there may be other options)
 (a) OpenStack does not distribute compiled message catalogs, and only
 provides a command (setup.py integration) to compile message catalogs.
 Deployers or distributors need to compile message catalogs.
 (b) Similar to (a), but compile message catalogs as a part of pip install.
 (c) OpenStack distributes compiled message catalogs as a part of the release.
 (c1) the git repo maintains compiled message catalogs.
 (c2) only tarball contains compiled message catalogs

 Note that the current Horizon is (c1) and others are (a).

I'd go for (a), as traditionally message catalogs have been compiled during
the packaging step for Linux software (of course your experience may
vary).
That said, if it were straightforward to integrate compilation into pip
install, that would also be a good solution.

In any case, keeping the compiled files in git isn't a good solution, as they
easily get outdated...
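
For readers who haven't dealt with the compile step: turning a PO file into
an MO file with Babel boils down to a couple of calls. A minimal sketch (the
nova paths and domain are purely illustrative):

    # Minimal sketch of the PO -> MO compile step; paths/domain are examples.
    from babel.messages.pofile import read_po
    from babel.messages.mofile import write_mo

    with open('nova/locale/de/LC_MESSAGES/nova.po') as po_file:
        catalog = read_po(po_file)

    # MO files are binary, so the output has to be opened in binary mode.
    with open('nova/locale/de/LC_MESSAGES/nova.mo', 'wb') as mo_file:
        write_mo(mo_file, catalog)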

Best regards,
-- 
Łukasz [DeeJay1] Jernaś

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-02 Thread Gregory Haynes
Excerpts from James Polley's message of 2014-10-02 05:37:25 +:
 All three of the options presented here seem to assume that UIDs will always 
 be allocated at image-build time. I think that's because most of these UIDs 
 will be used to write files into the chroot at image-create time - if I could 
 think of some way around that, I think we could avoid this problem more 
 neatly by not assigning the UIDs until first boot
 
 But since we can't do that, would it be possible to compromise by having the 
 UIDs read in from heat metadata, and using the current allocation process if 
 none is provided?
 
 This should allow people who prefer to have static UIDs to have simple 
 drop-in config, but also allow people who want to dynamically read from 
 existing images to scrape the details and then drop them in.
 
 To aid people who have existing images, perhaps we could provide a small tool 
 (if one doesn't already exist) that simply reads /etc/passwd and returns a 
 JSON username:uid map, to be added into the heat local environment when 
 building the next image?
 

What I was suggesting before as an alternate solution is a simpler
version of this - just copy the existing /etc/passwd and friends into
the chroot at the start of building a new image. This should cause new
users to be created in a safe way.

I like the UID pinning better as a solution, though.
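
As an aside, the small /etc/passwd-to-JSON helper James mentions really is
only a few lines. A purely illustrative sketch, not an existing tool:

    # Read an image's /etc/passwd and emit a {username: uid} JSON map,
    # suitable for dropping into the heat local environment.
    import json
    import sys

    path = sys.argv[1] if len(sys.argv) > 1 else '/etc/passwd'
    uid_map = {}
    with open(path) as passwd:
        for line in passwd:
            if not line.strip():
                continue
            name, _password, uid = line.split(':')[:3]
            uid_map[name] = int(uid)
    print(json.dumps(uid_map, indent=2, sort_keys=True))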

Cheers,
Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Alan Pevec
 The original idea was that these stable branches would be maintained by the
 distros, and that is clearly not happening if you look at the code review

Stable branches are maintained by the _upstream_ stable-maint team [1],
where most members might be from (two) distros, but please note that
all PTLs are also included and there are members who are not from a
distro.
But you're right: if this stays a mostly one-distro effort, we'll pull
out and do it on our own.
/me looks at other, non-named distros

 latency there. We need to sort that out before we even consider supporting a
 release for more than the one year we currently do.

Please consider that stable branches are also needed for security
fixes, and we, as a responsible upstream project, need to provide them
with or without distros. The stable branch was the master branch just a few
months ago and it inherited all the bugs present there, so everybody
fixing a gate bug on master should consider backporting to stable at
the same time. It can't be a stable-maint-only responsibility, e.g.
stable-maint doesn't have +2 in devstack stable/* or in tempest (now
branchless, so master) branches.

Cheers,
Alan

[1] https://review.openstack.org/#/admin/groups/120,members

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] openvswitch integration

2014-10-02 Thread Andreas Scheuring
Hi all,
I'm wondering why OVS was integrated into OpenStack in the way it is
today
(http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html)

In particular, I would like to understand:
- why does every physical interface have its own bridge? (I guess you
could also plug it directly into the br-int)
- and why does the br-int use VLAN separation and not directly the
configured tenant-network-type separation (e.g. vxlan or something
else)? Tagging a packet with an internal VLAN and then converting it to
the external VLAN again looks strange to me at first glance.

It's just a feeling, but this surely has an impact on performance. I
guess latency and CPU consumption will go up with this design.
Are there any technical or historical reasons for it? Or is it just to
reduce complexity?

Thanks!


-- 
Andreas 
(irc: scheuran)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Alan Pevec
 I'm on retry #7 of modifying the tox.ini file in devstack.

Which review# is that so I can have a look?

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] [fuel] Executor task affinity

2014-10-02 Thread Nikolay Makhotkin
Hi, folks!

I drafted a document describing how task affinity will be applied
to Mistral:

https://docs.google.com/a/mirantis.com/document/d/17O51J1822G9KY_Fkn66Ul2fc56yt9T4NunnSgmaehmg/edit

-- 
Best Regards,
Nikolay
@Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Dave Walker
On 2 Oct 2014 08:19, Alan Pevec ape...@gmail.com wrote:

  The original idea was that these stable branches would be maintained by the
  distros, and that is clearly not happening if you look at the code review

 Stable branches are maintained by the _upstream_ stable-maint team[1]
 where most members might be from (two) distros but please note that
 all PTLs are also included and there are members who are not from a
 distro.
 But you're right, if this stays mostly one distro effort, we'll pull
 out and do it on our own.
 /me looks at other non-named distros

  latency there. We need to sort that out before we even consider supporting
  a release for more than the one year we currently do.

 Please consider that stable branches are also needed for security
 fixes, and we, as a responsible upstream project, need to provide them
 with or without distros. The stable branch was the master branch just a few
 months ago and it inherited all the bugs present there, so everybody
 fixing a gate bug on master should consider backporting to stable at
 the same time. It can't be a stable-maint-only responsibility, e.g.
 stable-maint doesn't have +2 in devstack stable/* or in tempest (now
 branchless, so master) branches.

 Cheers,
 Alan

 [1] https://review.openstack.org/#/admin/groups/120,members


Hey,

When I initially proposed the concept of stable branches, it was indeed
targeted as a collaborative distro effort.

It became clear in the summit session that there was not just shared
interest from distros, but also from vendors and large consumers.

It was /not/ something that I envisaged would become a project
responsibility, just an area for the various types of consumer to
collaborate rather than duplicating effort downstream and, most likely,
missing pretty important stability patches.

I didn't want dedicated point releases, just an always-stable area that
consumers could pull/rebase from. This idea pretty much changed: vendors
wanted a stamped point release to make the situation clearer to
their users.

I think everyone would agree that the project and its scope have grown pretty
significantly since the early days, and I agree that there does need to be a
project-wide sharing of the burden of keeping the stable branches maintained,
with stable-maint becoming the driver. It can only scale if there is
sustained interest from each project.

I do not think it *can* now work with a small team of generalists, without
support from each project's subject-matter experts.

I am pretty nervous about the point you make about Red Hat taking their ball
and going home if more distros don't commit to more effort. This is simply
not the way to encourage more participation.

Sadly, the git pull stats cannot be public, but I am pretty sure that a
reasonably large consumer base slurps up the branches directly. If this is
true, then it is clear that the project has a responsibility to users.
Therefore, the quick-fire talk about the ongoing feasibility of the stable
branches is a bit rash.  The general project clearly isn't ready for
rolling releases, so we need to talk about how we can make this work.

I have been absent from the stable-maint effort for the last year, but have
been tracking the stable mailing list.

This feels like the first credible 'we are struggling' that has been raised
- I actually believed it was reasonably healthy. It does seem that this
issue has been brewing for a while.

Therefore, I think we need to make a better effort at tracking weak areas in
the process. We do not have a decent TODO list.

Tracking what needs to be done allows better granular sharing of the
burden.

This is not a problem of looking at open gerrit stable/* reviews, but of bugs
in the process.

Is the issue mostly about changing dependencies upper versions?

Should we consider whitelisting updated dependencies in requirements.txt,
rather than blacklisting/racing to backport a fix?

Are enough patchsets proposed from current master?

Are project cores routinely asking themselves if a patchset should be
backported?

Are we tracking open bugs on master well enough as also affecting stable
releases?

I do not think we are struggling primarily with technical issues, but
procedural issues.

I hope we are all agreed we /need/ something. Let's talk about 'what' and
'how', rather than 'if'.

[I will look to be more involved with stable this cycle.]

--
Kind Regards,
Dave Walker
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Ihar Hrachyshka
Hi,

I guess the following review is meant:
https://review.openstack.org/#/c/125075/

I went through each of the failures for the patch (no dependency failures
checked), and here are some damned lies (c) about those failures:

- bug 1323658: 2 failures (ssh connection timeout issue, shows up in
Neutron jobs)
- bug 1331274: 2 failures (Grenade not starting services)
- bug 1375108: 4 failures (bug in Nova EC2 reboot code?)
- bug 1348204: 1 failure (Cinder volume detach failing)
- bug 1374175: 1 failure (Heat bug)

None of those bugs is solved in master. Some of them have stayed
open for months. The first bug was raised as a Neutron bug and
marked for RC-1, but then was untargeted due to the belief among Neutron
developers that it's a bug in Tempest.

Nevertheless, despite the hopeless state of the gate, for the Icehouse
release scheduled for today the stable-maint team was able to merge fixes
for more than 120 bugs.

So, all that said, does anyone believe that it's fair to bitch about
stable maintainers not doing their job? Wouldn't it be more fair to
bitch about the overall hopeless state of the gate and about projects
feeling OK releasing Juno with major failures in the gate (like in the case
of the very first bug in the list)?

/Ihar

On 02/10/14 09:21, Alan Pevec wrote:
 I'm on retry #7 of modifying the tox.ini file in devstack.
 
 Which review# is that so I can have a look?
 
 Cheers, Alan
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [IPv6] [ironic] New API format for extra_dhcp_opts

2014-10-02 Thread Lucas Alvares Gomes
Thanks, guys, for the heads up.

Indeed, making it backwards compatible by adding the [ip_]version key to
the dictionary sounds like the best way to go.
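
For reference, a port body using Mark's suggestion would then look roughly
like this (a sketch only; whether the key ends up being called version or
ip_version is still being settled in this thread):

    # Sketch of extra_dhcp_opts with the proposed per-option version key.
    # The "ip_version" name is an assumption; it may end up as "version".
    port_body = {
        "port": {
            "extra_dhcp_opts": [
                # No version key: default behaviour (applies to both, or to
                # v4 only -- still under discussion).
                {"opt_name": "bootfile-name", "opt_value": "testfile.1"},
                {"opt_name": "tftp-server", "opt_value": "123.123.123.123",
                 "ip_version": 4},
                {"opt_name": "dns-server",
                 "opt_value": "[2001:0200:feed:7ac0::1]", "ip_version": 6},
            ]
        }
    }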

Cheers,
Lucas

On Thu, Oct 2, 2014 at 3:53 AM, Carlino, Chuck chuck.carl...@hp.com wrote:
 As a 'heads up', adding ironic to the thread since they are a 'key' consumer 
 of this api.


 On Oct 1, 2014, at 3:15 AM, Xu Han Peng pengxu...@gmail.com wrote:

 ip_version sounds great.

 Currently the opt-names are written into the configuration file of dnsmasq 
 directly. So I would say yes, they are coming from dnsmasq definitions.

 It will make more sense that when ip_version is missing or null, the option
 applies to both, since we could have only an IPv6 or IPv4 address on the port.
 However, the validation of opt-value should rule out the ones which are not
 suitable for the current address. For example, an IPv6 DNS server should not
 be specified for a port with only an IPv4 address, etc...

 Xu Han

 On 09/30/2014 08:41 PM, Robert Li (baoli) wrote:
 Xu Han,

 That looks good to me. To keep it consistent with existing CLI, we should use 
 ip-version instead of ‘version’. It seems to be identical to prefixing the 
 option_name with v4 or v6, though.

 Just to clarify, are the available opt-names coming from dnsmasq definitions?

 With regard to the default, your suggestion (version is optional; no version
 means version=4) seems to be different from Mark’s:
 I’m -1 for both options because neither is properly backwards compatible.  
 Instead we should add an optional 3rd value to the dictionary: “version”.  
 The version key would be used to make the option only apply to either version 
 4 or 6.  If the key is missing or null, then the option would apply to both.

 Thanks,
 Robert

 On 9/30/14, 1:46 AM, Xu Han Peng pengxu...@gmail.com wrote:

 Robert,

 I think the CLI will look something like this, based on Mark's suggestion:

 neutron port-create extra_dhcp_opts 
 opt_name=dhcp_option_name,opt_value=value,version=4(or 6) network

 This extra_dhcp_opts can be repeated and version is optional (no version 
 means version=4).

 Xu Han

 On 09/29/2014 08:51 PM, Robert Li (baoli) wrote:
 Hi Xu Han,

 My question is how the CLI user interface would look like to distinguish 
 between v4 and v6 dhcp options?

 Thanks,
 Robert

 On 9/28/14, 10:29 PM, Xu Han Peng pengxu...@gmail.com wrote:

 Mark's suggestion works for me as well. If no one objects, I am going to 
 start the implementation.

 Thanks,
 Xu Han

 On 09/27/2014 01:05 AM, Mark McClain wrote:

 On Sep 26, 2014, at 2:39 AM, Xu Han Peng pengxu...@gmail.com wrote:

 Currently the extra_dhcp_opts has the following API interface on a port:

 {
     "port": {
         "extra_dhcp_opts": [
             {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
             {"opt_value": "123.123.123.123", "opt_name": "tftp-server"},
             {"opt_value": "123.123.123.45", "opt_name": "server-ip-address"}
         ],

     }
 }

 During the development of the DHCPv6 function for IPv6 subnets, we found this
 format doesn't work anymore because a port can have both IPv4 and IPv6
 addresses. So we need to find a new way to specify extra_dhcp_opts for DHCPv4
 and DHCPv6, respectively. (https://bugs.launchpad.net/neutron/+bug/1356383)

 Here are some thoughts about the new format:

 Option1: Change the opt_name in extra_dhcp_opts to add a prefix (v4 or v6) so 
 we can distinguish opts for v4 or v6 by parsing the opt_name. For backward 
 compatibility, no prefix means IPv4 dhcp opt.

 "extra_dhcp_opts": [
     {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
     {"opt_value": "123.123.123.123", "opt_name": "v4:tftp-server"},
     {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "v6:dns-server"}
 ]

 Option2: break extra_dhcp_opts into IPv4 opts and IPv6 opts. For backward 
 compatibility, both old format and new format are acceptable, but old format 
 means IPv4 dhcp opts.

 "extra_dhcp_opts": {
     "ipv4": [
         {"opt_value": "testfile.1", "opt_name": "bootfile-name"},
         {"opt_value": "123.123.123.123", "opt_name": "tftp-server"}
     ],
     "ipv6": [
         {"opt_value": "[2001:0200:feed:7ac0::1]", "opt_name": "dns-server"}
     ]
 }

 The pro of Option1 is that there is no need to change the API structure, only
 to add validation and parsing of opt_name. The con of Option1 is that the user
 needs to input a prefix for every opt_name, which can be error prone. The pro
 of Option2 is that it's clearer than Option1. The con is that we need to check
 two formats for backward compatibility.

 We discussed this in the IPv6 sub-team meeting and we think Option2 is
 preferred. Can I also get the community's feedback on which one is preferred,
 or any other comments?


 I’m -1 for both options because neither is properly backwards compatible.  
 Instead we 

[openstack-dev] [Heat] Juno RC1 available

2014-10-02 Thread Thierry Carrez
Hello everyone,

Another day, another RC. Heat just published its first Juno release
candidate. The list of fixed bugs and the RC1 tarball are available at:
https://launchpad.net/heat/juno/juno-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released as the 2014.2 final
version on October 16. You are therefore strongly encouraged to test and
validate this tarball!

Alternatively, you can directly test the proposed/juno branch at:
https://github.com/openstack/heat/tree/proposed/juno

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/heat/+filebug

and tag it *juno-rc-potential* to bring it to the release crew's
attention.

Note that the master branch of Heat is now open for Kilo
development, and feature freeze restrictions no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] [I18N] compiling translation message catalogs

2014-10-02 Thread Ihar Hrachyshka
On 01/10/14 18:04, Akihiro Motoki wrote:
 Hi,
 
  To display localized strings, we need to compile translated
  message catalogs (PO files) into compiled ones (MO files). I would
  like to discuss and get a consensus on who should generate compiled
  message catalogs, and when. Input from packagers is really appreciated.

  [The current status] * Horizon contains compiled message catalogs in
  the git repo. (It is just history and there seems to be no strong
  reason to keep compiled ones in the repo. There is a bug report on
  it.) * All other projects do not contain compiled message catalogs
  and have only PO files.
 
 [Possible choices] I think there are several options. (there may be
 other options) (a) OpenStack does not distribute compiled message
 catalogs, and only provides a command (setup.py integration) to
 compile message catalogs. Deployers or distributors need to compile
 message catalogs. (b) Similar to (a), but compile message catalogs
 as a part of pip install. (c) OpenStack distributes compiled
 message catalogs as a part of the release. (c1) the git repo
 maintains compiled message catalogs. (c2) only tarball contains
 compiled message catalogs
 
 Note that the current Horizon is (c1) and others are (a).
 
  [Pros/Cons] (a) It is the simplest way for OpenStack upstream.
  Packagers need to compile message catalogs and customize their
  scripts. Deployers who install OpenStack from source need to
  compile them too. (b) It might be a simple approach from a user's
  perspective. However, to compile message catalogs during
  installation, we need to install required modules (like babel or
  django) before running the installation (or require them as
  first-class citizens in setup.py requirements). I don't think it is
  much simpler compared to option (a). (c1) Compiled message catalogs
  are a kind of binary file and they can be generated from PO files.
  There is no need to manage the same data twice. (c2) OpenStack is
  downloaded in several ways (release tarballs are not the only option),
  so I don't see much merit in only the tarball containing compiled
  message catalogs. A merit is that compiled message catalogs are
  available and there is no additional step users need to do.

  My preference is (a), but I would like to hear broader opinions,
  especially from packagers.

I'm for (a). There is not much sense in storing compiled catalogs in
git. Just provide a setup.py command for us to invoke during build.

I think it's ok for us to introduce new build time dependencies.
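
To illustrate what such a setup.py command could look like, Babel already
ships a distutils/setuptools command class for this. A rough sketch only
(OpenStack projects wire setup.py through pbr, so the real integration may
differ):

    # setup.py -- rough sketch of exposing Babel's compile_catalog command.
    # Babel also registers the command automatically via entry points when it
    # is installed; the explicit cmdclass is shown here only for clarity.
    import setuptools
    from babel.messages import frontend as babel

    setuptools.setup(
        name='example',
        version='0.1',
        packages=setuptools.find_packages(),
        cmdclass={'compile_catalog': babel.compile_catalog},
    )

Packagers or deployers would then run something like
"python setup.py compile_catalog -d example/locale -D example" at build time.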

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] openvswitch integration

2014-10-02 Thread Kevin Benton
- why does every physical interface have its own bridge? (I guess you could
also plug it directly into the br-int)

This is where iptables rules are applied. Until they are implemented in OVS
directly, this bridge is necessary.

- and why does the br-int use vlan separation and not directly the configured
tenant-network-type separation..

The process plugging into the vswitch (Nova) has no idea what the network
segmentation method will be, which is set by Neutron via the Neutron agent.
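
In other words, the OVS agent keeps a purely node-local mapping and rewrites
the tag at the edge bridges. Conceptually something like this (illustrative
only, not actual agent code):

    # Illustrative only: the node-local view the OVS agent maintains.  Each
    # network gets a throwaway local VLAN on br-int, translated to the real
    # segmentation id (provider VLAN or tunnel key) at the edge bridges.
    local_vlan_map = {
        'net-a': {'local_vlan': 1, 'network_type': 'vxlan',
                  'segmentation_id': 101},
        'net-b': {'local_vlan': 2, 'network_type': 'vlan',
                  'segmentation_id': 2001},
    }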


On Thu, Oct 2, 2014 at 12:20 AM, Andreas Scheuring 
scheu...@linux.vnet.ibm.com wrote:

 Hi together,
 I'm wondering why ovs was integrated into openstack in the way it is
 today
 (
 http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html
 )

 Especially I would like to understand
 - why does every physical interface have its own bridge? (I guess you
 could also plug it directly into the br-int)
 - and why does the br-int use vlan separation and not directly the
 configured tenant-network-type separation (e.g. vxlan or something
 else)? Tagging a packet with the internal vlan and then converting it to
 the external vlan again looks strange to me in the first place.

 It's just a feeling but this surely has impact on the performance. I
 guess latency and cpu consumption will surely go up with this design.
 Are there any technical or historical reasons for it? Or is it just to
 reduce complexity?

 Thanks!


 --
 Andreas
 (irc: scheuran)




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-02 Thread Sullivan, Jon Paul
 -Original Message-
 From: Clint Byrum [mailto:cl...@fewbar.com]
 Sent: 02 October 2014 02:51
 To: openstack-dev
 Subject: [openstack-dev] [TripleO] a need to assert user ownership in
 preserved state
 
 Recently we've been testing image based updates using TripleO, and we've
 run into an interesting conundrum.
 
 Currently, our image build scripts create a user per service for the
 image. We don't, at this time, assert a UID, so it could get any UID in
 the /etc/passwd database of the image.
 
 However, if we add a service that happens to have its users created
 before a previously existing service, the UID's shift by one. When this
 new image is deployed, the username might be 'ceilometer', but
 /mnt/state/var/lib/ceilometer is now owned by 'cinder'.
 
 Here are 3 approaches, which are not mutually exclusive to one another.
 There are likely others, and I'd be interested in hearing your ideas.
 
 * Static UID's for all state-preserving services. Basically we'd just
   allocate these UID's from a static pool and those are always the UIDs
   no matter what. This is the simplest solution, but does not help
   anybody who is already looking to update a TripleO cloud. Also, this
   would cause problems if TripleO wants to merge with any existing
   system that might also want to use similar UID's. This also provides
   no guard against non-static UID's storing things on the state
   partition.
 
 * Fix the UID's on image update. We can backup /etc/passwd and
   /etc/group to /mnt/state, and on bootup we can diff the two, and any
   UIDs that changed can be migrated. This could be very costly if the
   swift storage UID changed, with millions of files present on the
   system. This merge process is also not atomic and may not be
   reversible, so it is a bit scary to automate this.
 
 * Assert ownership when registering state path. We could have any
   state-preserving elements register their desire for any important
   globs for the state drive to be owned by a particular symbolic
   username. This is just a different, more manual way to fix the UID's
   and carries the same cons.

For these last two cases, of fixing the file ownership on first boot based on 
the previous UIDs of a username, why would we decide to fix the data files?

If instead we were to change the UIDs such that the data files were correct, 
the only thing to fix up would be the installed files in the image, which are a 
well-defined and limited set of files.
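
Either way, the first step is the same: compare the preserved passwd database
with the new image's and work out which usernames moved. A rough sketch of
that step (the /mnt/state/etc/passwd location is an assumption, following
Clint's backup idea):

    # Rough sketch: find usernames whose UID changed between the preserved
    # passwd database and the freshly built image's /etc/passwd.
    def read_uids(path):
        uids = {}
        with open(path) as passwd:
            for line in passwd:
                if not line.strip():
                    continue
                name, _password, uid = line.split(':')[:3]
                uids[name] = int(uid)
        return uids

    old = read_uids('/mnt/state/etc/passwd')   # assumed backup location
    new = read_uids('/etc/passwd')
    changed = {name: (old_uid, new[name])
               for name, old_uid in old.items()
               if name in new and old_uid != new[name]}
    # 'changed' then feeds either fix-up: chown the state partition (the
    # second option above) or rewrite the image's passwd/group and chown
    # only the image files (as suggested here).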

 
 So, what do people think?
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [fuel] Executor task affinity

2014-10-02 Thread Dmitriy Shulyak
Hi,

As I understood it, you want to store some mappings of tags to hosts in the
database, but then you need to sort out an API for registering hosts and/or
a discovery mechanism for such hosts. That is quite complex.
It may be useful, but in my opinion it would be better to have a simpler/more
flexible variant.

For example:

1. Provide targets in workbook description, like:

task:
  targets: [nova, cinder, etc]

2. Get targets from execution contexts by using yaql:

task:
  targets: $.uids

task:
  targets: [$.role, $.uid]

In this case all simple relations will be covered by the AMQP routing
configuration.
What do you think about such an approach?

On Thu, Oct 2, 2014 at 11:35 AM, Nikolay Makhotkin nmakhot...@mirantis.com
wrote:

 Hi, folks!

 I drafted the document where we can see how task affinity will be applied
 to Mistral:


 https://docs.google.com/a/mirantis.com/document/d/17O51J1822G9KY_Fkn66Ul2fc56yt9T4NunnSgmaehmg/edit

 --
 Best Regards,
 Nikolay
 @Mirantis Inc.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Limitation of permissions on modification some resources

2014-10-02 Thread Andrey Epifanov

Thank you Mark for the answer.

andrey

On 29.09.2014 18:31, Mark McClain wrote:


On Sep 29, 2014, at 7:09 AM, Andrey Epifanov aepifa...@mirantis.com wrote:



Hi All,

I started working on https://bugs.launchpad.net/neutron/+bug/1339028
and realized that we have the same issue with other connected
resources in Neutron.


This is a bug in how we’re implementing the logic to manage routes on
the router instance in the l3-agent implementation.  There are other
implementations of the logical router that do not need this restriction.




The problem is that we have an API for the modification of any resource
without limitations. For example, we can modify the router IP, and the VMs
connected to this subnet will never know about it and will lose their
default router. The same situation applies to routes and IPs for DHCP/DNS
ports.

https://bugs.launchpad.net/neutron/+bug/1374398
https://bugs.launchpad.net/neutron/+bug/1267310


I don’t see any of these as a bug.  If a tenant wants to make changes to
their network (even ill-advised ones), we should allow it.
Restricting these API operations to admins means we’re inhibiting
users from making changes that could be regular maintenance operations
of a tenant.


mark



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] discussion of an implementation detail for boot from network feature

2014-10-02 Thread Ondrej Wisniewski

Hi all,

This is related to the following blueprint:
https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance

I would like to discuss here briefly an implementation detail and 
collect some feedback.


With this new feature, the boot option "boot from network" will be added
to the existing options "boot from disk" and "boot from volume". The
first approach to implement this was to define a specific IMAGE_ID_TOKEN
which will be used to handle the boot-from-network option as a special
case of the boot-from-disk option. This is a simple solution and has the
advantage of avoiding changes to the Nova REST API.


The second option would be to introduce the new boot-from-network
option in the Nova REST API, with all the consequences of an API change
(tests, documentation, etc).


Any thoughts on these two alternatives? This is a preliminary 
investigation in order to avoid wasting time on an implementation which 
would be rejected during review due to wrong design decisions.


Ondrej


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] discussion of an implementation detail for boot from network feature

2014-10-02 Thread Daniel P. Berrange
On Thu, Oct 02, 2014 at 11:45:54AM +0200, Ondrej Wisniewski wrote:
 Hi all,
 
 This is related to the following blueprint:
 https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance
 
 I would like to discuss here briefly an implementation detail and collect
 some feedback.
 
 With this new feature, the boot option boot from network will be added to
 the existing options boot from disk and boot from volume. The first
 approach to implement this was to define a specific IMAGE_ID_TOKEN which
 will be used to handle the boot from network option as a special case of
 boot from disk option. This is a simple solution and has the advantage of
 avoiding changes to the Nova REST API.
 
 The second option would be to introduce the new boot from network option
 in the Nova REST API with all the consequences of an API change (test,
 documentation, etc).
 
 Any thoughts on these two alternatives? This is a preliminary investigation
 in order to avoid wasting time on an implementation which would be rejected
 during review due to wrong design decisions.

When booting from the network there is potentially a choice of multiple
NICs from which to do PXE.

With KVM you are not restricted to saying disk or network as exclusive
choices, but rather you can setup arbitrary prioritization of boot order
across devices, whether disk, nic or PCI assigned device.

So we should really consider this broader problem of boot device
prioritization, not merely a PXE flag. IOW, we should extend the Nova
boot command so that the --block-device-mapping and --nic args both
allow an integer boot priority value to be specified per device.

  bootindex=NNN

And likewise allow it to be set for PCI assigned devices.

Hypervisors that don't support such fine-grained ordering can simply
ignore anything except the device with bootindex=1.
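
That fallback rule is easy to express. A hypothetical sketch, not existing
Nova code:

    # Hypothetical sketch: order devices by the proposed bootindex value and
    # degrade gracefully on hypervisors that only honour one boot device.
    def effective_boot_order(devices, per_device_order_supported):
        """devices: dicts like {'id': 'vda', 'bootindex': 1};
        bootindex is optional."""
        ordered = sorted((d for d in devices if d.get('bootindex') is not None),
                         key=lambda d: d['bootindex'])
        if per_device_order_supported:
            return ordered
        # Otherwise only the bootindex=1 device matters.
        return ordered[:1]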

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread Duncan Thomas
So I have substantial concerns about hierarchy-based designs and data
mass - the interconnects between leaves in the hierarchy are often
going to be fairly thin, particularly if they are geographically
distributed, so the semantics of what is allowed to access which data
resource (glance, swift, cinder, manila) need some very careful
thought, and the way those restrictions are portrayed to the user to
avoid confusion needs even more thought.

On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as 
 different zones), rather than a single monolithic OpenStack instance because 
 of these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularized from 00's up to million;
 4) Fault and maintenance isolation between zones (only REST interface);

 At the same time, they also want to integrate these OpenStack instances into 
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard OpenStack framework for Northbound API compatibility with 
 HEAT/Horizon or other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by 
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack 
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work 
 together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be better to have a 
 discussion as a formal cross program session, because many core programs are 
 involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access 
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

 Best Regards
 Chaoyi Huang ( Joe Huang )
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] No Nova API meeting this week

2014-10-02 Thread Christopher Yeoh
Hi,

About half the regulars at the Nova API meeting are on vacation
this week, so I'm cancelling this week's meeting as I don't
think there is anything urgent to discuss. We'll meet as usual
next week.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Import errors in tests

2014-10-02 Thread Lucas Alvares Gomes
Hi,

I don't know if it's a known issue, but we have this patch in Ironic
here https://review.openstack.org/#/c/124610/ and the gate jobs for
python26 and python27 are failing because of some import error [1], and
it doesn't show what the error is exactly. It's important to say
also that the tests run locally without any problem, so I can't
reproduce the error locally here.

Has anyone seen something like that?

I will continue to dig into it and see if I can spot something, but I
thought it would be nice to share it here too because it's maybe a
potential gate problem.

[1] 
http://logs.openstack.org/10/124610/14/check/gate-ironic-python27/5c21433/console.html

Cheers,
Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Import errors in tests

2014-10-02 Thread Dmitry Tantsur

On 10/02/2014 01:30 PM, Lucas Alvares Gomes wrote:

Hi,

I don't know if it's a known issue, but we have this patch in Ironic
here https://review.openstack.org/#/c/124610/ and the gate jobs for
python26 and python27 are failing because of some import error[1] and
it doesn't show me what is the error exactly, it's important to say
also that the tests run locally without any problem so I can't
reproduce the error locally here.

Did you try with a fresh environment?



Have anyone seem something like that ?
I have to say that our test toolchain is completely inadequate in the case
of import errors: even locally, spotting an import error involves manually
importing all suspicious modules, because tox just outputs garbage.
Something has to be done about it.
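
One stopgap that usually surfaces the real traceback is to import the test
modules directly instead of going through testr. A hedged sketch (using
ironic.tests, the package from this thread, as the example):

    # Walk the test package and import every module directly, so the genuine
    # ImportError traceback is printed instead of the opaque listing failure.
    import importlib
    import pkgutil

    import ironic.tests

    for _loader, name, _is_pkg in pkgutil.walk_packages(
            ironic.tests.__path__, prefix='ironic.tests.'):
        importlib.import_module(name)  # raises on the module that can't import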




I will continue to dig into it and see if I can spot something, but I
thought it would be nice to share it here too cause that's maybe a
potential gate problem.

[1] 
http://logs.openstack.org/10/124610/14/check/gate-ironic-python27/5c21433/console.html

Cheers,
Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Sean Dague
On 10/02/2014 04:47 AM, Ihar Hrachyshka wrote:
 Hi,
 
 I guess the following review is meant:
 https://review.openstack.org/#/c/125075/
 
 I went thru each of the failure for the patch (no dependency failures
 checked), and here are some damned lies (c) about those failures:
 
 - bug 1323658: 2 failures (ssh connection timeout issue, shows up in
 Neutron jobs)
 - bug 1331274: 2 failures (Grenade not starting services)
 - bug 1375108: 4 failures (bug in Nova EC2 reboot code?)
 - bug 1348204: 1 failure (Cinder volume detach failing)
 - bug 1374175: 1 failure (Heat bug)
 
 Neither of those bugs are solved in master. Some of those bugs are
 staying opened for months. The first bug was raised as Neutron bug and
 marked for RC-1, but then was untargeted due to believe among Neutron
 developers that it's a bug in Tempest.
 
 Nevertheless, with all the hopeless state of the gate, in the Icehouse
 release scheduled for today stable maint team was able to merge fixes
 for more than 120 bugs.
 
 So, all that said, does anyone believe that it's fair to bitch about
 stable maintainers not doing their job? Wouldn't it be more fair to
 bitch about the overall hopeless state of the gate and projects
 feeling ok releasing Juno with major failures in gate (like in case of
 the very first bug in the list)?
 
 /Ihar

Fwiw, this whole patch stream is part of the fix for #1331274 in Juno /
Master (we need to get off of screen in grenade to have better process
control) -
https://review.openstack.org/#/q/status:open+project:openstack-dev/devstack+branch:stable/icehouse+topic:no_screen,n,z
and in order to merge any devstack icehouse changes we needed to fix the
tox bashate targets (https://review.openstack.org/#/c/125075/ is
actually a trivial additional add, so it made a really good indication
that this was a latent state).

After those failures on this patch we correctly disabled grenade on
icehouse (it should have been turned off when havana EOLed; there was a
delay) - https://review.openstack.org/#/c/125371/

I made a bunch of noise about #1323658 in the project meeting and spent a
bunch of time chasing that afterwards; I agree that punting on it for the
release was a very questionable call. I proposed the skip -
https://review.openstack.org/#/c/125150/ - which was merged.

I marked #1374175 as critical for the heat team -
https://bugs.launchpad.net/heat/+bug/1374175 - and a skip for it is
working its way through the gate now -
https://review.openstack.org/#/c/125545/.

Joe, Matt Treinish, and I looked at #1375108 last night, and found that
the test author had recheck grinded their test in ... even when it
failed in related areas. So that was straight reverted -
https://review.openstack.org/#/c/125543/

I have not looked into 1348204 at all - I just bumped it up to critical.
Looks like Matt Riedeman has a debug patch out there.

...

But that seems to answer the question I was asking. Who's maintaining
this seems to be me (or, more specifically, the same people that are
always working these bugs: me, joe, matt, and matt).

I don't want it to be me, because as soon as I manage to get the
infrastructure in place for fixing #1331274 I want nothing more to do with
icehouse stable.

What I'm complaining about is that until I spent time on #1331274 no one
seemed to understand that devstack patches aren't mergeable (and hadn't
been for weeks). If stable were maintained I'd expect that someone would
actually know the current state, or be helping with it. Also, icehouse is
about to no longer be installable with pip because of pip changes.
https://review.openstack.org/#/c/124648/ has to land to fix that, and other
devstack patches have to land to get us there.

If stable branches are important to the project, then stable branches
need to be front and center in the weekly project meeting. Maintaining a
thing is actually knowing the current status and working to make it better.

Removing tests is a totally fine thing to propose, for instance.

But the point, raised well by Alan, is that the stable branches are basically
only being worked on by one distro, and no other vendors. So honestly,
if that doesn't change, I'd suggest dropping all but the most recent one
(so we can keep upgrade testing for current master).

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Thierry Carrez
Michael Still wrote:
 I agree with Sean here.
 
 The original idea was that these stable branches would be maintained by
 the distros, and that is clearly not happening if you look at the code
 review latency there. We need to sort that out before we even consider
 supporting a release for more than the one year we currently do.

Well, it's just another area where the current model fails to scale.
It's easy to only talk about gating and release management and overlook
vulnerability management, stable maintenance and other horizontal tasks
where the resources also don't grow nearly as fast as new integrated
projects and complexity.

As far as stable is concerned, the fix is relatively simple and was
proposed a while back: push responsibility for stable branch maintenance
down to the project level. The current stable-maint team would become
stable branch release managers, and it would be the responsibility of
each project to maintain its stable branch, backport fixes and make
sure things can get merged to it.

Those projects may or may not be willing to commit to 15 months of
maintenance (which means maintaining 2-3 stable branches in addition to
master). But I think what they can commit to is a better reflection of
what we can achieve -- since without upstream support it's difficult to
keep all stable branches for all integrated projects alive.

I already planned to dedicate a cross-project workshop (or a release
management scheduled slot) to that specific topic, so that we can have a
clear way forward in Kilo.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] [I18N] compiling translation message catalogs

2014-10-02 Thread Tom Fifield
On 02/10/14 14:32, Łukasz Jernaś wrote:
 On Wed, Oct 1, 2014 at 6:04 PM, Akihiro Motoki amot...@gmail.com wrote:
 Hi,
 
 Hi Akihiro!
 
  To display localized strings, we need to compile translated message
  catalogs (PO files) into compiled ones (MO files).
  I would like to discuss and get a consensus on who should generate
  compiled message catalogs, and when.
  Input from packagers is really appreciated.

  [The current status]
  * Horizon contains compiled message catalogs in the git repo. (It is
  just history and there seems to be no strong reason to keep compiled ones
  in the repo. There is a bug report on it.)
  * All other projects do not contain compiled message catalogs and have
  only PO files.

 [Possible choices]
 I think there are several options. (there may be other options)
 (a) OpenStack does not distribute compiled message catalogs, and only
 provides a command (setup.py integration) to compile message catalogs.
 Deployers or distributors need to compile message catalogs.
 (b) Similar to (a), but compile message catalogs as a part of pip install.
 (c) OpenStack distributes compiled message catalogs as a part of the release.
 (c1) the git repo maintains compiled message catalogs.
 (c2) only tarball contains compiled message catalogs

 Note that the current Horizon is (c1) and others are (a).
 
 I'd go for (a), as traditionally message catalogs were compiled during
 the packaging step for Linux software (of course your experiences may
 vary).
 Of course if it was pretty straightforward to integrate it into pip
 install it would also be a good solution.

(a) sounds sane, but we should ensure that we tell the packagers that we
expect them to make the compiled message catalogues so ops can more
easily use the translations. (I guess this is like a modified version of
(b))

Regards,

Tom

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Sean Dague
On 10/02/2014 07:57 AM, Thierry Carrez wrote:
 Michael Still wrote:
 I agree with Sean here.

 The original idea was that these stable branches would be maintained by
 the distros, and that is clearly not happening if you look at the code
 review latency there. We need to sort that out before we even consider
 supporting a release for more than the one year we currently do.
 
 Well, it's just another area where the current model fails to scale.
 It's easy to only talk about gating and release management and overlook
 vulnerability management, stable maintenance and other horizontal tasks
 where the resources also don't grow nearly as fast as new integrated
 projects and complexity.
 
 As far as stable is concerned, the fix is relatively simple and has been
 proposed a while back: push responsibility of stable branch maintenance
 down at project-level. The current stable-maint team would become
 stable branch release managers and it would be the responsibility of
 each project to maintain their stable branch, backport fixes and making
 sure things can get merged to it.

I disagree that that's the simple fix, because the net effect is that it's
pushed back to the only people that seem to be working on OpenStack as a
whole -- see ranty rant in the other part of this thread.

Decentralizing this responsibility, if we're talking about any more than
5 or 6 integrated projects, makes it unsolvable IMHO. It just kicks the
can down the road with a "we solved it" stamp... when we did no such thing.

If I can't merge the nova fixes because heat is killing the stable tree
(which it currently is), then clearly I can't, as a nova dev, be
responsible for that. People have already given up on that in master;
there is no way they are going to care on stable.

 Those projects may or may not be willing to commit to 15 months
 maintenance (which means maintaining 2-3 stable branches in addition to
 master). But I think what they can commit to is a better reflection of
 what we can achieve -- since without upstream support it's difficult to
 keep all stable branches for all integrated projects alive.
 
 I already planned to dedicate a cross-project workshop (or a release
 management scheduled slot) to that specific topic, so that we can have a
 clear way forward in Kilo.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Thierry Carrez
Sean Dague wrote:
 If stable branches are important to the project, then stable branches
 need to be front and center in the weekly project meeting. Maintaining a
 thing is actually knowing the current status and working to make it better.

FWIW, the current weekly meeting is no longer a general catch-all
project status meeting -- it is now specifically about the release under
development, not about stable branches. That is why we don't talk about
stable branches there. We could change (again) the scope of that
meeting, or have a specific meeting about stable status.

For example, if we require stable liaisons in every project, those
could meet with the stable maint release managers every week to discuss
the state of the branches.

 Removing tests is a totally fine thing to propose, for instance.
 
 But the point is, raised well by Alan, the stable branches are basically
 only being worked on by one distro, and no other vendors. So honestly,
 if that doesn't change, I'd suggest dropping all but the most recent one
 (so we can test upgrade testing for current master).

I think another issue is that the stable-maint team is traditionally
staffed with distro packagers, who are less involved upstream (and
have less time to dedicate upstream) than your average OpenStack
contributor. That doesn't make them the best candidates to know the gate
inside out, or to have connections in each and every project to get
issues solved. Which is why it tends to fall back on the usual suspects :)

So I'm not sure getting more distro packagers involved would make that
much difference. We need everyone upstream to care more about stable/*.
And we need to align our support period with what we can collectively
achieve.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] discussion of an implementation detail for boot from network feature

2014-10-02 Thread Ondrej Wisniewski

On 10/02/2014 11:56 AM, Daniel P. Berrange wrote:

On Thu, Oct 02, 2014 at 11:45:54AM +0200, Ondrej Wisniewski wrote:

Hi all,

This is related to the following blueprint:
https://blueprints.launchpad.net/nova/+spec/pxe-boot-instance

I would like to discuss here briefly an implementation detail and collect
some feedback.

With this new feature, the boot option boot from network will be added to
the existing options boot from disk and boot from volume. The first
approach to implement this was to define a specific IMAGE_ID_TOKEN which
will be used to handle the boot from network option as a special case of
boot from disk option. This is a simple solution and has the advantage of
avoiding changes to the Nova REST API.

The second option would be to introduce the new boot from network option
in the Nova REST API with all the consequences of an API change (test,
documentation, etc).

Any thoughts on these two alternatives? This is a preliminary investigation
in order to avoid wasting time on an implementation which would be rejected
during review due to wrong design decisions.

When booting from the network there is potentially a choice of multiple
NICs from which to do PXE.

With KVM you are not restricted to saying disk or network as exclusive
choices, but rather you can setup arbitrary prioritization of boot order
across devices, whether disk, nic or PCI assigned device.

So we should really consider this broader problem of boot device
prioritization not merely a PXE flag. IOW, we should extend the Nova
boot command so that the --block-device-mapping and --nic args both
allow for an integer boot priority value to be specified per device.

   bootindex=NNN

And likewise allow it to be set for PCI assigned devices.

Hypervisors that don't support such fine grained ordering, can simply
ignore anything except the device with bootindex=1.
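
A purely illustrative sketch of what the proposed syntax might look like
(neither the bootindex value nor this exact argument form exists in the
current client; flavor, image and IDs below are placeholders):

    nova boot my-instance \
        --flavor m1.small --image fedora-20 \
        --block-device-mapping vdb=<VOLUME_ID>,bootindex=2 \
        --nic net-id=<NET_ID>,bootindex=1

A hypervisor such as KVM could then translate these integers directly into
its boot order configuration, while others would simply honour bootindex=1.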

Regards,
Daniel

Hi Daniel,

your proposal sounds reasonable to me. Implementing the possibility to 
choose the boot order priority from all available block devices and NICs 
(and possibly also PCI devices) would certainly cover more use cases 
than just network boot. As you mentioned, some hypervisors like KVM 
support this, so we need to analyse what it takes to make the 
appropriate changes in OpenStack to pass the needed information down the 
chain. It will most likely involve REST API changes, but we need to do 
some digging into the Nova code here.


Thanks so far, Ondrej


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] icehouse failure rates are somewhat catastrophic - who is actually maintaining it?

2014-10-02 Thread Duncan Thomas
On 2 October 2014 12:57, Thierry Carrez thie...@openstack.org wrote:

 As far as stable is concerned, the fix is relatively simple and has been
 proposed a while back: push responsibility of stable branch maintenance
 down at project-level. The current stable-maint team would become
 stable branch release managers and it would be the responsibility of
 each project to maintain their stable branch, backport fixes and making
 sure things can get merged to it.

 Those projects may or may not be willing to commit to 15 months
 maintenance (which means maintaining 2-3 stable branches in addition to
 master). But I think what they can commit to is a better reflection of
 what we can achieve -- since without upstream support it's difficult to
 keep all stable branches for all integrated projects alive.

I don't see that much interest in doing this from many of the core
teams, so I think a likely result of this would be to make things
worse, not better. It's fine to say something is now their
responsibility, but that does little to nothing to influence where
they actually choose to work.

What is actually needed is for those who rely on the stable branches'
existence to step forward and dedicate resources to them. Putting
the work on people who are not interested is just the same as killing the
branches off, except slower, messier and creating more anger and other
community fallout along the way.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread joehuang
Hello, Duncan, 

Your substantial concerns are warmly welcome and very important.

Agree with you that the interconnect between leaves should be fairly thin: 

During the PoC, Nova/Cinder/Ceilometer/Neutron/Glance (Glance is optionally 
located in the leaf) in each leaf work independently from other leaves. The 
only interconnect between two leaves is the L2/L3 network across OpenStack for 
the tenant. But it is handled by the L2/L3 proxy located at the cascading 
level, and the instruction will only be issued one way, by the corresponding 
L2/L3 proxy.

And also, from a Ceilometer perspective, it must work as a distributed service. 
We roughly estimated how much meter data would be generated for a cloud at the 
million-VM level: with the current Ceilometer (not including Gnocchi) and a 
sampling period of 1 minute, it is about 20 GB / minute (quite roughly 
estimated). Using a single Ceilometer instance is almost impossible for such a 
large-scale distributed cloud. Therefore, Ceilometer cascading must be designed very carefully.
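
One way to arrive at a number of that magnitude (the per-instance meter
count and per-sample size below are assumptions for illustration, not
measurements from the PoC):

    # back-of-envelope sketch of the meter data volume per minute
    instances = 1000 * 1000            # a "million level" cloud
    meters_per_instance = 10           # assumed
    sample_kb = 2                      # assumed size of one sample
    gb_per_minute = instances * meters_per_instance * sample_kb / (1024.0 * 1024.0)
    print(gb_per_minute)               # ~19 GB/minute, roughly the 20 GB quoted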

In our PoC design principle, the cascaded OpenStack should work passively, and 
has no knowledge of whether it is running under a cascading scenario or whether 
there is a sibling OpenStack, in order to reduce interconnect between 
cascaded OpenStacks as much as possible. And one level of cascading is enough 
for the foreseeable future.

The PoC team plans to stay in Paris from Oct. 29 to Nov. 8; are you interested in an 
f2f workshop for a deep dive into OpenStack cascading?

Best Regards

Chaoyi Huang ( joehuang )


From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 02 October 2014 18:59
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

So I have substantial concerns about hierarchy based designs and data
mass - the interconnect between leaves in the hierarchy are often
going to be fairly thin, particularly if they are geographically
distributed, so the semantics of what is allowed to access what data
resource (glance, swift, cinder, manilla) need some very careful
thought, and the way those restrictions are portrayed to the user to
avoid confusion needs even more thought.

On 30 September 2014 14:04, joehuang joehu...@huawei.com wrote:
 Hello, Dear TC and all,

 Large cloud operators prefer to deploy multiple OpenStack instances(as 
 different zones), rather than a single monolithic OpenStack instance because 
 of these reasons:

 1) Multiple data centers distributed geographically;
 2) Multi-vendor business policy;
 3) Server nodes scale up modularized from 00's up to million;
 4) Fault and maintenance isolation between zones (only REST interface);

 At the same time, they also want to integrate these OpenStack instances into 
 one cloud. Instead of proprietary orchestration layer, they want to use 
 standard OpenStack framework for Northbound API compatibility with 
 HEAT/Horizon or other 3rd ecosystem apps.

 We call this pattern as OpenStack Cascading, with proposal described by 
 [1][2]. PoC live demo video can be found[3][4].

 Nova, Cinder, Neutron, Ceilometer and Glance (optional) are involved in the 
 OpenStack cascading.

 Kindly ask for cross program design summit session to discuss OpenStack 
 cascading and the contribution to Kilo.

 Kindly invite those who are interested in the OpenStack cascading to work 
 together and contribute it to OpenStack.

 (I applied for “other projects” track [5], but it would be better to have a 
 discussion as a formal cross program session, because many core programs are 
 involved )


 [1] wiki: https://wiki.openstack.org/wiki/OpenStack_cascading_solution
 [2] PoC source code: https://github.com/stackforge/tricircle
 [3] Live demo video at YouTube: https://www.youtube.com/watch?v=OSU6PYRz5qY
 [4] Live demo video at Youku (low quality, for those who can't access 
 YouTube):http://v.youku.com/v_show/id_XNzkzNDQ3MDg4.html
 [5] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg36395.html

 Best Regards
 Chaoyi Huang ( Joe Huang )
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-10-02 Thread Brant Knudson
On Thu, Oct 2, 2014 at 6:04 AM, Ken'ichi Ohmichi ken1ohmi...@gmail.com
wrote:



 Thanks for your advice, that is very useful input for me.
 I read both keystone-specs and ietf draft-spec for JSON-Home.
 I have a question.

 JSON-Home is useful for advertising API URL paths to clients, I guess
 but it cannot advertise the supported attributes of a request body.
 Is that right?


Right, it says right in the FAQ:
https://tools.ietf.org/html/draft-nottingham-json-home-03#appendix-B.5 :

How Do I find the schema for a format?

   That isn't addressed by home documents. ...


Also, you might want to check out section 5, Representation Hints :
https://tools.ietf.org/html/draft-nottingham-json-home-03#section-5
 . All it says is TBD. So we might have to make up our own standard here.


 For example, we can create a user nobody by passing the following
 request body to Keystone /v2.0/users with POST method:

   '{user: {email: null, password: null, enabled: true, name:
 nobody, tenantId: null}}'

 In this case, I hope Keystone can advertise the above
 attributes(email, name, etc).
 but JSON-Home doesn't cover it as its scope, I guess.


When discussing the document schema I think we're planning to use
JSONSchema... In Keystone, we've got J-S implemented on some parts (I don't
think it covers all resources yet). I also don't think our JSONSchema is
discoverable yet (i.e., you can't download the schema from the server). I
haven't heard of other projects implementing this yet, but maybe someone
has.

There probably is some way to integrate JSON Home with JSONSchema. Maybe
you can put a reference to the JSONSchema in the hints for the resource.
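
A purely illustrative sketch of that idea (the relation URL and the
"json-schema" hint name are made up here; the draft does not define such a
hint):

    {
      "resources": {
        "http://docs.openstack.org/api/rel/users": {
          "href": "/v2.0/users",
          "hints": {
            "allow": ["GET", "POST"],
            "formats": {"application/json": {}},
            "json-schema": "/v2.0/schemas/user"
          }
        }
      }
    }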

On the current Nova v2 API, we need to add a dummy extension when adding new
 attributes to an existing request/response body in order to advertise the
 change to clients. I'd be glad if we could use a more standard way of doing it.

 Thanks
 Ken'ichi Ohmichi

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Glance on swift problem

2014-10-02 Thread Timur Nurlygayanov
Slawek,

did you change Swift proxy-server.conf file too?
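
For reference, the workaround quoted further down in this message boils down
to two small config changes (section names and file locations here are the
usual defaults):

    # glance-api.conf (glance.conf in the quoted mail)
    [DEFAULT]
    swift_store_large_object_size = 200

    # swift proxy-server.conf
    [DEFAULT]
    node_timeout = 90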

On Wed, Oct 1, 2014 at 11:26 PM, Sławek Kapłoński sla...@kaplonski.pl
wrote:

 Hello,

 Thanks for your help but it does not help. I checked for sure that on each
 swift node there is a lot of free space. What confirms that is the fact that
 when I try to create an image with a size of about 1.7GB and I have
 swift_store_large_object_size set to 1GB, then there is an error (always after
 sending the first chunk to swift (200MB)). When I only change
 swift_store_large_object_size to 2GB and restart glance-api, then the same
 image is created correctly (it is then in one big object).

 ---
 Best regards
 Sławek Kapłoński
 sla...@kaplonski.pl

 Dnia wtorek, 30 września 2014 22:28:11 Timur Nurlygayanov pisze:
  Hi Slawek,
 
  we faced the same error and this is issue with Swift.
  We can see 100% disk usage on the Swift node during the file upload and
  looks like Swift can't send info about status of the file loading in
 time.
 
  On our environments we found the workaround for this issue:
  1. Set  swift_store_large_object_size = 200 in glance.conf.
  2. Add to Swift proxy-server.conf:
 
  [DEFAULT]
  ...
  node_timeout = 90
 
  Probably we can set this value as default value for this parameter
 instead
  of '30'?
 
 
  Regards,
  Timur
 
 
  On Tue, Sep 30, 2014 at 7:41 PM, Sławek Kapłoński sla...@kaplonski.pl
 
  wrote:
   Hello,
  
   I can't find that upload from was previous logs but I now try to upload
   same image once again. In glance there was exactly same error. In swift
   logs I have:
  
   Sep 30 17:35:10 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y
   30/Sep/2014/15/35/10 HEAD
 /v1/AUTH_7ef5a7661ccd4c069e3ad387a6dceebd/glance
   HTTP/1.0 204
   Sep 30 17:35:16 127.0.0.1 proxy-server X.X.X.X Y.Y.Y.Y
   30/Sep/2014/15/35/16 PUT /v1/AUTH_7ef5a7661ccd4c069e3ad387a6dcee
   bd/glance/fa5dfe09-74f5-4287-9852-d2f1991eebc0-1 HTTP/1.0 201 - -
  
   Best regards
   Slawek Kaplonski
  
   W dniu 2014-09-30 17:03, Kuo Hugo napisał(a):
   Hi ,
  
   Could you please post the log of related requests in Swift's log ???
  
   Thanks // Hugo
  
   2014-09-30 22:20 GMT+08:00 Sławek Kapłoński sla...@kaplonski.pl:
Hello,
  
   I'm using openstack havana release and glance with swift backend.
   Today I found that I have problem when I create image with url in
   --copy-from when image is bigger than my
   swift_store_large_object_size because then glance is trying to
   split image to chunks with size given in
   swift_store_large_object_chunk_size and when try to upload first
   chunk to swift I have error:
  
   2014-09-30 15:05:29.361 18023 ERROR glance.store.swift [-] Error
   during chunked upload to backend, deleting stale chunks
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift Traceback
   (most recent call last):
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
   /usr/lib/python2.7/dist-packages/glance/store/swift.py, line 384,
   in add
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
  
content_length=content_length)
  
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
   /usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1234,
   in put_object
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
  
response_dict=response_dict)
  
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
   /usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1143,
   in _retry
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
  
reset_func(func, *args, **kwargs)
  
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
   /usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1215,
   in _default_reset
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift %
   (container, obj))
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
   ClientException: put_object('glance',
   '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
   ability to reset contents for reupload.
   2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
   2014-09-30 15:05:29.362 18023 ERROR glance.store.swift [-] Failed
   to add object to Swift.
   Got error from Swift: put_object('glance',
   '9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
   ability to reset contents for reupload.
   2014-09-30 15:05:29.362 18023 ERROR glance.api.v1.upload_utils [-]
   Failed to upload image 9f56ccec-deeb-4020-95ba-ca7bf1170056
   2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
   Traceback (most recent call last):
   2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
  
File
  
   /usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py,
   line 101, in upload_data_to_store
   2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
  
store)
  
   2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
  
File /usr/lib/python2.7/dist-packages/glance/store/__init__.py,
  
   line 333, in store_add_to_backend
  

[openstack-dev] [Nova] Launching multiple VMs in Nova

2014-10-02 Thread Oleg Bondarev
Hi,

It turns out that there is a 1:1 relationship between rpc_thread_pool_size
messaging config [1] and the number of instances that can be spawned
simultaneously.
Please see bug [2] for more details.
I think this should be at least documented. Thoughts?
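
For anyone wanting to experiment, the knob in question is set in the service
configuration; the value below is only an example, and raising it is a
workaround rather than a recommendation:

    # nova.conf (and similarly for other services using oslo.messaging)
    [DEFAULT]
    rpc_thread_pool_size = 128   # the oslo.messaging default is 64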

Thanks,
Oleg

[1]
https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_executors/impl_eventlet.py
[2] https://bugs.launchpad.net/neutron/+bug/1372049
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Glance on swift problem

2014-10-02 Thread Sławek Kapłoński

Hello,

I did some more debugging on that problem and found out what is happening:
Glance sends the first chunk (in my config it was 200MB) and, after 
finishing sending it to Swift, it sends some bad HTTP request. Swift has 
got something like this in its logs:
proxy-server ERROR WSGI: code 400, message Bad request syntax 
('\x01\x00\x1c[n\x01\x00\x00\x96s\x01\x00\x00*t') (txn: 
txa21d64d49ac347bb87023-00542d4e59) (client_ip: X.X.X.X)

and the upload finishes with an error in Glance.

Does anyone know of such a bug?

Best regards
Slawek Kaplonski

W dniu 2014-09-30 16:20, Sławek Kapłoński napisał(a):

Hello,

I'm using openstack havana release and glance with swift backend.
Today I found that I have problem when I create image with url in
--copy-from when image is bigger than my
swift_store_large_object_size because then glance is trying to split
image to chunks with size given in
swift_store_large_object_chunk_size and when try to upload first
chunk to swift I have error:

2014-09-30 15:05:29.361 18023 ERROR glance.store.swift [-] Error
during chunked upload to backend, deleting stale chunks
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift Traceback (most
recent call last):
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
/usr/lib/python2.7/dist-packages/glance/store/swift.py, line 384, in
add
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
content_length=content_length)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
/usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1234,
in put_object
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
response_dict=response_dict)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
/usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1143,
in _retry
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
reset_func(func, *args, **kwargs)
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift   File
/usr/lib/python2.7/dist-packages/swiftclient/client.py, line 1215,
in _default_reset
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift % 
(container, obj))

2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
ClientException: put_object('glance',
'9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
ability to reset contents for reupload.
2014-09-30 15:05:29.361 18023 TRACE glance.store.swift
2014-09-30 15:05:29.362 18023 ERROR glance.store.swift [-] Failed to
add object to Swift.
Got error from Swift: put_object('glance',
'9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
ability to reset contents for reupload.
2014-09-30 15:05:29.362 18023 ERROR glance.api.v1.upload_utils [-]
Failed to upload image 9f56ccec-deeb-4020-95ba-ca7bf1170056
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
Traceback (most recent call last):
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File
/usr/lib/python2.7/dist-packages/glance/api/v1/upload_utils.py, line
101, in upload_data_to_store
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils 
store)

2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File
/usr/lib/python2.7/dist-packages/glance/store/__init__.py, line 333,
in store_add_to_backend
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
(location, size, checksum, metadata) = store.add(image_id, data, size)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils   File
/usr/lib/python2.7/dist-packages/glance/store/swift.py, line 447, in
add
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
raise glance.store.BackendException(msg)
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils
BackendException: Failed to add object to Swift.
2014-09-30 15:05:29.362 18023 TRACE glance.api.v1.upload_utils Got
error from Swift: put_object('glance',
'9f56ccec-deeb-4020-95ba-ca7bf1170056-1', ...) failure and no
ability to reset contents for reupload.


Does someone of You got same error and know what is solution of it? I
was searching about that in google but I not found anything what could
solve my problem.


--
Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-02 Thread Clint Byrum
Excerpts from Gregory Haynes's message of 2014-10-01 19:09:38 -0700:
 Excerpts from Clint Byrum's message of 2014-10-02 01:50:33 +:
  Recently we've been testing image based updates using TripleO, and we've
  run into an interesting conundrum.
  
  Currently, our image build scripts create a user per service for the
  image. We don't, at this time, assert a UID, so it could get any UID in
  the /etc/passwd database of the image.
  
  However, if we add a service that happens to have its users created
  before a previously existing service, the UID's shift by one. When
  this new image is deployed, the username might be 'ceilometer', but
  /mnt/state/var/lib/ceilometer is now owned by 'cinder'.
 
 Wow, nice find!
 

Indeed, the Helion dev team discovered this one whilst working on updating
between alternating builds that added or removed some services.

Oh, I forgot to mention the bug reference:

https://bugs.launchpad.net/tripleo/+bug/1374626

  
  Here are 3 approaches, which are not mutually exclusive to one another.
  There are likely others, and I'd be interested in hearing your ideas.
  
  * Static UID's for all state-preserving services. Basically we'd just
allocate these UID's from a static pool and those are always the UIDs
no matter what. This is the simplest solution, but does not help
anybody who is already looking to update a TripleO cloud. Also, this
would cause problems if TripleO wants to merge with any existing
system that might also want to use similar UID's. This also provides
no guard against non-static UID's storing things on the state
partition.
 
 +1 for this approach for the reasons mentioned.
 
  
  * Fix the UID's on image update. We can backup /etc/passwd and
/etc/group to /mnt/state, and on bootup we can diff the two, and any
UIDs that changed can be migrated. This could be very costly if the
swift storage UID changed, with millions of files present on the
system. This merge process is also not atomic and may not be
reversible, so it is a bit scary to automate this.
 
 If we really want to go with this type of aproach we could also just
 copy the existing /etc/passwd into the image thats being built. Then
 when users are added they should be added in after existing users.
 

I do like this approach, and it isn't one I had considered. We will know
what image we want to update from in nearly every situation. Also this
supports another case, which is rolling back to the previous image,
quite well.

Really this is just an automated form of static UID assignment.
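
A rough sketch of what the "fix the UIDs on image update" approach could look
like (the backup location and the choice to emit chown commands are
assumptions, not an agreed design):

    # migrate_uids.py - compare the preserved passwd with the new image's
    # passwd and print the chown commands needed to fix state-partition
    # ownership for any username whose UID changed.
    def read_passwd(path):
        users = {}
        with open(path) as f:
            for line in f:
                fields = line.strip().split(':')
                if len(fields) >= 3:
                    users[fields[0]] = int(fields[2])   # name -> uid
        return users

    old = read_passwd('/mnt/state/etc/passwd.saved')    # assumed backup
    new = read_passwd('/etc/passwd')

    for name, old_uid in old.items():
        if name in new and new[name] != old_uid:
            # files on the state partition still carry the old uid
            print('find /mnt/state -xdev -uid %d -exec chown -h %s {} +'
                  % (old_uid, name))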

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-02 Thread Clint Byrum
Excerpts from James Polley's message of 2014-10-01 22:37:25 -0700:
 All three of the options presented here seem to assume that UIDs will always 
 be allocated at image-build time. I think that's because most of these UIDs 
 will be used to write files into the chroot at image-create time - if I could 
 think of some way around that, I think we could avoid this problem more 
 neatly by not assigning the UIDs until first boot
 

Yeah I don't think we're going to work around that. It is part of the
magic of images that the metadata is all in place and there's no churn
at boot.

 But since we can't do that, would it be possible to compromise by having the 
 UIDs read in from heat metadata, and using the current allocation process if 
 none is provided?
 

I really, really dislike this. Post-boot tools like Heat are for
per-server customization and site-wide changes. UIDs seem like plumbing
under the hood.

 This should allow people who prefer to have static UIDs to have simple 
 drop-in config, but also allow people who want to dynamically read from 
 existing images to scrape the details and then drop them in.
 

I see your point, and I'm now confused as I don't really understand what
would make somebody prefer dynamic UID allocation.

 To aid people who have existing images, perhaps we could provide a small tool 
 (if one doesn't already exist) that simply reads /etc/passwd and returns a 
 JSON username:uid map, to be added into the heat local environment when 
 building the next image?
 

Or a tool that reads the image, and returns /etc/passwd and /etc/group.
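
The small tool mentioned above could be as simple as the following sketch (it
reads the running system's passwd database; reading it out of an image would
just mean pointing it at the mounted image instead):

    # passwd_to_json.py - dump a username -> uid map as JSON so it can be
    # fed into the next image build.
    import json
    import pwd

    print(json.dumps(dict((e.pw_name, e.pw_uid) for e in pwd.getpwall())))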

Thanks very much for your thoughts. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-02 Thread Sullivan, Jon Paul
 -Original Message-
 From: Clint Byrum [mailto:cl...@fewbar.com]
 Sent: 02 October 2014 15:16
 To: openstack-dev
 Subject: Re: [openstack-dev] [TripleO] a need to assert user ownership
 in preserved state
 
 Excerpts from Gregory Haynes's message of 2014-10-01 19:09:38 -0700:
  Excerpts from Clint Byrum's message of 2014-10-02 01:50:33 +:
   Recently we've been testing image based updates using TripleO, and
   we've run into an interesting conundrum.
  
   Currently, our image build scripts create a user per service for the
   image. We don't, at this time, assert a UID, so it could get any UID
   in the /etc/passwd database of the image.
  
   However, if we add a service that happens to have its users created
   before a previously existing service, the UID's shift by one. When
   this new image is deployed, the username might be 'ceilometer', but
   /mnt/state/var/lib/ceilometer is now owned by 'cinder'.
 
  Wow, nice find!
 
 
 Indeed, the Helion dev team discovered this one whilst working on
 updating between alternating builds that added or removed some services.
 
 Oh, I forgot to mention the bug reference:
 
 https://bugs.launchpad.net/tripleo/+bug/1374626
 
  
   Here are 3 approaches, which are not mutually exclusive to one
 another.
   There are likely others, and I'd be interested in hearing your
 ideas.
  
   * Static UID's for all state-preserving services. Basically we'd
 just
 allocate these UID's from a static pool and those are always the
 UIDs
 no matter what. This is the simplest solution, but does not help
 anybody who is already looking to update a TripleO cloud. Also,
 this
 would cause problems if TripleO wants to merge with any existing
 system that might also want to use similar UID's. This also
 provides
 no guard against non-static UID's storing things on the state
 partition.
 
  +1 for this approach for the reasons mentioned.
 
  
   * Fix the UID's on image update. We can backup /etc/passwd and
 /etc/group to /mnt/state, and on bootup we can diff the two, and
 any
 UIDs that changed can be migrated. This could be very costly if
 the
 swift storage UID changed, with millions of files present on the
 system. This merge process is also not atomic and may not be
 reversible, so it is a bit scary to automate this.
 
  If we really want to go with this type of aproach we could also just
  copy the existing /etc/passwd into the image thats being built. Then
  when users are added they should be added in after existing users.
 
 
 I do like this approach, and it isn't one I had considered. We will know
 what image we want to update from in nearly every situation. Also this
 supports another case, which is rolling back to the previous image,
 quite well.
 
 Really this is just an automated form of static UID assignment.

So for situations where images are built and then distributed, two things would 
need to happen:
1. Identify new users and add them to the passwd file.
2. Modify ownership of files in the new image whose owner's UID has 
changed, as per my previous mail.

 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2. 
Registered Number: 361933
 
The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error you should 
delete it from your system immediately and advise the sender.

To any recipient of this message within HP, unless otherwise stated, you should 
consider this message and attachments as HP CONFIDENTIAL.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-02 Thread Clint Byrum
Excerpts from Sullivan, Jon Paul's message of 2014-10-02 02:08:26 -0700:
  -Original Message-
  From: Clint Byrum [mailto:cl...@fewbar.com]
  Sent: 02 October 2014 02:51
  To: openstack-dev
  Subject: [openstack-dev] [TripleO] a need to assert user ownership in
  preserved state
  
  Recently we've been testing image based updates using TripleO, and we've
  run into an interesting conundrum.
  
  Currently, our image build scripts create a user per service for the
  image. We don't, at this time, assert a UID, so it could get any UID in
  the /etc/passwd database of the image.
  
  However, if we add a service that happens to have its users created
  before a previously existing service, the UID's shift by one. When this
  new image is deployed, the username might be 'ceilometer', but
  /mnt/state/var/lib/ceilometer is now owned by 'cinder'.
  
  Here are 3 approaches, which are not mutually exclusive to one another.
  There are likely others, and I'd be interested in hearing your ideas.
  
  * Static UID's for all state-preserving services. Basically we'd just
allocate these UID's from a static pool and those are always the UIDs
no matter what. This is the simplest solution, but does not help
anybody who is already looking to update a TripleO cloud. Also, this
would cause problems if TripleO wants to merge with any existing
system that might also want to use similar UID's. This also provides
no guard against non-static UID's storing things on the state
partition.
  
  * Fix the UID's on image update. We can backup /etc/passwd and
/etc/group to /mnt/state, and on bootup we can diff the two, and any
UIDs that changed can be migrated. This could be very costly if the
swift storage UID changed, with millions of files present on the
system. This merge process is also not atomic and may not be
reversible, so it is a bit scary to automate this.
  
  * Assert ownership when registering state path. We could have any
state-preserving elements register their desire for any important
globs for the state drive to be owned by a particular symbolic
username. This is just a different, more manual way to fix the UID's
and carries the same cons.
 
 For these last two cases, of fixing the file ownership on first boot based on 
 the previous UIDs of a username, why would we decide to fix the data files?
 
 If instead we were to change the UIDs such that the data files were correct, 
 the only thing to fix up would be the installed files in the image, which are 
 a well-defined and limited set of files.
 

Great point JP!

I think that this is similar to Greg's suggestion of copying the uid/gid
database into the image. If we copy it in before the image is built, we
fix the image by building it right in the first place.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Dependency freeze exception: django-nose>=1.2

2014-10-02 Thread Thomas Goirand
Hi,

murano-dashboard effectively needs django-nose>=1.2. As per this:

https://review.openstack.org/125651

it's not a problem for Ubuntu and Debian. Does anyone have a concern
about this dependency freeze exception?

Cheers,

Thomas Goirand (zigo)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread Duncan Thomas
On 2 October 2014 14:30, joehuang joehu...@huawei.com wrote:

 In our PoC design principle, the cascaded OpenStack should work passively, 
 and has no knowledge of whether it is running under a cascading scenario or 
 whether there is a sibling OpenStack, in order to reduce interconnect 
 between cascaded OpenStacks as much as possible.
 And one level of cascading is enough for the foreseeable future.

The transparency is what worries me, e.g. at the moment I can attach
any volume to any vm (* depending on cinder AZ policy), which is going
to be broken in a cascaded scenario if the volume and vm are in
different leaves.


 The PoC team plans to stay in Paris from Oct. 29 to Nov. 8; are you interested in 
 an f2f workshop for a deep dive into OpenStack cascading?

Definitely interested, yes please.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dependency freeze exception: django-nose>=1.2

2014-10-02 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi,

Red Hat should be fine with the change. We're going to ship Juno for
Fedora 21+ and EL7 only (no EL6), and they both have the needed
versions packaged [1].

[1]: https://admin.fedoraproject.org/updates/python-django-nose

On 02/10/14 16:29, Thomas Goirand wrote:
 Hi,
 
 murano-dashboard effectively needs django-nose>=1.2. As per this:
 
 https://review.openstack.org/125651
 
 it's not a problem for Ubuntu and Debian. Does anyone have a
 concern about this dependency freeze exception?
 
 Cheers,
 
 Thomas Goirand (zigo)
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)

iQEcBAEBCgAGBQJULWKaAAoJEC5aWaUY1u57uGkIAKX/Afrtg0tCQQLFjnw1Jjvi
129wl+j7gHod1LX9Jx9EEZDbHJSJG11krgF/Kyba+xGrer1XbKqm1F77VYxs3mtl
+/GfoaOQbbM4NMuyGRFcQKNYaihhOe3KGKmijOdpAhjO/LvQcF+pSFkOESzZj8D1
+vGKe001hEryJLjcvHoyy4usZOg1LfpDMbfyG+20KCa26M1jTPq8ZnXGwRlZPIKX
AWq9mbB/TKBQOkLoKSJ31vZwzp22CqsYjcDtxznq8I+iCK7i3PkkSfYxaBIbchhO
57gkJjjuulVt1k6nj/z5q6h7awgHV3HLf+BxuUC6Pkz8pCYuX+V5qUDNW54g86A=
=oE5D
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] formally distinguish server desired state from actual state?

2014-10-02 Thread Chen CH Ji
Not only the ERROR state, but also VERIFY_RESIZE might have this kind of problem;
https://review.openstack.org/#/c/101435/ has more info.
So I guess the server tasks work might be the right direction for those
problems ...

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Chris Friesen chris.frie...@windriver.com
To: openstack-dev@lists.openstack.org
openstack-dev@lists.openstack.org
Date:   10/02/2014 03:05 AM
Subject:[openstack-dev] [nova] formally distinguish server desired
state  from actual state?




Currently in nova we have the vm_state, which according to the code
comments is supposed to represent a VM's current stable (not
transition) state, or what the customer expect the VM to be.

However, we then added in an ERROR state.  How does this possibly make
sense given the above definition?  Which customer would ever expect the
VM to be in an error state?

Given this, I wonder whether it might make sense to formally distinguish
between the expected/desired state (i.e. the state that the customer
wants the VM to be in), and the actual state (i.e. the state that nova
thinks the VM is in).

This would more easily allow for recovery actions, since if the actual
state changes to ERROR (or similar) we would still have the
expected/desired state available for reference when trying to take
recovery actions.
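
As a purely illustrative sketch (not an existing Nova model), the split might
look like:

    # sketch: separate what the user asked for from what nova observed
    class ServerState(object):
        def __init__(self, desired, actual):
            self.desired = desired   # what the customer wants the VM to be
            self.actual = actual     # what nova currently thinks the VM is

        def needs_recovery(self):
            # recovery logic can always consult the desired state, even
            # after the actual state has fallen into ERROR
            return self.actual == 'ERROR' and self.desired != 'DELETED'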

Thoughts?

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [MagnetoDB] IRC weekly meeting minutes 02-10-2014

2014-10-02 Thread Ilya Sviridov
Hello team,

Thank you for attending meeting today.

I'm puting here meeting minutes and link to logs [1] [2] [3]

As usually agenda for meeting is free to extend [4]


[1]
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.html
[2]
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.txt
[3]
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html
[4] https://wiki.openstack.org/wiki/MagnetoDB/WeeklyMeetingAgenda

Regards,
Ilya Sviridov
isviridov @ FreeNode
Meeting summary

   1. *Go through action items* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-20,
   13:04:06)
  1. ACTION: dukhlov ikhudoshyn review spec for
  https://blueprints.launchpad.net/magnetodb/+spec/monitoring-health-check
   (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-41,
  13:09:54)
  2. https://github.com/openstack/nova-specs (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-98,
  13:25:45)
  3.
  https://review.openstack.org/#/q/status:open+openstack/nova-specs,n,z
  (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-99,
  13:25:56)
  4.
  
http://docs-draft.openstack.org/41/125241/1/check/gate-nova-specs-docs/e966557/doc/build/html/specs/juno/add-all-in-list-operator-to-extra-spec-ops.html
   (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-100,
  13:26:12)
  5. ACTION: ikhudoshyn dukhlov review
  https://wiki.openstack.org/wiki/MagnetoDB/specs/rbac (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-113,
  13:29:28)
  6. ACTION: isviridov start create spec repo like
  https://github.com/openstack/nova-specs (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-115,
  13:29:59)

   2. *Support and enforce user roles defined in Keystone* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-118,
   13:30:28)
  1. https://blueprints.launchpad.net/magnetodb/+spec/support-roles (
  isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-119,
  13:30:33)

   3. *Monitoring - healthcheck http request* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-124,
   13:31:39)
  1.
  https://blueprints.launchpad.net/magnetodb/+spec/monitoring-health-check
   (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-125,
  13:31:47)

   4. *Monitoring API* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-130,
   13:32:34)
  1. https://blueprints.launchpad.net/magnetodb/+spec/monitoring-api (
  isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-131,
  13:32:46)
  2. https://wiki.openstack.org/wiki/MagnetoDB/specs/monitoring-api (
  ikhudoshyn
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-169,
  13:41:36)
  3. ACTION: ominakov describe security impact here
  https://wiki.openstack.org/wiki/MagnetoDB/specs/monitoring-api (
  isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-177,
  13:44:02)

   5. *Migrate MagnegoDB API to pecan lib* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-218,
   13:55:32)
  1. https://blueprints.launchpad.net/magnetodb/+spec/migration-to-pecan
   (isviridov
  
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-219,
  13:55:40)

   6. *Open discussion* (isviridov
   
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html#l-242,
   13:59:47)



Meeting ended at 14:00:28 UTC (full logs
http://eavesdrop.openstack.org/meetings/magnetodb/2014/magnetodb.2014-10-02-13.00.log.html
).

Action items

   1. dukhlov ikhudoshyn review spec for
   https://blueprints.launchpad.net/magnetodb/+spec/monitoring-health-check
   2. ikhudoshyn dukhlov review
   https://wiki.openstack.org/wiki/MagnetoDB/specs/rbac
   3. isviridov start create spec repo like
   https://github.com/openstack/nova-specs
   4. ominakov describe security impact here
   https://wiki.openstack.org/wiki/MagnetoDB/specs/monitoring-api



Action items, by person

   1. dukhlov
  1. dukhlov ikhudoshyn review spec for
  

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread Tiwari, Arvind
Hi Huang,

Thanks for looking into my proposal.

Yes, Alliance will utilize/retain all northbound service APIs; in 
addition it will expose APIs for inter-Alliance (inter-cloud) communication. 
Alliance will run at the topmost layer of each individual OpenStack cloud in a 
multi-site distributed cloud setup. Additionally, Alliance will provide loosely 
coupled integration among multiple clouds or cloudified data centers.

In a multi-region setup, a “regional Alliance” (RA) will orchestrate the 
resource (project, VMs, volumes, network, ...) provisioning and state 
synchronization through its peer RAs. For cross-enterprise integration 
(Enterprise/VPC/bursting-like scenarios against a multi-site public cloud), a 
“global Alliance” (GA) will be the interface for the external integration point 
and will communicate with the individual RAs.  I will update the wiki to make 
it clearer.

I would love to coordinate with your team and solve this issue together. I will 
be arriving in Paris on 1 Nov and we can sit f2f before the session. Let’s 
plan a time to meet; Monday will be easy for me.


Thanks,
Arvind



From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 5:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading


Hi, Tiwari,



Great to know you are also trying to address similar issues. For sure we are 
happy to work out a common solution for these issues.



I just went through the wiki page; the question for me is whether the Alliance 
will provide/retain the current northbound OpenStack API. It's very important 
that the cloud still exposes the OpenStack API so that the OpenStack API 
ecosystem will not be lost.



And currently OpenStack cascading has not covered the hybrid cloud (private 
cloud and public cloud federation), so your project will be a good supplement.



May we have an f2f workshop before the formal Paris design summit, so that we 
can exchange ideas completely? A 40-minute design summit session is not enough 
for a deep dive. The PoC team will stay in Paris from Oct. 29 to Nov. 8.



Best Regards



Chaoyi Huang ( joehuang )




From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: 02 October 2014 0:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
Hi Chaoyi,

Thanks for sharing these information.

Some time back I started a project called “Alliance” which is trying to address 
the same concerns (see the link below). The Alliance service is designed to provide 
Inter-Cloud Resource Federation, which will enable resource sharing across 
clouds in distributed multi-site OpenStack cloud deployments. This service will 
run on top of an OpenStack cloud and fabricate the different cloud (or data center) 
instances in a distributed cloud setup. This service will work closely with 
OpenStack components (Keystone, Nova, Cinder) to manage and provision 
different resources (tokens, VMs, images, networks, ...). The Alliance service will 
provide an abstraction to hide interoperability and integration complexities from 
the underpinning cloud instances and enable the following business use cases.

- Multi Region Capability
- Virtual Private Cloud
- Cloud Bursting

This service will provide a true plug & play model for region expansion and VPC-like 
use cases; the conceptual design can be found at 
https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation. We are working 
on a POC using this concept, which is WIP.

I will be happy to coordinate with you on this and try to come up with common 
solution, seems we both are trying to address same issues.

Thoughts?

Thanks,
Arvind

From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 6:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hello,  Alex,

Thank you very much for your mail about remote cluster hypervisor.

One of the inspiration for OpenStack cascading is from the remote clustered 
hypervisor like vCenter. The difference between the remote clustered hypervisor 
and OpenStack cascading is that not only Nova involved in the cascading, but 
also Cinder, Neutron, Ceilometer, and even Glance(optional).

Please refer to 
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Inspiration,
https://wiki.openstack.org/wiki/OpenStack_cascading_solution#Architecture for 
more detail information.

Best Regards

Chaoyi Huang ( joehuang )


From: Alex Glikson [glik...@il.ibm.com]
Sent: 01 October 2014 12:51
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
This sounds related to the discussion on the 'Nova clustered hypervisor driver' 
which started at Juno design summit [1]. 

Re: [openstack-dev] Import errors in tests

2014-10-02 Thread Lucas Alvares Gomes
On Thu, Oct 2, 2014 at 12:47 PM, Dmitry Tantsur dtant...@redhat.com wrote:
 On 10/02/2014 01:30 PM, Lucas Alvares Gomes wrote:

 Hi,

 I don't know if it's a known issue, but we have this patch in Ironic
 here https://review.openstack.org/#/c/124610/ and the gate jobs for
 python26 and python27 are failing because of some import error[1] and
 it doesn't show me what is the error exactly, it's important to say
 also that the tests run locally without any problem so I can't
 reproduce the error locally here.

 Did you try with fresh environment?

Yes, I even tried spawning a VM with a fresh OS and running the tests there :/
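
In case it helps anyone else chasing this class of failure, a rough local
sketch (just pointing it at Ironic's test tree; this is not something the
gate does):

    # import_check.py - walk the test tree and report which module fails
    # to import, since the test runner output hides the real traceback.
    import importlib
    import pkgutil
    import traceback

    import ironic.tests

    for _, name, _ in pkgutil.walk_packages(ironic.tests.__path__,
                                            prefix='ironic.tests.'):
        try:
            importlib.import_module(name)
        except Exception:
            print('FAILED to import %s' % name)
            traceback.print_exc()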



 Has anyone seen something like that?

 I have to say that our test toolchain is completely inadequate in the case of
 import errors; even locally, spotting an import error involves manually
 importing all suspicious modules, because tox just outputs garbage.
 Something has to be done about it.


 I will continue to dig into it and see if I can spot something, but I
 thought it would be nice to share it here too cause that's maybe a
 potential gate problem.

 [1]
 http://logs.openstack.org/10/124610/14/check/gate-ironic-python27/5c21433/console.html

 Cheers,
 Lucas

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Launching multiple VMs in Nova

2014-10-02 Thread Chris Friesen

On 10/02/2014 08:10 AM, Oleg Bondarev wrote:

Hi,

It turns out that there is a 1:1 relationship between
rpc_thread_pool_size messaging config [1] and the number of instances
that can be spawned simultaneously.
Please see bug [2] for more details.
I think this should be at least documented. Thoughts?

Thanks,
Oleg

[1]
https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_executors/impl_eventlet.py
[2] https://bugs.launchpad.net/neutron/+bug/1372049


Seems like the fix would be to allow the oslo.messaging thread pool to 
grow as needed.


If we don't fix it, then yes this should probably be documented somewhere.

I'm guessing there are other places in nova where we might get bit by 
the same scenario if the timing is just right.


Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [All?] Status vs State

2014-10-02 Thread Jay S. Bryant

Good questions!

Jay, I have a request for some clarification in-line.


On 10/01/2014 01:37 PM, Jay Pipes wrote:

Hi Akihiro!

IMO, this is precisely where having an API standards working group can 
help to make the user experience of our public APIs less frustrating. 
Such a working group should have the ability to vet terms like state 
vs status and ensure consistency across the public APIs.


More thoughts inline :)

On 10/01/2014 11:24 AM, Akihiro Motoki wrote:

Hi,

# The first half is related to Horizon and the latter half is about
the wording in Nova and Neutron API.

During Horizon translation for Juno, I noticed the words State and
Status in multiple contexts. Sometimes they are in very similar
contexts and sometimes they have different contexts.

I would like to know what are the difference between  Status and
State, and if the current usage is right or not, whether we can
reword them. Input from native speakers would be really appreciated.

I see three usages.

(1) Status to show operational status (e.g. 
Up/Down/Active/Error/Build/...)

(2) Status to show administrative status (e.g. Enabled/Disabled/...)
(3) State to show operational state (e.g., Up/Down/)

Note that (2) and (3) are shown in a same table (for example Compute
Host table in Hypervisor summary). Also (1) and (3) (e.g., task state
in nova) are used in a same table (for example, the instance table).

Status in (1) and (2) have different meaning to me, so at least
we need to add some contextual note (contextual marker in I18N term)
so that translators can distinguish (1) and (2).

Related to this, I check Nova and Neutron API, and
I don't see a clear usage of these words.

In the Nova API, Status and Task State/Power State in the instance list
are both used to show current operational information (state is a bit more
detailed information compared to Status). On the other hand, in the service
list Status is used to show a current administrative status
(Enabled/Disabled) and State is used to show current operational
information like Up/Down.

In Neutron API, both State (admin_state_up)  and Status are
usually used in Neutron resources (networks, ports, routers, and so
on), but it seems the meaning of State and Status are reversed
from the meaning of Nova service list above.

I am really confused what is the right usage of these words


OK, so here are the definitions of these terms in English (at least, 
the relevant definition as used in the APIs...):


state: the particular condition that someone or something is in at a 
specific time.


example: the state of the company's finances

status: the position of affairs at a particular time, especially in 
political or commercial contexts.


example: an update on the status of the bill

Note that state is listed as a synonym for status, but status is 
*not* listed as a synonym for state, which is why there is so much 
frustrating vagueness and confusing duplicity around the terms.


IMO, the term state should be the only one used in the OpenStack 
APIs to refer to the condition of some thing at a point in time. The 
term state can and should be prefaced with a refining descriptor 
such task or power to denote the *thing* that the state represents 
a condition for.


As I re-read this I think maybe you have answered my question, but I want 
to be clear.  You feel that 'State' is the only term that should be 
used.  Based on the definitions above, that would make sense.  The 
complication is that there are many places in the code where a variable like 
'status' is used.  I don't think we are going to be able to go back and 
fix all those, but it would be something that is good to watch for in 
reviews in the future.
One direct change I would make would be that Neutron's 
admin_state_up field would be instead admin_state with values of 
UP, DOWN (and maybe UNKNOWN?) instead of having the *same* GET 
/networks/{network_id} call return *both* a boolean admin_state_up 
field *and* a status field with a string value like ACTIVE. :(


Another thing that drives me crazy is the various things that 
represent enabled or disabled.


Throughout the APIs, we use, variably:

 * A field called disabled or enabled (Nova flavor-disabled API 
extension with the OS-FLV-DISABLED:disabled attribute, Ceilometer 
alarms, Keystone domains, users and projects but not groups or 
credentials)
 * enable_XXX or disable_XXX (for example, in Neutron's GET 
/subnets/{subnet_id} response, there is an enable_dhcp field. In 
Heat's GET /stacks/{stack_id} response, there is a disable_rollback 
field. We should be consistent in using either the word enable or the 
word disable (not both terms) and the tense of the verb should at the 
very least be consistent (disabled vs. disable))
 * status:disabled (Nova os-services API extension. The service 
records have a status field with disabled or enabled string in it. 
Gotta love it.)


Yet another thing to tack on the list of stuff that really should be 
cleaned up with an API working group.



[openstack-dev] [sahara] team meeting Oct 2 1800 UTC

2014-10-02 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20141002T18

P.S. I'd like to start discussing design summit sessions.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit session brainstorming

2014-10-02 Thread Sergey Lukjanov
Reminder.

Folks, we have a month before summit to finalise list of sessions and
prepare to them. Please, propose things you're interested in.

On Tue, Sep 9, 2014 at 3:29 AM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Hi sahara folks,

 I'd like to start brainstorming ideas for the upcoming summit design
 sessions earlier than previous times to have more time to discuss
 topics and prioritize / filter / prepare them.

 Here is an etherpad to start the brainstorming:

 https://etherpad.openstack.org/p/kilo-sahara-summit-topics

 If you have ideas for summit sessions, please, add them to the
 etherpad and we'll select the most important topics later before the
 summit.

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Welcome three new members to project-config-core

2014-10-02 Thread Sergey Lukjanov
Congrats and welcome!

On Tue, Sep 30, 2014 at 5:25 PM, Kurt Taylor kurt.r.tay...@gmail.com wrote:
 Congratulations everyone, well deserved!

 Kurt Taylor (krtaylor)

 On Tue, Sep 30, 2014 at 9:54 AM, Jeremy Stanley fu...@yuggoth.org wrote:
 With unanimous consent[1][2][3] of the OpenStack Project
 Infrastructure core team (infra-core), I'm pleased to welcome
 Andreas Jaeger, Anita Kuno and Sean Dague as members of the
 newly-formed project-config-core team. Their assistance has been
 invaluable in reviewing changes to our project-specific
 configuration data, and I predict their addition to the core team
 for the newly split-out openstack-infra/project-config repository
 represents an immense benefit to everyone in OpenStack.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [All?] Status vs State

2014-10-02 Thread John Plocher
[Lurker popping up to get whacked :-)]


both State (admin_state_up) and Status are usually used in Neutron
resources...
but it seems the meanings of State and Status are reversed...

 I am really confused what is the right usage of these words

state: the particular condition that someone or something is in at a
 specific time.

 example: the state of the company's finances

 status: the position of affairs at a particular time, especially in
 political or commercial contexts.

 example: an update on the status of the bill


If it helps, "state" many times is conceptually an attribute closely
attached to, or part of an item, while "status" tends to be an attribute
applied by others to the item.  State feels more like an absolute, while
status feels more contextual, fluid or consensus based.

I tend to use the terms thus:

A state is what a resource considers itself to be:
"Change the state of this resource to disabled"

A status is what others conclude about a resource:
"The resource's status is not responding to requests"

For this discussion, the desired concept seems to be more the precise
"comp sci state machine" one.  The context where the term is used implies
(at least to me) an assumption of absoluteness or active control and not
simply a passive interpretation of behavior.

To me this says "state" is the right choice :-)

  -John


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [All] API standards working group

2014-10-02 Thread Anne Gentle
Hi all,

Definitely attend this Friday's bootstrapping session if you're interested
in this thread. Don't let the Docs in the title throw you off. :)

https://wiki.openstack.org/wiki/BootstrappingHour/Diving_Into_Docs

Schedule: Friday Oct 3rd - 19:00 UTC (15:00 Americas/New_York)
Host(s): Sean Dague, Jay Pipes, Dan Smith
Experts(s): Anne Gentle
Youtube Stream: http://www.youtube.com/watch?v=n2I3PFuoNj4
Etherpad: https://etherpad.openstack.org/p/obh-diving-into-docs

Thanks,
Anne

On Wed, Sep 24, 2014 at 11:48 AM, Everett Toews everett.to...@rackspace.com
 wrote:

 On Sep 24, 2014, at 9:42 AM, Dean Troyer dtro...@gmail.com wrote:

  I'll bring an API consumer's perspective.

 +1

 I’d bring an API consumer’s perspective as well.

 Looks like there’s lots of support for an API WG. What’s the next step?

 Form a WG under the User Committee [1] or is there something more
 appropriate?

 Thanks,
 Everett

 [1]
 https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Launching multiple VMs in Nova

2014-10-02 Thread Jay Pipes

On 10/02/2014 11:23 AM, Chris Friesen wrote:

On 10/02/2014 08:10 AM, Oleg Bondarev wrote:

Hi,

It turns out that there is a 1:1 relationship between
rpc_thread_pool_size messaging config [1] and the number of instances
that can be spawned simultaneously.
Please see bug [2] for more details.
I think this should be at least documented. Thoughts?

Thanks,
Oleg

[1]
https://github.com/openstack/oslo.messaging/blob/master/oslo/messaging/_executors/impl_eventlet.py

[2] https://bugs.launchpad.net/neutron/+bug/1372049


Seems like the fix would be to allow the oslo.messaging thread pool to
grow as needed.

If we don't fix it, then yes this should probably be documented somewhere.

I'm guessing there are other places in nova where we might get bit by
the same scenario if the timing is just right.


Note that this is *per compute node*. So, yes, the rpc_thread_pool_size 
directly limits the number of instances that can be spawned 
simultaneously on a compute node, but it's important to point out that 
this isn't across all of your Nova deployment, but just per compute 
node. If you have 10 compute nodes, you could theoretically spawn ~640 
instances simultaneously given the default configuration settings. 
However, at that point, you will likely run into other sources of 
contention in the nova-scheduler and nova-conductor communication. :)
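As a back-of-the-envelope sketch of that arithmetic (assuming the oslo.messaging 
default of 64 for rpc_thread_pool_size, which can be overridden in each compute 
node's nova.conf):

# The pool is per nova-compute process, so the deployment-wide ceiling
# scales with the number of compute nodes.
rpc_thread_pool_size = 64   # assumed default; tune in nova.conf [DEFAULT]
compute_nodes = 10
print(rpc_thread_pool_size * compute_nodes)   # ~640 simultaneous spawns overall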


So, bottom line, Oleg, yes, it should be documented. And just make sure 
the documentation is clear that it refers to the number of instances 
spawned at once on each compute node.


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [All?] Status vs State

2014-10-02 Thread Jay Pipes

On 10/02/2014 11:23 AM, Jay S. Bryant wrote:

As I re-read this I think maybe you have answered my question but want
to be clear.  You feel that 'State' is the only term that should be
used.  Based on the definitions above, that would make sense.  The
complication is there are many places in the code where a variable like
'status' is used.  Don't think we are going to be able to go back and
fix all those, but it would be something that is good to watch for in
reviews in the future.


I'm talking about new versions of the public APIs that get proposed, not 
existing ones. Within the codebase, it would be great to use a single 
term state for this, prefixed with a refining descriptor like 
power_, task_, etc. But that's a lesser concern for me than our next 
generation REST APIs.


Best,
-anotherjay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-10-02 Thread Jay Pipes

On 10/02/2014 09:54 AM, Brant Knudson wrote:

When discussing the document schema I think we're planning to use
JSONSchema... In Keystone, we've got J-S implemented on some parts (I
don't think it covers all resources yet). I also don't think our
JSONSchema is discoverable yet (i.e., you can't download the schema from
the server). I haven't heard of other projects implementing this yet,
but maybe someone has.


Glance v2 API has had entirely discoverable API via JSONSchema for years 
now:


http://developer.openstack.org/api-ref-image-v2.html#image-schemas-v2

The oscomputevnext API I proposed also has fully discoverable resources 
(JSONSchema+JSON-HAL documents):


http://docs.oscomputevnext.apiary.io/#schema

and paths (JSONHome):

http://docs.oscomputevnext.apiary.io/#openstackcomputeapiroot

So, it's definitely doable and definitely doable without the mess of API 
extensions that litters Nova, Neutron (and Keystone...)
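(As a purely illustrative aside, fetching the discoverable Glance image schema is a 
one-liner; the endpoint URL and token below are placeholders, and only the 
/v2/schemas/image path comes from the Glance v2 API.)

import requests

GLANCE_URL = "http://glance.example.com:9292"   # placeholder endpoint
TOKEN = "<keystone token>"                      # placeholder token

resp = requests.get(GLANCE_URL + "/v2/schemas/image",
                    headers={"X-Auth-Token": TOKEN})
schema = resp.json()
# The returned JSONSchema document describes every image attribute the server exposes.
print(schema.get("name"), sorted(schema.get("properties", {}).keys()))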


Best,
-jay


There probably is some way to integrate JSON Home with JSONSchema. Maybe
you can put a reference to the JSONSchema in the hints for the resource.

On the current Nova v2 API, we need to add a dummy extension when adding new
attributes to an existing request/response body in order to advertise the
change to clients. I'd be glad if we could use a more standard way of
doing it.

Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
mailto:OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-02 Thread James Polley




 On 3 Oct 2014, at 00:25, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from James Polley's message of 2014-10-01 22:37:25 -0700:
 All three of the options presented here seem to assume that UIDs will always 
 be allocated at image-build time. I think that's because most of these UIDs 
 will be used to write files into the chroot at image-create time - if I 
 could think of some way around that, I think we could avoid this problem 
 more neatly by not assigning the UIDs until first boot
 
 Yeah I don't think we're going to work around that. It is part of the
 magic of images that the metadata is all in place and there's no churn
 at boot.

Agree - it would be quite a significant change in how TripleO works, not just a 
workaround.

 
 But since we can't do that, would it be possible to compromise by having the 
 UIDs read in from heat metadata, and using the current allocation process if 
 none is provided?
 
 I really, really dislike this. Post-boot tools like Heat are for
 per-server customization and site-wide changes. UIDs seem like plumbing
 under the hood.

I think that the part of this you dislike is specifically storing the data in 
heat?

Would you object less if I phrased it as a job file to be read at image build 
time, which is closer to what I had in mind?

 
 This should allow people who prefer to have static UIDs to have simple 
 drop-in config, but also allow people who want to dynamically read from 
 existing images to scrape the details and then drop them in.
 
 I see your point, and I'm now confused as I don't really understand what
 would make somebody prefer dynamic UID allocation.

I was thinking of a case where an operator might have several existing images 
with different sets of services, or different base distributions, and hence 
different sets of uids; they'd probably prefer to have the build process 
extract the details from the previous image rather than having a single fixed 
map of uids.

Someone starting fresh might prefer to provide a static map of pre-assigned UIDs


 To aid people who have existing images, perhaps we could provide a small 
 tool (if one doesn't already exist) that simply reads /etc/passwd and 
 returns a JSON username:uid map, to be added into the heat local environment 
 when building the next image?
 
 Or a tool that reads the image, and returns /etc/passwd and /etc/group.

Sure, but I think it would be handy if it could accept data from another source 
as well as the previous image, to cater for people who want to be more 
prescriptive about which UIDs are used but don't have an existing image 
yet.

I don't know if this is a real use case though - maybe I'm just remembering bad 
experiences from a previous pre-cloud life.
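For what it's worth, the /etc/passwd-to-JSON half of such a tool is tiny. A minimal 
sketch (it reads either the host's passwd file or one extracted from an image, and 
prints a username:uid map):

import json
import sys

def passwd_to_uid_map(path="/etc/passwd"):
    # passwd format: name:password:UID:GID:GECOS:home:shell
    uid_map = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split(":")
            uid_map[fields[0]] = int(fields[2])
    return uid_map

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "/etc/passwd"
    print(json.dumps(passwd_to_uid_map(path), indent=2, sort_keys=True))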

 
 Thanks very much for your thoughts. :)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Cinder] Request for voting permission with Pure Storage CI account

2014-10-02 Thread Duncan Thomas
Looks good to me (cinder core)

On 29 September 2014 19:59, Patrick East patrick.e...@purestorage.com wrote:
 Hi All,

 I am writing to request voting permissions as per the instructions for third
 party CI systems[1]. The account email is cinder...@purestorage.com.

 The system has been operational and stable for a little while now
 building/commenting on openstack/cinder gerrit. You can view its comment
 history on reviews here:
 https://review.openstack.org/#/q/cinder.ci%2540purestorage.com,n,z

 Please take a look and let me know if there are any issues. I will be the
 primary point of contact, but the alias openstack-...@purestorage.com is the
 best way for a quick response from our team. For immediate issues I can be
 reached in IRC as patrickeast

 I look forward to your feedback!

 [1]
 http://ci.openstack.org/third_party.html#permissions-on-your-third-party-system

 -Patrick

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a need to assert user ownership in preserved state

2014-10-02 Thread James Polley




 On 3 Oct 2014, at 02:57, James Polley j...@jamezpolley.com wrote:
 
 
 
 
 
 On 3 Oct 2014, at 00:25, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from James Polley's message of 2014-10-01 22:37:25 -0700:
 All three of the options presented here seem to assume that UIDs will 
 always be allocated at image-build time. I think that's because most of 
 these UIDs will be used to write files into the chroot at image-create time 
 - if I could think of some way around that, I think we could avoid this 
 problem more neatly by not assigning the UIDs until first boot
 
 Yeah I don't think we're going to work around that. It is part of the
 magic of images that the metadata is all in place and there's no churn
 at boot.
 
 Agree - it would be quite a significant change in how TripleO works, not just 
 a workaround.
 
 
 But since we can't do that, would it be possible to compromise by having 
 the UIDs read in from heat metadata, and using the current allocation 
 process if none is provided?
 
 I really, really dislike this. Post-boot tools like Heat are for
 per-server customization and site-wide changes. UIDs seem like plumbing
 under the hood.
 
 I think that the part of this you dislike is specifically storing the data in 
 heat?
 
 Would you object less if I phrased it as a job file to be read at image 
 build time, which is closer to what I had in mind?
 
 
 This should allow people who prefer to have static UIDs to have simple 
 drop-in config, but also allow people who want to dynamically read from 
 existing images to scrape the details and then drop them in.
 
 I see your point, and I'm now confused as I don't really understand what
 would make somebody prefer dynamic UID allocation.
 
 I was thinking of a case where an operator might have several existing images 
 with different sets of services, or different base distributions, and hence 
 different sets of uids; they'd probably prefer to have the build process 
 extract the details from the previous image rather than having a single fixed 
 map of uids.

 Someone starting fresh might prefer to provide a static map of pre-assigned 
 UIDs

To be clear - I don't think either of these is novel - these are cases 1 and  2 
from the mail that started the thread.

The point I'm ineptly trying to make (why am I sending email at 3am?) is that I 
think we can easily support both 1 and 2 simply by treating "read the list of 
UIDs from an existing image" and "apply an existing list of UIDs to a new image" 
as separate tasks and implementing both separately.

 
 
 To aid people who have existing images, perhaps we could provide a small 
 tool (if one doesn't already exist) that simply reads /etc/passwd and 
 returns a JSON username:uid map, to be added into the heat local 
 environment when building the next image?
 
 Or a tool that reads the image, and returns /etc/passwd and /etc/group.
 
 Sure, but I think it would be handy if it could accept data from another 
 source as well as the previous image, to cater for people who want to be more 
 prescriptive about which UIDs are used but don't have an existing image 
 yet.
 
 I don't know if this is a real use case though - maybe I'm just remembering 
 bad experiences from a previous pre-cloud life.
 
 
 Thanks very much for your thoughts. :)
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Cinder] Request for voting permission with Pure Storage CI account

2014-10-02 Thread Asselin, Ramy
Looks good to me using this query: http://paste.openstack.org/show/117844/ 
Generated via: https://review.openstack.org/#/c/125716/

Ramy


-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Thursday, October 02, 2014 10:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Infra][Cinder] Request for voting permission with 
Pure Storage CI account

Looks good to me (cinder core)

On 29 September 2014 19:59, Patrick East patrick.e...@purestorage.com wrote:
 Hi All,

 I am writing to request voting permissions as per the instructions for 
 third party CI systems[1]. The account email is cinder...@purestorage.com.

 The system has been operational and stable for a little while now 
 building/commenting on openstack/cinder gerrit. You can view its 
 comment history on reviews here:
 https://review.openstack.org/#/q/cinder.ci%2540purestorage.com,n,z

 Please take a look and let me know if there are any issues. I will be 
 the primary point of contact, but the alias 
 openstack-...@purestorage.com is the best way for a quick response 
 from our team. For immediate issues I can be reached in IRC as patrickeast

 I look forward to your feedback!

 [1]
 http://ci.openstack.org/third_party.html#permissions-on-your-third-par
 ty-system

 -Patrick

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Sideways grenade job to test Nova Network to Neutron migration

2014-10-02 Thread Kyle Mestery
Thanks for setting this up Clark, I'll look into this close once I'm
back from traveling this week.

Michael, from a parity perspective, we have closed the gaps and there
is nothing that should prevent this functionally from working. That
said, it's likely there could be gaps as part of this type of upgrade,
so I'm happy to help look at this and work with you to close the gaps
and get this going.

Thanks,
Kyle

On Wed, Oct 1, 2014 at 5:50 PM, Michael Still mi...@stillhq.com wrote:
 Thanks for doing this. My recollection is that we still need some features
 landed in Neutron before this work can complete, but its possible I am
 confused.

 A public status update on that from the Neutron team would be good.

 Michael

 On Thu, Oct 2, 2014 at 6:18 AM, Clark Boylan cboy...@sapwetik.org wrote:

 Hello,

 One of the requirements placed on Ironic was that they must have a path
 from Nova Baremetal to Ironic and that path should be tested. This
 resulted in a sideways grenade job which instead of going from one
 release of OpenStack to another, swaps out components within a release.
 In this case the current Juno release.

 When throwing this together for Ironic I went ahead and put a skeleton
 job, check-grenade-dsvm-neutron-sideways, in place for testing a Nova
 Network to Neutron sideways upgrade. This job is in the experimental
 queues for Neutron, grenade, and devstack-gate at the moment and does
 not pass. While it may be too late to focus on this for Juno it would be
 great if Neutron and Nova could make this test pass early in the Kilo
 cycle as a clear Nova Network to Neutron process is often asked for.

 Random choice of current job result can be found at

 http://logs.openstack.org/29/123629/1/experimental/check-grenade-dsvm-neutron-sideways/fb45df6/

 The way this job works is it sets up an old and new pair of master
 based cloud configs. The old side is configured to use Nova Network
 and the new side is configured to use Neutron. Grenade then fires up
 the old cloud, adds some things to it, runs some tests, shuts it down,
 upgrades, then checks that things still work in the new cloud. My
 best guess is that most of the work here will need to be done in the
 upgrade section where we teach Grenade (and consequently everyone
 else) how to make this transition.
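To illustrate the flow described above, here is a toy sketch; every name in it is 
invented for illustration and nothing corresponds to actual grenade or devstack-gate 
functions:

# Toy model of the sideways flow; all names are invented for illustration.
def deploy_cloud(networking):
    return {"networking": networking, "resources": []}

def seed_resources(cloud):
    cloud["resources"].append("server-1")

def run_smoke_tests(cloud):
    assert cloud["resources"], "nothing survived the upgrade"

def upgrade_to(cloud, networking):
    # The step grenade still needs to learn: carry resources across the swap.
    return {"networking": networking, "resources": list(cloud["resources"])}

old = deploy_cloud(networking="nova-network")
seed_resources(old)
run_smoke_tests(old)
new = upgrade_to(old, networking="neutron")
run_smoke_tests(new)   # resources created on the old side must still work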

 Thanks,
 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] Comments on the concerns arose during the TC meeting

2014-10-02 Thread Soren Hansen
I'm sorry about my slow responses. For some reason, gmail didn't think
this was an important e-mail :(

2014-09-30 18:41 GMT+02:00 Jay Pipes jaypi...@gmail.com:
 On 09/30/2014 08:03 AM, Soren Hansen wrote:
 2014-09-12 1:05 GMT+02:00 Jay Pipes jaypi...@gmail.com:
 How would I go about getting the associated fixed IPs for a network?
 The query to get associated fixed IPs for a network [1] in Nova looks
 like this:

 SELECT
  fip.address,
  fip.instance_uuid,
[...]
 AND fip.instance_uuid IS NOT NULL
 AND i.host = :host

 would I have a Riak container for virtual_interfaces that would also
 have instance information, network information, fixed_ip information?
 How would I accomplish the query against a derived table that gets the
 minimum virtual interface ID for each instance UUID?

What's a minimum virtual interface ID?

Anyway, I think Clint answered this quite well.

 I've said it before, and I'll say it again. In Nova at least, the
 SQL schema is complex because the problem domain is complex. That
 means lots of relations, lots of JOINs, and that means the best way
 to query for that data is via an RDBMS.
[...]
 I don't think relying on a central data store is in any conceivable
 way appropriate for a project like OpenStack. Least of all Nova.

 I don't see how we can build a highly available, distributed service
 on top of a centralized data store like MySQL.
[...]
 I don't disagree with anything you say above. At all.

Really? How can you agree that we can't build a highly available,
distributed service on top of a centralized data store like MySQL while
also saying that the best way to handle data in Nova is in an RDBMS?

 For complex control plane software like Nova, though, an RDBMS is
 the best tool for the job given the current lay of the land in open
 source data storage solutions matched with Nova's complex query and
 transactional requirements.
 What transactional requirements?
 https://github.com/openstack/nova/blob/stable/icehouse/nova/db/sqlalchemy/api.py#L1654
 When you delete an instance, you don't want the delete to just stop
 half-way through the transaction and leave around a bunch of orphaned
 children.  Similarly, when you reserve something, it helps to not have
 a half-finished state change that you need to go clean up if something
 goes boom.

Looking at that particular example, it's about deleting an instance and
all its associated metadata. As we established earlier, these are things
that would just be in the same key as the instance itself, so it'd just
be a single key that would get deleted. Easy.

That said, there will certainly be situations where there'll be a need
for some sort of anti-entropy mechanism. It just so happens that those
situations already exist. We're dealing with about a complex distributed
system.  We're kidding ourselves if we think that any kind of
consistency is guaranteed, just because our data store favours
consistency over availability.

 https://github.com/openstack/nova/blob/stable/icehouse/nova/db/sqlalchemy/api.py#L3054

Sure, quotas will require stronger consistency. Any NoSQL data store
worth its salt gives you primitives to implement that.
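A very rough sketch of one such primitive, a compare-and-set reservation loop, 
against a hypothetical key-value client (kv.get and kv.put_if_version are stand-ins, 
not any real library's API):

class QuotaExceeded(Exception):
    pass

def reserve(kv, key, amount, limit, retries=10):
    for _ in range(retries):
        used, version = kv.get(key)          # current usage plus a version token
        if used + amount > limit:
            raise QuotaExceeded(key)
        # Succeeds only if nobody has updated the key since we read it.
        if kv.put_if_version(key, used + amount, version):
            return used + amount
    raise RuntimeError("too much contention while reserving %s" % key)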

 Folks in these other programs have actually, you know, thought about
 these kinds of things and had serious discussions about
 alternatives.  It would be nice to have someone acknowledge that
 instead of snarky comments implying everyone else has it wrong.
 I'm terribly sorry, but repeating over and over that an RDBMS is the
 best tool without further qualification than Nova's data model is
 really complex reads *exactly* like a snarky comment implying
 everyone else has it wrong.
 Sorry if I sound snarky. I thought your blog post was the definition
 of snark.

I don't see the relevance of the tone of my blog post?

You say it would be nice if people did something other than offer snarky
comments implying everyone else has it wrong.  I'm just pointing out
that such requests ring really hollow when put forth in the very e-mail
where you snarkily tell everyone else that they have it wrong.

Since you did bring up my blog post, I really am astounded you find it
snarky.  It was intended to be constructive and forward looking. The
first one in the series, perhaps, but certainly not the one linked in
this thread.

Perhaps I need to take writing classes.

-- 
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer  | http://www.openstack.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] team meeting Oct 2 1800 UTC

2014-10-02 Thread Sergey Lukjanov
Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-10-02-18.02.html
Log: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-10-02-18.02.log.html

On Thu, Oct 2, 2014 at 8:26 AM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Hi folks,

 We'll be having the Sahara team meeting as usual in
 #openstack-meeting-alt channel.

 Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

 http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20141002T18

 P.S. I'd like to start discussing design summit sessions.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] evacuating boot-from-volume instances with local storage is broken?

2014-10-02 Thread Chris Friesen


Hi,

I'm interested in running nova evacuate on an instance that has local 
storage but was booted from a cinder volume.  OpenStack allows 
live-migration of this sort of instance, so I'm assuming that we would 
want to allow evacuation as well...


I'm getting ready to test it, but I see that there was a nova bug opened 
against this case back in March 
(https://bugs.launchpad.net/nova/+bug/1299368).  It's been confirmed but 
hasn't even had an importance assigned yet.


It seems a bit unfortunate that this bug would go six months with no 
attention at all...


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] OpenStack Heat template and CoreOS...

2014-10-02 Thread Steve Chien
  
  Currently, within our Icehouse Openstack environment, we are trying to see if 
we can 1) setup a cluster of CoreOS VMs by using the Heat template 2) invoking 
the fleetctl command to deploy the Docker containers to the newly setup CoreOS 
cluster by placing some instructions in the Heat template.

  Achieving the goal 1) is not tough and we can see some samples over the net 
too. For example, we can use the following template to setup a CoreOS-based VM 
successfully.

heat_template_version: 2013-05-23
description:
  A simple Heat template to deploy CoreOS into an existing cluster.
parameters:
  network_id:
    type: string
    label: Network ID
    description: ID of existing Neutron network to use
    default: 632e1048-0164-41bd-9332-01c664eb475f
  image_id:
    type: string
    label: Glance Image ID
    description: ID of existing Glance image to use
    default: dfdd6317-5156-4e7d-96a1-f7ce76a43687
resources:
  instance0_port0:
    type: OS::Neutron::Port
    properties:
      admin_state_up: true
      network_id: { get_param: network_id }
      security_groups:
        - 435c19ea-64d0-47f9-97e6-bc04b98361eb
  instance0:
    type: OS::Nova::Server
    properties:
      name: coreos-test
      image: { get_param: image_id }
      flavor: m1.small
      networks:
        - port: { get_resource: instance0_port0 }
      key_name: mykey
      user_data_format: RAW
      user_data: |
        #cloud-config
        coreos:
          etcd:
            discovery: https://discovery.etcd.io/249d48e8dff562bdd8381177020ee405
            addr: $private_ipv4:4001
            peer-addr: $private_ipv4:7001
          units:
            - name: etcd.service
              command: start
            - name: fleet.service
              command: start

  Initially, we tried to achieve goal 2) by testing if we can send mime multi 
part user_data (the second part of the user_data will be a shell script that 
uses fleetctl command to deploy containers; if there is any synchronization / 
wait condition needs to be done, we can handle it there somehow too) to CoreOS 
cloud-init service. However, it seems that CoreOS (at least Stable 
410.1.0) cloud-init does not support MIME multi-part yet.
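For reference, multi-part MIME user_data of that shape can be assembled with the 
Python standard library along these lines (the unit and the fleetctl command below 
are placeholders, and whether CoreOS's cloud-init accepts the result is exactly the 
open question here):

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

cloud_config = """#cloud-config
coreos:
  units:
    - name: fleet.service
      command: start
"""

deploy_script = """#!/bin/bash
# placeholder: deploy containers once the cluster is up
fleetctl start myapp@1.service
"""

msg = MIMEMultipart()
part = MIMEText(cloud_config, "cloud-config")
part.add_header("Content-Disposition", "attachment", filename="cloud-config.yaml")
msg.attach(part)
part = MIMEText(deploy_script, "x-shellscript")
part.add_header("Content-Disposition", "attachment", filename="deploy.sh")
msg.attach(part)

user_data = msg.as_string()   # this string is what would go into the template's user_data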

  Any other good way to achieve both goals 1)  2)?

  Thanks!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help with EC2 Driver functionality using boto ...

2014-10-02 Thread Aparna S Parikh
Unfortunately, there is no error in the nova or glance logs. It seems very
likely that we are missing something needed to make the status transition
happen, but we don't know what that could be.

On Wed, Oct 1, 2014 at 8:20 PM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

 It is hard to tell if this is a bug or a misconfiguration from your
 desctiption. The failure likely generated some kind of error message in
 nova or glance. If you can track down an error message and a tracback it
 would be worth submitting as a bug report to the appropriate project.

 Vish

 On Oct 1, 2014, at 11:13 AM, Aparna S Parikh apa...@thoughtworks.com
 wrote:

 Hi,

 We are currently working on writing a driver for Amazon's EC2 using the
 boto libraries, and are hung up on creating a snapshot of an instance. The
 instance remains in 'Queued' status on Openstack instead of becoming
  'Active'. The actual EC2 snapshot that gets created is in 'available'
 status.

 We are essentially calling create_image() from the boto/ec2/instance.py
 when snapshot of an instance is being called.

 Any help in figuring this out would be greatly appreciated.

 Thanks,

 Aparna
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [All?] Status vs State

2014-10-02 Thread Jay S. Bryant


On 10/02/2014 11:14 AM, Jay Pipes wrote:

On 10/02/2014 11:23 AM, Jay S. Bryant wrote:

As I re-read this I think maybe you have answered my question but want
to be clear.  You feel that 'State' is the only term that should be
used.  Based on the definitions above, that would make sense. The
complication is there are many places in the code where a variable like
'status' is used.  Don't think we are going to be able to go back and
fix all those, but it would be something that is good to watch for in
reviews in the future.


I'm talking about new versions of the public APIs that get proposed, 
not existing ones. Within the codebase, it would be great to use a 
single term state for this, prefixed with a refining descriptor like 
power_, task_, etc. But that's a lesser concern for me than our 
next generation REST APIs.


Best,
-anotherjay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks for the clarification.  I think the proposal is a good idea.

-theotherjay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [All?] Status vs State

2014-10-02 Thread Chris Friesen

On 10/01/2014 12:37 PM, Jay Pipes wrote:


IMO, the term state should be the only one used in the OpenStack APIs
to refer to the condition of some thing at a point in time. The term
state can and should be prefaced with a refining descriptor such
task or power to denote the *thing* that the state represents a
condition for.

One direct change I would make would be that Neutron's admin_state_up
field would be instead admin_state with values of UP, DOWN (and maybe
UNKNOWN?) instead of having the *same* GET /networks/{network_id} call
return *both* a boolean admin_state_up field *and* a status field
with a string value like ACTIVE. :(


Hi Jay,

I wonder if this would tie into our other discussion about 
distinguishing between the desired state vs the actual state. 
Conceivably you could have the admin state be UP, but a fault has 
resulted in an actual state other than ACTIVE.


As a reference point, CCITT X.731 goes into huge detail about state and 
status.  They define three orthogonal types of state (operational, 
usage, and administrative), and many types of status (availability, 
alarm, control, etc.)  I'm not suggesting that OpenStack should use 
those exact terms, but it suggests that some people have found it useful 
to have state along multiple axes rather than trying to stuff everything 
into a single variable.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [All?] Status vs State

2014-10-02 Thread Jay Pipes

On 10/02/2014 03:14 PM, Chris Friesen wrote:

On 10/01/2014 12:37 PM, Jay Pipes wrote:


IMO, the term state should be the only one used in the OpenStack APIs
to refer to the condition of some thing at a point in time. The term
state can and should be prefaced with a refining descriptor such
task or power to denote the *thing* that the state represents a
condition for.

One direct change I would make would be that Neutron's admin_state_up
field would be instead admin_state with values of UP, DOWN (and maybe
UNKNOWN?) instead of having the *same* GET /networks/{network_id} call
return *both* a boolean admin_state_up field *and* a status field
with a string value like ACTIVE. :(


Hi Jay,

I wonder if this would tie into our other discussion about
distinguishing between the desired state vs the actual state.
Conceivably you could have the admin state be UP, but a fault has
resulted in an actual state other than ACTIVE.


My comment above was about the inconsistency of how things are named and 
the data types representing them. There is a status field of type 
string, and an admin_state_up field of type boolean, both in the same 
response. Why wasn't it called admin_state and made a string field to 
follow the convention of the status field? I'm guessing it probably has 
to do with the telecom IT recommendations you cite below...



As a reference point, CCITT X.731 goes into huge detail about state and
status.  They define three orthogonal types of state (operational,
usage, and administrative), and many types of status (availability,
alarm, control, etc.)  I'm not suggesting that OpenStack should use
those exact terms


The very last thing I believe OpenStack should use as a reference is 
anything the telecommunications IT industry has put together as a 
recommendation.


If we do use telecom IT as a guide, we'll be in a worse state (pun 
intended), ease-of-use and user-friendliness-wise, than we already are, 
and literally every API will just be a collection of random three and 
four letter acronyms with nobody other than a veteran network engineer 
understanding how anything works.


In other words, all our APIs would look like the Neutron API as it 
exists today.


, but it suggests that some people have found it useful

to have state along multiple axes rather than trying to stuff everything
into a single variable.


I'm not opposed to using multiple fields to indicate state; I thought I 
was pretty clear about that in my initial response?


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] evacuating boot-from-volume instances with local storage is broken?

2014-10-02 Thread Fei Long Wang
Hi Chris,

I have submitted a patch for nova evacuate against Ceph(RBD) backend,
see https://review.openstack.org/#/c/121745/. But I'm not really sure if
it can fix your issue.  So could you please post your error log of Nova?
Cheers.


On 03/10/14 07:29, Chris Friesen wrote:

 Hi,

 I'm interested in running nova evacuate on an instance that has
 local storage but was booted from a cinder volume.  OpenStack allows
 live-migration of this sort of instance, so I'm assuming that we would
 want to allow evacuation as well...

 I'm getting ready to test it, but I see that there was a nova bug
 opened against this case back in March
 (https://bugs.launchpad.net/nova/+bug/1299368).  It's been confirmed
 but hasn't even had an importance assigned yet.

 It seems a bit unfortunate that this bug would go six months with no
 attention at all...

 Chris

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers  Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] evacuating boot-from-volume instances with local storage is broken?

2014-10-02 Thread Jay Pipes

On 10/02/2014 02:29 PM, Chris Friesen wrote:


Hi,

I'm interested in running nova evacuate on an instance that has local
storage but was booted from a cinder volume.  OpenStack allows
live-migration of this sort of instance, so I'm assuming that we would
want to allow evacuation as well...

I'm getting ready to test it, but I see that there was a nova bug opened
against this case back in March
(https://bugs.launchpad.net/nova/+bug/1299368).  It's been confirmed but
hasn't even had an importance assigned yet.


Anyone can sign up for the https://launchpad.net/~nova team on Launchpad 
and set an importance for any Nova bug.



It seems a bit unfortunate that this bug would go six months with no
attention at all...


Squeaky wheel gets the grease, I guess.

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [All?] Status vs State

2014-10-02 Thread Chris Friesen

On 10/02/2014 01:47 PM, Jay Pipes wrote:

On 10/02/2014 03:14 PM, Chris Friesen wrote:

On 10/01/2014 12:37 PM, Jay Pipes wrote:


IMO, the term state should be the only one used in the OpenStack APIs
to refer to the condition of some thing at a point in time. The term
state can and should be prefaced with a refining descriptor such
task or power to denote the *thing* that the state represents a
condition for.

One direct change I would make would be that Neutron's admin_state_up
field would be instead admin_state with values of UP, DOWN (and maybe
UNKNOWN?) instead of having the *same* GET /networks/{network_id} call
return *both* a boolean admin_state_up field *and* a status field
with a string value like ACTIVE. :(


Hi Jay,

I wonder if this would tie into our other discussion about
distinguishing between the desired state vs the actual state.
Conceivably you could have the admin state be UP, but a fault has
resulted in an actual state other than ACTIVE.


My comment above was about the inconsistency of how things are named and
the data types representing them. There is a status field of type
string, and an admin_state_up field of type boolean, both in the same
response. Why wasn't it called admin_state and made a string field to
follow the convention of the status field? I'm guessing it probably has
to do with the telecom IT recommendations you cite below...


Sorry, I misread your statement to mean that there should be only a 
single state field rather than a comment on the type of the variable.


The telecom administrative state values are locked, unlocked, and 
shutting down, so it seems unlikely that they would be the impetus for 
the Neutron values.



If we do use telecom IT as a guide, we'll be in a worse state (pun
intended), ease-of-use and user-friendliness-wise, than we already are,
and literally every API will just be a collection of random three and
four letter acronyms with nobody other than a veteran network engineer
understanding how anything works.

In other words, all our APIs would look like the Neutron API as it
exists today.


:)

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] why do we have os-attach-interfaces in the v3 API?

2014-10-02 Thread Matt Riedemann
The os-interface (v2) and os-attach-interfaces (v3) APIs are only used 
for the neutron network API, you'll get a NotImplemented if trying to 
call the related methods with nova-network [1].


Since we aren't proxying to neutron in the v3 API (v2.1), why does 
os-attach-interfaces [2] exist?  Was this just an oversight?  If so, 
please allow me to delete it. :)


[1] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/network/api.py?id=2014.2.rc1#n310
[2] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/plugins/v3/attach_interfaces.py?id=2014.2.rc1


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] evacuating boot-from-volume instances with local storage is broken?

2014-10-02 Thread Chris Friesen

On 10/02/2014 02:24 PM, Jay Pipes wrote:

On 10/02/2014 02:29 PM, Chris Friesen wrote:


Hi,

I'm interested in running nova evacuate on an instance that has local
storage but was booted from a cinder volume.  OpenStack allows
live-migration of this sort of instance, so I'm assuming that we would
want to allow evacuation as well...

I'm getting ready to test it, but I see that there was a nova bug opened
against this case back in March
(https://bugs.launchpad.net/nova/+bug/1299368).  It's been confirmed but
hasn't even had an importance assigned yet.


Anyone can sign up for the https://launchpad.net/~nova team on Launchpad
and set an importance for any Nova bug.


I don't think that's correct.  I'm a member of the nova team but I'm not 
allowed to change the importance of bugs.


The mouseover message for the Importance field says that it is 
changeable only by a project maintainer or bug supervisor.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][tc] governance changes for big tent model

2014-10-02 Thread Doug Hellmann
As promised at this week’s TC meeting, I have applied the various blog posts 
and mailing list threads related to changing our governance model to a series 
of patches against the openstack/governance repository [1].

I have tried to include all of the inputs, as well as my own opinions, and look 
at how each proposal needs to be reflected in our current policies so we do not 
drop commitments we want to retain along with the processes we are shedding [2].

I am sure we need more discussion, so I have staged the changes as a series 
rather than one big patch. Please consider the patches together when 
commenting. There are many related changes, and some incremental steps won’t 
make sense without the changes that come after (hey, just like code!).

Doug

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
[2] https://etherpad.openstack.org/p/big-tent-notes
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] OpenStack Heat template and CoreOS...

2014-10-02 Thread Zane Bitter

Hi Steve,
Could you post this question on ask.openstack.org? The -dev mailing list 
is not for usage questions; ask.openstack is a much better place to 
ensure that others with the same question will benefit from the answer.


FWIW I'd be really surprised if the version of cloud-init in CoreOS is 
so old that it doesn't support multipart-mime.


cheers,
Zane.

On 02/10/14 14:46, Steve Chien wrote:


   Currently, within our Icehouse Openstack environment, we are trying to see 
if we can 1) setup a cluster of CoreOS VMs by using the Heat template 2) 
invoking the fleetctl command to deploy the Docker containers to the newly 
setup CoreOS cluster by placing some instructions in the Heat template.

   Achieving the goal 1) is not tough and we can see some samples over the net 
too. For example, we can use the following template to setup a CoreOS-based VM 
successfully.

heat_template_version: 2013-05-23
description: 
   A simple Heat template to deploy CoreOS into an existing cluster.
parameters:
   network_id:
 type: string
 label: Network ID
 description: ID of existing Neutron network to use
 default: 632e1048-0164-41bd-9332-01c664eb475f
   image_id:
 type: string
 label: Glance Image ID
 description: ID of existing Glance image to use
 default: dfdd6317-5156-4e7d-96a1-f7ce76a43687
resources:
   instance0_port0:
 type: OS::Neutron::Port
 properties:
   admin_state_up: true
   network_id: { get_param: network_id }
   security_groups:
 - 435c19ea-64d0-47f9-97e6-bc04b98361eb
   instance0:
 type: OS::Nova::Server
 properties:
   name: coreos-test
   image: { get_param: image_id }
   flavor: m1.small
   networks:
 - port: { get_resource: instance0_port0 }
   key_name: mykey
   user_data_format: RAW
   user_data: |
 #cloud-config
 coreos:
   etcd:
 discovery: 
https://discovery.etcd.io/249d48e8dff562bdd8381177020ee405
 addr: $private_ipv4:4001
 peer-addr: $private_ipv4:7001
   units:
 - name: etcd.service
   command: start
 - name: fleet.service
   command: start

   Initially, we tried to achieve goal 2) by testing if we can send mime multi 
part user_data (the second part of the user_data will be a shell script that 
uses fleetctl command to deploy containers; if there is any synchronization / 
wait condition needs to be done, we can handle it there somehow too) to CoreOS 
cloud-init service. However, it seems like that CoreOS (at least Stable 
410.1.0) cloud-init does not support mime multi part yet.

   Any other good way to achieve both goals 1)  2)?

   Thanks!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] why do we have os-attach-interfaces in the v3 API?

2014-10-02 Thread Vishvananda Ishaya
os-attach-interfacees is actually a a forward port of:

http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/attach_interfaces.py

which is a compute action that is valid for both nova-network and neutron:

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n2991

On Oct 2, 2014, at 1:57 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 The os-interface (v2) and os-attach-interfaces (v3) APIs are only used for 
 the neutron network API, you'll get a NotImplemented if trying to call the 
 related methods with nova-network [1].
 
 Since we aren't proxying to neutron in the v3 API (v2.1), why does 
 os-attach-interfaces [2] exist?  Was this just an oversight?  If so, please 
 allow me to delete it. :)
 
 [1] 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/network/api.py?id=2014.2.rc1#n310
 [2] 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/plugins/v3/attach_interfaces.py?id=2014.2.rc1
 
 -- 
 
 Thanks,
 
 Matt Riedemann
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [All?] Status vs State

2014-10-02 Thread Kevin Benton
In other words, all our APIs would look like the Neutron API as it exists
today.

That's a bad comparison because the Neutron API doesn't have a standard
that it follows at all. If there was a standard for states/statuses that
Neutron was following for all of the objects, the status of the Neutron API
today would be in a much less annoying state.

On Thu, Oct 2, 2014 at 12:47 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 10/02/2014 03:14 PM, Chris Friesen wrote:

 On 10/01/2014 12:37 PM, Jay Pipes wrote:

  IMO, the term state should be the only one used in the OpenStack APIs
 to refer to the condition of some thing at a point in time. The term
 state can and should be prefaced with a refining descriptor such
 task or power to denote the *thing* that the state represents a
 condition for.

 One direct change I would make would be that Neutron's admin_state_up
 field would be instead admin_state with values of UP, DOWN (and maybe
 UNKNOWN?) instead of having the *same* GET /networks/{network_id} call
 return *both* a boolean admin_state_up field *and* a status field
 with a string value like ACTIVE. :(


 Hi Jay,

 I wonder if this would tie into our other discussion about
 distinguishing between the desired state vs the actual state.
 Conceivably you could have the admin state be UP, but a fault has
 resulted in an actual state other than ACTIVE.


 My comment above was about the inconsistency of how things are named and
 the data types representing them. There is a status field of type string,
 and an admin_state_up field of type boolean, both in the same response. Why
 wasn't it called admin_state and made a string field to follow the
 convention of the status field? I'm guessing it probably has to do with the
 telecom IT recommendations you cite below...

  As a reference point, CCITT X.731 goes into huge detail about state and
 status.  They define three orthogonal types of state (operational,
 usage, and administrative), and many types of status (availability,
 alarm, control, etc.)  I'm not suggesting that OpenStack should use
 those exact terms


 The very last thing I believe OpenStack should use as a reference is
 anything the telecommunications IT industry has put together as a
 recommendation.

 If we do use telecom IT as a guide, we'll be in a worse state (pun
 intended), ease-of-use and user-friendliness-wise, than we already are, and
 literally every API will just be a collection of random three and four
 letter acronyms with nobody other than a veteran network engineer
 understanding how anything works.

 In other words, all our APIs would look like the Neutron API as it exists
 today.

 , but it suggests that some people have found it useful

 to have state along multiple axes rather than trying to stuff everything
 into a single variable.


 I'm not opposed to using multiple fields to indicate state; I thought I
 was pretty clear about that in my initial response?

 Best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] why do we have os-attach-interfaces in the v3 API?

2014-10-02 Thread Matt Riedemann



On 10/2/2014 4:34 PM, Vishvananda Ishaya wrote:

os-attach-interfacees is actually a a forward port of:

http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/contrib/attach_interfaces.py

which is a compute action that is valid for both nova-network and neutron:

http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n2991

On Oct 2, 2014, at 1:57 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


The os-interface (v2) and os-attach-interfaces (v3) APIs are only used for the 
neutron network API, you'll get a NotImplemented if trying to call the related 
methods with nova-network [1].

Since we aren't proxying to neutron in the v3 API (v2.1), why does 
os-attach-interfaces [2] exist?  Was this just an oversight?  If so, please 
allow me to delete it. :)

[1] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/network/api.py?id=2014.2.rc1#n310
[2] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/plugins/v3/attach_interfaces.py?id=2014.2.rc1

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




OK so create/delete call the compute_api to attach/detach, but show and 
index are calling the network_api on port methods which are neutron 
only, so I guess that's what I'm talking about as far as removing. 
Personally I don't think it hurts anything, but I'm getting mixed 
signals about the stance on neutron proxying in the v2.1 API.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] evacuating boot-from-volume instances with local storage is broken?

2014-10-02 Thread Vishvananda Ishaya

On Oct 2, 2014, at 2:05 PM, Chris Friesen chris.frie...@windriver.com wrote:

 On 10/02/2014 02:24 PM, Jay Pipes wrote:
 On 10/02/2014 02:29 PM, Chris Friesen wrote:
 
 Hi,
 
 I'm interested in running nova evacuate on an instance that has local
 storage but was booted from a cinder volume.  OpenStack allows
 live-migration of this sort of instance, so I'm assuming that we would
 want to allow evacuation as well...
 
 I'm getting ready to test it, but I see that there was a nova bug opened
 against this case back in March
 (https://bugs.launchpad.net/nova/+bug/1299368).  It's been confirmed but
 hasn't even had an importance assigned yet.
 
 Anyone can sign up for the https://launchpad.net/~nova team on Launchpad
 and set an importance for any Nova bug.
 
 I don't think that's correct.  I'm a member of the nova team but I'm not 
 allowed to change the importance of bugs.
 
 The mouseover message for the Importance field says that it is changeable 
 only by a project maintainer or bug supervisor.

The team is: https://launchpad.net/~nova-bugs

Vish


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Introducing Re-Heat

2014-10-02 Thread Zane Bitter
Dredging up this thread because I was reminded of it today by a question 
on ask.openstack.org.


On 18/07/14 09:19, Ayenson, Michael D. wrote:

Hello All,

My name is Mika Ayenson and I have the privilege to intern at Johns Hopkins - Applied 
Physics Lab. I'm really excited to release the latest proof of concept, 
Re-Heat. Re-Heat is a JHUAPL-developed tool for OpenStack users to help them 
quickly rebuild their OpenStack environments via OpenStack's Heat.

Here is a link to the Re-Heat paper: 
https://drive.google.com/open?id=0BzTq-ZB9F-b9b0ZXdy1PT2t3dk0authuser=0
Here is a link to Re-Heat: https://github.com/Mikaayenson/ReHeat

I have included the abstract to our paper here:


This makes me sad. Not because it isn't great work - I'm sure it is. It 
makes me sad because when I read statements like:



In the context of “entire lifecycle” management, Heat is the “create” aspect of 
OpenStack orchestration.


I realise that we have completely failed to communicate what Heat is :(

To be clear, in the context of “entire lifecycle” management, Heat is 
the “entire lifecycle” aspect of OpenStack orchestration.


I know I, and I suspect many of us, always hoped that this would be 
exactly the kind of application where Heat could make a difference, 
helping scientists to make their research more repeatable.


Heat does that by allowing you to represent your infrastructure as code, 
and store it under version control. Messing with it behind Heat's back 
instead of by modifying the template is the infrastructure equivalent of 
connecting a debugger and messing with the machine code at runtime 
instead of changing the source. It's the opposite of repeatable. And 
developing tools to make using this broken pattern more convenient is a 
step in the wrong direction IMHO.


I strongly recommend you try using the stack update mechanism instead. 
It's not perfect, but it's getting better all the time. We welcome any 
feedback you have.
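
As a concrete example (the stack name, template file and parameter below are 
invented, not taken from the paper), after editing the template the update is 
just something along the lines of:

  heat stack-update my_research_stack -f my_template.yaml -P key_name=my_key

and Heat works out the difference between the current stack and the new 
template, touching only the resources that actually changed.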


To be clear, I do think there is a really good use of this kind of 
technology, and it's the one that the Flame developers are targeting: 
bringing existing applications under Heat's management.


cheers,
Zane.


Abstract


OpenStack has experienced tremendous growth since its initial release just over 
four years ago.  Many of the enhancements, such as the Horizon interface and Heat, 
facilitate making complex network environment deployments in the cloud from scratch 
easier.  The Johns Hopkins University Applied Physics Lab (JHU/APL) has been using 
the OpenStack environment to conduct research, host proofs-of-concepts, and perform 
testing & experimentation.  Our experience reveals that during the environment 
development lifecycle users and network architects are constantly changing the 
environments (stacks) they originally deployed.  Once development has reached a 
point at which experimentation and testing is prudent, scientific methodology 
requires recursive testing be conducted to determine the repetitiveness of the 
phenomena observed.  This requires the same entry point (an identical environment) 
into the testing cycle.  Thus, it was necessary to capture all the changes made to 
the initial environment during the development phase and modify the original Heat 
template.  However, OpenStack has not had a tool to help automate this process.  In 
response, JHU/APL developed a proof-of-concept automation tool called Re-Heat, which 
this paper describes in detail.

I hope you all enjoy this as I have truly enjoyed playing with HEAT and 
developing Re-Heat.

Cheers,
Mika Ayenson





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [congress] policy summit specs review

2014-10-02 Thread Sean Roberts
These are the spec action items from the policy summit a few weeks ago. Some of 
these specs are in the process of being written. See 
https://github.com/stackforge/congress-specs/tree/master/specs/kilo for the 
merged specs and 
https://review.openstack.org/#/q/status:open+project:stackforge/congress-specs,n,z
 for the specs in review. If a spec is missing from below, if you want to get 
involved in any of these specs, if you are looking for help on writing your 
first spec, and/or if you are looking for a status update, respond on this thread. 

Granular/trigger-based datasource driver for ceilometer (Peter, yathi, debo, 
forbes )
Passing hints from congress to nova-scheduler so placement can leverage policy 
(Peter, yathi, debo, forbes)
Workflow description (Tim, Jim jimjunxu,yapengwu, rukhsana.ansari, nils.swart, 
Rukhsana, Nils, Cheng -- we'll split between overview & sources of truth 
workflows)
Architecture Overview
Source of Truth
- Congress (pushes/publishes down)
- GBP (Congress validates/checks what GBP has)
- Hybrid (where group membership comes from GBP and contract comes from 
Congress)
- Both (no single source of truth - possibly a migration scenario)
Translate to GBP -- “push to” (Alex, Cathy, Louise, Cheng, Helen, Hemanth)
Translate to Neutron Security Groups -- “push to” (Alex, Rajdeep)
Runtime support for change to Tier Table -- (pull or trigger fetch) (Tim)
Add triggering infrastructure to Congress engine (Alex, cathy, louie)
Implement GBP triggering methods (Alex, cathy, louie)
Add GBP translators from reachable table to GBP trigger-tables. (Alex, cathy, 
louie)
Add language semantics to existing catalog (Sean, straz)
Group primitives (Sean, straz)
Temporal const (Sean, straz)

Note that these are not all the specs that we will be reviewing at the Kilo 
summit, rather just the ones from the Policy summit.
Cheers!

~ sean


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] hold off on making releases

2014-10-02 Thread Doug Hellmann
Sean Dague is working on adjusting the gate tests related to Oslo libraries and 
the integrated projects. I don’t think we have any releases planned, but just 
in case:

Please wait to tag any new releases of any Oslo libraries until this work is 
complete to ensure that the new jobs are functioning properly and that we are 
actually running the tests that Jenkins reports as passing.

Either Sean or I will follow up to this email when the coast is clear again.

Thanks!
Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Introducing Re-Heat

2014-10-02 Thread Georgy Okrokvertskhov
Heat updates work pretty well, and this is the right way to track all the
changes you make to your infrastructure. Heat's declarative templates define the
state of OpenStack, or of any other application, in a good, well-designed
abstract form of resources and their dependencies. If you use Heat updates,
the sequence of Heat template diffs gives you the history of changes for
free. The Heat updates approach is used in multiple projects such as TripleO,
Murano and Solum, and it is a proven way to manage resources from a single
point of control. We have used Heat updates in Murano for nearly a year
and we were able to cover almost every application life-cycle aspect with
Heat updates.

On Thu, Oct 2, 2014 at 2:51 PM, Zane Bitter zbit...@redhat.com wrote:

 Dredging up this thread because I was reminded of it today by a question
 on ask.openstack.org.

 On 18/07/14 09:19, Ayenson, Michael D. wrote:

 Hello All,

 My name is Mika Ayenson and I have to privilege to intern at Johns
 Hopkins - Applied Physics Lab. I'm really excited to  release the latest
 proof of concept Re-Heat  Re-Heat is a JHUAPL developed tool for
 OpenStack users to help them quickly rebuild their OpenStack environments
 via OpenStack's Heat .

 Here is a link to the Re-Heat paper: https://drive.google.com/open?
 id=0BzTq-ZB9F-b9b0ZXdy1PT2t3dk0authuser=0
 Here is a link to Re-Heat: https://github.com/Mikaayenson/ReHeat

 I have included the abstract to our paper here:


 This makes me sad. Not because it isn't great work - I'm sure it is. It
 makes me sad because when I read statements like:

  In the context of “entire lifecycle” management, Heat is the “create”
 aspect of OpenStack orchestration.


 I realise that we have completely failed to communicate what Heat is :(

 To be clear, in the context of entire lifecycle management, Heat is the
 entire lifecycle aspect of OpenStack orchestration.

 I know I, and I suspect many of us, always hoped that this would be
 exactly the kind of application where Heat could make a difference, helping
 scientists to make their research more repeatable.

 Heat does that by allowing you to represent your infrastructure as code,
 and store it under version control. Messing with it behind Heat's back
 instead of by modifying the template is the infrastructure equivalent of
 connecting a debugger and messing with the machine code at runtime instead
 of changing the source. It's the opposite of repeatable. And developing
 tools to make using this broken pattern more convenient is a step in the
 wrong direction IMHO.

 I strongly recommend you try using the stack update mechanism instead.
 It's not perfect, but it's getting better all the time. We welcome any
 feedback you have.

 To be clear, I do think there is a really good use of this kind of
 technology, and it's the one that the Flame developers are targeting:
 bringing existing applications under Heat's management.

 cheers,
 Zane.

  Abstract


 OpenStack has experienced tremendous growth since its initial release
 just over four years ago.  Many of the enhancements, such as the Horizon
 interface and Heat, facilitate making complex network environment
 deployments in the cloud from scratch easier.  The Johns Hopkins University
 Applied Physics Lab (JHU/APL) has been using the OpenStack environment to
 conduct research, host proofs-of-concepts, and perform testing &
 experimentation.  Our experience reveals that during the environment
 development lifecycle users and network architects are constantly changing
 the environments (stacks) they originally deployed.  Once development has
 reached a point at which experimentation and testing is prudent, scientific
 methodology requires recursive testing be conducted to determine the
 repetitiveness of the phenomena observed.  This requires the same entry
 point (an identical environment) into the testing cycle.  Thus, it was
 necessary to capture all the changes made to the initial !

 environmen
 t during the development phase and modify the original Heat template.
 However, OpenStack has not had a tool to help automate this process.  In
 response, JHU/APL developed a poof-of-concept automation tool called
 Re-Heat, which this paper describes in detail.

 I hope you all enjoy this as I have truly enjoyed playing with HEAT and
 developing Re-Heat.

 Cheers,
 Mika Ayenson





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread joehuang
Hello, Duncan, 

Good questions. Currently, the availability zone (AZ for short) concept is not 
applied to Cinder and Nova together, but separately. That is to say, the 
AZ for Cinder can have no relationship to the AZ for Nova. 

Under the OpenStack cascading scenario, we would like each cascaded OpenStack 
to function as a fault-isolation AZ; therefore, the AZ meaning for Cinder 
and Nova is kept the same. Currently this is done by configuration. If a volume 
located in another AZ2 (cascaded OpenStack) is attached to a VM located in 
AZ1, the attach will fail, and should not be allowed. 

It would be good to add an AZ enforcement check in the proxy source code (no need 
to change the trunk source code) to make sure the volume and the VM are located in 
the same cascaded OpenStack.

It's great that you are interested in a deep dive before the design summit. Please 
follow this thread for the venue and date/time.

Best Regards

Chaoyi Huang ( joehuang )


From: Duncan Thomas [duncan.tho...@gmail.com]
Sent: 02 October 2014 22:33
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
OpenStack cascading

On 2 October 2014 14:30, joehuang joehu...@huawei.com wrote:

 In our PoC design principle, the cascaded OpenStack should work passively, 
 and have no knowledge of whether it is running under a cascading scenario or not, 
 and whether there is a sibling OpenStack or not, to reduce interconnection 
 between cascaded OpenStacks as much as possible.
 And one level of cascading is enough for the foreseeable future.

The transparency is what worries me, e.g. at the moment I can attach
any volume to any vm (* depending on cinder AZ policy), which is going
to be broken in a cascaded scenario if the volume and vm are in
different leaves.


 PoC team planned to stay at Paris from Oct.29 to Nov.8, are you interested in 
 a f2f workshop for deep diving in the OpenStack cascading?

Definitely interested, yes please.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-02 Thread Devananda van der Veen
On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann d...@doughellmann.com wrote:
 As promised at this week’s TC meeting, I have applied the various blog posts 
 and mailing list threads related to changing our governance model to a series 
 of patches against the openstack/governance repository [1].

 I have tried to include all of the inputs, as well as my own opinions, and 
 look at how each proposal needs to be reflected in our current policies so we 
 do not drop commitments we want to retain along with the processes we are 
 shedding [2].

 I am sure we need more discussion, so I have staged the changes as a series 
 rather than one big patch. Please consider the patches together when 
 commenting. There are many related changes, and some incremental steps won’t 
 make sense without the changes that come after (hey, just like code!).

 Doug

 [1] 
 https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
 [2] https://etherpad.openstack.org/p/big-tent-notes

I've summed up a lot of my current thinking on this etherpad as well
(I should really blog, but hey ...)

https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy

-Deva

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-02 Thread joehuang
Hello, Tiwari,



Thanks for your interest. We have tried to address multi-site cloud 
integration in a fully distributed manner. We found that it's OK if all OpenStack 
instances work with no association, but if we want to introduce L2/L3 
networking across OpenStack instances, then it's very hard to track and address 
resource relationships. For example, tenant A has VM1 in OpenStack 1 and VM2 in 
OpenStack 2 with network N1, tenant B has VM3 in OpenStack 2 and VM4 in 
OpenStack 3 with network N2..., so relationship tracking and data synchronization 
are very hard to address in a fully distributed way.



Could you come to Paris a little early? I am afraid we have to prepare the live 
demo on Nov. 2, and Nov. 3 is a very busy day. The f2f deep dive would be 
better held before Nov. 2.



Please follow this thread for the venue and date-time.

Best Regards.



Chaoyi Huang ( joehuang )




From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: 02 October 2014 23:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi Huang,

Thanks for looking in to my proposal.

Yes, Alliance will be utilizing/retaining all northbound service APIs; in 
addition it will expose APIs for inter-Alliance (inter-cloud) communication. 
Alliance will run as the topmost layer on each individual OpenStack cloud of a 
multi-site distributed cloud setup. Additionally, Alliance will provide loosely 
coupled integration among multiple clouds or cloudified data centers.

In a multi-region setup, a “regional Alliance” (RA) will orchestrate 
resource (project, VMs, volumes, network, …) provisioning and state 
synchronization through its peer RAs. In a cross-enterprise integration 
(enterprise/VPC/bursting-like scenario - multi-site public cloud), a 
“global Alliance” (GA) will be the interface for the external integration point, 
communicating with individual RAs.  I will update the wiki to make it 
clearer.

I would love to coordinate with your team and solve this issue together. I will 
be arriving in Paris on 1 Nov and we can sit down f2f before the session. Let’s 
plan a time to meet; Monday will be easy for me.


Thanks,
Arvind



From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 5:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading


Hi, Tiwari,



Great to know you are also trying to address similar issues. For sure we are 
happy to work out a common solution for these issues.



I just went through the wiki page; the question for me is: will the Alliance 
provide/retain the current northbound OpenStack API? It's very important that 
the cloud still exposes the OpenStack API so that the OpenStack API ecosystem will 
not be lost.



And currently OpenStack cascading has not covered the hybrid cloud (private 
cloud and public cloud federation), so your project will be a good supplement.



May we have a f2f workshop before the formal Paris design summit, so that we 
can exchange ideas completely? A 40-minute design summit session is not enough 
for a deep dive. The PoC team will stay in Paris from Oct. 29 to Nov. 8.



Best Regards



Chaoyi Huang ( joehuang )




From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: 02 October 2014 0:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading
Hi Chaoyi,

Thanks for sharing these information.

Some time back I started a project called “Alliance” which is trying to address 
the same concerns (see the link below). The Alliance service is designed to provide 
Inter-Cloud Resource Federation, which will enable resource sharing across 
clouds in distributed multi-site OpenStack cloud deployments. This service will 
run on top of an OpenStack cloud and fabricate different cloud (or data center) 
instances in a distributed cloud setup. This service will work closely with 
OpenStack components (Keystone, Nova, Cinder) to manage and provision 
different resources (tokens, VMs, images, networks, ...). The Alliance service will 
provide an abstraction to hide interoperability and integration complexities from 
the underpinning cloud instances and enable the following business use cases.

- Multi Region Capability
- Virtual Private Cloud
- Cloud Bursting

This service will provide a true plug & play model for region expansion and VPC-like 
use cases; the conceptual design can be found at 
https://wiki.openstack.org/wiki/Inter_Cloud_Resource_Federation. We are working 
on a PoC using this concept, which is a WIP.

I will be happy to coordinate with you on this and try to come up with a common 
solution; it seems we both are trying to address the same issues.

Thoughts?

Thanks,
Arvind

From: joehuang [mailto:joehu...@huawei.com]
Sent: Wednesday, October 01, 2014 6:56 AM
To: 

Re: [openstack-dev] [nova] evacuating boot-from-volume instances with local storage is broken?

2014-10-02 Thread Jay Pipes

On 10/02/2014 05:44 PM, Vishvananda Ishaya wrote:

On Oct 2, 2014, at 2:05 PM, Chris Friesen chris.frie...@windriver.com wrote:


On 10/02/2014 02:24 PM, Jay Pipes wrote:

On 10/02/2014 02:29 PM, Chris Friesen wrote:


Hi,

I'm interested in running nova evacuate on an instance that has local
storage but was booted from a cinder volume.  OpenStack allows
live-migration of this sort of instance, so I'm assuming that we would
want to allow evacuation as well...

I'm getting ready to test it, but I see that there was a nova bug opened
against this case back in March
(https://bugs.launchpad.net/nova/+bug/1299368).  It's been confirmed but
hasn't even had an importance assigned yet.


Anyone can sign up for the https://launchpad.net/~nova team on Launchpad
and set an importance for any Nova bug.


I don't think that's correct.  I'm a member of the nova team but I'm not 
allowed to change the importance of bugs.

The mouseover message for the Importance field says that it is changeable 
“only by a project maintainer or bug supervisor”.


The team is: https://launchpad.net/~nova-bugs

Vish


Oops, sorry. My mistake, yes, it's nova-bugs.

-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] What I already have for openstack/kubernetes/docker

2014-10-02 Thread Angus Lees
Sorry I couldn't make the IRC meeting.  sdake quite rightly suggested I send 
this to the broader list for dissection.

I spent yesterday templatising my k8s configs so I could publish them without 
revealing all my passwords ;)

  https://github.com/anguslees/kube-openstack


Please take a look and let me know if any of this is useful.  I think the good 
bits are:

- A simpler method of handling k8s pod routes by just using etcd and two shell 
loops to set up a poor man's dynamic routing protocol.  For all its simplicity, 
this should scale to hundreds of nodes just fine, and a sharding hierarchy 
would be easy enough to add at that point  (see the networking portions in 
heat-kube-coreos-rax.yaml)

- Dockerfiles for nova + keystone, and a start on glance.  The structure should 
be similar for all the other control jobs that don't need to mess with 
hardware directly.  In particular, I'm experimenting with what it would be 
like if environment variables were supported directly in oslo.config files, and 
so far it looks good.

I chose to build these from git master.  I'm not sure if that's a good idea or 
not, but it's what I need to use this for dev work.   A possible improvement 
would be to base these on something like dockerfile/python-runtime.

- k8s config for keystone + nova + a start on glance.  Again, these should be a 
good model for other control jobs.

- I use heat to set up the initial deployment environment and generate all 
the passwords, and then stamp the generated values into kubernetes template 
files.  This assumes an already active undercloud, but it also removes easily 
isolated tasks like "set up a mysql server and provide its address here" from 
our list of problems to tackle.


I'm trying to run servers independently wherever possible, rather than 
bundling them into the same pod or container.  This gives maximum freedom with 
very little overhead (thanks to docker).  This also means my containers are 
basically dumb software distribution, without a complicated start.sh.

I don't have anything that configures keystone users or catalog yet - I was 
going to do that in a single pass that just added all the service ports some 
time after keystone was configured but not as part of each individual service.

-- 
 - Gus


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Group-based Policy] Today's IRC meeting summary and renaming of resources

2014-10-02 Thread Sumit Naiksatam
Hi, For the past couple of weeks one of the agenda items on our weekly
IRC meetings [1][2] has been to finalize the resources' naming
convention to avoid any conflict/confusion in the future. Based on
community feedback we had earlier agreed to rename Endpoints and
Endpoint Groups to Policy Targets and Policy Target Groups
respectively. Since then we have received additional feedback to
rename Contracts to Policy Rules Set, and Policy Labels to Policy
Tags. If there are no major objections, we will move forward with
these name changes.

For more information on these resources, please refer to the
Group-based Policy spec [3].

Please also note that current GBP development is continuing in the
StackForge repos [4].

Thanks,
~Sumit.

[1] 
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-09-25-18.02.log.html
[2] 
http://eavesdrop.openstack.org/meetings/networking_policy/2014/networking_policy.2014-10-02-18.01.log.html
[3] https://review.openstack.org/#/c/87825
[4] https://wiki.openstack.org/wiki/GroupBasedPolicy/StackForge/repos

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] why do we have os-attach-interfaces in the v3 API?

2014-10-02 Thread Christopher Yeoh
On Thu, 02 Oct 2014 15:57:55 -0500
Matt Riedemann mrie...@linux.vnet.ibm.com wrote:

 The os-interface (v2) and os-attach-interfaces (v3) APIs are only
 used for the neutron network API, you'll get a NotImplemented if
 trying to call the related methods with nova-network [1].
 
 Since we aren't proxying to neutron in the v3 API (v2.1), why does 
 os-attach-interfaces [2] exist?  Was this just an oversight?  If so, 
 please allow me to delete it. :)

The proxying work was not done in Juno due to time constraints, but I
think we should be able to cover it early in Kilo (most of the patches
are pretty much ready). To have a V2.1 which is functionally
equivalent to V2 (allowing us to remove the V2 code) we have to
implement proxying.

Chris

 
 [1] 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/network/api.py?id=2014.2.rc1#n310
 [2] 
 http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/plugins/v3/attach_interfaces.py?id=2014.2.rc1
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] link element in column of a table

2014-10-02 Thread Rajdeep Dua
Hi,
I am trying to extend a DataTable column, and I am trying to figure out how
exactly the link resolves to the appropriate HTML page.

My code is below:

  from django.utils.translation import ugettext_lazy as _

  from horizon import tables

  class MyTable(tables.DataTable):
      id = tables.Column("id", verbose_name=_("ID"),
                         link="horizon:admin:panel:subpanel:detail")

It is not able to find detail.html in the corresponding template path.
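
As far as I understand it, the link name has to resolve through the panel's URL
conf to a view, roughly like the sketch below (the module path, view names and
template path here are just placeholders, not my actual code):

  # urls.py of the subpanel
  from django.conf.urls import patterns, url

  from openstack_dashboard.dashboards.admin.subpanel import views

  urlpatterns = patterns(
      '',
      url(r'^$', views.IndexView.as_view(), name='index'),
      # 'detail' is the last component of horizon:admin:panel:subpanel:detail;
      # the row's object id is passed as the URL argument when the link is built
      url(r'^(?P<id>[^/]+)/detail/$', views.DetailView.as_view(), name='detail'),
  )

  # views.py
  from django.views.generic import TemplateView

  class DetailView(TemplateView):
      # resolved through the normal template loaders, so the file has to live
      # on the panel's template path
      template_name = 'admin/subpanel/detail.html'

Is it the URL name, the view, or the template lookup that I have wrong here?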

Thanks
Rajdeep
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Some ideas for micro-version implementation

2014-10-02 Thread Kenichi Oomichi

 -Original Message-
 From: Brant Knudson [mailto:b...@acm.org]
 Sent: Thursday, October 02, 2014 10:55 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] Some ideas for micro-version 
 implementation
 
 Thanks for your advice, that is very useful input for me.
 I read both keystone-specs and ietf draft-spec for JSON-Home.
 I have a question.

 JSON-Home is useful for advertising API URL paths to clients, I guess
 but it cannot advertise the supported attributes of a request body.
 Is that right?
 
 Right, it says right in the FAQ: 
 https://tools.ietf.org/html/draft-nottingham-json-home-03#appendix-B.5 :
 
 How Do I find the schema for a format?

That isn't addressed by home documents. ...
 
 Also, you might want to check out section 5, Representation Hints :
 https://tools.ietf.org/html/draft-nottingham-json-home-03#section-5
 
  . All it says is TBD. So we might have to make up our own standard here.

Thanks again, I got it.
Our own standard seems necessary for this use case.

 For example, we can create a user "nobody" by passing the following
 request body to Keystone /v2.0/users with the POST method:

   '{"user": {"email": null, "password": null, "enabled": true, "name": "nobody", "tenantId": null}}'

 In this case, I hope Keystone can advertise the above
 attributes (email, name, etc.),
 but JSON-Home doesn't cover that in its scope, I guess.
 
 When discussing the document schema I think we're planning to use 
 JSONSchema... In Keystone, we've got J-S implemented
 on some parts (I don't think it covers all resources yet). I also don't think 
 our JSONSchema is discoverable yet (i.e.,
 you can't download the schema from the server). I haven't heard of other 
 projects implementing this yet, but maybe someone
 has.
 
 There probably is some way to integrate JSON Home with JSONSchema. Maybe you 
 can put a reference to the JSONSchema in
 the hints for the resource.

Oh, using the hints is a nice idea for JSONSchema.
Do you have any plan/bp/spec for doing it on Keystone?
I'd like to join in if there is one.
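
Just to sketch the idea (the link relation URI and the "describedby" hint name
below are my own invention, since the draft leaves most hint semantics TBD),
a home document entry could point at the schema like this:

  {
      "resources": {
          "http://docs.openstack.org/api/rel/users": {
              "href": "/v2.0/users",
              "hints": {
                  "allow": ["GET", "POST"],
                  "describedby": "/v2.0/schemas/user"
              }
          }
      }
  }

Then a client could follow that hint and fetch the JSONSchema to discover the
supported request body attributes.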

Thanks
Ken'ichi Ohmichi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-02 Thread Joe Gordon
On Thu, Oct 2, 2014 at 4:16 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 On Thu, Oct 2, 2014 at 2:16 PM, Doug Hellmann d...@doughellmann.com
 wrote:
  As promised at this week’s TC meeting, I have applied the various blog
 posts and mailing list threads related to changing our governance model to
 a series of patches against the openstack/governance repository [1].
 
  I have tried to include all of the inputs, as well as my own opinions,
 and look at how each proposal needs to be reflected in our current policies
 so we do not drop commitments we want to retain along with the processes we
 are shedding [2].
 
  I am sure we need more discussion, so I have staged the changes as a
 series rather than one big patch. Please consider the patches together when
 commenting. There are many related changes, and some incremental steps
 won’t make sense without the changes that come after (hey, just like code!).
 
  Doug
 
  [1]
 https://review.openstack.org/#/q/status:open+project:openstack/governance+branch:master+topic:big-tent,n,z
  [2] https://etherpad.openstack.org/p/big-tent-notes

 I've summed up a lot of my current thinking on this etherpad as well
 (I should really blog, but hey ...)

 https://etherpad.openstack.org/p/in-pursuit-of-a-new-taxonomy


After seeing Jay's idea of making a YAML file modeling things, and talking
to devananda about this, I went ahead and tried to graph the relationships
out.

repo: https://github.com/jogo/graphing-openstack
preliminary YAML file:
https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
sample graph: http://i.imgur.com/LwlkE73.png
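
The idea of the YAML file is roughly one entry per project listing what it
requires, can use and depends on, something like this (the exact keys and the
nova entry here are only illustrative; see the repo for the real format):

  nova:
      requires: [keystone, glance]
      can use: [neutron, cinder]
      depends on: [oslo.messaging]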

It turns out it's really hard to figure out what the relationships are
without digging deep into the code for each project, so I am sure I got a
few things wrong (along with missing a lot of projects).

 -Deva

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Mistral 0.2 planning

2014-10-02 Thread Renat Akhmerov
Hi,

The Mistral team has selected blueprints for Mistral 0.2, which is currently 
scheduled for 10/31/2014 (this may change slightly). Below are the links to the 
release LP page and the etherpad with our estimates:
https://launchpad.net/mistral/+milestone/0.2
https://etherpad.openstack.org/p/mistral-0.2-planning

Please join the discussion if you have something to add/comment or if you’d 
like to contribute to Mistral 0.2.

Thanks

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [fuel] Executor task affinity

2014-10-02 Thread Renat Akhmerov
Yes, Dmitri, thank you for your comments. We basically came to the same 
conclusion. For now the DB is not going to be involved, and it will be possible to 
fill ‘targets’ from the workflow input using YAQL expressions, so that tasks can be 
assigned to executors dynamically.

Renat Akhmerov
@ Mirantis Inc.



On 02 Oct 2014, at 16:11, Dmitriy Shulyak dshul...@mirantis.com wrote:

 Hi,
 
 As I understood it, you want to store some mappings of tags to hosts in the 
 database, but then you need to sort out an API for registering hosts and/or a 
 discovery mechanism for such hosts. It is quite complex.
 It may be useful, but in my opinion it would be better to have a simpler/more 
 flexible variant. 
 
 For example:
 
 1. Provide targets in workbook description, like:
 
 task:
   targets: [nova, cinder, etc]
 
 2. Get targets from execution contexts by using yaql:
 
 task:
   targets: $.uids
 
 task:
   targets: [$.role, $.uid]
 
 In this case all simple relations will be covered by AMQP routing 
 configuration.
 What do you think about such approach?
 
 On Thu, Oct 2, 2014 at 11:35 AM, Nikolay Makhotkin nmakhot...@mirantis.com 
 wrote:
 Hi, folks! 
 
 I drafted the document where we can see how task affinity will be applied to 
 Mistral:
 
 https://docs.google.com/a/mirantis.com/document/d/17O51J1822G9KY_Fkn66Ul2fc56yt9T4NunnSgmaehmg/edit
 
 -- 
 Best Regards,
 Nikolay
 @Mirantis Inc.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] governance changes for big tent model

2014-10-02 Thread Chris Friesen

On 10/02/2014 10:46 PM, Joe Gordon wrote:


After seeing Jay's idea of making a yaml file modeling things and
talking to devananda about this I went ahead and tried to graph the
relationships out.

repo: https://github.com/jogo/graphing-openstack
preliminary YAML file:
https://github.com/jogo/graphing-openstack/blob/master/openstack.yaml
sample graph: http://i.imgur.com/LwlkE73.png


To save people's time figuring out the colours:

black: requires
blue: can use
red: depends on

I'm not sure we need to explicitly track "depends on".  If a service has 
only one incoming "requires" or "can use" arrow, then it depends on 
whoever uses it, no?


Also, once a service "requires" another one, it seems somewhat redundant 
to also say that it "depends on" it.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Mistral 0.2 planning

2014-10-02 Thread Dmitri Zimine
Thanks Renat for running and capturing.

One addition - allocate time for bug fixes, we’ll have quite a few :)

DZ.

On Oct 2, 2014, at 9:56 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Hi,
 
 Mistral team has selected blueprints for Mistral 0.2 which is currently 
 scheduled for 10/31/2014 (may slightly change). Below are the links to 
 release LP page and etherpad with our estimations
 https://launchpad.net/mistral/+milestone/0.2
 https://etherpad.openstack.org/p/mistral-0.2-planning
 
 Please join the discussion if you have something to add/comment or if you’d 
 like to contribute to Mistral 0.2.
 
 Thanks
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev