Re: [openstack-dev] [Ironic] (Non-)consistency of the Ironic hash ring implementation

2014-09-08 Thread Nejc Saje



On 09/08/2014 06:22 AM, Robert Collins wrote:

On 8 September 2014 05:57, Nejc Saje ns...@redhat.com wrote:

That generator API is pretty bad IMO - because it means you're very
heavily dependent on gc and refcount behaviour to keep things clean -
and there isn't (IMO) a use case for walking the entire ring from the
perspective of an item. What's the concern with having replicas as part
of the API?



Because they don't really make sense conceptually. The hash ring itself doesn't
actually 'make' any replicas. The replicas parameter in the current Ironic
implementation is used solely to limit the number of buckets returned.
Conceptually, that seems to me the same as take(replicas,
iterate_nodes()). I don't know Python internals well enough to know what
problems this would cause, though - can you please clarify?


I could see replicas being a parameter to a function call, but take(N,
generator) has the same poor behaviour - generators in general that
won't be fully consumed rely on reference counting to be freed.
Sometimes that's absolutely the right tradeoff.


Ok, I can agree with it being a function call.





It's absolutely a partition of the hash space - each spot we hash a
bucket onto is one - that's how consistent hashing works at all :)



Yes, but you don't assign the number of partitions beforehand, it depends on
the number of buckets. What you do assign is the number of times you hash a
single bucket onto the ring, which is currently named 'replicas' in
Ceilometer code, but I suggested 'distribution_quality' or something
similarly descriptive in an earlier e-mail.


I think you misunderstand the code. We do assign the number of
partitions beforehand - it's approximately fixed and independent of the
number of buckets. More buckets == fewer times we hash each bucket.


Ah, your first sentence tipped me off that we're not actually speaking 
about the same code :). I'm talking about current Ceilometer code and 
the way the algorithm is described in the original paper. There, the 
number of times we hash a bucket doesn't depend on the number of buckets 
at all. The implementation with an array that Ironic used to have 
definitely needed to define the number of partitions, but I don't see the 
need for it with the new approach either. Why would you want to limit 
yourself to a fixed number of partitions if you're limited only by the 
output range of the hash function?
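
For the sake of the discussion, here is a minimal illustrative sketch of the
scheme described above - each bucket hashed onto the full output range of the
hash function a configurable number of times, with no fixed partition count.
This is neither the Ironic nor the Ceilometer implementation, just an aid to
the thread:

import bisect
import hashlib


class HashRing(object):
    """Toy consistent hash ring (illustration only)."""

    def __init__(self, buckets, distribution_quality=100):
        # Hash each bucket onto the ring several times; this parameter is
        # what the current Ceilometer code calls 'replicas'.
        self._ring = {}
        self._points = []
        for bucket in buckets:
            for i in range(distribution_quality):
                point = self._hash('%s-%d' % (bucket, i))
                self._ring[point] = bucket
                self._points.append(point)
        self._points.sort()

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(str(value).encode()).hexdigest(), 16)

    def get_bucket(self, key):
        # Walk clockwise to the first bucket point at or after the key's
        # hash, wrapping around the end of the ring.
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._points)
        return self._ring[self._points[idx]]


ring = HashRing(['node-%d' % i for i in range(4)])
print(ring.get_bucket('resource-42'))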


Cheers,
Nejc



-Rob



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][cisco] Cisco Nexus requires patched ncclient

2014-09-08 Thread Ihar Hrachyshka

On 24/08/14 20:40, Maru Newby wrote:
 
 On Aug 24, 2014, at 5:14 PM, Henry Gessau ges...@cisco.com
 wrote:
 
 Ihar Hrachyshka ihrac...@redhat.com wrote:
 Now, maybe putting the module into requirements.txt is
 overkill (though I doubt it). In that case, we could be
 interested in getting the info in some other centralized way.
 
 Maru is of the opinion that it is overkill. I feel the same way,
 but I am not involved much with deployment issues so my feelings
 should not sway anyone.
 
 Note that ncclient is not the only package used by vendor
 solutions that is not listed in requirements.txt. The ones I am
 aware of are:
 
 ncclient (cisco and brocade)
 heleosapi (embrane)
 a10_neutron_lbaas (a10networks)
 
 Maybe we should start exploring some other centralized way to
 list these types of dependencies?
 
 I think each plugin should be able to have its own requirements.txt
 file to aid in manual deployment and in packaging of a given
 plugin.  This suggests the need to maintain a global plugin
 requirements file (in the tree) to ensure use of only a single
 version of any dependencies used across more than one plugin.
 
 Given that 3rd party CI is likely having to install these
 dependencies anyway, I think it would be good to make this
 deployment reproducible while avoiding the need to add new
 dependencies to core Neutron.
 

With plans to move most of the plugins into separate trees, we will
probably just move those plugin-specific dependencies there, so that
dependencies are both explicit AND we avoid adding exotic dependencies to
the core package. This sounds like work for after the split (i.e. after Juno).

/Ihar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Doc team working with bugs

2014-09-08 Thread Dmitry Mescheryakov
Hello Fuelers,

On the previous meeting a topic was raised on how Fuel doc team should
work with bugs, see [1] for details. We agreed to move the discussion
into the mailing list.

The thing is there are two members in the team at the moment (Meg and
Irina) and they need to distribute work among themselves. The natural
way to distribute load is to assign bugs. But frequently they document
bugs which are in the process of being fixed, so they are already
assigned to an engineer. I.e. a bug needs to be assigned to an
engineer and a tech writer at the same time.

I've proposed creating a separate 'docs' series in Launchpad (a series is
the thing like '5.0.x' or '5.1.x'). If a bug affects several series, a
different engineer can be assigned on each of them, so the doc team would
be free to assign bugs to themselves within this new series.

Mike Scherbakov and Dmitry Borodaenko objected to creating another series
in Launchpad. Instead they proposed marking bugs with tags like
'docs-irina' and 'docs-meg', thus assigning them.

What do you think is the best way to handle this? As for me, I don't
have strong preference there.

One last note: the question applies to two launchpad projects
actually: Fuel and MOS. But naturally we want to do this the same way
in both projects.

Thanks,

Dmitry

[1] 
http://eavesdrop.openstack.org/meetings/fuel/2014/fuel.2014-09-04-15.59.log.html#l-310

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] (Non-)consistency of the Swift hash ring implementation

2014-09-08 Thread Nejc Saje
That's great to hear! I see now that Swift's implementation has some 
additional rebalancing logic that Ironic (and the code example from 
Gregory's blog) lacked.


Cheers,
Nejc

On 09/08/2014 05:39 AM, John Dickinson wrote:

To test Swift directly, I used the CLI tools that Swift provides for managing 
rings. I wrote the following short script:

$ cat remakerings
#!/bin/bash

swift-ring-builder object.builder create 16 3 0
for zone in {1..4}; do
  for server in {200..224}; do
    for drive in {1..12}; do
      swift-ring-builder object.builder add r1z${zone}-10.0.${zone}.${server}:6010/d${drive} 3000
    done
  done
done
swift-ring-builder object.builder rebalance



This adds 1200 devices. 4 zones, each with 25 servers, each with 12 drives 
(4*25*12=1200). The important thing is that instead of adding 1000 drives in 
one zone or in one server, I'm splaying across the placement hierarchy that 
Swift uses.

After running the script, I added one drive to one server to see what the 
impact would be and rebalanced. The swift-ring-builder tool detected that less 
than 1% of the partitions would change and therefore didn't move anything (just 
to avoid unnecessary data movement).

--John





On Sep 7, 2014, at 11:20 AM, Nejc Saje ns...@redhat.com wrote:


Hey guys,

in Ceilometer we're using consistent hash rings to do workload
partitioning[1]. We've considered using Ironic's hash ring implementation, but 
found out it wasn't actually consistent (ML[2], patch[3]). The next thing I 
noticed is that the Ironic implementation is based on Swift's.

The gist of it is: since you divide your ring into a number of equal-sized 
partitions instead of hashing hosts onto the ring, when you add a new host an 
unbounded number of keys gets re-mapped to different hosts (instead of the 
~1/#nodes remapping guaranteed by a consistent hash ring).

Swift's hash ring implementation is quite complex though, so I took the 
conceptually similar code from Gregory Holt's blogpost[4] (which I'm guessing 
is based on Gregory's efforts on Swift's hash ring implementation) and tested 
that instead. With a simple test (paste[5]) of first having 1000 nodes and then 
adding 1, 99.91% of the data was moved.
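
As a toy illustration of the failure mode described above (this is not the
code from the blog post or from paste[5], just a sketch): a naive
"hash(key) % number_of_nodes" mapping remaps almost every key when a node is
added, whereas a consistent ring would remap only about 1/#nodes of them.

import hashlib

def node_for(key, num_nodes):
    # naive mapping: hash the key and take it modulo the node count
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

keys = ['key-%d' % i for i in range(10000)]
before = dict((k, node_for(k, 1000)) for k in keys)
after = dict((k, node_for(k, 1001)) for k in keys)
moved = sum(1 for k in keys if before[k] != after[k])
print('%.2f%% of keys moved' % (100.0 * moved / len(keys)))
# A consistent ring would move roughly 1/1001 of the keys (~0.1%).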

I have no way to test this in Swift directly, so I'm just throwing this out 
there, so you guys can figure out whether there actually is a problem or not.

Cheers,
Nejc

[1] https://review.openstack.org/#/c/113549/
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044566.html
[3] https://review.openstack.org/#/c/118932/4
[4] http://greg.brim.net/page/building_a_consistent_hashing_ring.html
[5] http://paste.openstack.org/show/107782/





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] doubling our core review bandwidth

2014-09-08 Thread Steven Hardy
On Mon, Sep 08, 2014 at 03:14:24PM +1200, Robert Collins wrote:
 I hope the subject got your attention :).
 
 This might be a side effect of my having too many cosmic rays, but it's
 been percolating for a bit.
 
 tl;dr I think we should drop our 'needs 2x+2 to land' rule and instead
 use 'needs 1x+2'. We can ease up a large chunk of pressure on our
 review bottleneck, with the only significant negative being that core
 reviewers may see less of the code going into the system - but they
 can always read more to stay in shape if that's an issue :)

I think this may be a sensible move, but only if it's used primarily to
land the less complex/risky patches more quickly.

As has been mentioned already by Angus, +1 can (and IMO should) be used for
any less trivial and/or risky patches, as the more-eyeballs thing is really
important for big or complex patches (we are all fallible, and -core folks
quite regularly either disagree, spot different types of issue, or just
have better familiarity with some parts of the codebase than others).

FWIW, every single week in the Heat queue, disagreements between -core
reviewers result in issues getting fixed before merge, which would result
in more bugs if the 1x+2 scheme was used unconditionally.  I'm sure other
projects are the same, but I guess this risk can be mitigated with reviewer
+1 discretion.

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] SSL in Fuel

2014-09-08 Thread Sebastian Kalinowski
Hi all,

As a next step in improving Fuel security we are introducing SSL for both the
Fuel [1] and OS API endpoints [2]. Both specs assume the usage of self-signed
certificates generated by Fuel.
It is also required to allow users to use their own certs to secure their
deployments (two blueprints that touch that part are [3] and [4]).

We would like to start a discussion to see what opinions (and maybe ideas) you
have for that feature.

Best,
Sebastian

[1] https://review.openstack.org/#/c/119330
[2] https://review.openstack.org/#/c/102273
[3] https://blueprints.launchpad.net/fuel/+spec/ca-deployment
[4] https://blueprints.launchpad.net/fuel/+spec/manage-ssl-certificate
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] KeyError: 'admin'

2014-09-08 Thread Daisuke Morita

Hi, rally developers!

Now, I am trying to use Rally against a devstack cluster on an AWS VM
(all-in-one). I'm following a blog post
https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
. I successfully installed Devstack, Rally and Tempest. Then I ran
Tempest with the 'rally verify start' command, but the command failed with the
following stacktrace.


2014-09-08 10:57:57.803 17176 CRITICAL rally [-] KeyError: 'admin'
2014-09-08 10:57:57.803 17176 TRACE rally Traceback (most recent call last):
2014-09-08 10:57:57.803 17176 TRACE rally   File /usr/local/bin/rally,
line 10, in module
2014-09-08 10:57:57.803 17176 TRACE rally sys.exit(main())
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line 40, in main
2014-09-08 10:57:57.803 17176 TRACE rally return
cliutils.run(sys.argv, categories)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py, line
184, in run
2014-09-08 10:57:57.803 17176 TRACE rally ret = fn(*fn_args,
**fn_kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File string, line 2, in
start
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py, line 64,
in default_from_global
2014-09-08 10:57:57.803 17176 TRACE rally return f(*args, **kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/commands/verify.py,
line 59, in start
2014-09-08 10:57:57.803 17176 TRACE rally api.verify(deploy_id,
set_name, regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/orchestrator/api.py, line
153, in verify
2014-09-08 10:57:57.803 17176 TRACE rally
verifier.verify(set_name=set_name, regex=regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 247, in verify
2014-09-08 10:57:57.803 17176 TRACE rally
self._prepare_and_run(set_name, regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/utils.py, line 165, in
wrapper
2014-09-08 10:57:57.803 17176 TRACE rally result = f(self, *args,
**kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 146, in _prepare_and_run
2014-09-08 10:57:57.803 17176 TRACE rally self.generate_config_file()
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 89, in generate_config_file
2014-09-08 10:57:57.803 17176 TRACE rally
config.TempestConf(self.deploy_id).generate(self.config_file)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
line 242, in generate
2014-09-08 10:57:57.803 17176 TRACE rally func()
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
line 115, in _set_boto
2014-09-08 10:57:57.803 17176 TRACE rally
self.conf.set(section_name, 'ec2_url', self._get_url('ec2'))
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
line 105, in _get_url
2014-09-08 10:57:57.803 17176 TRACE rally return
service['admin']['publicURL']
2014-09-08 10:57:57.803 17176 TRACE rally KeyError: 'admin'
2014-09-08 10:57:57.803 17176 TRACE rally


I tried to dig into the root cause of the above error, but I have no idea
where to look. The most likely suspect is the automatically generated
configuration file, but I did not find anything odd there.

If possible, could you give me some hints on what to do?

Sorry for bothering you. Thanks in advance.



Best Regards,
Daisuke

-- 
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] KeyError: 'admin'

2014-09-08 Thread Mikhail Dubov
Hi Daisuke,

seems like your issue is connected to the change in the deployment
configuration file format for existing clouds that we've merged recently
(https://review.openstack.org/#/c/116766/).

Please see the updated wiki HowTo page
https://wiki.openstack.org/wiki/Rally/HowTo#Step_1._Deployment_initialization_.28use_existing_cloud.29
which describes the new format. In your case, you can just update the deployment
configuration file and run *rally deployment create* again. Everything
should work then.



Best regards,
Mikhail Dubov

Mirantis, Inc.
E-Mail: mdu...@mirantis.com
Skype: msdubov

On Mon, Sep 8, 2014 at 3:16 PM, Daisuke Morita morita.dais...@lab.ntt.co.jp
 wrote:


 Hi, rally developers!

 Now, I am trying to use Rally to devstack cluster on AWS VM
 (all-in-one). I'm following a blog post
 https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
 . I successfully installed Devstack, Rally and Tempest. Now, I just ran
 Tempest by 'rally verify start' command, but the command failed with the
 following stacktrace.


 2014-09-08 10:57:57.803 17176 CRITICAL rally [-] KeyError: 'admin'
 2014-09-08 10:57:57.803 17176 TRACE rally Traceback (most recent call
 last):
 2014-09-08 10:57:57.803 17176 TRACE rally   File /usr/local/bin/rally,
 line 10, in module
 2014-09-08 10:57:57.803 17176 TRACE rally sys.exit(main())
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line 40, in
 main
 2014-09-08 10:57:57.803 17176 TRACE rally return
 cliutils.run(sys.argv, categories)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py, line
 184, in run
 2014-09-08 10:57:57.803 17176 TRACE rally ret = fn(*fn_args,
 **fn_kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File string, line 2, in
 start
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py, line 64,
 in default_from_global
 2014-09-08 10:57:57.803 17176 TRACE rally return f(*args, **kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/commands/verify.py,
 line 59, in start
 2014-09-08 10:57:57.803 17176 TRACE rally api.verify(deploy_id,
 set_name, regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/orchestrator/api.py, line
 153, in verify
 2014-09-08 10:57:57.803 17176 TRACE rally
 verifier.verify(set_name=set_name, regex=regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
 line 247, in verify
 2014-09-08 10:57:57.803 17176 TRACE rally
 self._prepare_and_run(set_name, regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/utils.py, line 165, in
 wrapper
 2014-09-08 10:57:57.803 17176 TRACE rally result = f(self, *args,
 **kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
 line 146, in _prepare_and_run
 2014-09-08 10:57:57.803 17176 TRACE rally self.generate_config_file()
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
 line 89, in generate_config_file
 2014-09-08 10:57:57.803 17176 TRACE rally
 config.TempestConf(self.deploy_id).generate(self.config_file)
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
 line 242, in generate
 2014-09-08 10:57:57.803 17176 TRACE rally func()
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
 line 115, in _set_boto
 2014-09-08 10:57:57.803 17176 TRACE rally
 self.conf.set(section_name, 'ec2_url', self._get_url('ec2'))
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
 line 105, in _get_url
 2014-09-08 10:57:57.803 17176 TRACE rally return
 service['admin']['publicURL']
 2014-09-08 10:57:57.803 17176 TRACE rally KeyError: 'admin'
 2014-09-08 10:57:57.803 17176 TRACE rally


 I tried to dig into the root cause of above error, but I did not have
 any idea where to look into. The most doubtful may be automatically
 generated configuration file, but I did not find anything odd.

 If possible, could you give me some hints on what to do?

 Sorry for bothering you. Thanks in advance.



 Best Regards,
 Daisuke

 --
 Daisuke Morita morita.dais...@lab.ntt.co.jp
 NTT Software Innovation Center, NTT Corporation


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] KeyError: 'admin'

2014-09-08 Thread Boris Pavlovic
Daisuke,

We have also made changes in our DB models, so running:

  $ rally-manage db recreate

will be required as well.
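
For reference, the full sequence would then look roughly like this (the exact
flag names are from memory and may differ between Rally versions):

  $ rally-manage db recreate
  $ rally deployment create --filename existing.json --name existing
  $ rally verify start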


Best regards,
Boris Pavlovic



On Mon, Sep 8, 2014 at 3:24 PM, Mikhail Dubov mdu...@mirantis.com wrote:

 Hi Daisuke,

 seems like your issue is connected to the change in the deployment
 configuration file format for existing clouds we've merged
 https://review.openstack.org/#/c/116766/ recently.

 Please see the updated Wiki How to page
 https://wiki.openstack.org/wiki/Rally/HowTo#Step_1._Deployment_initialization_.28use_existing_cloud.29
  that
 describes the new format. In your case, you can just update the deployment
 configuration file and run again *rally deployment create*. Everything
 should work then.



 Best regards,
 Mikhail Dubov

 Mirantis, Inc.
 E-Mail: mdu...@mirantis.com
 Skype: msdubov

 On Mon, Sep 8, 2014 at 3:16 PM, Daisuke Morita 
 morita.dais...@lab.ntt.co.jp wrote:


 Hi, rally developers!

 Now, I am trying to use Rally to devstack cluster on AWS VM
 (all-in-one). I'm following a blog post

 https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
 . I successfully installed Devstack, Rally and Tempest. Now, I just ran
 Tempest by 'rally verify start' command, but the command failed with the
 following stacktrace.


 2014-09-08 10:57:57.803 17176 CRITICAL rally [-] KeyError: 'admin'
 2014-09-08 10:57:57.803 17176 TRACE rally Traceback (most recent call
 last):
 2014-09-08 10:57:57.803 17176 TRACE rally   File /usr/local/bin/rally,
 line 10, in module
 2014-09-08 10:57:57.803 17176 TRACE rally sys.exit(main())
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line 40, in
 main
 2014-09-08 10:57:57.803 17176 TRACE rally return
 cliutils.run(sys.argv, categories)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py, line
 184, in run
 2014-09-08 10:57:57.803 17176 TRACE rally ret = fn(*fn_args,
 **fn_kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File string, line 2, in
 start
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py, line 64,
 in default_from_global
 2014-09-08 10:57:57.803 17176 TRACE rally return f(*args, **kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/commands/verify.py,
 line 59, in start
 2014-09-08 10:57:57.803 17176 TRACE rally api.verify(deploy_id,
 set_name, regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/orchestrator/api.py, line
 153, in verify
 2014-09-08 10:57:57.803 17176 TRACE rally
 verifier.verify(set_name=set_name, regex=regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
 line 247, in verify
 2014-09-08 10:57:57.803 17176 TRACE rally
 self._prepare_and_run(set_name, regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/utils.py, line 165, in
 wrapper
 2014-09-08 10:57:57.803 17176 TRACE rally result = f(self, *args,
 **kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
 line 146, in _prepare_and_run
 2014-09-08 10:57:57.803 17176 TRACE rally self.generate_config_file()
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
 line 89, in generate_config_file
 2014-09-08 10:57:57.803 17176 TRACE rally
 config.TempestConf(self.deploy_id).generate(self.config_file)
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
 line 242, in generate
 2014-09-08 10:57:57.803 17176 TRACE rally func()
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
 line 115, in _set_boto
 2014-09-08 10:57:57.803 17176 TRACE rally
 self.conf.set(section_name, 'ec2_url', self._get_url('ec2'))
 2014-09-08 10:57:57.803 17176 TRACE rally   File

 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
 line 105, in _get_url
 2014-09-08 10:57:57.803 17176 TRACE rally return
 service['admin']['publicURL']
 2014-09-08 10:57:57.803 17176 TRACE rally KeyError: 'admin'
 2014-09-08 10:57:57.803 17176 TRACE rally


 I tried to dig into the root cause of above error, but I did not have
 any idea where to look into. The most doubtful may be automatically
 generated configuration file, but I did not find anything odd.

 If possible, could you give me some hints on what to do?

 Sorry for bothering you. Thanks in advance.



 Best Regards,
 Daisuke

 --
 Daisuke Morita morita.dais...@lab.ntt.co.jp
 NTT 

[openstack-dev] [Glance][FFE] Refactoring Glance Logging

2014-09-08 Thread Kuvaja, Erno
All,

There are two changes still not landed from 
https://blueprints.launchpad.net/glance/+spec/refactoring-glance-logging

https://review.openstack.org/116626

and

https://review.openstack.org/#/c/117204/

Merging the changes was delayed past J3 to avoid any potential merge 
conflicts. A minor change was made when rebasing (a couple of LOG.exception 
calls changed to LOG.error based on the review feedback).

I would like to request a Feature Freeze Exception, if needed, to finish the Juno 
logging refactoring and get these two changes merged in.

BR,
Erno (jokke_) Kuvaja
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] KeyError: 'admin'

2014-09-08 Thread Mikhail Dubov
Hi Daisuke,

have you also executed

   $ rally-manage db recreate

as Boris pointed out in the previous message?

Best regards,
Mikhail Dubov

Community
Mirantis, Inc.
E-Mail: mdu...@mirantis.com
Skype: msdubov

On Mon, Sep 8, 2014 at 3:46 PM, Daisuke Morita morita.dais...@lab.ntt.co.jp
 wrote:


 Hi Mikhail,

 Thanks for your quick reply. I have already added the cloud using the JSON
 format. Just to be safe, I re-created the deployment following the HowTo,
 but the same error still appears.


 My existing.json is as follows. Is there anything wrong?


 {
     "type": "ExistingCloud",
     "auth_url": "http://127.0.0.1:5000/v2.0/",
     "admin": {
         "username": "admin",
         "password": "pass",
         "tenant_name": "admin"
     }
 }



 Best regards,
 Daisuke

 On 2014/09/08 20:24, Mikhail Dubov wrote:

 Hi Daisuke,

 seems like your issue is connected to the change in the deployment
 configuration file format for existing clouds we've merged
 https://review.openstack.org/#/c/116766/ recently.

 Please see the updated Wiki How to page
 https://wiki.openstack.org/wiki/Rally/HowTo#Step_1._
 Deployment_initialization_.28use_existing_cloud.29 that
 describes the new format. In your case, you can just update the
 deployment configuration file and run again /rally deployment create/.
 Everything should work then.



 Best regards,
 Mikhail Dubov

 Mirantis, Inc.
 E-Mail: mdu...@mirantis.com mailto:mdu...@mirantis.com
 Skype: msdubov

 On Mon, Sep 8, 2014 at 3:16 PM, Daisuke Morita
 morita.dais...@lab.ntt.co.jp mailto:morita.dais...@lab.ntt.co.jp
 wrote:


 Hi, rally developers!

 Now, I am trying to use Rally to devstack cluster on AWS VM
 (all-in-one). I'm following a blog post
 https://www.mirantis.com/blog/rally-openstack-tempest-
 testing-made-simpler/
 . I successfully installed Devstack, Rally and Tempest. Now, I just
 ran
 Tempest by 'rally verify start' command, but the command failed with
 the
 following stacktrace.


 2014-09-08 10:57:57.803 17176 CRITICAL rally [-] KeyError: 'admin'
 2014-09-08 10:57:57.803 17176 TRACE rally Traceback (most recent
 call last):
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/bin/rally,
 line 10, in module
 2014-09-08 10:57:57.803 17176 TRACE rally sys.exit(main())
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line 40,
 in main
 2014-09-08 10:57:57.803 17176 TRACE rally return
 cliutils.run(sys.argv, categories)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py, line
 184, in run
 2014-09-08 10:57:57.803 17176 TRACE rally ret = fn(*fn_args,
 **fn_kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File string, line 2,
 in
 start
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py, line
 64,
 in default_from_global
 2014-09-08 10:57:57.803 17176 TRACE rally return f(*args,
 **kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/
 commands/verify.py,
 line 59, in start
 2014-09-08 10:57:57.803 17176 TRACE rally api.verify(deploy_id,
 set_name, regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/orchestrator/api.py,
 line
 153, in verify
 2014-09-08 10:57:57.803 17176 TRACE rally
 verifier.verify(set_name=set_name, regex=regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/
 tempest/tempest.py,
 line 247, in verify
 2014-09-08 10:57:57.803 17176 TRACE rally
 self._prepare_and_run(set_name, regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/utils.py, line 165, in
 wrapper
 2014-09-08 10:57:57.803 17176 TRACE rally result = f(self, *args,
 **kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/
 tempest/tempest.py,
 line 146, in _prepare_and_run
 2014-09-08 10:57:57.803 17176 TRACE rally
   self.generate_config_file()
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/
 tempest/tempest.py,
 line 89, in generate_config_file
 2014-09-08 10:57:57.803 17176 TRACE rally
 config.TempestConf(self.deploy_id).generate(self.config_file)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/
 tempest/config.py,
 line 242, in generate
 2014-09-08 10:57:57.803 17176 TRACE rally func()
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 

Re: [openstack-dev] [rally] KeyError: 'admin'

2014-09-08 Thread Daisuke Morita


Thanks, Boris.

I tried 'rally-manage db recreate' before registering a deployment, but 
nothing changed at all when running Tempest...


It is late in Japan, so I will try it tomorrow.


Best regards,
Daisuke

On 2014/09/08 20:38, Boris Pavlovic wrote:

Daisuke,

We have as well changes in our DB models.

So running:

   $ rally-manage db recreate

Will be as well required..


Best regards,
Boris Pavlovic



On Mon, Sep 8, 2014 at 3:24 PM, Mikhail Dubov mdu...@mirantis.com
mailto:mdu...@mirantis.com wrote:

Hi Daisuke,

seems like your issue is connected to the change in the deployment
configuration file format for existing clouds we've merged
https://review.openstack.org/#/c/116766/ recently.

Please see the updated Wiki How to page

https://wiki.openstack.org/wiki/Rally/HowTo#Step_1._Deployment_initialization_.28use_existing_cloud.29
 that
describes the new format. In your case, you can just update the
deployment configuration file and run again /rally deployment
create/. Everything should work then.



Best regards,
Mikhail Dubov

Mirantis, Inc.
E-Mail: mdu...@mirantis.com mailto:mdu...@mirantis.com
Skype: msdubov

On Mon, Sep 8, 2014 at 3:16 PM, Daisuke Morita
morita.dais...@lab.ntt.co.jp mailto:morita.dais...@lab.ntt.co.jp
wrote:


Hi, rally developers!

Now, I am trying to use Rally to devstack cluster on AWS VM
(all-in-one). I'm following a blog post

https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
. I successfully installed Devstack, Rally and Tempest. Now, I
just ran
Tempest by 'rally verify start' command, but the command failed
with the
following stacktrace.


2014-09-08 10:57:57.803 17176 CRITICAL rally [-] KeyError: 'admin'
2014-09-08 10:57:57.803 17176 TRACE rally Traceback (most recent
call last):
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/bin/rally,
line 10, in module
2014-09-08 10:57:57.803 17176 TRACE rally sys.exit(main())
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line
40, in main
2014-09-08 10:57:57.803 17176 TRACE rally return
cliutils.run(sys.argv, categories)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py, line
184, in run
2014-09-08 10:57:57.803 17176 TRACE rally ret = fn(*fn_args,
**fn_kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File string,
line 2, in
start
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py,
line 64,
in default_from_global
2014-09-08 10:57:57.803 17176 TRACE rally return f(*args,
**kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/commands/verify.py,
line 59, in start
2014-09-08 10:57:57.803 17176 TRACE rally api.verify(deploy_id,
set_name, regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/orchestrator/api.py,
line
153, in verify
2014-09-08 10:57:57.803 17176 TRACE rally
verifier.verify(set_name=set_name, regex=regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File

/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 247, in verify
2014-09-08 10:57:57.803 17176 TRACE rally
self._prepare_and_run(set_name, regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/utils.py, line
165, in
wrapper
2014-09-08 10:57:57.803 17176 TRACE rally result = f(self,
*args,
**kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File

/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 146, in _prepare_and_run
2014-09-08 10:57:57.803 17176 TRACE rally
  self.generate_config_file()
2014-09-08 10:57:57.803 17176 TRACE rally   File

/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 89, in generate_config_file
2014-09-08 10:57:57.803 17176 TRACE rally
config.TempestConf(self.deploy_id).generate(self.config_file)
2014-09-08 10:57:57.803 17176 TRACE rally   File

/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
line 242, in generate
2014-09-08 10:57:57.803 17176 TRACE rally func()
2014-09-08 10:57:57.803 17176 TRACE rally   File


Re: [openstack-dev] about Distributed OpenStack Cluster

2014-09-08 Thread Jesse Pretorius
On 8 September 2014 14:00, Vo Hoang, Tri t.vo_ho...@telekom.de wrote:


 Today I am searching for a solution to distribute OpenStack over several
 geographies/availability zones, and we found one from Huawei in [1]. In
 short, it's a classical centralized solution like the Eucalyptus cloud [2],
 whereby there is a central control for all clusters in a top-down topology.
 The central control actively collects available resources from each cluster
 and also proxies requests from one cluster to another.




The openstack-dev list is meant to be for the discussion of current and
future development of OpenStack itself, whereas the question you're asking
is more suited to the openstack-operators list. I encourage you to send
your question there instead.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] doubling our core review bandwidth

2014-09-08 Thread Russell Bryant
On 09/08/2014 05:17 AM, Steven Hardy wrote:
 On Mon, Sep 08, 2014 at 03:14:24PM +1200, Robert Collins wrote:
 I hope the subject got your attention :).

  This might be a side effect of my having too many cosmic rays, but it's
 been percolating for a bit.

 tl;dr I think we should drop our 'needs 2x+2 to land' rule and instead
 use 'needs 1x+2'. We can ease up a large chunk of pressure on our
 review bottleneck, with the only significant negative being that core
 reviewers may see less of the code going into the system - but they
  can always read more to stay in shape if that's an issue :)
 
 I think this may be a sensible move, but only if it's used primarily to
 land the less complex/risky patches more quickly.
 
 As has been mentioned already by Angus, +1 can (and IMO should) be used for
  any less trivial and/or risky patches, as the more-eyeballs thing is really
 important for big or complex patches (we are all fallible, and -core folks
 quite regularly either disagree, spot different types of issue, or just
 have better familiarity with some parts of the codebase than others).
 
 FWIW, every single week in the Heat queue, disagreements between -core
 reviewers result in issues getting fixed before merge, which would result
 in more bugs if the 1x+2 scheme was used unconditionally.  I'm sure other
 projects are the same, but I guess this risk can be mitigated with reviewer
 +1 discretion.

Agreed with this.  I think this is a worthwhile move for simpler
patches.  I've already done it plenty of times for a very small category
of things (like translations updates).  It would be worth having someone
write up a proposal that reflects this, with some examples that
demonstrate patches that really need the second review vs others that
don't.  In the end, it has to be based on trust in a -core team member
judgement call.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Chris Dent

On Sun, 7 Sep 2014, Monty Taylor wrote:


1. Caring about end user experience at all
2. Less features, more win
3. Deleting things


Yes. I'll give away all of my list for any one of these.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] convergence flow diagrams

2014-09-08 Thread Tyagi, Ishant
Hi All,

As per the Heat mid-cycle meetup whiteboard, we have created the flowchart and 
sequence diagram for convergence. Can you please review these diagrams and 
provide your feedback?

https://www.dropbox.com/sh/i8qbjtgfdxn4zx4/AAC6J-Nps8J12TzfuCut49ioa?dl=0

Thanks,
Ishant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] doubling our core review bandwidth

2014-09-08 Thread Sean Dague
On 09/08/2014 08:52 AM, Russell Bryant wrote:
 On 09/08/2014 05:17 AM, Steven Hardy wrote:
 On Mon, Sep 08, 2014 at 03:14:24PM +1200, Robert Collins wrote:
 I hope the subject got your attention :).

  This might be a side effect of my having too many cosmic rays, but it's
 been percolating for a bit.

 tl;dr I think we should drop our 'needs 2x+2 to land' rule and instead
 use 'needs 1x+2'. We can ease up a large chunk of pressure on our
 review bottleneck, with the only significant negative being that core
 reviewers may see less of the code going into the system - but they
  can always read more to stay in shape if that's an issue :)

 I think this may be a sensible move, but only if it's used primarily to
 land the less complex/risky patches more quickly.

 As has been mentioned already by Angus, +1 can (and IMO should) be used for
  any less trivial and/or risky patches, as the more-eyeballs thing is really
 important for big or complex patches (we are all fallible, and -core folks
 quite regularly either disagree, spot different types of issue, or just
 have better familiarity with some parts of the codebase than others).

 FWIW, every single week in the Heat queue, disagreements between -core
 reviewers result in issues getting fixed before merge, which would result
 in more bugs if the 1x+2 scheme was used unconditionally.  I'm sure other
 projects are the same, but I guess this risk can be mitigated with reviewer
 +1 discretion.
 
 Agreed with this.  I think this is a worthwhile move for simpler
 patches.  I've already done it plenty of times for a very small category
 of things (like translations updates).  It would be worth having someone
 write up a proposal that reflects this, with some examples that
 demonstrate patches that really need the second review vs others that
 don't.  In the end, it has to be based on trust in a -core team member
 judgement call.

One of the review queries I've got is for existing Nova patches which
already have one +2 on them. I'd say I find an issue in ~25% of them.
I'm assuming another 25% of them have an issue I didn't find.

2 +2 has been part of OpenStack culture for a long time, and there is a
good reason for it: it really does keep bugs out.

It should also be clear that the subject of this email really should
have been 'merging code faster', because nothing in here doubles the
review bandwidth; it just provides us with less review coverage.

I'm currently less convinced that raw core merge speed is our primary
issue right now. I think it's a symptom of accumulated debt, and
complexity growth from the features over the last couple of cycles.
Treating symptoms means we'll just be discussing this in another 6
months (if we're lucky) in a new process change thread.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] doubling our core review bandwidth

2014-09-08 Thread Flavio Percoco
On 09/08/2014 02:52 PM, Russell Bryant wrote:
 On 09/08/2014 05:17 AM, Steven Hardy wrote:
 On Mon, Sep 08, 2014 at 03:14:24PM +1200, Robert Collins wrote:
 I hope the subject got your attention :).

  This might be a side effect of my having too many cosmic rays, but it's
 been percolating for a bit.

 tl;dr I think we should drop our 'needs 2x+2 to land' rule and instead
 use 'needs 1x+2'. We can ease up a large chunk of pressure on our
 review bottleneck, with the only significant negative being that core
 reviewers may see less of the code going into the system - but they
  can always read more to stay in shape if that's an issue :)

 I think this may be a sensible move, but only if it's used primarily to
 land the less complex/risky patches more quickly.

 As has been mentioned already by Angus, +1 can (and IMO should) be used for
  any less trivial and/or risky patches, as the more-eyeballs thing is really
 important for big or complex patches (we are all fallible, and -core folks
 quite regularly either disagree, spot different types of issue, or just
 have better familiarity with some parts of the codebase than others).

 FWIW, every single week in the Heat queue, disagreements between -core
 reviewers result in issues getting fixed before merge, which would result
 in more bugs if the 1x+2 scheme was used unconditionally.  I'm sure other
 projects are the same, but I guess this risk can be mitigated with reviewer
 +1 discretion.
 
 Agreed with this.  I think this is a worthwhile move for simpler
 patches.  I've already done it plenty of times for a very small category
 of things (like translations updates).  It would be worth having someone
 write up a proposal that reflects this, with some examples that
 demonstrate patches that really need the second review vs others that
 don't.  In the end, it has to be based on trust in a -core team member
 judgement call.
 

What about simply leaving it as-is and allowing people to ninja-approve
things when they feel like it? When in doubt, reviewers should just
stick to the 2x+2 and not approve the patch.

I'm basically saying the same thing that has been proposed but based on
the current default + an exception to the rule. What changes is the way
people approach reviews. Leaving 2x+2 as the default but allowing
exceptions where people can 1x+2A is better than making the default 1x+2
and asking people not to approve patches if they think a patch requires more
reviews.

Just my $0.02, I agree with the proposal of trusting cores and letting
them do 1x+2A when they think it's worth it.

Flavio

-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder][Nova] Moving Brick out of Cinder

2014-09-08 Thread Ivan Kolodyazhny
Hi All!

I would like to start moving Cinder Brick [1] to oslo, as was described at the
Cinder mid-cycle meetup [2]. Unfortunately I missed the meetup, so I want to be
sure that nobody has started on it and that we are on the same page.

Because of the Juno-3 release, there was not enough time to discuss [3]
at the latest Cinder weekly meeting, and I would like to get some feedback
from the whole OpenStack community, so I propose to start this discussion on
the mailing list for all projects.

If nobody has started it yet and it is useful at least for both Nova and
Cinder, I would like to start this work according to the oslo guidelines [4]
and create the needed blueprints so that it is finished before Kilo-1 is over.



[1] https://wiki.openstack.org/wiki/CinderBrick
[2] https://etherpad.openstack.org/p/cinder-meetup-summer-2014
[3]
http://lists.openstack.org/pipermail/openstack-dev/2014-September/044608.html
[4] https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary

Regards,
Ivan Kolodyazhny.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Reminder: Team meeting on Tuesday at 1400 UTC

2014-09-08 Thread Kyle Mestery
Just a reminder since this is the first week we'll do the rotating
meeting. Please add agenda items to the meeting here [1].

See you all tomorrow!

Kyle

[1] https://wiki.openstack.org/wiki/Network/Meetings

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] List of granted FFEs

2014-09-08 Thread Genin, Daniel I.
Thank you, Michael.

Dan

From: Michael Still mi...@stillhq.com
Sent: Sunday, September 7, 2014 10:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] List of granted FFEs

Ahhh, I didn't realize Jay had added his name in the review. This FFE
is therefore approved.

Michael

On Mon, Sep 8, 2014 at 10:12 AM, Genin, Daniel I.
daniel.ge...@jhuapl.edu wrote:
 The FFE request thread is here:

 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg34100.html

 Daniel Berrange and Sean Dague signed up to sponsor the FFE on the mailing 
 list. Later, Jay Pipes reviewed the code and posted his agreement to sponsor 
 the FFE in his +2 comment on the patch here:

 https://review.openstack.org/#/c/40467/

 Sorry about the confusion but the email outlining the FFE process was not 
 specific about how sponsors had to register their support, just that there 
 should be 3 core sponsors.

 Dan
 
 From: Michael Still mi...@stillhq.com
 Sent: Saturday, September 6, 2014 4:50 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Nova] List of granted FFEs

 The process for requesting a FFE is to email openstack-dev and for the
 core sponsors to signup there. I've obviously missed the email
 thread... What is the subject line?

 Michael

 On Sun, Sep 7, 2014 at 3:03 AM, Genin, Daniel I.
 daniel.ge...@jhuapl.edu wrote:
 Hi Michael,

 I see that ephemeral storage encryption is not on the list of granted FFEs 
 but I sent an email to John Garbutt yesterday listing
 the 3 core sponsors for the FFE. Why was the FFE denied?

 Dan
 
 From: Michael Still mi...@stillhq.com
 Sent: Friday, September 5, 2014 5:23 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] [Nova] List of granted FFEs

 Hi,

 I've built this handy dandy list of granted FFEs, because searching
 email to find out what is approved is horrible. It would be good if
 people with approved FFEs could check their thing is listed here:

 https://etherpad.openstack.org/p/juno-nova-approved-ffes

 Michael

 --
 Rackspace Australia




 --
 Rackspace Australia




--
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Complex resource_metadata could fail to store in MongoDB

2014-09-08 Thread Igor Degtiarov
On Thu, Sep 4, 2014 at 1:16 PM, Nadya Privalova nprival...@mirantis.com
wrote:

 IMHO it's ok and even very natural to expect an escaped query from users.
 E.g., we store the following structure:

 {"metadata":
     {"Zoo":
         {"Foo.Boo": "value"}}}


 Yep, but such a structure couldn't be stored in MongoDB without replacing the
dot in Foo.Boo.




 and the query should be metadata.Zoo.Foo\.Boo .


That could be a good solution, but it is needed only if MongoDB is chosen as
the backend. So the question is: should we special-case the query only for
MongoDB, or change the queries for all backends?
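
A rough sketch of what such key quoting could look like (a hypothetical
helper, not Ceilometer code and not the HBase patch): recursively replace the
characters MongoDB forbids in field names before storing the metadata.

def quote_keys(value, table=(('.', '%2E'), ('$', '%24'), ('\x00', '%00'))):
    # Recursively quote restricted characters in dict keys; a real
    # implementation would also quote '%' itself to stay reversible.
    if isinstance(value, dict):
        quoted = {}
        for key, val in value.items():
            for char, repl in table:
                key = key.replace(char, repl)
            quoted[key] = quote_keys(val, table)
        return quoted
    return value

metadata = {'Zoo': {'Foo.Boo': 'value', 'price$': 10}}
print(quote_keys(metadata))
# {'Zoo': {'Foo%2EBoo': 'value', 'price%24': 10}}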


  In this case it's not necessary to know the depth of the tree.

 Thanks,
 Nadya



Cheers,
Igor D.





 On Fri, Aug 29, 2014 at 3:21 PM, Igor Degtiarov idegtia...@mirantis.com
 wrote:

 Hi, folks.

  I was interested in the problem of storing samples that contain
  complex resource_metadata in a MongoDB database [1].

  If the data is a dict that has key(s) with dots (i.e. .), dollar signs
  (i.e. $), or null characters, it won't be stored. This happens because these
  characters are restricted from use in field names in MongoDB [2], but so far
  there is no verification of the metadata in Ceilometer's MongoDB driver, so
  as a result we will lose data.

  The solution to this problem seemed rather simple: before storing data
  we check the keys in resource_metadata, if it is a dict, and quote keys with
  restricted characters, in a similar way as was done in the change request
  redesigning column separators in HBase [3]. After that we store the metering
  data.

  But other unexpected difficulties appear at the step of getting the data. To
  get stored data we construct a meta query, and the structure of that query
  was chosen to be identical to the native MongoDB query. So a dot is used as
  the separator between the tree nodes of stored data.

  E.g. if we need to check the value of field Foo

  {"metadata":
      {"Zoo":
          {"Foo": "value"}}}

 query would be: metadata.Zoo.Foo

  We don't know how deep the dict in metadata is, so it is impossible to
  propose any correct parsing of the query that would quote field names
  containing dots.

  I see two ways to improve this. The first is rather complex and is based on
  redesigning the structure of the metadata query in Ceilometer. I don't know
  if that is even possible.

  The second is based on removing the bad resource_metadata from the samples.
  In this case we still lose the metadata, but save the other metering data.
  Of course queries for the unsaved metadata will return nothing, so it is not
  a complete solution, but some kind of workaround.

 What do you think about that?
 Any thoughts and propositions are kindly welcome.

 [1] https://bugs.launchpad.net/mos/+bug/1360240
 [2] http://docs.mongodb.org/manual/reference/limits/
 [3] https://review.openstack.org/#/c/106376/

 -- Igor Degtiarov



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Doc team working with bugs

2014-09-08 Thread Jay Pipes

On 09/08/2014 03:46 AM, Dmitry Mescheryakov wrote:

Hello Fuelers,

On the previous meeting a topic was raised on how Fuel doc team should
work with bugs, see [1] for details. We agreed to move the discussion
into the mailing list.

The thing is there are two members in the team at the moment (Meg and
Irina) and they need to distribute work among themselves. The natural
way to distribute load is to assign bugs. But frequently they document
bugs which are in the process of being fixed, so they are already
assigned to an engineer. I.e. a bug needs to be assigned to an
engineer and a tech writer at the same time.

I've proposed to create a separate series 'docs' in launchpad (it is
the thing like '5.0.x', '5.1.x'). If bug affects several series, a
different engineer could be assigned on each of them. So doc team will
be free to assign bugs to themselves within this new series.

Mike Scherbakov and Dmitry Borodaenko objected creating another series
in launchpad. Instead they proposed to mark bugs with tags like
'docs-irina' and 'docs-meg' thus assigning them.

What do you think is the best way to handle this? As for me, I don't
have strong preference there.

One last note: the question applies to two launchpad projects
actually: Fuel and MOS. But naturally we want to do this the same way
in both projects.


Hi Dmitry!

Good question and problem! :) This limitation of Launchpad has bugged me 
for some time (pun intended). It would be great to have the ability to 
assign a bug to more than one person.


I actually think using tags is more appropriate here. Series are 
intended for releasable things. And since we'd never be releasing just 
a docs package, having a docs series doesn't really work. So, though 
it's a bit kludgy, I think using tags is the best solution here.


All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] non-deterministic gate failures due to unclosed eventlet Timeouts

2014-09-08 Thread Thierry Carrez
John Schwarz wrote:
 Long story short: for future reference, if you initialize an eventlet
 Timeout, make sure you close it (either with a context manager or simply
 timeout.close()), and be extra-careful when writing tests using
 eventlet Timeouts, because these timeouts don't implicitly expire and
 will cause unexpected behaviours (see [1]) like gate failures. In our
 case this caused non-deterministic failures on the dsvm-functional test
 suite.
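
A minimal sketch of the pattern being described (illustrative only, not the
actual Neutron test code):

import eventlet
from eventlet import Timeout

# Preferred: the context manager cancels the timeout on exit.
try:
    with Timeout(5):
        eventlet.sleep(1)   # work that must finish within 5 seconds
except Timeout:
    print('operation timed out')

# Manual form: always cancel in a finally block, otherwise the timeout
# stays armed and can fire much later in unrelated code (e.g. another test).
timeout = Timeout(5)
try:
    eventlet.sleep(1)
finally:
    timeout.cancel()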

Nice catch, John!

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] new cinderclient release this week?

2014-09-08 Thread Thierry Carrez
John Griffith wrote:
 Yes, now that RC1 is tagged I'm planning to tag a new cinderclient
 tomorrow.  I'll be sure to send an announcement out as soon as it's up.

You mean, now that juno-3 is tagged ;) RC1 is still a few weeks and a
few dozens bugfixes away.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.config 1.4.0.0a4 released - juno rc1

2014-09-08 Thread Doug Hellmann
The Oslo team is pleased to release version 1.4.0.0a4 of oslo.config, our first 
release candidate for oslo.config for juno.

This release includes:

$ git log --no-merges --oneline 1.4.0.0a3..1.4.0.0a4
6acc2dd Updated from global requirements
a590c2a Log a fixed length string of asterisks for obfuscation
46eabcc Added link to bug tracker and documentation in oslo.config readme
4c1ada2 Bump hacking to version 0.9.2

Please report issues to the NEW oslo.config bug tracker: 
https://bugs.launchpad.net/oslo.config

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-08 Thread Dan Smith
 The last few days have been interesting as I watch FFEs come through.
 People post explaining their feature, its importance, and the risk
 associated with it. Three cores sign on for review. All of the ones
 I've looked at have received active review since being posted. Would
 it be bonkers to declare nova to be in permanent feature freeze? If
 we could maintain the level of focus we see now, then we'd be getting
 heaps more done than before.
 
 Agreed. Honestly, this has been a really nice flow. I'd love to figure
 out what part of this focus is capturable for normal cadence. This
 realistically is what I was hoping slots would provide, because I feel
 like we actually move really fast when we call out 5-10 things to go
 look at this week.

The funny thing is, last week I was thinking how similar FF is to what
slots/runways would likely provide. That is, intense directed focus on a
single thing by a group of people until it's merged (or fails). Context
is kept between iterations because everyone is on board for quick
iterations with minimal distraction between them. It *does* work during
FF, as we've seen in the past -- I'd expect we have nearly 100% merge
rate of FFEs. How we arrive at a thing getting focus is different in
slots/runways, but I feel the result could be the same.

Splitting out the virt drivers is an easy way to make the life of a core
much easier, but I think the negative impacts are severe and potentially
irreversible, so I'd rather make sure we're totally out of options
before we exercise it.

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.i18n 0.3.0 released - juno rc1

2014-09-08 Thread Doug Hellmann
The Oslo team is pleased to announce the release of version 0.3.0 of oslo.i18n, 
our first release candidate for oslo.i18n for juno.

This release includes:

$ git log --oneline --no-merges 0.2.0..0.3.0
08bee34 Imported Translations from Transifex
aa251be Updated from global requirements
106387b Imported Translations from Transifex
688076e Document how to add import exceptions
a4fc251 Remove mention of Message objects from public docs

Please report issues via the NEW launchpad bug tracker: 
https://bugs.launchpad.net/oslo.i18n

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.messaging 1.4.0.0a5 released - juno rc1

2014-09-08 Thread Doug Hellmann
The Oslo team is pleased to announce the release of version 1.4.0.0a5 of 
oslo.messaging, our first release candidate for oslo.messaging for juno.

This release includes:

$ git log --oneline --no-merges 1.4.0.0a4..1.4.0.0a5
6ea3b12 Imported Translations from Transifex
fbee941 An initial implementation of an AMQP 1.0 based messaging driver
a9ec73f Switch to oslo.utils
48a9ba4 Fix Python 3 testing
265b21f Import oslo-incubator context module
f480494 Import oslo-incubator/middleware/base
7381ccd Should not send replies for cast messages
4cb33ec Port to Python 3
710dd17 Sync jsonutils from oslo-incubator
2464ca0 Add parameter to customize Qpid receiver capacity
220ccb8 Make tests pass with random python hashseed
92d5679 Set sample_default for rpc_zmq_host
dd1d6d1 Enable PEP8 check E714
ec9ffdb Enable PEP8 check E265
8151da8 Enable PEP8 check E241
ba5b547 Fix error in example of an RPC server
7fdedda Replace lambda method _
500f1e5 Enable check for E226
63a6d62 Updated from global requirements
bea9723 Add release notes for 1.4.0.0a4
d020cb8 Add release notes for stable/icehouse 1.3.1 release
6684565 Bump hacking to version 0.9.2
5fbb55b Add missing docs for list_opts()

Of special note is the experimental AMQP 1.0 driver.

Please report issues via the launchpad bug tracker: 
https://bugs.launchpad.net/oslo.messaging

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-novaclient] servers list

2014-09-08 Thread Abbass MAROUNI

Hi guys,

I'm working with the 'servers.list' call in 'nova.novaclient' and I'm 
getting different server lists depending on whether the user issuing the 
call is an admin or not.
It seems like only an admin user can filter the returned list, while a 
non-admin gets a list of all the servers no matter what filter is being 
used.
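
For context, the call looks roughly like this (a sketch; the credentials and
the filter value are placeholders):

    from novaclient.v1_1 import client

    nova = client.Client('demo', 'secret', 'demo',
                         'http://keystone.example.com:5000/v2.0')
    # As a non-admin this appears to ignore the filter and return every
    # server; as an admin the same call returns only the matching servers.
    servers = nova.servers.list(search_opts={'name': 'my-server'})
    print([s.name for s in servers])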


Is this a known issue?

Best regards,

--
--
Abbass MAROUNI
VirtualScale


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.serialization 0.2.0 released - juno rc1

2014-09-08 Thread Doug Hellmann
The Oslo team is pleased to release version 0.2.0 of oslo.serialization, our 
first release candidate for oslo.serialization for juno.

This release includes:

$ git log --no-merges --oneline 0.1.0..0.2.0
4d61c82 Check for namedtuple_as_object support before using it

Please report issues to the NEW bug tracker: 
https://bugs.launchpad.net/oslo.serialization

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-novaclient] servers list

2014-09-08 Thread Jay Pipes

On 09/08/2014 10:27 AM, Abbass MAROUNI wrote:

Hi guys,

I'm working with the 'servers.list' call in 'nova.novaclient' and I'm
getting different server lists depending on whether the user issuing the
call is an admin or not.
It seems like only an admin user can filter the returned list, while a
non-admin gets a list of all the servers no matter what filter is being
used.

Is this a known issue?


That would definitely be a bug. Please do report it with the steps you 
are taking to reproduce! :)


Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] NFV Meetings

2014-09-08 Thread Steve Gordon
- Original Message -
 From: ITAI MENDELSOHN (ITAI) itai.mendels...@alcatel-lucent.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 
 Hi,
 
 Hope you are doing good.
 Did we have a meeting last week?
 I was under the impression it was scheduled for Thursday (as in the wiki)
 but found other meetings on IRC…
 What am I missing?
 Do we have one this week?

Hi Itai,

Yes, there was a meeting last Thursday in #openstack-meeting @ 1600 UTC; the 
minutes are here:


http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-09-04-16.00.log.html

This week's meeting will be on Wednesday at 1400 UTC in #openstack-meeting-alt.

 Also,
 I sent a mail about the sub groups goals as we agreed ten days ago.
 Did you see it?
 
 Happy to hear your thoughts.

I did see this and thought it was a great attempt to re-frame the discussion (I 
think I said as much in the meeting). I'm personally still mulling over my own 
thoughts on the matter and how to respond. Maybe we will have more opportunity 
to discuss this week?

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] default allow security group

2014-09-08 Thread Brian Haley
On 09/05/2014 11:27 AM, Monty Taylor wrote:
 Hi!
 
 I've decided that as I have problems with OpenStack while using it in the
 service of Infra, I'm going to just start spamming the list.
 
 Please make something like this:
 
 neutron security-group-create default --allow-every-damn-thing

Does this work?  Sure, it's a rule in the default group and not a group itself,
but it's a one-liner:

$ neutron security-group-rule-create --direction ingress --remote-ip-prefix
0.0.0.0/0 default

 Right now, to make security groups get the hell out of our way because they do
 not provide us any value because we manage our own iptables, it takes adding
 something like 20 rules.
 
 15:24:05  clarkb | one each for ingress and egress udp tcp over ipv4
 then ipv6 and finaly icmp

I guess you mean 20 rules because there are services using ~20 different ports,
which sounds about right.  If you really didn't care you could have just opened
all of ICMP, TCP and UDP with three rules.

And isn't egress typically wide-open by default?  You shouldn't need any rules
there.

And I do fall in the more security camp - giving someone a publicly-routable
IP address with all ports open is not typically a good idea; I wouldn't want to
hear the complaints from customers on that one...

-Brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] cliff 1.7.0 released

2014-09-08 Thread Doug Hellmann
The Oslo team is pleased to release version 1.7.0 of cliff.

This release includes:

$ git log --oneline --no-merges 1.6.1..1.7.0
42675b2 Add release notes for 1.7.0
86fe20f Fix stable integration tests
bf0c611 Updated from global requirements
ac1347c Clean up default tox environment list
db4eef5 Do not allow wheels for stable tests
6bb6944 Set the main logger name to match the application
c383448 CSV formatter should use system-dependent line ending
e3bec7b Make show option compatible with Python 2.6.
9315a32 Use six.add_metaclass instead of __metaclass__
9f331fb fixed typos found by RETF rules
d150502 The --variable option to shell format is redundant
69966df Expose load_commands publicly
a37ef60 Fix wrong method name assert_called_once
4bdf5fc Updated from global requirements
90ea2b2 Fix pep8 failures on rule E265

Please report issues to the bug tracker: https://bugs.launchpad.net/python-cliff

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] stevedore 1.0.0.0a2 released -- juno rc1

2014-09-08 Thread Doug Hellmann
The Oslo team is pleased to release version 1.0.0.0a2 of stevedore, our first 
release candidate for stevedore for the OpenStack Juno cycle.

This release includes:

$ git log --oneline --no-merges 1.0.0.0a1..1.0.0.0a2
860bd8f Build universal wheels

Please report issues to the stevedore bug tracker: 
https://bugs.launchpad.net/python-stevedore

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] doubling our core review bandwidth

2014-09-08 Thread Alexis Lee
Sean Dague said on Mon, Sep 08, 2014 at 09:22:56AM -0400:
  On 09/08/2014 05:17 AM, Steven Hardy wrote:
  I think this may be a sensible move, but only if it's used primarily to
  land the less complex/risky patches more quickly.
 
 2 +2 has been part of OpenStack culture for a long time, and there is a
 good reason for it, it really does keep bugs out.
 
 It should also be clear that the subject of this email really should
 have been merging code faster, because nothing in here doubles the
 review bandwidth, it just provides us with less review coverage.

For these reasons, I'm also wary of changing this in general.

Sometimes I yell in IRC if I +2 something important; this shortens the loop
time, as hopefully I already understand the patch and can answer
questions.

Single-approvals for small + simple changes could be worth trying.
Perhaps also for large + simple changes like whitespace fixes.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Defining what is a SupportStatus version

2014-09-08 Thread Anne Gentle
On Fri, Sep 5, 2014 at 5:27 AM, Steven Hardy sha...@redhat.com wrote:

 On Fri, Sep 05, 2014 at 03:56:34PM +1000, Angus Salkeld wrote:
 On Fri, Sep 5, 2014 at 3:29 PM, Gauvain Pocentek
 gauvain.pocen...@objectif-libre.com wrote:
 
   Hi,
 
   A bit of background: I'm working on the publication of the HOT
 resources
   reference on docs.openstack.org. This book is mostly autogenerated
 from
   the heat source code, using the sphinx XML output. To avoid
 publishing
   several references (one per released version, as is done for the
   OpenStack config-reference), I'd like to add information about the
   support status of each resource (when they appeared, when they've
 been
   deprecated, and so on).
 
   So the plan is to use the SupportStatus class and its `version`
   attribute (see https://review.openstack.org/#/c/116443/ ). And the
   question is, what information should the version attribute hold?
   Possibilities include the release code name (Icehouse, Juno), or the
   release version (2014.1, 2014.2). But this wouldn't be useful for
 users
   of clouds continuously deployed.
 
   From my documenter point of view, using the code name seems the
 right
   option, because it fits with the rest of the documentation.
 
   What do you think would be the best choice from the heat devs POV?
 
 IMHO it should match the releases and tags
 (https://github.com/openstack/heat/releases).

 +1 this makes sense to me.  Couldn't we have the best of both worlds by
 having some logic in the docs generation code which maps the milestone to
 the release series, so we can say e.g

 Supported since 2014.2.b3 (Juno)
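
 As a rough illustration of what that could look like on a resource (a sketch
 only; it assumes the `version` attribute lands as proposed in the review
 above, and the value shown is just one possible format):

     from heat.engine import resource
     from heat.engine import support

     class ExampleResource(resource.Resource):
         # Doc generation could render this as
         # "Available since 2014.2.b3 (Juno)".
         support_status = support.SupportStatus(version='2014.2.b3')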



I agree with the matching of releases, but let's set expectations for how
often it'll be generated. That is to say, each tag is a bit much to ask. I
think that even each milestone is asking a bit much. How about each release
and include the final rc tag (2014.2?)

Anne



 This would provide sufficient detail to be useful to both folks consuming
 the stable releases and those trunk-chasing via CD?

 Steve

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Mike Bayer

On Sep 7, 2014, at 9:27 PM, Anita Kuno ante...@anteaya.info wrote:

 On 09/07/2014 09:12 PM, Angus Salkeld wrote:
 Lets prevent blogs like this: http://jimhconsulting.com/?p=673 by making
 users happy.
 I don't understand why you would encourage writers of blog posts you
 disagree with by sending them traffic.

Silencing users who have issues with your project is a really bad idea. If 
you want to create something great you absolutely need to be obsessed with your 
detractors and the weight of what they have to say.  Because unless they are a 
competitor engaged in outright slander, there will be some truth in it. 
Ignore criticism at your peril. Someone who takes the time to write out an 
even somewhat well-reasoned criticism is doing your project a service.

I found the above blog post very interesting as I’d like to get more data on 
what the large, perceived issues are.




 
 Anita.
 
 1) Consistent/easy upgrading.
 All projects should follow a consistent model in the way they approach
 upgrading.
 It should actually work.
 - REST versioning
 - RPC versioning
 - db (data) migrations
 - ordering of procedures and clear documentation of it.
[this has been begged for by operators, but not sure how we have
 delivered]
 
 2) HA
  - ability to continue operations after being restarted
  - functional tests to prove the above?
 
 3) Make it easier for small businesses to give OpenStack a go
  - produce standard docker images as part of ci with super simple
 instructions on running them.
 
 -Angus
 
 
 
 On Thu, Sep 4, 2014 at 1:37 AM, Joe Gordon joe.gord...@gmail.com wrote:
 
 As you all know, there has recently been several very active discussions
 around how to improve assorted aspects of our development process. One idea
 that was brought up is to come up with a list of cycle goals/project
 priorities for Kilo [0].
 
 To that end, I would like to propose an exercise as discussed in the TC
 meeting yesterday [1]:
 Have anyone interested (especially TC members) come up with a list of what
 they think the project wide Kilo cycle goals should be and post them on
 this thread by end of day Wednesday, September 10th. After which time we
 can begin discussing the results.
 The goal of this exercise is to help us see if our individual world views
 align with the greater community, and to get the ball rolling on a larger
 discussion of where as a project we should be focusing more time.
 
 
 best,
 Joe Gordon
 
 [0]
 http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
 [1]
 http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] doubling our core review bandwidth

2014-09-08 Thread Sean Dague
On 09/08/2014 11:07 AM, Alexis Lee wrote:
 Sean Dague said on Mon, Sep 08, 2014 at 09:22:56AM -0400:
 On 09/08/2014 05:17 AM, Steven Hardy wrote:
 I think this may be a sensible move, but only if it's used primarily to
 land the less complex/risky patches more quickly.

 2 +2 has been part of OpenStack culture for a long time, and there is a
 good reason for it, it really does keep bugs out.

 It should also be clear that the subject of this email really should
 have been merging code faster, because nothing in here doubles the
 review bandwidth, it just provides us with less review coverage.
 
 For these reasons, I'm also wary of changing this in general.
 
 Sometimes I yell in IRC if I +2 something important, this shortens loop
 time as hopefully I already understand the patch and can answer
 questions.
 
 Single-approvals for small + simple changes could be worth trying.
 Perhaps also for large + simple changes like whitespace fixes.

Realistically, core teams are already doing that. When something is considered
an exception, people typically provide a good reason in the review as to why
it's a fast approve, which is fine.

The large cross-cutting whitespace fixes are actually completely
problematic in their own way, because they typically force a non-trivial
rebase on tens to hundreds of patches in flight, so they generate *a ton* of
work for other people.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] non-deterministic gate failures due to unclosed eventlet Timeouts

2014-09-08 Thread Doug Hellmann

On Sep 7, 2014, at 9:39 AM, John Schwarz jschw...@redhat.com wrote:

 Hi,
 
 Long story short: for future reference, if you initialize an eventlet
 Timeout, make sure you close it (either with a context manager or simply
 timeout.close()), and be extra-careful when writing tests using
 eventlet Timeouts, because these timeouts don't implicitly expire and
 will cause unexpected behaviours (see [1]) like gate failures. In our
 case this caused non-deterministic failures on the dsvm-functional test
 suite.

It would be good to have a fixture class in oslotest to set up eventlet 
timeouts properly.
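
Something along these lines, perhaps (a sketch only; no such fixture exists in
oslotest today, and the class name is invented):

    import eventlet.timeout
    import fixtures

    class EventletTimeoutFixture(fixtures.Fixture):
        """Create an eventlet Timeout and guarantee it is cancelled."""

        def __init__(self, seconds):
            self.seconds = seconds

        def setUp(self):
            super(EventletTimeoutFixture, self).setUp()
            self.timeout = eventlet.timeout.Timeout(self.seconds)
            # addCleanup runs even when the test fails, so the Timeout never
            # leaks into eventlet's hub and affects unrelated tests.
            self.addCleanup(self.timeout.cancel)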

Doug

 
 
 Late last week, a bug was found ([2]) in which an eventlet Timeout
 object was initialized but not closed. This instance was left inside
 eventlet's inner-workings and triggered non-deterministic Timeout: 10
 seconds errors and failures in dsvm-functional tests.
 
 As mentioned earlier, initializing a new eventlet.timeout.Timeout
 instance also registers it to inner mechanisms that exist within the
 library, and the reference remains there until it is explicitly removed
 (and not until the scope leaves the function block, as some would have
 thought). Thus, the old code (simply creating an instance without
 assigning it to a variable) left no way to close the timeout object.
 This reference remains throughout the life of a worker, so this can
 (and did) affect other tests and procedures using eventlet under the
 same process. Obviously this could easily affect production-grade
 systems with very high load.
 
 For future reference:
 1) If you run into a Timeout: %d seconds exception whose traceback
 includes hub.switch() and self.greenlet.switch() calls, there might
 be a latent Timeout somewhere in the code, and a search for all
 eventlet.timeout.Timeout instances will probably produce the culprit.
 
 2) The setup used to reproduce this error for debugging purposes is a
 baremetal machine running a VM with devstack. In the baremetal machine I
 used some six dd if=/dev/zero of=/dev/null processes to simulate high CPU load
 (full command can be found at [3]), and in the VM I ran the
 dsvm-functional suite. Using only a VM with similar high CPU simulation
 fails to produce the result.
 
 [1]
 http://eventlet.net/doc/modules/timeout.html#eventlet.timeout.eventlet.timeout.Timeout.Timeout.cancel
 [2] https://review.openstack.org/#/c/119001/
 [3]
 http://stackoverflow.com/questions/2925606/how-to-create-a-cpu-spike-with-a-bash-command
 
 
 --
 John Schwarz,
 Software Engineer, Red Hat.
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Anita Kuno
On 09/08/2014 11:12 AM, Mike Bayer wrote:
 
 On Sep 7, 2014, at 9:27 PM, Anita Kuno ante...@anteaya.info wrote:
 
 On 09/07/2014 09:12 PM, Angus Salkeld wrote:
 Lets prevent blogs like this: http://jimhconsulting.com/?p=673 by making
 users happy.
 I don't understand why you would encourage writers of blog posts you
 disagree with by sending them traffic.
 
 silencing users who have issues with your project is a really bad idea.If 
 you want to create something great you absolutely need to be obsessed with 
 your detractors and the weight of what they have to say.  Because unless they 
 are a competitor engaged in outright slander, there will be some truth in it. 
   Ignore criticism at your peril.Someone who takes the time to write out 
 an even somewhat well reasoned criticism is doing your project a service.
 
 I found the above blog post very interesting as I’d like to get more data on 
 what the large, perceived issues are.
 
Wow, we are really taking liberties with my question today.

What part of any of my actions, current or previous, has led you to
believe that I want to silence anyone now, or ever have? I am curious
what led you to believe that silencing users was the motivation for my
question to Angus.

I now see, through asking Angus for clarity which he did provide (not
silencing him, you will notice), that Angus' motivation was prevention
of poor user experience through better attention.

I am highly aware, particularly in the area in which I work - the third
party space - of how strongly anything we ask or expect contributors to do
shapes their behaviour, particularly for new contributors and contributors
who don't have English as a first language.
Many times what seems to be a reasonable comment or expectation can be
taken completely out of context by folks who don't have English as a
first language and don't have the cultural context and filters that
English speakers have.

Actually my question was motivated from a user experience point of view -
that of the third-party user - since I am only too aware of what kind of
questions and confusion many comments cause because the commenter
doesn't take the non-English speaker's point of view into account.

By clarifying Angus' motivation with Angus, hopefully his meaning -
create better user experiences, and better relationships with users -
has come through.

And I agree with all of your points, which is why I take such pains to
create clarity on the mailing lists and other communication.

Thanks,
Anita.
 
 
 

 



Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Mike Bayer

On Sep 7, 2014, at 8:14 PM, Monty Taylor mord...@inaugust.com wrote:

 
 
 2. Less features, more win
 
 In a perfect world, I'd argue that we should merge exactly zero new features 
 in all of kilo, and instead focus on making the ones we have work well. Some 
 of making the ones we have work well may wind up feeling just like writing 
 features, as I imagine some of our features are probably only half features 
 in the first place.
 
 3. Deleting things
 
 We should delete a bunch of code. Deleting code is fun, and it makes your 
 life better, because it means you have less code. So we should start doing 
 it. In particular, we should look for places where we wrote something as part 
 of OpenStack because the python community did not have a thing already, but 
 now there is one. In those cases, we should delete ours and use theirs. Or we 
 should contribute to theirs if it's not quite good enough yet. Or we should 
 figure out how to make more of the oslo libraries things that can truly 
 target non-OpenStack things.
 

I have to agree that “Deleting things” is the best, best thing.  Anytime you 
can refactor around things and delete more code, a weight is lifted; your code 
becomes easier to understand, maintain, and expand upon.  Simpler code then 
gives way to refactorings that you couldn’t even see earlier, and sometimes you 
can even get a big performance boost once a bunch of supporting code 
reveals itself to be superfluous.  This is most critical for OpenStack as 
OpenStack is written in Python, and for as long as we have to stay on the 
cPython interpreter, the number of function calls is directly proportional to how 
slow something is.  Function calls are enormously expensive in Python.

Something that helps greatly with the goal of “Deleting things” is to reduce 
dependencies between systems. In SQLAlchemy, the kind of change I’m usually 
striving for is one where we take a module that does one Main Thing, but then 
has a bunch of code spread throughout it to do some Other Thing that is really 
much less important, but complicates the Main Thing.  What we do is reorganize 
the crap out of it and get the Other Thing out of the core Main Thing, move it 
out to a totally optional “extension” module that bothers no one, and we 
essentially forget about it because nobody ever uses it (examples include 
http://docs.sqlalchemy.org/en/rel_0_9/changelog/migration_08.html#instrumentationmanager-and-alternate-class-instrumentation-is-now-an-extension,
http://docs.sqlalchemy.org/en/rel_0_9/changelog/migration_08.html#mutabletype).
When we make these kinds of changes, major performance enhancements come 
right in - the Main Thing no longer has to worry about those switches and left 
turns introduced by the Other Thing, and tons of superfluous logic can be 
thrown away.  SQLAlchemy’s architecture gains from these kinds of changes 
with every major release, and 1.0 is no exception.

This is not quite the same as “Deleting things” but it has more or less the 
same effect; you isolate code that everyone uses from code that only some 
people occasionally use.  In SQLAlchemy specifically, we have the issue of 
individual database dialects that are still packaged along with the core; e.g. 
there is sqlalchemy.dialects.mysql, sqlalchemy.dialects.postgresql, etc.  However, 
a few years back I went through a lot of effort to modernize the system by which 
users can provide their own database backends; not only can you provide your 
own custom backend using setuptools entry points, I also made a major 
reorganization of SQLAlchemy’s test suite to produce the “dialect test suite”, 
so that when you write your custom dialect, you can actually run a large, 
pre-fabricated test suite out of SQLAlchemy’s core against your dialect, 
without the need for your dialect to actually be *in* SQLAlchemy.  There 
were many wins from this system, including that it forced me to write lots of 
tests that were very well focused on testing specifically what a dialect needs 
to do, in isolation from anything SQLAlchemy itself needs to do.  It allowed a 
whole batch of new third-party dialects like those for Amazon Redshift, 
FoundationDB, and MonetDB, and also was a huge boon to IBM’s DB2 driver, which I 
helped to get onto the new system.  And since then I’ve been able to go into 
SQLAlchemy and dump out lots of old dialects that are much better off being 
maintained separately, at a different level of velocity and hopefully by 
individual contributors who are interested in them, like MS Access, Informix, 
MaxDB, and Drizzle.  Having all these dialects in one big codebase only served 
as a weight on the project, and theoretically it wouldn’t be a bad idea for 
SQLA to have *all* dialects as separate projects, but we’re not there yet.
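
For anyone unfamiliar with the mechanism, the out-of-tree hookup is essentially
one entry point in the dialect's setup.py ('sqlalchemy.dialects' is the real
entry-point group; the project and class names below are invented):

    from setuptools import setup

    setup(
        name='sqlalchemy-exampledb',
        packages=['sqlalchemy_exampledb'],
        entry_points={
            'sqlalchemy.dialects': [
                # create_engine("exampledb://...") now resolves to this class
                # without the dialect having to live inside SQLAlchemy itself.
                'exampledb = sqlalchemy_exampledb.dialect:ExampleDBDialect',
            ],
        },
    )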

The only reason I’m rambling on about SQLAlchemy’s Core/Dialect dichotomy is 
just that I was very much *reminded* of it by the thread regarding Nova and the 
various “virt” drivers.  I know 

Re: [openstack-dev] [Fuel] SSL in Fuel

2014-09-08 Thread Adam Lawson
blueprints for non-self-signed certs/PKI for starters.


*Adam Lawson*
*CEO, Principal Architect*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072



On Mon, Sep 8, 2014 at 2:49 AM, Sebastian Kalinowski 
skalinow...@mirantis.com wrote:

 Hi all,

 As the next step in improving Fuel security we are introducing SSL for both
 the Fuel [1] and OS API endpoints [2]. Both specs assume usage of self-signed
 certificates generated by Fuel.
 It is also required to allow users to use their own certs to secure their
 deployments
 (two blueprints that touch that part are [3] and [4]).

 We would like to start a discussion to see what opinions (and maybe ideas)
 you
 have for that feature.
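
 For reference, the self-signed case boils down to something like this (a
 rough sketch, not Fuel code; the paths and subject are placeholders):

     import subprocess

     def make_self_signed_cert(cert_path, key_path, common_name):
         # Generates a throwaway key pair and certificate; users who bring
         # their own CA-signed certs would simply skip this step.
         subprocess.check_call([
             'openssl', 'req', '-x509', '-nodes', '-newkey', 'rsa:2048',
             '-days', '365', '-subj', '/CN=%s' % common_name,
             '-keyout', key_path, '-out', cert_path,
         ])

     make_self_signed_cert('/etc/fuel/cert.pem', '/etc/fuel/key.pem',
                           'fuel.example.com')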

 Best,
 Sebastian

 [1] https://review.openstack.org/#/c/119330
 [2] https://review.openstack.org/#/c/102273
 [3] https://blueprints.launchpad.net/fuel/+spec/ca-deployment
 [4] https://blueprints.launchpad.net/fuel/+spec/manage-ssl-certificate

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Should docker plugin remove containers on delete?

2014-09-08 Thread Zane Bitter

On 01/09/14 19:18, Steve Baker wrote:

On 02/09/14 05:58, Lars Kellogg-Stedman wrote:

Hello all,

I recently submitted this change:

   https://review.openstack.org/#/c/118190/

This causes the Docker plugin to *remove* containers on delete,
rather than simply *stopping* them.  When creating named containers,
the stop but do not remove behavior would cause conflicts when trying
to re-create the stack.

Do folks have an opinion on which behavior is correct?


Removing after stopping seems reasonable.


+1


If we wanted to support both
behaviors then that resource could always have deletion_policy: Retain
implemented.


I think you mean Snapshot (every resource supports Retain, because it's 
just a NOOP), which we may well be able to do something with. However, I 
think the correct way of stopping (and starting) containers is to 
implement the Suspend/Resume lifecycle operations on the resource.
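
Roughly what that could look like in the plugin (a sketch; it assumes the
resource keeps a docker-py client handle behind a get_client() helper, and
handle_suspend/handle_resume are the Heat lifecycle hooks in question):

    from heat.engine import resource

    class DockerContainer(resource.Resource):
        # ... existing properties and create/delete handlers ...

        def handle_suspend(self):
            # Stop the container but keep it, so resume can restart it.
            self.get_client().stop(self.resource_id)

        def handle_resume(self):
            self.get_client().start(self.resource_id)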


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Mike Bayer

On Sep 8, 2014, at 11:30 AM, Anita Kuno ante...@anteaya.info wrote:

 Wow, we are really taking liberties with my question today.
 
 What part of any of my actions current or previous have led you to
 believe that I want to now or ever have silenced anyone? I am curious
 what led you to believe that silencing users was the motivation for my
 question of Angus.

I was only replying to your single message in isolation of the full 
conversation; the notion that one would not want to send traffic to a blog 
because they disagree with it, at face value seems like a bad idea.  Apparently 
that isn’t the meaning you wished to convey, so I apologize for missing the 
larger context.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] doubling our core review bandwidth

2014-09-08 Thread Kevin L. Mitchell
 tl;dr I think we should drop our 'needs 2x+2 to land' rule and instead
 use 'needs 1x+2'. We can ease up a large chunk of pressure on our
 review bottleneck, with the only significant negative being that core
 reviewers may see less of the code going into the system - but they
 can always read more to stay in shape if thats an issue :)

I'm going to make a tangential but somewhat related suggestion.  Instead
of reducing the number of +2s required, what would happen if we gave our
more trusted but non-core reviewers the ability to +2, but leave +A in
the hands of the existing cores?  That way, their reviews can be counted
by the core reviewers.  With this change in policy, you still need two
+2s, but you have more people that can +2, and you only need one of our
limited number of cores to review.
-- 
Kevin L. Mitchell kevin.mitch...@rackspace.com
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Anita Kuno
On 09/08/2014 12:00 PM, Mike Bayer wrote:
 
 On Sep 8, 2014, at 11:30 AM, Anita Kuno ante...@anteaya.info wrote:
 
 Wow, we are really taking liberties with my question today.

 What part of any of my actions current or previous have led you to
 believe that I want to now or ever have silenced anyone? I am curious
 what led you to believe that silencing users was the motivation for my
 question of Angus.
 
 I was only replying to your single message in isolation of the full 
 conversation; the notion that one would not want to send traffic to a blog 
 because they disagree with it, at face value seems like a bad idea.
Actually the word used was prevent, and if I personally want to prevent
something I don't encourage it by giving it attention.

Not understanding something due to disagreement with it is, I agree, a
perspective which is limiting and which ultimately does the party at the
heart of the discussion the most harm; it is self-defeating, yes.

  Apparently that isn’t the meaning you wished to convey, so I apologize for 
 missing the larger context.
I appreciate you taking the time to talk to me so that we might
understand each other better. Thank you, Mike, I'm grateful for your
time with this.

Thanks,
Anita.
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Averting the Nova crisis by splitting out virt drivers

2014-09-08 Thread Sylvain Bauza


On 08/09/2014 18:06, Steven Dake wrote:

On 09/05/2014 06:10 AM, Sylvain Bauza wrote:


On 05/09/2014 12:48, Sean Dague wrote:

On 09/05/2014 03:02 AM, Sylvain Bauza wrote:

On 05/09/2014 01:22, Michael Still wrote:

On Thu, Sep 4, 2014 at 5:24 AM, Daniel P. Berrange
berra...@redhat.com wrote:

[Heavy snipping because of length]


The radical (?) solution to the nova core team bottleneck is thus to
follow this lead and split the nova virt drivers out into separate
projects and delegate their maintainence to new dedicated teams.

   - Nova becomes the home for the public APIs, RPC system, database
 persistent and the glue that ties all this together with the
 virt driver API.

   - Each virt driver project gets its own core team and is 
responsible

 for dealing with review, merge  release of their codebase.
I think this is the crux of the matter. We're not doing a great 
job of

landing code at the moment, because we can't keep up with the review
workload.

So far we've had two proposals mooted:

   - slots / runways, where we try to rate limit the number of things
we're trying to review at once to maintain focus
   - splitting all the virt drivers out of the nova tree

 Ahem, IIRC, there is a third proposal for Kilo:
   - create subteams of half-cores responsible for reviewing patch
 iterations and sending approval requests to cores once they consider the
 patch stable enough.

 As I explained, it would allow us to free up reviewing time for cores
 without losing control over what is being merged.
I don't really understand how the half core idea works outside of a math
equation, because the point of core is to have trust in the
judgement of your fellow core members so that they can land code when
you aren't looking. I'm not sure how I manage to build up half trust in
someone any quicker.


Well, this thread is becoming huge, so it's becoming hard to follow 
all the discussion, but I explained the idea elsewhere. Let me just 
provide it here too:
The idea is *not* to have patches landed by the halfcores. The core team will 
still be fully responsible for approving patches. The main problem in 
Nova is that cores spend lots of time because they review each 
iteration of a patch, and also have to judge whether a patch is good or 
not.


That's really time-consuming and, most of the time, quite 
frustrating, as it requires following the patch's life, so there is a 
high risk that your core attention becomes distracted over the 
life of the patch.


Here, the idea is to dramatically reduce this time by having teams 
dedicated to specific areas (as is already done informally by the 
majority of reviewers anyway) who could, on their own, take the time to 
review all the iterations. Of course, that doesn't mean cores 
would lose the possibility to specifically follow a patch and bypass 
the halfcores; that's just to help them if they're overwhelmed.


On the question of trusting cores or halfcores, I can just say 
that the Nova team needs to grow or be divided anyway, so that 
delegation of trust has to become real either way.


This whole process is IMHO very encouraging for newcomers because 
it creates dedicated teams that could help them to improve their 
changes, instead of waiting 2 months to get a -1 and a frank reply.



Interesting idea, but having been core on Heat for ~2 years, it is 
critical to be involved in the review from the beginning of the patch 
set.  Typically you won't see core reviewers participate in a review 
that is already being handled by two core reviewers.


The reason it is important from the beginning of the change request is 
that the project core can store the iterations and purpose of the 
change in their heads.  Delegating all that up-front work to a 
non-core just seems counter to the entire process of code reviews. 
Better would be to reduce the # of reviews in the queue (what is proposed 
by this change) or to trust new reviewers faster.  I'm not sure how you 
do that - but this second model is what you're proposing.


I think one thing that would be helpful is to point out somehow in the 
workflow that two core reviewers are involved in the review so core 
reviewers don't have to sift through 10 pages of reviews to find new 
work.




Now that the specs repo is in place and has been proven with Juno, most 
of the design stage is approved before the implementation starts. If 
the cores got more time because they weren't focused on each 
single patchset, they could really find some patches they would like to 
look at, or they could just wait for the half-approvals from the halfcores.


If a core thinks that a patch is tricky enough to warrant looking at each 
iteration, I don't see any problem with that. At least, it's up to the core 
reviewer to choose which patches he could look at, and he would be more 
free than under the slots proposal.


I'm a core from a tiny project but I know how time consuming it is. I 
would really enjoy if 

[openstack-dev] [Mistral] Team meeting minutes/log - 08/09/2014

2014-09-08 Thread Renat Akhmerov
Thanks for joining us today at our team meeting!

Meeting minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-09-08-16.00.html
Meeting log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-09-08-16.00.log.html
Agenda/Meeting archive: https://wiki.openstack.org/wiki/Meetings/MistralAgenda

The next meeting will be at the same time/place on Sep 15.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-08 Thread Dmitry Mescheryakov
Hello Fuelers,

Right now we have the following policy in place: the branches for a
release are opened only after its 'parent' release has reached hard
code freeze (HCF). Say, the 5.1 release is the parent release for 5.1.1 and
6.0.

And that is the problem: if the parent release is delayed, we can't
properly start development of a child release because we don't have
branches to commit to. That is the current issue with 6.0: we have already
started to work on pushing Juno into 6.0, but if we are to make changes to
our deployment code we have nowhere to store them.

IMHO the issue could easily be resolved by creating pre-release
branches, which are merged with the parent branches once the
parent reaches HCF. Say, we use a branch 'pre-6.0' for initial
development of 6.0. Once 5.1 reaches HCF, we merge pre-6.0 into master
and continue development there. After that, pre-6.0 is abandoned.

What do you think?

Thanks,

Dmitry

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Reminder: Tempest Bug Day: Tuesday September 9 (tomorrow)

2014-09-08 Thread David Kranz


It's been a while since we had a bug day. We now have 121 (now 124) NEW 
bugs:


https://bugs.launchpad.net/tempest/+bugs?field.searchtext=field.status%3Alist=NEWorderby=-importance

The first order of business is to triage these bugs. This is a large 
enough number that I hesitate to
mention anything else, but there are also many In Progress bugs that 
should be looked at to see if they should
be closed, or have the assignee removed if no work is actually planned:

https://bugs.launchpad.net/tempest/+bugs?search=Searchfield.status=In+Progress

I hope we will see a lot of activity on this bug day. During the 
Thursday meeting right after, we can see if
there are ideas for how to manage the bugs on a more steady-state basis. 
We could also discuss how the grenade and
devstack bugs should fit into such activities.

-David



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday September 9th at 19:00 UTC

2014-09-08 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday, September 9th, at 19:00 UTC in #openstack-meeting.

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Brad Topol
Monty,

+1!!   I fully agree!!  How can I help :-)?  Can we dedicate some design 
summit sessions to this topic?  Ideally, having some stakeholder-driven 
sessions where we can hear about the user experience issues causing the 
most pain would be a good start to get this to become a focus area.

Thanks,

Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Monty Taylor mord...@inaugust.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, 
Date:   09/07/2014 08:15 PM
Subject:Re: [openstack-dev] Kilo Cycle Goals Exercise



On 09/03/2014 08:37 AM, Joe Gordon wrote:
 As you all know, there has recently been several very active discussions
 around how to improve assorted aspects of our development process. One 
idea
 that was brought up is to come up with a list of cycle goals/project
 priorities for Kilo [0].

 To that end, I would like to propose an exercise as discussed in the TC
 meeting yesterday [1]:
 Have anyone interested (especially TC members) come up with a list of 
what
 they think the project wide Kilo cycle goals should be and post them on
 this thread by end of day Wednesday, September 10th. After which time we
 can begin discussing the results.
 The goal of this exercise is to help us see if our individual world 
views
 align with the greater community, and to get the ball rolling on a 
larger
 discussion of where as a project we should be focusing more time.

If I were king ...

1. Caring about end user experience at all

It's pretty clear, if you want to do things with OpenStack that are not 
running your own cloud, that we collectively have not valued the class 
of user who is a person who wants to use the cloud. Examples of this 
are that the other day I had to read a section of the admin guide to 
find information about how to boot a nova instance with a cinder volume 
attached all in one go. Spoiler alert, it doesn't work. Another spoiler 
alert - even though the python client has an option for requesting that 
a volume that is to be attached on boot be formatted in a particular 
way, this does not work for cinder volumes, which means it does not work 
for an end user - EVEN THOUGH this is a very basic thing to want.

Our client libraries are clearly not written with end users in mind, and 
this has been the case for quite some time. However, openstacksdk is not 
yet to the point of being usable for end users - although good work is 
going on there to get it to be a basis for an end user python library.

We give deployers so much flexibility, that in order to write even a 
SIMPLE program that uses OpenStack, an end user has to know generally 
four or five pieces of information to check for that are different ways 
that a deployer may have decided to do things.

Example:

  - As a user, I want a compute instance that has an IP address that can 
do things.

WELL, now you're screwed, because there is no standard way to do that. 
You may first want to try booting your instance and then checking to see 
if nova returns a network labeled public. You may get no networks. 
This indicates that your provider decided to deploy neutron, but as part 
of your account creation did not create default networks. You now need 
to go create a router, network and port in neutron. Now you can try 
again. Or, you may get networks back, but neither of them are labeled 
public - instead, you may get a public and a private address back in 
the network labeled private. Or, you may only get a private network 
back. This indicates that you may be expected to create a thing called a 
floating-ip. First, you need to verify that your provider has 
installed the floating-ip's extension. If they have, then you can create 
a floating-ip and attach it to your host. NOW - once you have those 
things done, you need to connect to your host and verify that its 
outbound networking has not been blocked by a thing called security 
groups, which you also may not have been expecting to exist, but I'll 
stop there, because the above is long enough.
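
To make the point concrete, the kind of logic a task-oriented library would
have to hide behind a single call looks roughly like this (the names are
invented for illustration; this is not an existing novaclient/neutronclient
API):

    def get_reachable_address(cloud, server):
        # Case 1: the provider returns a network labeled "public".
        networks = server.networks
        if networks.get('public'):
            return networks['public'][0]
        # Case 2: no public network, but the floating-ip extension exists.
        if cloud.has_extension('floating-ip'):
            fip = cloud.create_floating_ip()
            cloud.attach_floating_ip(server, fip)
            return fip.address
        # Case 3: fall back to whatever private address exists and hope that
        # routing (and the default security groups) allow traffic.
        return networks.get('private', [None])[0]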

Every. Single. One. Of. Those. Cases. is real and has occurred across 
only the two public openstack clouds that infra uses. That means that 
our provisioning code takes every single one of them in to account, and 
anyone who writes code that wants to get a machine to use must take them 
all in to account or else their code is buggy.

That's RIDICULOUS. So we should fix it. I'd say we should fix it by 
removing 1000% of the choices we've given deployers in this case, but I 
won't win there. So how about let's make at least one client library 
that encodes all of the above logic behind some simple task-oriented API 
calls? How about we make that library not something which is just a 
repackaging of requests that does not contain intelligence, but instead 
something that is fundamentally usable. How about we have 

Re: [openstack-dev] doubling our core review bandwidth

2014-09-08 Thread Zane Bitter

On 07/09/14 23:43, Morgan Fainberg wrote:

## avoiding collaboration between bad actors

The two core requirement means that it takes three people (proposer +
2 core) to collaborate on landing something inappropriate (whether its
half baked, a misfeature, whatever). Thats only 50% harder than 2
people (proposer + 1 core) and its still not really a high bar to
meet. Further, we can revert things.

Solid assessment. I tend to agree with this point. If you are going to have bad 
actors try and get code in you will have bad actors trying to get code in. The 
real question is: how many (if any) extra reverts will be needed in the case of 
bad actors? My guess is 1 per bad actor (which that actor is likely no longer 
going to be core), if there are even any bad actors out there.


I think this misses the point, which isn't so much to prevent bad actors 
(and I don't think we have any of those). It's to protect good (and 
sometimes maybe slightly misguided) actors from any perception that they 
might be behaving as bad actors.


I think Rob missed another possible benefit off the list: it allows us 
to add core team members more aggressively than we might if adding 
someone meant allowing them to approve patches by themselves.


I'm not convinced that dropping the 2 x +2 is the right trade-off, 
though I would definitely support more official documentation of the 
wiggle room available for reviewer discretion, such as what Flavio 
suggested. In Heat we agreed on a policy of allowing immediate approval 
in the case where you're effectively reviewing a rebase of or a minor 
fix to a patchset that already had the assent in principle of two core 
reviewers. I rarely see anyone actually do it though, I think in part 
because the OpenStack-wide documentation makes it sound very naughty. I 
was interested to learn from this thread that many programs appear to 
have informally instituted something similar.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Tim Bell

The End User working group is being described at 
https://wiki.openstack.org/wiki/End_User_Working_Group. Chris Kemp is 
establishing the structure.

This page covers how to get involved...

Tim

From: Brad Topol [mailto:bto...@us.ibm.com]
Sent: 08 September 2014 19:50
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Kilo Cycle Goals Exercise

Monty,

+1!!   I fully agree!!  How can I help :-)?  Can we dedicate some design summit 
sessions to this topic?  Ideally,  having some stakeholder driven sessions 
where we can hear about the user experiences issues causing the most pain would 
be a good start to get this to become a focus area.

Thanks,

Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.commailto:bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:Monty Taylor mord...@inaugust.commailto:mord...@inaugust.com
To:OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org,
Date:09/07/2014 08:15 PM
Subject:Re: [openstack-dev] Kilo Cycle Goals Exercise




On 09/03/2014 08:37 AM, Joe Gordon wrote:
 As you all know, there has recently been several very active discussions
 around how to improve assorted aspects of our development process. One idea
 that was brought up is to come up with a list of cycle goals/project
 priorities for Kilo [0].

 To that end, I would like to propose an exercise as discussed in the TC
 meeting yesterday [1]:
 Have anyone interested (especially TC members) come up with a list of what
 they think the project wide Kilo cycle goals should be and post them on
 this thread by end of day Wednesday, September 10th. After which time we
 can begin discussing the results.
 The goal of this exercise is to help us see if our individual world views
 align with the greater community, and to get the ball rolling on a larger
 discussion of where as a project we should be focusing more time.

If I were king ...

1. Caring about end user experience at all

It's pretty clear, if you want to do things with OpenStack that are not
running your own cloud, that we collectively have not valued the class
of user who is a person who wants to use the cloud. Examples of this
are that the other day I had to read a section of the admin guide to
find information about how to boot a nova instance with a cinder volume
attached all in one go. Spoiler alert, it doesn't work. Another spoiler
alert - even though the python client has an option for requesting that
a volume that is to be attached on boot be formatted in a particular
way, this does not work for cinder volumes, which means it does not work
for an end user - EVEN THOUGH this is a very basic thing to want.

Our client libraries are clearly not written with end users in mind, and
this has been the case for quite some time. However, openstacksdk is not
yet to the point of being usable for end users - although good work is
going on there to get it to be a basis for an end user python library.

We give deployers so much flexibility, that in order to write even a
SIMPLE program that uses OpenStack, an end user has to know generally
four or five pieces of information to check for that are different ways
that a deployer may have decided to do things.

Example:

 - As a user, I want a compute instance that has an IP address that can
do things.

WELL, now you're screwed, because there is no standard way to do that.
You may first want to try booting your instance and then checking to see
if nova returns a network labeled public. You may get no networks.
This indicates that your provider decided to deploy neutron, but as part
of your account creation did not create default networks. You now need
to go create a router, network and port in neutron. Now you can try
again. Or, you may get networks back, but neither of them are labeled
public - instead, you may get a public and a private address back in
the network labeled private. Or, you may only get a private network
back. This indicates that you may be expected to create a thing called a
floating-ip. First, you need to verify that your provider has
installed the floating-ip's extension. If they have, then you can create
a floating-ip and attach it to your host. NOW - once you have those
things done, you need to connect to your host and verify that its
outbound networking has not been blocked by a thing called security
groups, which you also may not have been expecting to exist, but I'll
stop there, because the above is long enough.

Every. Single. One. Of. Those. Cases. is real and has occurred across
only the two public openstack clouds that infra uses. That means that
our provisioning code takes every single one of them into account, and
anyone who writes code that wants to get a machine to use must take them
all into account or else their code is buggy.
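
Purely for illustration, here is a rough sketch of the kind of branching such
code ends up doing. This is not infra's actual provisioning code, and the
values and python-novaclient calls are just a sketch of that API from memory -
treat it as an illustration, not a recipe:

from novaclient.v1_1 import client

# All of these values are placeholders for illustration only.
nova = client.Client('myuser', 'mypassword', 'myproject',
                     'https://example.com:5000/v2.0/')
server = nova.servers.create('demo', image='IMAGE_UUID', flavor='1')
# ... poll nova.servers.get(server.id) until its status is ACTIVE ...

addresses = server.networks  # dict of network label -> list of addresses
if 'public' in addresses:
    ip = addresses['public'][0]
elif addresses.get('private'):
    # a routable address may be hiding on 'private', or we may need a
    # floating IP - if that extension is even enabled on this cloud
    fip = nova.floating_ips.create()
    server.add_floating_ip(fip.ip)
    ip = fip.ip
else:
    # no networks at all: go create a router/network/port in neutron first
    raise RuntimeError('no usable network')
# ...and even then, the default security group may still block traffic.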


Re: [openstack-dev] [Neutron] - reading router external IPs

2014-09-08 Thread Kevin Benton
IIUC, the required policy change is to allow a tenant to list ports that
don't belong to them. I don't think policy.json is powerful enough to
allow tenants to list their external ports but no other ports they don't
own.
On Sep 8, 2014 10:30 AM, Carl Baldwin c...@ecbaldwin.net wrote:

 I think there could be some discussion about the validity of this as a
 bug report vs a feature enhancement.  Personally, I think I could be
 talked into accepting a small change to address this bug, but I
 won't try to speak for everyone.

 This bug report [1] -- linked by devvesa to the bug report to which
 Kevin linked -- suggests that the external IP address can be seen by
 an admin user.  Is there a policy.json setting that can be set at
 deployment time to allow this without making a change to the code
 base?

 Carl

 [1] https://bugs.launchpad.net/neutron/+bug/1189358

 On Sun, Sep 7, 2014 at 3:41 AM, Kevin Benton blak...@gmail.com wrote:
  https://review.openstack.org/#/c/83664/

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Joshua Harlow
Amen to this!

I've always felt bad that before yahoo tries to include a new feature in its 
openstack cloud/s we have to figure out how much the feature is a land-mine, 
how much of it works, how much of it doesn't and so-on. That type of 
investigation imho shouldn't really be needed and the fact that it is makes me 
want more and more a stability cycle or two (or three).

More and more recently I've been thinking that we have spent way too much on 
drivers and features and not enough on our own 'infrastructure'. 

While of course there is a balance, it just seems like the balance currently 
isn't right (IMHO).

Maybe we should start asking ourselves why it is so much easier to add a driver 
than to land cross-project functionality like gantt (or centralized quota 
management or other...) that removes some of those land-mines. When it becomes 
easier to land gantt than a new driver, then I think we might be in a better 
place. After all, it is our infrastructure, not new drivers, that makes the 
project a long-term success.

Just my 2 cents,

Josh

On Sep 7, 2014, at 5:14 PM, Monty Taylor mord...@inaugust.com wrote:

 On 09/03/2014 08:37 AM, Joe Gordon wrote:
 As you all know, there has recently been several very active discussions
 around how to improve assorted aspects of our development process. One idea
 that was brought up is to come up with a list of cycle goals/project
 priorities for Kilo [0].
 
 To that end, I would like to propose an exercise as discussed in the TC
 meeting yesterday [1]:
 Have anyone interested (especially TC members) come up with a list of what
 they think the project wide Kilo cycle goals should be and post them on
 this thread by end of day Wednesday, September 10th. After which time we
 can begin discussing the results.
 The goal of this exercise is to help us see if our individual world views
 align with the greater community, and to get the ball rolling on a larger
 discussion of where as a project we should be focusing more time.
 
 If I were king ...
 
 1. Caring about end user experience at all
 
 It's pretty clear, if you want to do things with OpenStack that are not 
 running your own cloud, that we collectively have not valued the class of 
 user who is a person who wants to use the cloud. Examples of this are that 
 the other day I had to read a section of the admin guide to find information 
 about how to boot a nova instance with a cinder volume attached all in one 
 go. Spoiler alert, it doesn't work. Another spoiler alert - even though the 
 python client has an option for requesting that a volume that is to be 
 attached on boot be formatted in a particular way, this does not work for 
 cinder volumes, which means it does not work for an end user - EVEN THOUGH 
 this is a very basic thing to want.
 
 Our client libraries are clearly not written with end users in mind, and this 
 has been the case for quite some time. However, openstacksdk is not yet to 
 the point of being usable for end users - although good work is going on 
 there to get it to be a basis for an end user python library.
 
 We give deployers so much flexibility, that in order to write even a SIMPLE 
 program that uses OpenStack, an end user has to know generally four or five 
 pieces of information to check for that are different ways that a deployer 
 may have decided to do things.
 
 Example:
 
 - As a user, I want a compute instance that has an IP address that can do 
 things.
 
 WELL, now you're screwed, because there is no standard way to do that. You 
 may first want to try booting your instance and then checking to see if nova 
 returns a network labeled public. You may get no networks. This indicates 
 that your provider decided to deploy neutron, but as part of your account 
 creation did not create default networks. You now need to go create a router, 
 network and port in neutron. Now you can try again. Or, you may get networks 
 back, but neither of them are labeled public - instead, you may get a 
 public and a private address back in the network labeled private. Or, you may 
 only get a private network back. This indicates that you may be expected to 
 create a thing called a floating-ip. First, you need to verify that your 
 provider has installed the floating-ip's extension. If they have, then you 
 can create a floating-ip and attach it to your host. NOW - once you have 
 those things done, you need to connect to your host and verify that its 
 outbound networking has not been blocked by a thing called security groups, which you also may 
not have been expecting to exist, but I'll stop there, because the above is 
long enough.
 
 Every. Single. One. Of. Those. Cases. is real and has occurred across only 
 the two public openstack clouds that infra uses. That means that our 
 provisioning code takes every single one of them in to account, and anyone 
 who writes code that wants to get a machine to use must take them all in to 
 account or else their code is buggy.
 
 That's 

[openstack-dev] [oslo] Anticipated Final Release Versions for Juno

2014-09-08 Thread Doug Hellmann
I spent some time today looking over our current set of libraries trying to 
anticipate which will be ready for 1.0 (or later) and which are still 
considered pre-release. I came up with this set of anticipated version numbers 
for our final juno releases. Please let me know if you see any surprises on the 
list.

1.0 or later

oslo.config - 1.4.0
oslo.i18n - 1.0.0
oslo.messaging - 1.4.0
oslo.rootwrap - 1.3.0
oslo.serialization - 1.0.0
oslosphinx - 2.2.0
oslotest - 1.1.0
oslo.utils - 1.0.0
cliff - 1.7.x
stevedore - 1.0.0

Alphas or Pre-releases (Trying for 1.0 for Kilo)

oslo.concurrency - < 1.0
oslo.log - 0.1.0
oslo.middleware - 0.1.0
pbr - < 1.0
taskflow - < 1.0

Unknown

oslo.db - I think we said 1.0.0 but I need to confirm.
oslo.vmware - I’ll need to talk to the vmware team to see where things stand 
with their release plans.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Anticipated Final Release Versions for Juno

2014-09-08 Thread Davanum Srinivas
LGTM Doug. For oslo.vmware - we need a fresh rev for use by Nova to
fix the following bug:
https://bugs.launchpad.net/nova/+bug/1341954

thanks,
dims

On Mon, Sep 8, 2014 at 2:32 PM, Doug Hellmann d...@doughellmann.com wrote:
 I spent some time today looking over our current set of libraries trying to 
 anticipate which will be ready for 1.0 (or later) and which are still 
 considered pre-release. I came up with this set of anticipated version 
 numbers for our final juno releases. Please let me know if you see any 
 surprises on the list.

 1.0 or later

 oslo.config - 1.4.0
 oslo.i18n - 1.0.0
 oslo.messaging - 1.4.0
 oslo.rootwrap - 1.3.0
 oslo.serialization - 1.0.0
 oslosphinx - 2.2.0
 oslotest - 1.1.0
 oslo.utils - 1.0.0
 cliff - 1.7.x
 stevedore - 1.0.0

 Alphas or Pre-releases (Trying for 1.0 for Kilo)

 oslo.concurrency - < 1.0
 oslo.log - 0.1.0
 oslo.middleware - 0.1.0
 pbr - < 1.0
 taskflow - < 1.0

 Unknown

 oslo.db - I think we said 1.0.0 but I need to confirm.
 oslo.vmware - I’ll need to talk to the vmware team to see where things stand 
 with their release plans.



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Error creating instances in DevStack

2014-09-08 Thread Sharan Kumar M
Hi,

I am trying to setup my own OpenStack cloud for development. I installed
DevStack on a VM and the installation is fine. I am able to log in via
Horizon. I followed these steps to create an instance via Horizon.
http://isurues.wordpress.com/tag/devstack/. However, when I launch an
instance, I get the following error.

Failed to launch instance nova: Please try again later [Error: No valid
host was found. ]

I checked that libvirt_type=qemu is set in nova.conf. Also the package
nova-compute-qemu is installed. I am using Ubuntu 14.04.

Can someone identify why this problem occurs and how to solve it? What are
the possible causes for this problem?

Thanks,
Sharan Kumar M
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Error creating instances in DevStack

2014-09-08 Thread Dean Troyer
On Mon, Sep 8, 2014 at 1:40 PM, Sharan Kumar M sharan.monikan...@gmail.com
wrote:

 I am trying to setup my own OpenStack cloud for development. I installed
 DevStack on a VM and the installation is fine. I am able to log in via

  ...

 I checked if the libvirt_type=qemu in nova.conf. Also the package
 nova-compute-qemu is installed. I am using Ubuntu 14.04.


First thing is you should not combine DevStack and any packaged OpenStack
installation.  DevStack installs from the source repos and has no knowledge
of any decisions made while packaging OpenStack by the distros.

After sorting that out you'll need to dig further into the Nova log files
to see why no working compute node is available.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Error creating instances in DevStack

2014-09-08 Thread hemant burman
Check your scheduler logs; you should be able to figure out what's happening.
If it's not clear, then turn your log level from info to debug in nova.conf.

-Hemant


On Tue, Sep 9, 2014 at 12:10 AM, Sharan Kumar M sharan.monikan...@gmail.com
 wrote:

 Hi,

 I am trying to setup my own OpenStack cloud for development. I installed
 DevStack on a VM and the installation is fine. I am able to log in via
 Horizon. I followed these steps to create an instance via Horizon.
 http://isurues.wordpress.com/tag/devstack/. However, when I launch an
 instance, I get the following error.

 Failed to launch instance nova: Please try again later [Error: No valid
 host was found. ]

 I checked if the libvirt_type=qemu in nova.conf. Also the package
 nova-compute-qemu is installed. I am using Ubuntu 14.04.

 Can someone identify why this problem occurs and how to solve it? What are
 the possible causes for this problem?

 Thanks,
 Sharan Kumar M

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][FFE] request for exceptions in xiv_ds8k driver

2014-09-08 Thread Alon Marx
Hi,

Please consider the following code changes as Feature Freeze Exceptions. 
The first code change is needed to support the latest XIV release (version 
11.5.0) and is very important for us, and the second code change is simply 
support for backups. 

Add domain support for XIV with multitenancy
https://review.openstack.org/#/c/118290/

Add support for backups to xiv_ds8k driver
https://review.openstack.org/#/c/118298/

Both changes were uploaded, reviewed and all relevant changes made by 
Juno-3 date. 
Both changes are very small and simple.
Both changes affect only the xiv_ds8k driver.
Because the changes are small and simple and have no general effect, the 
risk is negligible. 

Thank you,
Alon Marx


mobile +972549170122, office +97236897824
email alo...@il.ibm.com
IBM XIV, Cloud Storage Solutions (previously HSG)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Standard virtualenv for docs building

2014-09-08 Thread Joshua Harlow
Hi all,

I just wanted to get some feedback on a change that I think will make the docs 
building process better understood.

Currently there is a script @ 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/slave_scripts/run-docs.sh
 (this is the github mirror fyi) that builds your docs when requested using an 
implicit virtualenv 'venv' with a single command `tox -e$venv -- python 
setup.py build_sphinx`. Over the weekend I was working on having the taskflow 
'docs' venv build a changelog and include it in the docs when I learned that 
the 'docs' virtualenv isn't actually what is called when docs are being built 
(and thus can't do customized things to include the changelog).

I wanted to get some feedback on standardizing around the 'docs' 
virtualenv for docs building (it seems common to use this in most projects 
anyway) and deprecate or remove the implicitly used 'venv' + above command to 
build the docs, and just have the infra setup call into the 'docs' virtualenv 
and have it build the docs as appropriate for the project.

This would mean that all projects would at least need the following in their 
tox.ini (if they don't already have it).

[testenv:docs]
commands = python setup.py build_sphinx

Does this seem reasonable to all?

There is also a change in the governance repo for this as well @ 
https://review.openstack.org/#/c/119875/

Thoughts, comments, other?

- Josh




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Gertty 1.0.0: A console interface to Gerrit

2014-09-08 Thread James E. Blair
I just released 1.0.1 with some bug fixes for issues found by early
adopters.  Thanks!

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] China blocking access to OpenStack git review push

2014-09-08 Thread Thomas Goirand
Am I dreaming, or is the Chinese government trying to push for the
cloud, like they said? However, today, bad surprise:

# nmap -p 29418 23.253.232.87

Starting Nmap 6.00 ( http://nmap.org ) at 2014-09-09 03:10 CST
Nmap scan report for review.openstack.org (23.253.232.87)
Host is up (0.21s latency).
PORT  STATESERVICE
29418/tcp filtered unknown

Oh dear ... not fun!

FYI, this is from China Unicom (eg: CNC Group)

I'm guessing that this is the Great Firewall of China's awesome automated
ban script, which detected too many ssh connections to a weird port. It
has blocked a few of my servers recently too, as it has become way too
aggressive. I very much prefer to use git review from my laptop rather than
having to bounce around servers. :(

Are there alternative IPs that I could use for review.openstack.org?

Cheers,

Thomas Goirand (zigo)

P.S: If a Chinese official reads this: providing an easy way to unlist
(legitimate) servers would be the first action any reasonable
government should take.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Jay Pipes

On 09/03/2014 12:16 PM, Doug Hellmann wrote:

On Sep 3, 2014, at 11:37 AM, Joe Gordon joe.gord...@gmail.com wrote:


As you all know, there has recently been several very active discussions
around how to improve assorted aspects of our development process. One
idea
that was brought up is to come up with a list of cycle goals/project
priorities for Kilo [0].

To that end, I would like to propose an exercise as discussed in the
TC meeting yesterday [1]:
Have anyone interested (especially TC members) come up with a list of
what they think the project wide Kilo cycle goals should be and post
them on this thread by end of day Wednesday, September 10th. After
which time we can begin discussing the results.
The goal of this exercise is to help us see if our individual world
views align with the greater community, and to get the ball rolling on
a larger discussion of where as a project we should be focusing more time.


Thanks for starting this discussion, Joe. It’s important for us to start
working on “OpenStack” as a whole, in addition to our individual projects.


Agreed. Thank you, Joe.


1. Sean has done a lot of analysis and started a spec on standardizing
logging guidelines where he is gathering input from developers,
deployers, and operators [1]. Because it is far enough for us to see
real progress, it’s a good place for us to start experimenting with how
to drive cross-project initiatives involving code and policy changes
from outside of a single project. We have a couple of potentially
related specs in Oslo as part of the oslo.log graduation work [2] [3],
but I think most of the work will be within the applications.


No surprise, I'm a huge +1 on this.


2. A longer-term topic is standardizing our notification content and
format. See the thread “Treating notifications as a contract” for
details. We could set a goal for Kilo of establishing the requirements
and proposing a spec, with implementation to begin in L.


+1.


3. Another long-term topic is standardizing our APIs so that we use
consistent terminology and formatting (I think we have at least 3 forms
of errors returned now?). I’m not sure we have anyone ready to drive
this, yet, so I don’t think it’s something to consider for Kilo.


+10

Frankly, I believe this should be our #1 priority from a cross-project 
perspective.


The inconsistencies in the current APIs (even within the same project's 
APIs!) are just poor form, and since our REST APIs are the very first 
impression that we give to the outside developer community, it really is 
incumbent on us to make sure they are explicit, free of side-effects, 
well-documented, consistent, easy-to-use, and hide implementation 
details thoroughly.



4. I would also like to see the unified SDK and command line client
projects continued (or resumed, I haven’t been following the SDK work
closely). Both of those projects will eventually make using OpenStack
clouds easier. It would be nice to see some movement towards a “user
tools” program to encompass both of these projects, perhaps with an eye
on incubation at the end of Kilo.


+1


5. And we should also be considering the Python 3 porting work. We’ve
made some progress with the Oslo libraries, with oslo.messaging and
eventlet still our main blocker. We need to put together a more concrete
plan to finish that work and for tackling applications, as well as a
team willing to help projects through their transition. This is very
long term, but does need attention, and I think it’s reasonable to ask
for a plan by the end of Kilo.


+0 (only because I don't consider it a priority compared with the other 
things you've documented here)



 From a practical standpoint, we do need to work out details like where
we make decisions about the plans for these projects once the general
idea is approved. We’ve done some of this in the Oslo project in the
past (log translations, for example) but I don’t think that’s the right
place for projects at this scale. A new openstack-specs repository would
give us a place to work on them, but we need to answer the question of
how to decide what is approved.


An openstack-specs repo might indeed be a nice place to put this 
cross-project, OpenStack-wide type of stuff.


Best,
-jay


Doug




best,
Joe Gordon

[0]
http://lists.openstack.org/pipermail/openstack-dev/2014-August/041929.html
[1]
http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-09-02-20.04.log.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

Re: [openstack-dev] China blocking access to OpenStack git review push

2014-09-08 Thread Clark Boylan
On Mon, Sep 8, 2014, at 12:20 PM, Thomas Goirand wrote:
 Am I dreaming, or is the Chinese government trying to push for the
 cloud, like they said? However, today, bad surprise:
 
 # nmap -p 29418 23.253.232.87
 
 Starting Nmap 6.00 ( http://nmap.org ) at 2014-09-09 03:10 CST
 Nmap scan report for review.openstack.org (23.253.232.87)
 Host is up (0.21s latency).
 PORT  STATESERVICE
 29418/tcp filtered unknown
 
 Oh dear ... not fun!
 
 FYI, this is from China Unicom (eg: CNC Group)
 
 I'm guessing that this is the Great Firewall of China awesome automated
 ban script which detected too many ssh connection to a weird port. It
 has blocked a few of my servers recently too, as it became a way too
 aggressive. I very much prefer to use my laptop to use git review than
 having to bounce around servers. :(
 
 Are there alternative IPs that I could use for review.openstack.org?
 
 Cheers,
 
 Thomas Goirand (zigo)
 
 P.S: If a Chinese official read this, an easy way to unlist (legitimate)
 servers access would be the first action any reasonable Chinese
 government people must do.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

As an alternative to pushing via ssh you can push via https over port
443 which may bypass this port blockage. Both latest git review and the
version of gerrit that we are running support this.

The first step is to generate a gerrit http password, this will be used
to authenticate against Gerrit. Go to
https://review.openstack.org/#/settings/http-password and generate a
password there (note this is independent of your launchpad openid
password).

Next step is to get some code clone it from eg
https://git.openstack.org/openstack-dev/sandbox. Now I am sure there is
a better way to have git-review do this for you with config overrides
somewhere but we need to add a git remote in that repo called 'gerrit'.
By default all of our .gitreview files set this up for ssh so we will
manually add one. `git remote add gerrit
https://usern...@review.openstack.org/openstack-dev/sandbox`. Finally
run `git review -s` to get the needed commit hook and now you are ready
to push code with `git review` as you normally would. Note when git
review asks for a password it will want the password we generated in the
first step.

I am pretty sure this can be made easier and the manual git remote
step is not required if you set up some overrides in git(review) config
files. Maybe the folks that added https support for git review can fill
us in.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][FFE] request for exceptions in xiv_ds8k driver

2014-09-08 Thread Jay S. Bryant
Alon,

Thanks, I have added your request to the etherpad
https://etherpad.openstack.org/p/juno-cinder-approved-ffes and it will
undergo review.

Thanks,
Jay


On Mon, 2014-09-08 at 21:55 +0300, Alon Marx wrote:
 Hi, 
 
 Please consider the following code changes as Feature Freeze
 Exceptions. 
 The first code change is needed to support the latest XIV release
 (version 11.5.0) and is very important for us, and the second code
 change is simply support for backups. 
 
 Add domain support for XIV with multitenancy 
 https://review.openstack.org/#/c/118290/ 
 
 Add support for backups to xiv_ds8k driver 
 https://review.openstack.org/#/c/118298/ 
 
 Both changes were uploaded, reviewed and all relevant changes made by
 Juno-3 date. 
 Both changes are very small and simple. 
 Both changes affect only the xiv_ds8k driver. 
 Because the changes are small and simple and have no general effect,
 the risk is negligible. 
 
 Thank you, 
 Alon Marx 
 
 
 mobile +972549170122, office +97236897824
 email alo...@il.ibm.com
 IBM XIV, Cloud Storage Solutions (previously HSG) 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Doc team working with bugs

2014-09-08 Thread Anne Gentle
On Mon, Sep 8, 2014 at 9:04 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 09/08/2014 03:46 AM, Dmitry Mescheryakov wrote:

 Hello Fuelers,

 On the previous meeting a topic was raised on how Fuel doc team should
 work with bugs, see [1] for details. We agreed to move the discussion
 into the mailing list.

 The thing is there are two members in the team at the moment (Meg and
 Irina) and they need to distribute work among themselves. The natural
 way to distribute load is to assign bugs. But frequently they document
 bugs which are in the process of being fixed, so they are already
 assigned to an engineer. I.e. a bug needs to be assigned to an
 engineer and a tech writer at the same time.

 I've proposed to create a separate series 'docs' in launchpad (it is
 the thing like '5.0.x', '5.1.x'). If bug affects several series, a
 different engineer could be assigned on each of them. So doc team will
 be free to assign bugs to themselves within this new series.

 Mike Scherbakov and Dmitry Borodaenko objected creating another series
 in launchpad. Instead they proposed to mark bugs with tags like
 'docs-irina' and 'docs-meg' thus assigning them.

 What do you think is the best way to handle this? As for me, I don't
 have strong preference there.

 One last note: the question applies to two launchpad projects
 actually: Fuel and MOS. But naturally we want to do this the same way
 in both projects.


 Hi Dmitry!

 Good question and problem! :) This limitation of Launchpad has bugged me
 for some time (pun intended). It would be great to have the ability to
 assign a bug to more than one person.

 I actually think using tags is more appropriate here. Series are intended
 for releasable things. And since we'd never be releasing just a docs
 package, having a docs series doesn't really work. So, though it's a bit
 kludgy, I think using tags is the best solution here.


Thanks to you both for noting! Yes, I agree tags are the best solution with
Launchpad.

Love that you're thinking about how to pair-bug-squash!

Anne



 All the best,
 -jay


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Standard virtualenv for docs building

2014-09-08 Thread Monty Taylor

On 09/08/2014 11:59 AM, Joshua Harlow wrote:

Hi all,

I just wanted to get some feedback on a change that I think will make the docs 
building process better understood.

Currently there is a script @ 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/slave_scripts/run-docs.sh
 (this is the github mirror fyi) that builds your docs when requested using an 
implicit virtualenv 'venv' with a single command `tox -e$venv -- python 
setup.py build_sphinx`. Over the weekend I was working on having the taskflow 
'docs' venv build a changelog and include it in the docs when I learned that 
the 'docs' virtualenv isn't actually what is called when docs are being built 
(and thus can't do customized things to include the changelog).

I wanted to get some feedback on standardizing around the 'docs' 
virtualenv for docs building (it seems common to use this in most projects 
anyway) and deprecate or remove the implicitly used 'venv' + above command to 
build the docs, and just have the infra setup call into the 'docs' virtualenv 
and have it build the docs as appropriate for the project.

This would mean that all projects would at least need the following in their 
tox.ini (if they don't already have it).

[testenv:docs]
commands = python setup.py build_sphinx

Does this seem reasonable to all?

There is also a change in the governance repo for this as well @ 
https://review.openstack.org/#/c/119875/

Thoughts, comments, other?


For those who like numbers, across all of the repos in gerrit that have 
docs dirs, 11 need to have a doc env renamed to docs, and 9 need one 
added - all of the rest of the repos, including every integrated repo, 
have a docs env already.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] non-deterministic gate failures due to unclosed eventlet Timeouts

2014-09-08 Thread Jay Pipes

On 09/07/2014 10:43 AM, Matt Riedemann wrote:

On 9/7/2014 8:39 AM, John Schwarz wrote:

Hi,

Long story short: for future reference, if you initialize an eventlet
Timeout, make sure you close it (either with a context manager or simply
timeout.close()), and be extra-careful when writing tests using
eventlet Timeouts, because these timeouts don't implicitly expire and
will cause unexpected behaviours (see [1]) like gate failures. In our
case this caused non-deterministic failures on the dsvm-functional test
suite.
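
For reference, a minimal sketch of the leak-free patterns (the context manager
and explicit cancellation that the eventlet timeout docs at [1] describe);
do_something_slow() is just a stand-in for the code under test:

import eventlet
from eventlet.timeout import Timeout

def do_something_slow():
    eventlet.sleep(1)  # stand-in for the operation under test

# Leaky: the Timeout registers itself with the hub and stays armed even
# after the enclosing function returns.
Timeout(10)

# Safe: the context manager cancels the timeout when the block exits.
with Timeout(10):
    do_something_slow()

# Also safe: cancel it explicitly.
timeout = Timeout(10)
try:
    do_something_slow()
finally:
    timeout.cancel()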


Late last week, a bug was found ([2]) in which an eventlet Timeout
object was initialized but not closed. This instance was left inside
eventlet's inner-workings and triggered non-deterministic Timeout: 10
seconds errors and failures in dsvm-functional tests.

As mentioned earlier, initializing a new eventlet.timeout.Timeout
instance also registers it to inner mechanisms that exist within the
library, and the reference remains there until it is explicitly removed
(and not until the scope leaves the function block, as some would have
thought). Thus, the old code (simply creating an instance without
assigning it to a variable) left no way to close the timeout object.
This reference remains throughout the life of a worker, so this can
(and did) affect other tests and procedures using eventlet under the
same process. Obviously this could easily affect production-grade
systems with very high load.

For future reference:
  1) If you run into a Timeout: %d seconds exception whose traceback
includes hub.switch() and self.greenlet.switch() calls, there might
be a latent Timeout somewhere in the code, and a search for all
eventlet.timeout.Timeout instances will probably produce the culprit.

  2) The setup used to reproduce this error for debugging purposes is a
baremetal machine running a VM with devstack. In the baremetal machine I
used some 6 dd if=/dev/zero of=/dev/null to simulate high CPU load
(full command can be found at [3]), and in the VM I ran the
dsvm-functional suite. Using only a VM with similar high CPU simulation
fails to produce the result.

[1]
http://eventlet.net/doc/modules/timeout.html#eventlet.timeout.eventlet.timeout.Timeout.Timeout.cancel

[2] https://review.openstack.org/#/c/119001/
[3]
http://stackoverflow.com/questions/2925606/how-to-create-a-cpu-spike-with-a-bash-command



--
John Schwarz,
Software Engineer, Red Hat.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Thanks, that might be what's causing this timeout/gate failure in the
nova unit tests. [1]

[1] https://bugs.launchpad.net/nova/+bug/1357578


Indeed, there are a couple places where eventlet.timeout.Timeout() seems 
to be used in the test suite without a context manager or calling 
close() explicitly:


tests/virt/libvirt/test_driver.py
8925:raise eventlet.timeout.Timeout()

tests/virt/hyperv/test_vmops.py
196:mock_with_timeout.side_effect = etimeout.Timeout()

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] China blocking access to OpenStack git review push

2014-09-08 Thread Matt Riedemann



On 9/8/2014 2:20 PM, Thomas Goirand wrote:

Am I dreaming, or is the Chinese government trying to push for the
cloud, like they said? However, today, bad surprise:

# nmap -p 29418 23.253.232.87

Starting Nmap 6.00 ( http://nmap.org ) at 2014-09-09 03:10 CST
Nmap scan report for review.openstack.org (23.253.232.87)
Host is up (0.21s latency).
PORT  STATESERVICE
29418/tcp filtered unknown

Oh dear ... not fun!

FYI, this is from China Unicom (eg: CNC Group)

I'm guessing that this is the Great Firewall of China awesome automated
ban script which detected too many ssh connection to a weird port. It
has blocked a few of my servers recently too, as it became a way too
aggressive. I very much prefer to use my laptop to use git review than
having to bounce around servers. :(

Are there alternative IPs that I could use for review.openstack.org?

Cheers,

Thomas Goirand (zigo)

P.S: If a Chinese official read this, an easy way to unlist (legitimate)
servers access would be the first action any reasonable Chinese
government people must do.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah the IBM DB2 third party CI is run from a team in China and they've 
been blocked for a few weeks now.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] KeyError: 'admin'

2014-09-08 Thread Boris Pavlovic
Daisuke,


This patch https://review.openstack.org/#/c/116766/ introduced this bug:
https://bugs.launchpad.net/rally/+bug/1366824
That was fixed by this commit: https://review.openstack.org/#/c/119790/

So update your rally and try one more time. Everything should work now.


Best regards,
Boris Pavlovic

On Mon, Sep 8, 2014 at 4:05 PM, Daisuke Morita morita.dais...@lab.ntt.co.jp
 wrote:


 Thanks, Boris.

 I tried rally-manage db recreate before registering a deployment, but
 nothing changed at all in running Tempest...

 It is late in Japan, so I will try it tomorrow.


 Best regards,
 Daisuke

 On 2014/09/08 20:38, Boris Pavlovic wrote:

 Daisuke,

 We have as well changes in our DB models.

 So running:

$ rally-manage db recreate

 Will be as well required..


 Best regards,
 Boris Pavlovic



 On Mon, Sep 8, 2014 at 3:24 PM, Mikhail Dubov mdu...@mirantis.com wrote:

 Hi Daisuke,

 seems like your issue is connected to the change in the deployment
 configuration file format for existing clouds we've merged
 https://review.openstack.org/#/c/116766/ recently.

 Please see the updated Wiki How to page
 https://wiki.openstack.org/wiki/Rally/HowTo#Step_1._Deployment_initialization_.28use_existing_cloud.29 that
 describes the new format. In your case, you can just update the
 deployment configuration file and run again /rally deployment
 create/. Everything should work then.



 Best regards,
 Mikhail Dubov

 Mirantis, Inc.
 E-Mail: mdu...@mirantis.com
 Skype: msdubov

 On Mon, Sep 8, 2014 at 3:16 PM, Daisuke Morita
 morita.dais...@lab.ntt.co.jp

 wrote:


 Hi, rally developers!

 Now, I am trying to use Rally to devstack cluster on AWS VM
 (all-in-one). I'm following a blog post
 https://www.mirantis.com/blog/rally-openstack-tempest-
 testing-made-simpler/
 . I successfully installed Devstack, Rally and Tempest. Now, I
 just ran
 Tempest by 'rally verify start' command, but the command failed
 with the
 following stacktrace.


 2014-09-08 10:57:57.803 17176 CRITICAL rally [-] KeyError: 'admin'
 2014-09-08 10:57:57.803 17176 TRACE rally Traceback (most recent
 call last):
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/bin/rally,
 line 10, in module
 2014-09-08 10:57:57.803 17176 TRACE rally sys.exit(main())
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line
 40, in main
 2014-09-08 10:57:57.803 17176 TRACE rally return
 cliutils.run(sys.argv, categories)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py,
 line
 184, in run
 2014-09-08 10:57:57.803 17176 TRACE rally ret = fn(*fn_args,
 **fn_kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File string,
 line 2, in
 start
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py,
 line 64,
 in default_from_global
 2014-09-08 10:57:57.803 17176 TRACE rally return f(*args,
 **kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/cmd/
 commands/verify.py,
 line 59, in start
 2014-09-08 10:57:57.803 17176 TRACE rally
  api.verify(deploy_id,
 set_name, regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/
 orchestrator/api.py,
 line
 153, in verify
 2014-09-08 10:57:57.803 17176 TRACE rally
 verifier.verify(set_name=set_name, regex=regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/
 verification/verifiers/tempest/tempest.py,
 line 247, in verify
 2014-09-08 10:57:57.803 17176 TRACE rally
 self._prepare_and_run(set_name, regex)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/utils.py, line
 165, in
 wrapper
 2014-09-08 10:57:57.803 17176 TRACE rally result = f(self,
 *args,
 **kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/
 verification/verifiers/tempest/tempest.py,
 line 146, in _prepare_and_run
 2014-09-08 10:57:57.803 17176 TRACE rally
   self.generate_config_file()
 2014-09-08 10:57:57.803 17176 TRACE rally   File
 /usr/local/lib/python2.7/dist-packages/rally/
 verification/verifiers/tempest/tempest.py,
 line 89, 

Re: [openstack-dev] Kilo Cycle Goals Exercise

2014-09-08 Thread Mathieu Gagné

On 2014-09-07 8:14 PM, Monty Taylor wrote:


If I were king ...

1. Caring about end user experience at all

If I don't do anything at all next cycle, I will see the above fixed.
Because it's embarrassing. Seriously. Try to use OpenStack from python
some time. I dare you.


 [...]


Between 2 and 3, maybe we can make a kilo release that has a net
negative SLOC count. But, honestly, screw 2 and 3 - let's all just work
on 1.



On 2014-09-08 5:07 PM, James E. Blair wrote:

 3) A real SDK

 OpenStack is so nearly impossible to use, that we have a substantial
 amount of code in the infrastructure program to do things that,
 frankly, we are a bit surprised that the client libraries don't do.
 Just getting an instance with an IP address is an enormous challenge,
 and something that took us years to get right.  We still have problems
 deleting instances.  We need client libraries (an SDK if you will) and
 command line clients that are easy for users to understand and work
 with, and hide the gory details of how the sausage is made.


I 100% agree with both of you. The user experience is a MAJOR concern 
for us. I'm not a good enough writer to articulate my thoughts as well as 
I would like, but both Monty and James managed to communicate and 
summarize them.


As a technical person, I often don't see the level of complexity in the 
tools I use; I like challenges. I will gladly learn new complex stuff if 
needed. But when I first tried to use the OpenStack client libraries, it was 
one of those times where I told myself: wow, it sucks. Especially around 
the lack of consistency. Or, as Monty said, the number of hoops you have to 
go through just to get a pingable instance. And it was and still is an 
opinion shared by some of my coworkers.


If we could improve this aspect, I would be so happy.

--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] memory usage in devstack-gate (the oom-killer strikes again)

2014-09-08 Thread Joe Gordon
Hi All,

We have recently started seeing assorted memory issues in the gate
including the oom-killer [0] and libvirt throwing memory errors [1].
Luckily we run ps and dstat on every devstack run so we have some insight
into why we are running out of memory. Based on the output from a job taken
at random [2][3], a typical run consists of:

* 68 openstack api processes alone
* the following services are running 8 processes (number of CPUs on test
nodes)
  * nova-api (we actually run 24 of these, 8 compute, 8 EC2, 8 metadata)
  * nova-conductor
  * cinder-api
  * glance-api
  * trove-api
  * glance-registry
  * trove-conductor
* together nova-api, nova-conductor, cinder-api alone take over 45 %MEM
(note: some of that is memory usage is counted multiple times as RSS
includes shared libraries)
* based on dstat numbers, it looks like we don't use that much memory
before tempest runs, and after tempest runs we use a lot of memory.

Based on this information I have two categories of questions:

1) Should we explicitly set the number of workers that services use in
devstack? Why have so many workers in a small all-in-one environment? What
is the right balance here?

2) Should we be worried that some OpenStack services such as nova-api,
nova-conductor and cinder-api take up so much memory? Does their memory
usage keep growing over time? Does anyone have any numbers to answer this?
Why do these processes take up so much memory?

best,
Joe


[0]
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwib29tLWtpbGxlclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiIxNzI4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDEwMjExMjA5NzY3fQ==
[1] https://bugs.launchpad.net/nova/+bug/1366931
[2] http://paste.openstack.org/show/108458/
[3]
http://logs.openstack.org/83/119183/4/check/check-tempest-dsvm-full/ea576e7/logs/screen-dstat.txt.gz
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Puppet elements support

2014-09-08 Thread Emilien Macchi
Hi TripleO community,

I would be really interested in helping to bring Puppet elements support
to TripleO.
So far I've seen this work:
https://github.com/agroup/tripleo-puppet-elements/tree/puppet_dev_heat
which is a very good bootstrap but really outdated.
After some discussion with Greg Haynes on IRC, we came up with the idea
to create a repo (that would be moved into Stackforge or OpenStack git) and
push the bits from what has been done by the HP folks, with updates and
improvements.

I started a basic repo
https://github.com/enovance/tripleo-puppet-elements that could be moved
right now on Stackforge to let the community start the work.

My proposal is:
* move this repo (or create a new one directly on
github/{stackforge,openstack?})
* push some bits from agroup original work.
* continue the contributions, updates and improvements.

Any thoughts?

-- 
Emilien Macchi




signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] memory usage in devstack-gate (the oom-killer strikes again)

2014-09-08 Thread Mike Bayer
Hi All - 

Joe had me do some quick memory profiling on nova. Just an FYI if anyone wants 
to play with this technique: I placed a little bit of memory profiling code 
using Guppy into nova/api/__init__.py, or anywhere in your favorite app that 
will definitely get imported when the thing first runs:

from guppy import hpy
import signal
import datetime

def handler(signum, frame):
    print "guppy memory dump"

    fname = "/tmp/memory_%s.txt" % (
        datetime.datetime.now().strftime("%Y%m%d_%H%M%S"))
    prof = hpy().heap()
    with open(fname, 'w') as handle:
        prof.dump(handle)
    del prof

signal.signal(signal.SIGUSR2, handler)



Then, run nova-api, run some API calls, then you hit the nova-api process with 
a SIGUSR2 signal, and it will dump a profile into /tmp/ like this:

http://paste.openstack.org/show/108536/

Now obviously everyone is like, oh boy memory lets go beat up SQLAlchemy 
again…..which is fine I can take it.  In that particular profile, there’s a 
bunch of SQLAlchemy stuff, but that is all structural to the classes that are 
mapped in Nova API, e.g. 52 classes with a total of 656 attributes mapped.   
That stuff sets up once and doesn’t change.   If Nova used less ORM,  e.g. 
didn’t map everything, that would be less.  But in that profile there’s no 
“data” lying around.

But even if you don’t have that many objects resident, your Python process 
might still be using up a ton of memory.  The reason for this is that the 
cPython interpreter has a model where it will grab all the memory it needs to 
do something, a time consuming process by the way, but then it really doesn’t 
ever release it  (see 
http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm
 for the “classic” answer on this, things may have improved/modernized in 2.7 
but I think this is still the general idea).

So in terms of SQLAlchemy, a good way to suck up a ton of memory all at once 
that probably won’t get released is to do this:

1. fetching a full ORM object with all of its data

2. fetching lots of them all at once


So to avoid doing that, the answer isn’t necessarily that simple.   The quick 
wins to loading full objects are to …not load the whole thing!   E.g. assuming 
we can get Openstack onto 0.9 in requirements.txt, we can start using 
load_only():

session.query(MyObject).options(load_only(“id”, “name”, “ip”))

or with any version, just load those columns - we should be using this as much 
as possible for any query that is row/time intensive and doesn’t need full ORM 
behaviors (like relationships, persistence):

session.query(MyObject.id, MyObject.name, MyObject.ip)

Another quick win, if we *really* need an ORM object, not a row, and we have to 
fetch a ton of them in one big result, is to fetch them using yield_per():

   for obj in session.query(MyObject).yield_per(100):
# work with obj and then make sure to lose all references to it

yield_per() will dish out objects drawing from batches of the number you give 
it.   But it has two huge caveats: one is that it isn’t compatible with most 
forms of eager loading, except for many-to-one joined loads.  The other is that 
the DBAPI, e.g. like the MySQL driver, does *not* stream the rows; virtually 
all DBAPIs by default load a result set fully before you ever see the first 
row.  psycopg2 is one of the only DBAPIs that even offers a special mode to 
work around this (server side cursors).

Which means it's even *better* to paginate result sets, so that you only ask the 
database for a chunk at a time, only storing at most a subset of objects in 
memory at once.  Pagination itself is tricky, if you are using a naive 
LIMIT/OFFSET approach, it takes awhile if you are working with a large OFFSET.  
It’s better to SELECT into windows of data, where you can specify a start and 
end criteria (against an indexed column) for each window, like a timestamp.
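
To make the windowing idea concrete, here's a minimal sketch - the Instance
class and its columns are made up for the example, not an actual Nova model:

import datetime

def iter_instance_rows(session, Instance, start, end,
                       window=datetime.timedelta(hours=1)):
    # Walk an indexed timestamp column in fixed-size windows, so only a
    # small slice of the result set is resident in memory at any time.
    lower = start
    while lower < end:
        upper = min(lower + window, end)
        query = (session.query(Instance.id, Instance.hostname)
                        .filter(Instance.created_at >= lower)
                        .filter(Instance.created_at < upper)
                        .order_by(Instance.created_at))
        for row in query:
            yield row
        lower = upper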

Then of course, using Core only is another level of fastness/low memory.  
Though querying for individual columns with ORM is not far off, and I’ve also 
made some major improvements to that in 1.0 so that query(*cols) is pretty 
competitive with straight Core (and Core is…well I’d say becoming visible in 
raw DBAPI’s rear view mirror, at least….).

What I’d suggest here is that we start to be mindful of memory/performance 
patterns and start to work out naive ORM use into more savvy patterns; being 
aware of what columns are needed, what rows, how many SQL queries we really 
need to emit, what the “worst case” number of rows will be for sections that 
really need to scale.  By far the hardest part is recognizing and 
reimplementing when something might have to deal with an arbitrarily large 
number of rows, which means organizing that code to deal with a “streaming” 
pattern where you never have all the rows in memory at once - on other projects 
I’ve had tasks that would normally take about a day, but in order to organize 
it to “scale”, took weeks - such as being able 

Re: [openstack-dev] [Glance][FFE] Refactoring Glance Logging

2014-09-08 Thread Mark Washenberger
In principle I don't think these changes need FFE, because they aren't
really features so much as fixes for better logging and
internationalization.


On Mon, Sep 8, 2014 at 4:50 AM, Kuvaja, Erno kuv...@hp.com wrote:

  All,



 There are two changes still not landed from
 https://blueprints.launchpad.net/glance/+spec/refactoring-glance-logging



 https://review.openstack.org/116626



 and



 https://review.openstack.org/#/c/117204/



 Merge of the changes was delayed over J3 to avoid any potential merge
 conflicts. There was a minor change made (a couple of LOG.exception calls changed
 to LOG.error based on the review feedback) when rebasing.



 I would like to request a Feature Freeze Exception if needed to finish the
 Juno Logging refactoring and getting these two changes merged in.



 BR,

 Erno (jokke_) Kuvaja

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] convergence flow diagrams

2014-09-08 Thread Angus Salkeld
On Mon, Sep 8, 2014 at 11:22 PM, Tyagi, Ishant ishant.ty...@hp.com wrote:

  Hi All,



 As per the heat mid cycle meetup whiteboard, we have created the
 flowchart and sequence diagram for convergence. Can you please review
 these diagrams and provide your feedback?



 https://www.dropbox.com/sh/i8qbjtgfdxn4zx4/AAC6J-Nps8J12TzfuCut49ioa?dl=0


Great! Good to see something.


I was expecting something like:
engine ~= nova-conductor (it's the only process that talks to the db -
makes upgrading easier)
observer - purely gets the actual state/properties and writes them to the
db (via engine)
worker - has a job queue and grinds away at running those (resource
actions)

The engine then triggers on differences between goal and actual state and
creates a job and sends it to the job queue.
- so, on create it sees there is no actual state so it sends a create job
for the first resource to the worker queue
- when the observer writes the new state for that resource it triggers the
next resource create in the dependency tree.
- like any system that relies on notifications we need timeouts and each
stack needs a periodic notification to make sure
  that progress is being made, or to notify the user that no progress is being
made.
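
Purely to illustrate the shape of that - none of these names exist in Heat,
it's just a toy sketch of the engine's trigger step:

def converge_step(db, job_queue, stack_id):
    # Toy sketch: compare goal state vs. observed state and queue work
    # for any resource whose dependencies are already converged.
    goal = db.get_goal_state(stack_id)        # hypothetical accessors
    actual = db.get_observed_state(stack_id)
    for resource in goal.resources:
        if actual.matches(resource):
            continue
        if all(actual.matches(dep) for dep in resource.dependencies):
            job_queue.put(('converge', stack_id, resource.name))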

One question about the observer (in either my setup or the one in the
diagram).
- If we are relying on rpc notifications all the observer processes will
receive a copy of the same notification
  (say nova create end) how are we going to decide on which one does
anything with it?
  We don't want 10 observers getting more detailed info from nova and then
writing to the db

In your diagram worker is communicating with observer, which seems odd to
me. I thought observer and worker were very
independent entities.

In my setup there are less API to worry about too:
- RPC api for the engine (access to the db)
- RPC api for sending a job to the worker
- the plugin API
- the observer might need an api just for the engine to tell it to
start/stop observing a stack

-Angus




 Thanks,

 Ishant



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-08 Thread James E. Blair
Sean Dague s...@dague.net writes:

 The crux of the issue is that zookeeper python modules are C extensions.
 So you have to either install from packages (which we don't do in unit
 tests) or install from pip, which means forcing zookeeper dev packages
 locally. Realistically this is the same issue we end up with for mysql
 and pg, but given their wider usage we just forced that pain on developers.
...
 Which feels like we need some decoupling on our requirements vs. tox
 targets to get there. CC to Monty and Clark as our super awesome tox
 hackers to help figure out if there is a path forward here that makes sense.

From a technical standpoint, all we need to do to make this work is to
add the zookeeper python client bindings to (test-)requirements.txt.
But as you point out, that makes it more difficult for developers who
want to run unit tests locally without having the requisite libraries
and header files installed.

We could add another requirements file with heavyweight optional
dependencies, and use that in gate testing, but also have a lightweight
tox environment that does not include them for ease of use in local
testing.

What would be really great is if we could use setuptools extras_require
for this:

https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies

However, I'm not sure what the situation is with support for that in pip
(and we might need pbr support too).
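
For the sake of illustration, the declaration side would look something
like this (a sketch only - the package names are placeholders, and the
pip/pbr support question above still applies):

    # setup.py sketch
    from setuptools import setup

    setup(
        name='nova',
        # ... the usual arguments ...
        extras_require={
            # only pulled in with: pip install .[zookeeper]
            'zookeeper': ['evzookeeper', 'zkpython'],
        },
    )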

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to provide tests environments for python things that require C extensions

2014-09-08 Thread Monty Taylor

On 09/05/2014 07:21 AM, Monty Taylor wrote:

On 09/05/2014 06:32 AM, Sean Dague wrote:

While reviewing this zookeeper service group fix in Nova -
https://review.openstack.org/#/c/102639/ it was exposed that the
zookeeper tests aren't running in infra.

The crux of the issue is that zookeeper python modules are C extensions.
So you have to either install from packages (which we don't do in unit
tests) or install from pip, which means forcing zookeeper dev packages
locally. Realistically this is the same issue we end up with for mysql
and pg, but given their wider usage we just forced that pain on
developers.

But it seems like a bad stand-off between testing upstream and testing
the normal path locally.

Big picture it would be nice to not require a ton of dev libraries
locally for optional components, but still test them upstream. So that
in the base case I'm not running zookeeper locally, but if it fails
upstream because I broke something in zookeeper, it's easy enough to
spin up that dev env that has it.

Which feels like we need some decoupling on our requirements vs. tox
targets to get there. CC to Monty and Clark as our super awesome tox
hackers to help figure out if there is a path forward here that makes
sense.


Funny story - I've come to dislike what we're doing here, so I've been
slowly working on an idea in this area:

https://github.com/emonty/dox

The tl;dr is it's like tox, except it uses docker instead of
virtualenv - which means we can express all of our requirements, not
just pip ones.

It's not quite ready yet - although I'd be happy to move it into
stackforge or even openstack-dev and get other people hacking on it with
me until it is. The main problem that needs solving, I think, is how to
sanely express multiple target environments (like py26,py27) without
making a stupidly baroque config file. OTOH, tox's insistence on making
a new virtualenv for each environment is heavyweight and has led to
some pretty crazy hacks across the project. Luckily, docker itself does
an EXCELLENT job at handling caching and reuse - so I think we can have
a set of containers that something in infra (waves hands) publishes to
dockerhub, like:

   infra/py27
   infra/py26

And then have things like nova build on those, like:

   infra/nova/py27

Which would have zookeeper as well

The _really_ fun part, again, if we can figure out how to express it in
config without reimplementing make accidentally, is that we could start
to have things like:

   infra/mysql
   infra/postgres
   infra/mongodb

And have dox files say things like:

   Nova unittests want a python27 environment; this means we want an
infra/mysql container, an infra/postgres container and for those to be
linked to the infra/nova/py27 container where the tests will run.

Since those are all reusable, the speed should be _Excellent_ and we
should be able to more easily get more things runnable locally without
full devstack.
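
Purely as a strawman for that config question (this is NOT dox's actual
format - every key below is invented just to show the shape such a file
could take):

    # dox.yml - hypothetical example only
    images:
      py27: infra/nova/py27
      py26: infra/nova/py26
    services:
      - infra/mysql
      - infra/postgres
    commands:
      - python -m unittest discover nova.tests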

Thoughts? Anybody wanna hack on it with me? I think it could wind up
being a pretty useful tool for folks outside of OpenStack too if we get
it right.


I'd like to follow up on this real quick - just to set some expectations 
appropriately...


Firstly, I'm super thrilled that folks are interested in hacking on this 
- and excited that people have piled on. It's extremely exciting.


I want to make sure people know that goal number 1 of this is to allow 
for a tox-like experience for developers on their laptops that isn't 
specific to python. We have a growing number of folks among us who 
work on things that are not python, and there are a ton of 
non-OpenStack/non-Python folks out there working on projects that could 
potentially benefit from something that does for them what tox has so 
far done for us.


Additionally, I personally am bothered by the fact that as a project we 
declare quite strongly what version of libraries we use that are 
_python_ but have no current way to do the same for non-python 
libraries. I may ultimately draw the ire of the distros over where I 
want my stance on this to go - but I would like very much for us to be 
able to stop caring about what version of libvirt is _Currently_ in 
Fedora, RHEL or Ubuntu and get to a point where we _tell_ people that to 
use OpenStack Kilo they'll need libvirt v7 or something. I think that's 
a ways off, but I personally want to get there nonetheless.


That said - although I'm hopeful it will be, I'd like to be clear that 
I'm not convinced this will be a useful tool for any of our automation. 
There are no problems we have in the gate infrastructure that dox is 
aiming at helping us solve. We already have single-use slaves that 
already can install anything we need at any time.


There are at least four outcomes I can see for the current dox effort:

a) dox winds up being a simple and powerful tool for developers, both 
OpenStack and not, to use in their day-to-day life
b) a is _so_ true that we feel it's an essential part of openstack 
developer 

Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-08 Thread Stefano Maffulli
On 09/05/2014 07:07 PM, James Bottomley wrote:
 Actually, I don't think this analysis is accurate.  Some people are
 simply interested in small aspects of a project.  It's the scratch your
 own itch part of open source.  The thing which makes itch scratchers
 not lone wolfs is the desire to go the extra mile to make what they've
 done useful to the community.  If they never do this, they likely have a
 forked repo with only their changes (and are the epitome of a lone
 wolf).  If you scratch your own itch and make the effort to get it
 upstream, you're assisting the community (even if that's the only piece
 of code you do) and that assistance makes you (at least for a time) part
 of the community.

I'm starting to think that the processes we have implemented are slowing
down (if not preventing) 'scratch your own itch' contributions. The CLA
has been identified as the cause for this, but after carefully looking at
our development processes and the documentation, I think that's only one
part of the problem (and maybe not even as big as initially thought).

The gerrit workflow, for example, requires quite an investment in time and
energy, and casual developers (think operators fixing small bugs in code,
or documentation) have little incentive to go through the learning curve.

To go back on topic, to the proposal to split drivers out of tree, I
think we may want to evaluate other, simpler paths before we embark on
a huge task which, it is already quite clear, will require more
cross-project coordination.

From conversations with PTLs and core reviewers I get the impression
that lots of driver contributions come with bad code. These require a
lot of time and reviewer energy to be cleaned up, causing burn-out and
bad feelings on all sides. What if we establish a new 'place' of some
sort where we can send people to improve their code (or dump it without
interfering with core)? There could be a 'go improve over there' workflow
where a Community Manager (or mentors, or some other program we may
invent) takes over and does what core reviewers have been trying to do
'on the side'. The advantage is that this way we don't have to change
radically how current teams operate, and we may be able to start this
immediately with Kilo. Thoughts?

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Puppet elements support

2014-09-08 Thread Monty Taylor

On 09/08/2014 04:11 PM, Emilien Macchi wrote:

Hi TripleO community,

I would be really interested in helping to bring Puppet elements support
to TripleO.
So far I've seen this work:
https://github.com/agroup/tripleo-puppet-elements/tree/puppet_dev_heat
which is a very good bootstrap but really outdated.
After some discussion with Greg Haynes on IRC, we came up with the idea
to create a repo (that would be moved to Stackforge or OpenStack git) and
push the bits from what has been done by the HP folks, with updates and
improvements.

I started a basic repo
https://github.com/enovance/tripleo-puppet-elements that could be moved
right now on Stackforge to let the community start the work.

My proposal is:
* move this repo (or create a new one directly on
github/{stackforge,openstack?})
* push some bits from agroup's original work.
* continue the contributions, updates and improvements.

Any thoughts?



Be sure to also check this out:

https://review.openstack.org/#/c/88479/

Which is the main patch to start using diskimage-builder to create the 
base devstack-gate images. We use puppet inside of diskimage-builder 
here, and have a puppet element ... so it's entirely possible that there 
might be a basis for collaboration here.


Monty

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Puppet elements support

2014-09-08 Thread Spencer Krum
I would be happy to contribute to and help review this project.
On Sep 8, 2014 4:14 PM, Emilien Macchi emilien.mac...@enovance.com
wrote:

 Hi TripleO community,

 I would be really interested by helping to bring Puppet elements support
 in TripleO.
 So far I've seen this work:
 https://github.com/agroup/tripleo-puppet-elements/tree/puppet_dev_heat
 which is a very good bootstrap but really outdated.
 After some discussion with Greg Haynes on IRC, we came up with the idea
 to create a repo (that would be move in Stackforge or OpenStack git) and
 push the bits from what has been done by HP folks with updates 
 improvements.

 I started a basic repo
 https://github.com/enovance/tripleo-puppet-elements that could be moved
 right now on Stackforge to let the community start the work.

 My proposal is:
 * move this repo (or create a new one directly on
 github/{stackforge,openstack?})
 * push some bits from agroup original work.
 * continue the contributions, updates  improvements.

 Any thoughts?

 --
 Emilien Macchi



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] memory usage in devstack-gate (the oom-killer strikes again)

2014-09-08 Thread Clint Byrum
Excerpts from Joe Gordon's message of 2014-09-08 15:24:29 -0700:
 Hi All,
 
 We have recently started seeing assorted memory issues in the gate
 including the oom-killer [0] and libvirt throwing memory errors [1].
 Luckily we run ps and dstat on every devstack run so we have some insight
 into why we are running out of memory. Based on the output from a job taken
 at random [2][3], a typical run consists of:
 
 * 68 openstack api processes alone
 * the following services are running 8 processes (number of CPUs on test
 nodes)
   * nova-api (we actually run 24 of these, 8 compute, 8 EC2, 8 metadata)
   * nova-conductor
   * cinder-api
   * glance-api
   * trove-api
   * glance-registry
   * trove-conductor
 * together nova-api, nova-conductor and cinder-api alone take over 45 %MEM
 (note: some of that memory usage is counted multiple times as RSS
 includes shared libraries)
 * based on dstat numbers, it looks like we don't use that much memory
 before tempest runs, and after tempest runs we use a lot of memory.
 
 Based on this information I have two categories of questions:
 
 1) Should we explicitly set the number of workers that services use in
 devstack? Why have so many workers in a small all-in-one environment? What
 is the right balance here?

I'm kind of wondering why we aren't pushing everything to go the same
direction keystone did with apache. I may be crazy but apache gives us
all kinds of tools to tune around process forking that we'll have to
reinvent in our own daemon bits (like MaxRequestsPerChild to prevent
leaky or slow GC from eating all our memory over time).

Meanwhile, the idea of running api processes with ncpu is that we don't
want to block an API request if there is a CPU available to it. Of
course if we have enough cinder, nova, keystone, trove, etc. requests
all at one time that we do need to block, we defer to the CPU scheduler
of the box to do it, rather than queue things up at the event level.
This can lead to quite ugly CPU starvation issues, and that is a lot
easier to tune for if you have one tuning knob for apache + mod_wsgi
instead of nservices.

In production systems I'd hope that memory would be quite a bit more
available than on the bazillions of cloud instances that run tests. So,
while process-per-cpu-per-service is a large percentage of 8G, it is
a very small percentage of 24G+, which is a pretty normal amount of
memory to have on an all-in-one type of server that one might choose
as a baremetal controller. For VMs that are handling production loads,
it's a pretty easy trade-off to give them a little more RAM so they can
take advantage of all the CPU's as needed.

All this to say, since devstack is always expected to be run in a dev
context, and not production, I think it would make sense to dial it
back to 4 from ncpu.
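
For devstack that doesn't even need new plumbing; the services already
expose worker counts in their own config files (option names below are
from the Juno-era sample configs and worth double-checking per release):

    # nova.conf
    [DEFAULT]
    osapi_compute_workers = 4
    metadata_workers = 4
    ec2_workers = 4
    [conductor]
    workers = 4

    # cinder.conf
    [DEFAULT]
    osapi_volume_workers = 4

    # glance-api.conf / glance-registry.conf
    [DEFAULT]
    workers = 4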

 
 2) Should we be worried that some OpenStack services such as nova-api,
 nova-conductor and cinder-api take up so much memory? Does their memory
 usage keep growing over time, does anyone have any numbers to answer this?
 Why do these processes take up so much memory?

Yes I do think we should be worried that they grow quite a bit. I've
experienced this problem a few times in a few scripting languages, and
almost every time it turned out to be too much data being read from
the database or MQ. Moving to tighter messages, and tighter database
interaction, nearly always results in less wasted RAM.

I like the other suggestion to start graphing this. Since we have all
that dstat data, I wonder if we can just process that directly into
graphite.
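
Something along these lines would probably do as a first cut (a sketch
only: the column index and the carbon endpoint are assumptions that would
need checking against the actual dstat csv files and our graphite setup):

    # Sketch: shovel the 'used' memory column of a dstat csv into carbon.
    import csv
    import socket
    import time

    def shovel(csv_path, prefix, carbon_host, used_col=5):
        sock = socket.create_connection((carbon_host, 2003))
        with open(csv_path) as f:
            for i, row in enumerate(csv.reader(f)):
                if i < 7 or not row:
                    continue  # skip dstat's header lines
                line = '%s.memory.used %s %d\n' % (
                    prefix, row[used_col], int(time.time()))
                sock.sendall(line)
        sock.close()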

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][cinder] Averting the Nova crisis by splitting out virt drivers

2014-09-08 Thread Sean Roberts
A somewhat self-serving way to start to solve this is to make training and
mentoring the first steps to getting involved with contributions. We would
continue to gradually give more responsibility as experience and skills
increase. We do the last part already, but are missing the support for the
mentoring and training. I think landing this mentoring responsibility in the
ambassador program makes some sense.

This doesn't directly solve the subject of this thread, but it does start the
process of giving help to those who are trying to learn, inline, while the
cores are trying to land quality code.

What do you think?

~sean

 On Sep 8, 2014, at 5:20 PM, Stefano Maffulli stef...@openstack.org wrote:
 
 On 09/05/2014 07:07 PM, James Bottomley wrote:
 Actually, I don't think this analysis is accurate.  Some people are
 simply interested in small aspects of a project.  It's the scratch your
 own itch part of open source.  The thing which makes itch scratchers
 not lone wolfs is the desire to go the extra mile to make what they've
 done useful to the community.  If they never do this, they likely have a
 forked repo with only their changes (and are the epitome of a lone
 wolf).  If you scratch your own itch and make the effort to get it
 upstream, you're assisting the community (even if that's the only piece
 of code you do) and that assistance makes you (at least for a time) part
 of the community.
 
 I'm starting to think that the processes we have implemented are slowing
 down (if not preventing) scratch your own itch contributions. The CLA
 has been identified as the cause for this but after carefully looking at
 our development processes and the documentation, I think that's only one
 part of the problem (and maybe not even as big as initially thought).
 
 The gerrit workflow for example is something that requires quite an
 investment in time and energy and casual developers (think operators
 fixing small bugs in code, or documentation) have little incentive to go
 through the learning curve.
 
 To go back in topic, to the proposal to split drivers out of tree, I
 think we may want to evaluate other, simpler, paths before we embark in
 a huge task which is already quite clear will require more cross-project
 coordination.
 
 From conversations with PTLs and core reviewers I get the impression
 that lots of drivers contributions come with bad code. These require a
 lot of time and reviewers energy to be cleaned up, causing burn out and
 bad feelings on all sides. What if we establish a new 'place' of some
 sort where we can send people to improve their code (or dump it without
 interfering with core?) Somewhere there may be a workflow
 go-improve-over-there where a Community Manager (or mentors or some
 other program we may invent) takes over and does what core reviewers
 have been trying to do 'on the side'? The advantage is that this way we
 don't have to change radically how current teams operate, we may be able
 to start this immediately with Kilo. Thoughts?
 
 /stef
 
 -- 
 Ask and answer questions on https://ask.openstack.org
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] China blocking access to OpenStack git review push

2014-09-08 Thread Chen CH Ji
I found this more than a week ago; I asked about it on IRC and
finally found it's a GFW issue.

The easiest way is to find a server outside China and use an ssh tunnel;
at least it works for me and other colleagues here:

ssh -N -f -L 29418:review.openstack.org:29418 $server_in_US
then map review.openstack.org to localhost in /etc/hosts.
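
For example (sketch - with the tunnel above running, git review keeps
using the normal hostname but the connection goes through the local
forward):

    # /etc/hosts
    127.0.0.1   review.openstack.org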

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Matt Riedemann mrie...@linux.vnet.ibm.com
To: openstack-dev@lists.openstack.org
Date:   09/09/2014 05:28 AM
Subject:Re: [openstack-dev] China blocking access to OpenStack git
review push





On 9/8/2014 2:20 PM, Thomas Goirand wrote:
 Am I dreaming, or is the Chinese government is trying to push for the
 cloud, they said. However, today, bad surprise:

 # nmap -p 29418 23.253.232.87

 Starting Nmap 6.00 ( http://nmap.org ) at 2014-09-09 03:10 CST
 Nmap scan report for review.openstack.org (23.253.232.87)
 Host is up (0.21s latency).
 PORT  STATESERVICE
 29418/tcp filtered unknown

 Oh dear ... not fun!

 FYI, this is from China Unicom (eg: CNC Group)

 I'm guessing that this is the Great Firewall of China's awesome automated
 ban script, which detected too many ssh connections to a weird port. It
 has blocked a few of my servers recently too, as it became way too
 aggressive. I very much prefer to use git review from my laptop rather than
 having to bounce around servers. :(

 Are there alternative IPs that I could use for review.openstack.org?

 Cheers,

 Thomas Goirand (zigo)

 P.S: If a Chinese official reads this: providing an easy way to unblock
 (legitimate) servers would be the first thing any reasonable
 government should do.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Yeah the IBM DB2 third party CI is run from a team in China and they've
been blocked for a few weeks now.

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] China blocking access to OpenStack git review push

2014-09-08 Thread Huang Zhiteng
I am a China Telecom user and have been blocked by the same issue for
days.  I actually reported this to China Telecom customer service.  I
don't expect these ISPs to have the authority to unblock this; all I
want is for them to file something with the GFW so that those guys would
be aware that an innocent site got blocked.  Until then I'll go with
Clark's suggestion of using https.
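
For anyone wanting to try the same thing, the rough recipe (assuming you
have generated an HTTP password on your Gerrit settings page; the project
path below is just an example) is to point the gerrit remote at https
instead of the ssh port:

    git remote set-url gerrit https://<username>@review.openstack.org/openstack/nova.git
    git review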

On Mon, Sep 8, 2014 at 12:20 PM, Thomas Goirand z...@debian.org wrote:
 Am I dreaming, or is the Chinese government trying to push for the
 cloud, as they said? However, today, bad surprise:

 # nmap -p 29418 23.253.232.87

 Starting Nmap 6.00 ( http://nmap.org ) at 2014-09-09 03:10 CST
 Nmap scan report for review.openstack.org (23.253.232.87)
 Host is up (0.21s latency).
 PORT  STATESERVICE
 29418/tcp filtered unknown

 Oh dear ... not fun!

 FYI, this is from China Unicom (eg: CNC Group)

 I'm guessing that this is the Great Firewall of China's awesome automated
 ban script, which detected too many ssh connections to a weird port. It
 has blocked a few of my servers recently too, as it became way too
 aggressive. I very much prefer to use git review from my laptop rather than
 having to bounce around servers. :(

 Are there alternative IPs that I could use for review.openstack.org?

 Cheers,

 Thomas Goirand (zigo)

 P.S: If a Chinese official reads this: providing an easy way to unblock
 (legitimate) servers would be the first thing any reasonable
 government should do.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Regards
Huang Zhiteng

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Status of Neutron at Juno-3

2014-09-08 Thread loy wolfe
Is ToR-based VXLAN really of common interest to most people, and should
a BP be taken care of now, given that its designed use case is third-party
ToR backend integration?


On Fri, Sep 5, 2014 at 10:54 PM, Robert Kukura kuk...@noironetworks.com
wrote:

 Kyle,

 Please consider an FFE for
 https://blueprints.launchpad.net/neutron/+spec/ml2-hierarchical-port-binding. This was discussed
 extensively at Wednesday's ML2 meeting, where the consensus was that it
 would be valuable to get this into Juno if possible. The patches have had
 core reviews from Armando, Akihiro, and yourself. Updates to the three
 patches addressing the remaining review issues will be posted today, along
 with an update to the spec to bring it in line with the implementation.

 -Bob


 On 9/3/14, 8:17 AM, Kyle Mestery wrote:

 Given how deep the merge queue is (146 currently), we've effectively
 reached feature freeze in Neutron now (likely other projects as well).
 So this morning I'm going to go through and remove BPs from Juno which
 did not make the merge window. I'll also be putting temporary -2s in
 the patches to ensure they don't slip in as well. I'm looking at FFEs
 for the high priority items which are close but didn't quite make it:

 https://blueprints.launchpad.net/neutron/+spec/l3-high-availability
 https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security
 https://blueprints.launchpad.net/neutron/+spec/security-group-rules-for-devices-rpc-call-refactor

 Thanks,
 Kyle

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Working on 6.0 and new releases in general

2014-09-08 Thread Dmitry Borodaenko
TL;DR: Yes, our work on 6.0 features is currently blocked and it is
becoming a major problem. No, I don't think we should create
pre-release or feature branches. Instead, we should create stable/5.1
branches and open master for 6.0 work.

We have reached a point in the 5.1 release cycle where the scope of issues
we are willing to address in this release is narrow enough to not
require full attention of the whole team. We have engineers working on
6.0 features, and their work is essentially blocked until they have
somewhere to commit their changes.

Simply creating new branches is not even close to solving this
problem: we have a whole CI infrastructure around every active release
series (currently 5.1, 5.0, 4.1), including test jobs for gerrit
commits, package repository mirror updates, ISO image builds, smoke,
build verification, and swarm tests for ISO images, documentation
builds, etc. A branch without all that infrastructure isn't any better
than current status quo: every developer tracking their own 6.0 work
locally.

Unrelated to all that, we also had a lot of very negative experience
with feature branches in the past [0] [1], which is why we have
decided to follow the OpenStack branching strategy: commit all feature
changes directly to master and track bugfixes for stable releases in
stable/* branches.

[0] https://lists.launchpad.net/fuel-dev/msg00127.html
[1] https://lists.launchpad.net/fuel-dev/msg00028.html

I'm also against declaring a hard code freeze with exceptions; HCF
should remain tied to our ability to declare a release candidate. If
we can't release with the bugs we already know about, declaring HCF
before fixing these bugs would be an empty gesture.

Creating stable/5.1 now instead of waiting for hard code freeze for
5.1 will cost us two things:

1) DevOps team will have to update our CI infrastructure for one more
release series. It's something we have to do for 6.0 sooner or later,
so this may be a disruption, but not an additional effort.

2) All commits targeted for 5.1 will have to be proposed for two
branches (master and stable/5.1) instead of just one (master). This
will require additional effort, but I think that it is significantly
smaller than the cost of spinning our wheels on 6.0 efforts.
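
(For reference, the per-commit overhead in 2) is the usual stable-branch
backport dance - a sketch, commands only illustrative:

    git checkout -b my-fix origin/stable/5.1
    git cherry-pick -x <sha-of-the-master-commit>
    git review stable/5.1
)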

-DmitryB


On Mon, Sep 8, 2014 at 10:10 AM, Dmitry Mescheryakov
dmescherya...@mirantis.com wrote:
 Hello Fuelers,

 Right now we have the following policy in place: the branches for a
 release are opened only after its 'parent' release has reached hard
 code freeze (HCF). Say, the 5.1 release is the parent release for 5.1.1
 and 6.0.

 And that is the problem: if the parent release is delayed, we can't
 properly start development of a child release because we don't have
 branches to commit to. That is the current issue with 6.0: we have already
 started to work on pushing Juno into 6.0, but if we are to make changes to
 our deployment code we have nowhere to store them.

 IMHO the issue could easily be resolved by creating pre-release
 branches, which are merged back into their parent branches once the
 parent reaches HCF. Say, we use a branch 'pre-6.0' for initial
 development of 6.0. Once 5.1 reaches HCF, we merge pre-6.0 into master
 and continue development there. After that, pre-6.0 is abandoned.

 What do you think?

 Thanks,

 Dmitry

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] convergence flow diagrams

2014-09-08 Thread Qiming Teng
On Tue, Sep 09, 2014 at 10:15:04AM +1000, Angus Salkeld wrote:
 On Mon, Sep 8, 2014 at 11:22 PM, Tyagi, Ishant ishant.ty...@hp.com wrote:
(snip)
 
 Great! Good to see something.

Indeed. These diagrams are very useful for those who read plain English
text very slowly and those who think visually, like me.  Thanks.

Some comments/questions below, following Angus's.
 
 I was expecting something like:
 engine ~= like nova-conductor (it's the only process that talks to the db -
 make upgrading easier)
 observer - purely gets the actual state/properties and writes then to the
 db (via engine)
 worker - has a job queue and grinds away at running those (resource
 actions)

This looks like a much cleaner overview. If I'm understanding this
correctly, an observer can be a pollster or just a consumer subscribing
to certain notifications on the queue.  We are going to have a lot of
observers dedicated to each sub-tree or even to individual resources,
right?  It would be cool if we could depict how the observers coordinate
their operations.

As for the workers, they appear to me like consumers of a (shared?) 'job'
queue. We are relying on the messaging backend to ensure the persistence of
jobs, right?  Or should we persist the jobs in the Heat DB so that an engine
restart will not cause confusion?

 Then engine then triggers on differences on goal vs. actual state and
 create a job and sends it to the job queue.
 - so, on create it sees there is no actual state so it sends a create job
 for the first resource to the worker queue
 - when the observer writes the new state for that resource it triggers the
 next resource create in the dependency tree.

It seems to me that the observers should observe, then calculate and
notify the differences?  I am also assuming that other stack operations
will fit into the big picture, but I don't know how...

 - like any system that relies on notifications we need timeouts and each
 stack needs a periodic notification to make sure
   that progress is been made or notify the user that no progress is being
 made.

Vote on this. 

 One question about the observer (in either my setup or the one in the
 diagram).
 - If we are relying on rpc notifications all the observer processes will
 receive a copy of the same notification
   (say nova create end) how are we going to decide on which one does
 anything with it?
   We don't want 10 observers getting more detailed info from nova and then
 writing to the db
 
 In your diagram worker is communicating with observer, which seems odd to
 me. I thought observer and worker were very
 independent entities.

+1.

Things may get simpler if we use messaging only for notification
purposes, not for RPC between observers and convergence workers.

  
 In my setup there are less API to worry about too:
 - RPC api for the engine (access to the db)
 - RPC api for sending a job to the worker
 - the plugin API
 - the observer might need an api just for the engine to tell it to
 start/stop observing a stack
 
 -Angus

Regards,
  Qiming


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] Scheduler sub-group meeting agenda 9/9

2014-09-08 Thread Dugger, Donald D
1)  Next steps (mainly want to talk about interface cleanup)

2)  Opens

--
Don Dugger
Censeo Toto nos in Kansa esse decisse. - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TaskFlow][Oslo] TaskFlow 0.4.0 Released!

2014-09-08 Thread Joshua Harlow
Howdy all,

Just wanted to send a mini-announcement about the new taskflow 0.4.0 release on 
behalf of the oslo and taskflow teams,

The details about what is new and what is changed and all that goodness can be 
found @ https://etherpad.openstack.org/p/TaskFlow-0.4

Updated developer docs can be found at 
http://docs.openstack.org/developer/taskflow as usual (state validation and 
enforcement were a big thing in this release IMHO).
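
For anyone who hasn't tried the library yet, a minimal usage sketch (not
specific to anything new in 0.4.0, and the task bodies are made up) looks
roughly like:

    # A tiny two-task linear flow run by the default engine.
    from taskflow import engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class Fetch(task.Task):
        default_provides = 'data'

        def execute(self):
            return 'some-bytes'

    class Store(task.Task):
        def execute(self, data):
            print('storing %s' % data)

    flow = linear_flow.Flow('fetch-and-store').add(Fetch(), Store())
    engines.run(flow)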

Bugs of course can be reported @ 
http://bugs.launchpad.net/taskflow/0.4/+filebug (hopefully no bugs!).

The requirements repo is being updated (no expected issues there) @ 
https://review.openstack.org/119985

As always find the team in #openstack-state-management and come with questions, 
comments or other thoughts :-)

Onward and upward to the next release (hopefully a 0.4.1 release with some 
patch fixes, small adjustments soon...)!

- Josh

P.S.

Also wanted to give a shout-out to a recent blog about taskflow that one of the 
cores has been creating:

http://www.dankrause.net/2014/08/23/intro-to-taskflow.html (hopefully to turn 
into a series)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Request for python-heatclient project to adopt heat-translator

2014-09-08 Thread Angus Salkeld
Hi

Would the translator ever need to talk to something like Mistral for
workflow?
If so, does it make sense to hook the translator into heatclient?

(this might not be an issue, just asking).

-Angus

On Wed, Sep 3, 2014 at 1:52 AM, Sahdev P Zala spz...@us.ibm.com wrote:

 Hello guys,

 As you know, the heat-translator project was started early this year with
 an aim to create a tool to translate non-Heat templates to HOT. It is a
 StackForge project licensed under Apache 2. We have made good progress with
 its development and a demo was given at the OpenStack 2014 Atlanta summit
 during a half-day session dedicated to the heat-translator project
 and related TOSCA discussion. Currently, development and testing are done
 with the TOSCA template format, but the tool is designed to be generic
 enough to work with templates other than TOSCA. There are five developers
 actively contributing to the development. In addition, all current Heat
 core members are already core members of the heat-translator project.

 Recently, I attended the Heat Mid Cycle Meet Up for Juno in Raleigh and
 updated the attendees on the heat-translator project and its ongoing
 progress. I also asked everyone about formal adoption of the project into
 python-heatclient, and the consensus was that it is the right thing to do.
 Also, when the project was started, the initial plan was to make it
 available in python-heatclient. The heat-translator team would therefore
 like to request that the heat-translator project be adopted by the
 python-heatclient/Heat program.

 Below are some links related to the project:

 https://github.com/stackforge/heat-translator

 https://launchpad.net/heat-translator

 https://blueprints.launchpad.net/heat-translator

 https://bugs.launchpad.net/heat-translator

 http://heat-translator.readthedocs.org/ (in progress)

 Thanks!

 Regards,
 Sahdev Zala
 IBM SWG Standards Strategy
 Durham, NC
 (919)486-2915 T/L: 526-2915


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev