[openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Thierry Carrez
Hello everyone,

OpenStack has become quite big, and it's easier than ever to feel lost,
to feel like nothing is really happening. It's more difficult than ever
to feel part of a single community, and to celebrate little successes
and progress.

In a (small) effort to help with that, I suggested making it easier to
record little moments of joy and small success bits. Those are usually
not worth the effort of a blog post or a new mailing-list thread, but
they show that our community makes progress *every day*.

So whenever you feel like you made progress, or had a little success in
your OpenStack adventures, or have some joyful moment to share, just
throw the following message on your local IRC channel:

#success [Your message here]

The openstackstatus bot will take that and record it on this wiki page:

https://wiki.openstack.org/wiki/Successes

We'll add a few of those every week to the weekly newsletter (as part of
the developer digest that we recently added there).

Caveats: Obviously that only works on channels where openstackstatus is
present (the official OpenStack IRC channels), and we may remove entries
that are off-topic or spam.

So... please use #success liberally and record little everyday OpenStack
successes. Share the joy and make the OpenStack community a happy place.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Jordan Pittier
Hi,
On Fri, Oct 9, 2015 at 11:00 AM, Tang Chen  wrote:

> Hi,
>
> CI systems will run tests for each patch once it is submitted or modified.
> But most CI systems occupy a lot of resources, and take a long time to
> run tests (1 or 2 hours for one patch).
>
> I think, not all the patches submitted need to be tested. Even those
> patches
> with an approved BP and spec may be reworked for 20+ versions. So I think
> CI should support an RFC (Request for Comments) mechanism for developers
> to submit and review the code detail and rework. When the patches are
> fully ready, I mean all reviewers have agreed on the implementation detail,
> then CI will test the patches.

So have the humans do the hard work to eventually find out that the patch
breaks the world ?


> For a 20+ version patch-set, maybe 3 or 4 rounds
> of tests are enough. Just test the last 3 or 4 versions.
>
 How do you know, when a new patchset arrives, that it's part of the last 3 or
4 versions ?

>
> This can significantly reduce CI overload.
>
> This workflow appears in many other OSS communities, such as Linux kernel,
> qemu and libvirt. Testers won't test patches with a [RFC] tag in the
> commit message.
> So I want to enable CI to support a similar mechanism.
>
> I'm not sure if it is a good idea. Please help to review the following BP.
>
> https://blueprints.launchpad.net/openstack-ci/+spec/ci-rfc-mechanism
>
> Thanks.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I am running a 3rd party CI for Cinder. The amount of time it takes to set up,
operate and watch over the CI costs way more than the 1 or 2 servers it
takes to run the jobs. So, I don't want to be a party pooper here, but in my
opinion I am not sure it's worth the effort.

Note: I don't know about nova or neutron.

Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Tang Chen


On 10/09/2015 05:48 PM, Jordan Pittier wrote:

Hi,
On Fri, Oct 9, 2015 at 11:00 AM, Tang Chen wrote:


Hi,

CI systems will run tests for each patch once it is submitted or
modified.
But most CI systems occupy a lot of resources, and take a long time to
run tests (1 or 2 hours for one patch).

I think, not all the patches submitted need to be tested. Even
those patches
with an approved BP and spec may be reworked for 20+ versions. So
I think
CI should support an RFC (Request for Comments) mechanism for
developers
to submit and review the code detail and rework. When the patches are
fully ready, I mean all reviewers have agreed on the
implementation detail,
then CI will test the patches. 

So have the humans do the hard work to eventually find out that the 
patch breaks the world ?


No. Developers of course will run some tests themselves before they 
submit patches.
It is just a waste of resources if reviewers are still discussing where
this function should be, or what the function should be named. After all
these details are agreed on, run the CI.



For a 20+ version patch-set, maybe 3 or 4 rounds
of tests are enough. Just test the last 3 or 4 versions.

 How do you know, when a new patchset arrives, that it's part of the last
3 or 4 versions ?


I think it could work like this:
1. At first, the developer submits the v1 patch-set with an RFC tag. CIs don't run.
2. After several reworked versions, like v5 or v6, most reviewers have agreed
that the implementation is OK. Then submit v7 without the RFC tag, and the CIs run.
3. After 3 or 4 rounds of tests, the v10 patch-set could be merged.
(A rough sketch of the tag check a CI could apply is below.)
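
A minimal sketch of such a tag check (the function name is hypothetical, and
this is not an existing Zuul or third-party CI feature):

    # Decide whether a CI should run jobs for a patchset, based on an
    # "[RFC]" tag in the commit message subject (a sketch only).
    def should_run_ci(commit_message):
        subject = commit_message.splitlines()[0].strip()
        return not subject.upper().startswith('[RFC]')

    print(should_run_ci('[RFC] scheduler: rework host state tracking'))  # False: skip
    print(should_run_ci('scheduler: rework host state tracking'))        # True: run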

Thanks.



This can significantly reduce CI overload.

This workflow appears in many other OSS communities, such as Linux
kernel,
qemu and libvirt. Testers won't test patches with a [RFC] tag in
the commit message.
So I want to enable CI to support a similar mechanism.

I'm not sure if it is a good idea. Please help to review the
following BP.

https://blueprints.launchpad.net/openstack-ci/+spec/ci-rfc-mechanism

Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I am running a 3rd party CI for Cinder. The amount of time it takes to set up,
operate and watch over the CI costs way more than the 1 or 2 servers it
take to run the jobs. So, I don't want to be a party pooper
here, but in my opinion I am not sure it's worth the effort.


Note: I don't know about nova or neutron.

Jordan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [CI] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Tang Chen

Hi,

CI systems will run tests for each patch once it is submitted or modified.
But most CI systems occupy a lot of resources, and take a long time to
run tests (1 or 2 hours for one patch).

I think not all the patches submitted need to be tested. Even those patches
with an approved BP and spec may be reworked for 20+ versions. So I think
CI should support an RFC (Request for Comments) mechanism for developers
to submit and review the code details and rework. When the patches are
fully ready, meaning all reviewers have agreed on the implementation details,
then CI will test the patches. For a 20+ version patch-set, maybe 3 or 4
rounds of tests are enough. Just test the last 3 or 4 versions.

This can significantly reduce CI overload.

This workflow appears in many other OSS communities, such as the Linux kernel,
qemu and libvirt. Testers won't test patches with an [RFC] tag in the
commit message. So I want to enable CI to support a similar mechanism.

I'm not sure if it is a good idea. Please help to review the following BP.

https://blueprints.launchpad.net/openstack-ci/+spec/ci-rfc-mechanism

Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Testing result of new atomic-6 image

2015-10-09 Thread Qiao,Liyong

Here are the testing results for the new atomic-6 image [1] built by Tango.
The atomic-5 image has an issue starting a container instance (docker version
is 1.7.1), so Tango built a new atomic-6 image with docker version 1.8.1.

eghobo and I (eliqiao) did some testing work (eghobo_ did most of it).

Here is the summary:

 * coe=swarm

1.  cannot pull swarm:0.2.0; using 0.4.0 or latest works
2.  when creating a container with the magnum CLI, the image name
    should use the full name, like "docker.io/cirros"

examples for 2:

   taget@taget-ThinkStation-P300:~/kubernetes/examples/redis$ magnum
   container-create --name testcontainer --image cirros --bay swarmbay6
   --command "echo hello"
   ERROR: Docker internal Error: 404 Client Error: Not Found ("No
   such image: cirros") (HTTP 500)
   taget@taget-ThinkStation-P300:~/kubernetes/examples/redis$ magnum
   container-create --name testcontainer --image docker.io/cirros --bay
   swarmbay6 --command "echo hello"

 * coe=k8s (tls_disabled=True)

kube-apiserver.service cannot start up, but it can be started from the
command line [2]. I tried to use kubectl get pod, but it failed:

   [minion@k8-5qx66ie62f-0-vaucgvagirv4-kube-master-oemtlcotgak6 ~]$
   kubectl get pod
   error: couldn't read version from server: Get
   http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused

netstat shows that 8080 is not being listened on; I am not sure why (not
familiar with k8s).


   [minion@k8-5qx66ie62f-0-vaucgvagirv4-kube-master-oemtlcotgak6 ~]$
   ps aux | grep kub
   kube   805  0.5  1.0  30232 21436 ?Ssl  08:12 0:29
   /usr/bin/kube-controller-manager --logtostderr=true --v=0
   --master=http://127.0.0.1:8080
   kube   806  0.1  0.6  17332 13048 ?Ssl  08:12 0:09
   /usr/bin/kube-scheduler --logtostderr=true --v=0
   --master=http://127.0.0.1:8080
   root  1246  0.0  1.0  33656 22300 pts/0Sl+  09:33 0:00
   /usr/bin/kube-apiserver --logtostderr=true --v=0
   --etcd_servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0
   --insecure-port=8080 --kubelet_port=10250 --allow_privileged=true
   --service-cluster-ip-range=10.254.0.0/16 --runtime_config=api/all=true
   minion1276  0.0  0.0  11140  1632 pts/1S+   09:46 0:00 grep
   --color=auto kub


[1] https://fedorapeople.org/groups/magnum/fedora-21-atomic-6-d181.qcow2
[2] http://paste.openstack.org/show/475824/

-- BR, Eli(Li Yong)Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Plan to port Swift to Python 3

2015-10-09 Thread vishal yadav
Victor,

I appreciate your effort.

However, I was just checking whether you considered using 2to3. I understand
that translation using this tool might not cover every area of the code,
specifically custom/3rd-party libraries (non-standard Python libraries),
but IMO it can do the fixer translations to a large extent. If needed,
custom fixers can also be defined for 2to3.

- https://docs.python.org/2/library/2to3.html
- https://docs.python.org/3/howto/pyporting.html

Thanks,
Vishal

On Fri, Oct 9, 2015 at 2:34 PM, Thierry Carrez 
wrote:

> Victor Stinner wrote:
> > Good news, we made good progress in recent weeks on porting Swift to Python
> > 3, a few changes were merged and all dependencies now work on Python 3.
> > We only need two more simple changes to have a working python34 check job:
> >
> > * "py3: Update pbr and dnspython requirements"
> >   https://review.openstack.org/#/c/217423/
> > * "py3: Add py34 test environment to tox"
> >   https://review.openstack.org/#/c/199034/
> >
> > With these changes, it will be possible to make the python34 check job
> > voting to avoid Python 3 regressions. It's very important to avoid
> > regressions, so we cannot go backward again in Python 3 support.
> >
> > On IRC, it was said that it's better to merge Python 3 changes at the
> > beginning of the Mitaka cycle, because Python 3 requires a lot of small
> > changes which can likely introduce (subtle) bugs, and it's better to
> > catch them early during the development cycle.
> >
> > John Dickinson prefers incremental and small changes, whereas clayg
> > seems to like giant patches that fix all Python 3 issues at once to avoid
> > conflicts in other (non-Python3) changes. (Sorry if I didn't summarize
> > the discussion we had yesterday correctly.)
> >
> > The problem is that it's hard to fix "all" Python 3 issues in a single
> > patch, the patch would be super giant and just impossible to review.
> > It's also annoying to have to write dozens of small patches: we lose
> > time on merge conflicts, rebasing, random gate failures, etc.
> >
> > I proposed a first patch series of 6 changes to fix a lot of simple
> > Python 3 issues "at once":
> > [...]
> >
> > The overall diff is impressive: "61 files changed, 233 insertions(+),
> > 189 deletions(-)" ... but each change is quite simple. It's only one
> > pattern replaced with a different pattern. For example, replace
> > "unicode" with "six.text_type" (and add "import six" if needed). So
> > these changes should be easy to review.
> >
> > With a working (and voting?) python34 check job and these 6 changes, it
> > will be (much) easier to work on porting Swift to Python 3. Following
> > patches will be validated by the python34 check job, shorter and
> > restricted to a few files.
> >
> > Victor
>
> That's great news. Thanks so much for your tireless efforts to get
> Python 3 supported everywhere in OpenStack, Victor !
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins related functionality in Fuel Client

2015-10-09 Thread Evgeniy L
>> I’d say even if it will be a separate service it’s better to proxy
requests through Nailgun’s API to have a single entry point.

I don't think an application such as Nailgun should be responsible for
proxying requests; we solved a similar problem for OSTF by adding a proxy
rule in Nginx.

Thanks,

On Fri, Oct 9, 2015 at 11:45 AM, Roman Prykhodchenko  wrote:

> I’d say even if it will be a separate service it’s better to proxy
> requests through Nailgun’s API to have a single entry point.
>
> On 9 Oct 2015, at 10:23, Evgeniy L wrote:
>
> Hi,
>
> +1, but I think it's better to spawn separate service, instead of adding
> it to Nailgun.
>
> Thanks,
>
> On Fri, Oct 9, 2015 at 1:40 AM, Roman Prykhodchenko  wrote:
>
>> Folks,
>>
>> it’s time to speak about Fuel Plugins and the way they are managed.
>>
>> Currently we have some methods in Fuel Client that allow to install,
>> remove and do some other things to plugins. Everything looks great except
>> that functionality requires Fuel Client to be installed on a master node
>> and be running under a root user. It’s time for us to grow up and realize
>> that nothing can require Fuel Client to be installed on a specific computer
>> and of course we cannot require root permissions for any actions.
>>
>> I’d like to move all that code to Nailgun, utilizing mules and hide it
>> behind Nailgun’s API as soon as possible. For that I filed a bug [1] and
>> I’d like to ask Fuel Enhancements subgroup of developers to take a close
>> look at it.
>>
>>
>> 1. https://bugs.launchpad.net/fuel/+bug/1504338
>>
>>
>> - romcheg
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins related functionality in Fuel Client

2015-10-09 Thread Vitaly Kramskikh
+1, that would allow installing plugins from the Fuel UI

2015-10-09 15:53 GMT+07:00 Sergii Golovatiuk :

> +1 to Roman.
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Fri, Oct 9, 2015 at 10:45 AM, Roman Prykhodchenko 
> wrote:
>
>> I’d say even if it will be a separate service it’s better to proxy
>> requests through Nailgun’s API to have a single entry point.
>>
>> On 9 Oct 2015, at 10:23, Evgeniy L wrote:
>>
>> Hi,
>>
>> +1, but I think it's better to spawn separate service, instead of adding
>> it to Nailgun.
>>
>> Thanks,
>>
>> On Fri, Oct 9, 2015 at 1:40 AM, Roman Prykhodchenko 
>> wrote:
>>
>>> Folks,
>>>
>>> it’s time to speak about Fuel Plugins and the way they are managed.
>>>
>>> Currently we have some methods in Fuel Client that allow to install,
>>> remove and do some other things to plugins. Everything looks great except
>>> that functionality requires Fuel Client to be installed on a master node
>>> and be running under a root user. It’s time for us to grow up and realize
>>> that nothing can require Fuel Client to be installed on a specific computer
>>> and of course we cannot require root permissions for any actions.
>>>
>>> I’d like to move all that code to Nailgun, utilizing mules and hide it
>>> behind Nailgun’s API as soon as possible. For that I filed a bug [1] and
>>> I’d like to ask Fuel Enhancements subgroup of developers to take a close
>>> look at it.
>>>
>>>
>>> 1. https://bugs.launchpad.net/fuel/+bug/1504338
>>>
>>>
>>> - romcheg
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins related functionality in Fuel Client

2015-10-09 Thread Roman Prykhodchenko
In that case I would suggest also using the Keystone service catalog for
discovering services.

> On 9 Oct 2015, at 11:00, Evgeniy L wrote:
> 
> >> I’d say even if it will be a separate service it’s better to proxy 
> >> requests through Nailgun’s API to have a single entry point.
> 
> I don't think an application such as Nailgun should be responsible for
> proxying requests; we solved a similar problem for OSTF by adding a proxy
> rule in Nginx.
> 
> Thanks,
> 
> On Fri, Oct 9, 2015 at 11:45 AM, Roman Prykhodchenko wrote:
> I’d say even if it will be a separate service it’s better to proxy requests 
> through Nailgun’s API to have a single entry point.
> 
>> On 9 Oct 2015, at 10:23, Evgeniy L wrote:
>> 
>> Hi,
>> 
>> +1, but I think it's better to spawn separate service, instead of adding it 
>> to Nailgun.
>> 
>> Thanks,
>> 
>> On Fri, Oct 9, 2015 at 1:40 AM, Roman Prykhodchenko wrote:
>> Folks,
>> 
>> it’s time to speak about Fuel Plugins and the way they are managed.
>> 
>> Currently we have some methods in Fuel Client that allow to install, remove 
>> and do some other things to plugins. Everything looks great except that 
>> functionality requires Fuel Client to be installed on a master node and be 
>> running under a root user. It’s time for us to grow up and realize that 
>> nothing can require Fuel Client to be installed on a specific computer and 
>> of course we cannot require root permissions for any actions.
>> 
>> I’d like to move all that code to Nailgun, utilizing mules and hide it 
>> behind Nailgun’s API as soon as possible. For that I filed a bug [1] and I’d 
>> like to ask Fuel Enhancements subgroup of developers to take a close look at 
>> it.
>> 
>> 
>> 1. https://bugs.launchpad.net/fuel/+bug/1504338 
>> 
>> 
>> 
>> - romcheg
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] Inconsistent timestamping of polled data

2015-10-09 Thread Wen Zhi WW Yu

Hi all,

As Gordon described in https://bugs.launchpad.net/ceilometer/+bug/1491509,
many pollsters define the timestamp individually for each sample that is
generated rather than basing it on when the data was polled. I agree with
Gordon that the timestamping of samples should be based on when the data
was polled.
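
As a rough sketch of the idea (not the actual pollster code; get_stats and
the sample fields are made-up placeholders), the timestamp would be captured
once per polling cycle and applied to every sample:

    from oslo_utils import timeutils

    def poll_resources(resources, get_stats):
        # One timestamp for the whole polling cycle...
        poll_time = timeutils.utcnow().isoformat()
        samples = []
        for resource_id in resources:
            stats = get_stats(resource_id)   # ...however long each call takes
            samples.append({'resource_id': resource_id,
                            'volume': stats,
                            'timestamp': poll_time})
        return samples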

What's your opinion on this?

Best Regards,
Yu WenZhi(余文治)
OpenStack on Power Development, IBM Shanghai
2F, 399 Keyuan Rd, Zhangjiang Chuangxin No. 10 Building, Zhangjiang High
Tech Park Shanghai
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins related functionality in Fuel Client

2015-10-09 Thread Evgeniy L
Hi,

+1, but I think it's better to spawn separate service, instead of adding it
to Nailgun.

Thanks,

On Fri, Oct 9, 2015 at 1:40 AM, Roman Prykhodchenko  wrote:

> Folks,
>
> it’s time to speak about Fuel Plugins and the way they are managed.
>
> Currently we have some methods in Fuel Client that allow to install,
> remove and do some other things to plugins. Everything looks great except
> that functionality requires Fuel Client to be installed on a master node
> and be running under a root user. It’s time for us to grow up and realize
> that nothing can require Fuel Client to be installed on a specific computer
> and of course we cannot require root permissions for any actions.
>
> I’d like to move all that code to Nailgun, utilizing mules and hide it
> behind Nailgun’s API as soon as possible. For that I filed a bug [1] and
> I’d like to ask Fuel Enhancements subgroup of developers to take a close
> look at it.
>
>
> 1. https://bugs.launchpad.net/fuel/+bug/1504338
>
>
> - romcheg
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins related functionality in Fuel Client

2015-10-09 Thread Roman Prykhodchenko
I’d say even if it will be a separate service it’s better to proxy requests 
through Nailgun’s API to have a single entry point.

> On 9 Oct 2015, at 10:23, Evgeniy L wrote:
> 
> Hi,
> 
> +1, but I think it's better to spawn separate service, instead of adding it 
> to Nailgun.
> 
> Thanks,
> 
> On Fri, Oct 9, 2015 at 1:40 AM, Roman Prykhodchenko wrote:
> Folks,
> 
> it’s time to speak about Fuel Plugins and the way they are managed.
> 
> Currently we have some methods in Fuel Client that allow to install, remove 
> and do some other things to plugins. Everything looks great except that 
> functionality requires Fuel Client to be installed on a master node and be 
> running under a root user. It’s time for us to grow up and realize that 
> nothing can require Fuel Client to be installed on a specific computer and of 
> course we cannot require root permissions for any actions.
> 
> I’d like to move all that code to Nailgun, utilizing mules and hide it behind 
> Nailgun’s API as soon as possible. For that I filed a bug [1] and I’d like to 
> ask Fuel Enhancements subgroup of developers to take a close look at it.
> 
> 
> 1. https://bugs.launchpad.net/fuel/+bug/1504338 
> 
> 
> 
> - romcheg
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins related functionality in Fuel Client

2015-10-09 Thread Sergii Golovatiuk
+1 to Roman.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Oct 9, 2015 at 10:45 AM, Roman Prykhodchenko  wrote:

> I’d say even if it will be a separate service it’s better to proxy
> requests through Nailgun’s API to have a single entry point.
>
> On 9 Oct 2015, at 10:23, Evgeniy L wrote:
>
> Hi,
>
> +1, but I think it's better to spawn separate service, instead of adding
> it to Nailgun.
>
> Thanks,
>
> On Fri, Oct 9, 2015 at 1:40 AM, Roman Prykhodchenko  wrote:
>
>> Folks,
>>
>> it’s time to speak about Fuel Plugins and the way they are managed.
>>
>> Currently we have some methods in Fuel Client that allow to install,
>> remove and do some other things to plugins. Everything looks great except
>> that functionality requires Fuel Client to be installed on a master node
>> and be running under a root user. It’s time for us to grow up and realize
>> that nothing can require Fuel Client to be installed on a specific computer
>> and of course we cannot require root permissions for any actions.
>>
>> I’d like to move all that code to Nailgun, utilizing mules and hide it
>> behind Nailgun’s API as soon as possible. For that I filed a bug [1] and
>> I’d like to ask Fuel Enhancements subgroup of developers to take a close
>> look at it.
>>
>>
>> 1. https://bugs.launchpad.net/fuel/+bug/1504338
>>
>>
>> - romcheg
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins related functionality in Fuel Client

2015-10-09 Thread Andriy Popovych
Actually it's an old issue 
https://blueprints.launchpad.net/fuel/+spec/plugin-manager-as-separate-service


On 10/09/2015 11:53 AM, Sergii Golovatiuk wrote:

+1 to Roman.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Fri, Oct 9, 2015 at 10:45 AM, Roman Prykhodchenko wrote:

I’d say even if it will be a separate service it’s better to proxy
requests through Nailgun’s API to have a single entry point.


On 9 Oct 2015, at 10:23, Evgeniy L wrote:

Hi,

+1, but I think it's better to spawn separate service, instead of
adding it to Nailgun.

Thanks,

On Fri, Oct 9, 2015 at 1:40 AM, Roman Prykhodchenko wrote:

Folks,

it’s time to speak about Fuel Plugins and the way they are
managed.

Currently we have some methods in Fuel Client that allow to
install, remove and do some other things to plugins.
Everything looks great except that functionality requires Fuel
Client to be installed on a master node and be running under a
root user. It’s time for us to grow up and realize that
nothing can require Fuel Client to be installed on a specific
computer and of course we cannot require root permissions for
any actions.

I’d like to move all that code to Nailgun, utilizing mules and
hide it behind Nailgun’s API as soon as possible. For that I
filed a bug [1] and I’d like to ask Fuel Enhancements subgroup
of developers to take a close look at it.


1. https://bugs.launchpad.net/fuel/+bug/1504338


- romcheg



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org
?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Plan to port Swift to Python 3

2015-10-09 Thread Thierry Carrez
Victor Stinner wrote:
> Good news, we made good progress in recent weeks on porting Swift to Python
> 3, a few changes were merged and all dependencies now work on Python 3.
> We only need two more simple changes to have a working python34 check job:
> 
> * "py3: Update pbr and dnspython requirements"
>   https://review.openstack.org/#/c/217423/
> * "py3: Add py34 test environment to tox"
>   https://review.openstack.org/#/c/199034/
> 
> With these changes, it will be possible to make the python34 check job
> voting to avoid Python 3 regressions. It's very important to avoid
> regressions, so we cannot go backward again in Python 3 support.
> 
> On IRC, it was said that it's better to merge Python 3 changes at the
> beginning of the Mitaka cycle, because Python 3 requires a lot of small
> changes which can likely introduce (subtle) bugs, and it's better to
> catch them early during the development cycle.
> 
> John Dickinson prefers incremental and small changes, whereas clayg
> seems to like giant patches that fix all Python 3 issues at once to avoid
> conflicts in other (non-Python3) changes. (Sorry if I didn't summarize
> the discussion we had yesterday correctly.)
> 
> The problem is that it's hard to fix "all" Python 3 issues in a single
> patch, the patch would be super giant and just impossible to review.
> It's also annoying to have to write dozens of small patches: we lose
> time on merge conflicts, rebasing, random gate failures, etc.
> 
> I proposed a first patch series of 6 changes to fix a lot of simple
> Python 3 issues "at once":
> [...]
> 
> The overall diff is impressive: "61 files changed, 233 insertions(+),
> 189 deletions(-)" ... but each change is quite simple. It's only one
> pattern replaced with a different pattern. For example, replace
> "unicode" with "six.text_type" (and add "import six" if needed). So
> these changes should be easy to review.
> 
> With a working (and voting?) python34 check job and these 6 changes, it
> will be (much) easier to work on porting Swift to Python 3. Following
> patches will be validated by the python34 check job, shorter and
> restricted to a few files.
> 
> Victor
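
As a minimal illustration of the mechanical pattern Victor describes above
(a sketch with a made-up helper, not code taken from Swift):

    import six

    def ensure_text(value):
        # was: isinstance(value, unicode) -- the "unicode" builtin does not
        # exist on Python 3, so six.text_type works on both versions
        if isinstance(value, six.text_type):
            return value
        return value.decode('utf-8')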

That's great news. Thanks so much for your tireless efforts to get
Python 3 supported everywhere in OpenStack, Victor !

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Serg Melikyan
Hi Vahid,

Unfortunately, we don't plan to remove support for py26 from
python-muranoclient; most of the python clients support py26 in order to
work out of the box on different OSes, including CentOS 6.5 and so on.

>So the options are:
>1. support py26 in tosca-parser

Support for py26 is pretty easy to implement; there are only a few
things which are not available in py26 but are available in py27. In our
case it was a few places where we used {1, 2, 3} instead of set([1, 2, 3]).
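
(For context: set literals were only added in Python 2.7, so the constructor
form is the portable spelling. A tiny illustrative snippet, not code from
tosca-parser:)

    tags = set(['a', 'b', 'c'])   # works on Python 2.6 and 2.7
    # tags = {'a', 'b', 'c'}      # set literal: a SyntaxError on Python 2.6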

On Thu, Oct 8, 2015 at 11:59 PM, Vahid S Hashemian
 wrote:
> Hello,
>
> I am wondering if there is any near-term plan for removing the py26 support
> from the client project (python-muranoclient).
> For the tosca support blueprint python-muranoclient will become dependent on
> tosca-parser project and expect tosca-parser to support py26 (it currently
> does not support py26).
>
> So the options are:
> 1. support py26 in tosca-parser
> 2. wait until py26 support is phased out in python-muranoclient (only if
> it's happening soon)
>
> Thanks.
> -
> Vahid Hashemian, Ph.D.
> Advisory Software Engineer, IBM Cloud
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-09 Thread Yuriy Zveryanskyy

+2 for both, Vladyslav and John.

On 10/09/2015 12:47 AM, Jim Rollenhagen wrote:

Hi all,

I've been thinking a lot about Ironic's core reviewer team and how we might
make it better.

I'd like to grow the team more through trust and mentoring. We should be
able to promote someone to core based on a good knowledge of *some* of
the code base, and trust them not to +2 things they don't know about. I'd
also like to build a culture of mentoring non-cores on how to review, in
preparation for adding them to the team. Through these pieces, I'm hoping
we can have a few rounds of core additions this cycle.

With that said...

I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
have been super high quality, and the quantity is ever-increasing. He's
also started helping out with some smaller efforts (full tempest, for
example), and I'd love to see that continue with larger efforts.

I'd also like to nominate John Villalovos (jlvillal). John has been
reviewing a ton of code and making a real effort to learn everything,
and keep track of everything going on in the project.

Ironic cores, please reply with your vote; provided feedback is positive,
I'd like to make this official next week sometime. Thanks!

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-09 Thread Jay Faulkner
+1


From: Jim Rollenhagen 
Sent: Thursday, October 8, 2015 2:47 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ironic] Nominating two new core reviewers

Hi all,

I've been thinking a lot about Ironic's core reviewer team and how we might
make it better.

I'd like to grow the team more through trust and mentoring. We should be
able to promote someone to core based on a good knowledge of *some* of
the code base, and trust them not to +2 things they don't know about. I'd
also like to build a culture of mentoring non-cores on how to review, in
preparation for adding them to the team. Through these pieces, I'm hoping
we can have a few rounds of core additions this cycle.

With that said...

I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
have been super high quality, and the quantity is ever-increasing. He's
also started helping out with some smaller efforts (full tempest, for
example), and I'd love to see that continue with larger efforts.

I'd also like to nominate John Villalovos (jlvillal). John has been
reviewing a ton of code and making a real effort to learn everything,
and keep track of everything going on in the project.

Ironic cores, please reply with your vote; provided feedback is positive,
I'd like to make this official next week sometime. Thanks!

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All][Elections] Results of the TC Election

2015-10-09 Thread Tristan Cacqueray
Please join me in congratulating the 6 newly elected members of the TC.

* Doug Hellmann (dhellmann)
* Monty Taylor (mordred)
* Anne Gentle (annegentle)
* Sean Dague (sdague)
* Russell Bryant (russellb)
* Kyle Mestery (mestery)

Full results:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ef58718618691a0

Thank you to all candidates who stood for election; having a good group
of candidates helps engage the community in our democratic process.

Thank you to all who voted and who encouraged others to vote. We need to
ensure your voice is heard.

Thanks to my fellow election official, Tony Breeds, I appreciate your
help and perspective.

Thank you for another great round.

Here's to Mitaka,
Tristan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/09/2015 03:36 PM, Ian Wells wrote:

On 9 October 2015 at 12:50, Chris Friesen wrote:

Has anybody looked at why 1 instance is too slow and what it would take to
make 1 scheduler instance work fast enough? This does not preclude the use
of concurrency for finer grain tasks in the background.


Currently we pull data on all (!) of the compute nodes out of the database
via a series of RPC calls, then evaluate the various filters in python code.


I'll say again: the database seems to me to be the problem here.  Not to
mention, you've just explained that they are in practice holding all the data in
memory in order to do the work so the benefit we're getting here is really a
N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
secondary, in fact), and that without incremental updates to the receivers.


I don't see any reason why you couldn't have an in-memory scheduler.

Currently the database serves as the persistent storage for the resource usage,
so if we take it out of the picture I imagine you'd want to have some way of 
querying the compute nodes for their current state when the scheduler first 
starts up.


I think the current code uses the fact that objects are remotable via the 
conductor, so changing that to do explicit posts to a known scheduler topic 
would take some work.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-09 Thread Chen, Wei D
Great idea! Core reviewers' advice is definitely important and valuable
before proposing a fix. I have always thought it would help save us time
if we can get some agreement at some point.

 

 

Best Regards,

Dave Chen

 

From: David Stanek [mailto:dsta...@dstanek.com] 
Sent: Saturday, October 10, 2015 3:54 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [keystone] Let's get together and fix all the bugs

 

I would like to start running a recurring bug squashing day. The general idea 
is to get more focus on bugs and stability. You can find the details here: 
https://etherpad.openstack.org/p/keystone-office-hours

 

 

-- 

David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek

www: http://dstanek.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Clint Byrum
Excerpts from Sean Dague's message of 2015-10-09 14:00:40 -0700:
> On 10/09/2015 02:52 PM, Jonathan D. Proulx wrote:
> > On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
> > :On 10/09/2015 01:39 PM, David Stanek wrote:
> > :>
> > :>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx wrote:
> > :>As an operator I'd be happy to use SRV records to define endpoints,
> > :>though multiple regions could make that messy.
> > :>
> > :>would we make subdomins per region or include region name in the
> > :>service name?
> > :>
> > :>_compute-regionone._tcp.example.com 
> > :>-vs-
> > :>_compute._tcp.regionone.example.com 
> > :>
> > :>Also not all operators can control their DNS to this level so it
> > :>couldn't be the only option.
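
As a sketch of what such SRV-based discovery could look like on the client
side, using dnspython (the record name follows the second form above; the
resolved host and port are made-up assumptions, not an existing OpenStack
mechanism):

    import dns.resolver

    def discover_endpoint(service, region, domain):
        name = '_%s._tcp.%s.%s' % (service, region, domain)
        answers = dns.resolver.query(name, 'SRV')
        # Lowest priority value wins; weight breaks ties (higher preferred).
        best = sorted(answers, key=lambda r: (r.priority, -r.weight))[0]
        return str(best.target).rstrip('.'), best.port

    # discover_endpoint('compute', 'regionone', 'example.com')
    # might return ('nova.regionone.example.com', 8774)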
> > :
> > :SO - XMPP does this. The way it works is that if your XMPP provider
> > :has put the approriate records in DNS, then everything Just Works. If
> > :not, then you, as a consumer, have several pieces of information you
> > :need to provide by hand.
> > :
> > :Of course, there are already several pieces of information you have
> > :to provide by hand to connect to OpenStack, so needing to download a
> > :manifest file or something like that to talk to a cloud in an
> > :environment where the people running a cloud do not have the ability
> > :to add information to DNS (boggles) shouldn't be that terrible.
> > 
> > yes, but XMPP requires 2 (maybe 3) SRV records, so an equivalent number
> > of local config options is manageable. A cloud with X endpoints and Y
> > regions is significantly more.
> > 
> > Not to say this couldn't be done by packing more stuff into the openrc
> > or equivalent so users don't need to directly enter all that, but that
> > would be a significant change and one I think would be more difficult
> > for smaller operations.
> > 
> > :One could also imagine an in-between option where OpenStack could run
> > :an _optional_ DNS for this purpose - and then the only 'by-hand'
> > :you'd need for clouds with no real DNS is the location of the
> > :discover DNS.
> > 
> > Yes a special purpose DNS (a la dnsbl) might be preferable to
> > pushing around static configs.
> 
> I do realize lots of people want to go in much more radical directions
> here. I think we have to be really careful about that. The current
> cinder v1 -> v2 transition challenges demonstrate how much inertia there
> is. 3 years of talking about a Tasks API is another instance of it.
> 
> We aren't starting with a blank slate. This is brownfield development.
> There are enough users of this that making shifts need to be done in
> careful shifts that enable a new thing similar enough to the old thing,
> that people will easily be able to take advantage of it. Which means I
> think deciding to jump off the REST bandwagon for this is currently a
> bridge too far. At least to get anything tangible done in the next 6 to
> 12 months.
> 

I'm 100% in agreement that we can't abandon things that we've created. If
we create a DNS based catalog that is ready for prime time tomorrow,
we will have the REST based catalog for _years_.

> I think getting us a service catalog served over REST that doesn't
> require auth, and doesn't require tenant_ids in urls, gets us someplace
> we could figure out a DNS representation (for those that wanted that).
> But we have to tick / tock this and not change transports and
> representations at the same time.
> 

I don't think we're suggesting that we abandon the current one. We don't
break userspace!

However, replacing the underpinnings of the current one with the new one,
and leaving the current one as a compatibility layer _is_ a way to get
progress on the new one without shafting users. So I think considerable
consideration should be given to an approach where we limit working on
the core of the current solution, and replace that core with the new
solution + compatibility layer.

> And, as I've definitely discovered through this process the Service
> Catalog today has been fluid enough that where it is used, and what
> people rely on in it, isn't always clear all at once. For instance,
> tenant_ids in urls are very surface features in Nova (we don't rely on
> it, we're using the context), don't exist at all in most new services,
> and are very corely embedded in Swift. This is part of what has also
> required the service catalog to be embedded in the Token, which causes token
> bloat, and has led to other features to try to shrink the catalog by
> filtering it by what a user is allowed. Which in turn ended up being
> used by Horizon to populate the feature matrix users see.
> 
> So we're pulling on a thread, and we have to do that really carefully.
> 
> I think the important thing is to focus on what we have in 6 months
> doesn't break current users / applications, and is incrementally closer
> to our end 

Re: [openstack-dev] Different OpenStack components

2015-10-09 Thread Fox, Kevin M
The official list is 
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

Thanks,
Kevin

From: Amrith Kumar [amr...@tesora.com]
Sent: Friday, October 09, 2015 3:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Different OpenStack components

A google search produced this as result #2.

http://governance.openstack.org/reference/projects/index.html

Looks pretty complete to me.

-amrith

--
Amrith Kumar, CTO   | amr...@tesora.com
Tesora, Inc | @amrithkumar
125 CambridgePark Drive, Suite 400  | http://www.tesora.com
Cambridge, MA. 02140|



From: Abhishek Talwar [mailto:abhishek.tal...@tcs.com]
Sent: Friday, October 09, 2015 3:46 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] Different OpenStack components

Hi Folks,

I have been working with OpenStack for a while now. I know that other than the
main components (nova, neutron, glance, cinder, horizon, tempest, keystone, etc.)
there are many more components in OpenStack (like Sahara, Trove).

So, where can I see the list of all existing OpenStack components, and is there
any documentation for these components so that I can read about what roles
these components play?

Thanks and Regards
Abhishek Talwar

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Ian Wells
On 9 October 2015 at 18:29, Clint Byrum  wrote:

> Instead of having the scheduler do all of the compute node inspection
> and querying though, you have the nodes push their stats into something
> like Zookeeper or consul, and then have schedulers watch those stats
> for changes to keep their in-memory version of the data up to date. So
> when you bring a new one online, you don't have to query all the nodes,
> you just scrape the data store, which all of these stores (etcd, consul,
> ZK) are built to support atomically querying and watching at the same
> time, so you can have a reasonable expectation of correctness.
>
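
As a concrete sketch of the push-and-watch pattern described above (assuming
the kazoo ZooKeeper client; the paths and stats fields are made up, and this
is not existing Nova code):

    import json
    from kazoo.client import KazooClient

    zk = KazooClient(hosts='zk1:2181,zk2:2181,zk3:2181')
    zk.start()
    zk.ensure_path('/compute_nodes')

    # Compute node side: publish current stats under an ephemeral znode,
    # which disappears automatically if the host's session dies.
    def publish_stats(host, stats):
        path = '/compute_nodes/%s' % host
        data = json.dumps(stats).encode('utf-8')
        if zk.exists(path):
            zk.set(path, data)
        else:
            zk.create(path, data, ephemeral=True)

    # Scheduler side: keep an in-memory view of the cloud current via watches.
    cloud_state = {}

    def _watch_host(host):
        def updated(data, stat):
            if data is not None:
                cloud_state[host] = json.loads(data.decode('utf-8'))
        zk.DataWatch('/compute_nodes/%s' % host, updated)

    @zk.ChildrenWatch('/compute_nodes')
    def hosts_changed(hosts):
        for host in hosts:
            if host not in cloud_state:
                cloud_state[host] = {}
                _watch_host(host)

    publish_stats('compute-1', {'free_ram_mb': 2048, 'free_disk_gb': 100})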

We have to be careful about our definition of 'correctness' here.  In
practice, the data is never going to be perfect because compute hosts
update periodically and the information is therefore always dated.  With
ZK, it's going to be strictly consistent with regard to the updates from
the compute hosts, but again that doesn't really matter too much because
the scheduler is going to have to make a best effort job with a mixed bag
of information anyway.

In fact, putting ZK in the middle basically means that your compute hosts
now synchronously update a majority of nodes in a minimum 3 node quorum -
not the fastest form of update - and then the quorum will see to notifying
the schedulers.  In practice this is just a store-and-fanout again. Once
more it's not clear to me whether the store serves much use, and as for the
fanout, I wonder if we'll need >>3 schedulers running so that this is
reducing communication overhead.

> Even if you figured out how to make the in-memory scheduler crazy fast,
> There's still value in concurrency for other reasons. No matter how
> fast you make the scheduler, you'll be slave to the response time of
> a single scheduling request. If you take 1ms to schedule each node
> (including just reading the request and pushing out your scheduling
> result!) you will never achieve greater than 1000/s. 1ms is way lower
> than it's going to take just to shove a tiny message into RabbitMQ or
> even 0mq. So I'm pretty sure this is o-k for small clouds, but would be
> a disaster for a large, busy cloud.
>

Per before, my suggestion was that every scheduler tries to maintain a copy
of the cloud's state in memory (in much the same way, per the previous
example, as every router on the internet tries to make a route table out of
what it learns from BGP).  They don't have to be perfect.  They don't have
to be in sync.  As long as there's some variability in the decision making,
they don't have to update when another scheduler schedules something (and
you can make the compute node send an immediate update when a new VM is
run, anyway).  They all stand a good chance of scheduling VMs well
simultaneously.

If, however, you can have 20 schedulers that all take 10ms on average,
> and have the occasional lock contention for a resource counter resulting
> in 100ms, now you're at 2000/s minus the lock contention rate. This
> strategy would scale better with the number of compute nodes, since
> more nodes means more distinct locks, so you can scale out the number
> of running servers separate from the number of scheduling requests.
>

If you have 20 schedulers that take 1ms on average, and there's absolutely
no lock contention, then you're at 20,000/s.  (Unfair, granted, since what
I'm suggesting is more likely to make rejected scheduling decisions, but
they could be rare.)

But to be fair, we're throwing made up numbers around at this point.  Maybe
it's time to work out how to test this for scale in a harness - which is
the bit of work we all really need to do this properly, or there's no proof
we've actually helped - and leave people to code their ideas up?
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Suggestions for handling new panels and refactors in the future

2015-10-09 Thread Tripp, Travis S
Hi Doug!

I think this is a great discussion topic and you summarize your points very
nicely!

 I wish you’d responded to this thread, though:  
https://openstack.nimeyo.com/58582/openstack-dev-horizon-patterns-for-angular-panels,
 because it is talking about the same problem. This is option 3 I mentioned 
there and I do think this is still a viable option to consider, but we should 
discuss all the options.

Please consider that thread as my initial response to your email… and let’s 
keep discussing!

Thanks,
Travis

From: Douglas Fish
Reply-To: OpenStack List
Date: Friday, October 9, 2015 at 8:42 AM
To: OpenStack List
Subject: [openstack-dev] [Horizon] Suggestions for handling new panels and 
refactors in the future

I have two suggestions for handling both new panels and refactoring existing 
panels that I think could benefit us in the future:
1) When we are creating a panel that's a major refactor of an existing one, it
should be a new separate panel, not a direct code replacement of the existing 
panel
2) New panels (include the refactors of existing panels) should be developed in 
an out of tree gerrit repository.

Why make refactors a separate panel?

I was taken a bit off guard after we merged the Network Topology->Curvature 
improvement: this was a surprise to some people outside of the Horizon 
community (though it had been discussed within Horizon for as long as I've been 
on the project). In retrospect, I think it would have been better to keep both 
the old Network Topology and new curvature based topology in our Horizon 
codebase. Doing so would have allowed operators to perform A-B/ Red-Black 
testing if they weren't immediately convinced of the awesomeness of the panel. 
It also would have allowed anyone with a customization of the Network Topology 
panel to have some time to configure their Horizon instance to continue to use 
the Legacy panel while they updated their customization to work with the new 
panel.

Perhaps we should treat panels more like an API element and take them through a 
deprecation cycle before removing them completely. Giving time for customizers 
to update their code is going to be especially important as we build angular 
replacements for python panels. While we have much better plugin support for 
angular there is still a learning curve for those developers.

Why build refactors and new panels out of tree?

First off, it appears to me that trying to build new panels in tree has been fairly 
painful. I've seen big, long-lived patches pushed along without being merged. 
It's quite acceptable and expected to quickly merge half-complete patches into 
a brand new repository - but you can't behave that way working in tree in 
Horizon. Horizon needs to be kept production/operator ready. External 
repositories do not. Merging code quickly can ease collaboration and avoid this 
kind of long lived patch set.

Secondly, keeping new panels/plugins in a separate repository decentralizes 
decisions about which panels are "ready" and which aren't. If one group feels a 
plugin is "ready" they can make it their default version of the panel, and 
perhaps put resources toward translating it. If we develop these panels in-tree 
we need to make a common decision about what "ready" means - and once it's in, 
everyone who wants a translated Horizon will need to translate it.

Finally, I believe developing new panels out of tree will help improve our 
plugin support in Horizon. It's this whole "eating your own dog food" idea. As 
soon as we start using our own Horizon plugin mechanism for our own development 
we are going to become aware of its shortcomings (like quotas) and will be 
sufficiently motivated to fix them.
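
As a purely hypothetical illustration of the out-of-tree route: a deployer would
wire such a panel in with a small "enabled" file using Horizon's pluggable
settings, and could switch back to the legacy panel just by removing that file.
The module and panel names below are invented:

# openstack_dashboard/local/enabled/_3100_network_topology_ng.py
# Hypothetical: register a refactored topology panel shipped by an
# out-of-tree package, while the legacy in-tree panel stays available.

PANEL = 'network_topology_ng'
PANEL_DASHBOARD = 'project'
PANEL_GROUP = 'network'

# Python path of the Panel class provided by the external repository.
ADD_PANEL = 'topology_ng_ui.panel.NetworkTopologyNG'

# Let Django find the plugin's templates and static files.
ADD_INSTALLED_APPS = ['topology_ng_ui']

# Angular pieces, if the refactored panel ships any.
ADD_ANGULAR_MODULES = ['horizon.dashboard.project.topology_ng']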

Looking forward to further discussion and other ideas on this!

Doug Fish

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-10-09 Thread 王华
Thanks everyone!
It is my pleasure to be a magnum core reviewer. Let's make magnum better
together.

Thanks
Wanghua

On Wed, Oct 7, 2015 at 1:19 AM, Vilobh Meshram <
vilobhmeshram.openst...@gmail.com> wrote:

> Thanks everyone!
>
> I really appreciate this. Happy to join Magnum-Core  :)
>
> We have a great team, very diverse and very dedicated. It's a pleasure to
> work with all of you.
>
> Thanks,
> Vilobh
>
> On Mon, Oct 5, 2015 at 5:26 PM, Adrian Otto 
> wrote:
>
>> Team,
>>
>> In accordance with our consensus and the current date/time, I hereby
>> welcome Vilobh and Hua as new core reviewers, and have added them to the
>> magnum-core group. I will announce this addition at tomorrow’s team meeting
>> at our new time of 1600 UTC (no more alternating schedule, remember?).
>>
>> Thanks,
>>
>> Adrian
>>
>> On Oct 1, 2015, at 7:33 PM, Jay Lau  wrote:
>>
>> +1 for both! Welcome!
>>
>> On Thu, Oct 1, 2015 at 7:07 AM, Hongbin Lu  wrote:
>>
>>> +1 for both. Welcome!
>>>
>>>
>>>
>>> *From:* Davanum Srinivas [mailto:dava...@gmail.com]
>>> *Sent:* September-30-15 7:00 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>> *Subject:* Re: [openstack-dev] [magnum] New Core Reviewers
>>>
>>>
>>>
>>> +1 from me for both Vilobh and Hua.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Dims
>>>
>>>
>>>
>>> On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto 
>>> wrote:
>>>
>>> Core Reviewers,
>>>
>>> I propose the following additions to magnum-core:
>>>
>>> +Vilobh Meshram (vilobhmm)
>>> +Hua Wang (humble00)
>>>
>>> Please respond with +1 to agree or -1 to veto. This will be decided by
>>> either a simple majority of existing core reviewers, or by lazy consensus
>>> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>>>
>>> Thanks,
>>>
>>> Adrian Otto
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Davanum Srinivas :: https://twitter.com/dims
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> 
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Thanks,
>>
>> Jay Lau (Guangya Liu)
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Nominate Svetlana Karslioglu for fuel-docs core

2015-10-09 Thread Irina Povolotskaya
Hi all,

Svetlana is doing great work and I hope
our Fuel documentation will become even better;
+1 from me


-- 
Best regards,

Irina

*Business Analyst*
*unloc...@mirantis.com *
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 9 Oct 2015

2015-10-09 Thread Lana Brindley

Hi everyone,

We're hurtling head first to Liberty, with only about a week left to go. I've 
been working on Summit preparations, and making sure we're ready to go when the 
time hits. This week, I'm working on Release Notes, keeping an eye on the final 
Install Guide testing, and some other housekeeping tasks.

This is my last docs newsletter before the Liberty release, although I'll 
endeavour to get a special Summit edition out as well. We will pick up regular 
meetings (and other tasks, like core team reviews) after the Summit.

== Progress towards Liberty ==

5 days to go!

601 bugs closed so far for this release.

The main things that still need to be done before release:
- Testing, testing, testing: 
https://wiki.openstack.org/wiki/Documentation/LibertyDocTesting
- Reviews: 
https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals,n,z
 and https://review.openstack.org/#/q/status:open+project:openstack/api-site,n,z
- Bug triage: 
https://bugs.launchpad.net/openstack-manuals/+bugs?search=Search=New

== Release Notes ==

Can all PTLs and Cross-Project Liaisons please check the Release Notes page, 
and ensure they've included all the new features for their project here: 
https://wiki.openstack.org/wiki/ReleaseNotes/Liberty 

The best way to do this (I'm told) is to go through the blueprints created for 
Liberty. I will be editing these during the next few days, so please ensure 
you're up to date ASAP. Please contact me directly if you have any questions or 
problems.

== Mitaka Summit Prep ==

Thank you to everyone who provided suggestions for Design Summit sessions. I've 
now mangled them into a draft schedule, which is available on Sched: 
http://mitakadesignsummit.sched.org/type/Documentation

== Doc team meeting ==

The US meeting was held this week. The minutes are here: 
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2015-10-07

The next meetings will be the final meetings before the Liberty release:
APAC: Wednesday 14 October, 00:30:00 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

See you in Tokyo! (東京でお会いしましょう)

Lana

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Cory Benfield
Robert Collins  writes:
> The problem that occurs is the result of a few interacting things:
>  - requests has very very specific versions of urllib3 it works with.
> So specific they aren't always released yet.

This should no longer be true. Our downstream redistributors pointed out to us
that this  was making their lives harder than they needed to be, so it's now
our policy to only  update to actual release versions of urllib3.
 
> The second is trivially insufficient - anytime requests vendored
> urllib3 is not precisely identical to a released urllib3, it becomes
> impossible to satisfy that via dependency version pinning - the only
> way to satisfy it is with the urllib3 in the distro that has whatever
> change was needed included.

Per my note above, if we restrict ourselves to relatively recent versions of
requests  (2.7.3+ IIRC) we should be fine. Of course, that doesn't mean we can
actually do that...

> The fourth approach meets the stone wall of 'but security' and 'no
> redundancy permitted' - I don't have the energy to try and get through
> the near-religious mindset I've encountered there before, though hey -
> if Fedora and Debian and Ubuntu folk are all interested in figuring
> out a sustainable way forward, that would be great: please don't feel
> cut out, I'm just not expecting anything.

It should be assumed that approach number four is a non-starter. This list has
had that  conversation before, which was a stunningly unpleasant experience for
me and not one I  want to repeat. Additionally, getting *all* of
Fedora/Debian/Ubuntu on board with not unbundling requests is about as likely
as hell freezing over.

Cory


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All][Glance] Feedback on the proposed refactor to the image import process required

2015-10-09 Thread Flavio Percoco

Greetings,

There was recently a discussion[0] on the mailing list, started by Doug
Hellmann, about some issues related to Glance's API, the conflicts
between v1 and v2, and how this is making some pandas sad.

The above served as a starting point for a discussion around the
current API, how it can be improved, etc. These discussions happened on
IRC[1], on a call (sorry, I forgot to record this call, this is entirely
my fault) and on an etherpad[2]. Later on, Brian Rosmaita summarized
all this in a document[3], which became a spec[4]. :D

The spec is the central point of discussion now and it contains a more
structured, more organized and more concrete proposal that needs to be
discussed. Nevertheless, I believe there's still a lot to do there, and I
also believe - I'm sure others do as well - this spec could use
opinions from a broader audience. Therefore, I'd really appreciate
your opinion on this thread.

This will also be discussed at the summit[5] in a fishbowl session and
I hope to see you all there as well.

I'd like to thank everyone that has participated in this discussion so
far and I hope to see others chime in as well.

Flavio

[0] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074360.html
[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-22.log.html#t2015-09-22T14:31:00
[2] https://etherpad.openstack.org/p/glance-upload-mechanism-reloaded
[3] 
https://docs.google.com/document/d/1_mQZlUN_AtqhH6qh3ANz-m1zCOYkp1GyxndLtYMFRb0
[4] https://review.openstack.org/#/c/232371/
[5] http://mitakadesignsummit.sched.org/event/398b1f44af7a4ae3dde9cb47d4d52d9a

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Nominate Svetlana Karslioglu for fuel-docs core

2015-10-09 Thread Olga Gusarenko
+1
Svetlana, thank you for your contribution, and looking forward to working
with you on the Fuel documentation improvement!

Best,
Olga

On Thu, Oct 8, 2015 at 11:13 AM, Alexander Adamov 
wrote:

> +1 to Svetlana's nomination.
>
> On Tue, Sep 29, 2015 at 4:58 AM, Dmitry Borodaenko <
> dborodae...@mirantis.com> wrote:
>
>> I'd like to nominate Svetlana Karslioglu as a core reviewer for the
>> fuel-docs-core team. During the last few months, Svetlana restructured
>> the Fuel QuickStart Guide, fixed a few documentation bugs for Fuel 7.0,
>> and improved the quality of the Fuel documentation through reviews.
>>
>> I believe it's time to grant her core reviewer rights in the fuel-docs
>> repository.
>>
>> Svetlana's contribution to fuel-docs:
>>
>> http://stackalytics.com/?user_id=skarslioglu=all_type=all=fuel-docs
>>
>> Core reviewer approval process definition:
>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>
>> --
>> Dmitry Borodaenko
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Olga

Technical Writer
skype: gusarenko.olga
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Scheduler proposal

2015-10-09 Thread Chris Friesen

On 10/08/2015 01:37 AM, Clint Byrum wrote:

Excerpts from Maish Saidel-Keesing's message of 2015-10-08 00:14:55 -0700:

Forgive the top-post.

Cross-posting to openstack-operators for their feedback as well.

Ed the work seems very promising, and I am interested to see how this
evolves.

With my operator hat on I have one piece of feedback.

By adding in a new Database solution (Cassandra) we are now up to three
different database solutions in use in OpenStack

MySQL (practically everything)
MongoDB (Ceilometer)
Cassandra.

Not to mention two different message queues
Kafka (Monasca)
RabbitMQ (everything else)

Operational overhead has a cost - maintaining 3 different database
tools, backing them up, providing HA, etc. has operational cost.

This is not to say that this cannot be overseen, but it should be taken
into consideration.

And *if* they can be consolidated into an agreed solution across the
whole of OpenStack - that would be highly beneficial (IMHO).



Just because they both say they're databases, doesn't mean they're even
remotely similar.


True, but the fact remains that it means operators (and developers) would have 
to become familiar with the quirks and problems of yet another piece of technology.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-09 Thread Dmitry Tantsur

On 10/08/2015 11:47 PM, Jim Rollenhagen wrote:

Hi all,

I've been thinking a lot about Ironic's core reviewer team and how we might
make it better.

I'd like to grow the team more through trust and mentoring. We should be
able to promote someone to core based on a good knowledge of *some* of
the code base, and trust them not to +2 things they don't know about. I'd
also like to build a culture of mentoring non-cores on how to review, in
preparation for adding them to the team. Through these pieces, I'm hoping
we can have a few rounds of core additions this cycle.

With that said...

I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
have been super high quality, and the quantity is ever-increasing. He's
also started helping out with some smaller efforts (full tempest, for
example), and I'd love to see that continue with larger efforts.


+2



I'd also like to nominate John Villalovos (jlvillal). John has been
reviewing a ton of code and making a real effort to learn everything,
and keep track of everything going on in the project.


+2



Ironic cores, please reply with your vote; provided feedback is positive,
I'd like to make this official next week sometime. Thanks!

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We should move strutils.mask_password back into oslo-incubator

2015-10-09 Thread Paul Carlton


On 08/10/15 16:49, Doug Hellmann wrote:

Excerpts from Matt Riedemann's message of 2015-10-07 14:38:07 -0500:

Here's why:

https://review.openstack.org/#/c/220622/

That's marked as fixing an OSSA which means we'll have to backport the
fix in nova but it depends on a change to strutils.mask_password in
oslo.utils, which required a release and a minimum version bump in
global-requirements.

To backport the change in nova, we either have to:

1. Copy mask_password out of oslo.utils and add it to nova in the
backport or,

2. Backport the oslo.utils change to a stable branch, release it as a
patch release, bump minimum required version in stable g-r and then
backport the nova change and depend on the backported oslo.utils stable
release - which also makes it a dependent library version bump for any
packagers/distros that have already frozen libraries for their stable
releases, which is kind of not fun.

Bug fix releases do not generally require a minimum version bump. The
API hasn't changed, and there's nothing new in the library in this case,
so it's a documentation issue to ensure that users update to the new
release. All we should need to do is backport the fix to the appropriate
branch of oslo.utils and release a new version from that branch that is
compatible with the same branch of nova.

Doug


So I'm thinking this is one of those things that should ultimately live
in oslo-incubator so it can live in the respective projects. If
mask_password were in oslo-incubator, we'd have just fixed and
backported it there and then synced to nova on master and stable
branches, no dependent library version bumps required.

Plus I miss the good old days of reviewing oslo-incubator
syncs...(joking of course).


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
I've been following this discussion; is there now a consensus on the way 
forward?


My understanding is that Doug is suggesting backporting my oslo.utils 
change to the stable Juno and Kilo branches?


--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Mobile:+44 (0)7768 994283
Email:mailto:paul.carlt...@hpe.com
Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks RG12 1HN 
Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may be 
legally privileged. If you have received this message in error, you should delete it from 
your system immediately and advise the sender. To any recipient of this message within 
HP, unless otherwise stated you should consider this message and attachments as "HP 
CONFIDENTIAL".


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Metadata via dhcp namespace not working for icehouse release

2015-10-09 Thread Pradeep kumar
Hii Guys,
I am trying to run metadata via the DHCP namespace by following the blog post
mentioned below:

http://techbackground.blogspot.in/2013/06/metadata-via-dhcp-namespace.html

The debugging commands mentioned in the above link give exactly the same output for my
setup. Also, I am able to ping the metadata server IP 169.254.169.254. But when
running the command below from the VM I get

curl http://169.254.169.254

curl: (7) Failed to connect to 169.254.169.254 port 80: Connection timed out

and

curl http://169.254.169.254:53
gives empty response
and
from the controller node, if I run the command below, I get
curl http://169.254.169.254:8775
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
On controller node:
netstat -nap | grep 8775 gives
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      13619/python

*Please suggest some pointers; I have been stuck at the same point for the last 10
days.*

Regards
Pradeep Kumar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][horizon] adding Doug Fish to horizon stable-maint

2015-10-09 Thread Matthias Runge
On 01/10/15 11:21, Matthias Runge wrote:
> Hello,
> 
> I would like to propose to add
> 
> Doug Fish (doug-fish)
> 
> to horizon-stable-maint team.
> 
> I'd volunteer and introduce him to stable branch policy.
A week has passed, no negative votes.

Doug, I'll request to add you to horizon-stable-maint, apparently I
can't do that myself.

Matthias


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Metadata via dhcp namespace not working for icehouse release

2015-10-09 Thread Rossella Sblendido


Hi Pradeep,

see inline please...

On 10/09/2015 09:00 AM, Pradeep kumar wrote:


Hii Guys,


and girls :)


I am trying to run Metadata via the DHCP namespace by following blog
post mentioned below:

http://techbackground.blogspot.in/2013/06/metadata-via-dhcp-namespace.html

Commands mentioned on above link for debugging o/p is exactly same for my
setup. Also i am able to ping 169.254.169.254 metadata server ip. But
while running below command from VM i get

curl http://169.254.169.254

curl: (7) Failed to connect to 169.254.169.254 port 80: Connection timed out

and

curl http://169.254.169.254:53
gives empty response
&
from controller node if i run curl http://169.254.169.254 i get
curl http://169.254.169.254:8775
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
On controller node:
netstat -nap | grep 8775 gives
tcp0  0 0.0.0.0:8775 
0.0.0.0:*   LISTEN  13619/python

*Please suggest some pointers i am stuck at the same point from last 10
days.*


Can you try using another image to create a VM? The image you are using 
might not support DHCP option 121...


cheers,

Rossella



Regards
Pradeep Kumar



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] review priorities etherpad

2015-10-09 Thread Dan Prince
On Thu, 2015-10-08 at 09:17 -0400, James Slagle wrote:
> At the TripleO meething this week, we talked about using an etherpad
> to help get some organization around reviews for the high priority
> themes in progress.
> 
> I started on one: 
> https://etherpad.openstack.org/p/tripleo-review-priorities

Nice. Thanks.

> 
> And I subjectively added a few things :). Feel free to add more
> stuff.
> Personally, I like seeing it organized by "feature" or theme instead
> of git repo, but we can work out whatever seems best.

Agree. For some things it really helps to see things grouped by feature
in an etherpad.

Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Shamail Tahir
Well said!

On Fri, Oct 9, 2015 at 5:00 PM, Sean Dague  wrote:

> On 10/09/2015 02:52 PM, Jonathan D. Proulx wrote:
> > On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
> > :On 10/09/2015 01:39 PM, David Stanek wrote:
> > :>
> > :>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx  > :>> wrote:
> > :>As an operator I'd be happy to use SRV records to define endpoints,
> > :>though multiple regions could make that messy.
> > :>
> > :>would we make subdomains per region or include region name in the
> > :>service name?
> > :>
> > :>_compute-regionone._tcp.example.com 
> > :>-vs-
> > :>_compute._tcp.regionone.example.com
> > :>
> > :>Also not all operators can control their DNS to this level so it
> > :>couldn't be the only option.
> > :
> > :SO - XMPP does this. The way it works is that if your XMPP provider
> > :has put the appropriate records in DNS, then everything Just Works. If
> > :not, then you, as a consumer, have several pieces of information you
> > :need to provide by hand.
> > :
> > :Of course, there are already several pieces of information you have
> > :to provide by hand to connect to OpenStack, so needing to download a
> > :manifest file or something like that to talk to a cloud in an
> > :environment where the people running a cloud do not have the ability
> > :to add information to DNS (boggles) shouldn't be that terrible.
> >
> > yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
> > of local config options is manageable. A cloud with X endpoints and Y
> > regions is significantly more.
> >
> > Not to say this couldn't be done by packing more stuff into the openrc
> > or equivelent so users don't need to directly enter all that, but that
> > would be a significant change and one I think would be more difficult
> > for smaller operations.
> >
> > :One could also imagine an in-between option where OpenStack could run
> > :an _optional_ DNS for this purpose - and then the only 'by-hand'
> > :you'd need for clouds with no real DNS is the location of the
> > :discover DNS.
> >
> > Yes a special purpose DNS (a la dnsbl) might be preferable to
> > pushing around static configs.
>
> I do realize lots of people want to go in much more radical directions
> here. I think we have to be really careful about that. The current
> cinder v1 -> v2 transition challenges demonstrate how much inertia there
> is. 3 years of talking about a Tasks API is another instance of it.

Yep... very valid point.

>
>
> We aren't starting with a blank slate. This is brownfield development.
> There are enough users of this that shifts need to be made in
> careful steps that enable a new thing similar enough to the old thing
> that people will easily be able to take advantage of it. Which means I
> think deciding to jump off the REST bandwagon for this is currently a
> bridge too far. At least to get anything tangible done in the next 6 to
> 12 months.
>
++ but I think it does make sense to take possible future design
considerations into account. For example, we shouldn't abandon REST (for
the points you have raised) but if there is interest in possibly using DNS
in the future then we should try to make design choices today that would
allow for that direction in the future.  To further the compatibility
conversation, if/when we do decide to add DNS... we will still need to
support REST for an indefinite amount of time to let people choose their
desired mode of operation over a time window that should be (for the most
part) in their control due to their own pace of adopting changes.

>
> I think getting us a service catalog served over REST that doesn't
> require auth, and doesn't require tenant_ids in urls, gets us someplace
> we could figure out a DNS representation (for those that wanted that).
> But we have to tick / tock this and not change transports and
> representations at the same time.
>
> And, as I've definitely discovered through this process the Service
> Catalog today has been fluid enough that where it is used, and what
> people rely on in it, isn't always clear all at once. For instance,
> tenant_ids in urls are very surface features in Nova (we don't rely on
> it, we're using the context), don't exist at all in most new services,
> and are very corely embedded in Swift. This is part of what has also
> required that the service catalog be embedded in the Token, which causes token
> bloat, and has led to other features to try to shrink the catalog by
> filtering it by what a user is allowed. Which in turn ended up being
> used by Horizon to populate the feature matrix users see.
>
++

> So we're pulling on a thread, and we have to do that really carefully.
>
> I think the important thing is to focus on what we have in 6 months
> doesn't break current users / applications, and is incrementally closer

Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Joshua Harlow

Gregory Haynes wrote:

Excerpts from Joshua Harlow's message of 2015-10-08 15:24:18 +:

On this point, and just thinking out loud. If we consider saving
compute_node information into say a node in said DLM backend (for
example a znode in zookeeper[1]); this information would be updated
periodically by that compute_node *itself* (it would say contain
information about what VMs are running on it, what their utilization is
and so-on).

For example the following layout could be used:

/nova/compute_nodes/

  data could be:

{
 vms: [],
 memory_free: XYZ,
 cpu_usage: ABC,
 memory_used: MNO,
 ...
}

Now if we imagine each/all schedulers having watches
on /nova/compute_nodes/ ([2] consul and etc.d have equivalent concepts
afaik) then when a compute_node updates that information a push
notification (the watch being triggered) will be sent to the
scheduler(s) and the scheduler(s) could then update a local in-memory
cache of the data about all the hypervisors that can be selected from
for scheduling. This avoids any reading of a large set of data in the
first place (besides an initial read-once on startup to read the
initial list + setup the watches); in a way its similar to push
notifications. Then when scheduling a VM ->  hypervisor there isn't any
need to query anything but the local in-memory representation that the
scheduler is maintaining (and updating as watches are triggered)...

So this is why I was wondering about what capabilities of cassandra are
being used here; because the above I think are unique capabilities of
DLM-like systems (zookeeper, consul, etcd) that could be advantageous
here...

[1]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes

[2]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches


I wonder if we would even need to make something so specialized to get
this kind of local caching. I don't know what the current ZK tools are
but the original Chubby paper described that clients always have a
write-through cache for nodes which they set up subscriptions for in
order to break the cache.


Perhaps not, make it as simple as we want as long as people agree that 
the concept is useful. My idea is it would look something like this:


(simplified obviously):

http://paste.openstack.org/show/475938/

Then resources (in this example compute_nodes) would register themselves 
via a call like:


>>> from kazoo import client
>>> import json
>>> c = client.KazooClient()
>>> c.start()
>>> n = "/node/compute_nodes"
>>> c.ensure_path(n)
>>> c.create("%s/h1.hypervisor.yahoo.com" % n, json.dumps({}))

^^^ the dictionary above would be whatever data to then put into the 
receivers caches...


Then in the pasted program (running in a different shell/computer/...) 
the cache would then get updated, and then a user of that cache can use 
it to find resources to schedule things to
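
In case the paste expires, the watcher side can be sketched roughly like this 
with kazoo's ChildrenWatch/DataWatch helpers (the cache layout here is just 
illustrative, not necessarily what the paste does):

import json

from kazoo.client import KazooClient

cache = {}       # hypervisor name -> last reported data
watched = set()  # names we already registered a DataWatch for

client = KazooClient()
client.start()
client.ensure_path("/node/compute_nodes")

def watch_node(name):
    # DataWatch fires on every update to the znode, keeping our copy fresh.
    @client.DataWatch("/node/compute_nodes/%s" % name)
    def _updated(data, stat):
        if data is not None:
            cache[name] = json.loads(data)

@client.ChildrenWatch("/node/compute_nodes")
def _membership_changed(children):
    # Fired when compute nodes register themselves or go away.
    for name in children:
        if name not in watched:
            watched.add(name)
            watch_node(name)
    for gone in set(cache) - set(children):
        cache.pop(gone, None)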


The example should work, just get zookeeper setup:

http://packages.ubuntu.com/precise/zookeeperd should do all of that, and 
then try it out...




Also, re: etcd - The last time I checked their subscription API was
woefully inadequate for performing this type of thing without herding
issues.


Any idea on the consul watch capabilities?

Similar API(s) appear to exist (but I don't know how they work, if they 
do at all); https://www.consul.io/docs/agent/watches.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Different OpenStack components

2015-10-09 Thread Amrith Kumar
A Google search produced this as result #2.

http://governance.openstack.org/reference/projects/index.html

Looks pretty complete to me.

-amrith

--
Amrith Kumar, CTO   | amr...@tesora.com
Tesora, Inc | @amrithkumar
125 CambridgePark Drive, Suite 400  | http://www.tesora.com
Cambridge, MA. 02140|



From: Abhishek Talwar [mailto:abhishek.tal...@tcs.com]
Sent: Friday, October 09, 2015 3:46 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] Different OpenStack components

Hi Folks,

I have been working with OpenStack for a while now, and I know that other than the 
main components (nova, neutron, glance, cinder, horizon, tempest, keystone, etc.) 
there are many more components in OpenStack (like Sahara and Trove).

So, where can I see the list of all existing OpenStack components, and is there 
any documentation for these components so that I can read about the roles these 
components play?

Thanks and Regards
Abhishek Talwar

=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Everett Toews
On Oct 9, 2015, at 9:39 AM, Sean Dague  wrote:
> 
> It looks like some great conversation got going on the service catalog
> standardization spec / discussion at the last cross project meeting.
> Sorry I wasn't there to participate.
> 
> A lot of that ended up in here (which was an ether pad stevemar and I
> started working on the other day) -
> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
> 
> A couple of things that would make this more useful:
> 
> 1) if you are commenting, please (ircnick) your comments. It's not easy
> to always track down folks later if the comment was not understood.
> 
> 2) please provide link to code when explaining a point. Github supports
> the ability to very nicely link to (and highlight) a range of code by a
> stable object ref. For instance -
> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
> 
> That will make comments about X does Y, or Z can't do W, more clear
> because we'll all be looking at the same chunk of code and start to
> build more shared context here. One of the reasons this has been long
> and difficult is that we're missing a lot of that shared context between
> projects. Reassembling that by reading each other's relevant code will
> go a long way to understanding the whole picture.
> 
> 
> Lastly, I think it's pretty clear we probably need a dedicated workgroup
> meeting to keep this ball rolling, come to a reasonable plan that
> doesn't break any existing deployed code, but lets us get to a better
> world in a few cycles. annegentle, stevemar, and I have been pushing on
> that ball so far, however I'd like to know who else is willing to commit
> a chunk of time over this cycle to this. Once we know that we can try to
> figure out when a reasonable weekly meeting point would be.

It's likely you're already aware of it but see

https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

for many examples of service catalogs from both public and private OpenStack 
clouds.

Everett


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Sean Dague
On 10/09/2015 02:52 PM, Jonathan D. Proulx wrote:
> On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
> :On 10/09/2015 01:39 PM, David Stanek wrote:
> :>
> :>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx  :>> wrote:
> :>As an operator I'd be happy to use SRV records to define endpoints,
> :>though multiple regions could make that messy.
> :>
> :>would we make subdomains per region or include region name in the
> :>service name?
> :>
> :>_compute-regionone._tcp.example.com 
> :>-vs-
> :>_compute._tcp.regionone.example.com 
> :>
> :>Also not all operators can control their DNS to this level so it
> :>couldn't be the only option.
> :
> :SO - XMPP does this. The way it works is that if your XMPP provider
> :has put the appropriate records in DNS, then everything Just Works. If
> :not, then you, as a consumer, have several pieces of information you
> :need to provide by hand.
> :
> :Of course, there are already several pieces of information you have
> :to provide by hand to connect to OpenStack, so needing to download a
> :manifest file or something like that to talk to a cloud in an
> :environment where the people running a cloud do not have the ability
> :to add information to DNS (boggles) shouldn't be that terrible.
> 
> yes but XMPP requires 2 (maybe 3) SRV records so an equivalent number
> of local config options is manageable. A cloud with X endpoints and Y
> regions is significantly more.
> 
> Not to say this couldn't be done by packing more stuff into the openrc
> or equivalent so users don't need to directly enter all that, but that
> would be a significant change and one I think would be more difficult
> for smaller operations.
> 
> :One could also imagine an in-between option where OpenStack could run
> :an _optional_ DNS for this purpose - and then the only 'by-hand'
> :you'd need for clouds with no real DNS is the location of the
> :discover DNS.
> 
> Yes a special purpose DNS (a la dnsbl) might be preferable to
> pushing around static configs.

I do realize lots of people want to go in much more radical directions
here. I think we have to be really careful about that. The current
cinder v1 -> v2 transition challenges demonstrate how much inertia there
is. 3 years of talking about a Tasks API is another instance of it.

We aren't starting with a blank slate. This is brownfield development.
There are enough users of this that shifts need to be made in
careful steps that enable a new thing similar enough to the old thing
that people will easily be able to take advantage of it. Which means I
think deciding to jump off the REST bandwagon for this is currently a
bridge too far. At least to get anything tangible done in the next 6 to
12 months.

I think getting us a service catalog served over REST that doesn't
require auth, and doesn't require tenant_ids in urls, gets us someplace
we could figure out a DNS representation (for those that wanted that).
But we have to tick / tock this and not change transports and
representations at the same time.
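
Purely to illustrate what a DNS representation could eventually look like for
those who want it, here's a client-side lookup sketch. It assumes the dnspython
package and the "_<service>._tcp.<region>.<domain>" naming floated earlier in
the thread; the https scheme is also an assumption, since SRV records don't
carry one:

import dns.resolver

def discover_endpoint(service, region, domain):
    # e.g. _compute._tcp.regionone.example.com
    name = "_%s._tcp.%s.%s" % (service, region, domain)
    answers = dns.resolver.query(name, "SRV")
    # Take the lowest-priority record; a real client would also honour weights.
    best = min(answers, key=lambda rr: rr.priority)
    return "https://%s:%d" % (str(best.target).rstrip("."), best.port)

print(discover_endpoint("compute", "regionone", "example.com"))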

And, as I've definitely discovered through this process the Service
Catalog today has been fluid enough that where it is used, and what
people rely on in it, isn't always clear all at once. For instance,
tenant_ids in urls are very surface features in Nova (we don't rely on
it, we're using the context), don't exist at all in most new services,
and are very corely embedded in Swift. This is part of what has also
required that the service catalog be embedded in the Token, which causes token
bloat, and has led to other features to try to shrink the catalog by
filtering it by what a user is allowed. Which in turn ended up being
used by Horizon to populate the feature matrix users see.

So we're pulling on a thread, and we have to do that really carefully.

I think the important thing is to focus on making sure that what we have in 6 months
doesn't break current users / applications, and is incrementally closer
to our end game. That's the lens I'm going to keep putting on this one.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-09 Thread Steve Martinelli

Dave, thanks so much for organizing this recurring event. I'll definitely
be there to help squash some bugs this Friday!

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead



From:   David Stanek 
To: OpenStack Development Mailing List

Date:   2015/10/09 03:55 PM
Subject:[openstack-dev] [keystone] Let's get together and fix all the
bugs



I would like to start running a recurring bug squashing day. The general
idea is to get more focus on bugs and stability. You can find the details
here: https://etherpad.openstack.org/p/keystone-office-hours


--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Joshua Harlow

And one last reply with more code:

http://paste.openstack.org/show/475941/ (a creator of services that 
dynamically creates services, and destroys them after a set amount of 
time is included in here, along with the prior resource watcher).


Works locally, should work for u as well.

Output from example run of 'creator process'

http://paste.openstack.org/show/475942/

Output from example run of 'watcher process'

http://paste.openstack.org/show/475943/

Enjoy!

-josh

Joshua Harlow wrote:

Further example stuff,

Get kazoo installed (http://kazoo.readthedocs.org/)

Output from my local run (with no data)

$ python test.py
Kazoo client has changed to state: CONNECTED
Got data: '' for new resource /node/compute_nodes/h1.hypervisor.yahoo.com
Idling (ran for 0.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 1.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 2.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 3.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 4.00s).
Known resources:
- h1.hypervisor.yahoo.com => {}
Idling (ran for 5.00s).
Kazoo client has changed to state: LOST
Traceback (most recent call last):
File "test.py", line 72, in 
time.sleep(1.0)
KeyboardInterrupt

Joshua Harlow wrote:

Gregory Haynes wrote:

Excerpts from Joshua Harlow's message of 2015-10-08 15:24:18 +:

On this point, and just thinking out loud. If we consider saving
compute_node information into say a node in said DLM backend (for
example a znode in zookeeper[1]); this information would be updated
periodically by that compute_node *itself* (it would say contain
information about what VMs are running on it, what their utilization is
and so-on).

For example the following layout could be used:

/nova/compute_nodes/

 data could be:

{
vms: [],
memory_free: XYZ,
cpu_usage: ABC,
memory_used: MNO,
...
}

Now if we imagine each/all schedulers having watches
on /nova/compute_nodes/ ([2] consul and etc.d have equivalent concepts
afaik) then when a compute_node updates that information a push
notification (the watch being triggered) will be sent to the
scheduler(s) and the scheduler(s) could then update a local in-memory
cache of the data about all the hypervisors that can be selected from
for scheduling. This avoids any reading of a large set of data in the
first place (besides an initial read-once on startup to read the
initial list + setup the watches); in a way its similar to push
notifications. Then when scheduling a VM -> hypervisor there isn't any
need to query anything but the local in-memory representation that the
scheduler is maintaining (and updating as watches are triggered)...

So this is why I was wondering about what capabilities of cassandra are
being used here; because the above I think are unique capabilities of
DLM-like systems (zookeeper, consul, etcd) that could be advantageous
here...

[1]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes



[2]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches




I wonder if we would even need to make something so specialized to get
this kind of local caching. I don't know what the current ZK tools are
but the original Chubby paper described that clients always have a
write-through cache for nodes which they set up subscriptions for in
order to break the cache.


Perhaps not, make it as simple as we want as long as people agree that
the concept is useful. My idea is it would look something like this:

(simplified obviously):

http://paste.openstack.org/show/475938/

Then resources (in this example compute_nodes) would register themselves
via a call like:

>>> from kazoo import client
>>> import json
>>> c = client.KazooClient()
>>> c.start()
>>> n = "/node/compute_nodes"
>>> c.ensure_path(n)
>>> c.create("%s/h1.hypervisor.yahoo.com" % n, json.dumps({}))

^^^ the dictionary above would be whatever data to then put into the
receivers caches...

Then in the pasted program (running in a different shell/computer/...)
the cache would then get updated, and then a user of that cache can use
it to find resources to schedule things to

The example should work, just get zookeeper setup:

http://packages.ubuntu.com/precise/zookeeperd should do all of that, and
then try it out...



Also, re: etcd - The last time I checked their subscription API was
woefully inadequate for performing this type of thing without herding
issues.


Any idea on the consul watch capabilities?

Similar API(s) appear to exist (but I don't know how they work, if they
do at all); https://www.consul.io/docs/agent/watches.html



__


OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [app-catalog] Tokyo Summit Sessions

2015-10-09 Thread Christopher Aedo
Hello!  I wanted to send a note letting people know about the two
sessions we have planned for the Tokyo Summit coming up.  Both of them
are on Thursday, with a Fishbowl session followed by a working
session.

I'm eager to get feedback and input while we're in Tokyo.  We are
interested not only in adding content, but in making that content
easier to add and easier to consume.  To that end we've got an
excellent Horizon plugin that makes the App Catalog essentially a
native element of your OpenStack cloud.  Once we complete the first
pass of a real API for the site we will also write a plugin for the
unified client.  We have other plans and ideas along these lines, but
could really use your help in making sure we are headed in the right
direction.

During the Fishbowl we will go over the progress we've made in the
last six months, where things with the App Catalog stand today, and
what our plans are for the next cycle.  This will be highly
interactive, so if you care even a little bit about making OpenStack
better for the end users you should join us!

That session will be followed by a working session where we'll have a
chance to talk over some of the major design decisions we're making
and discuss improvements, concerns, or anything else related to the
catalog that comes up.

Status, progress and plans
http://mitakadesignsummit.sched.org/event/27bf7f9a29094cf9e96026d682db1609
Thursday in the Kotobuki room, from 1:50 to 2:30

Work session
http://mitakadesignsummit.sched.org/event/7754b46437c14cd4fdb51debebe89fb0
Thursday in the Tachibana room, from 2:40 to 3:20

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Feedback about Swift API - Especially about Large Objects

2015-10-09 Thread Pierre SOUCHAY
Hi Swift Developers,

We have been using Swift as an IaaS provider for more than two years now, but 
this mail is about feedback on the API side. I think it would be great to 
include some of the ideas in future revisions of the API.

I’ve been developing a few Swift clients: in HTML (in the Cloudwatt Dashboard) with 
CORS, in Java with a Swing GUI (https://github.com/pierresouchay/swiftbrowser), 
and in Go for Swift-to-filesystem sync (https://github.com/pierresouchay/swiftsync/), 
so I now have a few ideas about how to improve the API a bit.

The API is quite straightforward and intuitive to use, and writing a client is 
not that difficult, but unfortunately, the Large Object support is not easy at 
all to deal with.

The biggest issue is that there is no way to know whether a file is a large 
object when performing listings using the JSON format, since, AFAIK, a large 
object is an object with 0 bytes (so its size in bytes is 0), and it also has 
the hash of a zero-byte file.

For instance, the listing entry for such an object is:
 {"hash": "d41d8cd98f00b204e9800998ecf8427e", "last_modified": 
"2015-06-04T10:23:57.618760", "bytes": 0, "name": "5G", "content_type": 
"octet/stream"}

which is exactly the hash of a 0-byte file:
$ echo -n | md5
d41d8cd98f00b204e9800998ecf8427e

OK, now let's try a HEAD:
$ curl -vv -XHEAD -H X-Auth-Token:$TOKEN 
'https://storage.fr1.cloudwatt.com/v1/AUTH_61b8fe6dfd0a4ce69f6622ea7e0f/large_files/5G
…
< HTTP/1.1 200 OK
< Date: Fri, 09 Oct 2015 19:43:09 GMT
< Content-Length: 50
< Accept-Ranges: bytes
< X-Object-Manifest: large_files/5G/.part-50-
< Last-Modified: Thu, 04 Jun 2015 10:16:33 GMT
< Etag: "479517ec4767ca08ed0547dca003d116"
< X-Timestamp: 1433413437.61876
< Content-Type: octet/stream
< X-Trans-Id: txba36522b0b7743d683a5d-00561818cd

WTF? While for all regular files the listing hash and the ETag have the same value, 
this is not the case for large files…

Furthermore, the ETag is not the MD5 of the whole file, but the hash of the 
concatenated hashes of all the segment files (as described somewhere hidden deeply 
in the documentation).

Why is this a problem?
---

Imagine a « naive » client using the API which performs some kind of sync.

The client downloads each file and, when it syncs, compares the local MD5 to the 
MD5 of the listing… of course, the listing hash is the hash of a zero-byte file… so 
it downloads the file again… and again… and again. Unfortunately for our naive 
client, these are exactly the kind of files we don’t want to download twice… 
since the file is probably huge (after all, it has been split for a reason, no?)

I think this is really a design flaw, since you need to know everything about 
the Swift API and its extensions to behave properly. The minimum would be to at 
least return the same value as the ETag header in the listing.

OK, let’s continue…

We are not so naive… our Swift sync client knows that 0-byte files need more work.

* First issue: we have to know whether the file is a « real » 0-byte file or 
not. You may think most people do not create 0-byte files after all… but that 
assumption is wrong. Actually, I have seen two Object Storage middlewares using 
many 0-byte files (for instance to store metadata or to set up some kind of 
directory-like structure). So, in this case, we need to perform a HEAD request 
for each 0-byte file. If you have 1000 files like this, you have to perform 
1000 HEAD requests to finally know that there aren't any large files. Not very 
efficient. Your Swift sync client took 1 second to sync 20G of data with the naive 
approach; now you need 5 minutes… using the hash of 0 bytes is not a good idea at all.

* Second issue: since the ETag is the hash of all the parts (I have an idea about 
why this decision was made, probably for performance reasons), your client 
cannot work on whole files, since the hash of the local file is not the hash of the 
Swift aggregated file (which is the hash of all the hashes of the manifest's 
segments). So, it means you cannot work on existing data; you have to either:
 - split all the files in the same way as the manifest, compute the MD5 of each 
part, then compute the MD5 of the hashes and compare it to the MD5 on the server, 
as in the sketch below… (OK… doable, but I gave up on such a system)
 - keep a local database in your client (when you download, store the real hash 
of the file, and record that you have to compare it to the hash returned by the 
server)
 - perform some kind of crappy heuristics (size + grab the starting bytes of 
each part, or something like that…)
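
To be explicit about what that first option means in practice, here is a rough
sketch. It assumes the object was uploaded in fixed-size segments and that the
aggregated ETag is the MD5 of the concatenated segment MD5s, as the
documentation describes; the file name and segment size are just examples:

import hashlib

def aggregated_etag(path, segment_size):
    # MD5 each fixed-size segment, then MD5 the concatenated hex digests.
    segment_md5s = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(segment_size)
            if not chunk:
                break
            segment_md5s.append(hashlib.md5(chunk).hexdigest())
    return hashlib.md5("".join(segment_md5s).encode("ascii")).hexdigest()

# Compare this with the Etag from the HEAD request (quotes stripped); it only
# works if you know the exact segment size the uploader used.
print(aggregated_etag("5G", 100 * 1024 * 1024))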

* Third issue:
 - If you don’t want to store the list of parts of your object locally, you have to wait 
for all your HEAD requests to finish, since that is the only way to discover all the 
files that are referenced by your manifest headers.

To summarize, I think the current API really needs some refinements regarding the 
listings, since a competent developer may trust the bytes value and the hash 
value and create an algorithm that does not behave 

Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Neil Jerram
FWIW - and somewhat ironically given what you said just before - I couldn't 
parse your last sentence below... You might like to follow up with a corrected 
version.

(On the broad point, BTW, I really agree with you. So much OpenStack discussion 
is rendered difficult to get into by use of wrong or imprecise language.)

Regards,
 Neil


  Original Message
From: Clint Byrum
Sent: Friday, 9 October 2015 19:08
To: openstack-dev
Reply To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Scheduler proposal


Excerpts from Chris Friesen's message of 2015-10-09 10:54:36 -0700:
> On 10/09/2015 11:09 AM, Zane Bitter wrote:
>
> > The optimal way to do this would be a weighted random selection, where the
> > probability of any given host being selected is proportional to its 
> > weighting.
> > (Obviously this is limited by the accuracy of the weighting function in
> > expressing your actual preferences - and it's at least conceivable that this
> > could vary with the number of schedulers running.)
> >
> > In fact, the choice of the name 'weighting' would normally imply that it's 
> > done
> > this way; hearing that the 'weighting' is actually used as a 'score' with 
> > the
> > highest one always winning is quite surprising.
>
> If you've only got one scheduler, there's no need to get fancy, you just pick
> the "best" host based on your weighing function.
>
> It's only when you've got parallel schedulers that things get tricky.
>

Note that I think you mean _concurrent_ not _parallel_ schedulers.

Parallel schedulers would be trying to solve the same unit of work by
breaking it up into smaller components and doing them at the same time.

Concurrent means they're just doing different things at the same time.

I know this is nit-picky, but we use the wrong word _A LOT_ and the
problem space is actually vastly different, as parallelizable problems
have a whole set of optimizations and advantages that generic concurrent
problems (especially those involving mutating state!) have a whole set
of race conditions that must be managed.
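
As a side note on the weighted-selection point quoted above, a minimal sketch
of a weighted random pick (probability proportional to weight); the host names
and weights are made up for illustration:

    import random

    def pick_host(weighted_hosts):
        # Probability of picking a host is proportional to its weight.
        if not weighted_hosts:
            return None
        target = random.uniform(0, sum(weighted_hosts.values()))
        running = 0.0
        for host, weight in weighted_hosts.items():
            running += weight
            if target <= running:
                return host
        return host  # guard against floating point rounding

    print(pick_host({'node-1': 5.0, 'node-2': 3.0, 'node-3': 0.5}))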

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Feedback about Swift API - Especially about Large Objects

2015-10-09 Thread Clay Gerrard
A lot of these deficiencies are drastically improved with static large
objects - and non-trivial to address (impossible?) with DLO's because of
their dynamic nature.  It's unfortunate, but DLO's don't really serve your
use-case very well - and you should find a way to transition to SLO's [1].

We talked about improving the checksumming behavior in SLO's for the
general naive sync case back at the hack-a-thon before the Vancouver summit
- but it's tricky (MD5 => CRC) - and would probably require an API version
bump.

All we've been able to get done so far is improve the native client
handling [2] - but if using SLO's you may find a similar solution quite
manageable.

Thanks for the feedback.

-Clay

1.
http://docs-draft.openstack.org/91/219991/7/check/gate-swift-docs/75fb84c//doc/build/html/overview_large_objects.html#module-swift.common.middleware.slo
2.
https://github.com/openstack/python-swiftclient/commit/ff0b3b02f07de341fa9eb81156ac2a0565d85cd4

On Friday, October 9, 2015, Pierre SOUCHAY 
wrote:

> Hi Swift Developpers,
>
> We have been using Swift as a IAAS provider for more than two years now,
> but this mail is about feedback on the API side. I think it would be great
> to include some of the ideas in future revisions of API.
>
> I’ve been developping a few Swift clients in HTML (in Cloudwatt Dashboard)
> with CORS, Java with Swing GUI (
> https://github.com/pierresouchay/swiftbrowser) and Go for Swift to
> filesystem (https://github.com/pierresouchay/swiftsync/), so I have now a
> few ideas about how improving a bit the API.
>
> The API is quite straightforward and intuitive to use, and writing a
> client is not that difficult, but unfortunately, the Large Object support
> is not easy at all to deal with.
>
> The biggest issue is that there is now way to know whenever a file is a
> large object when performing listings using JSON format, since, AFAIK a
> large object is an object with 0 bytes (so its size in bytes is 0), but it
> also has a hash of a zero file bytes.
>
> For instance, a signature of such object is :
>  {"hash": "d41d8cd98f00b204e9800998ecf8427e", "last_modified":
> "2015-06-04T10:23:57.618760", "bytes": 0, "name": "5G", "content_type": "
> octet/stream"}
>
> which is, exactly the hash of a 0 bytes file :
> $ echo -n | md5
> d41d8cd98f00b204e9800998ecf8427e
>
> Ok, now lets try HEAD :
> $ curl -vv -XHEAD -H X-Auth-Token:$TOKEN '
> https://storage.fr1.cloudwatt.com/v1/AUTH_61b8fe6dfd0a4ce69f6622ea7e0f/large_files/5G
> …
> < HTTP/1.1 200 OK
> < Date: Fri, 09 Oct 2015 19:43:09 GMT
> < Content-Length: 50
> < Accept-Ranges: bytes
> < X-Object-Manifest: large_files/5G/.part-50-
> < Last-Modified: Thu, 04 Jun 2015 10:16:33 GMT
> < Etag: "479517ec4767ca08ed0547dca003d116"
> < X-Timestamp: 1433413437.61876
> < Content-Type: octet/stream
> < X-Trans-Id: txba36522b0b7743d683a5d-00561818cd
>
> WTF ? While all files have the same value for ETag and hash, this is not
> the case for Large files…
>
> Furthermore, the ETag is not the md5 of the whole file, but the hash of
> the hash of all manifest files (as described somewhere hidden deeply in the
> documentation)
>
> Why this is a problem ?
> ---
>
> Imagine a « naive »  client using the API which performs some kind of Sync.
>
> The client download each file and when it syncs, compares the local md5 to
> the md5 of the listing… of course, the hash is the hash of a zero bytes
> files… so it downloads the file again… and again… and again. Unfortunaly
> for our naive client, this is exactly the kind of files we don’t want to
> download twice… since the file is probably huge (after all, it has been
> split for a reason no ?)
>
> I think this is really a design flaw since you need to know everything
> about Swift API and extensions to have a proper behavior. The minimum would
> be to at least return the same value as the ETag header.
>
> OK, let’s continue…
>
> We are not so Naive… our Swift Sync client know that 0 files needs more
> work.
>
> * First issue: we have to know whenever the file is a « real » 0 bytes
> file or not. You may think most people do not create 0 bytes files after
> all… this is dummy. Actually, some I have seen two Object Storage
> middleware using many 0 bytes files (for instance to store meta data or two
> set up some kind of directory like structure). So, in this cas, we need to
> perform a HEAD request to each 0 bytes files. If you have 1000 files like
> this, you have to perform 1000 HEAD requests to finally know that there are
> not any Large file. Not very efficient. Your Swift Sync client took 1
> second to sync 20G of data with naive approach, now, you need 5 minutes…
> hash of 0 bytes is not a good idea at all.
>
> * Second issue: since the hash is the hash of all parts (I have an idea
> about why this decision was made, probably for performance reasons), your
> client cannot work on files since the hash of local file is not the hash of
> the 

Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Joshua Harlow

Further example stuff,

Get kazoo installed (http://kazoo.readthedocs.org/)

Output from my local run (with no data)

$ python test.py
Kazoo client has changed to state: CONNECTED
Got data: '' for new resource /node/compute_nodes/h1.hypervisor.yahoo.com
Idling (ran for 0.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 1.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 2.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 3.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 4.00s).
Known resources:
 - h1.hypervisor.yahoo.com => {}
Idling (ran for 5.00s).
Kazoo client has changed to state: LOST
Traceback (most recent call last):
  File "test.py", line 72, in 
time.sleep(1.0)
KeyboardInterrupt

Joshua Harlow wrote:

Gregory Haynes wrote:

Excerpts from Joshua Harlow's message of 2015-10-08 15:24:18 +:

On this point, and just thinking out loud. If we consider saving
compute_node information into say a node in said DLM backend (for
example a znode in zookeeper[1]); this information would be updated
periodically by that compute_node *itself* (it would say contain
information about what VMs are running on it, what there utilization is
and so-on).

For example the following layout could be used:

/nova/compute_nodes/

 data could be:

{
vms: [],
memory_free: XYZ,
cpu_usage: ABC,
memory_used: MNO,
...
}

Now if we imagine each/all schedulers having watches
on /nova/compute_nodes/ ([2] consul and etc.d have equivalent concepts
afaik) then when a compute_node updates that information a push
notification (the watch being triggered) will be sent to the
scheduler(s) and the scheduler(s) could then update a local in-memory
cache of the data about all the hypervisors that can be selected from
for scheduling. This avoids any reading of a large set of data in the
first place (besides an initial read-once on startup to read the
initial list + setup the watches); in a way its similar to push
notifications. Then when scheduling a VM -> hypervisor there isn't any
need to query anything but the local in-memory representation that the
scheduler is maintaining (and updating as watches are triggered)...

So this is why I was wondering about what capabilities of cassandra are
being used here; because the above I think are unique capababilties of
DLM like systems (zookeeper, consul, etcd) that could be advantageous
here...

[1]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes


[2]
https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches



I wonder if we would even need to make something so specialized to get
this kind of local caching. I dont know what the current ZK tools are
but the original Chubby paper described that clients always have a
write-through cache for nodes which they set up subscriptions for in
order to break the cache.


Perhaps not; make it as simple as we want, as long as people agree that
the concept is useful. My idea is that it would look something like:

(simplified obviously):

http://paste.openstack.org/show/475938/

Then resources (in this example compute_nodes) would register themselves
via a call like:

 >>> from kazoo import client
 >>> import json
 >>> c = client.KazooClient()
 >>> c.start()
 >>> n = "/node/compute_nodes"
 >>> c.ensure_path(n)
 >>> c.create("%s/h1.hypervisor.yahoo.com" % n, json.dumps({}))

^^^ the dictionary above would hold whatever data should then be put into
the receivers' caches...

Then in the pasted program (running in a different shell/computer/...)
the cache would get updated, and a user of that cache can then use it to
find resources to schedule things onto...
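
For reference, a rough sketch of what that watching side can look like with
kazoo's ChildrenWatch/DataWatch recipes; the pasted test.py above may differ,
so treat this as an assumption of its shape rather than a copy of it:

    import json
    import time

    from kazoo import client

    PATH = "/node/compute_nodes"
    resources = {}
    watched = set()

    c = client.KazooClient()
    c.start()
    c.ensure_path(PATH)

    def watch_node(name):
        @c.DataWatch("%s/%s" % (PATH, name))
        def _changed(data, stat):
            if data is None:
                resources.pop(name, None)  # znode was deleted
            else:
                resources[name] = json.loads(data.decode("utf-8")) if data else {}

    @c.ChildrenWatch(PATH)
    def _children_changed(children):
        for name in children:
            if name not in watched:
                watched.add(name)
                watch_node(name)

    while True:
        print("Known resources:")
        for name, info in sorted(resources.items()):
            print(" - %s => %s" % (name, info))
        time.sleep(1.0)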

The example should work, just get zookeeper setup:

http://packages.ubuntu.com/precise/zookeeperd should do all of that, and
then try it out...



Also, re: etcd - The last time I checked their subscription API was
woefully inadequate for performing this type of thing without herding
issues.


Any idea on the consul watch capabilities?

Similar API(s) appear to exist (but I don't know how they work, if they
do at all); https://www.consul.io/docs/agent/watches.html



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Auto-abandon bot

2015-10-09 Thread Ben Nemec
Hi OoOers,

As discussed in the meeting a week or two ago, we would like to bring
back the auto-abandon functionality for old, unloved gerrit reviews.
I've got a first implementation of a tool to do that:
https://github.com/cybertron/tripleo-auto-abandon

It currently follows these rules for determining what would be abandoned:

Never abandoned:
-WIP patches are never abandoned
-Approved patches are never abandoned
-Patches with no feedback are never abandoned
-Patches with negative feedback, followed by any sort of non-negative
comment are never abandoned (this is to allow committers to respond to
reviewer comments)
-Patches that get restored after a first abandonment are not abandoned
again, unless a new patch set is pushed and also receives negative feedback.

Candidates for abandonment:
-Patches with negative feedback that has not been responded to in over a
month.
-Patches that are failing CI for over a month on the same patch set
(regardless of any followup comments - the intent is that patches
expected to fail CI should be marked WIP).

My intent with this can be summed up as "when in doubt, leave it open".
 I'm open to discussion on any of the points above though.  I expect
that at least the current message for abandonment needs tweaking before
this gets run for real.
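
As a rough illustration (this is not the actual tripleo-auto-abandon code),
the "negative feedback, untouched for a month" candidates could be pulled from
Gerrit's REST API with something like the following; the query operators are
approximate and obviously don't encode all of the rules above:

    import json
    import requests

    GERRIT = "https://review.openstack.org"
    QUERY = "project:^openstack/tripleo-.* status:open age:30d label:Code-Review<=-1"

    resp = requests.get("%s/changes/" % GERRIT, params={"q": QUERY, "n": 100})
    # Gerrit prefixes JSON responses with ")]}'" to prevent XSSI; drop that line.
    changes = json.loads(resp.text.split("\n", 1)[1])
    for change in changes:
        print(change["_number"], change["subject"])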

I'm a little torn on whether this should be run under my account or a
dedicated bot account.  On the one hand, I don't really want to end up
subscribed to every dead change, but on the other this is only supposed
to run on changes that are unlikely to be resurrected, so that should
limit the review spam.

Anyway, please take a look and let me know what you think.  Thanks.

-Ben

For the curious, this is the list of patches that would currently be
abandoned by the tool:

Abandoning https://review.openstack.org/192521 - Add centos7 test
Abandoning https://review.openstack.org/168002 - Allow dib to be lauched
from venv
Abandoning https://review.openstack.org/180807 - Warn when silently
ignoring executable files
Abandoning https://review.openstack.org/91376 - RabbitMQ: VHost support
Abandoning https://review.openstack.org/112870 - Adding configuration
options for stunnel.
Abandoning https://review.openstack.org/217511 - Fix "pkg-map failed"
issue building IPA ramdisk
Abandoning https://review.openstack.org/176060 - Introduce Overcloud Log
Aggregation
Abandoning https://review.openstack.org/141380 - Add --force-yes option
for install-packages
Abandoning https://review.openstack.org/149433 - Double quote to prevent
globbing and word splitting in os-db-create
Abandoning https://review.openstack.org/204639 - Perform a booting test
for our images
Abandoning https://review.openstack.org/102304 - Configures keystone
with apache
Abandoning https://review.openstack.org/214771 - Ramdisk should consider
the size unit when inspecting the amount of RAM
Abandoning https://review.openstack.org/87223 - Install the "classic"
icinga interface
Abandoning https://review.openstack.org/89744 - configure keystone with
apache
Abandoning https://review.openstack.org/176057 - Introduce Elements for
Log Aggregation
Abandoning https://review.openstack.org/153747 - Fail job if SELinux
denials are found
Abandoning https://review.openstack.org/179229 - Document how to use
network isolation/static IPs
Abandoning https://review.openstack.org/109651 - Add explicit
configuraton parameters for DB pool size
Abandoning https://review.openstack.org/189026 - shorter sleeps if
metadata changes are detected
Abandoning https://review.openstack.org/139627 - Nothing to see here
Abandoning https://review.openstack.org/117887 - Support Debian distro
for haproxy iptables
Abandoning https://review.openstack.org/113823 - Allow single node
mariadb clusters to restart
Abandoning https://review.openstack.org/110906 - Install pkg-config to
use ceilometer-agent-compute
Abandoning https://review.openstack.org/86580 - Add support for
specifying swift ring directory range
Abandoning https://review.openstack.org/142529 - Allow enabling debug
logs at build time
Abandoning https://review.openstack.org/89742 - configure keystone with
apache
Abandoning https://review.openstack.org/138007 - add elements Memory and
Disk limit to rabbitmq
Abandoning https://review.openstack.org/130826 - Nova rule needs to be
added with add-rule for persistence
Abandoning https://review.openstack.org/177043 - Make backwards
compatible qcow2s by default
Abandoning https://review.openstack.org/165118 - Make os_net_config
package private
Abandoning https://review.openstack.org/118220 - Added a MySQL logrotate
configuration
Abandoning https://review.openstack.org/94500 - Ceilometer Service
Update/Upgrade in TripleO
Abandoning https://review.openstack.org/177559 - Dont pass xattrs to tar
if its unsupported
Abandoning https://review.openstack.org/87226 - Install check_mk server
Abandoning https://review.openstack.org/113827 - Configure haproxy logging


[openstack-dev] [Rally][Meeting][Agenda]

2015-10-09 Thread Roman Vasilets
Hi, it's a friendly reminder that if you want to discuss some topics at the
Rally meetings, please add your topic to our meeting agenda:
https://wiki.openstack.org/wiki/Meetings/Rally#Agenda. Don't forget to
specify who will lead the topic discussion, and add some information about
the topic (links, etc.). Thank you for your attention.

- Best regards, Vasilets Roman.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Gregory Haynes
Excerpts from Chris Friesen's message of 2015-10-09 19:36:03 +:
> On 10/09/2015 12:55 PM, Gregory Haynes wrote:
> 
> > There is a more generalized version of this algorithm for concurrent
> > scheduling I've seen a few times - Pick N options at random, apply
> > heuristic over that N to pick the best, attempt to schedule at your
> > choice, retry on failure. As long as you have a fast heuristic and your
> > N is sufficiently smaller than the total number of options then the
> > retries are rare-ish and cheap. It also can scale out extremely well.
> 
> If you're looking for a resource that is relatively rare (say you want a 
> particular hardware accelerator, or a very large number of CPUs, or even to 
> be 
> scheduled "near" to a specific other instance) then you may have to retry 
> quite 
> a lot.
> 
> Chris
> 

Yep. You can either be fast or correct. There is no solution which will
both scale easily and allow you to schedule onto a very precise node
efficiently; otherwise this would be a solved problem.

There is a not too bad middle ground here though - you can definitely do
some filtering beforehand efficiently (especially if you have some kind
of local cache similar to what Josh mentioned with ZK) and then this is
less of an issue. This is definitely a big step in complexity though...
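
A toy sketch of the pick-N-at-random approach quoted above (the weighing
function and the simulated claim step are placeholders for illustration):

    import random

    def weigh(host):
        # Cheap heuristic: prefer the host with the most free memory.
        return host['memory_free']

    def claim(host):
        # Stand-in for the real "attempt to schedule" step, which can fail
        # when another concurrent scheduler grabbed the capacity first.
        return random.random() > 0.1

    def schedule(hosts, sample_size=20, max_retries=5):
        for _ in range(max_retries):
            candidates = random.sample(hosts, min(sample_size, len(hosts)))
            best = max(candidates, key=weigh)
            if claim(best):
                return best
        raise RuntimeError("gave up after %d retries" % max_retries)

    hosts = [{'name': 'node-%d' % i, 'memory_free': random.randint(1, 128)}
             for i in range(1000)]
    print(schedule(hosts))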

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] REST API to return ip-margin

2015-10-09 Thread Aihua Li

> Network names are not guaranteed to be unique. This could cause problems. If
> I recall correctly we had a similar discussion about one of the plugins (One
> of the IBM ones?) where they ran into the issue of network names and
> uniqueness.
My proposal is to use network-uuid as the key, and send the network name in the 
body, as shown below.
{ "network-1-uuid": { "total-ips" : 256
   "available-ips" : count1,
   "name" : test-network,
}}
 == Aihua Edward Li == 


 On Friday, October 9, 2015 1:27 PM, Sean M. Collins  
wrote:
   

 On Fri, Oct 09, 2015 at 02:38:03PM EDT, Aihua Li wrote:
>  For this use-case, we need to return network name in the response.We also 
>have the implementation and accompanying tempest test scripts.The issue 
>1457986 is currently assigned to Mike Dorman. I am curious to see where we are 
>on this issue. Is the draft REST API ready? Can we incorporate my use-case 
>input into the considerations.

Network names are not guaranteed to be unique. This could cause problems. If I 
recall correctly we had a similar discussion about one of the plugins (One of 
the IBM ones?) where they ran into the issue of network names and uniqueness.

-- 
Sean M. Collins


  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Ian Wells
On 9 October 2015 at 12:50, Chris Friesen 
wrote:

> Has anybody looked at why 1 instance is too slow and what it would take to
>
>> make 1 scheduler instance work fast enough? This does not preclude the
>> use of
>> concurrency for finer grain tasks in the background.
>>
>
> Currently we pull data on all (!) of the compute nodes out of the database
> via a series of RPC calls, then evaluate the various filters in python code.
>

I'll say again: the database seems to me to be the problem here.  Not to
mention, you've just explained that they are in practice holding all the
data in memory in order to do the work, so the benefit we're getting here is
really an N-to-1-to-M pattern with a DB in the middle (the store-to-DB is
rather secondary, in fact), and that without incremental updates to the
receivers.

I suspect it'd be a lot quicker if each filter was a DB query.
>

That's certainly one solution, but again, unless you can tell me *why* this
information will not all fit in memory per process (when it does right
now), I'm still not clear why a database is required at all, let alone a
central one.  Even if it doesn't fit, then a local DB might be reasonable
compared to a centralised one.  The schedulers don't need to work off of
precisely the same state, they just need to make different choices to each
other, which doesn't require a that's-mine-hands-off approach; and they
aren't going to have a perfect view of the state of a distributed system
anyway, so retries are inevitable.

On a different topic, on the weighted choice: it's not 'optimal', given
this is a packing problem, so there isn't a perfect solution.  In fact,
given we're trying to balance the choice of a preferable host with the
chance that multiple schedulers make different choices, it's likely worse
than even weighting.  (Technically, I suspect we'd want to rethink whether
the weighting mechanism is actually getting us a benefit.)
-- 
Ian.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Alec Hothan (ahothan)





On 10/9/15, 6:29 PM, "Clint Byrum"  wrote:

>Excerpts from Chris Friesen's message of 2015-10-09 17:33:38 -0700:
>> On 10/09/2015 03:36 PM, Ian Wells wrote:
>> > On 9 October 2015 at 12:50, Chris Friesen > > > wrote:
>> >
>> > Has anybody looked at why 1 instance is too slow and what it would 
>> > take to
>> >
>> > make 1 scheduler instance work fast enough? This does not preclude 
>> > the
>> > use of
>> > concurrency for finer grain tasks in the background.
>> >
>> >
>> > Currently we pull data on all (!) of the compute nodes out of the 
>> > database
>> > via a series of RPC calls, then evaluate the various filters in python 
>> > code.
>> >
>> >
>> > I'll say again: the database seems to me to be the problem here.  Not to
>> > mention, you've just explained that they are in practice holding all the 
>> > data in
>> > memory in order to do the work so the benefit we're getting here is really 
>> > a
>> > N-to-1-to-M pattern with a DB in the middle (the store-to-DB is rather
>> > secondary, in fact), and that without incremental updates to the receivers.
>> 
>> I don't see any reason why you couldn't have an in-memory scheduler.
>> 
>> Currently the database serves as the persistant storage for the resource 
>> usage, 
>> so if we take it out of the picture I imagine you'd want to have some way of 
>> querying the compute nodes for their current state when the scheduler first 
>> starts up.
>> 
>> I think the current code uses the fact that objects are remotable via the 
>> conductor, so changing that to do explicit posts to a known scheduler topic 
>> would take some work.
>> 
>
>Funny enough, I think thats exactly what Josh's "just use Zookeeper"
>message is about. Except in memory, it is "in an observable storage
>location".
>
>Instead of having the scheduler do all of the compute node inspection
>and querying though, you have the nodes push their stats into something
>like Zookeeper or consul, and then have schedulers watch those stats
>for changes to keep their in-memory version of the data up to date. So
>when you bring a new one online, you don't have to query all the nodes,
>you just scrape the data store, which all of these stores (etcd, consul,
>ZK) are built to support atomically querying and watching at the same
>time, so you can have a reasonable expectation of correctness.
>
>Even if you figured out how to make the in-memory scheduler crazy fast,
>There's still value in concurrency for other reasons. No matter how
>fast you make the scheduler, you'll be slave to the response time of
>a single scheduling request. If you take 1ms to schedule each node
>(including just reading the request and pushing out your scheduling
>result!) you will never achieve greater than 1000/s. 1ms is way lower
>than it's going to take just to shove a tiny message into RabbitMQ or
>even 0mq.

That is not what I have seen, measurements that I did or done by others show 
between 5000 and 1 send *per sec* (depending on mirroring, up to 1KB msg 
size) using oslo messaging/kombu over rabbitMQ.
And this is unmodified/highly unoptimized oslo messaging code.
If you remove the oslo messaging layer, you get 25000 to 45000 msg/sec with 
kombu/rabbitMQ (which shows how inefficient is oslo messaging layer itself)


> So I'm pretty sure this is o-k for small clouds, but would be
>a disaster for a large, busy cloud.

It all depends on how many sched/sec for the "large busy cloud"...

>
>If, however, you can have 20 schedulers that all take 10ms on average,
>and have the occasional lock contention for a resource counter resulting
>in 100ms, now you're at 2000/s minus the lock contention rate. This
>strategy would scale better with the number of compute nodes, since
>more nodes means more distinct locks, so you can scale out the number
>of running servers separate from the number of scheduling requests.

How many compute nodes are we talking about max? How many scheduling per second 
is the requirement? And where are we today with the latest nova scheduler?
My point is that without these numbers we could end up under-shooting, 
over-shooting or over-engineering along with the cost of maintaining that extra 
complexity over the lifetime of openstack.

I'll just make up some numbers for the sake of this discussion:

- the latest nova scheduler can do only 100 sched/sec with 1 instance (I guess
the 10ms average you bring up may not be that unrealistic)
- the requirement is a sustained 500 sched/sec worst case with 10K nodes (that
is 5% of 10K, and today we can barely launch 100 VM/sec sustained)

Are we going to achieve 5x with just 3 instances, which is what most people
deploy? Not likely.
Is using a more elaborate distributed infra/DLM like consul/zk/etcd going to
get us to that 500 mark with 3 instances? Maybe, but it will be at the expense
of added complexity in the overall solution.
Can we instead optimize 

Re: [openstack-dev] [keystone] Let's get together and fix all the bugs

2015-10-09 Thread Adam Young

On 10/09/2015 11:04 PM, Chen, Wei D wrote:


Great idea! A core reviewer's advice is definitely important and
valuable before proposing a fix. I have always thought it would help
save us effort if we can get some agreement at some point.


Best Regards,

Dave Chen

*From:*David Stanek [mailto:dsta...@dstanek.com]
*Sent:* Saturday, October 10, 2015 3:54 AM
*To:* OpenStack Development Mailing List
*Subject:* [openstack-dev] [keystone] Let's get together and fix all 
the bugs


I would like to start running a recurring bug squashing day. The 
general idea is to get more focus on bugs and stability. You can find 
the details here: https://etherpad.openstack.org/p/keystone-office-hours



Can we start with Bug 968696?


--

David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek

www: http://dstanek.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Ceilometer, New measurements, Types

2015-10-09 Thread phot...@126.com
There are three types of meters defined in Ceilometer: Cumulative, Gauge,
and Delta.
In fact, these three types are all numeric, but I need a string type.
How could I make Ceilometer support strings?

Doc: http://docs.openstack.org/developer/ceilometer/new_meters.html


phot...@126.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] REST API to return ip-margin

2015-10-09 Thread Sean M. Collins
On Fri, Oct 09, 2015 at 02:38:03PM EDT, Aihua Li wrote:
>  For this use-case, we need to return network name in the response.We also 
> have the implementation and accompanying tempest test scripts.The issue 
> 1457986 is currently assigned to Mike Dorman. I am curious to see where we 
> are on this issue. Is the draft REST API ready? Can we incorporate my 
> use-case input into the considerations.

Network names are not guaranteed to be unique. This could cause problems. If I 
recall correctly we had a similar discussion about one of the plugins (One of 
the IBM ones?) where they ran into the issue of network names and uniqueness.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Gregory Haynes
Excerpts from Joshua Harlow's message of 2015-10-08 15:24:18 +:
> On this point, and just thinking out loud. If we consider saving
> compute_node information into say a node in said DLM backend (for
> example a znode in zookeeper[1]); this information would be updated
> periodically by that compute_node *itself* (it would say contain
> information about what VMs are running on it, what there utilization is
> and so-on).
> 
> For example the following layout could be used:
> 
> /nova/compute_nodes/
> 
>  data could be:
> 
> {
> vms: [],
> memory_free: XYZ,
> cpu_usage: ABC,
> memory_used: MNO,
> ...
> }
> 
> Now if we imagine each/all schedulers having watches
> on /nova/compute_nodes/ ([2] consul and etc.d have equivalent concepts
> afaik) then when a compute_node updates that information a push
> notification (the watch being triggered) will be sent to the
> scheduler(s) and the scheduler(s) could then update a local in-memory
> cache of the data about all the hypervisors that can be selected from
> for scheduling. This avoids any reading of a large set of data in the
> first place (besides an initial read-once on startup to read the
> initial list + setup the watches); in a way its similar to push
> notifications. Then when scheduling a VM -> hypervisor there isn't any
> need to query anything but the local in-memory representation that the
> scheduler is maintaining (and updating as watches are triggered)...
> 
> So this is why I was wondering about what capabilities of cassandra are
> being used here; because the above I think are unique capababilties of
> DLM like systems (zookeeper, consul, etcd) that could be advantageous
> here...
> 
> [1]
> https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#sc_zkDataModel_znodes
> 
> [2]
> https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkWatches

I wonder if we would even need to make something so specialized to get
this kind of local caching. I don't know what the current ZK tools are
but the original Chubby paper described that clients always have a
write-through cache for nodes which they set up subscriptions for in
order to break the cache.

Also, re: etcd - The last time I checked their subscription API was
woefully inadequate for performing this type of thing without herding
issues.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Scheduler proposal

2015-10-09 Thread Alec Hothan (ahothan)

There are several ways to make python code that deals with a lot of data 
faster, especially when it comes to operating on DB fields from SQL tables (and 
that is not limited to the nova scheduler).
Pulling data from large SQL tables and operating on them through regular python 
code (using python loops) is extremely inefficient due to the nature of the 
python interpreter. If this is what the nova scheduler code is doing today,
the good thing is that there is potentially huge room for improvement.


The scale-out approach in practice means a few instances (3 instances is
common), meaning the gain would be on the order of 3x (less than 1 order of
magnitude), but with sharply increased complexity to deal with concurrent
schedulers and potentially conflicting results (with the use of tools like ZK
or Consul...). But in essence we're basically just running the same
unoptimized code concurrently to achieve better throughput.
On the other hand optimizing something that is not very optimized to start with 
can yield a much better return than 3x, with the advantage of simplicity (one 
active scheduler, which could be backed by a standby for HA).

Python is actually one of the better languages for doing *fast* in-memory big
data processing, using open source python scientific and data analysis
libraries, as they can provide native speed through cythonized libraries and
powerful high-level abstractions to do complex filters and vectorized
operations. Not only is it fast, but it also yields much smaller code.

I have used libraries such as numpy and pandas to operate on very large data
sets (the equivalent of SQL tables with hundreds of thousands of rows), and
there is easily 2 orders of magnitude of difference when operating on that
data in memory between plain python code with loops and python code using
these libraries (that is, without any DB access).
The ordering of filters and the kind of reduction that you describe below
certainly help, but they become second order when you use pandas filters,
because those are extremely fast even for very large datasets.
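
To make that concrete, here is a minimal sketch of the kind of vectorized
filtering being described, over a made-up table of compute nodes (the column
names and thresholds are invented for illustration):

    import numpy as np
    import pandas as pd

    n = 10000
    nodes = pd.DataFrame({
        'host': ['node-%d' % i for i in range(n)],
        'memory_free': np.random.randint(0, 256 * 1024, n),  # MB
        'vcpus_free': np.random.randint(0, 64, n),
        'cpu_usage': np.random.random(n),
    })

    # "Filters": all candidates for a 4 vCPU / 8 GB request, in one pass,
    # with no explicit Python loop over the rows.
    candidates = nodes[(nodes.memory_free >= 8 * 1024) &
                       (nodes.vcpus_free >= 4) &
                       (nodes.cpu_usage < 0.9)]

    # "Weighing": pick the host with the most free memory among candidates.
    print(candidates.nlargest(1, 'memory_free'))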

I'm curious to know why this path was not explored more before embarking full
speed on concurrency/scale-out options, which is a very complex and
treacherous path, as we see in this discussion. It is clearly very attractive
intellectually to work with all these complex distributed frameworks, but the
cost of complexity is often overlooked.

Is there any data showing the performance of the current nova scheduler? How
many schedulings can nova do per second at scale with worst-case filters?
When you think about it, 10,000 nodes and their associated properties is not 
such a big number if you use the right libraries.




On 10/9/15, 1:10 PM, "Joshua Harlow"  wrote:

>And also we should probably deprecate/not recommend:
>
>http://docs.openstack.org/developer/nova/api/nova.scheduler.filters.json_filter.html#nova.scheduler.filters.json_filter.JsonFilter
>
>That filter IMHO basically disallows optimizations like forming SQL 
>statements for each filter (and then letting the DB do the heavy 
>lifting) or say having each filter say 'oh my logic can be performed by 
>a prepared statement ABC and u should just use that instead' (and then 
>letting the DB do the heavy lifting).
>
>Chris Friesen wrote:
>> On 10/09/2015 12:25 PM, Alec Hothan (ahothan) wrote:
>>>
>>> Still the point from Chris is valid. I guess the main reason openstack is
>>> going with multiple concurrent schedulers is to scale out by
>>> distributing the
>>> load between multiple instances of schedulers because 1 instance is too
>>> slow. This discussion is about coordinating the many instances of
>>> schedulers
>>> in a way that works and this is actually a difficult problem and will get
>>> worst as the number of variables for instance placement increases (for
>>> example NFV is going to require a lot more than just cpu pinning, huge
>>> pages
>>> and numa).
>>>
>>> Has anybody looked at why 1 instance is too slow and what it would
>>> take to
>>> make 1 scheduler instance work fast enough? This does not preclude the
>>> use of
>>> concurrency for finer grain tasks in the background.
>>
>> Currently we pull data on all (!) of the compute nodes out of the
>> database via a series of RPC calls, then evaluate the various filters in
>> python code.
>>
>> I suspect it'd be a lot quicker if each filter was a DB query.
>>
>> Also, ideally we'd want to query for the most "strict" criteria first,
>> to reduce the total number of comparisons. For example, if you want to
>> implement the "affinity" server group policy, you only need to test a
>> single host. If you're matching against host aggregate metadata, you
>> only need to test against hosts in matching aggregates.
>>
>> Chris
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Kyle Mestery
On Fri, Oct 9, 2015 at 8:52 AM, Russell Bryant  wrote:

> On 10/09/2015 05:42 AM, Thierry Carrez wrote:
> > Hello everyone,
> >
> > OpenStack has become quite big, and it's easier than ever to feel lost,
> > to feel like nothing is really happening. It's more difficult than ever
> > to feel part of a single community, and to celebrate little successes
> > and progress.
> >
> > In a (small) effort to help with that, I suggested making it easier to
> > record little moments of joy and small success bits. Those are usually
> > not worth the effort of a blog post or a new mailing-list thread, but
> > they show that our community makes progress *every day*.
> >
> > So whenever you feel like you made progress, or had a little success in
> > your OpenStack adventures, or have some joyful moment to share, just
> > throw the following message on your local IRC channel:
> >
> > #success [Your message here]
> >
> > The openstackstatus bot will take that and record it on this wiki page:
> >
> > https://wiki.openstack.org/wiki/Successes
> >
> > We'll add a few of those every week to the weekly newsletter (as part of
> > the developer digest that we reecently added there).
> >
> > Caveats: Obviously that only works on channels where openstackstatus is
> > present (the official OpenStack IRC channels), and we may remove entries
> > that are off-topic or spam.
> >
> > So... please use #success liberally and record lttle everyday OpenStack
> > successes. Share the joy and make the OpenStack community a happy place.
> >
>
> This is *really* cool.  I'm excited to use this and see all the things
> others record.  Thanks!!
>
>
Indeed, sometimes it's easy to get lost in the bike shedding, this is a
good way for everyone to remember the little successes that people are
having. After all, this project is composed of actual people, it's good to
highlight the little things we each consider a success. Well done!

Thanks,
Kyle


> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that make break when we remove Devstack extras.d in 10 weeks

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:58 PM, Dmitry Tantsur wrote:

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.


As I'm leaving for the weekend, I'll post my findings here.

I was not able to spot what writes these files (in my case it was named 
33). I also was not able to reproduce it on my regular devstack environment.


I've posted a temporary patch https://review.openstack.org/#/c/233017/ 
so that we're able to track where and when these files appear. Right now
I have only established that they appear during the devstack run, not
earlier.






This is not guarunteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Carl Baldwin
+1 Great idea!

On Fri, Oct 9, 2015 at 3:42 AM, Thierry Carrez  wrote:
> Hello everyone,
>
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
>
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
>
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
>
> #success [Your message here]
>
> The openstackstatus bot will take that and record it on this wiki page:
>
> https://wiki.openstack.org/wiki/Successes
>
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we reecently added there).
>
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
>
> So... please use #success liberally and record lttle everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L2 gateway project

2015-10-09 Thread Kyle Mestery
On Fri, Oct 9, 2015 at 10:13 AM, Gary Kotton  wrote:

> Hi,
> Who will be creating the stable/liberty branch?
> Thanks
> Gary
>
>
I'll be doing this once someone from the L2GW team lets me know a commit
SHA to create it from.

Thanks,
Kyle


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Monty Taylor

On 10/09/2015 11:21 AM, Shamail wrote:




On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:

It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.


Apologies if this is a question that has already been addressed, but why can't
we just leverage something like consul.io?


It's a good question and there have actually been some discussions about 
leveraging it on the backend. However, even if we did, we'd still need 
keystone to provide the multi-tenancy view on the subject. consul wasn't 
designed (quite correctly I think) to be a user-facing service for 50k 
users.


I think it would be an excellent backend.




A lot of that ended up in here (which was an ether pad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.

I didn't see anything immediately in the etherpad that couldn't be covered with 
the tool mentioned above.  It is open-source so we could always try to 
contribute there if we need something extra (written in golang though).


A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide link to code when explaining a point. Github supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132

That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context between
projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated workgroup
meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to commit
a chunk of time over this cycle to this. Once we know that we can try to
figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Nick Chase
This is AWESOME!  And I've already found useful resources on the list of 
successes.  Beautiful job, and fantastic idea!


  Nick


On 10/09/2015 05:42 AM, Thierry Carrez wrote:
> Hello everyone,
>
> OpenStack has become quite big, and it's easier than ever to
feel lost,
> to feel like nothing is really happening. It's more difficult
than ever
> to feel part of a single community, and to celebrate little
successes
> and progress.
>
> In a (small) effort to help with that, I suggested making it
easier to
> record little moments of joy and small success bits. Those are
usually
> not worth the effort of a blog post or a new mailing-list
thread, but
> they show that our community makes progress *every day*.
>
> So whenever you feel like you made progress, or had a little
success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
>
> #success [Your message here]
>
> The openstackstatus bot will take that and record it on this
wiki page:
>
> https://wiki.openstack.org/wiki/Successes
>
> We'll add a few of those every week to the weekly newsletter (as
part of
> the developer digest that we reecently added there).
>
> Caveats: Obviously that only works on channels where
openstackstatus is
> present (the official OpenStack IRC channels), and we may remove
entries
> that are off-topic or spam.
>
> So... please use #success liberally and record lttle everyday
OpenStack
> successes. Share the joy and make the OpenStack community a
happy place.
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] mox to mock migration

2015-10-09 Thread Steven Hardy
On Fri, Oct 09, 2015 at 09:06:57AM -0400, Jay Dobies wrote:
> I forget where we left things at the last meeting with regard to whether or
> not there should be a blueprint on this. I was going to work on some during
> some downtime but I wanted to make sure I wasn't overlapping with what
> others may be converting (it's more time consuming than I anticipated).
> 
> Any thoughts on how to track it?

I'd probably suggest raising either a bug or a blueprint (not spec), then
link from that to an etherpad where you can track all the tests requiring
rework, and who's working on them.

"it's more time consuming than I anticipated" is pretty much my default
response for anything to do with heat unit tests btw, good luck! :)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Metadata via dhcp namespace not working for icehouse release

2015-10-09 Thread Ihar Hrachyshka
> On 09 Oct 2015, at 09:00, Pradeep kumar  wrote:
> 
> 
> Hii Guys,
> I am trying to run Metadata via the DHCP namespace by following blog post 
> mentioned below:
> 
> http://techbackground.blogspot.in/2013/06/metadata-via-dhcp-namespace.html
> 
> Commands mentioned on above link for debugging o/p is exactly same for my 
> setup. Also i am able to ping 169.254.169.254 metadata server ip. But while 
> running below command from VM i get
> 
> curl http://169.254.169.254
> 
> curl: (7) Failed to connect to 169.254.169.254 port 80: Connection timed out
> 
> and
> 
> curl http://169.254.169.254:53
> gives empty response
> &
> from controller node if i run curl http://169.254.169.254 i get
> curl http://169.254.169.254:8775
> 1.0
> 2007-01-19
> 2007-03-01
> 2007-08-29
> 2007-10-10
> 2007-12-15
> 2008-02-01
> 2008-09-01
> 2009-04-04
> On controller node:
> netstat -nap | grep 8775 gives
> tcp0  0 0.0.0.0:87750.0.0.0:*   LISTEN
>   13619/python
> 
> Please suggest some pointers i am stuck at the same point from last 10 days.
> 
> Regards
> Pradeep Kumar

Icehouse is no longer supported. Please retry on a supported version Juno+.

Ihar


signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Jeremy Stanley
On 2015-10-09 17:00:27 +0800 (+0800), Tang Chen wrote:
[...]
> I'm not sure if it is a good idea. Please help to review the
> following BP.
> 
> https://blueprints.launchpad.net/openstack-ci/+spec/ci-rfc-mechanism

The Infra team doesn't rely on blueprints, and instead uses
http://specs.openstack.org/openstack-infra/infra-specs/ to plan
complex implementation work. I only just discovered that you can
disable blueprints on a project in Launchpad, so I've now done that
for the "openstack-ci" project.

That said, I don't expect the proposed feature would get very far.
We're committed to providing test results for all patchsets when
possible; that doesn't mean you are required to pay attention to the
results or even wait for them to be posted on your change while it's
still in a state of initial flux.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][horizon] adding stable reviewer

2015-10-09 Thread Matthias Runge
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 09/10/15 13:44, Thierry Carrez wrote:
> Ihar Hrachyshka wrote:
>> Welcome Doug!
> 
> Please (re)read and apply the policy at: 
> https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy

Thank you Ihar!

Best,
Matthias

-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJWF7afAAoJEBdpskgc9eOLn04QAMVQhUCArh3rvpTBqHIS0mCB
SHdj1Go5LMU6+t8p42codEn0q+dihtO5sZaOKx+7LR6YZn5AhEmnf8ihn7dB4iHA
I+xgEdBNDXoalQhvS9fuoMtOqVesf+aUaH1bEnsktQpycYwXCXz5bSDhHqYEMkvn
Yq1rJfzA3JpHSeLR0yTaT3eFNfOJYyIx2vo9+bl77U7kZ8NnWpKsH3HRRXfW90fy
8Z6udKo3+bqtf03xHJAyHGxlD7TtkL5mGeNY9P+WKlHHKXSW3kkKpUj2RotzVoD3
d3Zz9A8g350vbXrfFhVwsbjdWBW6UMgtOpAheqZ+Nz/w/ZqClPGT6NAxjJfyLLWL
EsKcy4oHJ1VZPk7ql+MEjK6WNLe5c6PPTQT1QSOv49YfFfDPRYCBYx32Gs3hAMRH
zSRm24DJBoGq6aVBQQFUhun4YuisglWYTaSvwQbqe5arS/lNOyWzEC2sbI5ffJLR
fjtFXmJQDgJRIxkAC0VKlXEMizAfMstZJoG/ID05VZPe6KWGwlkrkAkT5p0XR2Oz
5EqJ9myy+Y9ZKwNC32R5fEt8JCG3z/FLMmA64OYtZ3ylY3VTrBzGVg4TtsP28gkV
AcKMMuSyIaKMpAavR7SWxJq1HvSr9bDQqCX+4AHApxvr0Z8YN355AudlTbylLUGz
wumYrnthpMwF6UpCA09K
=4S4k
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] py.test vs testrepository

2015-10-09 Thread Roman Prykhodchenko
Thank you guys for all your help! Special thanks to Robert, who helped to find
a workaround for an issue [1] that didn't let us use testr for Fuel Client.
The patch [2] was merged, and both unit and functional tests are now launched
via subunit, with the data maintained by testrepository.

Please also note that, in order to facilitate debugging, two additional tox
environments, dbgunit and dbgfunc, were introduced for unit and functional
tests respectively.


1. https://bugs.launchpad.net/testrepository/+bug/1504310 

2. https://review.openstack.org/#/c/227895/


- romcheg


> On 9 Oct 2015, at 01:51, Roman Prykhodchenko wrote:
> 
> Folks,
> 
> Since we’ve reached consensus here, I’d like to invite you to review the 
> patch [1] that replaces py.test with testr without making debugging or running 
> specific tests harder. Please also note that it has a dependency which needs 
> to be reviewed and merged first.
> 
> 1. https://review.openstack.org/#/c/227895
> 
> 
> - romcheg
> 
> 
>> On 7 Oct 2015, at 14:41, Roman Prykhodchenko wrote:
>> 
>> Michał,
>> 
>> some comments in-line
>> 
 - testrepository and related components are used in OpenStack Infra
 environment for much more tasks than just running tests
>>> 
>>> If by "more tasks" you mean parallel testing, py.test also has a
>>> possibility to do that by pytest-xdist.
>> 
>> As Monthy mentioned, it’s not only about testing, it’s more about deeper 
>> integration with OpenStack Infra.
>> 
>> 
 - py.test won’t be added to global-requirements so there always be a chance
 of another dependency hell
>>> 
>>> As Igor Kalnitsky said, py.test doesn't have much requirements.
>>> https://github.com/pytest-dev/pytest/blob/master/setup.py#L58
>>> It's only argparse, which already is in global requirements without
>>> any version pinned.
>> 
>> It’s not only about py.test; there is an ongoing objective of aligning all 
>> requirements with global-requirements, because we run into big problems 
>> around that every release.
>> 
>>> 
>>> Cheers,
>>> Michal
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Ihar Hrachyshka
> On 09 Oct 2015, at 11:42, Thierry Carrez  wrote:
> 
> Hello everyone,
> 
> OpenStack has become quite big, and it's easier than ever to feel lost,
> to feel like nothing is really happening. It's more difficult than ever
> to feel part of a single community, and to celebrate little successes
> and progress.
> 
> In a (small) effort to help with that, I suggested making it easier to
> record little moments of joy and small success bits. Those are usually
> not worth the effort of a blog post or a new mailing-list thread, but
> they show that our community makes progress *every day*.
> 
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
> 
> #success [Your message here]
> 
> The openstackstatus bot will take that and record it on this wiki page:
> 
> https://wiki.openstack.org/wiki/Successes
> 
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we reecently added there).
> 
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
> 
> So... please use #success liberally and record lttle everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.

That is oh so cool! :)

Another IRC service that I find useful to encourage collaboration in teams is a 
karma bot. Something that would calculate ++ messages in tracked 
channels. Having such a lightweight and visible way to tell ‘thank you’ to a 
contributor would be great. Do we have plans to implement it in infra?
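
For illustration, the counting core of such a bot is tiny; a toy sketch (not any 
existing implementation) could look like:

    import re
    from collections import Counter

    KARMA_RE = re.compile(r'([\w-]+)\+\+')
    karma = Counter()

    def on_channel_message(text):
        # count every "nick++" occurrence seen in a tracked channel
        for nick in KARMA_RE.findall(text):
            karma[nick] += 1

    on_channel_message('ttx++ for the #success idea')
    print(karma.most_common(5))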

Ihar



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] jobs that may break when we remove Devstack extras.d in 10 weeks

2015-10-09 Thread Sean Dague
From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.

This is not guaranteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] jobs that may break when we remove Devstack extras.d in 10 weeks

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:35 PM, Sean Dague wrote:

 From now until the removal of devstack extras.d support I'm going to
send a weekly email of jobs that may break. A warning was added that we
can track in logstash.

Here are the top 25 jobs (by volume) that are currently tripping the
warning:

gate-murano-devstack-dsvm
gate-cue-integration-dsvm-rabbitmq
gate-murano-congress-devstack-dsvm
gate-solum-devstack-dsvm-centos7
gate-rally-dsvm-murano-task
gate-congress-dsvm-api
gate-tempest-dsvm-ironic-agent_ssh
gate-solum-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ipa-nv
gate-ironic-inspector-dsvm-nv
gate-tempest-dsvm-ironic-pxe_ssh
gate-tempest-dsvm-ironic-parallel-nv
gate-tempest-dsvm-ironic-pxe_ipa
gate-designate-dsvm-powerdns
gate-python-barbicanclient-devstack-dsvm
gate-tempest-dsvm-ironic-pxe_ssh-postgres
gate-rally-dsvm-designate-designate
gate-tempest-dsvm-ironic-pxe_ssh-dib
gate-tempest-dsvm-ironic-agent_ssh-src
gate-tempest-dsvm-ironic-pxe_ipa-src
gate-muranoclient-dsvm-functional
gate-designate-dsvm-bind9
gate-tempest-dsvm-python-ironicclient-src
gate-python-ironic-inspector-client-dsvm
gate-tempest-dsvm-ironic-lib-src-nv

(You can view this query with http://goo.gl/6p8lvn)

The ironic jobs are surprising, as something is crudding up extras.d
with a file named 23, which isn't currently run. Eventual removal of
that directory is going to potentially make those jobs fail, so someone
more familiar with it should look into it.


Thanks for noticing, looking now.



This is not guaranteed to be a complete list, but as jobs are removed /
fixed we should end up with other less frequently run jobs popping up in
future weeks.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Plan to port Swift to Python 3

2015-10-09 Thread Victor Stinner

Hi,

On 09/10/2015 12:12, vishal yadav wrote:

However I was just checking if you considered using 2to3. I can
understand that translation using this tool might not cover every area
in the code, specifically custom/3rd-party libraries (non-standard
Python libraries), but IMO it can do fixer translations to a large extent.


I tried the 2to3, modernize and 2to6 tools in the past, but they produce a 
single giant patch with unwanted changes. These tools are written to add 
compatibility for all Python versions including Python 2.6 and Python 
3.0. In OpenStack, we only care about Python 2.7 and 3.4, so the code can 
be simpler. For example, we can simply write u"unicode" instead of 
six.b("unicode").


I wrote the sixer tool for OpenStack. Basically, it's the same as 
2to6, except that:


- sixer respects the OpenStack coding style on imports: it groups and sorts 
them. This avoids having to manually fix each modified import, which takes 
a lot of time


- sixer can produce a patch for a single pattern. For example, replace 
all unicode with six.text_type but nothing else. Since all changes are 
reviewed carefully in OpenStack, it's important to produce "reviewable" 
(small) changes.


See also my blog article which explains the full rationale:
http://haypo.github.io/python3-sixer.html

My patch series of 6 changes to fix most Python 3 issues was almost fully 
generated by sixer. Sometimes I had to manually fix a few lines because 
no tool is perfect ;-) The patch series:


* "py3: Replace unicode with six.text_type"
  https://review.openstack.org/#/c/232476/

* "py3: Replace urllib imports with six.moves.urllib"
  https://review.openstack.org/#/c/232536/

* "py3: Use six.reraise() to reraise an exception"
  https://review.openstack.org/#/c/232537/

* "py3: Replace gen.next() with next(gen)"
  https://review.openstack.org/#/c/232538/

* "Replace itertools.ifilter with six.moves.filter"
  https://review.openstack.org/#/c/232539/

* "py3: Replace basestring with six.string_types"
  https://review.openstack.org/#/c/232540/
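
For readers less familiar with these patterns, here is a rough sketch of code 
written in the 2.7/3.4-compatible style the series converges on (illustrative 
only, not taken from the actual Swift patches):

    import six
    from six.moves import urllib

    def describe(value, items):
        if isinstance(value, six.string_types):   # was: basestring
            value = six.text_type(value)          # was: unicode(value)
        quoted = urllib.parse.quote(value)        # was: urllib.quote(value)
        gen = iter(items)
        first = next(gen)                         # was: gen.next()
        rest = list(six.moves.filter(None, gen))  # was: itertools.ifilter(None, gen)
        return quoted, first, rest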

I will then use sixer on individual files to fix all remaining Python 3 issues at once.

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Inconsistent timestamping of polled data

2015-10-09 Thread Igor Degtiarov
Hi!

Looks good to me, especially for cases when, after some incident, we accumulate a
great number of notifications in the queue and start working through them; some
data would get an incorrect timestamp if it were set only when the sample is created.
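
For illustration, a rough sketch of the difference being discussed (hypothetical
pollster code, not the actual ceilometer implementation):

    from oslo_utils import timeutils

    def get_samples(resource_ids, measure):
        # proposed: stamp the whole polling cycle once, up front ...
        polled_at = timeutils.utcnow().isoformat()
        for resource_id in resource_ids:
            yield {
                'resource_id': resource_id,
                'volume': measure(resource_id),
                # ... instead of calling timeutils.utcnow() here, per sample
                'timestamp': polled_at,
            }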

Cheers,

Igor Degtiarov
Software Engineer
Mirantis Inc.
www.mirantis.com

On Fri, Oct 9, 2015 at 1:00 PM, Wen Zhi WW Yu  wrote:

> Hi all,
>
> As Gordon descriped in https://bugs.launchpad.net/ceilometer/+bug/1491509
> , many of pollsters define the timestamp individually for each sample that
> is generated rather than basing on when the data was polled. I agree with
> Gordon on that the timestamping of samples should base on when the data was
> polled.
>
> What's your opinion on this?
>
> Best Regards,
> Yu WenZhi(余文治)
> OpenStack on Power Development, IBM Shanghai
> 2F, 399 Keyuan Rd, Zhangjiang Chuangxin No. 10 Building, Zhangjiang High
> Tech Park Shanghai
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] [heat] Mistral Workflow resource type - resource signal handling

2015-10-09 Thread Renat Akhmerov

> On 08 Oct 2015, at 23:27, ELISHA, Moshe (Moshe) 
>  wrote:
> 
> Hi,
>  
> I would like to propose a change in the behavior of the OS::Mistral::Workflow 
> resource signal.
>  
> CURRENT:
> The OS::Mistral::Workflow resource type is expecting the following request 
> body on resource signal request:
>  
> {
>   "input": {
> ...
>   },
>   "params": {
> ...
>   }
> }
>  
> The input section is optional and if exists it will be passed to the workflow 
> execution as inputs
> The params section is also optional and if exists it will be passed to the 
> workflow execution as parameters.
>  
> The problem this approach creates is that external systems many times send a 
> predefined body that you cannot control and it is obviously not in the format 
> the resource is expecting.
> So you basically have no way to pass the information from the request body to 
> the workflow execution.

That makes sense to me, I’m just wondering if it’s possible to have a 
transformer somewhere that would convert data to a needed form. I’m not that 
good at Heat though.


> SUGGESTION:
> OS::Mistral::Workflow will treat the root of the JSON request body as input 
> parameters.
> That way you will be able to use external systems by making sure your WF 
> inputs are aligned with what the external system sends.
>  
> For example, if you try to put the WF alarm_url as a Ceilometer alarm action 
> - Ceilometer will send a request similar to:
>  
> {
>  "severity": "low",
>  "alarm_name": "my-alarm",
>  "current": "insufficient data",
>  "alarm_id": "895fe8c8-3a6e-48bf-b557-eede3e7f4bbd",
>  "reason": "1 datapoints are unknown",
>  "reason_data": {
>"count": 1,
>"most_recent": null,
>"type": "threshold",
>"disposition": "unknown"
>  },
>  "previous": "ok"
> }
>  
> The WF could get this info as input if it will be defined like so:
>  
>   my_workflow:
> type: OS::Mistral::Workflow
> properties:
>   input:
> current: !!null
> alarm_id: !!null
> reason: !!null
> previous: !!null
> severity: !!null
> alarm_name: !!null
> reason_data: !!null
>  
>  
> The (least used) “params” section can be passed in a custom HTTP header and 
> the OS::Mistral::Workflow will read those from the header and pass it to the 
> WF execution.
> Remember, we are trying to solve the problem where you can’t influence the 
> request format – so in any case the signal will not get the params in the 
> request body.
> If the WF of the user must receive params, the user will always be able to 
> create a wrapper WF with only inputs that starts the orig WF with inputs and 
> params.

I’m generally ok with this suggestion. My only concern is that we won’t be able 
to trigger the same WF with different types of triggers. For instance, I may want 
a WF that does healing, but want both a Ceilometer alarm and some other type of 
trigger (e.g. a monitoring system like Zabbix) to trigger this WF. Most likely it 
would not be straightforward, because I’d have to design my WF for a certain 
triggering system: its input values would have to match the request body 
structure sent by that trigger.

At the same time, needing two types of triggers is probably going to be a rare 
situation, and like you said we can always create a wrapper workflow for other 
types of triggers if needed. The general approach here may actually be: create a 
WF with the input structure that is most convenient for the WF itself, and for 
every type of trigger create a wrapper WF, if needed.


> In order to make this non-backward-compatible change, I suggest adding a 
> property “params_alarm_http_header_name” to the OS::Mistral::Workflow. If 
> null the params are expected to be in the body as today.
> If not null – the request should have a header with that name and the value 
> will be a string representing a JSON dict.

Sounds ok to me.
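
For illustration only (the header name below is invented for the example),
signalling such a resource from an external system or a test could then look like:

    import json
    import requests

    alarm_url = 'https://heat.example.com/v1/signal/abc123'       # placeholder
    alarm_body = {'alarm_name': 'my-alarm', 'current': 'alarm'}    # what the trigger sends
    workflow_params = {'env': {'notify': ['ops-channel']}}         # hypothetical params

    requests.post(alarm_url,
                  data=json.dumps(alarm_body),
                  headers={'X-Workflow-Params': json.dumps(workflow_params)})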

Renat Akhmerov
@ Mirantis Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][horizon] adding stable reviewer

2015-10-09 Thread Matthias Runge
Hello,

who would be the person to talk to, to add a new reviewer to
horizon-stable-maint

I would like Doug Fish aka drfish (on launchpad) added.

Unfortunately, the fields on
https://review.openstack.org/#/admin/groups/537,members
are greyed out for me.

Thanks, Matthias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][horizon] adding stable reviewer

2015-10-09 Thread Ihar Hrachyshka
> On 09 Oct 2015, at 12:44, Matthias Runge  wrote:
> 
> Hello,
> 
> who would be the person to talk to, to add a new reviewer to
> horizon-stable-maint
> 
> I would like Doug Fish aka drfish (on launchpad) added.
> 
> Unfortunately, the fields on
> https://review.openstack.org/#/admin/groups/537,members
> are greyed out for me.
> 
> Thanks, Matthias

Added the new member to the group.

Welcome Doug!

Ihar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][horizon] adding stable reviewer

2015-10-09 Thread Thierry Carrez
Ihar Hrachyshka wrote:
> Welcome Doug!

Please (re)read and apply the policy at:
https://wiki.openstack.org/wiki/StableBranch#Stable_branch_policy

-- 
Thierry Carrez (ttx)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [CI] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Dmitry Tantsur

On 10/09/2015 12:06 PM, Tang Chen wrote:


On 10/09/2015 05:48 PM, Jordan Pittier wrote:

Hi,
On Fri, Oct 9, 2015 at 11:00 AM, Tang Chen > wrote:

Hi,

CI systems will run tests for each patch once it is submitted or
modified.
But most CI systems occupy a lot of resource, and take a long time to
run tests (1 or 2 hours for one patch).

I think, not all the patches submitted need to be tested. Even
those patches
with an approved BP and spec may be reworked for 20+ versions. So
I think
CI should support a RFC (Require For Comments) mechanism for
developers
to submit and review the code detail and rework. When the patches are
fully ready, I mean all reviewers have agreed on the
implementation detail,
then CI will test the patches.

So have the humans do the hard work to eventually find out that the
patch breaks the world ?


No. Developers of course will run some tests themselves before they
submit patches.


Tests, but not all possible CIs. E.g. in ironic we have 6 devstack-based 
jobs; I don't really expect a submitter to go through them manually. 
Actually, it's an awesome feature of our CI system that I would not give 
away :)


Also, as a reviewer, I'm not sure I would like to argue about function 
names while I'm not even sure that this change does not break the world.



It is just a waste of resources to run CI while reviewers are still discussing
where this function should be, or what the function should be named. After all
these details are agreed on, run the CI.


For a 20+ version patch-set, maybe 3 or 4 rounds
of tests are enough. Just test the last 3 or 4 versions.

 How do know, when a new patchset arrives, that it's part of the last
3 or 4 versions ?


I think it could work like this:
1. At first, the developer submits the v1 patch-set with an RFC tag. CIs don't run.
2. After several reworked versions, like v5 or v6, most reviewers have agreed
that the implementation is OK. Then v7 is submitted without the RFC tag, and CIs run.
3. After 3 or 4 rounds of tests, the v10 patch-set could be merged.
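
In CI terms the gating rule itself would be trivial; a rough sketch of the check
(no particular CI system implied):

    def should_run_ci(commit_message):
        # skip the expensive jobs while the change is still marked as RFC
        first_line = commit_message.lstrip().splitlines()[0]
        return not first_line.upper().startswith('[RFC]')

    assert should_run_ci('Add foo driver\n\nImplements bp foo')
    assert not should_run_ci('[RFC] Add foo driver')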

Thanks.



This can significantly reduce CI overload.

This workflow appears in many other OSS communities, such as Linux
kernel,
qemu and libvirt. Testers won't test patches with a [RFC] tag in
the commit message.
So I want to enable CI to support a similar mechanism.

I'm not sure if it is a good idea. Please help to review the
following BP.

https://blueprints.launchpad.net/openstack-ci/+spec/ci-rfc-mechanism

Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I am running a 3rd party for Cinder. The amount of time to setup,
operate and watch after the CI results cost way more than the 1 or 2
servers it take to run the jobs. So, I don"t want to be a party pooper
here, but in my opinion I am not sure it's worth the effort.

Note: I don't know about nova or neutron.

Jordan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Try to introduce RFC mechanism to CI.

2015-10-09 Thread Jeremy Stanley
On 2015-10-09 18:06:55 +0800 (+0800), Tang Chen wrote:
[...]
> It is just a waste of resource if reviewers are discussing about
> where this function should be, or what the function should be
> named. After all these details are agreed on, run the CI.
[...]

As one of the people maintaining the upstream CI and helping
coordinate our resources/quotas, I don't see that providing early
test feedback is a waste. We're steadily increasing the instance
quotas available to us, so check pipeline utilization should
continue to become less and less of a concern anyway.

For a change which is still under debate, feel free to simply ignore
test results until you get it to a point where you see them start to
become relevant.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Jeremy Stanley
On 2015-10-09 15:53:17 +0200 (+0200), Ihar Hrachyshka wrote:
[...]
> There are already multiple karmabot implementations that could be
> reused, like https://github.com/chromakode/karmabot
> 
> Can we just adopt one of those?

Perhaps, though we're trying to reduce rather than increase the
number of individual IRC bots we're managing. Ultimately I'm
interested in seeing us collapse our current family (gerritbot,
statusbot, meetbot) into one codebase/framework to further reduce
the maintenance burden, but haven't found interested parties yet to
contribute the coding effort.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Backport policy for Liberty

2015-10-09 Thread Sam Yaple
On Thu, Oct 8, 2015 at 2:47 PM, Steven Dake (stdake) 
wrote:

> Kolla operators and developers,
>
> The general consensus of the Core Reviewer team for Kolla is that we
> should embrace a liberal backport policy for the Liberty release.  An
> example of liberal -> We add a new server service to Ansible, we would
> backport the feature to liberty.  This is in breaking with the typical
> OpenStack backports policy.  It also creates a whole bunch more work and
> has potential to introduce regressions in the Liberty release.
>
> Given these realities I want to put on hold any liberal backporting until
> after Summit.  I will schedule a fishbowl session for a backport policy
> discussion where we will decide as a community what type of backport policy
> we want.  The delivery required before we introduce any liberal backporting
> policy then should be a description of that backport policy discussion at
> Summit distilled into a RST file in our git repository.
>
> If you have any questions, comments, or concerns, please chime in on the
> thread.
>
> Regards
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I am in favor of a very liberal backport policy. We have the potential to
have very little code difference between N, N-1, and N-2 releases while
still deploying the different versions of OpenStack. However, I recognize
it is a big undertaking to backport all things, not to mention the testing
involved.

I would like to see two things before we truly embrace a liberal policy.
The first is better testing. A true gate that does upgrades and potentially
multinode (at least from a network perspective). The second thing is a bot
or automation of some kind to automatically propose non-conflicting patches
to the stable branches if they include the 'backport: xyz' tag in the
commit message. Cores would still need to confirm these changes with the
normal review process and could easily abandon them, but that would remove
a lot of the overhead of performing the actual backport.
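
Purely as a sketch of that second idea (the helper below is invented here, not an
existing tool), the automation could be as simple as:

    import subprocess

    def propose_backports(stable_branch='stable/liberty'):
        # commits on master whose message carries a "backport:" tag
        shas = subprocess.check_output(
            ['git', 'log', '--grep=backport:', '--format=%H',
             'origin/%s..origin/master' % stable_branch]).decode().split()
        for sha in reversed(shas):
            subprocess.check_call(['git', 'checkout', '-B',
                                   'backport-' + sha[:8],
                                   'origin/' + stable_branch])
            # -x records the original sha; skip anything that conflicts
            if subprocess.call(['git', 'cherry-pick', '-x', sha]) != 0:
                subprocess.call(['git', 'cherry-pick', '--abort'])
                continue
            # the change would then be pushed for normal review, e.g. via git-review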

Since Kolla simply deploys OpenStack, it is a lot closer to a client or a
library than it is to Nova or Neutron. And given its mission maybe it
should break from the "typical OpenStack backports policy" so we can give a
consistent deployment experience across all stable and supported versions of
OpenStack at any given time.

Those are my thoughts on the matter at least. I look forward to some
conversations about this in Tokyo.

Sam Yaple
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Jeremy Stanley
On 2015-10-09 14:58:36 +0100 (+0100), Cory Benfield wrote:
[...]
> IMO, what OpenStack needs is a decision about where it’s getting
> its packages from, and then to refuse to mix the two.

I have yet to find a Python-based operating system installable in
whole via pip. There will always be _at_least_some_ packages you
install from your operating system's package management. What you
seem to be missing is that Linux distros are now shipping base
images which include their python-requests and python-urllib3
packages already pre-installed as dependencies of Python-based tools
they deem important to their users.

To work around this in our test infrastructure we're effectively
abandoning all hope of using distro-provided server images, and
building our own from scratch to avoid the possibility that they may
bring with them their own versions of any Python libraries
whatsoever. We're at the point where we're basically maintaining our
own derivative Linux distributions. The web of dependencies in
OpenStack has reached a level of complexity where it's guaranteed to
overlap with just about any pre-installed python-.* packages in a
distro-supplied image.

We're only now reaching the point where our Python dependencies
actually all function within the context of a virtualenv without
needing system-site-packages contamination, so the next logical step
is probably to see if virtualenv isolation is possible for
frameworks like DevStack (the QA team may already be trying to
figure that out, I'm not sure).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][horizon] adding stable reviewer

2015-10-09 Thread Yolanda Robla Mota

Hi Matthias
That group is owned by stable-maint-core, so you should look at the members 
there (https://review.openstack.org/#/admin/groups/530,members) and ask one 
of them to add Doug.


Best
Yolanda

El 09/10/15 a las 12:44, Matthias Runge escribió:

Hello,

who would be the person to talk to, to add a new reviewer to
horizon-stable-maint

I would like Doug Fish aka drfish (on launchpad) added.

Unfortunately, the fields on
https://review.openstack.org/#/admin/groups/537,members
are greyed out for me.

Thanks, Matthias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Yolanda Robla Mota
Cloud Automation and Distribution Engineer
+34 605641639
yolanda.robla-m...@hpe.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All][Elections] Vote Vote Vote in the TC election!

2015-10-09 Thread Tristan Cacqueray
We are coming down to the last hours for voting in the TC election.

Search your gerrit preferred email address[0] for the following subject:
  Poll: OpenStack Technical Committee (TC) Election - October 2015

That is your ballot and links you to the voting application. Please
vote. If you have voted, please encourage your colleagues to vote.

Candidate statements are linked to the names of all confirmed candidates:
https://wiki.openstack.org/wiki/TC_Elections_September/October_2015#Confirmed_Candidates

What to do if you don't see the email and have a commit in at least one
of the official program projects[1]:
  * check the trash of your gerrit Preferred Email address[0],
in case it went into trash or spam
  * wait a bit and check again, in case your email server is a bit slow
  * find the sha of at least one commit from the program project
repos[1] and email me and Tony[2]. If we can confirm that you are
entitled to vote, we will add you to the voters list and you will
be emailed a ballot.

Please vote!

Thank you,
Tristan

[0] Sign into review.openstack.org: Go to Settings > Contact
Information. Look at the email listed as your Preferred Email.
That is where the ballot has been sent.
[1]
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections
[2] Tony (tonyb): tony at bakeyournoodle dot com
Tristan (tristanC): tdecacqu at redhat dot com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requests + urllib3 + distro packages

2015-10-09 Thread Cory Benfield

> On 9 Oct 2015, at 14:40, William M Edmonds  wrote:
> 
> Cory Benfield  writes:
> > > The problem that occurs is the result of a few interacting things:
> > >  - requests has very very specific versions of urllib3 it works with.
> > > So specific they aren't always released yet.
> >
> > This should no longer be true. Our downstream redistributors pointed out
> > to us that this was making their lives harder than they needed to be, so it's now
> > our policy to only  update to actual release versions of urllib3.
> 
> That's great... except that I'm confused as to why requests would continue to 
> repackage urllib3 if that's the case. Why not just prereq the version of 
> urllib3 that it needs? I thought the one and only answer to that question had 
> been so that requests could package non-standard versions.
> 

That is not and was never the only reason for vendoring urllib3. However, and I 
cannot stress this enough, the decision to vendor urllib3 is *not going to be 
changed on this thread*. If and when it changes, it will be by consensus 
decision from the requests maintenance team, which we do not have at this time.

Further, as I pointed out to Donald Stufft on IRC, if requests unbundled 
urllib3 *today* that would not fix the problem. The reason is that we’d specify 
our urllib3 dependency as: urllib3>=1.12,<1.13. This dependency specification would 
still cause exactly the problem observed in this thread.

As you correctly identify in your subsequent email, William, the core problem 
is mixing of packages from distributions and PyPI. This happens with any tool 
with external dependencies: if you subsequently install a different version of 
a dependency using a packaging tool that is not aware of some of the dependency 
tree, it is entirely plausible that an incompatible version will be installed. 
It’s not hard to trigger this kind of thing on Ubuntu. IMO, what OpenStack 
needs is a decision about where it’s getting its packages from, and then to 
refuse to mix the two.

Cory


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Adam Young

On 10/09/2015 12:28 PM, Monty Taylor wrote:

On 10/09/2015 11:21 AM, Shamail wrote:




On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:

It looks like some great conversation got going on the service catalog
standardization spec / discussion at the last cross project meeting.
Sorry I wasn't there to participate.

Apologies if this is a question that has already been addressed, but why 
can't we just leverage something like consul.io?


It's a good question and there have actually been some discussions 
about leveraging it on the backend. However, even if we did, we'd 
still need keystone to provide the multi-tenancy view on the subject. 
consul wasn't designed (quite correctly I think) to be a user-facing 
service for 50k users.


I think it would be an excellent backend.


The better question is, "Why are we not using DNS for the service catalog?"

Right now, we have the aspect of "project filtering of endpoints" which 
means that a token does not need to have every endpoint for a specified 
service.  If we were to use DNS, how would that map to the existing 
functionality?



Can we make better use of regions to help in endpoint filtering/selection?

Do we still need a query to Keystone to play arbiter if there are two 
endpoints assigned for a specific use case to help determine which is 
appropriate?










A lot of that ended up in here (which was an ether pad stevemar and I
started working on the other day) -
https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
I didn't see anything immediately in the etherpad that couldn't be 
covered with the tool mentioned above.  It is open-source so we could 
always try to contribute there if we need something extra (written in 
golang though).


A couple of things that would make this more useful:

1) if you are commenting, please (ircnick) your comments. It's not easy
to always track down folks later if the comment was not understood.

2) please provide link to code when explaining a point. Github supports
the ability to very nicely link to (and highlight) a range of code by a
stable object ref. For instance -
https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132 



That will make comments about X does Y, or Z can't do W, more clear
because we'll all be looking at the same chunk of code and start to
build more shared context here. One of the reasons this has been long
and difficult is that we're missing a lot of that shared context 
between

projects. Reassembling that by reading each other's relevant code will
go a long way to understanding the whole picture.


Lastly, I think it's pretty clear we probably need a dedicated 
workgroup

meeting to keep this ball rolling, come to a reasonable plan that
doesn't break any existing deployed code, but lets us get to a better
world in a few cycles. annegentle, stevemar, and I have been pushing on
that ball so far, however I'd like to know who else is willing to 
commit
a chunk of time over this cycle to this. Once we know that we can 
try to

figure out when a reasonable weekly meeting point would be.

Thanks,

-Sean

--
Sean Dague
http://dague.net

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating two new core reviewers

2015-10-09 Thread Devananda van der Veen
++ on both counts!

On Thu, Oct 8, 2015 at 2:47 PM, Jim Rollenhagen 
wrote:

> Hi all,
>
> I've been thinking a lot about Ironic's core reviewer team and how we might
> make it better.
>
> I'd like to grow the team more through trust and mentoring. We should be
> able to promote someone to core based on a good knowledge of *some* of
> the code base, and trust them not to +2 things they don't know about. I'd
> also like to build a culture of mentoring non-cores on how to review, in
> preparation for adding them to the team. Through these pieces, I'm hoping
> we can have a few rounds of core additions this cycle.
>
> With that said...
>
> I'd like to nominate Vladyslav Drok (vdrok) for the core team. His reviews
> have been super high quality, and the quantity is ever-increasing. He's
> also started helping out with some smaller efforts (full tempest, for
> example), and I'd love to see that continue with larger efforts.
>
> I'd also like to nominate John Villalovos (jlvillal). John has been
> reviewing a ton of code and making a real effort to learn everything,
> and keep track of everything going on in the project.
>
> Ironic cores, please reply with your vote; provided feedback is positive,
> I'd like to make this official next week sometime. Thanks!
>
> // jim
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] py26 support in python-muranoclient

2015-10-09 Thread Vahid S Hashemian
Serg, Jeremy,

Thank you for your response. The issue I ran into with my patch is the 
gate job failing on python26.
You can see it here: https://review.openstack.org/#/c/232271/

Serg suggested that we add 2.6 support to tosca-parser, which is fine with 
us.
But I got a bit confused after reading Jeremy's response.
It seems to me that the support will be going away, but there is no 
timeline (and therefore no near-term plan?)
So, I'm hoping Jeremy can advise whether he also recommends the same 
thing, or not.

Thank you both again.

Regards,
-
Vahid Hashemian, Ph.D.
Advisory Software Engineer, IBM Cloud




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] Symantec's security group management policies

2015-10-09 Thread Shiv Haris
Hi Su,

This looks very good.

Will it be possible to put your use case into the Usecase VM?
Have you tried it out on the Usecase VM that I published earlier? I can help if 
you get stuck.

LMK,

Thanks,

-Shiv


From: Su Zhang [mailto:westlif...@gmail.com]
Sent: Thursday, October 08, 2015 1:23 PM
To: openstack-dev
Subject: [openstack-dev] [congress] Symantec's security group management 
policies

Hello,

I've implemented a set of security group management policies and already put 
them into our usecase doc.
Let me know if you guys have any comments. My policy set is called "Security 
Group Management".
You can find the use case doc at: 
https://docs.google.com/document/d/1ExDmT06vDZjzOPePYBqojMRfXodvsk0R8nRkX-zrkSw/edit#heading=h.6z1ggtfrzg3n

Thanks,

--
Su Zhang
Senior Software Engineer
Symantec Corporation
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-manage db archive_deleted_rows broken

2015-10-09 Thread Jay Pipes

On 10/07/2015 11:04 AM, Matt Riedemann wrote:

I'm wondering why we don't reverse sort the tables using the sqlalchemy
metadata object before processing the tables for delete?  That's the
same thing I did in the 267 migration since we needed to process the
tree starting with the leaves and then eventually get back to the
instances table (since most roads lead to the instances table).


Yes, that would make a lot of sense to me if we used the SA metadata 
object for reverse sorting.
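
A minimal sketch of what that could look like (this is not the actual nova code;
it assumes the existing per-table archiving helper is passed in):

    from nova.db.sqlalchemy import models   # assumption: declarative BASE lives here

    def archive_all(archive_one_table, max_rows_per_table):
        meta = models.BASE.metadata
        # sorted_tables is ordered parents-first by foreign-key dependency,
        # so walking it in reverse archives child rows before their parents.
        for table in reversed(meta.sorted_tables):
            if table.name.startswith('shadow_'):
                continue
            archive_one_table(table.name, max_rows_per_table)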



Another thing that's really weird is how max_rows is used in this code.
There is cumulative tracking of the max_rows value so if the value you
pass in is too small, you might not actually be removing anything.

I figured max_rows meant up to max_rows from each table, not max_rows
*total* across all tables. By my count, there are 52 tables in the nova
db model. The way I read the code, if I pass in max_rows=10 and say it
processes table A and archives 7 rows, then when it processes table B it
will pass max_rows=(max_rows - rows_archived), which would be 3 for
table B. If we archive 3 rows from table B, rows_archived >= max_rows
and we quit. So to really make this work, you have to pass in something
big for max_rows, like 1000, which seems completely random.

Does this seem odd to anyone else?


Uhm, yes it does.

Given the relationships between tables, I'd think you'd want to try and delete
max_rows for all tables, so archive 10 instances, 10 block_device_mapping, 10
pci_devices, etc.

I'm also bringing this up now because there is a thread in the operators
list which pointed me to a set of scripts that operators at GoDaddy are
using for archiving deleted rows:

http://lists.openstack.org/pipermail/openstack-operators/2015-October/008392.html

Presumably because the command in nova doesn't work. We should either
make this thing work or just punt and delete it because no one cares.


The db archive code in Nova just doesn't make much sense to me at all. 
The algorithm for purging stuff, like you mention above, does not take 
into account the relationships between tables; instead of diving into 
the children relations and archiving those first, the code just uses a 
simplistic "well, if we hit a foreign key error, just ignore and 
continue archiving other things, we will eventually repeat the call to 
delete this row" strategy:


https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L6021-L6023

I had a proposal [1] to completely rework the whole shadow table mess 
and db archiving functionality. I continue to believe that is the 
appropriate solution for this, and that we should rip out the existing 
functionality because it simply does not work properly.


Best,
-jay

[1] https://review.openstack.org/#/c/137669/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] PTL & Component Leads elections

2015-10-09 Thread Mike Scherbakov
Congratulations to Dmitry!
You now officially hold the PTL title.
It won't be easy, but we will support you!

118 contributors voted. Thanks everyone! Thank you Sergey for organizing
elections for us.

On Thu, Oct 8, 2015 at 3:52 PM Sergey Lukjanov 
wrote:

> Voting period ended and so we have an officially selected Fuel PTL - DB.
> Congrats!
>
> Poll results & details -
> http://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=1=E_b79041aa56684ec0
>
> Let's start proposing candidates for the component lead positions!
>
> On Wed, Sep 30, 2015 at 8:47 PM, Sergey Lukjanov 
> wrote:
>
>> Hi folks,
>>
>> I've just setup the voting system and you should start receiving email
>> with topic "Poll: Fuel PTL Elections Fall 2015".
>>
>> NOTE: Please, don't forward this email, it contains *personal* unique
>> token for the voting.
>>
>> Thanks.
>>
>> On Wed, Sep 30, 2015 at 3:28 AM, Vladimir Kuklin 
>> wrote:
>>
>>> +1 to Igor. Do we have voting system set up?
>>>
>>> On Wed, Sep 30, 2015 at 4:35 AM, Igor Kalnitsky wrote:
>>>
 > * September 29 - October 8: PTL elections

 So, it's in progress. Where I can vote? I didn't receive any emails.

 On Mon, Sep 28, 2015 at 7:31 PM, Tomasz Napierala
  wrote:
 >> On 18 Sep 2015, at 04:39, Sergey Lukjanov 
 wrote:
 >>
 >>
 >> Time line:
 >>
 >> PTL elections
 >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
 position
 >> * September 29 - October 8: PTL elections
 >
 > Just a reminder that we have a deadline for candidates today.
 >
 > Regards,
 > --
 > Tomasz 'Zen' Napierala
 > Product Engineering - Poland
 >
 >
 >
 >
 >
 >
 >
 >
 >
 __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> --
>>> Yours Faithfully,
>>> Vladimir Kuklin,
>>> Fuel Library Tech Lead,
>>> Mirantis, Inc.
>>> +7 (495) 640-49-04
>>> +7 (926) 702-39-68
>>> Skype kuklinvv
>>> 35bk3, Vorontsovskaya Str.
>>> Moscow, Russia,
>>> www.mirantis.com 
>>> www.mirantis.ru
>>> vkuk...@mirantis.com
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Scheduler proposal

2015-10-09 Thread Clint Byrum
Excerpts from Chris Friesen's message of 2015-10-08 23:52:41 -0700:
> On 10/08/2015 01:37 AM, Clint Byrum wrote:
> > Excerpts from Maish Saidel-Keesing's message of 2015-10-08 00:14:55 -0700:
> >> Forgive the top-post.
> >>
> >> Cross-posting to openstack-operators for their feedback as well.
> >>
> >> Ed the work seems very promising, and I am interested to see how this
> >> evolves.
> >>
> >> With my operator hat on I have one piece of feedback.
> >>
> >> By adding in a new Database solution (Cassandra) we are now up to three
> >> different database solutions in use in OpenStack
> >>
> >> MySQL (practically everything)
> >> MongoDB (Ceilometer)
> >> Cassandra.
> >>
> >> Not to mention two different message queues
> >> Kafka (Monasca)
> >> RabbitMQ (everything else)
> >>
> >> Operational overhead has a cost - maintaining 3 different database
> >> tools, backing them up, providing HA, etc. has operational cost.
> >>
> >> This is not to say that this cannot be overseen, but it should be taken
> >> into consideration.
> >>
> >> And *if* they can be consolidated into an agreed solution across the
> >> whole of OpenStack - that would be highly beneficial (IMHO).
> >>
> >
> > Just because they both say they're databases, doesn't mean they're even
> > remotely similar.
> 
> True, but the fact remains that it means operators (and developers) would 
> have 
> to become familiar with the quirks and problems of yet another piece of 
> technology.
> 

Indeed! And we can get really opinionated here now that we have some
experience, I think. Personally, I'd rather become familiar with the
quirks and problems of Cassandra than try to become familiar with the
quirks and problems of OpenStack's invented complex workarounds for
high-scale state management, like cells.

So I agree with the statement that the cost of adding a technology should
be weighed. However, the cost of inventing a workaround should be weighed
with the same scale. Complex workarounds will, in most cases, weigh much
more than adopting a well known and proven technology that is aimed at
what turns out to be a common problem set.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Recording little everyday OpenStack successes

2015-10-09 Thread Mike Spreitzer
Thierry Carrez  wrote on 10/09/2015 05:42:49 AM:

...
> So whenever you feel like you made progress, or had a little success in
> your OpenStack adventures, or have some joyful moment to share, just
> throw the following message on your local IRC channel:
> 
> #success [Your message here]
> 
> The openstackstatus bot will take that and record it on this wiki page:
> 
> https://wiki.openstack.org/wiki/Successes
> 
> We'll add a few of those every week to the weekly newsletter (as part of
> the developer digest that we reecently added there).
> 
> Caveats: Obviously that only works on channels where openstackstatus is
> present (the official OpenStack IRC channels), and we may remove entries
> that are off-topic or spam.
> 
> So... please use #success liberally and record lttle everyday OpenStack
> successes. Share the joy and make the OpenStack community a happy place.

Great.  I am about to contribute one myself.  Lucky I noticed this email. 
How will the word get out to those who did not?  How about a pointer to 
instructions on the Successes page?

Thanks,
Mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Backport policy for Liberty

2015-10-09 Thread Jastrzebski, Michal
Hello,

Since we have little actual logic, and Ansible itself is pretty pluggable by 
its very nature, backporting should be quite easy and would not affect existing 
deployments much. We will make sure that the stable/liberty code stays safe and 
keeps working at all times. I agree with Sam that we need careful CI for that, 
and it will be our first priority.

I would very much like to invite operators to our session regarding this 
policy, as they will be the most affected party and we want to make sure that 
they take part in the decision.

Regards,
Michał

From: Sam Yaple [mailto:sam...@yaple.net]
Sent: Friday, October 9, 2015 4:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Backport policy for Liberty

On Thu, Oct 8, 2015 at 2:47 PM, Steven Dake (stdake) wrote:
Kolla operators and developers,

The general consensus of the Core Reviewer team for Kolla is that we should 
embrace a liberal backport policy for the Liberty release.  An example of 
liberal -> We add a new server service to Ansible, we would backport the 
feature to liberty.  This is in breaking with the typical OpenStack backports 
policy.  It also creates a whole bunch more work and has potential to introduce 
regressions in the Liberty release.

Given these realities I want to put on hold any liberal backporting until after 
Summit.  I will schedule a fishbowl session for a backport policy discussion 
where we will decide as a community what type of backport policy we want.  The 
delivery required before we introduce any liberal backporting policy then 
should be a description of that backport policy discussion at Summit distilled 
into a RST file in our git repository.

If you have any questions, comments, or concerns, please chime in on the thread.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I am in favor of a very liberal backport policy. We have the potential to have 
very little code difference between N, N-1, and N-2 releases while still 
deploying the different versions of OpenStack. However, I recognize it is a big 
undertaking to backport all things, not to mention the testing involved.

I would like to see two things before we truly embrace a liberal policy. The 
first is better testing. A true gate that does upgrades and potentially 
multinode (at least from a network perspective). The second thing is a bot or 
automation of some kind to automatically propose non-conflicting patches to the 
stable branches if they include the 'backport: xyz' tag in the commit message. 
Cores would still need to confirm these changes with the normal review process 
and could easily abandon them, but that would remove a lot of the overhead of 
performing the actual backport.
Since Kolla simply deploys OpenStack, it is a lot closer to a client or a 
library than it is to Nova or Neutron. And given its mission maybe it should 
break from the "typical OpenStack backports policy" so we can give a consistent 
deployment experience across all stable and supported versions of OpenStack at 
any given time.
Those are my thoughts on the matter at least. I look forward to some 
conversations about this in Tokyo.
Sam Yaple

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Shamail


> On Oct 9, 2015, at 12:28 PM, Monty Taylor  wrote:
> 
>> On 10/09/2015 11:21 AM, Shamail wrote:
>> 
>> 
>>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
>>> 
>>> It looks like some great conversation got going on the service catalog
>>> standardization spec / discussion at the last cross project meeting.
>>> Sorry I wasn't there to participate.
>> Apologies if this is a question that has already been addressed, but why can't 
>> we just leverage something like consul.io?
> 
> It's a good question and there have actually been some discussions about 
> leveraging it on the backend. However, even if we did, we'd still need 
> keystone to provide the multi-tenancy view on the subject. consul wasn't 
> designed (quite correctly I think) to be a user-facing service for 50k users.
> 
> I think it would be an excellent backend.
Thanks, that makes sense.  I agree that it might be a good backend but not the 
overall solution... I was bringing it up to ensure we consider existing options 
(where possible) and spend cycles on the unsolved bits.

I am going to look into the scaling limitations for consul to educate myself.
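
For context, the sort of backend lookup being discussed is roughly the 
following. This is only a sketch against consul's HTTP catalog API: the 
/v1/catalog/service endpoint and the Address/ServiceAddress/ServicePort fields 
are documented consul behaviour, while the service names and URL shapes here 
are illustrative only.

    # Rough sketch: read service locations out of consul, the way a
    # keystone-style catalog backend might, before applying its own
    # per-project filtering on top.
    import requests

    CONSUL = 'http://127.0.0.1:8500'

    def endpoints_for(service_name):
        resp = requests.get('%s/v1/catalog/service/%s' % (CONSUL, service_name))
        resp.raise_for_status()
        for entry in resp.json():
            # ServiceAddress may be empty; fall back to the node address.
            host = entry.get('ServiceAddress') or entry['Address']
            yield 'http://%s:%s' % (host, entry['ServicePort'])

    for url in endpoints_for('compute'):
        print(url)
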
> 
>> 
>>> A lot of that ended up in here (which was an ether pad stevemar and I
>>> started working on the other day) -
>>> https://etherpad.openstack.org/p/mitaka-service-catalog which is great.
>> I didn't see anything immediately in the etherpad that couldn't be covered 
>> with the tool mentioned above.  It is open-source so we could always try to 
>> contribute there if we need something extra (written in golang though).
>>> 
>>> A couple of things that would make this more useful:
>>> 
>>> 1) if you are commenting, please tag your comments with your (ircnick). It's not easy
>>> to always track down folks later if the comment was not understood.
>>> 
>>> 2) please provide link to code when explaining a point. Github supports
>>> the ability to very nicely link to (and highlight) a range of code by a
>>> stable object ref. For instance -
>>> https://github.com/openstack/nova/blob/2dc2153c289c9d5d7e9827a4908b0ca61d87dabb/nova/context.py#L126-L132
>>> 
>>> That will make comments about X does Y, or Z can't do W, more clear
>>> because we'll all be looking at the same chunk of code and start to
>>> build more shared context here. One of the reasons this has been long
>>> and difficult is that we're missing a lot of that shared context between
>>> projects. Reassembling that by reading each other's relevant code will
>>> go a long way to understanding the whole picture.
>>> 
>>> 
>>> Lastly, I think it's pretty clear we probably need a dedicated workgroup
>>> meeting to keep this ball rolling, come to a reasonable plan that
>>> doesn't break any existing deployed code, but lets us get to a better
>>> world in a few cycles. annegentle, stevemar, and I have been pushing on
>>> that ball so far, however I'd like to know who else is willing to commit
>>> a chunk of time over this cycle to this. Once we know that we can try to
>>> figure out when a reasonable weekly meeting point would be.
>>> 
>>> Thanks,
>>> 
>>>-Sean
>>> 
>>> --
>>> Sean Dague
>>> http://dague.net
>>> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] service catalog: TNG

2015-10-09 Thread Clint Byrum
Excerpts from Adam Young's message of 2015-10-09 09:51:55 -0700:
> On 10/09/2015 12:28 PM, Monty Taylor wrote:
> > On 10/09/2015 11:21 AM, Shamail wrote:
> >>
> >>
> >>> On Oct 9, 2015, at 10:39 AM, Sean Dague  wrote:
> >>>
> >>> It looks like some great conversation got going on the service catalog
> >>> standardization spec / discussion at the last cross project meeting.
> >>> Sorry I wasn't there to participate.
> >>>
> >> Apologies if this is a question that has already been addressed, but why 
> >> can't we just leverage something like consul.io?
> >
> > It's a good question and there have actually been some discussions 
> > about leveraging it on the backend. However, even if we did, we'd 
> > still need keystone to provide the multi-tenancy view on the subject. 
> > consul wasn't designed (quite correctly I think) to be a user-facing 
> > service for 50k users.
> >
> > I think it would be an excellent backend.
> 
> The better question is, "Why are we not using DNS for the service catalog?"
> 

Agreed, we're using HTTP and JSON for what DNS is supposed to do.

As an aside, consul has a lovely DNS interface.

> Right now, we have the aspect of "project filtering of endpoints" which 
> means that a token does not need to have every endpoint for a specified 
> service.  If we were to use DNS, how would that map to the existing 
> functionality?
> 

There are a number of "how?" answers, but the "what?" question is the
more interesting one. As in, what is the actual point of this
functionality, and what do people want to do per-project?

I think what really ends up happening is that you serve 99.9% the same
catalog to the majority of projects, with a few getting back a
different endpoint or two. For that, it seems like you would need two
queries in the "discovery" phase:

SRV compute.myprojectid.region1.mycloud.com
SRV compute.region1.mycloud.com

Use the first one you get an answer for. Keystone would simply add
or remove entries for special project<->endpoint mappings. You don't
need Keystone to tell you what your project ID is, so you just make
these queries. When you get a negative answer, respect the TTL and stop
querying for it.
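
A minimal client-side sketch of that two-step lookup, using dnspython; the 
record names are just the ones in the example above, and nothing here is an 
existing OpenStack interface:

    # Sketch of the proposed per-project-then-region SRV fallback.
    import dns.resolver

    def discover(service, project_id, region, domain='mycloud.com'):
        names = [
            '%s.%s.%s.%s' % (service, project_id, region, domain),  # project override
            '%s.%s.%s' % (service, region, domain),                 # region default
        ]
        for name in names:
            try:
                answer = dns.resolver.resolve(name, 'SRV')
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
                continue  # negative answer: fall through to the wider record
            # Lowest priority wins; higher weight preferred within a priority.
            best = sorted(answer, key=lambda r: (r.priority, -r.weight))[0]
            return str(best.target).rstrip('.'), best.port
        raise LookupError('no SRV record found for %s' % service)

    host, port = discover('compute', 'myprojectid', 'region1')
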

Did I miss a use case with that?

> 
> Can we make better use of regions to help in endpoint filtering/selection?
> 
> Do we still need a query to Keystone to play arbiter if there are two 
> endpoints assigned for a specific use case to help determine which is 
> appropriate?
> 

I'd hope not. If the user is authorized then they should be able
to access the endpoint that they're assigned to. It's confusing to
me sometimes how keystone is thought of as an authorization service,
when it is named "Identity", and primarily performs authentication and
service discovery.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

