Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Sam Yaple
On Thu, Oct 19, 2017 at 11:23 PM, Gabriele Cerami wrote:

> On 19 Oct, Sam Yaple wrote:
> > So it seems tripleo is building *all* images and then pushing them.
> > Reworking your number leads me to believe you will be consuming 10-15GB in
> > total on Dockerhub. Kolla images are only the size that you posted when
> > built as separate services. Just keep building all the images at the same
> > time and you won't get anywhere near the numbers you posted.
>
> Makes sense, so considering the shared layers
> - a size of 10-15GB per build.
> - 4-6 builds rotated per release
> - 3-4 releases
>

- a size of 1-2GB per build
- 4-6 builds rotated per release
- 3-4 releases

At worst you are looking at 48GB, not 360GB. Don't worry so much there!
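For what it's worth, the corrected arithmetic checks out; a quick sketch using the figures quoted above:

```python
# Worst/best-case Dockerhub usage with shared layers, per the corrected
# numbers in this thread: 1-2GB per build, 4-6 builds per release,
# 3-4 releases kept.
size_per_build_gb = (1, 2)    # (best, worst)
builds_per_release = (4, 6)   # (best, worst)
releases = (3, 4)             # (best, worst)

best = size_per_build_gb[0] * builds_per_release[0] * releases[0]
worst = size_per_build_gb[1] * builds_per_release[1] * releases[1]
print(best, worst)  # 12 48
```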

>
> total size will be approximately 360GB in the worst case, and 120GB in
> the best case, which seems a bit more reasonable.
>
> Thanks for the clarifications
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Short gerrit / zuul outage 2017-10-20 20:00UTC

2017-10-19 Thread Ian Wienand

Hello,

We plan a short outage (<30 minutes) of gerrit and zuul on 2017-10-20
20:00UTC to facilitate project rename requests.

In flight jobs should be restarted, but if something does go missing a
"recheck" comment will work.

Thanks,

-i



Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Gabriele Cerami
On 19 Oct, Sam Yaple wrote:
> So it seems tripleo is building *all* images and then pushing them.
> Reworking your number leads me to believe you will be consuming 10-15GB in
> total on Dockerhub. Kolla images are only the size that you posted when
> built as separate services. Just keep building all the images at the same
> time and you won't get anywhere near the numbers you posted.

Makes sense, so considering the shared layers
- a size of 10-15GB per build.
- 4-6 builds rotated per release
- 3-4 releases

total size will be approximately 360GB in the worst case, and 120GB in
the best case, which seems a bit more reasonable.

Thanks for the clarifications



[openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] v2.0 API removal

2017-10-19 Thread Yaguang Tang
Should this kind of change be discussed and agreed on by both the TC
and the User Committee?

-- Forwarded message --
From: Lance Bragstad 
Date: Fri, Oct 20, 2017 at 12:08 AM
Subject: [Openstack-operators] [keystone][all] v2.0 API removal
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>, openstack-operat...@lists.openstack.org


Hey all,

Now that we're finishing up the last few bits of v2.0 removal, I'd like to
send out a reminder that *Queens* will not include the *v2.0 keystone APIs*
except the ec2-api. Authentication and validation of v2.0 tokens have been
removed (in addition to the public and admin APIs) after a lengthy
deprecation period.

Let us know if you have any questions.

Thanks!

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Tang Yaguang




Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Sam Yaple
On Thu, Oct 19, 2017 at 9:38 PM, Gabriele Cerami  wrote:

> On 19 Oct, Sam Yaple wrote:
> > docker_image wouldn't be the best place for that. But if you are looking
> > for a quicker solution, kolla_docker was written specifically to be
> > license compatible for openstack. Its structure should make it easily
> > adapted to delete an image. And you can copy it and cut it up thanks to
> > the license.
>
> Thanks, I'll look into it.
>
> > Are you pushing images with no shared base layers at all (300MB
> compressed
> > image is no shared base layers)? With shared base layers a full image set
> > of Kolla images should be much smaller than the numbers you posted.
>
> 300MB is the rounded size reported by the dockerhub UI,
> e.g. https://hub.docker.com/r/tripleopike/centos-binary-heat-api/
> shows 265MB for the newest tag. I'm not sure what size dockerhub is
> reporting.
>

This is misleading. For example, you will download 265MB if you download
only tripleopike/centos-binary-heat-api:current-tripleo . But if you
download both tripleopike/centos-binary-heat-api:current-tripleo and
tripleopike/centos-binary-heat-engine:current-tripleo you will have only
downloaded 266MB in total since the majority of those layers are shared.

So it seems tripleo is building *all* images and then pushing them.
Reworking your number leads me to believe you will be consuming 10-15GB in
total on Dockerhub. Kolla images are only the size that you posted when
built as separate services. Just keep building all the images at the same
time and you won't get anywhere near the numbers you posted.


> When pulling the image, docker downloads 30 layers. The final size
> reported locally is 815MB.
>

This is the uncompressed size, but even here layers are shared.
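The accounting Sam describes can be sketched as set arithmetic over layer digests (the digests and sizes below are made up for illustration, not the real layer breakdown of these images):

```python
# Each image is a set of (layer_digest, size_mb) pairs; pulling several
# images downloads the UNION of their layers, not the sum of image sizes.
heat_api = {("base", 250), ("openstack", 14), ("heat-api", 1)}
heat_engine = {("base", 250), ("openstack", 14), ("heat-engine", 2)}

def image_size(image):
    # Size reported for a single image: sum of all its layers.
    return sum(size for _, size in image)

def download_size(*images):
    layers = set().union(*images)  # shared layers counted once
    return sum(size for _, size in layers)

print(image_size(heat_api))                  # 265
print(download_size(heat_api, heat_engine))  # 267, not 265 + 266
```

Pulling both images downloads the union of their layers once, which is why the second image adds only a couple of megabytes on top of the first.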

>
> Thanks
>


Re: [openstack-dev] [keystone][all] v2.0 API removal

2017-10-19 Thread Yaguang Tang
Keystone is a project that all other OpenStack projects use, so
personally I think a change that removes such a widely used API should
be discussed at a TC meeting.

As far as I know, not all OpenStack projects support the keystone v3
model (domain, project, user) as well as keystone itself does; you can
check the policy.json of each project to verify. Most projects have no
domain-scoped role API.

I'd ask: how much effort do we need to maintain the keystone v2 API? Can
we just keep the code there?

On Fri, Oct 20, 2017 at 2:41 AM, Alex Schultz  wrote:

> On Thu, Oct 19, 2017 at 11:49 AM, Lance Bragstad 
> wrote:
> > Yeah - we specifically talked about this in a recent meeting [0]. We
> > will be more verbose about this in the future.
> >
>
> I'm glad to see a review of this. In reading the meeting logs, I
> understand it was well communicated that the api was going to go away
> at some point. Yes, we all knew it was coming, but the exact time of
> impact wasn't known outside of Keystone. Also, saying "oh it works in
> devstack" is not enough when you do something this major. So a "FYI,
> patches to remove v2.0 start landing next week (or today)" is more
> what would have been helpful for the devs who consume master. It
> dramatically shortens the time spent debugging failures if you have an
> idea of when something major changes, and then we don't have to go
> through git logs/gerrit to figure out what happened :)
>
> IMHO when large efforts that affect the usage of your service are
> going to start to land, it's good to send a note before landing those
> patches. Or at least at the same time. Anyway I hope other projects
> will also follow a similar pattern when they ultimately need to do
> something like this in the future.
>
> Thanks,
> -Alex
>
> >
> > [0]
> > http://eavesdrop.openstack.org/meetings/keystone/2017/
> keystone.2017-10-10-18.00.log.html#l-107
> >
> > On 10/19/2017 12:00 PM, Alex Schultz wrote:
> >> On Thu, Oct 19, 2017 at 10:08 AM, Lance Bragstad 
> wrote:
> >>> Hey all,
> >>>
> >>> Now that we're finishing up the last few bits of v2.0 removal, I'd
> like to
> >>> send out a reminder that Queens will not include the v2.0 keystone APIs
> >>> except the ec2-api. Authentication and validation of v2.0 tokens has
> been
> >>> removed (in addition to the public and admin APIs) after a lengthy
> >>> deprecation period.
> >>>
> >> In the future can we have a notice before the actual code removal
> >> starts?  We've been battling various places where we thought we had
> >> converted to v3 only to find out we hadn't correctly done so because
> >> it used to just 'work' and the only way we know now is that CI blew up.
> >> A heads up on the ML probably wouldn't have lessened the pain in this
> >> instance but at least we might have been able to pinpoint the exact
> >> problem quicker.
> >>
> >> Thanks,
> >> -Alex
> >>
> >>
> >>> Let us know if you have any questions.
> >>>
> >>> Thanks!
> >>>
> >>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Tang Yaguang


[openstack-dev] [nova] Sort order when filtering by changes-since

2017-10-19 Thread Matt Riedemann
Nova has two specs approved for Queens for filtering instance actions 
and migrations by the 'changes-since' filter, which filters on the 
updated_at field.


https://review.openstack.org/#/c/507762/

https://review.openstack.org/#/c/506030/

Those APIs don't take a sort key like the servers API does. Since the 
default sort keys for instances are (created_at, id), we said we'd do 
the same for instance actions and migrations:


https://github.com/openstack/nova/blob/8d21d711000fff80eb367692b157d09b6532923f/nova/db/sqlalchemy/api.py#L2515

When you filter instances using the changes-since filter, it applies to 
the updated_at field but the default sort key is still created_at (in 
descending order):


https://github.com/openstack/nova/blob/8d21d711000fff80eb367692b157d09b6532923f/nova/api/openstack/common.py#L142

Since we're not adding sorting ability to the instance actions and 
migrations APIs, we just said we'd use the same defaults as for listing 
instances.


During the spec review, Alex Xu asked why we don't sort by the
updated_at column when the changes-since filter is applied. The only
response is that this is not what we do by default when listing
instances with the changes-since filter, and we're being consistent
with those defaults.


So my question is, does anyone have a strong opinion on if we should 
default to sort by created_at but if changes-since is specified, we sort 
by updated_at? Does it matter?
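For illustration, a small sketch of how the two defaults order the same filtered rows (hypothetical data; only created_at/updated_at matter here):

```python
from datetime import datetime

# Hypothetical instance actions: an old row updated recently (id 1)
# and a newer row updated earlier (id 2).
rows = [
    {"id": 1, "created_at": datetime(2017, 10, 1),
     "updated_at": datetime(2017, 10, 19)},
    {"id": 2, "created_at": datetime(2017, 10, 15),
     "updated_at": datetime(2017, 10, 16)},
]
changes_since = datetime(2017, 10, 10)

# changes-since filters on updated_at...
matched = [r for r in rows if r["updated_at"] >= changes_since]

# ...but the current default sorts by created_at descending -> id 2 first.
by_created = sorted(matched, key=lambda r: r["created_at"], reverse=True)
# The alternative sorts by updated_at descending -> id 1 first.
by_updated = sorted(matched, key=lambda r: r["updated_at"], reverse=True)

print([r["id"] for r in by_created])  # [2, 1]
print([r["id"] for r in by_updated])  # [1, 2]
```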


Talk among yourselves.

--

Thanks,

Matt



Re: [openstack-dev] [release] Release countdown for week R-18, October 21-27

2017-10-19 Thread Kendall Nelson
Thanks for the election shout out Sean :)

-Kendall (diablo_rojo)

On Thu, Oct 19, 2017, 11:45 PM Sean McGinnis  wrote:

> At last, our regular release countdown email.
>
> Development Focus
> -
>
> We are now past the Queens-1 milestone. While there are still some Zuul job
> issues being worked through, all cycle-following projects should have
> posted a
> release request for the first milestone. If you have not yet, please do
> that as
> soon as possible and let me know if you have any issues preventing it. We
> will
> process those release requests as soon as we sort out the last of the
> publishing job issues, so just know that the final Queens-1 deliverables
> may
> not be available until the following week.
>
> It is also a good time to think about library and client releases.
> All
> projects with these deliverables should try to have a release done before
> Queens-2. This is not expected to be the final release for these libs for
> Queens, but it is good to get changes out there so we can discover issues
> before the end of the cycle.
>
> And another reminder to pay attention to the work being done in support of
> the
> Queens cycle goals [1].
>
> [1] https://governance.openstack.org/tc/goals/queens/index.html
>
> General Information
> ---
>
> TC elections (will have) ended at end of day UTC time on the 20th [2]. If
> you
> are reading this before then and have not voted, please take a moment and
> make
> sure your voice is heard.
>
> [2] https://governance.openstack.org/election/
>
> If you have not received your election link email, please make sure to
> check
> your junk folders as it appears many mail providers are unfortunately
> classifying these as spam.
>
> Upcoming Deadlines & Dates
> --
>
> Forum at OpenStack Summit in Sydney: November 6-8
> Queens-2 Milestone: December 7
>
> --
> Sean McGinnis (smcginnis)
>
>


Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Gabriele Cerami
On 19 Oct, Sam Yaple wrote:
> docker_image wouldn't be the best place for that. But if you are looking
> for a quicker solution, kolla_docker was written specifically to be license
> compatible for openstack. Its structure should make it easily adapted to
> delete an image. And you can copy it and cut it up thanks to the license.

Thanks, I'll look into it.
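For reference, a rough stdlib-only sketch of deleting a tag through the Docker Hub account API. The hub.docker.com endpoints and the JWT login flow here are assumptions based on common usage, not the registry API linked earlier, so verify them before relying on this:

```python
import json
import urllib.request

HUB = "https://hub.docker.com/v2"

def tag_url(namespace, repo, tag):
    # URL layout assumed from the public Docker Hub UI/API; verify first.
    return "%s/repositories/%s/%s/tags/%s/" % (HUB, namespace, repo, tag)

def delete_tag(namespace, repo, tag, username, password):
    # Log in to obtain a JWT token, then DELETE the tag (assumed endpoints).
    login = urllib.request.Request(
        HUB + "/users/login/",
        data=json.dumps({"username": username, "password": password}).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(login) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        tag_url(namespace, repo, tag),
        headers={"Authorization": "JWT " + token},
        method="DELETE")
    urllib.request.urlopen(req)

# The tag name here is hypothetical.
print(tag_url("tripleopike", "centos-binary-heat-api", "pike-20171019"))
```

Deleting by manifest digest via the registry API linked in the thread is a different mechanism; this sketch targets the Hub's repository/tag API instead.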

> Are you pushing images with no shared base layers at all (300MB compressed
> image is no shared base layers)? With shared base layers a full image set
> of Kolla images should be much smaller than the numbers you posted.

300MB is the rounded size reported by the dockerhub UI,
e.g. https://hub.docker.com/r/tripleopike/centos-binary-heat-api/
shows 265MB for the newest tag. I'm not sure what size dockerhub is
reporting.

When pulling the image, docker downloads 30 layers. The final size
reported locally is 815MB.

Thanks



[openstack-dev] [release] Release countdown for week R-18, October 21-27

2017-10-19 Thread Sean McGinnis
At last, our regular release countdown email.

Development Focus
-

We are now past the Queens-1 milestone. While there are still some Zuul job
issues being worked through, all cycle-following projects should have posted a
release request for the first milestone. If you have not yet, please do that as
soon as possible and let me know if you have any issues preventing it. We will
process those release requests as soon as we sort out the last of the
publishing job issues, so just know that the final Queens-1 deliverables may
not be available until the following week.

It is also a good time to think about library and client releases. All
projects with these deliverables should try to have a release done before
Queens-2. This is not expected to be the final release for these libs for
Queens, but it is good to get changes out there so we can discover issues
before the end of the cycle.

And another reminder to pay attention to the work being done in support of the
Queens cycle goals [1].

[1] https://governance.openstack.org/tc/goals/queens/index.html

General Information
---

TC elections (will have) ended at end of day UTC time on the 20th [2]. If you
are reading this before then and have not voted, please take a moment and make
sure your voice is heard. 

[2] https://governance.openstack.org/election/

If you have not received your election link email, please make sure to check
your junk folders as it appears many mail providers are unfortunately
classifying these as spam.

Upcoming Deadlines & Dates
--

Forum at OpenStack Summit in Sydney: November 6-8
Queens-2 Milestone: December 7

-- 
Sean McGinnis (smcginnis)



Re: [openstack-dev] OpenStack Dev Digest is Open to Volunteers!

2017-10-19 Thread Mike Perez
On 19:06 Oct 17, Mike Perez wrote:
> Hey all,
> 
> The OpenStack Dev Digest has been receiving great feedback from various
> members of our community as a good resource for important summaries of
> threads they might be interested in responding to and/or being informed on.
> 
> Currently the Dev Digest gets posted by me on the OpenStack blog [1] weekly
> when I can, gets posted on the dev list, the operators lists, OpenStack
> twitter, and LWN [2].
> 
> Summarizing everything can be a lot of work. I recently read the User Group
> Newsletter [3] by Sonia Ramza and noticed the content is created by the
> community via an etherpad.
> 
> I would like to do the same with the Dev Digest and have started a new 
> etherpad
> [4]. I will still be writing the Dev Digest and acting as editor, hoping to
> lean more on the community for content I might've missed and getting
> corrections.
> 
> For now, the cut-off each week is every Friday at 19:00 UTC. Thank
> you!
> 
> [1] - https://www.openstack.org/blog
> [2] - https://lwn.net
> [3] - 
> https://www.openstack.org/blog/2017/10/user-group-newsletter-september-2017/
> [4] - https://etherpad.openstack.org/p/devdigest

Reminder: the cut-off is tomorrow at 19:00 UTC. Thanks Fungi for writing on
"Time To Remove the Ceilometer API"!

-- 
Mike Perez




[openstack-dev] [TripleO] Deployment workflow changes for ui/client

2017-10-19 Thread James Slagle
I've been looking at how we can hook up the deployment changes for
config-download[1] with the existing deployment workflows in Mistral.

However, it seems we have not sufficiently abstracted the logic to do
a "deployment" behind a given workflow(s). The existing things a
client (or UI) has to do is:

- call tripleo.deployment.v1.deploy_plan
- poll for success/failure of that workflow
- poll for success/failure of in progress Heat stack (list events, etc)
- call tripleo.deployment.v1.overcloudrc
(probably more things too)
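Whichever workflow split wins, the "poll for success/failure" steps above reduce to a generic loop that could live in one place; a sketch (the fetch_state callable is hypothetical and would wrap the Mistral execution / Heat stack queries):

```python
import time

def wait_for(fetch_state, done="SUCCESS", failed=("ERROR", "FAILED"),
             interval=0.0, timeout=10.0, clock=time.monotonic,
             sleep=time.sleep):
    """Poll fetch_state() until it reports success, failure, or timeout."""
    deadline = clock() + timeout
    while clock() < deadline:
        state = fetch_state()
        if state == done:
            return state
        if state in failed:
            raise RuntimeError("operation failed in state %s" % state)
        sleep(interval)
    raise TimeoutError("gave up waiting")

# Fake state source standing in for a Mistral execution query.
states = iter(["RUNNING", "RUNNING", "SUCCESS"])
print(wait_for(lambda: next(states)))  # SUCCESS
```

With the polling factored out like this, the client only needs the execution id and a terminal state, so the workflow internals can change without touching python-tripleoclient or tripleo-ui.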

If I want to make some changes to the deployment workflow, such that
after the Heat stack operation is complete, we run a config-download
action/workflow, then apply the generated ansible via
ansible-playbook, I can't really do that without requiring all clients
to also get updated to use those new steps (via calling new workflows,
etc).

As a first attempt, I took a shot at creating a workflow that does every step:
https://review.openstack.org/#/c/512876/
But even that will require client changes as it necessitates a
behavior change in that the workflow has to wait for the stack to be
complete as opposed to returning as soon as the stack operation is
accepted by Heat.

I'd like to implement this in a way that minimizes the impact of
changes on both python-tripleoclient and tripleo-ui, but it's looking
as if some changes would be required to use this new ansible driven
approach.

Thoughts or feedback on how to proceed? I guess I'm also wondering
if the existing API exposed by the workflows is easy to consume by the
UI, or if it would be better wrapped in a single workflow... at
least that way we could make logical implementation changes without
requiring ui/client changes.

[1] https://blueprints.launchpad.net/tripleo/+spec/ansible-config-download

-- 
-- James Slagle
--



Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Joshua Harlow

Cool thanks,

I'll have to watch for this and see how it goes.

I thought there were some specific things that changed every time a
dockerfile was rendered (which would make it different each run), but I
may have been seeing things (or it's been fixed).


Michał Jastrzębski wrote:

On 19 October 2017 at 13:32, Joshua Harlow  wrote:

This reminded me of something I wanted to ask.

Is it true to state that only way to get 'fully' shared-base layers is to
have `kolla-build` build all the projects (that a person/company/other may
use) in one invocation? (in part because of the jinja2 template generation
which would cause differences in dockerfiles?...)


Well, jinja2 should render the same dockerfile no matter when you call it,
so it should be fine. Alternatively you can run something like
kolla-build nova --skip-parents - this call will try to build all
images with "nova" in them while not rebuilding the openstack-base and
base images.


I was pretty sure this was the case (unless things have changed), but just
wanting to check since that question seems (somewhat) on-topic...

At godaddy we build individual projects using `kolla-build` (in part because
it makes it easy to rebuild + test + deply a single project with either an
update or a patch or ...) and I suspect others are doing this also (after
all the kolla-build command does take a regex of projects to build) - though
doing it in this way does seem like it would not reuse (all the layers
outside of the base operating system) layers 'optimally'?

Thoughts?

-Josh

Sam Yaple wrote:

docker_image wouldn't be the best place for that. But if you are looking
for a quicker solution, kolla_docker was written specifically to be
license compatible for openstack. Its structure should make it easily
adapted to delete an image. And you can copy it and cut it up thanks to
the license.

Are you pushing images with no shared base layers at all (300MB
compressed image is no shared base layers)? With shared base layers a
full image set of Kolla images should be much smaller than the numbers
you posted.

Thanks,
SamYaple

On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami wrote:

 Hi,

 our CI scripts are now automatically building, testing and pushing
 approved openstack/RDO services images to public repositories in
 dockerhub using ansible docker_image module.

 Promotions have had some hiccups, but we're starting to regularly
upload
 new images every 4 hours.

 When we'll get at full speed, we'll potentially have
 - 3-4 different sets of images, one per release of openstack (counting
   an EOL release grace period)
 - 90-100 different services images per release
 - 4-6 different versions of the same image ( keeping older promoted
images for a while )

 At around 300MB per image a possible grand total is around 650GB of
 space used.

 We don't know if this is acceptable usage of dockerhub space and for
 this we already sent a similar email to docker support to ask
 specifically if the user would be penalized in any way (e.g. enforcing
 quotas, rate limiting, blocking). We're still waiting for a reply.

 In any case it's critical to keep the usage around the estimate, and
to
 achieve this we need a way to automatically delete the older images.
 docker_image module does not provide this functionality, and we think
 the only way is issuing direct calls to dockerhub API

 https://docs.docker.com/registry/spec/api/#deleting-an-image
 

 docker_image module structure doesn't seem to encourage the addition
of
 such functionality directly in it, so we may be forced to use the uri
 module.
 With new images uploaded potentially every 4 hours, this will become a
 problem to be solved within the next two weeks.

 We'd appreciate any input for an existing, in progress and/or better
 solution for bulk deletion, and issues that may arise with our space
 usage in dockerhub

 Thanks







Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Michał Jastrzębski
On 19 October 2017 at 13:37, Michał Jastrzębski  wrote:
> On 19 October 2017 at 13:32, Joshua Harlow  wrote:
>> This reminded me of something I wanted to ask.
>>
>> Is it true to state that only way to get 'fully' shared-base layers is to
>> have `kolla-build` build all the projects (that a person/company/other may
>> use) in one invocation? (in part because of the jinja2 template generation
>> which would cause differences in dockerfiles?...)
>
> Well, jinja2 should render the same dockerfile no matter when you call it,
> so it should be fine. Alternatively you can run something like
> kolla-build nova --skip-parents - this call will try to build all
> images with "nova" in them while not rebuilding the openstack-base and
> base images.
>
>> I was pretty sure this was the case (unless things have changed), but just
>> wanting to check since that question seems (somewhat) on-topic...
>>
>> At godaddy we build individual projects using `kolla-build` (in part because
>> it makes it easy to rebuild + test + deploy a single project with either an
>> update or a patch or ...) and I suspect others are doing this also (after
>> all the kolla-build command does take a regex of projects to build) - though
>> doing it in this way does seem like it would not reuse (all the layers
>> outside of the base operating system) layers 'optimally'?
>>
>> Thoughts?
>>
>> -Josh
>>
>> Sam Yaple wrote:
>>>
>>> docker_image wouldn't be the best place for that. But if you are looking
>>> for a quicker solution, kolla_docker was written specifically to be
>>> license compatible for openstack. Its structure should make it easily
>>> adapted to delete an image. And you can copy it and cut it up thanks to
>>> the license.
>>>
>>> Are you pushing images with no shared base layers at all (300MB
>>> compressed image is no shared base layers)? With shared base layers a
>>> full image set of Kolla images should be much smaller than the numbers
>>> you posted.
>>>
>>> Thanks,
>>> SamYaple
>>>
>>> On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami wrote:
>>>
>>> Hi,
>>>
>>> our CI scripts are now automatically building, testing and pushing
>>> approved openstack/RDO services images to public repositories in
>>> dockerhub using ansible docker_image module.
>>>
>>> Promotions have had some hiccups, but we're starting to regularly
>>> upload
>>> new images every 4 hours.
>>>
>>> When we'll get at full speed, we'll potentially have
>>> - 3-4 different sets of images, one per release of openstack (counting
>>> an EOL release grace period)
>>> - 90-100 different services images per release
>>> - 4-6 different versions of the same image ( keeping older promoted
>>>images for a while )
>>>
>>> At around 300MB per image a possible grand total is around 650GB of
>>> space used.

That doesn't sound correct, as images share a lot - a full registry of a
single type/distro (like centos source) is ~10GB

>>> We don't know if this is acceptable usage of dockerhub space and for
>>> this we already sent a similar email to docker support to ask
>>> specifically if the user would be penalized in any way (e.g. enforcing
>>> quotas, rate limiting, blocking). We're still waiting for a reply.
>>>
>>> In any case it's critical to keep the usage around the estimate, and
>>> to
>>> achieve this we need a way to automatically delete the older images.
>>> docker_image module does not provide this functionality, and we think
>>> the only way is issuing direct calls to dockerhub API
>>>
>>> https://docs.docker.com/registry/spec/api/#deleting-an-image
>>> 
>>>
>>> docker_image module structure doesn't seem to encourage the addition
>>> of
>>> such functionality directly in it, so we may be forced to use the uri
>>> module.
>>> With new images uploaded potentially every 4 hours, this will become a
>>> problem to be solved within the next two weeks.
>>>
>>> We'd appreciate any input for an existing, in progress and/or better
>>> solution for bulk deletion, and issues that may arise with our space
>>> usage in dockerhub
>>>
>>> Thanks
>>>
>>>
>>>
>>>

Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Michał Jastrzębski
On 19 October 2017 at 13:32, Joshua Harlow  wrote:
> This reminded me of something I wanted to ask.
>
> Is it true to state that only way to get 'fully' shared-base layers is to
> have `kolla-build` build all the projects (that a person/company/other may
> use) in one invocation? (in part because of the jinja2 template generation
> which would cause differences in dockerfiles?...)

Well, jinja2 should render the same dockerfile no matter when you call it,
so it should be fine. Alternatively you can run something like
kolla-build nova --skip-parents - this call will try to build all
images with "nova" in them while not rebuilding the openstack-base and
base images.

> I was pretty sure this was the case (unless things have changed), but just
> wanting to check since that question seems (somewhat) on-topic...
>
> At godaddy we build individual projects using `kolla-build` (in part because
> it makes it easy to rebuild + test + deploy a single project with either an
> update or a patch or ...) and I suspect others are doing this also (after
> all the kolla-build command does take a regex of projects to build) - though
> doing it in this way does seem like it would not reuse (all the layers
> outside of the base operating system) layers 'optimally'?
>
> Thoughts?
>
> -Josh
>
> Sam Yaple wrote:
>>
>> docker_image wouldn't be the best place for that. But if you are looking
>> for a quicker solution, kolla_docker was written specifically to be
>> license compatible for openstack. Its structure should make it easily
>> adapted to delete an image. And you can copy it and cut it up thanks to
>> the license.
>>
>> Are you pushing images with no shared base layers at all (300MB
>> compressed image is no shared base layers)? With shared base layers a
>> full image set of Kolla images should be much smaller than the numbers
>> you posted.
>>
>> Thanks,
>> SamYaple
>>
>> On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami wrote:
>>
>> Hi,
>>
>> our CI scripts are now automatically building, testing and pushing
>> approved openstack/RDO services images to public repositories in
>> dockerhub using ansible docker_image module.
>>
>> Promotions have had some hiccups, but we're starting to regularly
>> upload
>> new images every 4 hours.
>>
>> When we get up to full speed, we'll potentially have
>> - 3-4 different sets of images, one per release of openstack (counting
>> a
>>EOL release grace period)
>> - 90-100 different services images per release
>> - 4-6 different versions of the same image ( keeping older promoted
>>images for a while )
>>
>> At around 300MB per image a possible grand total is around 650GB of
>> space used.
>>
>> We don't know if this is acceptable usage of dockerhub space and for
>> this we already sent a similar email to the docker support to ask
>> specifically if the user would get penalized in any way (e.g. enforcing
>> quotas, rate limiting, blocking). We're still waiting for a reply.
>>
>> In any case it's critical to keep the usage around the estimate, and
>> to
>> achieve this we need a way to automatically delete the older images.
>> docker_image module does not provide this functionality, and we think
>> the only way is issuing direct calls to dockerhub API
>>
>> https://docs.docker.com/registry/spec/api/#deleting-an-image
>> 
>>
>> docker_image module structure doesn't seem to encourage the addition
>> of
>> such functionality directly in it, so we may be forced to use the uri
>> module.
>> With new images uploaded potentially every 4 hours, this will become a
>> problem to be solved within the next two weeks.
>>
>> We'd appreciate any input for an existing, in progress and/or better
>> solution for bulk deletion, and issues that may arise with our space
>> usage in dockerhub
>>
>> Thanks
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>
>

Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Joshua Harlow

This reminded me of something I wanted to ask.

Is it true to state that only way to get 'fully' shared-base layers is 
to have `kolla-build` build all the projects (that a 
person/company/other may use) in one invocation? (in part because of the 
jinja2 template generation which would cause differences in dockerfiles?...)


I was pretty sure this was the case (unless things have changed), but 
just wanting to check since that question seems (somewhat) on-topic...


At godaddy we build individual projects using `kolla-build` (in part 
because it makes it easy to rebuild + test + deploy a single project with 
either an update or a patch or ...) and I suspect others are doing this 
also (after all the kolla-build command does take a regex of projects to 
build) - though doing it in this way does seem like it would not reuse 
(all the layers outside of the base operating system) layers 'optimally'?


Thoughts?

-Josh

Sam Yaple wrote:

docker_image wouldn't be the best place for that. But if you are looking
for a quicker solution, kolla_docker was written specifically to be
license-compatible for OpenStack. Its structure should make it easily
adapted to delete an image. And you can copy it and cut it up thanks to
the license.

Are you pushing images with no shared base layers at all (300MB
compressed image is no shared base layers)? With shared base layers a
full image set of Kolla images should be much smaller than the numbers
you posted.

Thanks,
SamYaple

On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami wrote:

Hi,

our CI scripts are now automatically building, testing and pushing
approved openstack/RDO services images to public repositories in
dockerhub using ansible docker_image module.

Promotions have had some hiccups, but we're starting to regularly upload
new images every 4 hours.

When we get up to full speed, we'll potentially have
- 3-4 different sets of images, one per release of openstack (counting a
   EOL release grace period)
- 90-100 different services images per release
- 4-6 different versions of the same image ( keeping older promoted
   images for a while )

At around 300MB per image a possible grand total is around 650GB of
space used.

We don't know if this is acceptable usage of dockerhub space and for
this we already sent a similar email to the docker support to ask
specifically if the user would get penalized in any way (e.g. enforcing
quotas, rate limiting, blocking). We're still waiting for a reply.

In any case it's critical to keep the usage around the estimate, and to
achieve this we need a way to automatically delete the older images.
docker_image module does not provide this functionality, and we think
the only way is issuing direct calls to dockerhub API

https://docs.docker.com/registry/spec/api/#deleting-an-image


docker_image module structure doesn't seem to encourage the addition of
such functionality directly in it, so we may be forced to use the uri
module.
With new images uploaded potentially every 4 hours, this will become a
problem to be solved within the next two weeks.

We'd appreciate any input for an existing, in progress and/or better
solution for bulk deletion, and issues that may arise with our space
usage in dockerhub

Thanks



[openstack-dev] [tripleo] CI Status & Where to find your patch's status in CI

2017-10-19 Thread Alex Schultz
Hey Folks,

So the gate queue is quite backed up due to various reasons. If your
patch has been approved but you're uncertain of the CI status please,
please, please check the dashboard[0] before doing anything.  Do not
rebase or recheck things currently in a queue somewhere. When you
rebase a patch that's in the gate queue it will reset every job behind
it and restart the jobs for that change.

I've noticed that due to various restarts we did lose some comments on
things that are actually in the gate but there was no update in
gerrit. So please take some time and check out the dashboard if you are
not certain if it's currently being checked.

Thanks,
-Alex


[0] http://zuulv3.openstack.org/



Re: [openstack-dev] [Release-job-failures][release][neutron][powervm] Pre-release of openstack/networking-powervm failed

2017-10-19 Thread Doug Hellmann
We are still working on fixing up the release jobs to work after the
zuulv3 migration. Please do not tag releases until we announce that it
is OK.

Excerpts from zuul's message of 2017-10-19 20:04:07 +:
> Build failed.
> 
> - trigger-readthedocs 
> http://logs.openstack.org/cb/cb10dca78baed3ce3da8475d3a7c518049a0c662/pre-release/trigger-readthedocs/940d68d/
>  : FAILURE in 25s
> - release-openstack-python 
> http://logs.openstack.org/cb/cb10dca78baed3ce3da8475d3a7c518049a0c662/pre-release/release-openstack-python/52932a6/
>  : FAILURE in 3m 45s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
> 



Re: [openstack-dev] [keystone][all] v2.0 API removal

2017-10-19 Thread Alex Schultz
On Thu, Oct 19, 2017 at 11:49 AM, Lance Bragstad  wrote:
> Yeah - we specifically talked about this in a recent meeting [0]. We
> will be more verbose about this in the future.
>

I'm glad to see a review of this. In reading the meeting logs, I
understand it was well communicated that the api was going to go away
at some point. Yes we all knew it was coming, but the exact time of
impact wasn't known outside of Keystone. Also, saying "oh, it works in
devstack" is not enough when you do something this major. So an "FYI,
patches to remove v2.0 start landing next week (or today)" is more like
what would have been helpful for the devs who consume master. It
dramatically shortens the time spent debugging failures if you have an
idea about when something major changes and then we don't have to go
through git logs/gerrit to figure out what happened :)

IMHO when large efforts that affect the usage of your service are
going to start to land, it's good to send a note before landing those
patches. Or at least at the same time. Anyway I hope other projects
will also follow a similar pattern when they ultimately need to do
something like this in the future.

Thanks,
-Alex

>
> [0]
> http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-10-10-18.00.log.html#l-107
>
> On 10/19/2017 12:00 PM, Alex Schultz wrote:
>> On Thu, Oct 19, 2017 at 10:08 AM, Lance Bragstad  wrote:
>>> Hey all,
>>>
>>> Now that we're finishing up the last few bits of v2.0 removal, I'd like to
>>> send out a reminder that Queens will not include the v2.0 keystone APIs
>>> except the ec2-api. Authentication and validation of v2.0 tokens has been
>>> removed (in addition to the public and admin APIs) after a lengthy
>>> deprecation period.
>>>
>> In the future can we have a notice before the actual code removal
>> starts?  We've been battling various places where we thought we had
>> converted to v3 only to find out we hadn't correctly done so because
>> it used to just 'work' and the only way we know now is that CI blew up.
>> A heads up on the ML probably wouldn't have lessened the pain in this
>> instance but at least we might have been able to pinpoint the exact
>> problem quicker.
>>
>> Thanks,
>> -Alex
>>
>>
>>> Let us know if you have any questions.
>>>
>>> Thanks!
>>>
>>>
>
>



Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Sam Yaple
docker_image wouldn't be the best place for that. But if you are looking
for a quicker solution, kolla_docker was written specifically to be
license-compatible for OpenStack. Its structure should make it easily adapted to
delete an image. And you can copy it and cut it up thanks to the license.

Are you pushing images with no shared base layers at all (300MB compressed
image is no shared base layers)? With shared base layers a full image set
of Kolla images should be much smaller than the numbers you posted.

Thanks,
SamYaple

On Thu, Oct 19, 2017 at 11:03 AM, Gabriele Cerami wrote:

> Hi,
>
> our CI scripts are now automatically building, testing and pushing
> approved openstack/RDO services images to public repositories in
> dockerhub using ansible docker_image module.
>
> Promotions have had some hiccups, but we're starting to regularly upload
> new images every 4 hours.
>
> When we get up to full speed, we'll potentially have
> - 3-4 different sets of images, one per release of openstack (counting a
>   EOL release grace period)
> - 90-100 different services images per release
> - 4-6 different versions of the same image ( keeping older promoted
>   images for a while )
>
> At around 300MB per image a possible grand total is around 650GB of
> space used.
>
> We don't know if this is acceptable usage of dockerhub space and for
> this we already sent a similar email to the docker support to ask
> specifically if the user would get penalized in any way (e.g. enforcing
> quotas, rate limiting, blocking). We're still waiting for a reply.
>
> In any case it's critical to keep the usage around the estimate, and to
> achieve this we need a way to automatically delete the older images.
> docker_image module does not provide this functionality, and we think
> the only way is issuing direct calls to dockerhub API
>
> https://docs.docker.com/registry/spec/api/#deleting-an-image
>
> docker_image module structure doesn't seem to encourage the addition of
> such functionality directly in it, so we may be forced to use the uri
> module.
> With new images uploaded potentially every 4 hours, this will become a
> problem to be solved within the next two weeks.
>
> We'd appreciate any input for an existing, in progress and/or better
> solution for bulk deletion, and issues that may arise with our space
> usage in dockerhub
>
> Thanks
>


Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Fox, Kevin M
For kolla, we were thinking about a couple of optimizations that should greatly 
reduce the space.

1. only upload to the hub based on stable versions. The updates are much less 
frequent.
2. fingerprint the containers. Base it on the rpm/deb list, pip list, and git 
checksums. If the fingerprint is the same, don't reupload a container: nothing 
really changed but some trivial files or timestamps on files.
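The fingerprinting idea above can be sketched in a few lines. This is a hypothetical helper, not Kolla code; the package-list and checksum inputs are placeholders:

```python
# Hash the sorted package/checksum lists so that rebuilds that change only
# timestamps or trivial files produce the same fingerprint and the upload
# can be skipped.
import hashlib

def container_fingerprint(rpm_list, pip_list, git_checksums):
    h = hashlib.sha256()
    # Sort each list so that ordering differences between builds don't
    # change the fingerprint.
    for item in sorted(rpm_list) + sorted(pip_list) + sorted(git_checksums):
        h.update(item.encode("utf-8"))
    return h.hexdigest()

a = container_fingerprint(["nova-17.0.0-1"], ["pbr==3.1.1"], ["abc123"])
b = container_fingerprint(["nova-17.0.0-1"], ["pbr==3.1.1"], ["abc123"])
assert a == b  # identical content -> identical fingerprint -> skip the push
```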

Also, remember the apparent size of a container is not the same as the actual 
size. Due to layering, the actual size is often significantly smaller than what 
shows up in 'docker images'. For example, this 
http://tarballs.openstack.org/kolla-kubernetes/gate/containers/centos-binary-ceph.tar.bz2
 is only 1.2G and contains all the containers needed for a compute kit 
deployment.

For trunk based builds, it may still be a good idea to only mirror those to 
tarballs.o.o or a openstack provided docker repo that infra has been discussing?

Thanks,
Kevin

From: Gabriele Cerami [gcer...@redhat.com]
Sent: Thursday, October 19, 2017 8:03 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [TripleO][Kolla] Concerns about containers images  
in DockerHub

Hi,

our CI scripts are now automatically building, testing and pushing
approved openstack/RDO services images to public repositories in
dockerhub using ansible docker_image module.

Promotions have had some hiccups, but we're starting to regularly upload
new images every 4 hours.

When we get up to full speed, we'll potentially have
- 3-4 different sets of images, one per release of openstack (counting a
  EOL release grace period)
- 90-100 different services images per release
- 4-6 different versions of the same image ( keeping older promoted
  images for a while )

At around 300MB per image a possible grand total is around 650GB of
space used.

We don't know if this is acceptable usage of dockerhub space and for
this we already sent a similar email to the docker support to ask
specifically if the user would get penalized in any way (e.g. enforcing
quotas, rate limiting, blocking). We're still waiting for a reply.

In any case it's critical to keep the usage around the estimate, and to
achieve this we need a way to automatically delete the older images.
docker_image module does not provide this functionality, and we think
the only way is issuing direct calls to dockerhub API

https://docs.docker.com/registry/spec/api/#deleting-an-image

docker_image module structure doesn't seem to encourage the addition of
such functionality directly in it, so we may be forced to use the uri
module.
With new images uploaded potentially every 4 hours, this will become a
problem to be solved within the next two weeks.

We'd appreciate any input for an existing, in progress and/or better
solution for bulk deletion, and issues that may arise with our space
usage in dockerhub

Thanks



[openstack-dev] [Congress] proposed change to IRC meeting time

2017-10-19 Thread Eric K
Hi all,

Here is a proposal (no actual change until further notice) to move the
weekly Congress team meeting from Thursdays 00:00 UTC to Fridays 02:30 UTC
in order to make the meeting time more bearable for India while still
being workable for East Asia and the Americas. The time remains very bad
for Europe and Africa (if there is interest, we can also set up, for some
weeks, a meeting time that works better for Europe and Africa; please let
us know!).

Please express your comments and suggestions here. Thanks!

-Eric Kao





Re: [openstack-dev] [keystone][all] v2.0 API removal

2017-10-19 Thread Lance Bragstad
Yeah - we specifically talked about this in a recent meeting [0]. We
will be more verbose about this in the future.


[0]
http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-10-10-18.00.log.html#l-107

On 10/19/2017 12:00 PM, Alex Schultz wrote:
> On Thu, Oct 19, 2017 at 10:08 AM, Lance Bragstad  wrote:
>> Hey all,
>>
>> Now that we're finishing up the last few bits of v2.0 removal, I'd like to
>> send out a reminder that Queens will not include the v2.0 keystone APIs
>> except the ec2-api. Authentication and validation of v2.0 tokens has been
>> removed (in addition to the public and admin APIs) after a lengthy
>> deprecation period.
>>
> In the future can we have a notice before the actual code removal
> starts?  We've been battling various places where we thought we had
> converted to v3 only to find out we hadn't correctly done so because
> it used to just 'work' and the only way we know now is that CI blew up.
> A heads up on the ML probably wouldn't have lessened the pain in this
> instance but at least we might have been able to pinpoint the exact
> problem quicker.
>
> Thanks,
> -Alex
>
>
>> Let us know if you have any questions.
>>
>> Thanks!
>>
>>






[openstack-dev] [all][api] POST /api-sig/news

2017-10-19 Thread Ed Leafe
Greetings OpenStack community,

If you were hoping for startling news, well, you're going to be disappointed. 
We did, however, have a perfectly enjoyable meeting today.

A question had been raised about creating a guideline for 'changes-since' 
filtering in an API [5], and we debated whether it was needed or not. Consensus 
is that while it isn't of critical importance, it is something that we should 
add to the guidelines on filtering to help achieve consistency across APIs. 
edleafe volunteered to tackle that when he has some free time. The mention of 
"free time" brought much hilarity to everyone at the meeting.

* There's been some discussion of recording an outreach video at summit. 
edleafe sent an email to the openstack-sigs mailing list [7], but so far there 
has been no reply. If there is still no response in a few days, a reminder 
email will be sent.

elmiko posted an email to the openstack-sigs mailing list [7] to help find out 
if there is any interest in an alternate time for holding the API-SIG IRC 
meeting in order to accommodate APAC contributors. As this was just sent, there 
is no response yet. We also expressed the idea that email is probably better 
for communication across time zones than alternating meetings.

# Newly Published Guidelines

* Updates for rename to SIG
  https://review.openstack.org/#/c/508242/
  While this isn't technically a guideline, it is an important step in our 
transition from a WG to a SIG.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the 
OpenStack developer mailing list[1] with the tag "[api]" in the subject. In 
your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] 
http://p.anticdent.org/logs/openstack-nova?dated=2017-10-16%2022:04:06.467386#15H4
[6] http://lists.openstack.org/pipermail/openstack-sigs/
[7] http://lists.openstack.org/pipermail/openstack-sigs/2017-October/000127.html

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe








Re: [openstack-dev] [keystone][all] v2.0 API removal

2017-10-19 Thread Alex Schultz
On Thu, Oct 19, 2017 at 10:08 AM, Lance Bragstad  wrote:
> Hey all,
>
> Now that we're finishing up the last few bits of v2.0 removal, I'd like to
> send out a reminder that Queens will not include the v2.0 keystone APIs
> except the ec2-api. Authentication and validation of v2.0 tokens has been
> removed (in addition to the public and admin APIs) after a lengthy
> deprecation period.
>

In the future can we have a notice before the actual code removal
starts?  We've been battling various places where we thought we had
converted to v3 only to find out we hadn't correctly done so because
it used to just 'work' and the only way we know now is that CI blew up.
A heads up on the ML probably wouldn't have lessened the pain in this
instance but at least we might have been able to pinpoint the exact
problem quicker.

Thanks,
-Alex


> Let us know if you have any questions.
>
> Thanks!
>
>


[openstack-dev] [keystone][all] v2.0 API removal

2017-10-19 Thread Lance Bragstad
Hey all,

Now that we're finishing up the last few bits of v2.0 removal, I'd like
to send out a reminder that *Queens* will not include the *v2.0 keystone
APIs* except the ec2-api. Authentication and validation of v2.0 tokens
has been removed (in addition to the public and admin APIs) after a
lengthy deprecation period.

Let us know if you have any questions.

Thanks!





Re: [openstack-dev] [nova][ironic] Concerns over rigid resource class-only ironic scheduling

2017-10-19 Thread John Garbutt
On 19 October 2017 at 15:38, Jay Pipes  wrote:

> On 10/16/2017 05:31 AM, Nisha Agarwal wrote:
>
>> Hi Matt,
>>
>> As I understand John's spec https://review.openstack.org/#/c/507052/, it is
>> actually a replacement for capabilities (qualitative only) for ironic. It
>> doesn't cover the
>> quantitative capabilities as 'traits' are meant only for qualitative
>> capabilities. Quantitative capabilities are covered by resource classes in
>> Nova. We have a few (one or two) quantitative capabilities already supported
>> in ironic.
>>
>
> Hi Nisha,
>
> This may be a case of mixed terminology. We do not refer to anything
> quantitative as a "capability". Rather, we use the term "resource class"
> (or sometimes just "resource") to represent quantitative things that may be
> consumed by the instance.
>
> Traits, on the other hand, are qualitative. They represent a binary on/off
> capability that the compute host (or baremetal node in the case of Ironic)
> exposes.
>
> There's no limit on the number of traits that may be associated with a
> particular Ironic baremetal node. However, for Ironic baremetal nodes, if
> the node.resource_class attribute is set, the Nova Ironic virt driver will
> create a single inventory record for the node containing a quantity of 1
> and a resource class equal to whatever is in the node.resource_class
> attribute. This resource class is auto-created by Nova as a custom resource
> class.
>

Just to follow up on this one...

I hope my traits spec will replace the need for the non-exact filters.

Consider two flavors, Gold and Gold_Plus. Let's say Gold_Plus gives you a
slightly newer CPU, or something.

Consider this setup:

* both GOLD and GOLD_PLUS ironic nodes have resource class CUSTOM_GOLD
* but some nodes have the trait GOLD_REGULAR and some GOLD_PLUS

Now you can have the flavors:

* GOLD flavor requests resources:CUSTOM_GOLD=1
* GOLD_PLUS flavor also has resources:CUSTOM_GOLD=1 but also
trait:GOLD_PLUS:requires

Now eventually we could modify the GOLD flavor to say:

* resources:CUSTOM_GOLD=1 and trait:GOLD_REGULAR:prefer

@Nisha I think that should largely allow you to construct the same behavior
you have today, or am I totally missing what you are wanting to do?
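The matching behaviour in that setup can be sketched as a toy filter. To be clear, this is illustrative only: the dict shapes and the matcher are invented here, not Nova code, and the "prefer" weighting is omitted:

```python
# A node matches a flavor if it provides the requested resource class and
# carries every trait the flavor requires.

def node_matches(node, flavor):
    return (node["resource_class"] == flavor["resource_class"]
            and flavor["required_traits"] <= node["traits"])

gold = {"resource_class": "CUSTOM_GOLD", "required_traits": set()}
gold_plus = {"resource_class": "CUSTOM_GOLD",
             "required_traits": {"GOLD_PLUS"}}

regular_node = {"resource_class": "CUSTOM_GOLD", "traits": {"GOLD_REGULAR"}}
plus_node = {"resource_class": "CUSTOM_GOLD", "traits": {"GOLD_PLUS"}}

# Both node kinds satisfy the plain GOLD flavor, but only the node carrying
# the GOLD_PLUS trait satisfies the GOLD_PLUS flavor.
assert node_matches(regular_node, gold) and node_matches(plus_node, gold)
assert node_matches(plus_node, gold_plus)
assert not node_matches(regular_node, gold_plus)
```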

Thanks,
John


[openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Gabriele Cerami
Hi,

our CI scripts are now automatically building, testing and pushing
approved openstack/RDO services images to public repositories in
dockerhub using ansible docker_image module.

Promotions have had some hiccups, but we're starting to regularly upload
new images every 4 hours.

When we get up to full speed, we'll potentially have
- 3-4 different sets of images, one per release of openstack (counting a
  EOL release grace period)
- 90-100 different services images per release
- 4-6 different versions of the same image ( keeping older promoted
  images for a while )

At around 300MB per image a possible grand total is around 650GB of
space used.
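For reference, the worst case above follows from the quoted figures with simple arithmetic, assuming no layer sharing between images at all:

```python
# Reproducing the back-of-envelope estimate; all figures are the ones quoted
# above, and layer sharing between images is deliberately ignored.
releases = 4             # image sets kept, one per OpenStack release
images_per_release = 90  # distinct service images
versions_kept = 6        # promoted versions rotated per release
size_per_image_mb = 300  # ~300 MB per image

worst_case_gb = (releases * images_per_release * versions_kept
                 * size_per_image_mb) // 1000
print(worst_case_gb)  # -> 648, i.e. "around 650GB"
```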

We don't know if this is acceptable usage of dockerhub space and for
this we already sent a similar email to the docker support to ask
specifically if the user would get penalized in any way (e.g. enforcing
quotas, rate limiting, blocking). We're still waiting for a reply.

In any case it's critical to keep the usage around the estimate, and to
achieve this we need a way to automatically delete the older images.
docker_image module does not provide this functionality, and we think
the only way is issuing direct calls to dockerhub API

https://docs.docker.com/registry/spec/api/#deleting-an-image

docker_image module structure doesn't seem to encourage the addition of
such functionality directly in it, so we may be forced to use the uri
module.
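A sketch of what such a direct deletion call could look like in plain Python. Note the endpoint here is the hub.docker.com tag-deletion URL used by Docker Hub's own web UI, not the registry manifest-deletion API linked above; the namespace, repository, tag, and token are placeholders, and all of it should be verified against current Docker documentation before relying on it:

```python
# Build (but do not send) a DELETE request for one Docker Hub image tag.
import urllib.request

HUB = "https://hub.docker.com/v2"

def delete_tag_request(namespace, repo, tag, jwt_token):
    url = "%s/repositories/%s/%s/tags/%s/" % (HUB, namespace, repo, tag)
    req = urllib.request.Request(url, method="DELETE")
    # Docker Hub expects a JWT obtained from its login endpoint.
    req.add_header("Authorization", "JWT %s" % jwt_token)
    return req

req = delete_tag_request("tripleomaster", "centos-binary-nova-api",
                         "old-promoted-hash", "dummy-token")
print(req.get_method(), req.full_url)
# To actually delete: urllib.request.urlopen(req), with a real token --
# again, verify the endpoints against current Docker documentation first.
```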
With new images uploaded potentially every 4 hours, this will become a
problem to be solved within the next two weeks.

We'd appreciate any input for an existing, in progress and/or better
solution for bulk deletion, and issues that may arise with our space
usage in dockerhub

Thanks



Re: [openstack-dev] [nova][ironic] Concerns over rigid resource class-only ironic scheduling

2017-10-19 Thread Jay Pipes

On 10/16/2017 05:31 AM, Nisha Agarwal wrote:

Hi Matt,

As I understand John's spec https://review.openstack.org/#/c/507052/, it is
actually a replacement for capabilities (qualitative only) for ironic. It
doesn't cover the 
quantitative capabilities as 'traits' are meant only for qualitative 
capabilities. Quantitative capabilities are covered by resource classes 
in Nova. We have a few (one or two) quantitative capabilities already 
supported in ironic.


Hi Nisha,

This may be a case of mixed terminology. We do not refer to anything 
quantitative as a "capability". Rather, we use the term "resource class" 
(or sometimes just "resource") to represent quantitative things that may 
be consumed by the instance.


Traits, on the other hand, are qualitative. They represent a binary 
on/off capability that the compute host (or baremetal node in the case 
of Ironic) exposes.


There's no limit on the number of traits that may be associated with a 
particular Ironic baremetal node. However, for Ironic baremetal nodes, 
if the node.resource_class attribute is set, the Nova Ironic virt driver 
will create a single inventory record for the node containing a quantity 
of 1 and a resource class equal to whatever is in the 
node.resource_class attribute. This resource class is auto-created by 
Nova as a custom resource class.
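As an illustration of that auto-creation, the way node.resource_class is turned into a custom resource class with an inventory of 1 can be sketched roughly like this (a simplified stand-in for the real Nova code; the normalization rule shown is our reading of Nova's behaviour, not the actual implementation):

```python
import re


def custom_resource_class(node_resource_class):
    """Normalize an Ironic node.resource_class into the CUSTOM_* name used
    in placement: uppercase, non-alphanumerics replaced by underscores
    (simplified sketch of Nova's behaviour)."""
    norm = re.sub(r"[^A-Z0-9]", "_", node_resource_class.upper())
    return "CUSTOM_" + norm


def node_inventory(node_resource_class):
    """A baremetal node exposes exactly one unit of its custom class."""
    return {custom_resource_class(node_resource_class): {
        "total": 1, "min_unit": 1, "max_unit": 1, "step_size": 1}}
```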


Best,
-jay



[openstack-dev] [Neutron][L3-subteam] Weekly IRC meeting cancelled on October 19th

2017-10-19 Thread Miguel Lavalle
Dear L3 sub-team Neutrinos,

Due to unexpected agenda conflicts of several of the team members, we will
cancel today's meeting. We will resume normally on the 26th

Regards

Miguel


Re: [openstack-dev] [all][tc] TC Candidates: what does an OpenStack user look like?

2017-10-19 Thread Adam Lawson
"Right, so we all agree that what we *don't* want is TC candidates saying
"I'm here to represent the interests of user community X against those of
evil user community Y", all of the X users voting for X candidates and not
Y candidates, and then the elected X members voting to block anything that
only benefits Y, and vice-versa."

Interestingly (and without wanting to derail the overall subject), this
is precisely what is done by certain individuals seeking a seat on the
board of directors. And the funny thing is that while the board of
directors is not about representing one bloc of geography, campaigning on
that issue is very effective. The tactic is gratuitous, but I guess some
people highly prioritize board membership as an achievement rather than a
service to the community.

/soapbox

//adam

On Oct 18, 2017 11:11 AM, "Zane Bitter"  wrote:

> On 17/10/17 14:16, Doug Hellmann wrote:
>
>> Excerpts from Zane Bitter's message of 2017-10-16 18:10:20 -0400:
>>
>>> On 14/10/17 11:47, Doug Hellmann wrote:
>>>
 Even the rewritten question can be answered
 legitimately using several different personas by people with a bit
 of experience.  I have worked at a public cloud provider and two
 distributors with a wide range of customers, and I use OpenStack
 clouds myself. I hope that all of that background feeds into my
 contributions.

>>>
>>> Yes, that's great. I think most people would agree that there's a
>>> threshold somewhere between 'several' and 'infinity' beyond which we've
>>> crossed over into platitudes though.
>>>
>>
>> Maybe. On the other hand, TC members aren't elected to represent
>> specific constituencies, so there's something to be said for taking each
>> case as it comes and considering the users impacted by that case.
>>
>
> Right, so we all agree that what we *don't* want is TC candidates saying
> "I'm here to represent the interests of user community X against those of
> evil user community Y", all of the X users voting for X candidates and not
> Y candidates, and then the elected X members voting to block anything that
> only benefits Y, and vice-versa. Obviously every step of that process is an
> unmitigated disaster.
>
> So of course each TC member should consider all of the users impacted
> by any decision on a case-by-case basis. However, even if we're only
> thinking purely about reactive decision-making, it's still often not easy
> to know *which* users are impacted by any particular decision unless you
> have someone in the room who has a deep familiarity with that use case.
> That's why I'd like to see candidates saying something like "I spend a lot
> of time thinking about user community X and if anything came up that
> affected their use cases I'm pretty sure I'd spot it". So that I can vote
> to optimise the diversity of Xs represented, where X might be e.g. web
> developers, devops teams, scientific researchers, vSphere migrants,
> multi-cloud users, NFV, the next Facebook/Twitter/Snapchat/Netflix,
> mobile app or IoT backend developers, kubernetes users, or something I
> haven't even thought of.
>
> Possible tangent: I've always enjoyed this article (about the Sapir-Whorf
> hypothesis): http://www.nytimes.com/2010/08/29/magazine/29language-t.html
> tl;dr Anybody can think about anything, regardless of the language they
> speak (i.e. Sapir-Whorf is wrong). But there are things in every language
> that you can't *not* think about, and they're different for different
> languages.
>
> I want to maximise the set of things the TC, collectively, can't not think
> about.
>
>>> Suffice to say that nobody should take my example here as anything more
>>> than a rationale for the importance of user-centred design.
>>>
>>
>> How much "design" do you think the TC is doing as a governance group?
>>
>
> It varies between different levels of abstraction.
>
> At the code level, none.
>
> At the level of setting the broad technical direction of the project, not
> as much as I'd like. But y'all did pass
> https://governance.openstack.org/tc/resolutions/20170317-cloud-applications-mission.html
> for me
> (thanks!) so definitely not nothing. There are other less-directly-relevant
> examples like adding etcd to the list of base services too.
>
> At the level of deciding what projects OpenStack consists of, and
> therefore what sort of cloud you can build with it (that is to say, what
> you can _use_ it for)... that's _entirely_ within the TC's purview.
>
> At an even higher level of abstraction, deciding what OpenStack is and
> what the Foundation is for, the TC has at least a significant role in
> giving input to the board and delegated authority to make decisions in some
> areas. Notably, discussions at this level often occur face-to-face at
> TC-only events, or at board meetings where non-members aren't entitled to
> speak, and which few people can and even fewer people do attend. (I've
> given up a few Sunday afternoons before OpenStack Summits 

[openstack-dev] [keystoneauth] [osc] [ironic] Usage of none loader in the CLI

2017-10-19 Thread Vladyslav Drok
Hi!

I'd like to discuss the usage of the new noauth plugin to keystoneauth,
which was introduced in [1]. The docstring of the loader says it is
intended to be used during adapter initialization along with
endpoint_override. But what about the CLI usage in the OpenStack client? I
was trying to make the none loader work with baremetal plugin, as part of
testing [2], and encountered some problems, which are hacked around in [3].

So, here are some questions:

1. Was it intended to be used in CLI at all, or should we still use the
token_endpoint?
2. If it was intended, should we:
2.1. do the hacks as in [3]?
2.2. introduce endpoint as an option for the none loader, making it a
bit similar to token_endpoint with the token hardcoded (and also add a
get_endpoint method to the auth plugin, I think)?
2.3. leave it as-is, allowing the usage of none loader only by
specifying the parameters in the clouds.yaml, as in [4] for example?

[1] https://review.openstack.org/469863
[2] https://review.openstack.org/359061
[3] https://review.openstack.org/512699
[4]
https://github.com/openstack/bifrost/blob/21ca45937a9cb36c6f04073182bf2edea8acbd5d/playbooks/roles/bifrost-keystone-client-config/templates/clouds.yaml.j2#L17-L19
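For reference, the clouds.yaml shape I mean in 2.3 looks roughly like this (field names modelled on the bifrost example in [4]; treat the exact keys and the endpoint value as assumptions for illustration):

```yaml
clouds:
  baremetal:
    auth_type: none
    endpoint: https://ironic.example.com:6385/
```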

Thanks,
Vlad


[openstack-dev] [nova][neutron] Use neutron's new port binding API

2017-10-19 Thread Mooney, Sean K
Hi Matt,
You're not online currently, so I thought I would respond to your question
regarding the workflow via email.
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-10-18.log.html#t2017-10-18T20:29:02

mriedem  1. conductor asks scheduler for a host
mriedem  2. scheduler filter looks for ports with a qos policy and,
if found, gets allocation candidates for hosts that have a nested bw
provider
mriedem  3. scheduler returns host to conductor
mriedem  4. conductor binds the port to the host
mriedem  5. the bound port profile has some allocation juju that
nova proxies to placement as an allocation request for the port on the bw
provider
mriedem  6. conductor sends to compute to build the instance
mriedem  7. compute activates the bound port
mriedem  8. compute plugs vifs
mriedem  9. profit?!


So my ideal workflow would be:


1.   conductor calls allocate_for_instance
https://github.com/openstack/nova/blob/1b45b530448c45598b62e783bdd567480a8eb433/nova/network/neutronv2/api.py#L814
in schedule_and_build_instances
https://github.com/openstack/nova/blob/fce56ce8c04b20174cd89dfbc2c06f0068324b55/nova/conductor/manager.py#L1002
before calling self._schedule_instances. This gets or creates all neutron ports
for the instance before we call the scheduler.

2.   conductor asks the scheduler for a host by calling
self._schedule_instances, passing in the network_info object.

3.   scheduler extracts placement requests from the network_info object and
adds them to the list it sends to placement.

4.   scheduler applies standard filters to placement candidates.

5.   scheduler returns a host to the conductor after weighing.

6.   conductor binds the port to the host.

a.   If it fails, retry early on the next host in the candidate set.

b.   Continue until port binding succeeds, the retry limit is reached, or
candidates are exhausted.

7.   conductor creates allocations for the host against all resource
providers.

a.   When the port is bound, neutron will populate the resource request for
bandwidth with the neutron agent uuid, which will be the resource provider
uuid to allocate from.

8.   conductor sends to compute to build the instance, passing the
allocations.

9.   compute plugs vifs.

10.   compute activates the bound port, setting the allocation uuid on the port
for all resource classes requested by neutron.

11.   excess of income over expenditure? :)

The important thing to note is that nova receives all requests for network
resources from neutron in the port objects created at step 1.
Nova learns the backend resource provider for neutron at step 6, before it
makes allocations.
Nova then passes the allocations that were made back to neutron when it
activates the port.

We have nova make the allocations for all resources to prevent any races
between the conductor and neutron when updating the same nested resource
provider tree (this was Jay's concern).
Neutron will create the inventories for bandwidth, but nova will allocate
from them.
The intent is for nova to not need to know what the resources it is claiming
are, but instead to be able to accept a set of additional resources to claim
from neutron in a generic workflow, which we can hopefully reuse for other
projects like cinder or cyborg in the future.

Regards, Sean.



Re: [openstack-dev] [ptls] Sydney Forum Project Onboarding Rooms

2017-10-19 Thread Kendall Nelson
Added Ansible to my list with the three of you as speakers :)

-Kendall (diablo_rojo)

On Tue, Oct 17, 2017 at 7:28 AM Jean-Philippe Evrard <
jean-phili...@evrard.me> wrote:

> Hello,
>
> I'd be happy to have a room for OpenStack-Ansible.
>
> I'll be there, and probably more ppl, like Kevin Carter(cloudnull) and
> Amy Marrich(spotz).
>
> Thanks!
>
> On 17 October 2017 at 03:39, Jeffrey Zhang 
> wrote:
> > I am the speaker. Michal couldn't be Sydney this summit.
> >
> > On Tue, Oct 17, 2017 at 1:05 AM, Kendall Nelson 
> > wrote:
> >>
> >> Added Kolla to my list. Would the speakers be you and Michal?
> >>
> >> -Kendall (diablo_rojo)
> >>
> >>
> >> On Thu, Oct 12, 2017 at 5:51 PM Jeffrey Zhang 
> >> wrote:
> >>>
> >>> Hi Kendall,
> >>>
> >>> Kolla project would like to have an on-boarding session too.
> >>>
> >>> thanks.
> >>>
> >>> On Fri, Oct 13, 2017 at 5:58 AM, Kendall Nelson  >
> >>> wrote:
> 
>  Added Nova to my list with Dan, Melanie, and Ed as speakers.
> 
>  Thanks Matt,
>  -Kendall (diablo_rojo)
> 
>  On Thu, Oct 12, 2017 at 2:43 PM Matt Riedemann 
>  wrote:
> >
> > On 10/9/2017 4:24 PM, Kendall Nelson wrote:
> > > Wanted to keep this thread towards the top of inboxes for those I
> > > haven't heard from yet.
> > >
> > > About a 1/4 of the way booked, so there are still slots available!
> > >
> > > -Kendall (diablo_rojo)
> >
> > I've tricked the following people into running a Nova on-boarding
> room:
> >
> > - "Super" Dan Smith 
> > - Melanie "Structured Settlement" Witt 
> > - Ed "Alternate Hosts" Leafe 
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
>  Unsubscribe:
>  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> >>>
> >>>
> >>>
> >>> --
> >>> Regards,
> >>> Jeffrey Zhang
> >>> Blog: http://xcodest.me
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > --
> > Regards,
> > Jeffrey Zhang
> > Blog: http://xcodest.me
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [ptls] Sydney Forum Project Onboarding Rooms

2017-10-19 Thread Kendall Nelson
Added Neutron to my list.

-Kendall (diablo_rojo)

On Tue, Oct 17, 2017 at 7:28 AM Miguel Lavalle  wrote:

> Neutron - Miguel Lavalle - mig...@mlavalle.com
>
> On Mon, Oct 16, 2017 at 9:39 PM, Jeffrey Zhang 
> wrote:
>
>> I am the speaker. Michal couldn't be Sydney this summit.
>>
>> On Tue, Oct 17, 2017 at 1:05 AM, Kendall Nelson 
>> wrote:
>>
>>> Added Kolla to my list. Would the speakers be you and Michal?
>>>
>>> -Kendall (diablo_rojo)
>>>
>>>
>>> On Thu, Oct 12, 2017 at 5:51 PM Jeffrey Zhang 
>>> wrote:
>>>
 Hi Kendall,

 Kolla project would like to have an on-boarding session too.

 thanks.

 On Fri, Oct 13, 2017 at 5:58 AM, Kendall Nelson 
 wrote:

> Added Nova to my list with Dan, Melanie, and Ed as speakers.
>
> Thanks Matt,
> -Kendall (diablo_rojo)
>
> On Thu, Oct 12, 2017 at 2:43 PM Matt Riedemann 
> wrote:
>
>> On 10/9/2017 4:24 PM, Kendall Nelson wrote:
>> > Wanted to keep this thread towards the top of inboxes for those I
>> > haven't heard from yet.
>> >
>> > About a 1/4 of the way booked, so there are still slots available!
>> >
>> > -Kendall (diablo_rojo)
>>
>> I've tricked the following people into running a Nova on-boarding
>> room:
>>
>> - "Super" Dan Smith 
>> - Melanie "Structured Settlement" Witt 
>> - Ed "Alternate Hosts" Leafe 
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 --
 Regards,
 Jeffrey Zhang
 Blog: http://xcodest.me

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [oslo][mistral] Mistral expressions package

2017-10-19 Thread Renat Akhmerov
Ok, thanks for your input.

I’d personally say it’s not really worth it, but if Bob wants to do it, that’s ok.

Thanks

Renat Akhmerov
@Nokia

On 19 Oct 2017, 12:46 +0700, ChangBo Guo , wrote:
> The dependencies of the Mistral expressions package make it hard for it to be
> adopted as a module of an Oslo library; we need Oslo libraries to stay simple.
> We have an adoption process [1], which is not merged yet, to help guide the
> adoption process if we agree. I agree with Doug that we can discuss this in
> the Oslo weekly meeting. [2]
>
>
> [1] https://review.openstack.org/312233
> [2] http://eavesdrop.openstack.org/#Oslo_Team_Meeting
>
> > 2017-10-18 12:46 GMT+08:00 Renat Akhmerov :
> > > Hi,
> > >
> > > I’m not too happy about the idea of creating one more subproject within 
> > > Mistral. I don’t even see now what else this new library project managed 
> > > by the Mistral team will contain besides this expression utils module. I’m 
> > > also not sure about its name. We already have mistral-lib which was 
> > > created for a different purpose (public APIs for making Mistral 
> > > extensions like actions and YAQL/Jinja functions).
> > >
> > > Just to clarify: the code we’re talking about is really small and stable 
> > > (we haven’t touched it for a while, it just works), and it’s generic so 
> > > it can be reused in many situations by many projects. That’s why we had 
> > > an idea to find a place within one of the Oslo libraries, just to make 
> > > one more package (or even module), for example, in oslo.utils. As far as 
> > > maintaining this code, we could still do that. But anyway, if that’s not 
> > > OK, I’d just suggest we leave it as it is. If this code needs to be 
> > > reused somewhere else outside the OpenStack space (like in Bob’s case), maybe 
> > > it’s just simpler to create a project on github?
> > >
> > > Thanks
> > >
> > > Renat Akhmerov
> > > @Nokia
> > >
> > > On 10 Oct 2017, 22:11 +0700, Doug Hellmann , wrote:
> > > > Excerpts from HADDLETON, Robert W (Bob)'s message of 2017-10-09 
> > > > 19:41:58 -0500:
> > > > > On 10/9/2017 2:35 PM, Doug Hellmann wrote:
> > > > > > Excerpts from Bob Haddleton's message of 2017-10-09 11:35:16 -0500:
> > > > > > > Hello Oslo team:
> > > > > > >
> > > > > > > The Mistral project has an expressions package [0] that is used to
> > > > > > > evaluate inline expressions using a context. It has a pluggable
> > > > > > > architecture that presently supports Jinja and YAQL expression
> > > > > > > evaluation. It also allows custom functions[1] to provide Python
> > > > > > > implementations of functionality that is then made available to 
> > > > > > > the
> > > > > > > expression evaluation engines.
> > > > > > >
> > > > > > > This functionality was originally developed to support dynamic
> > > > > > > processing within Mistral workflows, but is also very useful in 
> > > > > > > other
> > > > > > > applications that use templates which require runtime evaluation 
> > > > > > > of
> > > > > > > expressions.
> > > > > > >
> > > > > > > I'd like to explore extracting this functionality from mistral to 
> > > > > > > make
> > > > > > > it more widely available with minimal dependencies.
> > > > > > >
> > > > > > > The expressions dependencies are pretty limited:
> > > > > > >
> > > > > > > Jinja2
> > > > > > > oslo.utils
> > > > > > > oslo.log
> > > > > > > stevedore
> > > > > > > yaql
> > > > > > >
> > > > > > > and since 60% are already oslo-maintained packages, it seemed 
> > > > > > > like a
> > > > > > > logical place to start.
> > > > > > >
> > > > > > > I'd appreciate feedback on the topic. There is no real OpenStack
> > > > > > > dependency in the functionality, so maybe a standalone package on 
> > > > > > > pypi
> > > > > > > makes sense.
> > > > > > >
> > > > > > > Thanks for your help,
> > > > > > >
> > > > > > > Bob Haddleton
> > > > > > >
> > > > > > >
> > > > > > > [0] 
> > > > > > > https://github.com/openstack/mistral/tree/master/mistral/expressions
> > > > > > > [1]
> > > > > > > https://github.com/openstack/mistral/blob/master/mistral/utils/expression_utils.py#L63
> > > > > > >
> > > > > > Oslo is a good place for things like this that have no other obvious
> > > > > > home, but if the Mistral team is already managing the code is there 
> > > > > > any
> > > > > > reason they couldn't also manage the library after you pull it out 
> > > > > > of
> > > > > > the service? It's much easier for any project team to manage a 
> > > > > > library
> > > > > > now, and we have several other examples of that pattern already.
> > > > > >
> > > > > > Doug
> > > > > >
> > > > > > __
> > > > > > OpenStack Development Mailing List (not for usage questions)
> > > > > > Unsubscribe: 
> > > > > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > > > Hi Doug: