[openstack-dev] [cyborg] New time for Cyborg weekly IRC meetings

2018-11-26 Thread Nadathur, Sundar

Hi,
 The current time for the weekly Cyborg IRC meeting is 1400 UTC,
which is 6 am Pacific and 10 pm China time. That is a bad time for most
people on the call.


Please vote in the Doodle poll [1] for the time you prefer.

If you need more options, please respond in this thread.

[1] https://doodle.com/poll/eqy3hp8hfqtf2qyn


Thanks & Regards,

Sundar




Re: [openstack-dev] [Octavia] Multinode setup

2018-11-26 Thread Michael Johnson
At the moment that is all we have for a setup guide.

That said, all of the Octavia controller processes are fully HA-capable.
The one setting I can think of is the controller_ip_port_list option
mentioned above. It will need to contain an entry for each health
manager's IP/port, as Sa Pham mentioned.
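
As a concrete illustration, a minimal octavia.conf sketch (the addresses
are placeholders, assuming three controller nodes; 5555 is the default
health manager port):

    [health_manager]
    bind_ip = 192.0.2.10
    bind_port = 5555
    controller_ip_port_list = 192.0.2.10:5555, 192.0.2.11:5555, 192.0.2.12:5555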

You will also want to load balance connections across your API
instances. Load balancing for the other processes is built into the
design, so they do not need an external load balancer.
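
If you don't already have a load balancer in front of the API, a minimal
haproxy sketch could look like this (addresses are placeholders; 9876 is
the default Octavia API port):

    frontend octavia_api
        bind 192.0.2.100:9876
        default_backend octavia_api_nodes

    backend octavia_api_nodes
        balance roundrobin
        server api1 192.0.2.10:9876 check
        server api2 192.0.2.11:9876 check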

Michael
On Mon, Nov 26, 2018 at 5:59 AM Sa Pham  wrote:
>
> Hi,
>
> The controller_ip_port_list option is a list of the IP/port pairs of all
> nodes on which octavia-health-manager is deployed.
>
>
>
> On Nov 26, 2018, at 8:41 PM, Anna Taraday  wrote:
>
> Hello everyone!
>
> I'm looking into how to run Octavia services (controller worker, housekeeper,
> health manager) on several network nodes, and I got confused by the setup
> guide [1].
> Is there a special config option for such a case? (controller_ip_port_list,
> probably)
> What manuals/docs/examples do we have besides [2]?
>
> [1] - 
> https://docs.openstack.org/octavia/queens/contributor/guides/dev-quick-start.html
> [2] -
> https://github.com/openstack/octavia/blob/stable/queens/devstack/samples/multinode/local-2.conf
> --
> Regards,
> Ann Taraday
>
>
> Sa Pham Dang
> Cloud RnD Team - VCCloud / VCCorp
> Phone: 0986.849.582
> Skype: great_bn
>



Re: [openstack-dev] [glance] about use shared image with each other

2018-11-26 Thread Brian Rosmaita
On 11/21/18 7:16 AM, Rambo wrote:
> Yes, but I also have a question: do we have a quota limit on requests to
> share an image? For example, if someone shares images with me non-stop,
> how do we deal with that?

Given that the producer-consumer notifications are not handled by
Glance, this is not a problem.  (Or, to be more precise, not a problem
for Glance.)  A producer can share an image with you multiple times, but
since the producer cannot change your member-status, it will remain in
'pending' (or 'rejected' if you've already rejected it).  So there is no
quota necessary for this operation.
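
For reference, the whole flow works with regular user credentials; a sketch
using python-openstackclient (the IDs are placeholders):

    # Producer: share the image with the consumer's project
    openstack image add project $IMAGE_ID $CONSUMER_PROJECT_ID

    # Consumer: the membership stays 'pending' until the consumer acts
    openstack image set --accept $IMAGE_ID    # or --reject

Only the consumer can flip the member status, which is why repeated shares
from a producer cannot spam the consumer's image list.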

>  
> -- Original --
> *From: * "Brian Rosmaita";
> *Date: * Mon, Nov 19, 2018 10:26 PM
> *To: * "OpenStack Development Mailing List";
> *Subject: * Re: [openstack-dev] [glance] about use shared image with
> each other
>  
> On 11/19/18 7:58 AM, Rambo wrote:
>> Hi, all
>>
>>      Recently, I have wanted to use image sharing. I find it
>> inconvenient that the producer has to notify the consumer via email that
>> an image has been shared and what its UUID is. In other words, why does
>> the Image API v2 make no provision for producer-consumer communication?
> 
> The design goal for Image API v2 image sharing was to provide an
> infrastructure for an "image marketplace" in an OpenStack cloud by (a)
> making it easy for cloud end users to share images, and (b) making it
> easy for end users not to be spammed by other end users taking advantage
> of (a).  When v2 image sharing was introduced in the Grizzly release, we
> did not want to dictate how producer-consumer communication would work
> (because we had no idea how it would develop), so we left it up to
> operators and end users to figure this out.
> 
> The advantage of email communication is that client side message
> filtering is available for whatever client a particular cloud end-user
> employs, and presumably that end-user knows how to manipulate the
> filters without learning some new scheme (or, if the end-user doesn't
> know, learning how to filter messages will apply beyond just image
> sharing, which is a plus).
> 
> Also, email communication is just one way to handle producer-consumer
> communication.  Some operators have adjusted their web interfaces so
> that when an end-user looks at the list of images available, a
> notification pops up if the end-user has any images that have been
> shared with them and are still in "pending" status.  There are various
> other creative things you can do using the normal API calls with regular
> user credentials.
> 
> In brief, we figured that if an image marketplace evolved in a
> particular cloud, producers and consumers would forge their own
> relationships in whatever way made the most sense for their particular
> use cases.  So we left producer-consumer communication out-of-band.
> 
>>       To make this more convenient, we could add a task that changes the
>> member_status from "pending" to "accepted" when an image is shared,
>> similar to resize_confirm in Nova, with the time interval controlled in
>> the config.
> 
> You could do this, but that would defeat the entire purpose of the
> member statuses implementation, and hence I do not recommend it.  See
> OSSN-0005 [1] for more about this issue.
> 
> Additionally, since the Ocata release, "community" images have been
> available.  These do not have to be accepted by an end user (but they
> also don't show up in the default image-list response).  Who can
> "communitize" an image is governed by policy.
> 
> See [2] for a discussion of the various types of image sharing currently
> available in the Image API v2.  The Image Service API v2 api-ref [3]
> contains a brief discussion of image visibility and image sharing that
> may also be useful.  Finally, the Glance Ocata release notes [4] have an
> extensive discussion of image visibility.
> 
>>        Can you tell me more about this? Thank you very much!
> 
> The original design page on the wiki [5] has a list of 14 use cases we
> wanted to address; looking through those will give you a better idea of
> why we made the design choices we did.
> 
> Hope this helps!
> 
> cheers,
> brian
> 
> [1] https://wiki.openstack.org/wiki/OSSN/1226078
> [2]
> http://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html
> [3] https://developer.openstack.org/api-ref/image/v2/
> [4] https://docs.openstack.org/releasenotes/glance/ocata.html
> [5] https://wiki.openstack.org/wiki/Glance-api-v2-image-sharing
> 
> 
>>
>> Best Regards
>> Rambo
>>

Re: [openstack-dev] [tripleo] Zuul Queue backlogs and resource usage

2018-11-26 Thread Bogdan Dobrelya

Here is a related bug [0] and implementation [1] for that. PTAL folks!

[0] https://bugs.launchpad.net/tripleo/+bug/1804822
[1] https://review.openstack.org/#/q/topic:base-container-reduction


Let's also think of removing puppet-tripleo from the base container.
It really pulls the world in (and yum updates in CI!) for each job and
each container!
If we did so, we should then either install puppet-tripleo and co. on
the host and bind-mount them for the docker-puppet deployment task steps
(a bad idea IMO), OR use the magical --volumes-from 
option to mount volumes from some "puppet-config" sidecar container
inside each of the containers launched by the docker-puppet tooling.
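
A rough sketch of that sidecar pattern (container and image names, and the
module path, are purely illustrative):

    # a data-only "puppet-config" container that just holds the modules
    docker create --name puppet-config \
        -v /usr/share/openstack-puppet/modules \
        puppet-config-image /bin/true

    # each container launched by docker-puppet reuses those volumes
    docker run --volumes-from puppet-config <service-image> ...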


On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås  
wrote:

We add this to all images:

https://github.com/openstack/tripleo-common/blob/d35af75b0d8c4683a677660646e535cf972c98ef/container-images/tripleo_kolla_template_overrides.j2#L35

/bin/sh -c yum -y install iproute iscsi-initiator-utils lvm2 python
socat sudo which openstack-tripleo-common-container-base rsync cronie
crudini openstack-selinux ansible python-shade puppet-tripleo
python2-kubernetes && yum clean all && rm -rf /var/cache/yum

(that single layer is 276 MB)


Is the additional 276 MB reasonable here?
openstack-selinux <- this package runs relabeling; does that kind of
touching of the filesystem impact the size due to Docker layers?

Also: python2-kubernetes is a fairly large package (18007990 bytes, ~18 MB).
Do we use that in every image? I don't see any tripleo-related repos
importing from it when searching on Hound. The original commit message [1]
adding it states it is for future convenience.

On my undercloud we have 101 images; if we are downloading an extra 18 MB
per image, that's almost 1.8 GB for a package we don't use. (I hope it's
not like this? With Docker layers, we only download that 276 MB
transaction once? Or?)
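
(One way to sanity-check that, assuming the Docker runtime: layers are
stored and pulled only once, so

    docker history <image>

should show the same 276 MB layer near the base of every image built from
this template, and Docker only downloads it the first time.)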


[1] https://review.openstack.org/527927




--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [Octavia] Multinode setup

2018-11-26 Thread Sa Pham
Hi,

The controller_ip_port_list option is a list of the IP/port pairs of all
nodes on which octavia-health-manager is deployed.



> On Nov 26, 2018, at 8:41 PM, Anna Taraday  wrote:
> 
> Hello everyone!
> 
> I'm looking into how to run Octavia services (controller worker, housekeeper,
> health manager) on several network nodes, and I got confused by the setup
> guide [1].
> Is there a special config option for such a case? (controller_ip_port_list,
> probably)
> What manuals/docs/examples do we have besides [2]?
> 
> [1] -
> https://docs.openstack.org/octavia/queens/contributor/guides/dev-quick-start.html
> [2] -
> https://github.com/openstack/octavia/blob/stable/queens/devstack/samples/multinode/local-2.conf
> --
> Regards,
> Ann Taraday

Sa Pham Dang
Cloud RnD Team - VCCloud / VCCorp
Phone: 0986.849.582
Skype: great_bn



[openstack-dev] [Octavia] Multinode setup

2018-11-26 Thread Anna Taraday
Hello everyone!

I'm looking into how to run Octavia services (controller worker,
housekeeper, health manager) on several network nodes, and I got confused
by the setup guide [1].
Is there a special config option for such a case? (controller_ip_port_list,
probably)
What manuals/docs/examples do we have besides [2]?

[1] -
https://docs.openstack.org/octavia/queens/contributor/guides/dev-quick-start.html
[2] -
https://github.com/openstack/octavia/blob/stable/queens/devstack/samples/multinode/local-2.conf
-- 
Regards,
Ann Taraday


[openstack-dev] [nova][placement] Please help to review XenServer vgpu related patches

2018-11-26 Thread Naichuan Sun
Hi, Sylvain, Jay, Eric and Matt,

I saw that the n-rp and reshaper patches upstream are almost finished. Could
you help review the XenServer vGPU-related patches when you have the time?
https://review.openstack.org/#/c/520313/
https://review.openstack.org/#/c/521041/
https://review.openstack.org/#/c/521717/

Thank you very much.

BR.
Naichuan Sun