Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hi Chris,
many thanks for your answer.
It solved the issue.
Regards
Ignazio

On Tue, 13 Nov 2018 at 03:46, Chris Apsey <
bitskr...@bitskrieg.net> wrote:

> Did you change the nova_metadata_ip option to nova_metadata_host in
> metadata_agent.ini?  The former value was deprecated several releases ago
> and now no longer functions as of pike.  The metadata service will throw
> 500 errors if you don't change it.
>
> On November 12, 2018 19:00:46 Ignazio Cassano 
> wrote:
>
>> Any other suggestion?
>> It does not work.
>> Nova metadata is listening on port 8775, but still no way to solve this issue.
>> Thanks
>> Ignazio
>>
>> On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <
>> skapl...@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> From the logs which You attached it looks like Your neutron-metadata-agent
>>> can’t connect to the nova-api service. Please check if nova-metadata-api is
>>> reachable from the node where Your neutron-metadata-agent is running.
>>>
>>> > Message written by Ignazio Cassano  on
>>> 12.11.2018, at 22:34:
>>> >
>>> > Hello again,
>>> > I have another installation of ocata .
>>> > On ocata the metadata for a network id is displayed by ps -afe like
>>> this:
>>> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
>>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
>>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
>>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
>>> --metadata_proxy_group=993
>>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
>>> --log-dir=/var/log/neutron
>>> >
>>> > On queens like this:
>>> >  haproxy -f
>>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
>>> >
>>> > Is it the correct behaviour ?
>>>
>>> Yes, that is correct. It was changed some time ago, see
>>> https://bugs.launchpad.net/neutron/+bug/1524916
>>>
>>> >
>>> > Regards
>>> > Ignazio
>>> >
>>> >
>>> >
>>> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <
>>> skapl...@redhat.com> wrote:
>>> > Hi,
>>> >
>>> > Can You share logs from Your haproxy-metadata-proxy service which is
>>> running in the qdhcp namespace? There should be some info about the reason
>>> for those 500 errors.
>>> >
>>> > > Message written by Ignazio Cassano 
>>> on 12.11.2018, at 19:49:
>>> > >
>>> > > Hi All,
>>> > > I manually upgraded my centos 7 openstack ocata to pike.
>>> > > All worked fine.
>>> > > Then I upgraded from pike to Queens and instances stopped reaching
>>> metadata on 169.254.169.254 with error 500.
>>> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
>>> namespace port 80 is listening.
>>> > > Please, can anyone help me?
>>> > > Regards
>>> > > Ignazio
>>> > >
>>> > > ___
>>> > > OpenStack-operators mailing list
>>> > > OpenStack-operators@lists.openstack.org
>>> > >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>> >
>>> > —
>>> > Slawek Kaplonski
>>> > Senior software engineer
>>> > Red Hat
>>> >
>>>
>>> —
>>> Slawek Kaplonski
>>> Senior software engineer
>>> Red Hat
>>>
>>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tripleo] puppet5 has broken the master gate

2018-11-12 Thread Chandan kumar
Hello Alex,

On Tue, Nov 13, 2018 at 9:53 AM Alex Schultz  wrote:
>
> Just a heads up but we recently updated to puppet5 in the master
> dependencies. It appears that this has completely hosed the master
> scenarios and containers-multinode jobs.  Please do not recheck/approve
> anything until we get this resolved.
>
> See https://bugs.launchpad.net/tripleo/+bug/1803024
>
> I have a possible fix (https://review.openstack.org/#/c/617441/) but
> it's probably a better idea to roll back the puppet package if
> possible.
>

In RDO, we have reverted it via Revert "Stein: push puppet 5.5.6" ->
https://review.rdoproject.org/r/#/c/17333/1

Thanks for the heads up!

Thanks,

Chandan Kumar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] can't input pipeline symbol in linux instance on web vnc console

2018-11-12 Thread Cheung 楊禮銓

Host OS: Ubuntu 16.04

openstack version: queens


When I type the "|" symbol in linux instance on web vnc console, "|" becomes 
">".

But when I connect to the linux instance with putty, I can type "|" without any problem.

I do not know why this happened.

This issue only occurs on Linux instances.









___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [heat] Heat sessions & forum in Berlin Summit

2018-11-12 Thread Rico Lin
Dear all,
Here are some Heat-related sessions at the OpenStack Summit for you this week.
Everyone is welcome to join us and check them out!

*Orchestration Ops/Users feedback session*
Wed 14, 1:40pm - 2:20pm
CityCube Berlin - Level 3 - M-Räume 6
https://etherpad.openstack.org/p/heat-user-berlin

*Heat - Project Update*
Wed 14, 3:45pm - 4:05pm
CityCube Berlin - Level 3 - M-Räume 3
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22739/heat-project-update

*Autoscaling Integration, improvement, and feedback*
Thu 15, 9:00am - 9:40am
CityCube Berlin - Level 3 - M-Räume 8
https://etherpad.openstack.org/p/autoscaling-integration-and-feedback

*Heat - Project Onboarding*
Thu 15, 10:50am - 11:30am
CityCube Berlin - Level 3 - M-Räume 1
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22733/heat-project-onboarding


-- 
May The Force of OpenStack Be With You,

*Rico Lin*
irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hello, I am going to check it.
Thanks
Ignazio

On Tue, 13 Nov 2018 at 03:46, Chris Apsey  wrote:

> Did you change the nova_metadata_ip option to nova_metadata_host in
> metadata_agent.ini?  The former value was deprecated several releases ago
> and now no longer functions as of pike.  The metadata service will throw
> 500 errors if you don't change it.
>
> On November 12, 2018 19:00:46 Ignazio Cassano 
> wrote:
>
>> Any other suggestion?
>> It does not work.
>> Nova metadata is listening on port 8775, but still no way to solve this issue.
>> Thanks
>> Ignazio
>>
>> On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <
>> skapl...@redhat.com> wrote:
>>
>>> Hi,
>>>
>>> From the logs which You attached it looks like Your neutron-metadata-agent
>>> can’t connect to the nova-api service. Please check if nova-metadata-api is
>>> reachable from the node where Your neutron-metadata-agent is running.
>>>
>>> > Message written by Ignazio Cassano  on
>>> 12.11.2018, at 22:34:
>>> >
>>> > Hello again,
>>> > I have another installation of ocata .
>>> > On ocata the metadata for a network id is displayed by ps -afe like
>>> this:
>>> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
>>> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>>> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
>>> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
>>> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
>>> --metadata_proxy_group=993
>>> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
>>> --log-dir=/var/log/neutron
>>> >
>>> > On queens like this:
>>> >  haproxy -f
>>> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
>>> >
>>> > Is it the correct behaviour ?
>>>
>>> Yes, that is correct. It was changed some time ago, see
>>> https://bugs.launchpad.net/neutron/+bug/1524916
>>>
>>> >
>>> > Regards
>>> > Ignazio
>>> >
>>> >
>>> >
>>> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <
>>> skapl...@redhat.com> wrote:
>>> > Hi,
>>> >
>>> > Can You share logs from Your haproxy-metadata-proxy service which is
>>> running in the qdhcp namespace? There should be some info about the reason
>>> for those 500 errors.
>>> >
>>> > > Message written by Ignazio Cassano 
>>> on 12.11.2018, at 19:49:
>>> > >
>>> > > Hi All,
>>> > > I manually upgraded my centos 7 openstack ocata to pike.
>>> > > All worked fine.
>>> > > Then I upgraded from pike to Queens and instances stopped reaching
>>> metadata on 169.254.169.254 with error 500.
>>> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
>>> namespace port 80 is listening.
>>> > > Please, can anyone help me?
>>> > > Regards
>>> > > Ignazio
>>> > >
>>> > > ___
>>> > > OpenStack-operators mailing list
>>> > > OpenStack-operators@lists.openstack.org
>>> > >
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>> >
>>> > —
>>> > Slawek Kaplonski
>>> > Senior software engineer
>>> > Red Hat
>>> >
>>>
>>> —
>>> Slawek Kaplonski
>>> Senior software engineer
>>> Red Hat
>>>
>>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [tripleo][openstack-ansible] Updates on collaboration on os_tempest role

2018-11-12 Thread Chandan kumar
Hello,

During the Denver 2018 PTG [1], we started collaborating on using the
openstack-ansible-os_tempest role [2] as a unified tempest role in TripleO and
the openstack-ansible project within the OpenStack community.

It will help us improve the testing strategies between the two projects,
which can be further expanded to other OpenStack deployment tools.

We will be sharing bi-weekly updates through mailing lists.
We are tracking/planning all the work here:
Proposal doc: https://etherpad.openstack.org/p/ansible-tempest-role
Work item collaboration doc:
https://etherpad.openstack.org/p/openstack-ansible-tempest

Here is the update till now:
openstack-ansible-os_tempest project:

* Enable stackviz support - https://review.openstack.org/603100
* Added support for installing tempest from distro -
https://review.openstack.org/591424
* Fixed missing ; from if statement in tempest_run -
https://review.openstack.org/614521
* Added task to list tempest plugins - https://review.openstack.org/615837
* Remove apt_package_pinning dependency from os_tempest role -
https://review.openstack.org/609992
* Enable python-tempestconf support - https://review.openstack.org/612968

Support added to openstack/rpm-packaging project (will be consumed in
os_tempest role):
* Added spec file for stackviz - https://review.openstack.org/609337
* Add initial spec for python-tempestconf - https://review.openstack.org/598143

Upcoming improvements:
* Finish the integration of python-tempestconf in os_tempest role.

Have queries? Feel free to ping us on the #tripleo or #openstack-ansible channels.

Links:
[1.] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html
[2.] http://git.openstack.org/cgit/openstack/openstack-ansible-os_tempest

Thanks,

Chandan Kumar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] puppet5 has broken the master gate

2018-11-12 Thread Alex Schultz
Just a heads up but we recently updated to puppet5 in the master
dependencies. It appears that this has completely hosed the master
scenarios and containers-multinode jobs.  Please do not recheck/approve
anything until we get this resolved.

See https://bugs.launchpad.net/tripleo/+bug/1803024

I have a possible fix (https://review.openstack.org/#/c/617441/) but
it's probably a better idea to roll back the puppet package if
possible.

Thanks,
-Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] RabbitMQ and SSL

2018-11-12 Thread Sam Morrison
On the off chance that others see this or there is talk about this in Berlin: I
have tracked this down to the versions of python-amqp and python-kombu.

More information at the bug report 
https://bugs.launchpad.net/oslo.messaging/+bug/1800957 
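
For anyone comparing notes, a quick way to see which versions are installed (a
sketch; the first form assumes the distro packages, the second a pip-based
environment):

  dpkg -l python-amqp python-kombu | grep ^ii
  pip show amqp kombu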


Sam



> On 1 Nov 2018, at 11:04 am, Sam Morrison  wrote:
> 
> Hi all,
> 
> We’ve been battling an issue after an upgrade to pike which essentially makes 
> using rabbit with ssl impossible 
> 
> https://bugs.launchpad.net/oslo.messaging/+bug/1800957 
> 
> 
> We use ubuntu cloud archives so it might not exactly be oslo but a dependent
> library.
> 
> Anyone else seen similar issues?
> 
> Cheers,
> Sam
> 
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [masakari] No masakari meeting on 11/12

2018-11-12 Thread Sam P
Hi all!
 Sorry for the late announcement. Since most of us are at the Berlin summit,
there will be no IRC meeting on 11/12.

--- Regards,
Sampath

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question

2018-11-12 Thread Chen CH Ji
Got it, this is what I am looking for. Thank you.
 
- Original message -
From: Slawomir Kaplonski 
To: "OpenStack Development Mailing List (not for usage questions)" 
Cc:
Subject: Re: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question
Date: Tue, Nov 13, 2018 1:06 AM

Hi,

You can choose which subnet (and even IP address) should be used, see the
„fixed_ips” field in [1].
If You do not provide anything, Neutron will choose for You one IPv4 address
and one IPv6 address, and in both cases they will be chosen randomly from the
available IPs across all subnets.

[1] https://developer.openstack.org/api-ref/network/v2/?expanded=create-port-detail#create-port

> Message written by Chen CH Ji  on 12.11.2018, at 13:44:
>
> I have a network created like below:
>
> 1 network with 3 subnets (1 ipv6 and 2 ipv4); when booting, can I select the
> subnet to boot from, or will the subnet be force-selected in the order the
> subnets were created? Any document or code that can be referred to? Thanks
>
> | fd0e2078-044d-4c5c-b114-3858631e6328 | private | a8184e4f-5165-4ea8-8ed8-b776d619af6e fd9b:c245:1aaa::/64 |
> |                                      |         | b3ee7cad-c672-4172-a183-8e9f069bea31 10.0.0.0/26         |
> |                                      |         | 9439abfd-afa2-4264-8422-977d725a7166 10.0.2.0/24         |
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

—
Slawek Kaplonski
Senior software engineer
Red Hat

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Searchlight] Report for the week of Stein R-22

2018-11-12 Thread Trinh Nguyen
Hi team,

This is the report for last week, Stein R-22 [1]. Please follow it to know
what's going on with Searchlight.

[1]
https://www.dangtrinh.com/2018/11/searchlight-weekly-report-stein-r-22.html

Bests,

-- 
*Trinh Nguyen*
*www.edlab.xyz *
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Chris Apsey
Did you change the nova_metadata_ip option to nova_metadata_host in 
metadata_agent.ini?  The former value was deprecated several releases ago 
and now no longer functions as of pike.  The metadata service will throw 
500 errors if you don't change it.
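
For reference, after the rename the relevant part of metadata_agent.ini would
look roughly like this (a minimal sketch; the host value is an example,
substitute the address of your own nova metadata service):

  [DEFAULT]
  # deprecated name, no longer honored:
  # nova_metadata_ip = 192.0.2.10
  nova_metadata_host = 192.0.2.10
  nova_metadata_port = 8775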


On November 12, 2018 19:00:46 Ignazio Cassano  wrote:

Any other suggestion?
It does not work.
Nova metadata is listening on port 8775, but still no way to solve this issue.
Thanks
Ignazio

On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski
 wrote:

Hi,

From the logs which You attached it looks like Your neutron-metadata-agent
can’t connect to the nova-api service. Please check if nova-metadata-api is
reachable from the node where Your neutron-metadata-agent is running.


Message written by Ignazio Cassano  on
12.11.2018, at 22:34:


Hello again,
I have another installation of ocata .
On ocata the metadata for a network id is displayed by ps -afe like this:
 /usr/bin/python2 /bin/neutron-ns-metadata-proxy 
 --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid 
 --metadata_proxy_socket=/var/lib/neutron/metadata_proxy 
 --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 
 --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 
 --metadata_proxy_group=993 
 --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log 
 --log-dir=/var/log/neutron


On queens like this:
 haproxy -f 
 /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf


Is it the correct behaviour ?


Yes, that is correct. It was changed some time ago, see 
https://bugs.launchpad.net/neutron/+bug/1524916




Regards
Ignazio



On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski
 wrote:

Hi,

Can You share logs from Your haproxy-metadata-proxy service which is
running in the qdhcp namespace? There should be some info about the reason
for those 500 errors.


> Message written by Ignazio Cassano  on
12.11.2018, at 19:49:

>
> Hi All,
> I manually upgraded my centos 7 openstack ocata to pike.
> All worked fine.
> Then I upgraded from pike to Queens and instances stopped reaching
metadata on 169.254.169.254 with error 500.
> I am using isolated metadata true in my dhcp conf, and in the dhcp namespace
port 80 is listening.

> Please, can anyone help me?
> Regards
> Ignazio
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

—
Slawek Kaplonski
Senior software engineer
Red Hat



—
Slawek Kaplonski
Senior software engineer
Red Hat

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] [glance] task in pending state, image in uploading state

2018-11-12 Thread Bernd Bausch

Thanks Brian. It's great to get an email from Mr. Glance.

I managed to patch Devstack, and a first test was successful. Perfect!
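
For anyone else who wants to do the same, the steps were roughly the following
(a sketch, assuming a standard devstack layout; <N> is the latest patchset
number of the review):

  cd /opt/stack/glance
  git fetch https://review.openstack.org/openstack/glance refs/changes/83/545483/<N>
  git cherry-pick FETCH_HEAD
  sudo systemctl restart devstack@g-api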

A bit late, I then found numerous warnings in release notes and other 
documents that UWSGI should not be used when deploying Glance. My 
earlier web searches flew by these documents without noticing them.


Bernd

On 11/12/2018 11:27 PM, Brian Rosmaita wrote:

On 11/12/18 5:07 AM, Bernd Bausch wrote:

Trying Glance's new import process, my images are all stuck in status
uploading (both methods glance-direct and web-download).

I can see that there are tasks for those images; they are pending. The
Glance API log doesn't contain anything that clues me in (debug logging
is enabled).

The source code is too involved for my feeble Python and OpenStack
Internals skills.

*How can I find out what blocks the tasks?*

This is a stable Rocky Devstack without any customization of the Glance
config.


The tasks engine Glance uses to facilitate the "new" (experimental in
Pike, current in Queens) image import process does not work when Glance
is deployed as a WSGI application using uWSGI [0]; as you observed, the
tasks remain stuck in 'pending'.  You can apply this patch [1] to your
devstack Glance and restart devstack@g-api and image import should work
without additional glance api-changes (the patch applied cleanly last
time I checked, which was a Stein-1 milestone devstack; it should apply
cleanly to your stable Rocky devstack).  You may also want to take a
look at the Glance admin guide [2] to see what configuration options are
available.

[0]
https://docs.openstack.org/releasenotes/glance/queens.html#relnotes-16-0-0-stable-queens-known-issues
[1] https://review.openstack.org/#/c/545483/
[2]
https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Any other suggestion?
It does not work.
Nova metadata is listening on port 8775, but still no way to solve this issue.
Thanks
Ignazio

On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <
skapl...@redhat.com> wrote:

> Hi,
>
> From the logs which You attached it looks like Your neutron-metadata-agent
> can’t connect to the nova-api service. Please check if nova-metadata-api is
> reachable from the node where Your neutron-metadata-agent is running.
>
> > Message written by Ignazio Cassano  on
> 12.11.2018, at 22:34:
> >
> > Hello again,
> > I have another installation of ocata .
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> >
> > On queens like this:
> >  haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> >
> > Is it the correct behaviour ?
>
> Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <
> skapl...@redhat.com> wrote:
> > Hi,
> >
> > Can You share logs from Your haproxy-metadata-proxy service which is
> running in the qdhcp namespace? There should be some info about the reason
> for those 500 errors.
> >
> > > Message written by Ignazio Cassano  on
> 12.11.2018, at 19:49:
> > >
> > > Hi All,
> > > I manually upgraded my centos 7 openstack ocata to pike.
> > > All worked fine.
> > > Then I upgraded from pike to Queens and instances stopped reaching
> metadata on 169.254.169.254 with error 500.
> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
> namespace port 80 is listening.
> > > Please, can anyone help me?
> > > Regards
> > > Ignazio
> > >
> > > ___
> > > OpenStack-operators mailing list
> > > OpenStack-operators@lists.openstack.org
> > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] new SIGs to cover use cases

2018-11-12 Thread Jeremy Stanley
On 2018-11-12 15:46:38 + (+), arkady.kanev...@dell.com wrote:
[...]
>   1.  Do we have or want to create a user community around Hybrid cloud.
[...]
>   2.  As we target AI/ML as 2019 target application domain do we
>   want to create a SIG for it? Or do we extend scientific
>   community SIG to cover it?
[...]

It may also be worthwhile to ask this on the openstack-sigs mailing
list.
-- 
Jeremy Stanley


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Yes, sorry.
Also the 8775 port is reachable from neutron metadata agent
Regards
Ignazio

On Mon, 12 Nov 2018 at 23:08, Slawomir Kaplonski <
skapl...@redhat.com> wrote:

> Hi,
>
> > Message written by Ignazio Cassano  on
> 12.11.2018, at 22:55:
> >
> > Hello,
> > the nova api is on the same controller on port 8774 and it can be
> reached from the metadata agent
>
> Nova-metadata-api is running on port 8775 IIRC.
>
> > No firewall is present
> > Regards
> >
> > On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <
> skapl...@redhat.com> wrote:
> > Hi,
> >
> > From the logs which You attached it looks like Your neutron-metadata-agent
> can’t connect to the nova-api service. Please check if nova-metadata-api is
> reachable from the node where Your neutron-metadata-agent is running.
> >
> > > Message written by Ignazio Cassano  on
> 12.11.2018, at 22:34:
> > >
> > > Hello again,
> > > I have another installation of ocata .
> > > On ocata the metadata for a network id is displayed by ps -afe like
> this:
> > >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> > >
> > > On queens like this:
> > >  haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> > >
> > > Is it the correct behaviour ?
> >
> > Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
> >
> > >
> > > Regards
> > > Ignazio
> > >
> > >
> > >
> > > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <
> skapl...@redhat.com> wrote:
> > > Hi,
> > >
> > > Can You share logs from Your haproxy-metadata-proxy service which is
> running in the qdhcp namespace? There should be some info about the reason
> for those 500 errors.
> > >
> > > > Message written by Ignazio Cassano 
> on 12.11.2018, at 19:49:
> > > >
> > > > Hi All,
> > > > I manually upgraded my centos 7 openstack ocata to pike.
> > > > All worked fine.
> > > > Then I upgraded from pike to Queens and instances stopped reaching
> metadata on 169.254.169.254 with error 500.
> > > > I am using isolated metadata true in my dhcp conf, and in the dhcp
> namespace port 80 is listening.
> > > > Please, can anyone help me?
> > > > Regards
> > > > Ignazio
> > > >
> > > > ___
> > > > OpenStack-operators mailing list
> > > > OpenStack-operators@lists.openstack.org
> > > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > >
> > > —
> > > Slawek Kaplonski
> > > Senior software engineer
> > > Red Hat
> > >
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
I tried 1 minute ago to create another instance.
Nova api reports the following:

ERROR oslo_db.sqlalchemy.engines [req-cac96dee-d91b-48cb-831b-31f95cffa2f4
89f76bc5de5545f381da2c10c7df7f15 59f1f232ce28409593d66d8f6495e434 - default
default] Database connection was found disconnected; reconnecting:
DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection
to MySQL server during query') [SQL: u'SELECT 1'] (Background on this error
at: http://sqlalche.me/e/e3q8)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines Traceback
(most recent call last):
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73,
in _connect_ping_listener
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
connection.scalar(select([1]))
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 880,
in scalar
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines return
self.execute(object, *multiparams, **params).scalar()
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948,
in execute
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines return
meth(self, multiparams, params)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269,
in _execute_on_connection
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines return
connection._execute_clauseelement(self, multiparams, params)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060,
in _execute_clauseelement
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
compiled_sql, distilled_params
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200,
in _execute_context
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines context)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409,
in _handle_dbapi_exception
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
util.raise_from_cause(newraise, exc_info)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203,
in raise_from_cause
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
reraise(type(exception), exception, tb=exc_tb, cause=cause)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193,
in _execute_context
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines context)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line
507, in do_execute
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
cursor.execute(statement, parameters)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines result =
self._query(query)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _query
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
conn.query(q)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 856, in
query
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1057, in
_read_query_result
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
result.read()
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1340, in
read
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
first_packet = self.connection._read_packet()
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 987, in
_read_packet
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
packet_header = self._read_bytes(4)
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1033, in
_read_bytes
2018-11-12 23:07:28.813 4224 ERROR oslo_db.sqlalchemy.engines
CR.CR_SERVER_LOST, "Lost connection to MySQL server 

Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Slawomir Kaplonski
Hi,

> Message written by Ignazio Cassano  on
> 12.11.2018, at 22:55:
> 
> Hello,
> the nova api is on the same controller on port 8774 and it can be reached
> from the metadata agent

Nova-metadata-api is running on port 8775 IIRC.

> No firewall is present
> Regards
> 
> On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski
>  wrote:
> Hi,
> 
> From the logs which You attached it looks like Your neutron-metadata-agent can’t
> connect to the nova-api service. Please check if nova-metadata-api is reachable
> from the node where Your neutron-metadata-agent is running.
> 
> > Message written by Ignazio Cassano  on
> > 12.11.2018, at 22:34:
> > 
> > Hello again,
> > I have another installation of ocata .
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy 
> > --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> >  --metadata_proxy_socket=/var/lib/neutron/metadata_proxy 
> > --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 
> > --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 
> > --metadata_proxy_group=993 
> > --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> >  --log-dir=/var/log/neutron
> > 
> > On queens like this:
> >  haproxy -f 
> > /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> > 
> > Is it the correct behaviour ?
> 
> Yes, that is correct. It was changed some time ago, see 
> https://bugs.launchpad.net/neutron/+bug/1524916
> 
> > 
> > Regards
> > Ignazio
> > 
> > 
> > 
> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski
> >  wrote:
> > Hi,
> > 
> > Can You share logs from Your haproxy-metadata-proxy service which is
> > running in the qdhcp namespace? There should be some info about the reason
> > for those 500 errors.
> > 
> > > Message written by Ignazio Cassano  on
> > > 12.11.2018, at 19:49:
> > > 
> > > Hi All,
> > > I manually upgraded my centos 7 openstack ocata to pike.
> > > All worked fine.
> > > Then I upgraded from pike to Queens and instances stopped reaching
> > > metadata on 169.254.169.254 with error 500.
> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
> > > namespace port 80 is listening.
> > > Please, can anyone help me?
> > > Regards
> > > Ignazio
> > > 
> > > ___
> > > OpenStack-operators mailing list
> > > OpenStack-operators@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> > 
> > — 
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> > 
> 
> — 
> Slawek Kaplonski
> Senior software engineer
> Red Hat
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hello again, at the same time I tried to create an instance; the nova-api log
reports:

2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
[req-e4799f40-eeab-482d-9717-cb41be8ffde2 89f76bc5de5545f381da2c10c7df7f15
59f1f232ce28409593d66d8f6495e434 - default default] Database connection was
found disconnected; reconnecting: DBConnectionError:
(pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server
during query') [SQL: u'SELECT 1'] (Background on this error at:
http://sqlalche.me/e/e3q8)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines Traceback
(most recent call last):
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 73,
in _connect_ping_listener
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
connection.scalar(select([1]))
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 880,
in scalar
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines return
self.execute(object, *multiparams, **params).scalar()
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 948,
in execute
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines return
meth(self, multiparams, params)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269,
in _execute_on_connection
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines return
connection._execute_clauseelement(self, multiparams, params)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060,
in _execute_clauseelement
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
compiled_sql, distilled_params
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200,
in _execute_context
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines context)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1409,
in _handle_dbapi_exception
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
util.raise_from_cause(newraise, exc_info)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203,
in raise_from_cause
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
reraise(type(exception), exception, tb=exc_tb, cause=cause)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193,
in _execute_context
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines context)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line
507, in do_execute
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines
cursor.execute(statement, parameters)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 166, in execute
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines result =
self._query(query)
2018-11-12 21:30:42.225 4226 ERROR oslo_db.sqlalchemy.engines   File
"/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 322, in _quer




I never lost connections to the db before upgrading
:-(
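
One thing that may be worth checking here (an assumption on my side, not a
confirmed diagnosis) is whether the [database] connection recycling in
nova.conf is shorter than any proxy or MySQL idle timeout between nova-api
and the database, e.g.:

  [database]
  # recycle pooled connections before the server, or any proxy in
  # between, closes them as idle
  connection_recycle_time = 280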

On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <
skapl...@redhat.com> wrote:

> Hi,
>
> From the logs which You attached it looks like Your neutron-metadata-agent
> can’t connect to the nova-api service. Please check if nova-metadata-api is
> reachable from the node where Your neutron-metadata-agent is running.
>
> > Message written by Ignazio Cassano  on
> 12.11.2018, at 22:34:
> >
> > Hello again,
> > I have another installation of ocata .
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> >
> > On queens like this:
> >  haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> >
> > Is it the correct behaviour ?
>
> Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916

Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hello,
the nova api is on the same controller on port 8774 and it can be reached
from the metadata agent
No firewall is present
Regards

On Mon, 12 Nov 2018 at 22:40, Slawomir Kaplonski <
skapl...@redhat.com> wrote:

> Hi,
>
> From the logs which You attached it looks like Your neutron-metadata-agent
> can’t connect to the nova-api service. Please check if nova-metadata-api is
> reachable from the node where Your neutron-metadata-agent is running.
>
> > Message written by Ignazio Cassano  on
> 12.11.2018, at 22:34:
> >
> > Hello again,
> > I have another installation of ocata .
> > On ocata the metadata for a network id is displayed by ps -afe like this:
> >  /usr/bin/python2 /bin/neutron-ns-metadata-proxy
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
> --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
> --metadata_proxy_group=993
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
> --log-dir=/var/log/neutron
> >
> > On queens like this:
> >  haproxy -f
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> >
> > Is it the correct behaviour ?
>
> Yes, that is correct. It was changed some time ago, see
> https://bugs.launchpad.net/neutron/+bug/1524916
>
> >
> > Regards
> > Ignazio
> >
> >
> >
> > On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <
> skapl...@redhat.com> wrote:
> > Hi,
> >
> > Can You share logs from Your haproxy-metadata-proxy service which is
> running in the qdhcp namespace? There should be some info about the reason
> for those 500 errors.
> >
> > > Message written by Ignazio Cassano  on
> 12.11.2018, at 19:49:
> > >
> > > Hi All,
> > > I manually upgraded my centos 7 openstack ocata to pike.
> > > All worked fine.
> > > Then I upgraded from pike to Queens and instances stopped reaching
> metadata on 169.254.169.254 with error 500.
> > > I am using isolated metadata true in my dhcp conf, and in the dhcp
> namespace port 80 is listening.
> > > Please, can anyone help me?
> > > Regards
> > > Ignazio
> > >
> > > ___
> > > OpenStack-operators mailing list
> > > OpenStack-operators@lists.openstack.org
> > >
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Slawomir Kaplonski
Hi,

From the logs which You attached it looks like Your neutron-metadata-agent can’t
connect to the nova-api service. Please check if nova-metadata-api is reachable
from the node where Your neutron-metadata-agent is running.
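
A quick way to verify that from the agent's node is something like this (a
sketch; replace "controller" with the host where nova-api runs):

  curl -i http://controller:8775/
  # any HTTP response (a short list of metadata API versions) means the
  # service is reachable; a timeout or connection refused means it is not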

> Message written by Ignazio Cassano  on
> 12.11.2018, at 22:34:
> 
> Hello again,
> I have another installation of ocata .
> On ocata the metadata for a network id is displayed by ps -afe like this:
>  /usr/bin/python2 /bin/neutron-ns-metadata-proxy 
> --pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
>  --metadata_proxy_socket=/var/lib/neutron/metadata_proxy 
> --network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1 
> --state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996 
> --metadata_proxy_group=993 
> --log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log 
> --log-dir=/var/log/neutron
> 
> On queens like this:
>  haproxy -f 
> /var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf
> 
> Is it the correct behaviour ?

Yes, that is correct. It was changed some time ago, see 
https://bugs.launchpad.net/neutron/+bug/1524916

> 
> Regards
> Ignazio
> 
> 
> 
> On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski
>  wrote:
> Hi,
> 
> Can You share logs from Your haproxy-metadata-proxy service which is running
> in the qdhcp namespace? There should be some info about the reason for those
> 500 errors.
> 
> > Message written by Ignazio Cassano  on
> > 12.11.2018, at 19:49:
> > 
> > Hi All,
> > I manually upgraded my centos 7 openstack ocata to pike.
> > All worked fine.
> > Then I upgraded from pike to Queens and instances stopped reaching metadata
> > on 169.254.169.254 with error 500.
> > I am using isolated metadata true in my dhcp conf, and in the dhcp namespace
> > port 80 is listening.
> > Please, can anyone help me?
> > Regards
> > Ignazio
> > 
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> — 
> Slawek Kaplonski
> Senior software engineer
> Red Hat
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hello again,
I have another installation of ocata .
On ocata the metadata for a network id is displayed by ps -afe like this:
 /usr/bin/python2 /bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/c4731392-9b91-4663-adb3-b10b5ebcc4f1.pid
--metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--network_id=c4731392-9b91-4663-adb3-b10b5ebcc4f1
--state_path=/var/lib/neutron --metadata_port=80 --metadata_proxy_user=996
--metadata_proxy_group=993
--log-file=neutron-ns-metadata-proxy-c4731392-9b91-4663-adb3-b10b5ebcc4f1.log
--log-dir=/var/log/neutron

On queens like this:
 haproxy -f
/var/lib/neutron/ns-metadata-proxy/e8ba8c09-a7dc-4a22-876e-b8d4187a23fe.conf

Is it the correct behaviour ?

Regards
Ignazio



On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <
skapl...@redhat.com> wrote:

> Hi,
>
> Can You share logs from Your haproxy-metadata-proxy service which is
> running in the qdhcp namespace? There should be some info about the reason
> for those 500 errors.
>
> > Message written by Ignazio Cassano  on
> 12.11.2018, at 19:49:
> >
> > Hi All,
> > I manually upgraded my centos 7 openstack ocata to pike.
> > All worked fine.
> > Then I upgraded from pike to Queens and instances stopped reaching
> metadata on 169.254.169.254 with error 500.
> > I am using isolated metadata true in my dhcp conf, and in the dhcp namespace
> port 80 is listening.
> > Please, can anyone help me?
> > Regards
> > Ignazio
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
PS
Thanks for your help

On Mon, 12 Nov 2018 at 22:15, Ignazio Cassano <
ignaziocass...@gmail.com> wrote:

> Hello,
> attached here is the log file.
>
> Connecting to an instance created before the upgrade I also tried:
> wget http://169.254.169.254/2009-04-04/meta-data/instance-id
>
> The following is the output
>
> --2018-11-12 22:14:45--
> http://169.254.169.254/2009-04-04/meta-data/instance-id
> Connecting to 169.254.169.254:80... connected.
> HTTP request sent, awaiting response... 500 Internal Server Error
> 2018-11-12 22:14:45 ERROR 500: Internal Server Error
>
>
> On Mon, 12 Nov 2018 at 21:37, Slawomir Kaplonski <
> skapl...@redhat.com> wrote:
>
>> Hi,
>>
>> Can You share logs from Your haproxy-metadata-proxy service which is
>> running in the qdhcp namespace? There should be some info about the reason
>> for those 500 errors.
>>
>> > Message written by Ignazio Cassano  on
>> 12.11.2018, at 19:49:
>> >
>> > Hi All,
>> > I manually upgraded my centos 7 openstack ocata to pike.
>> > All worked fine.
>> > Then I upgraded from pike to Queens and instances stopped reaching
>> metadata on 169.254.169.254 with error 500.
>> > I am using isolated metadata true in my dhcp conf, and in the dhcp
>> namespace port 80 is listening.
>> > Please, can anyone help me?
>> > Regards
>> > Ignazio
>> >
>> > ___
>> > OpenStack-operators mailing list
>> > OpenStack-operators@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>> —
>> Slawek Kaplonski
>> Senior software engineer
>> Red Hat
>>
>>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [qa] [containers] [airship] [berlin] Berlin Airship Forums

2018-11-12 Thread MCEUEN, MATT
I wanted to make sure that all interested folks are aware of the 
Airship-related Forums that will be held on Tuesday:

Cross-project container security discussion:
  https://etherpad.openstack.org/p/BER-container-security
Airship Quality Assurance use cases:
https://etherpad.openstack.org/p/BER-airship-qa
Airship Bare Metal provisioning brainstorming & design:
https://etherpad.openstack.org/p/BER-airship-bare-metal

We welcome all participation and discussion - please add any topics you'd like 
to discuss to the etherpads!

I look forward to some good sessions tomorrow.
Thanks,
Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Slawomir Kaplonski
Hi,

Can You share logs from Your haproxy-metadata-proxy service which is running in
the qdhcp namespace? There should be some info about the reason for those 500
errors.

> Message written by Ignazio Cassano  on
> 12.11.2018, at 19:49:
> 
> Hi All,
> I manually upgraded my centos 7 openstack ocata to pike.
> All worked fine.
> Then I upgraded from pike to Queens and instances stopped reaching metadata
> on 169.254.169.254 with error 500.
> I am using isolated metadata true in my dhcp conf, and in the dhcp namespace
> port 80 is listening.
> Please, can anyone help me?
> Regards
> Ignazio
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

— 
Slawek Kaplonski
Senior software engineer
Red Hat


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Queens metadata agent error 500

2018-11-12 Thread Ignazio Cassano
Hi All,
I manually upgraded my centos 7 openstack ocata to pike.
All worked fine.
Then I upgraded from pike to Queens and instances stopped reaching metadata
on 169.254.169.254 with error 500.
I am using isolated metadata true in my dhcp conf, and in the dhcp namespace
port 80 is listening.
Please, can anyone help me?
Regards
Ignazio
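
For reference, the isolated metadata option mentioned above is the one set in
the DHCP agent's config, presumably along these lines (a sketch of
dhcp_agent.ini):

  [DEFAULT]
  enable_isolated_metadata = true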
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question

2018-11-12 Thread Slawomir Kaplonski
Hi,

You can choose which subnet (and even IP address) should be used, see the
„fixed_ips” field in [1].
If You do not provide anything, Neutron will choose for You one IPv4 address
and one IPv6 address, and in both cases they will be chosen randomly from the
available IPs across all subnets.

[1] 
https://developer.openstack.org/api-ref/network/v2/?expanded=create-port-detail#create-port
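
For example, one way to pin the subnet is to pre-create the port and boot from
it (a sketch; the subnet ID is taken from the listing quoted below, the other
names are placeholders):

  openstack port create --network private \
      --fixed-ip subnet=b3ee7cad-c672-4172-a183-8e9f069bea31 port1
  openstack server create --flavor m1.small --image cirros \
      --nic port-id=port1 vm1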

> Message written by Chen CH Ji  on 12.11.2018,
> at 13:44:
> 
> I have a network created like below:
>  
> 1 network with 3 subnets (1 ipv6 and 2 ipv4); when booting, can I select the
> subnet to boot from, or will the subnet be force-selected in the order the
> subnets were created? Any document or code that can be referred to? Thanks
>  
> | fd0e2078-044d-4c5c-b114-3858631e6328 | private | a8184e4f-5165-4ea8-8ed8-b776d619af6e fd9b:c245:1aaa::/64 |
> |                                      |         | b3ee7cad-c672-4172-a183-8e9f069bea31 10.0.0.0/26         |
> |                                      |         | 9439abfd-afa2-4264-8422-977d725a7166 10.0.2.0/24         |
>  
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] new SIGs to cover use cases

2018-11-12 Thread Arkady.Kanevsky
Team,
At today's Board and joint TC and UC meetings, 2 questions came up:

  1.  Do we have, or want to create, a user community around Hybrid cloud? This
is one of the major pushes of OpenStack for the communities, with 70+% of
questionnaire responders telling us that they deploy and use hybrid cloud. We do
have Public and Private cloud SIGs, but not hybrid. That raises the question of
where we capture and drive hybrid cloud requirements.
  2.  As we target AI/ML as the 2019 target application domain, do we want to
create a SIG for it? Or do we extend the scientific community SIG to cover it?

Want to start dialog on it.
Thanks,
Arkady
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] [PackStack][Cinder] On which node OpenStack store data of each instance

2018-11-12 Thread Bernd Bausch
OpenStack stores volumes wherever you configure it to store them. On a 
disk array, an NFS server, a Ceph cluster, a dedicated storage node, a 
controller or even a compute node. And more.


My guess: Volumes on controllers or compute nodes are not a good 
solution for production systems.


By default, Packstack implements Cinder volumes as LVM volumes on the 
controller. It's probably possible to put the LVM volumes on other 
nodes, and it is definitely possible to configure a different backend 
than LVM, for example NetApp, in which case the volumes would be on a 
NetApp appliance.
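
For illustration, the LVM backend Packstack sets up corresponds roughly to a
cinder.conf along these lines (a sketch, not copied from a live system):

  [DEFAULT]
  enabled_backends = lvm

  [lvm]
  volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
  volume_group = cinder-volumes
  iscsi_helper = lioadm
  volume_backend_name = lvm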


On 11/12/2018 9:34 PM, Soheil Pourbafrani wrote:
My question is: does OpenStack store volumes somewhere other than 
the compute node?
For example, in a two-node Packstack deployment, with one node for controller 
and network and the other for compute, will an instance's volumes be stored 
on the controller or on the compute node?




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [glance] Rocky image import stuck

2018-11-12 Thread Brian Rosmaita
On 11/7/18 8:46 AM, Bernd Bausch wrote:
> Does the new image import process work? Am I missing something? Uploaded
> images stay in an intermediate state, either "uploading" or "importing",
> and never become "active". Where should I look?

Apologies for the late reply to your question.  For anyone else with a
similar question, I replied to your thread on the general list:

http://lists.openstack.org/pipermail/openstack/2018-November/047186.html

> 
> On a stable/Rocky Devstack, I do:
> 
> openstack image create --disk-format qcow2 myimg
> 
> Image status is "queued", as expected.
> 
> glance image-stage --file devstack/files/Fedora...qcow2 --progress
> IMAGE_ID
> 
> Image status is "uploading". A copy of the image is in
> /tmp/staging/IMAGE_ID.
> 
> glance image-import --import-method glance-direct IMAGE_ID
> 
> Sometimes the status remains "uploading", sometimes it turns to
> "importing", never "active".
> 
> glance-api log grep'd for the image ID:
> 
> Nov 07 18:51:36 rocky devstack@g-api.service[1033]: INFO
> glance.common.scripts.image_import.main [None
> req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] Task
> ec4b36fd-dece-4f41-aa8d-337d01c239f1: Got image data uri
> file:///tmp/staging/72a6d7d0-a538-4922-95f2-1649e9702eb2 to be imported
> Nov 07 18:51:37 rocky devstack@g-api.service[1033]: DEBUG
> glance_store._drivers.swift.store [None
> req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] Adding image
> object '72a6d7d0-a538-4922-95f2-1649e9702eb2' to Swift {{(pid=2250) add
> /usr/local/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py:941}}
> Nov 07 18:51:45 rocky devstack@g-api.service[1033]: DEBUG swiftclient
> [None req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] REQ: curl -i
> http://192.168.1.201:8080/v1/AUTH_9495609cff044252965f8c3e5e86f8e0/glance/72a6d7d0-a538-4922-95f2-1649e9702eb2-1
> -X PUT -H "X-Auth-Token: gABb4rWowjLQ..." {{(pid=2250) http_log
> /usr/local/lib/python2.7/dist-packages/swiftclient/client.py:167}}
> Nov 07 18:51:45 rocky devstack@g-api.service[1033]: DEBUG
> glance_store._drivers.swift.store [None
> req-7a747213-c160-4423-b703-c6cad15b9217 admin admin] Wrote chunk
> 72a6d7d0-a538-4922-95f2-1649e9702eb2-1 (1/?) of length 20480 to
> Swift returning MD5 of content: 5139500edbb5814a1351100d162db333
> {{(pid=2250) add
> /usr/local/lib/python2.7/dist-packages/glance_store/_drivers/swift/store.py:1024}}
> 
> And then nothing. So it does send a 200MB chunk to Swift. I can see it
> on Swift, too. But it stops after the first chunk and forgets to send
> the rest.
> 
> After I tried that a few times, now it doesn't even upload the first
> chunk. Nothing in Swift at all. No error in the Glance API log either.
> 
> Same problem with the /image-upload-via-import /command. I also tried
> the /web-download /import method; same result.
> 
> In all these cases, the image remains in a non-active state forever, i.e.
> for an hour or so, at which point I lose patience and delete it.
> 
> "Classic" upload works (openstack image create --file). The log
> file then shows the expected chunk uploads to Swift.
> 
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [glance] task in pending state, image in uploading state

2018-11-12 Thread Brian Rosmaita
On 11/12/18 5:07 AM, Bernd Bausch wrote:
> Trying Glance's new import process, my images are all stuck in status
> uploading (both methods glance-direct and web-download).
> 
> I can see that there are tasks for those images; they are pending. The
> Glance API log doesn't contain anything that clues me in (debug logging
> is enabled).
> 
> The source code is too involved for my feeble Python and OpenStack
> Internals skills.
> 
> How can I find out what blocks the tasks?
> 
> This is a stable Rocky Devstack without any customization of the Glance
> config.
> 

The tasks engine Glance uses to facilitate the "new" (experimental in
Pike, current in Queens) image import process does not work when Glance
is deployed as a WSGI application using uWSGI [0]; as you observed, the
tasks remain stuck in 'pending'.  You can apply this patch [1] to your
devstack Glance and restart devstack@g-api, and image import should work
without additional glance-api changes (the patch applied cleanly last
time I checked, which was a Stein-1 milestone devstack; it should apply
cleanly to your stable Rocky devstack).  You may also want to take a
look at the Glance admin guide [2] to see what configuration options are
available.

[0]
https://docs.openstack.org/releasenotes/glance/queens.html#relnotes-16-0-0-stable-queens-known-issues
[1] https://review.openstack.org/#/c/545483/
[2]
https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html
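
For example, on the devstack node something along these lines should do it
(the /opt/stack path and the git-review tool are assumptions):

  cd /opt/stack/glance
  git review -d 545483        # fetch the patch from review.openstack.org
  sudo systemctl restart devstack@g-api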

> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Retire of openstack-ansible-os_monasca-ui

2018-11-12 Thread Mohammed Naser
+1

On Mon, Nov 12, 2018 at 2:14 PM Kaio Oliveira wrote:

> Hi everyone,
>
> As part of the process of retiring the os_monasca-ui role from the
> openstack-ansible project, I'm announcing here on the ML that this role
> will be retired, because there's no reason to maintain it anymore.
> This has been discussed with the previous and the current
> OpenStack-Ansible PTL.
>
> The monasca-ui plugin will be handled within the os_horizon role in openstack-ansible.
>
> Best regards,
> Kaio
>


-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set

2018-11-12 Thread Satish Patel
Mike,

I had the same issue a month ago when I rolled out SR-IOV in my cloud, and this 
is what I did to solve it. Set the following property on the flavor:

hw:numa_nodes=2

It will spread the instance's vCPUs across NUMA nodes; yes, there will be a 
little penalty, but if you tune your application accordingly you are good.
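
For example, with the openstack CLI (the flavor name is a placeholder):

  openstack flavor set <flavor> --property hw:numa_nodes=2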

Yes, this is a bug. I have already opened a ticket, and I believe folks are 
working on it, but it's not a simple fix. They may release a new feature in a 
coming OpenStack release.

Sent from my iPhone

> On Nov 11, 2018, at 9:25 PM, Mike Joseph  wrote:
> 
> Hi folks,
> 
> It appears that the numa_policy attribute of a PCI alias is ignored for 
> flavors referencing that alias if the flavor also has hw:cpu_policy=dedicated 
> set.  The alias config is:
> 
> alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", 
> "product_id": "1004", "numa_policy": "preferred" }
> 
> And the flavor config is:
> 
> {
>   "OS-FLV-DISABLED:disabled": false,
>   "OS-FLV-EXT-DATA:ephemeral": 0,
>   "access_project_ids": null,
>   "disk": 10,
>   "id": "221e1bcd-2dde-48e6-bd09-820012198908",
>   "name": "vm-2",
>   "os-flavor-access:is_public": true,
>   "properties": "hw:cpu_policy='dedicated', pci_passthrough:alias='mlx:1'",
>   "ram": 8192,
>   "rxtx_factor": 1.0,
>   "swap": "",
>   "vcpus": 2
> }
> 
> In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 16 
> VFs configured.  We wish to expose these VFs to VMs that schedule on the 
> host.  However, the NIC is in NUMA region 0 which means that only half of the 
> compute node's CPU cores would be usable if we required VM affinity to the 
> NIC's NUMA region.  But we don't need that, since we are okay with 
> cross-region access to the PCI device.
> 
> However, we do need CPU pinning to work, in order to have efficient cache 
> hits on our VM processes.  Therefore, we still want to pin our vCPUs to 
> pCPUs, even if the pins end up on a NUMA region opposite of the NIC.  The 
> spec for numa_policy seems to indicate that this is exactly the intent of the 
> option:
> 
> https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html
> 
> But, with the above config, we still get PCI affinity scheduling errors:
> 
> 'Insufficient compute resources: Requested instance NUMA topology together 
> with requested PCI devices cannot fit the given host NUMA topology.'
> 
> This strikes me as a bug, but perhaps I am missing something here?
> 
> Thanks,
> MJ
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Retire of openstack-ansible-os_monasca-ui

2018-11-12 Thread Kaio Oliveira
Hi everyone, 

As part of the process of retiring the os_monasca-ui role from the 
openstack-ansible project, I'm announcing here on the ML that this role will be 
retired, because there's no reason to maintain it anymore. 
This has been discussed with the previous and the current OpenStack-Ansible 
PTL. 

The monasca-ui plugin will be handled within the os_horizon role in openstack-ansible. 

Best regards, 
Kaio 
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [PackStack][Cinder] On which node OpenStack store data of each instance

2018-11-12 Thread Soheil Pourbafrani
Hi,

My question is: does OpenStack store volumes somewhere other than
the compute node?
For example, in a two-node Packstack deployment, with one node for controller
and network and the other for compute, will an instance's volumes be stored
on the controller or on the compute node?
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [nova][neutron] boot server with more than one subnet selection question

2018-11-12 Thread Chen CH Ji
I have a network created like below:
 
1 network with 3 subnets (1 IPv6 and 2 IPv4). When booting, can I select which subnet to boot from, or will the subnet be forcibly selected in the order the subnets were created? Is there any document or code I can refer to? Thanks
 
| fd0e2078-044d-4c5c-b114-3858631e6328 | private | a8184e4f-5165-4ea8-8ed8-b776d619af6e fd9b:c245:1aaa::/64 |
|                                      |         | b3ee7cad-c672-4172-a183-8e9f069bea31 10.0.0.0/26         |
|                                      |         | 9439abfd-afa2-4264-8422-977d725a7166 10.0.2.0/24         |
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[OpenStack-Infra] Meeting up Tuesday Evening for a Team Dinner (7pm outside main doors of citycube, near reg desk)

2018-11-12 Thread Clark Boylan
Hello everyone,

I'd mentioned last week that we could try and cobble together an informal team 
dinner Tuesday night. The marketplace mixer runs until 7:30pm Tuesday night. 
Why don't we pop out of that a little early and meet at 7:00 pm Tuesday night 
outside the main doors of the City Cube (just outside the registration desk)?

From there we can take the S-Bahn back towards town and find one or more places 
that will seat us. On an effort level, informal and going with what works is 
what I have in mind. I think we should be able to find a Gasthaus/brewpub type 
setup that will work.

I'm totally open to location ideas if anyone wants to suggest them too.

Hope to see you there,
Clark

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

[openstack-dev] [neutron] Cancelling weekly meeting on November 12th

2018-11-12 Thread Miguel Lavalle
Dear Neutron Team,

Due to the OpenStack Summit in Berlin and the activities around it, let's
cancel the weekly IRC meeting on November 12th. We will resume normally on
the 20th.

Best regards

Miguel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [glance] task in pending state, image in uploading state

2018-11-12 Thread Bernd Bausch
Trying Glance's new import process, my images are all stuck in status 
uploading (both methods glance-direct and web-download).


I can see that there are tasks for those images; they are pending. The 
Glance API log doesn't contain anything that clues me in (debug logging 
is enabled).
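
For reference, the tasks and their state can be listed with the glance CLI,
assuming your user can reach the tasks API:

  glance task-list
  glance task-show <task-id>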


The source code is too involved for my feeble Python and OpenStack 
Internals skills.


How can I find out what blocks the tasks?

This is a stable Rocky Devstack without any customization of the Glance 
config.




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [kolla][ironic] baremetal network

2018-11-12 Thread Mark Goddard
Hi Manuel,

You can configure the neutron tenant network types in kolla-ansible via the
'neutron_tenant_network_types' variable in globals.yml. It's a
comma-separated list. The default for that variable is vxlan; it's expected
that you set it to match your requirements.

The 'ironic_cleaning_network' variable should be the name of a network in
neutron to be used for cleaning, rather than an interface name. If you're
using flat networking, this will just be 'the' network.
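
For example, in globals.yml (the cleaning network name is a placeholder):

  neutron_tenant_network_types: "vxlan,flat"
  ironic_cleaning_network: "cleaning-net"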

Regards,
Mark

On Mon, 12 Nov 2018 at 01:23, Manuel Sopena Ballesteros <
manuel...@garvan.org.au> wrote:

> Dear Kolla-ansible team,
>
>
>
> I am trying to deploy ironic through kolla-ansible. According to ironic
> documentation
> https://docs.openstack.org/ironic/rocky/install/configure-networking.html
> we need a bare metal network with tenant_network_types = flat. However
> kolla-ansible configures:
>
>
>
> [root@TEST-openstack-controller ~]# grep -R -i "baremetal" -R /etc/kolla/*
>
> /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini:mechanism_drivers =
> openvswitch,baremetal,l2population
>
> /etc/kolla/neutron-server/ml2_conf.ini:mechanism_drivers =
> openvswitch,baremetal,l2population
>
>
>
>
>
> [root@TEST-openstack-controller ~]# grep -R -i "tenant_network_types" -R
> /etc/kolla/*
>
> /etc/kolla/neutron-openvswitch-agent/ml2_conf.ini:tenant_network_types =
> vxlan
>
> /etc/kolla/neutron-server/ml2_conf.ini:tenant_network_types = vxlan
>
>
>
> This is my filtered globals.yml:
>
>
>
> [root@openstack-deployment ~]# grep -E -i "(^[^#]|ironic)"
> /etc/kolla/globals.yml
>
> ---
>
> openstack_release: "rocky"
>
> kolla_internal_vip_address: "192.168.1.51"
>
> neutron_external_interface: "ens161"
>
> enable_cinder: "yes"
>
> enable_cinder_backend_nfs: "yes"
>
> #enable_horizon_ironic: "{{ enable_ironic | bool }}"
>
> enable_ironic: "yes"
>
> #enable_ironic_ipxe: "no"
>
> #enable_ironic_neutron_agent: "no"
>
> #enable_ironic_pxe_uefi: "no"
>
> glance_enable_rolling_upgrade: "no"
>
> # Ironic options
>
> # following value must be set when enable ironic, the value format
>
> ironic_dnsmasq_dhcp_range: "192.168.1.100,192.168.1.150"
>
> # PXE bootloader file for Ironic Inspector, relative to /tftpboot.
>
> ironic_dnsmasq_boot_file: "pxelinux.0"
>
> ironic_cleaning_network: "ens224"
>
> #ironic_dnsmasq_default_gateway: 192.168.1.255
>
> # Configure ironic upgrade option, due to currently kolla support
>
> # two upgrade ways for ironic: legacy_upgrade and rolling_upgrade
>
> # The variable "ironic_enable_rolling_upgrade: yes" is meaning
> legacy_upgrade
>
> #ironic_enable_rolling_upgrade: "yes"
>
> #ironic_inspector_kernel_cmdline_extras: []
>
> tempest_image_id:
>
> tempest_flavor_ref_id:
>
> tempest_public_network_id:
>
> tempest_floating_network_name:
>
>
>
> ens224 is my management network, which admins use to SSH to, install, and
> manage the physical nodes.
>
>
>
> Any idea why tenant_network_types = vxlan and not flat as suggested by
> the ironic documentation?
>
>
>
> Thank you
>
>
>
> *Manuel Sopena Ballesteros *| Big data Engineer
> *Garvan Institute of Medical Research *
> The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
> *T:* + 61 (0)2 9355 5760 | *F:* +61 (0)2 9295 8507 | *E:*
> manuel...@garvan.org.au
>
>
> NOTICE
> Please consider the environment before printing this email. This message
> and any attachments are intended for the addressee named and may contain
> legally privileged/confidential/copyright information. If you are not the
> intended recipient, you should not read, use, disclose, copy or distribute
> this communication. If you have received this message in error please
> notify us at once by return email and then delete both messages. We accept
> no liability for the distribution of viruses or similar in electronic
> communications. This notice should not be removed.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] New Four Opens Project

2018-11-12 Thread Chris Hoge
Earlier this year, the OpenStack Foundation staff had the opportunity to 
brainstorm some ideas about how to express the values behind The Four Opens and 
how they are applied in practice. As the Foundation grows in scope to include 
new strategic focus areas and new projects, we felt it was important to provide 
explanation and guidance on the principles that guide our community.

We’ve collected these notes and have written some seeds to start this document. 
I’ve staged this work on GitHub and have prepared a review to move the work 
into OpenStack hosting, turning this over to the community to help guide and 
shape the document.

This is very much a work in progress, but we have a goal to polish this up and 
make it an important document that captures our vision and values for the 
OpenStack development community, guides the establishment of governance for new 
top-level projects, and is a reference for the open-source development 
community as a whole.

I also want to be clear that the original Four Opens, as listed in the 
OpenStack governance page, is an OpenStack TC document. This project doesn’t 
change that. Instead, it is meant to be applied to the Foundation as a whole 
and be a reference to the new projects that land both as pilot top-level 
projects and projects hosted by our new infrastructure efforts.

Thanks to all of the original authors of the Four Opens for your visionary work 
that started this process, and thanks in advance to the community members who 
will continue to grow and evolve this document.

Chris Hoge
OpenStack Foundation

Four Opens: https://governance.openstack.org/tc/reference/opens.html
New Project Review Patch: https://review.openstack.org/#/c/617005/
Four Opens Document Staging: https://github.com/hogepodge/four-opens

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] [radosgw] Adding user type attribute in radosgw admin operations API to simplify quota operations

2018-11-12 Thread Jose Castro Leon
Dear all,

At CERN, we are currently adding the radosgw component to our OpenStack-based
private cloud offering. In order to ease the integration with lifecycle
management, we are proposing to make it possible to create users with the
keystone type through the radosgw Admin Ops API.

During the integration process, we observed that users are created upon the
first user request to the radosgw. The quota configuration is taken from the
configured default values, and it can only be modified once the user has
been created.

For the lifecycle management of resources, we are using OpenStack
Mistral, which orchestrates the steps needed to configure a project
from creation until it is ready to be offered to the user. In this workflow,
we configure the services that the project has access to and the quotas
associated with them.

For the radosgw component we need to consider two different events:
provisioning and decommissioning of resources. On the cleanup /
decommissioning side, every bit is covered by the Admin Ops API.

Here comes our problem.

On the provisioning side, we could not apply quotas to users that have
not yet been created by radosgw (as it waits for the first user
request). Once they are created, they have a type attribute with the
value keystone.

So we would like to be able to create users on radosgw with the
keystone type, well before the first user request, by adding the
possibility to specify the type on user creation.
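
As a sketch of the flow we have in mind (the --type flag is the proposed
addition and does not exist yet; the quota commands already exist):

  # proposed: pre-create the keystone-backed user before any request arrives
  radosgw-admin user create --uid=<keystone-project-id> \
      --display-name=<project-name> --type=keystone
  # quotas could then be applied immediately
  radosgw-admin quota set --uid=<keystone-project-id> --quota-scope=user \
      --max-size=107374182400
  radosgw-admin quota enable --uid=<keystone-project-id> --quota-scope=user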

We think this addition has added value for other OpenStack operators
that are using radosgw for their S3/Swift offering, and it gives them
flexibility in the lifecycle management of the resources contained in
radosgw. We have submitted a feature request for this particular
addition to the Ceph tracker, and we would like to know if you are
interested in this feature as well.


Cheers,
Jose Castro Leon
CERN Cloud Infrastructure Team
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [openstack-ansible] meeting cancelled

2018-11-12 Thread Mohammed Naser
Hi everyone,

Due to most of us being at the OpenStack Summit, we're cancelling the
meeting tomorrow.

Thanks everyone and see you in Berlin.

Regards,
Mohammed

-- 
Mohammed Naser — vexxhost
-
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mna...@vexxhost.com
W. http://vexxhost.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][manila] Cinder and Friends Dinner at Berlin Summit ...

2018-11-12 Thread Jay S Bryant

Ivan,

Yeah, I saw that was the case, but it seems like there is no point in 
time when there isn't a conflict.  We need to get some food at some point 
so anyone who wants to join can, and then we can head to the party if 
people want.


Jay


On 11/10/2018 8:07 AM, Ivan Kolodyazhny wrote:

Thanks for organizing this, Jay,

Just in case if you missed it, Matrix Party hosted by Trilio + Red Hat 
will be on Tuesday too.



Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Thu, Nov 8, 2018 at 12:43 AM Jay S Bryant wrote:


All,

I am working on scheduling a dinner for the Cinder team (and our
extended family that work on and around Cinder) during the Summit
in Berlin.  I have created an etherpad for people to RSVP for
dinner [1].

It seemed like Tuesday night after the Marketplace Mixer was the
best time for most people.

So, it will be a little later dinner ... 8 pm.  Here is the place:

Location: http://www.dicke-wirtin.de/
Address: Carmerstraße 9, 10623 Berlin, Germany

It looks like the kind of place that will fit our usual group.

If planning to attend please add your name to the etherpad and I
will get a reservation in over the weekend.

Hope to see you all on Tuesday!

Jay

[1] https://etherpad.openstack.org/p/BER-cinder-outing-planning

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] No meeting this week ...

2018-11-12 Thread Jay S Bryant

Team,

Just a friendly reminder that we will not have our weekly meeting this 
week due to the OpenStack Summit.


Hope to see some of you here.  Otherwise, talk to you next week!

Thanks,

Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Meetings cancelled this and next week - resumes November 26th

2018-11-12 Thread Julia Kreger
Greetings everyone!

We're cancelling this week's meeting and next week's meeting due to the
OpenStack Summit and the US holidays the following week, when some of our core
reviewers will also be on vacation.

If there are any questions, please feel free to ask in #openstack-ironic.

See you all in IRC.

-Julia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev