Re: [openstack-dev] [Neutron][Dragonflow] - Documentation

2015-12-13 Thread Haifeng Li
Good work~

2015-12-13 15:38 GMT+08:00 Gal Sagie :

> Hello All,
>
> We have recently uploaded a large set of documents and diagrams to the
> Dragonflow repository in order to show what we are doing and make it as
> simple as we can for new contributors and users to start using Dragonflow.
>
> I have created a wiki page for Dragonflow [1].
>
> Some recommended documents:
>
> - Dragonflow mission statement and high level architecture [2]
> - Dragonflow pluggable DB architecture [3]
> - Dragonflow Distributed DHCP implementation [4]
> - Dragonflow OpenFlow Pipeline explained [5]
>
> Whether you are new to the project or already familiar with it, I think
> you will find these interesting to read.
>
> [1] https://wiki.openstack.org/wiki/Dragonflow
> [2]
> http://docs.openstack.org/developer/dragonflow/distributed_dragonflow.html
> [3] http://docs.openstack.org/developer/dragonflow/pluggable_db.html
> [4] http://docs.openstack.org/developer/dragonflow/distributed_dhcp.html
> [5] http://docs.openstack.org/developer/dragonflow/pipeline.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] neutron metadata-agent HA

2015-12-13 Thread Gary Kotton





On 12/12/15, 10:44 PM, "Assaf Muller"  wrote:

>The neutron metadata agent is stateless. It takes requests from the
>metadata proxies running in the router namespaces and moves the
>requests on to the nova server. If you're using HA routers, start the
>neutron-metadata-agent on every machine the L3 agent runs, and just
>make sure that the metadata-agent is restarted in case it crashes and
>you're done.
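The restart advice in the quoted paragraph is normally handled by the init
system (e.g. systemd's Restart= directive or upstart's respawn); purely as an
illustrative sketch, not how anyone should deploy it, a supervisor loop
amounts to:

```python
import subprocess
import sys
import time

def supervise(cmd, max_restarts=3, backoff=0.01):
    """Run cmd, restarting it on non-zero exit up to max_restarts times."""
    restarts = 0
    while True:
        rc = subprocess.call(cmd)
        if rc == 0 or restarts >= max_restarts:
            return rc
        restarts += 1
        time.sleep(backoff)  # brief pause before restarting the crashed agent

# Demo with short-lived commands; a real metadata agent is long-running.
print(supervise([sys.executable, "-c", "import sys; sys.exit(0)"]))  # 0
```

Because the agent is stateless, nothing beyond restarting the process is
needed for recovery.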

So does this mean that it could be a single point of failure?

>Nothing else you need to do.
>
>On Fri, Dec 11, 2015 at 3:24 PM, Fabrizio Soppelsa
> wrote:
>>
>> On Dec 10, 2015, at 12:56 AM, Alvise Dorigo 
>> wrote:
>>
>> So my question is: is there any progress on this topic? Is there a way
>> (something like a cronjob script) to make the metadata-agent redundant
>> without involving clustering software like Pacemaker/Corosync?
>>
>>
>> What is the reason for such a dirty solution instead of relying on
>> Pacemaker?
>>
>> I'm not aware of such initiatives - I just checked the blueprints in
>> Neutron and found none relevant. I suggest filing a proposal on the
>> corresponding Launchpad page, elaborating your idea.
>>
>> F.
>>


Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-13 Thread Yuriy Taraday
On Sun, Dec 13, 2015 at 12:14 PM Shinobu Kinjo  wrote:

> What is the current status of this failure?
>
>  > 2015-12-13 08:55:04.863 | ValueError: need more than 1 value to unpack
>

It shouldn't reappear in the gate because the CI images have been reverted
to tox 2.2.1.
It can still be reproduced locally if tox 2.3.0 is installed.
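For what it's worth, the traceback line quoted above is the generic failure
mode of tuple unpacking receiving too few values; a minimal illustration
(hypothetical parsing code, not the actual tox source):

```python
def parse_pair(item):
    # Expects "key=value"; unpacking raises ValueError on a bare token,
    # which is exactly "need more than 1 value to unpack" on Python 2.
    key, value = item.split("=")
    return key, value

print(parse_pair("basepython=python2.7"))
try:
    parse_pair("py27")
except ValueError as exc:
    print("unpack failed:", exc)
```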


[openstack-dev] [astara][requirements] astara-appliance has requirements not in global-requirements

2015-12-13 Thread Andreas Jaeger

Astara team,

The requirements proposal job complains about astara-appliance with:
'gunicorn' is not in global-requirements.txt

Please get this requirement into global-requirements or remove it.

Details:
https://jenkins.openstack.org/job/propose-requirements-updates/602/consoleFull
http://docs.openstack.org/developer/requirements/
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[openstack-dev] [Neutron][Dragonflow] - IRC Meeting tomorrow (12/14) - 0900 UTC

2015-12-13 Thread Gal Sagie
Hello All,

We will have an IRC meeting tomorrow (Monday, 12/14) at 0900 UTC
in #openstack-meeting-4

Please review the expected meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/Dragonflow

You can view last meeting action items and logs here:
http://eavesdrop.openstack.org/meetings/dragonflow/2015/dragonflow.2015-12-07-08.59.html

Please update the agenda if there is any subject you would like to discuss.

Thanks
Gal.


Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-13 Thread Shinobu Kinjo
What is the current status of this failure?

 > 2015-12-13 08:55:04.863 | ValueError: need more than 1 value to unpack

Thank you
Shinobu


On Sun, Dec 13, 2015 at 4:48 AM, Yuriy Taraday  wrote:

> On Sat, Dec 12, 2015 at 10:27 PM Jeremy Stanley  wrote:
>
>> On 2015-12-12 19:00:23 + (+), Yuriy Taraday wrote:
>> > I think it should be a good first step in right direction. For example,
>> > with today's issue it would break gate for tempest itself only since all
>> > other jobs would have preinstalled tox reverted to one mentioned in
>> > upper-constraints.
>> [...]
>>
>> Other way around. It would force DevStack to downgrade tox if the
>> existing version on the worker were higher. Pretty much no other
>> jobs install tox during the job, so they rely entirely on the one
>> present on the system being correct and an entry for tox in
>> upper-constraints.txt wouldn't help them at all, whether they're
>> using that file to constrain their requirements lists or not (since
>> tox is not present in any of our projects' requirements lists).
>>
>
> By "other" jobs I meant all jobs that use devstack to install tempest.
> That seems to be all jobs in all projects except probably tempest itself.
>
> As for jobs that don't use devstack but only run tox, I suggest we add a
> step to adjust the tox version according to upper-constraints as well.
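The suggested step — comparing the preinstalled tox against a version pinned
in upper-constraints — boils down to a version-tuple comparison. A hedged
sketch (the 2.2.1 pin mirrors this thread; the helper names are invented):

```python
def version_tuple(version):
    """'2.3.0' -> (2, 3, 0); adequate for plain x.y.z release strings."""
    return tuple(int(part) for part in version.split("."))

def needs_downgrade(installed, pinned="2.2.1"):
    # True when the preinstalled tox is newer than the constrained one.
    return version_tuple(installed) > version_tuple(pinned)

print(needs_downgrade("2.3.0"))  # True
print(needs_downgrade("2.2.1"))  # False
```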
>
> Also, the constraints list is built from pip installing everything
>> in global-requirements.txt into a virtualenv, so if tox is not a
>> direct or transitive requirement then it will end up dropped from
>> upper-constraints.txt on the next automated proposal in review.
>>
>
> Ok, will fix that in my CR.
>


-- 
Email:
shin...@linux.com
GitHub:
shinobu-x 
Blog:
Life with Distributed Computational System based on OpenSource



[openstack-dev] [Neutron] HELP CONFIRM OR DISCUSS: How to add extension common attributes into db table 'standardattributes'

2015-12-13 Thread zhaobo
Hi guys,
Could you give some ideas about how to add Neutron common attributes to the
db table 'standardattributes'? Timestamps, for example, are a common
attribute that should be added.
The 'standardattributes' table was introduced by [1], and more common
Neutron attributes are likely to be added to it in the future.


I have found a way to add timestamps based on a subscribe/notify mechanism,
by defining a new subscription resource named 'standardattr'.
The code is in [2][3]. I took this approach because garyk is moving extension
resources out of the base code (db_base_plugin_base / db_base_plugin_v2) into
extensions, and I understand the base code will no longer be open to such
modifications. So I decided to add a mechanism to the base code that does not
need to change for every new attribute: if we want to add other attributes to
this table, each one only has to provide its own field-addition function in
the db model. If this sounds reasonable, please add [2][3] to your review
list.
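The subscribe/notify idea can be sketched as a small callback registry
(illustrative only — not Neutron's actual callback API nor the patch's real
function names): each common attribute registers a hook that fills in its own
fields when a record gets its standard attributes.

```python
_callbacks = []

def subscribe(callback):
    """Register a hook that adds one common attribute's fields."""
    _callbacks.append(callback)

def notify(record):
    """Run every registered hook against a new resource record."""
    for callback in _callbacks:
        callback(record)

def add_timestamps(record):
    # The timestamp attribute only knows about its own columns; the base
    # code never needs to change when new attributes subscribe.
    record.setdefault("created_at", "2015-12-13T00:00:00Z")
    record.setdefault("updated_at", "2015-12-13T00:00:00Z")

subscribe(add_timestamps)
network = {"name": "net1"}
notify(network)
print(sorted(network))  # ['created_at', 'name', 'updated_at']
```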


Originally, I thought I could simply set/get a common attribute through
db_model.standardattr (whichever column I had added to the table and models),
e.g. network_model.standardattr.created_at/updated_at. I implemented it that
way in earlier patch sets of [3]. So I still have doubts about whether my
current approach is correct or needlessly complicated, and I would appreciate
your guidance.


If anything is unclear, please feel free to ask or correct me. I am still not
sure whether my approach is right and acceptable to the community. If it is
too complex or not useful, which way should I follow instead? Any suggestions
are welcome. Thanks.


And one more question:
I think making timestamps an extension is good, but there are some problems
with using it that way.


1. Since this bp expands the db tables of Neutron core resources, the db will
store timestamp data whether or not the plugin in use supports timestamps.
My reasoning: if a plugin has timestamps disabled, it won't store the data,
but as soon as the plugin enables the feature, the db starts storing
timestamps. If the feature is enabled/disabled several times and resources
are created in between, the db ends up with inconsistent data; once
timestamps are finally enabled, users will be confused about why resources
created while the feature was disabled lack correct timestamps. It also leads
to odd behavior, such as a long-existing resource having no timestamp at all
once the feature is actually turned on.
2. The mtu bp is similar: as I remember, mtu started out as a core attribute
of networks. It has since been moved to an extension, yet the mtu field is
still stored in the db whether or not the plugin supports mtu.
3. In Nova and other projects, timestamps are treated as core attributes of
resources such as instances and action lists. Neutron treats them as an
extension, so I think Neutron's timestamps should be aligned with how the
other projects use them.
4. Beyond that, timestamps enable incremental queries: at large scale, with
many messages and resources in the system, nobody wants to list information
by fetching all of it.


So my doubt is whether timestamp fields should be stored in the db regardless
of whether the plugin supports timestamps, with users seeing them in the
returned info only when the plugin does support them. I would appreciate
suggestions that help me move the timestamp work forward. Thanks.


[1] https://review.openstack.org/#/c/222079/
[2] https://review.openstack.org/#/c/251193/
[3] https://review.openstack.org/#/c/213586/


[openstack-dev] [all] broken gitreviews after stackforge rename

2015-12-13 Thread Andreas Jaeger
A lot of projects still have broken .gitreview files after the stackforge
rename two months ago ;(. This makes contributing difficult for new users -
and breaks all our automation jobs.


Please check 
https://review.openstack.org/#/q/status:open++topic:stackforge-retirement,n,z 
and approve changes for your projects - also in stable branches.
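Mechanically, the proposed changes just repoint each project's .gitreview at
the new namespace; a sketch of the transformation (the project name here is
invented for illustration):

```python
old = (
    "[gerrit]\n"
    "host=review.openstack.org\n"
    "port=29418\n"
    "project=stackforge/example-project.git\n"
)
# The rename moved repositories from the stackforge/ to openstack/ namespace.
new = old.replace("project=stackforge/", "project=openstack/")
print(new)
```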


Incomplete list of projects:
* Lots of fuel projects
* stacktach*
* blazar*
* networking-mlnx
* compass*
* networking-nec
* nova-docker
* cloud-init
* powervc-driver
* sticks*
* surveil*
* cloudkitty*
* anvil
* rack*

thanks,
Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-13 Thread Shinobu Kinjo
Thank you for your confirmation.
Something installed tox on my CentOS 7 (3.10.0-229.el7.x86_64) machine as a
dependency. I need to investigate further.

 Shinobu

On Sun, Dec 13, 2015 at 6:47 PM, Yuriy Taraday  wrote:

> On Sun, Dec 13, 2015 at 12:14 PM Shinobu Kinjo 
> wrote:
>
>> What is the current status of this failure?
>>
>>  > 2015-12-13 08:55:04.863 | ValueError: need more than 1 value to unpack
>>
>
> It shouldn't reappear in the gate because the CI images have been reverted
> to tox 2.2.1.
> It can still be reproduced locally if tox 2.3.0 is installed.
>


-- 
Email:
shin...@linux.com
GitHub:
shinobu-x 
Blog:
Life with Distributed Computational System based on OpenSource



[openstack-dev] [Kuryr] IRC Meeting - Tuesday 0300 UTC (12/15)

2015-12-13 Thread Gal Sagie
Hello All,

I have updated the agenda for the upcoming Kuryr IRC meeting [1]
Please review and add any additional topics you might want to cover.

Also, please go over last meeting's action items [2]; there are still
patches (IPAM) that are looking for review love :)

Since this is the week we hold the meeting at the alternating time, I won't
be able to attend (and I believe Toni won't be able to either).
Taku/banix, please run the meeting.

banix, I would love an update on the team/people who will be working on
testing/CI for Kuryr; I think this is a top priority for us this cycle.

[1] https://wiki.openstack.org/wiki/Meetings/Kuryr
[2]
http://eavesdrop.openstack.org/meetings/kuryr/2015/kuryr.2015-12-07-15.00.html


Re: [openstack-dev] [infra][release][all] Automatic .ics generation for OpenStack's and project's deadlines

2015-12-13 Thread Louis Taylor
On Thu, Dec 10, 2015 at 06:20:44PM +, Flavio Percoco wrote:
> Greetings,
> 
> I'd like to explore the possibility of having .ics generated - pretty
> much the same way we generate it for irc-meetings - for the OpenStack
> release schedule and project's deadlines. I believe just 1 calendar
> would be enough but I'd be ok w/  a per-project .ics too.
> 
> With the new home for the release schedule, and it being a good place
> for projects to add their own deadlines as well, I believe it would be
> good for people that use calendars to have these .ics being generated
> and linked there as well.
> 
> Has this been attempted? Any objections? Is there something I'm not
> considering?

I had a bit of time and started hacking up a simple version of this:

https://github.com/kragniz/release-schedule-generator

The output of this may or may not be standards-compliant, but Google
Calendar appears to accept it. You can see example output here:

https://kragniz.eu/pub/schedule.ics

There's currently no support for project-specific events or RST output, but
these can be added later if people think the current implementation isn't
too bad.
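Emitting a schedule entry in iCalendar form needs very little; a stdlib-only
sketch (the milestone name and date are invented, and a real feed should add
fields like DTSTAMP for strict RFC 5545 compliance):

```python
def make_event(uid, summary, date):
    """Build a minimal all-day VEVENT; date is YYYYMMDD."""
    return "\r\n".join([
        "BEGIN:VEVENT",
        "UID:" + uid,
        "DTSTART;VALUE=DATE:" + date,
        "SUMMARY:" + summary,
        "END:VEVENT",
    ])

calendar = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//example//release-schedule//EN",
    make_event("milestone-2@example", "mitaka-2 milestone", "20160119"),
    "END:VCALENDAR",
])
print(calendar)
```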

Feel free to send feedback or (even better) pull requests in my direction if
this seems okay.

Cheers,
Louis




Re: [openstack-dev] neutron metadata-agent HA

2015-12-13 Thread Eugene Nikanorov
It is as 'single' as the active L3 router that is handling traffic at any
given point in time.

On Sun, Dec 13, 2015 at 11:13 AM, Gary Kotton  wrote:

>
>
>
>
>
> On 12/12/15, 10:44 PM, "Assaf Muller"  wrote:
>
> >The neutron metadata agent is stateless. It takes requests from the
> >metadata proxies running in the router namespaces and moves the
> >requests on to the nova server. If you're using HA routers, start the
> >neutron-metadata-agent on every machine the L3 agent runs, and just
> >make sure that the metadata-agent is restarted in case it crashes and
> >you're done.
>
> So does this mean that it could be a single point of failure?
>
> >Nothing else you need to do.
> >
> >On Fri, Dec 11, 2015 at 3:24 PM, Fabrizio Soppelsa
> > wrote:
> >>
> >> On Dec 10, 2015, at 12:56 AM, Alvise Dorigo 
> >> wrote:
> >>
> >> So my question is: is there any progress on this topic? Is there a way
> >> (something like a cronjob script) to make the metadata-agent redundant
> >> without involving clustering software like Pacemaker/Corosync?
> >>
> >>
> >> What is the reason for such a dirty solution instead of relying on
> >> Pacemaker?
> >>
> >> I'm not aware of such initiatives - I just checked the blueprints in
> >> Neutron and found none relevant. I suggest filing a proposal on the
> >> corresponding Launchpad page, elaborating your idea.
> >>
> >> F.
> >>
> >>


Re: [openstack-dev] [ansible] One or more undefined variables: 'dict object' has no attribute 'bridge'

2015-12-13 Thread Kevin Carter
Hi Mark,

What Wade said is spot on: there's a missing entry in your inventory.
From looking at your configuration, it seems you need to add this stanza
[0], or something like it, to your `openstack_user_config.yml` file under
the "provider_networks" section. This will add the required tunnel network
configuration type to your compute hosts and allow the neutron installation
to continue.

On a related note, the issue is due to an assumption we've made in the
playbook: that the "tunnel" network type is always present on compute
nodes. In your current configuration file only a single flat network was
referenced, which failed to load the required network entry into the host
variables of your inventory on your compute node. While I assume you'll
want or need tunnel networks on your compute nodes (the interface file you
shared has a "br-vxlan" device), the single-network use case seems like
something we should be able to address. If you wouldn't mind raising an
issue in Launchpad [1], I'd appreciate it.

[0] http://paste.openstack.org/show/481751/
[1] https://bugs.launchpad.net/openstack-ansible
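To illustrate the failure mode in plain Python (this is not Ansible's
internals, and the variable layout below only mimics the inventory shape):
the failing task dereferences a per-host network entry that the
single-flat-network config never populated, so the lookup dies just like the
"'dict object' has no attribute 'bridge'" error.

```python
# Hypothetical host variables; only the infra host got a tunnel entry.
hostvars = {
    "os-infra-1": {"container_networks": {
        "tunnel_address": {"bridge": "br-vxlan"}}},
    "os-compute-1": {"container_networks": {}},  # no tunnel entry at all
}

def local_ip_bridge(host):
    networks = hostvars[host]["container_networks"]
    return networks["tunnel_address"]["bridge"]  # fails if never populated

print(local_ip_bridge("os-infra-1"))  # br-vxlan
try:
    local_ip_bridge("os-compute-1")
except KeyError as exc:
    print("undefined for os-compute-1:", exc)
```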

--

Kevin Carter
IRC: cloudnull


On 12/12/2015 11:11 AM, Wade Holler wrote:
> Hi Mark,
>
> I haven't reviewed your configs yet, but if "bridge" is a valid ansible
> inventory attribute, then this error is usually caused by trying to
> reference a host that ansible hasn't checked in on / gathered facts on
> yet. At least that is what it has been in my brief experience.
>
> For example, if I wanted to reference all hosts in a "webservers" ansible
> group to build an haproxy config, but that playbook didn't apply to the
> "webservers" group, then their facts would not have been collected.
>
> Just a thought.
>
> Cheers
> Wade
>
>
> On Sat, Dec 12, 2015 at 9:34 AM Mark Korondi wrote:
>
> Hi all,
>
> Trying to set up openstack-ansible, but stuck on this point:
>
>  > TASK: [set local_ip fact (is_metal)]
> *
>  > ...
>  > fatal: [os-compute-1] => One or more undefined variables: 'dict
> object' has no
>  > attribute 'bridge'
>  > ...
>  > One or more undefined variables: 'dict object' has no attribute
> 'bridge'
>
> These are my configs:
> - http://paste.openstack.org/show/481739/
>(openstack_user_config.yml)
> - http://paste.openstack.org/show/481740/
>(/etc/network/interfaces on compute host called `os-compute-1`)
>
> I set up the eth12 veth pair interface also on the compute host as
> you can see.
> `ifup-ifdown` works without any problems reported.
>
>
> Why is it reporting an undefined bridge variable? Any ideas on my config
> are much appreciated.
>
> Mark
>
>




Re: [openstack-dev] [Keystone][Tempest] OS-INHERIT APIs were skipped by Jenkins because "os_inherit" in keystone.conf was disabled.

2015-12-13 Thread Ken'ichi Ohmichi
Hi Henry,

When this extension was added in https://review.openstack.org/#/c/35986/ ,
it was disabled by default.
Can we now enable the os_inherit extension on the Keystone side, like

keystone/common/config.py
    'os_inherit': [
-       cfg.BoolOpt('enabled', default=False,
+       cfg.BoolOpt('enabled', default=True,

?

Or, if we don't want to change the default value on the Keystone side, we
can enable the os_inherit extension on the DevStack side to test it in
Tempest.

This extension was implemented two years ago, and the API doc [1] also
covers it, so I feel it would be nice to enable it for development and
testing.

Thanks
Ken Ohmichi

---
[1]: 
http://developer.openstack.org/api-ref-identity-v3-ext.html#identity_v3_OS-INHERIT-ext




2015-12-09 18:03 GMT+09:00 Henry Nash :
> Hi Maho,
>
> So in the keystone unit tests, we flip the os_inherit flag back and forth 
> during tests to make sure it is honored correctly.  For the tempest case, I 
> don’t think you need to do that level of testing. Setting the os_inherit flag 
> to true will have no effect if you have not created any role assignments that 
> are inherited - you’ll just get the regular assignments back as normal. So 
> provided there is no test data leakage between tests (i.e. old data lying 
> around from a previous test), I think it should be safe to run tempest with 
> os_inherit switched on.
>
> Henry
>> On 9 Dec 2015, at 08:45, koshiya maho  wrote:
>>
>> Hi all,
>>
>> I pushed the patch set of OS-INHERIT API tempest (keystone v3).
>> https://review.openstack.org/#/c/250795/
>>
>> But all the API tests in the patch set were skipped, because "os_inherit"
>> in the keystone.conf of the Jenkins jobs is disabled, so they couldn't be
>> verified.
>>
>> Reference information :
>> http://logs.openstack.org/95/250795/5/check/gate-tempest-dsvm-full/fbde6d2/logs/etc/keystone/keystone.conf.txt.gz
>> #L1422
>> https://github.com/openstack/keystone/blob/master/keystone/common/config.py#L224
>>
>> The default "os_inherit" setting is disabled, and the OS-INHERIT APIs
>> need it to be enabled.
>> For keystone v3 tempest tests using OS-INHERIT, we should enable
>> "os_inherit" in the existing keystone.conf used by Jenkins.
>> Even if "os_inherit" is enabled, I think it has no effect on the other
>> tempest tests.
>>
>> Do you have any other ideas?
>>
>> Thank you and best regards,
>>
>> --
>> Maho Koshiya
>> NTT Software Corporation
>> E-Mail : koshiya.m...@po.ntts.co.jp
>>
>>
>>


Re: [openstack-dev] [openstack][nova] Microversions support for extensions without Controller

2015-12-13 Thread Zhenyu Zheng
Hi, I think for this kind of change you should register a Blueprint and
submit a spec for discussion. Sounds like it will be a bit change.

BR

On Sun, Dec 13, 2015 at 2:18 AM, Alexandre Levine  wrote:

> Hi all,
>
> os-user-data extension implements server_create method to add user_data
> for server creation. No Controller is used for this, only "class
> UserData(extensions.V21APIExtensionBase)".
>
> I want to add server_update method allowing to update the user_data.
> Obviously I have to add it as a microversioned functionality.
>
> And here is the problem: there is no information about the incoming
> request version in this code. It is available for Controllers only. But
> checking the version in controller would be too late, because the instance
> is already updated (non-generator extensions are post-processed).
>
> Can anybody guide me how to resolve this collision?
>
> Would it be possible to just retroactively add the user_data modification
> for the whole 2.1 version skipping the microversioning? Or we need to
> change nova so that request version is passed through to extension?
>
> Best regards,
>   Alex Levine
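As a sketch of the microversion gating under discussion (every name here, and
the 2.20 cutoff, is invented for illustration — nova's real plumbing
differs): the layer that knows the request's version compares it against the
feature's minimum before the update is allowed to touch the instance.

```python
MIN_USER_DATA_UPDATE = (2, 20)  # hypothetical microversion cutoff

def server_update(request_version, server, user_data=None):
    """Apply an update, rejecting user_data changes on old microversions."""
    if user_data is not None:
        if request_version < MIN_USER_DATA_UPDATE:
            # Gate BEFORE mutating, avoiding the too-late-check problem.
            raise ValueError(
                "user_data update requires microversion >= 2.20")
        server["user_data"] = user_data
    return server

server = {"id": "abc"}
print(server_update((2, 25), server, user_data="IyEvYmluL3No"))
try:
    server_update((2, 1), {"id": "xyz"}, user_data="data")
except ValueError as exc:
    print("rejected:", exc)
```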
>
>


Re: [openstack-dev] [openstack][nova] Microversions support for extensions without Controller

2015-12-13 Thread Zhenyu Zheng
Sorry, s/bit/big

On Mon, Dec 14, 2015 at 10:46 AM, Zhenyu Zheng 
wrote:

> Hi, I think for this kind of change you should register a Blueprint and
> submit a spec for discussion. Sounds like it will be a bit change.
>
> BR
>
> On Sun, Dec 13, 2015 at 2:18 AM, Alexandre Levine <
> alexandrelev...@gmail.com> wrote:
>
>> Hi all,
>>
>> os-user-data extension implements server_create method to add user_data
>> for server creation. No Controller is used for this, only "class
>> UserData(extensions.V21APIExtensionBase)".
>>
>> I want to add server_update method allowing to update the user_data.
>> Obviously I have to add it as a microversioned functionality.
>>
>> And here is the problem: there is no information about the incoming
>> request version in this code. It is available for Controllers only. But
>> checking the version in controller would be too late, because the instance
>> is already updated (non-generator extensions are post-processed).
>>
>> Can anybody guide me how to resolve this collision?
>>
>> Would it be possible to just retroactively add the user_data modification
>> for the whole 2.1 version skipping the microversioning? Or we need to
>> change nova so that request version is passed through to extension?
>>
>> Best regards,
>>   Alex Levine
>>
>>


Re: [openstack-dev] [neutron] - availability zone performance regression and discussion about added network field

2015-12-13 Thread Hirofumi Ichihara

Hi Kevin,

On 2015/12/14 11:10, Kevin Benton wrote:

Hi all,

The availability zone code added a new field to the network API that 
shows the availability zones of a network. This caused a pretty big 
performance impact to get_networks calls because it resulted in a 
database lookup for every network.[1]


I already put a patch up to join the information ahead of time in the 
network model.[2]
I agree with your suggestion. I believe that the patch can solve the 
performance issue.
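The ahead-of-time join can be illustrated with a stdlib sqlite sketch (the
schema is heavily simplified, not Neutron's actual tables): the per-network
lookup issues one extra query per network, while the joined form fetches
everything in a single query — and both return the same zones.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE networks (id INTEGER PRIMARY KEY);
CREATE TABLE dhcp_agent_bindings (network_id INTEGER, az TEXT);
INSERT INTO networks VALUES (1), (2);
INSERT INTO dhcp_agent_bindings VALUES (1, 'az1'), (2, 'az2'), (2, 'az3');
""")

# N+1 pattern: an extra query per network to find its agents' zones.
per_network = {}
for (net_id,) in conn.execute("SELECT id FROM networks"):
    rows = conn.execute(
        "SELECT az FROM dhcp_agent_bindings WHERE network_id = ?", (net_id,))
    per_network[net_id] = sorted(r[0] for r in rows)

# Joined ahead of time: a single query covers all networks.
joined = {}
for net_id, az in conn.execute(
        "SELECT n.id, b.az FROM networks n "
        "JOIN dhcp_agent_bindings b ON b.network_id = n.id"):
    joined.setdefault(net_id, []).append(az)
joined = {k: sorted(v) for k, v in joined.items()}

print(per_network == joined, joined)
```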


However, before we go forward with that, I think we should consider 
the removal of that field from the API.


Having to always join to the DHCP agents table to look up which zones a
network has DHCP agents in is expensive and duplicates information
available via other API calls.


Additionally, the field is just called 'availability_zones' but it's 
being derived solely from AZ definitions in DHCP agent bindings for 
that network. To me that doesn't represent where the network is 
available, it just says which zones its scheduled DHCP instances live 
in. If that's the purpose, then we should just be using the DHCP agent 
API for this info and not impact the network API.

I don't think so. I have three points.

1. Availability zones are implemented only for the agent-based case now, but
that is just the reference implementation. For example, we should expect
availability zones to be used by plugins without agents.


2. From the user's point of view, availability zones relate to the network
resource. Users should not need to think about agents - and operators
generally don't want to expose agents to users in the first place. So I
don't agree with using the agent API.


3. We should consider whether users want to know this field. Originally the
field didn't exist in the spec [3], but I added it based on a reviewer's
opinion (maybe Akihiro's?). This is a question of use case: after users
create resources via the API with availability_zone_hints to achieve HA for
their service, they want to know which zones their resources are actually
hosted in, because the resources might not be distributed across multiple
availability zones for some reason. In that case, they need to learn the
"availability_zones" of the resources via the network API.


Thanks,
Hirofumi

[3]: https://review.openstack.org/#/c/169612/31



Thoughts?

1. https://bugs.launchpad.net/neutron/+bug/1525740
2. https://review.openstack.org/#/c/257086/

--
Kevin Benton




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - availability zone performance regression and discussion about added network field

2015-12-13 Thread Hong Hui Xiao

Hi,

Can we just add "availability_zones" as a column in Network, and update it
whenever "NetworkDhcpAgentBinding" updates? The code will be a bit
more complex, but it will save time when retrieving the Network resource.
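A minimal sketch of that denormalization idea, using stdlib sqlite3 with hypothetical table and column names rather than Neutron's actual SQLAlchemy models: the write path (binding a DHCP agent) refreshes a stored availability_zones column, so the much more frequent read path needs no join.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE networks (
    id TEXT PRIMARY KEY,
    availability_zones TEXT NOT NULL DEFAULT '[]'  -- denormalized copy
);
CREATE TABLE dhcp_agent_bindings (network_id TEXT, agent_az TEXT);
""")

def bind_dhcp_agent(network_id, az):
    """Record a DHCP agent binding and refresh the denormalized column
    in the same transaction, so the two can never drift apart."""
    with conn:
        conn.execute("INSERT INTO dhcp_agent_bindings VALUES (?, ?)",
                     (network_id, az))
        zones = sorted({r[0] for r in conn.execute(
            "SELECT agent_az FROM dhcp_agent_bindings WHERE network_id = ?",
            (network_id,))})
        conn.execute("UPDATE networks SET availability_zones = ? WHERE id = ?",
                     (json.dumps(zones), network_id))

conn.execute("INSERT INTO networks (id) VALUES ('net-1')")
bind_dhcp_agent("net-1", "az1")
bind_dhcp_agent("net-1", "az4")

# Listing networks now reads one column; no join against the bindings table.
row = conn.execute(
    "SELECT availability_zones FROM networks WHERE id = 'net-1'").fetchone()
print(row[0])  # -> ["az1", "az4"]
```

In the real models an ORM event hook or trigger would play the same role; the point is just that the write path pays the cost once so the list path doesn't.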





From:   Hirofumi Ichihara 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   12/14/2015 13:33
Subject:Re: [openstack-dev] [neutron] - availability zone performance
regression and discussion about added network field



Hi Kevin,

On 2015/12/14 11:10, Kevin Benton wrote:
  Hi all,

  The availability zone code added a new field to the network API that
  shows the availability zones of a network. This caused a pretty big
  performance impact to get_networks calls because it resulted in a
  database lookup for every network.[1]

  I already put a patch up to join the information ahead of time in the
  network model.[2]
I agree with your suggestion. I believe that the patch can solve the
performance issue.

  However, before we go forward with that, I think we should consider
  the removal of that field from the API.

  Having to always join to the DHCP agents table to lookup which zones
  a network has DHCP agents on is expensive and is duplicating
  information available with other API calls.

  Additionally, the field is just called 'availability_zones' but it's
  being derived solely from AZ definitions in DHCP agent bindings for
  that network. To me that doesn't represent where the network is
  available, it just says which zones its scheduled DHCP instances live
  in. If that's the purpose, then we should just be using the DHCP
  agent API for this info and not impact the network API.
I don't think so. I have three points.

1. Availability zones are currently implemented only in the agent-based case,
but that is just the reference implementation. We should expect that
availability zones will be used by plugins without agents as well.

2. From the user's view, an availability zone is related to the network
resource. Users shouldn't need to consider agents, and operators may not want
to expose them to users in the first place. So I don't agree with using the
agent API.

3. We should consider whether users want to know the field. Originally, the
field didn't exist in the spec[3], but I added it according to a reviewer's
opinion (maybe Akihiro?). This is a discussion about the use case. After users
create resources via the API with availability_zone_hints so that they achieve
HA for their service, they want to know which zones their resources are hosted
in, because the resources might not be distributed across multiple
availability zones for some reason. In that case, they need to know
"availability_zones" for the resources via the network API.

Thanks,
Hirofumi

[3]: https://review.openstack.org/#/c/169612/31


  Thoughts?

  1. https://bugs.launchpad.net/neutron/+bug/1525740
  2. https://review.openstack.org/#/c/257086/

  --
  Kevin Benton


  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ansible] One or more undefined variables: 'dict object' has no attribute 'bridge'

2015-12-13 Thread Mark Korondi
Thanks cloudnull,

This solved the installation issue. I commented out all non-flat
related networks before, to investigate my main problem, which is

> PortBindingFailed: Binding failed for port 
> fe67a2d5-6d6a-4440-80d0-acbe2ff5c27f, please check neutron logs for more 
> information.

I still have this problem; I created the flat external network with no
errors, but I still get this when trying to launch an instance. What's
really interesting to me is that no neutron microservices are
deployed and running on the compute node.

Mark (kmARC)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Upcoming specs and blueprints for Trove/Mitaka

2015-12-13 Thread Flavio Percoco

On 10/12/15 20:05 +, Amrith Kumar wrote:

Flavio,

The issue we had in the last cycle was that a lot of specs and code arrived for 
review late in the process and this posed a challenge. The intent this time 
around was to ensure that there wasn't such a back-end loaded process and that 
people had a good idea of what is coming down the pike. It is more of a traffic 
management solution, and one that we are trying for the first time in this 
cycle. We will get better in the next cycle.

That is my interpretation of the process and the context for my broadcast 
message. Note that this is not a hard spec freeze and a request for exceptions 
(which seems to be your interpretation). On the contrary, it is a heads-up to 
the rest of the Trove team of what is coming down the pike.


Ah indeed! I had interpreted it as a hard spec freeze. I don't think a
hard spec freeze is a bad idea but it's harder to enforce and
communicate.

Looks like you guys have it under control, thanks for clarifying. :)
Flavio



Thanks,

-amrith


-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com]
Sent: Thursday, December 10, 2015 1:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Upcoming specs and blueprints for
Trove/Mitaka

On 10/12/15 18:44 +, Vyvial, Craig wrote:
>Amrith/Victoria,
>
>Thanks for the heads up about these blueprints for the Mitaka cycle.
This looks like a lot of work but there shouldn’t be a reason to hold back new
blueprints this early in the cycle if they plan on being completed in Mitaka.
Can we get these blueprints written up and submitted so that we can get
them approved by Jan 8th? Due to the holidays I think this makes sense.
>
>These blueprints should all be complete and merged by M-3 cut date (Feb
29th) for the feature freeze.
>
>Let me know if there are concerns around this.
>

Sorry for jumping in out of the blue, especially as I haven't been part of the
process, but wouldn't it be better for Trove to just skip having a hard spec
freeze in Mitaka and plan one for N (as Amrith
proposed)?

Having a deadline and then allowing new specs to be proposed (or just a
bunch of freeze exceptions) is not very effective. Deadlines need to be well
planned ahead and thoroughly communicated.

If it was done, I'm sorry. As I mentioned, I wasn't part of the process and I
just happened to have read Amrith's email.

Hope the above makes sense,
Flavio

>Thanks,
>-Craig
>
>On Dec 10, 2015, at 12:11 PM, Victoria Martínez de la Cruz

> wrote:
>
>2015-12-10 13:10 GMT-03:00 Amrith Kumar
>:
>Members of the Trove community,
>
>Over the past couple of weeks we have discussed the possibility of an early
deadline for submission of trove specifications for projects that are to be
included in the Mitaka release. I understand why we're doing it, and agree
with the concept. Unfortunately though, there are a number of projects for
which specifications won't be ready in time for the proposed deadline of
Friday 12/11 (aka tomorrow).
>
>I'd like to note that the following projects are in the works and specifications 
will
be submitted as soon as possible. Now that we know of the new process, we
will all be able to make sure that we are better planned in time for the N
release.
>
>Blueprints have been registered for these projects.
>
>The projects in question are:
>
>Cassandra:
>- enable/disable/show root
(https://blueprints.launchpad.net/trove/+spec/cassandra-database-user-
functions)
>- Clustering
>(https://blueprints.launchpad.net/trove/+spec/cassandra-cluster)
>
>MariaDB:
>- Clustering (https://blueprints.launchpad.net/trove/+spec/mariadb-
clustering)
>- GTID replication
>(https://blueprints.launchpad.net/trove/+spec/mariadb-gtid-replication)
>
>Vertica:
>- Add/Apply license
(https://blueprints.launchpad.net/trove/+spec/vertica-licensing)
>- User triggered data upload from Swift
(https://blueprints.launchpad.net/trove/+spec/vertica-bulk-data-load)
>- Cluster grow/shrink
(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-grow-shrink)
>- Configuration Groups
(https://blueprints.launchpad.net/trove/+spec/vertica-configuration-
groups)
>- Cluster Anti-affinity
>(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-anti-affi
>nity)
>
>Hbase and Hadoop based databases:
>- Extend Trove to Hadoop based databases, starting with HBase
>(https://blueprints.launchpad.net/trove/+spec/hbase-support)
>
>Specifications in the trove-specs repository will be submitted for review as
soon as they are available.
>
>Thanks,
>
>-amrith
>
>
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe:
>OpenStack-dev-
Re: [openstack-dev] neutron metadata-agent HA

2015-12-13 Thread Assaf Muller
The L3 agent monitors the metadata proxies it spawns and restarts them
automatically. You should be using an external tool to restart the
metadata *agent* in case that crashes.
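That external restart can come from any process supervisor; for example, a minimal systemd unit sketch (unit name and paths are illustrative, not taken from any distro packaging) that respawns the agent whenever it dies:

```ini
# /etc/systemd/system/neutron-metadata-agent.service (illustrative sketch)
[Unit]
Description=Neutron Metadata Agent
After=network.target

[Service]
ExecStart=/usr/local/bin/neutron-metadata-agent \
    --config-file /etc/neutron/metadata_agent.ini
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```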

On Sun, Dec 13, 2015 at 7:49 AM, Gary Kotton  wrote:
>
>
> From: Eugene Nikanorov 
> Reply-To: OpenStack List 
> Date: Sunday, December 13, 2015 at 12:09 PM
> To: OpenStack List 
> Subject: Re: [openstack-dev] neutron metadata-agent HA
>
> It is as 'single' as the active L3 router that is handling traffic at the
> current point in time.
>
> [Gary] But if the l3 agent is up and running and the metadata proxy is not,
> then all of the instances using that agent will not be able to get their
> metadata.
>
> On Sun, Dec 13, 2015 at 11:13 AM, Gary Kotton  wrote:
>>
>>
>>
>>
>>
>>
>> On 12/12/15, 10:44 PM, "Assaf Muller"  wrote:
>>
>> >The neutron metadata agent is stateless. It takes requests from the
>> >metadata proxies running in the router namespaces and moves the
>> >requests on to the nova server. If you're using HA routers, start the
>> >neutron-metadata-agent on every machine the L3 agent runs, and just
>> >make sure that the metadata-agent is restarted in case it crashes and
>> >you're done.
>>
>> So does this mean that it could be the single point of failure?
>>
>> >Nothing else you need to do.
>> >
>> >On Fri, Dec 11, 2015 at 3:24 PM, Fabrizio Soppelsa
>> > wrote:
>> >>
>> >> On Dec 10, 2015, at 12:56 AM, Alvise Dorigo 
>> >> wrote:
>> >>
>> >> So my question is: is there any progress on this topic ? is there a way
>> >> (something like a cronjob script) to make the metadata-agent redundant
>> >> without involving the clustering software Pacemaker/Corosync ?
>> >>
>> >>
>> >> Reason for such a dirty solution instead of rely onto pacemaker?
>> >>
>> >> I’m not aware of such initiatives - just checked the blueprints in
>> >> Neutron
>> >> and I found no relevant. I can suggest to file a proposal to the
>> >> correspondent launchpad page, by elaborating your idea.
>> >>
>> >> F.
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>>
>> > >__
>> >OpenStack Development Mailing List (not for usage questions)
>> >Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Group Based Policy] [Policy] [GBP]

2015-12-13 Thread Ernesto Valentino
Hello,
how can i write an application with gbp using the libcloud? Thanks in
advance. Best regards,

ernesto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] neutron metadata-agent HA

2015-12-13 Thread Gary Kotton


From: Eugene Nikanorov >
Reply-To: OpenStack List 
>
Date: Sunday, December 13, 2015 at 12:09 PM
To: OpenStack List 
>
Subject: Re: [openstack-dev] neutron metadata-agent HA

It is as 'single' as the active L3 router that is handling traffic at the 
current point in time.

[Gary] But if the l3 agent is up and running and the metadata proxy is not, then 
all of the instances using that agent will not be able to get their metadata.

On Sun, Dec 13, 2015 at 11:13 AM, Gary Kotton 
> wrote:





On 12/12/15, 10:44 PM, "Assaf Muller" 
> wrote:

>The neutron metadata agent is stateless. It takes requests from the
>metadata proxies running in the router namespaces and moves the
>requests on to the nova server. If you're using HA routers, start the
>neutron-metadata-agent on every machine the L3 agent runs, and just
>make sure that the metadata-agent is restarted in case it crashes and
>you're done.

So does this mean that it could be the single point of failure?

>Nothing else you need to do.
>
>On Fri, Dec 11, 2015 at 3:24 PM, Fabrizio Soppelsa
>> wrote:
>>
>> On Dec 10, 2015, at 12:56 AM, Alvise Dorigo 
>> >
>> wrote:
>>
>> So my question is: is there any progress on this topic ? is there a way
>> (something like a cronjob script) to make the metadata-agent redundant
>> without involving the clustering software Pacemaker/Corosync ?
>>
>>
>> Reason for such a dirty solution instead of rely onto pacemaker?
>>
>> I’m not aware of such initiatives - just checked the blueprints in Neutron
>> and I found no relevant. I can suggest to file a proposal to the
>> correspondent launchpad page, by elaborating your idea.
>>
>> F.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: 
>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-13 Thread Clark Boylan
On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
> Hi,
> 
> As Kai Qiang mentioned, magnum gate recently had a bunch of random
> failures, which occurred on creating a nova instance with 2G of RAM.
> According to the error message, it seems that the hypervisor tried to
> allocate memory to the nova instance but couldn’t find enough free memory
> in the host. However, by adding a few “nova hypervisor-show XX” before,
> during, and right after the test, it showed that the host has 6G of free
> RAM, which is far more than 2G. Here is a snapshot of the output [1]. You
> can find the full log here [2].
If you look at the dstat log
http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/screen-dstat.txt.gz
the host has nowhere near 6GB of free memory; in fact it has less than 2GB.
I think you are actually just running out of memory.
> 
> Another observation is that most of the failure happened on a node with
> name “devstack-trusty-ovh-*” (You can verify it by entering a query [3]
> at http://logstash.openstack.org/ ). It seems that the jobs will be fine
> if they are allocated to a node other than “ovh”.
I have just done a quick spot check of the total memory on
devstack-trusty hosts across HPCloud, Rackspace, and OVH using `free -m`
and the results are 7480, 7732, and 6976 megabytes respectively. Despite
using 8GB flavors in each case there is variation and OVH comes in on
the low end for some reason. I am guessing that you fail here more often
because the other hosts give you just enough extra memory to boot these
VMs.
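For anyone repeating the spot check, the total can also be read straight from /proc/meminfo, which is where `free` gets its numbers:

```shell
# Total memory in MB, matching the "total" column of `free -m`.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "MemTotal: $((total_kb / 1024)) MB"
```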

We will have to look into why OVH has less memory despite using flavors
that should be roughly equivalent.
> 
> Any hints to debug this issue further? Suggestions are greatly
> appreciated.
> 
> [1] http://paste.openstack.org/show/481746/
> [2]
> http://logs.openstack.org/48/256748/1/check/gate-functional-dsvm-magnum-swarm/56d79c3/console.html
> [3] https://review.openstack.org/#/c/254370/2/queries/1521237.yaml
> 
> Best regards,
> Hongbin

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-13 Thread pcrews

Hi,

OVH is a new cloud provider for openstack-infra nodes:
http://www.openstack.org/blog/2015/12/announcing-a-new-cloud-provider-for-openstacks-ci-system-ovh/

It appears that selection of nodes on any cloud provider is a matter of 
luck:
"When a developer uploads a proposed change to an OpenStack project, 
available instances from any of our contributing cloud providers will be 
used interchangeably to test it."


You might want to ping people in #openstack-infra to find a point of 
contact for them (OVH) and/or to work with the infra folks directly to 
see about troubleshooting this further.



On 12/12/2015 02:16 PM, Hongbin Lu wrote:

Hi,

As Kai Qiang mentioned, magnum gate recently had a bunch of random
failures, which occurred on creating a nova instance with 2G of RAM.
According to the error message, it seems that the hypervisor tried to
allocate memory to the nova instance but couldn’t find enough free
memory in the host. However, by adding a few “nova hypervisor-show XX”
before, during, and right after the test, it showed that the host has 6G
of free RAM, which is far more than 2G. Here is a snapshot of the output
[1]. You can find the full log here [2].

Another observation is that most of the failure happened on a node with
name “devstack-trusty-ovh-*” (You can verify it by entering a query [3]
at http://logstash.openstack.org/ ). It seems that the jobs will be fine
if they are allocated to a node other than “ovh”.

Any hints to debug this issue further? Suggestions are greatly appreciated.



[1] http://paste.openstack.org/show/481746/

[2]
http://logs.openstack.org/48/256748/1/check/gate-functional-dsvm-magnum-swarm/56d79c3/console.html

[3] https://review.openstack.org/#/c/254370/2/queries/1521237.yaml

Best regards,

Hongbin

*From:*Kai Qiang Wu [mailto:wk...@cn.ibm.com]
*Sent:* December-09-15 7:23 AM
*To:* openstack-dev@lists.openstack.org
*Subject:* [openstack-dev] [Infra][nova][magnum] Jenkins failed quite
often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

Hi All,

I am not sure what changes these days, We found quite often now, the
Jenkins failed for:


http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnum-k8s/5305d7a/logs/libvirt/libvirtd.txt.gz

2015-12-09 08:52:27.892+0000: 22957: debug : qemuMonitorJSONCommandWithFd:264 : Send command '{"execute":"qmp_capabilities","id":"libvirt-1"}' for write with FD -1
2015-12-09 08:52:27.892+0000: 22957: debug : qemuMonitorSend:959 : QEMU_MONITOR_SEND_MSG: mon=0x7fa66400c6f0 msg={"execute":"qmp_capabilities","id":"libvirt-1"} fd=-1
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:347 : dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:360 : event not handled.
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:347 : dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:360 : event not handled.
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:347 : dispatching to max 0 clients, called from event watch 6
2015-12-09 08:52:27.941+0000: 22951: debug : virNetlinkEventCallback:360 : event not handled.
2015-12-09 08:52:28.070+0000: 22951: error : qemuMonitorIORead:554 : Unable to read from monitor: Connection reset by peer
2015-12-09 08:52:28.070+0000: 22951: error : qemuMonitorIO:690 : internal error: early end of file from monitor: possible problem:
Cannot set up guest 

Re: [openstack-dev] [nova][serial-console-proxy]

2015-12-13 Thread Tony Breeds
On Fri, Dec 11, 2015 at 11:07:02AM +0530, Prathyusha Guduri wrote:
> Hi All,
> 
> I have set up OpenStack on an Arm64 machine and all the OpenStack related
> services are running fine. I am also able to launch an instance successfully.
> Now I need to get a console for my instance. The noVNC console is not
> supported on the machine I am using, so I have to use a serial-proxy console
> or spice-proxy console.
> 
> After rejoining the stack, I have stopped the noVNC service and started the
> serial proxy service in  /usr/local/bin  as
> 
> ubuntu@ubuntu:~/devstack$ /usr/local/bin/nova-serialproxy --config-file
> /etc/nova/nova.conf
> 2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-]
> WebSocket server settings:
> 2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-]   -
> Listen on 0.0.0.0:6083
> 2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   -
> Flash security policy server
> 2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   - No
> SSL/TLS support (no cert file)
> 2015-12-10 19:07:13.790 21979 INFO nova.console.websocketproxy [-]   -
> proxying from 0.0.0.0:6083 to None:None
> 
> But
> ubuntu@ubuntu:~/devstack$ nova get-serial-console vm20
> ERROR (ClientException): The server has either erred or is incapable of
> performing the requested operation. (HTTP 500) (Request-ID:
> req-cfe7d69d-3653-4d62-ad0b-50c68f1ebd5e)

So you probably need to restart your nova-api, cauth and compute services.

As you're using devstack, I'd recommend you start again and follow these
guides:
 1. 
http://docs.openstack.org/developer/devstack/guides/nova.html#nova-serialproxy
 2. http://docs.openstack.org/developer/nova/testing/serial-console.html
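For reference, the serial console settings those guides walk through end up in nova.conf roughly like this (host and port values are illustrative; check the guides for the authoritative configuration):

```ini
# nova.conf sketch -- addresses/ports are illustrative
[serial_console]
enabled = True
serialproxy_host = 0.0.0.0
serialproxy_port = 6083
base_url = ws://127.0.0.1:6083/
proxyclient_address = 127.0.0.1
```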

Also, your nova.conf looked strange; I hope that's just manual formatting.

Tony.


pgp5mGZBgVbi2.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tox 2.3.0 broke tempest jobs

2015-12-13 Thread Robert Collins
On 13 December 2015 at 03:20, Yuriy Taraday  wrote:
> Tempest jobs in all our projects seem to become broken after tox 2.3.0
> release yesterday. It's a regression in tox itself:
> https://bitbucket.org/hpk42/tox/issues/294
>
> I suggest us to add tox to upper-constraints to avoid this breakage for now
> and in the future: https://review.openstack.org/256947
>
> Note that we install tox in gate with no regard to global-requirements, so
> only upper-constraints can save us from tox releases.
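The pinning suggested above can be sketched like this (the pinned version is illustrative; pick the last release known to be good):

```shell
# Pin tox via a constraints file (version shown is illustrative).
cat > /tmp/tox-constraints.txt <<'EOF'
tox===2.2.1
EOF
# Gate jobs would then install with the constraint applied instead of
# an unconstrained `pip install tox`:
#   pip install -c /tmp/tox-constraints.txt tox
cat /tmp/tox-constraints.txt
```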

Ah, friday releases. Gotta love them... on my saturday :(.

So - tl;dr AIUI:

 - the principle behind gating changes to tooling applies to tox as well
 - existing implementation of jobs in the gate precludes applying
upper-constraints systematically as a way to gate these changes
 - the breakage we experienced was due to already known-bad system images

Assuming that that's correct, my suggestion would be that we either
pip-install tox during jobs (across the board), so that we can
in fact control it with upper-constraints, or we work on functional
tests of new images before they go live

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] stable/liberty branch needed for oslo-incubator

2015-12-13 Thread Matt Riedemann



On 12/11/2015 10:24 PM, Davanum Srinivas wrote:

Matt,

Do you have an example? This was a deliberate decision to leave it out.

-- Dims

On Sat, Dec 12, 2015 at 4:35 AM, Matt Riedemann
 wrote:

oslo-incubator is EOL now but it's missing a stable/liberty branch, and we
need one for backporting changes that will be synced to projects in liberty.
I'm not sure what commit would be used to create the stable/liberty branch
though.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






I don't have a pressing need to backport something right now, but as 
long as there is code in oslo-incubator that *could* be synced to other 
projects and isn't in libraries, that code could have bugs and 
require backports to stable/liberty oslo-incubator for syncing to 
projects that use it.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - availability zone performance regression and discussion about added network field

2015-12-13 Thread Kevin Benton
I see, so regular users are supposed to use this information as well. But
how are they supposed to use it? For example, if they see that their
network has availability zones 1 and 4, but their instance is hosted in
zone 3, what are they supposed to do?

On Sun, Dec 13, 2015 at 8:57 PM, Hirofumi Ichihara <
ichihara.hirof...@lab.ntt.co.jp> wrote:

> Hi Kevin,
>
> On 2015/12/14 11:10, Kevin Benton wrote:
>
> Hi all,
>
> The availability zone code added a new field to the network API that shows
> the availability zones of a network. This caused a pretty big performance
> impact to get_networks calls because it resulted in a database lookup for
> every network.[1]
>
> I already put a patch up to join the information ahead of time in the
> network model.[2]
>
> I agree with your suggestion. I believe that the patch can solve the
> performance issue.
>
> However, before we go forward with that, I think we should consider the
> removal of that field from the API.
>
> Having to always join to the DHCP agents table to lookup which zones a
> network has DHCP agents on is expensive and is duplicating information
> available with other API calls.
>
> Additionally, the field is just called 'availability_zones' but it's being
> derived solely from AZ definitions in DHCP agent bindings for that network.
> To me that doesn't represent where the network is available, it just says
> which zones its scheduled DHCP instances live in. If that's the purpose,
> then we should just be using the DHCP agent API for this info and not
> impact the network API.
>
> I don't think so. I have three points.
>
> 1. Availability zones are currently implemented only in the agent-based
> case, but that is just the reference implementation. We should expect that
> availability zones will be used by plugins without agents as well.
>
> 2. From the user's view, an availability zone is related to the network
> resource. Users shouldn't need to consider agents, and operators may not
> want to expose them to users in the first place. So I don't agree with
> using the agent API.
>
> 3. We should consider whether users want to know the field. Originally,
> the field didn't exist in the spec[3], but I added it according to a
> reviewer's opinion (maybe Akihiro?). This is a discussion about the use
> case. After users create resources via the API with availability_zone_hints
> so that they achieve HA for their service, they want to know which zones
> their resources are hosted in, because the resources might not be
> distributed across multiple availability zones for some reason. In that
> case, they need to know "availability_zones" for the resources via the
> network API.
>
> Thanks,
> Hirofumi
>
> [3]: https://review.openstack.org/#/c/169612/31
>
>
> Thoughts?
>
> 1. https://bugs.launchpad.net/neutron/+bug/1525740
> 2. https://review.openstack.org/#/c/257086/
>
> --
> Kevin Benton
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Tempest] OS-INHERIT APIs were skipped by Jenkins because "os_inherit" in keystone.conf was disable.

2015-12-13 Thread Ken'ichi Ohmichi
A devstack patch to enable the os_inherit extension is at
https://review.openstack.org/#/c/257085/
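That patch aside, a deployment can flip the flag itself; in DevStack the local.conf post-config syntax for that would look roughly like:

```ini
# local.conf (DevStack post-config sketch)
[[post-config|$KEYSTONE_CONF]]
[os_inherit]
enabled = True
```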


2015-12-14 10:43 GMT+09:00 Ken'ichi Ohmichi :
> Hi Henry,
>
> When adding this extension on https://review.openstack.org/#/c/35986/
> , the extension is disabled as the default setting.
> Now can we enable this os_inherit extension on Keystone side like
>
> keystone/common/config.py
> 'os_inherit': [
> -   cfg.BoolOpt('enabled', default=False,
> +  cfg.BoolOpt('enabled', default=True,
>
> ?
>
> Or if we don't want to change the default value on Keystone side, we
> can enable this os_inherit extension on DevStack side for testing the
> extension on Tempest.
>
> This extension has been implemented for two years now, and the API
> doc[1] also covers it.
> So I feel it would be nice to enable it for development and testing.
>
> Thanks
> Ken Ohmichi
>
> ---
> [1]: 
> http://developer.openstack.org/api-ref-identity-v3-ext.html#identity_v3_OS-INHERIT-ext
>
>
>
>
> 2015-12-09 18:03 GMT+09:00 Henry Nash :
>> Hi Maho,
>>
>> So in the keystone unit tests, we flip the os_inherit flag back and forth 
>> during tests to make sure it is honored correctly.  For the tempest case, I 
>> don’t think you need to do that level of testing. Setting the os_inherit 
>> flag to true will have no effect if you have not created any role 
>> assignments that are inherited - you’ll just get the regular assignments 
>> back as normal. So provided there is no test data leakage between tests 
>> (i.e. old data lying around from a previous test), I think it should be safe 
>> to run tempest with os_inherit switched on.
>>
>> Henry
>>> On 9 Dec 2015, at 08:45, koshiya maho  wrote:
>>>
>>> Hi all,
>>>
>>> I pushed the patch set of OS-INHERIT API tempest (keystone v3).
>>> https://review.openstack.org/#/c/250795/
>>>
>>> But all API tests in the patch set were skipped, because "os_inherit" in the
>>> keystone.conf of the Jenkins jobs was disabled, so it couldn't be confirmed.
>>>
>>> Reference information :
>>> http://logs.openstack.org/95/250795/5/check/gate-tempest-dsvm-full/fbde6d2/logs/etc/keystone/keystone.conf.txt.gz
>>> #L1422
>>> https://github.com/openstack/keystone/blob/master/keystone/common/config.py#L224
>>>
>>> The default "os_inherit" setting is disabled; the OS-INHERIT APIs need the
>>> "os_inherit" setting enabled.
>>>
>>> For keystone v3 tempest tests using OS-INHERIT, we should enable "os_inherit" in
>>> the existing keystone.conf used by Jenkins.
>>> Even if "os_inherit" is enabled, I think there will be no effect on other
>>> tempest tests.
>>>
>>> Do you have any other ideas?
>>>
>>> Thank you and best regards,
>>>
>>> --
>>> Maho Koshiya
>>> NTT Software Corporation
>>> E-Mail : koshiya.m...@po.ntts.co.jp
>>>
>>>
>>>
>>
>>



[openstack-dev] [neutron] - availability zone performance regression and discussion about added network field

2015-12-13 Thread Kevin Benton
Hi all,

The availability zone code added a new field to the network API that shows
the availability zones of a network. This caused a pretty big performance
impact to get_networks calls because it resulted in a database lookup for
every network.[1]

I already put a patch up to join the information ahead of time in the
network model.[2] However, before we go forward with that, I think we
should consider the removal of that field from the API.

Having to always join to the DHCP agents table to lookup which zones a
network has DHCP agents on is expensive and is duplicating information
available with other API calls.

Additionally, the field is just called 'availability_zones' but it's being
derived solely from AZ definitions in DHCP agent bindings for that network.
To me that doesn't represent where the network is available, it just says
which zones its scheduled DHCP instances live in. If that's the purpose,
then we should just be using the DHCP agent API for this info and not
impact the network API.

Thoughts?

1. https://bugs.launchpad.net/neutron/+bug/1525740
2. https://review.openstack.org/#/c/257086/
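To illustrate the shape of the problem and of the join-ahead fix, here is a toy sketch (plain sqlite3, hypothetical schema, not Neutron's actual models) of the per-network lookup versus joining the information ahead of time:

```python
import sqlite3

# Toy schema standing in for networks and their DHCP agent bindings.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE networks (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dhcp_bindings (network_id INTEGER, az TEXT);
    INSERT INTO networks VALUES (1, 'net1'), (2, 'net2');
    INSERT INTO dhcp_bindings VALUES (1, 'az1'), (1, 'az2'), (2, 'az1');
""")

# Pattern from the bug: one query for the list, plus one query per network.
queries = 1
nets = conn.execute("SELECT id, name FROM networks").fetchall()
per_network = {}
for net_id, name in nets:
    rows = conn.execute(
        "SELECT az FROM dhcp_bindings WHERE network_id = ?", (net_id,)
    ).fetchall()
    queries += 1
    per_network[name] = sorted(r[0] for r in rows)
print("queries issued:", queries)  # 1 + number of networks

# Joining ahead of time: a single query returns the same information.
joined = {}
for name, az in conn.execute(
    "SELECT n.name, b.az FROM networks n"
    " JOIN dhcp_bindings b ON b.network_id = n.id"
):
    joined.setdefault(name, []).append(az)
joined = {k: sorted(v) for k, v in joined.items()}
print(per_network == joined)  # True
```

With N networks the first pattern issues N+1 queries; the joined version always issues one, which is the essence of the patch in [2].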

-- 
Kevin Benton


Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-12-13 Thread Steven Dake (stdake)
Adrian,

It's a real shame Atomic can't execute its mission - serving as a container
operating system. If you need some guidance on image building, find experienced
developers on #kolla – we have extensive experience in producing containers for 
various runtime environments focused around OpenStack.

Regards
-steve


From: Adrian Otto
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, December 7, 2015 at 1:16 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap

Until I see evidence to the contrary, I think adding some bootstrap complexity 
to simplify the process of bay node image management and customization is worth 
it. Think about where most users will focus customization efforts. My guess is 
that it will be within these docker images. We should ask our team to keep 
things as simple as possible while working to containerize components where 
that makes sense. That may take some creativity and a few iterations to achieve.

We can pivot on this later if we try it and hate it.

Thanks,

Adrian

On Dec 7, 2015, at 1:57 AM, Kai Qiang Wu wrote:


HI Hua,

From my point of view, not everything needs to be put in a container. Let's make
the initial version simple and working, and then discuss other options if needed
in IRC or the weekly meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: 王华
To: Egor Guz
Cc: "openstack-dev@lists.openstack.org"
Date: 07/12/2015 10:10 am
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap





Hi all,

If we want to run etcd and flannel in containers, we will have to introduce
docker-bootstrap, which makes setup more complex, as Egor pointed out.
Should we pay that price?

On Sat, Nov 28, 2015 at 8:45 AM, Egor Guz wrote:

Wanghua,

I don’t think moving flannel to a container is a good idea. This setup is great
for a dev environment, but becomes too complex from an operator's point of view (you
add an extra Docker daemon and need an extra Cinder volume for that daemon; also
keep in mind it makes sense to keep the etcd data folder on Cinder storage as well,
because etcd is a database). flannel is just three files without extra
dependencies, and it’s much easier to download it during cloud-init ;)
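As a sketch, that cloud-init step could look like the following cloud-config fragment (the download URL, version and file layout here are hypothetical):

```yaml
#cloud-config
# Hypothetical example: fetch the flannel binary at boot instead of
# baking it into the image or running it in a container.
runcmd:
  - curl -L -o /tmp/flannel.tar.gz https://example.com/flannel-0.5.5-linux-amd64.tar.gz
  - tar -C /usr/local/bin -xzf /tmp/flannel.tar.gz flanneld
  - systemctl enable --now flanneld
```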

I agree that we have pain with building Fedora Atomic images, but instead of
simplifying this process we should switch to other, more “friendly” images (e.g.
Fedora/CentOS/Ubuntu) which we can easily build with disk builder.
Also we can fix the CoreOS template (I believe people asked about it more than
Atomic), but we may face issues similar to Atomic's when we try to
integrate non-CoreOS products (e.g. Calico or Weave)

—
Egor

From: 王华
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, November 26, 2015 at 00:15
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap

Hi Hongbin,

The docker daemon on the master node stores data in /dev/mapper/atomicos-docker--data and
metadata in /dev/mapper/atomicos-docker--meta; both are
logical volumes. The docker daemon on a minion node stores data in the cinder volume, but
/dev/mapper/atomicos-docker--data and /dev/mapper/atomicos-docker--meta are not
used. If we want to leverage Cinder 

Re: [openstack-dev] [oslo] stable/liberty branch needed for oslo-incubator

2015-12-13 Thread Robert Collins
On 14 December 2015 at 15:28, Matt Riedemann  wrote:
>
>
> I don't have a pressing need to backport something right now, but as long as
> there was code in oslo-incubator that *could* be synced to other projects
> which wasn't in libraries, then that code could have bugs and could require
> backports to stable/liberty oslo-incubator for syncing to the projects that use
> it.

I thought the thing to do was backport the application of the change
from the project's master?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Neutron] HELP CONFIRM OR DISCUSS: How to add extension common attributes into db table 'standardattributes'

2015-12-13 Thread Kevin Benton
I would avoid trying to make special functions that are meant for everyone
to use to interact with the standard attribute table. There are a wide
variety of use cases (1:1 relationships, 1:M relationships, stuff visible
in the API, stuff internal to Neutron, etc) and trying to guess how they
will look now will not work.

So for now just work on implementing the timestamp by interacting with the
model like any other new relationship (e.g. model.standardattr.created_at).
You might be able to use callbacks for this if it makes it easier, but
don't start out by over-engineering something meant to be generic for all
standard attributes.
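As a concrete illustration of interacting with the model directly, here is a hypothetical, heavily simplified sketch (plain Python classes, not Neutron's actual models or its SQLAlchemy layer):

```python
import datetime

# Every resource row points at a shared standardattributes row, and common
# fields such as created_at are reached as model.standardattr.created_at.
class StandardAttr:
    def __init__(self):
        now = datetime.datetime.now(datetime.timezone.utc)
        self.created_at = now
        self.updated_at = now

class Network:
    def __init__(self, name):
        self.name = name
        self.standardattr = StandardAttr()  # 1:1 with the resource row

net = Network("net1")
# A timestamp extension can read these without any special helper function:
print(net.standardattr.created_at.isoformat())
```

The point is that the relationship itself is the interface; no generic accessor layer is needed on top of it.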

>if plugin enable/unable several times and create some resources during the
change,

Why would a plugin add support for the timestamp extension, remove it, and
then re-add it?

On Sun, Dec 13, 2015 at 1:37 AM, zhaobo  wrote:

> Hi guys,
> Could you give some ideas about adding neutron common attributes to the db
> table 'standardattributes'? Timestamps, for example, are common attributes
> that should be added.
> The table 'standardattributes' was introduced in [1]. Common neutron
> attributes should be added to this table in the future.
>
> For now I have found a way to add timestamps based on subscribe/notify, by
> defining a new subscription resource named 'standardattr'.
> The code is in [2][3]. I took this approach because garyk is moving the
> extension resources from the base code (db_base_plugin_base /
> db_base_plugin_v2) to extensions, and I realize modifying the base code will
> not be allowed anymore. So I decided to add a mechanism to the base code so
> that it does not need changes for every new attribute: if we want to add
> other attributes to this table, we just implement their own field-addition
> functions in the db model. If this approach sounds reasonable, please review
> [2][3].
>
> Originally, I thought I could set/get a common attribute through
> db_model.standardattr.<column> (whichever column I had added to the table and
> models), for example network_model.standardattr.created_at/updated_at. I had
> implemented that in the patches [3] before. So I still have doubts about
> whether my approach is correct or makes things difficult. I would appreciate
> your help and guidance.
>
> If any of this is mistaken, please feel free to correct me. I am still
> unsure whether my approach is right or can be accepted by the community. If
> my approach is too complex or not useful, which way should I follow? Could
> anyone offer suggestions? Thanks.
>
> And one more question:
> I think making timestamps an extension is good, but there are some problems
> when using it.
>
> 1. Since this bp expands the db tables of neutron core resources, the db will
> store the related data whether or not users use a plugin that supports
> timestamps.
> ---My reasoning is: at first, if a plugin disables timestamps, it won't
> store the data in the db, but once the plugin supports them, the db
> starts storing timestamps.
> ---This may cause an issue with how to set timestamps on the original
> data if the plugin is enabled/disabled several times and some resources are
> created during those changes;
> ---the db will contain some strange data, and when the plugin finally supports
> timestamps, users will be confused about why resources created while
> timestamps were disabled don't have correct timestamps.
> ---It will also cause some odd behavior, e.g. a user created a base
> resource a long time ago and wonders why it has no timestamp once
> timestamps are actually turned on.
> 2. Like the mtu bp: I remember that this network field was a neutron
> core attribute of network in the beginning. Now it has been moved to an
> extension, and the mtu field is still stored in the db whether or not the
> plugin supports mtu.
> 3. In nova and other modules, the timestamp is treated as a core attribute
> of resources such as instances, action lists and so on. Neutron treats it as
> an extension, so I think neutron's timestamps should be aligned with the
> usage in other modules.
> 4. For other reasons: we could support incremental queries based on
> timestamps; when the scale is large and there are many messages or
> resources in the system, no one wants to list the info by fetching all of
> them.
>
> So I have doubts about whether timestamp fields should be stored in the db
> regardless of whether the plugin supports timestamps. When a plugin supports
> them, users could see them in the returned info. I would appreciate
> suggestions to help me move forward with the timestamps. Thanks.
>
> [1] https://review.openstack.org/#/c/222079/
> [2] https://review.openstack.org/#/c/251193/
> [3] https://review.openstack.org/#/c/213586/
>
>
>
>
>
>


-- 
Kevin Benton

Re: [openstack-dev] [neutron] - availability zone performance regression and discussion about added network field

2015-12-13 Thread Kevin Benton
Yes, as I'm starting to understand the use case, I think it would actually
make more sense to add an AZ-network mapping table. Then whatever
implementation can populate them based on the criteria it is using
(reference would just do it on agent updates).
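A sketch of what such a mapping table could look like (hypothetical schema, using sqlite purely for illustration):

```python
import sqlite3

# Hypothetical AZ-network mapping table. The reference implementation
# would refresh rows on DHCP agent updates; other plugins could populate
# it by whatever criteria they use.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE network_az_bindings ("
    " network_id TEXT,"
    " availability_zone TEXT,"
    " PRIMARY KEY (network_id, availability_zone))"
)
conn.executemany(
    "INSERT OR IGNORE INTO network_az_bindings VALUES (?, ?)",
    [("net-1", "az1"), ("net-1", "az2"), ("net-1", "az2")],  # dupes ignored
)
azs = [row[0] for row in conn.execute(
    "SELECT availability_zone FROM network_az_bindings"
    " WHERE network_id = ? ORDER BY availability_zone", ("net-1",))]
print(azs)  # ['az1', 'az2']
```

Reading the network's zones then becomes a cheap single-table lookup instead of a join against the agent binding tables.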

On Sun, Dec 13, 2015 at 9:53 PM, Hong Hui Xiao  wrote:

> Hi,
>
> Can we just add "availability_zones" as a column in Network, and update
> it when "NetworkDhcpAgentBinding" updates? The code will be a bit more
> complex, but it saves time when retrieving the Network resource.
>
>
>
>
> From: Hirofumi Ichihara 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 12/14/2015 13:33
> Subject: Re: [openstack-dev] [neutron] - availability zone performance
> regression and discussion about added network field
> --
>
>
>
> Hi Kevin,
>
> On 2015/12/14 11:10, Kevin Benton wrote:
>
>Hi all,
>
>   The availability zone code added a new field to the network API
>   that shows the availability zones of a network. This caused a pretty big
>   performance impact to get_networks calls because it resulted in a 
> database
>   lookup for every network.[1]
>
>   I already put a patch up to join the information ahead of time in
>   the network model.[2]
>
> I agree with your suggestion. I believe that the patch can solve the
> performance issue.
>
>However, before we go forward with that, I think we should consider
>   the removal of that field from the API.
>
>   Having to always join to the DHCP agents table to lookup which
>   zones a network has DHCP agents on is expensive and is duplicating
>   information available with other API calls.
>
>   Additionally, the field is just called 'availability_zones' but
>   it's being derived solely from AZ definitions in DHCP agent bindings for
>   that network. To me that doesn't represent where the network is 
> available,
>   it just says which zones its scheduled DHCP instances live in. If that's
>   the purpose, then we should just be using the DHCP agent API for this 
> info
>   and not impact the network API.
>
> I don't think so. I have three points.
>
> 1. Availability zones are implemented only for the agent case now, but
> that is the reference implementation. For example, we should expect that
> availability zones will be used by plugins without agents.
>
> 2. In the user's view, an availability zone is related to the network
> resource. On the other hand, users don't need to be aware of agents, and
> operators don't want to expose agents to users in the first place. So I
> don't agree with using the agent API.
>
> 3. We should consider whether users want to know the field. Originally,
> the field didn't exist in the spec[3], but I added it following a reviewer's
> opinion (maybe Akihiro's?). This is a use-case discussion: after users
> create resources via the API with availability_zone_hints so that they
> achieve HA for their service, they want to know which zones their resources
> are hosted in, because their resources might not be distributed across
> multiple availability zones for whatever reason. In that case, they need to
> know the "availability_zones" of the resources via the network API.
>
> Thanks,
> Hirofumi
>
> [3]: https://review.openstack.org/#/c/169612/31
>
>
>   Thoughts?
>
>   1. https://bugs.launchpad.net/neutron/+bug/1525740
>   2. https://review.openstack.org/#/c/257086/
>
>   --
>   Kevin Benton
>
>
>
>   
>
>
>
>

Re: [openstack-dev] [neutron] - availability zone performance regression and discussion about added network field

2015-12-13 Thread Hirofumi Ichihara



On 2015/12/14 15:58, Kevin Benton wrote:
What decision would lead the user to request AZ1 and AZ2 in the first 
place? Especially since when it fails to get AZ2, they just request 
again with AZ1 and AZ3 instead.
I expected that the user gets AZ1 and AZ2 (and AZ3) via the GET availability
zones API first. There is a gap between the time the user sends the request
and the time the resource is scheduled. After the user sends an API request
with AZ1 and AZ2, if all agents in AZ2 die before scheduling, the resource is
scheduled in AZ1 only.





On Sun, Dec 13, 2015 at 10:31 PM, Hirofumi Ichihara wrote:




On 2015/12/14 14:52, Kevin Benton wrote:

I see, so regular users are supposed to use this information as
well. But how are they supposed to use it? For example, if they
see that their network has availability zones 1 and 4, but their
instance is hosted in zone 3, what are they supposed to do?

I don't think that there is what they should do in the case
because Neutron AZ is different from Nova AZ. For example, there
may be a case like the following.

1. User throws POST Network API and Subnet API with
availability_zone_hints [AZ1, AZ2]
2. Neutron server tries to schedule the resource on both AZ1 and
AZ2 but the resource are scheduled on AZ1 only by some reasons
3. User confirms via GET Network API where his resource is hosted
and he knows it's AZ1 only
4. User also can know AZ is ready via GET Availability zones API:
AZ1, AZ3
5. User deletes previous resource and he recreates his resource
with availability_zone_hints [AZ1, AZ3]




On Sun, Dec 13, 2015 at 8:57 PM, Hirofumi Ichihara
> wrote:

Hi Kevin,

On 2015/12/14 11:10, Kevin Benton wrote:

Hi all,

The availability zone code added a new field to the network
API that shows the availability zones of a network. This
caused a pretty big performance impact to get_networks calls
because it resulted in a database lookup for every network.[1]

I already put a patch up to join the information ahead of
time in the network model.[2]

I agree with your suggestion. I believe that the patch can
solve the performance issue.


However, before we go forward with that, I think we should
consider the removal of that field from the API.

Having to always join to the DHCP agents table to lookup
which zones a network has DHCP agents on is expensive and is
duplicating information available with other API calls.

Additionally, the field is just called 'availability_zones'
but it's being derived solely from AZ definitions in DHCP
agent bindings for that network. To me that doesn't
represent where the network is available, it just says which
zones its scheduled DHCP instances live in. If that's the
purpose, then we should just be using the DHCP agent API for
this info and not impact the network API.

I don't think so. I have three points.

1. Availability zone is implemented in just a case with Agent
now, but it's reference implementation. For example, we
should expect that availability zone will be used by plugin
without agent.

2. In users view, availability zone is related to network
resource. On the other hand, users doesn't need to consider
Agent or operators doesn't like to enable users to do in the
first place. So I don't agree with using Agent API.

3. We should consider whether users want to know the field.
Originally, the field doesn't exist in Spec[3] but I added it
according with reviewer's opinion(maybe Akihiro?). This is
about discussion of use case. After users create resources
via API with availability_zone_hints so that they achieve HA
for their service, they want to know which zones are their
resources hosted on because their resources might not be
distributed on multiple availability zones by any reasons. In
the case, they need to know "availability_zones" for the
resources via Network API.

Thanks,
Hirofumi

[3]: https://review.openstack.org/#/c/169612/31



Thoughts?

1. https://bugs.launchpad.net/neutron/+bug/1525740
2. https://review.openstack.org/#/c/257086/

-- 
Kevin Benton






Re: [openstack-dev] [Fuel] Separate master node provisioning and deployment

2015-12-13 Thread Vladimir Kozhukalov
Oleg,

Thanks a lot for your opinion. Here are some more thoughts on this topic.

1) For a package it is absolutely normal to show a user dialog, but
probably there is some kind of standard for such dialogs that does not allow
using fuelmenu. AFAIK, for DEB packages it is debconf, and there is a tutorial
[0] on how to get user input during post-install. I don't know if there is
such a standard for RPM packages. In some MLs it is written that any
command-line program can be run in the %post section, including ones like
fuelmenu.

2) The fuel package could install a default astute.yaml (I'd like to rename it
to /etc/fuel.yaml or /etc/fuel/config.yaml) and use the values from that file
by default, without running fuelmenu. A user is then supposed to run fuelmenu
if he/she needs to re-configure the fuel installation. However, that is going
to be quite intrusive. What if a user installs fuel and uses it for a while
with the default configuration? What if some clusters are already in use and
then the user decides to re-configure the master node? Will it be ok?
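A minimal sketch of the default-config idea as an RPM %post scriptlet (the paths and file names here are hypothetical, not Fuel's actual layout):

```spec
%post
# Hypothetical: seed a default config on first install only; leave
# interactive reconfiguration to the user running fuelmenu later.
if [ "$1" -eq 1 ] && [ ! -f /etc/fuel/config.yaml ]; then
    install -D -m 0644 /usr/share/fuel/config.yaml.default /etc/fuel/config.yaml
fi
```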

3) What is wrong with the 'deployment script' approach? Why can't fuel just
install a kind of deployment script? Fuel is not a single service; it consists
of many components. Moreover, some of these components could be optional (not
currently, but who knows?), and some of these components could run on an
external node (after all, Fuel components use REST, AMQP and XMLRPC to interact
with each other).
Imagine you want to install OpenStack. It also consists of many components.
Some components like database or AMQP service could be deployed using HA
architecture. What if one needs Fuel to be run with external HA database,
amqp? From this perspective I'd say Fuel package should not exist at all.
Let's maybe think of Fuel package as a convenient way to deploy Fuel on a
single node, i.e single node deployment script.

4) If Fuel is just a deployment script, then I'd say we should not run any
post install dialog. Deployment script is to run this dialog (fuelmenu) and
then run puppet. IMO it sounds reasonable.


[0] http://www.fifi.org/doc/debconf-doc/tutorial.html

Vladimir Kozhukalov

On Fri, Dec 11, 2015 at 11:14 PM, Oleg Gelbukh wrote:

> For the package-based deployment, we need to get rid of the 'deployment
> script' altogether. All configuration stuff should be done in package
> specs, or by the user later on (maybe via some fuelmenu-like lightweight
> UI, or via WebUI).
>
> Thus, the fuel package must install everything that is required for running
> base Fuel as its dependencies (or dependencies of its dependencies, as it
> could be more complicated with cross-deps between our components).
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Fri, Dec 11, 2015 at 10:45 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> At the moment part of the Fuel master deployment logic is located in ISO
>> kickstart file, which is bad. We'd better carefully split provisioning and
>> deployment stages so as to install base operating system during
>> provisioning stage and then everything else on the deployment stage. That
>> would make it possible to deploy Fuel on pre-installed vanilla Centos 7.
>> Besides, if we have deb packages for all Fuel components it will be easy to
>> support Fuel deployment on pre-installed Ubuntu and Debian.
>>
>> We (Fuel build team) are going to do this ASAP [0]. Right now we are on
>> the stage of writing design spec for the change [1].
>>
>> Open questions are:
>> 1) Should fuel package have all other fuel packages like nailgun, astute,
>> etc. as its dependencies? Or maybe it should install only puppet modules
>> and deployment script that then could be used to deploy everything else?
>>
>> 2) bootstrap_admin_node.sh runs fuelmenu and then puppet to deploy Fuel
>> components. Should we run this script as post-install script or maybe we
>> should leave this up to a user to run this script later when fuel package
>> is already installed?
>>
>> Anyway, the final goal is to make ISO just one of possible delivery
>> schemes. Primary delivery approach should be rpm/deb repo, not ISO.
>>
>> [0]
>> https://blueprints.launchpad.net/fuel/+spec/separate-fuel-node-provisioning
>> [1] https://review.openstack.org/#/c/254270/
>>
>> Vladimir Kozhukalov
>>
>>
>>
>
>
>

[openstack-dev] [nova-scheduler] Scheduler sub-group meeting - Agenda 12/14

2015-12-13 Thread Dugger, Donald D
Meeting on #openstack-meeting-alt at 1400 UTC (7:00AM MDT)



1) Spec & BPs - 
https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking

2) Bugs - https://bugs.launchpad.net/nova/+bugs?field.tag=scheduler

3) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [neutron] - availability zone performance regression and discussion about added network field

2015-12-13 Thread Hirofumi Ichihara



On 2015/12/14 14:52, Kevin Benton wrote:
I see, so regular users are supposed to use this information as well. 
But how are they supposed to use it? For example, if they see that 
their network has availability zones 1 and 4, but their instance is 
hosted in zone 3, what are they supposed to do?
I don't think there is anything they should do in that case, because a
Neutron AZ is different from a Nova AZ. For example, there may be a case
like the following.


1. The user calls the POST network and subnet APIs with
availability_zone_hints [AZ1, AZ2]
2. The Neutron server tries to schedule the resource on both AZ1 and AZ2, but
the resource is scheduled on AZ1 only for some reason
3. The user confirms via the GET network API where the resource is hosted and
learns it's on AZ1 only

4. The user can also see which AZs are ready via the GET availability zones
API: AZ1, AZ3
5. The user deletes the previous resource and recreates it with
availability_zone_hints [AZ1, AZ3]
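The gap between requested and actual zones in the steps above can be sketched as a toy scheduler (purely illustrative; the fallback behavior here is an assumption for the example, not Neutron's actual logic):

```python
# availability_zone_hints = what the user requested;
# the return value = where the resource actually lands.
def schedule(hints, live_azs):
    """Return the zones a resource is actually scheduled in."""
    placed = [az for az in hints if az in live_azs]
    # Assumed fallback: use any live zone rather than failing outright.
    return placed or sorted(live_azs)[:1]

# Steps 2-3: all AZ2 agents died before scheduling, so only AZ1 is used.
print(schedule(["AZ1", "AZ2"], {"AZ1", "AZ3"}))  # ['AZ1']
# Steps 4-5: the user retries with the zones reported alive.
print(schedule(["AZ1", "AZ3"], {"AZ1", "AZ3"}))  # ['AZ1', 'AZ3']
```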




On Sun, Dec 13, 2015 at 8:57 PM, Hirofumi Ichihara wrote:


Hi Kevin,

On 2015/12/14 11:10, Kevin Benton wrote:

Hi all,

The availability zone code added a new field to the network API
that shows the availability zones of a network. This caused a
pretty big performance impact to get_networks calls because it
resulted in a database lookup for every network.[1]

I already put a patch up to join the information ahead of time in
the network model.[2]

I agree with your suggestion. I believe that the patch can solve
the performance issue.


However, before we go forward with that, I think we should
consider the removal of that field from the API.

Having to always join to the DHCP agents table to lookup which
zones a network has DHCP agents on is expensive and is
duplicating information available with other API calls.

Additionally, the field is just called 'availability_zones' but
it's being derived solely from AZ definitions in DHCP agent
bindings for that network. To me that doesn't represent where the
network is available, it just says which zones its scheduled DHCP
instances live in. If that's the purpose, then we should just be
using the DHCP agent API for this info and not impact the network
API.

I don't think so. I have three points.

1. Availability zone is implemented in just a case with Agent now,
but it's reference implementation. For example, we should expect
that availability zone will be used by plugin without agent.

2. In users view, availability zone is related to network
resource. On the other hand, users doesn't need to consider Agent
or operators doesn't like to enable users to do in the first
place. So I don't agree with using Agent API.

3. We should consider whether users want to know the field.
Originally, the field doesn't exist in Spec[3] but I added it
according with reviewer's opinion(maybe Akihiro?). This is about
discussion of use case. After users create resources via API with
availability_zone_hints so that they achieve HA for their service,
they want to know which zones are their resources hosted on
because their resources might not be distributed on multiple
availability zones by any reasons. In the case, they need to know
"availability_zones" for the resources via Network API.

Thanks,
Hirofumi

[3]: https://review.openstack.org/#/c/169612/31



Thoughts?

1. https://bugs.launchpad.net/neutron/+bug/1525740
2. https://review.openstack.org/#/c/257086/

-- 
Kevin Benton










--
Kevin Benton






Re: [openstack-dev] [neutron] - availability zone performance regression and discussion about added network field

2015-12-13 Thread Kevin Benton
What decision would lead the user to request AZ1 and AZ2 in the first
place? Especially since, when scheduling to AZ2 fails, they would just
request again with AZ1 and AZ3 instead.

On Sun, Dec 13, 2015 at 10:31 PM, Hirofumi Ichihara <
ichihara.hirof...@lab.ntt.co.jp> wrote:

>
>
> On 2015/12/14 14:52, Kevin Benton wrote:
>
> I see, so regular users are supposed to use this information as well. But
> how are they supposed to use it? For example, if they see that their
> network has availability zones 1 and 4, but their instance is hosted in
> zone 3, what are they supposed to do?
>
> I don't think there is anything specific they need to do in that case,
> because a Neutron AZ is different from a Nova AZ. For example, there may
> be a case like the following.
>
> 1. The user sends POST Network and Subnet API requests with
> availability_zone_hints [AZ1, AZ2]
> 2. The Neutron server tries to schedule the resource on both AZ1 and AZ2,
> but the resource is scheduled on AZ1 only for some reason
> 3. The user confirms via the GET Network API where the resource is hosted
> and sees that it is on AZ1 only
> 4. The user can also see which AZs are available via the GET Availability
> Zones API: AZ1, AZ3
> 5. The user deletes the previous resource and recreates it with
> availability_zone_hints [AZ1, AZ3]
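
The five steps above can be sketched as CLI calls (a hypothetical
sketch against a live cloud; the flag names follow the
availability-zone work of that cycle and may differ between releases):

```shell
# 1. Create the network, hinting at two zones
neutron net-create net1 --availability-zone-hint AZ1 \
                        --availability-zone-hint AZ2
# 2./3. Check where it was actually scheduled; availability_zones may
#       list only AZ1 if AZ2 could not be used
neutron net-show net1 -c availability_zone_hints -c availability_zones
# 4. See which zones are currently available
neutron availability-zone-list
# 5. Recreate with a different hint set if needed
neutron net-delete net1
neutron net-create net1 --availability-zone-hint AZ1 \
                        --availability-zone-hint AZ3
```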
>
>
>
> On Sun, Dec 13, 2015 at 8:57 PM, Hirofumi Ichihara <
> ichihara.hirof...@lab.ntt.co.jp> wrote:
>
>> Hi Kevin,
>>
>> On 2015/12/14 11:10, Kevin Benton wrote:
>>
>> Hi all,
>>
>> The availability zone code added a new field to the network API that
>> shows the availability zones of a network. This caused a pretty big
>> performance impact to get_networks calls because it resulted in a database
>> lookup for every network.[1]
>>
>> I already put a patch up to join the information ahead of time in the
>> network model.[2]
>>
>> I agree with your suggestion. I believe that the patch can solve the
>> performance issue.
>>
>> However, before we go forward with that, I think we should consider the
>> removal of that field from the API.
>>
>> Having to always join to the DHCP agents table to look up which zones a
>> network has DHCP agents on is expensive and is duplicating information
>> available with other API calls.
>>
>> Additionally, the field is just called 'availability_zones' but it's
>> being derived solely from AZ definitions in DHCP agent bindings for that
>> network. To me that doesn't represent where the network is available, it
>> just says which zones its scheduled DHCP instances live in. If that's the
>> purpose, then we should just be using the DHCP agent API for this info and
>> not impact the network API.
>>
>> I don't think so. I have three points.
>>
>> 1. Availability zones are implemented only for the agent-based case now,
>> but that is just the reference implementation. We should expect
>> availability zones to be used by plugins without agents as well.
>>
>> 2. From the users' view, an availability zone is related to the network
>> resource. Users don't need to consider agents, and operators may not want
>> to expose agent details to users in the first place. So I don't agree
>> with using the Agent API.
>>
>> 3. We should consider whether users want to know the field. Originally,
>> the field didn't exist in the spec[3], but I added it based on a
>> reviewer's opinion (maybe Akihiro's?). This comes down to the use case:
>> after users create resources via the API with availability_zone_hints to
>> achieve HA for their service, they want to know which zones their
>> resources are actually hosted in, because the resources might not have
>> been distributed across multiple availability zones for various reasons.
>> In that case, they need to see "availability_zones" for the resources via
>> the Network API.
>>
>> Thanks,
>> Hirofumi
>>
>> [3]: https://review.openstack.org/#/c/169612/31
>>
>>
>> Thoughts?
>>
>> 1. https://bugs.launchpad.net/neutron/+bug/1525740
>> 2. https://review.openstack.org/#/c/257086/
>>
>> --
>> Kevin Benton
>>
>>
>>
>>
>>
>>
>>
>
>
> --
> Kevin Benton
>
>
>
>
>

Re: [openstack-dev] [nova][serial-console-proxy]

2015-12-13 Thread Prathyusha Guduri
Hi Tony,


Thanks a lot for your response.
I actually did a rejoin-stack.sh, which also restarts n-api and all the
other services, but I still see the same issue.

Anyway, since I have to run it all over again, I will change my
local.conf according to the guides and re-run stack.sh.

Will keep you updated.

Thanks,
Prathyusha



On Mon, Dec 14, 2015 at 4:09 AM, Tony Breeds 
wrote:

> On Fri, Dec 11, 2015 at 11:07:02AM +0530, Prathyusha Guduri wrote:
> > Hi All,
> >
> > I have set up OpenStack on an ARM64 machine, and all the
> > OpenStack-related services are running fine. I am also able to launch an
> > instance successfully. Now I need to get a console for my instance. The
> > noVNC console is not supported on the machine I am using, so I have to
> > use the serial proxy console or the SPICE proxy console.
> >
> > After rejoining the stack, I stopped the noVNC service and started the
> > serial proxy service from /usr/local/bin as:
> >
> > ubuntu@ubuntu:~/devstack$ /usr/local/bin/nova-serialproxy --config-file
> > /etc/nova/nova.conf
> > 2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-]
> > WebSocket server settings:
> > 2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-]   -
> > Listen on 0.0.0.0:6083
> > 2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   -
> > Flash security policy server
> > 2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   - No
> > SSL/TLS support (no cert file)
> > 2015-12-10 19:07:13.790 21979 INFO nova.console.websocketproxy [-]   -
> > proxying from 0.0.0.0:6083 to None:None
> >
> > But
> > ubuntu@ubuntu:~/devstack$ nova get-serial-console vm20
> > ERROR (ClientException): The server has either erred or is incapable of
> > performing the requested operation. (HTTP 500) (Request-ID:
> > req-cfe7d69d-3653-4d62-ad0b-50c68f1ebd5e)
>
> So you probably need to restart your nova-api, console-auth, and compute
> services.
>
> As you're using devstack, I'd recommend you start again and follow these
> guides:
>  1.
> http://docs.openstack.org/developer/devstack/guides/nova.html#nova-serialproxy
>  2. http://docs.openstack.org/developer/nova/testing/serial-console.html
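
For reference, the nova.conf pieces those guides walk through look
roughly like this (a hedged sketch; the hosts and ports are
illustrative, and the "proxying from 0.0.0.0:6083 to None:None" line in
the output above usually indicates this section is missing):

```ini
[serial_console]
enabled = true
serialproxy_host = 0.0.0.0
serialproxy_port = 6083
base_url = ws://127.0.0.1:6083/
proxyclient_address = 127.0.0.1
```

With a section like this in place, and nova-api and nova-compute
restarted afterwards, `nova get-serial-console` should return a ws://
URL instead of an HTTP 500.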
>
> Also, your nova.conf looked strange; I hope that's just manual formatting.
>
> Tony.
>
>
>