Re: [openstack-dev] [nova] Usability question for the server migrations API

2017-04-16 Thread Alex Xu
2017-04-15 4:38 GMT+08:00 Chet Burgess :

> On Fri, Apr 14, 2017 at 1:27 PM, Matt Riedemann 
> wrote:
>
>> The GET /servers/{server_id}/migrations API only lists in-progress live
>> migrations. This is an artifact of when it was originally introduced as the
>> os-migrations API which was tightly coupled with the API operation to
>> cancel a live migration.
>>
>> There is a spec [1] which is now approved which proposes to expand that
>> to also return other types of in-progress migrations, like cold migrations,
>> resizes and evacuations.
>>
>> What I don't like about the proposal is that it still filters out
>> completed migrations from being returned. I never liked the original design
>> where only in-progress live migrations would be returned. I understand why
>> it was done that way, as a convenience for using those results to then
>> cancel a live migration, but seriously that's something that can be
>> filtered out properly.
>>
>> So what I'd propose is that in a new microversion, we'd return all
>> migration records for a server, regardless of status. We could provide a
>> status filter query parameter if desired to just see in-progress
>> migrations, or completed migrations, etc. And the live migration cancel
>> action API would still validate that the requested migration to cancel is
>> indeed in progress first, else it's a 400 error.
>>
>> The actual migration entries in the response are quite detailed, so if
>> that's a problem, we could change listing to just show some short info (id,
>> status, source and target host), and then leave the actual details for the
>> show API.
>>
>> What do operators think about this? Is this used at all? Would you like
>> to get all migrations and not just in-progress migrations, with the ability
>> to filter as necessary?
>>
>> [1] https://review.openstack.org/#/c/407237/
>
>
> +1
>
> I would love to see this. Our support team frequently has to figure out
> the "history" of a VM, and today they have to use a tool that relies on logs
> and/or the DB to figure out where a VM used to be and when it was moved. It
> would be wonderful if that whole tool could just be replaced with a single
> call to the nova API that returns the full history.
>

Chet, do you have a requirement to query the migrations for multiple VMs?
'/servers/{uuid}/migrations' will be painful for that.

Also note that we still have the '/os-migrations' API, which returns all
migration records in any status for all VMs, and it supports filters like
'instance_uuid', 'status', and 'migration_type'. I can't remember clearly
whether we said we would deprecate it; at least for now we haven't. I want to
figure out whether it still has a useful use case for querying multiple VMs'
migration records.
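The status filter Matt proposes is easy to picture. A minimal sketch of that
server-side filtering (a hypothetical helper, not actual Nova code; the record
fields mirror the short-info form suggested above: id, status, source and
target host):

```python
# Sketch of the proposed status filter for the new microversion
# (hypothetical helper, not actual Nova code).

def filter_migrations(migrations, status=None):
    """Return all migration records, optionally filtered by status."""
    if status is None:
        return list(migrations)
    return [m for m in migrations if m["status"] == status]

records = [
    {"id": 1, "status": "completed", "source_compute": "host-a",
     "dest_compute": "host-b"},
    {"id": 2, "status": "running", "source_compute": "host-b",
     "dest_compute": "host-c"},
]

print(len(filter_migrations(records)))                 # 2 -- every record, any status
print(filter_migrations(records, "running")[0]["id"])  # 2 -- only the in-progress one
```

With no filter the caller sees the full history; `?status=running` would
recover today's behavior, and the live-migration cancel action can still 400
on anything not in progress.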


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][gate] tempest slow - where do we execute them in gate?

2017-04-16 Thread Ihar Hrachyshka
Hi all,

so I tried to inject a failure in a tempest test and was surprised
that no gate job failed because of that:
https://review.openstack.org/#/c/457102/1

It turned out that the test is not executed because we always exclude
all 'slow'-tagged test cases:
http://logs.openstack.org/02/457102/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/89a08cc/console.html#_2017-04-17_01_43_39_115768

Question: do we execute those tests anywhere in gate, and if so,
where? (And if not, why, and how do we guarantee that they are not
broken by new changes?)
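For reference, the exclusion happens through the test-selection regex the full
job feeds to the test runner. Reconstructed from memory (verify against the
console log linked above), it uses a negative lookahead over the test's
bracketed tag list:

```python
import re

# Reconstruction of the full job's selection regex (from memory --
# verify against the gate job's console log): the negative lookahead
# drops any test whose bracketed tag list contains 'slow', then only
# tempest.api / tempest.scenario tests are selected.
SELECT = re.compile(r"(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))")

tests = [
    "tempest.api.compute.servers.test_servers.ServersTest.test_create[id-123]",
    "tempest.scenario.test_volume.VolumeTest.test_boot[slow,id-456]",
]

selected = [t for t in tests if SELECT.match(t)]
print(selected)  # only the first, non-slow test survives selection
```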

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Integrate Identity with LDAP

2017-04-16 Thread Chason Chan
Hi team,

I am trying to integrate Identity (Ocata release) with LDAP, following this
page:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/integrate_with_identity_service/sec-active-directory

I am unable to log in to the dashboard with an AD DS username and password.

This is what my keystone.log shows: http://paste.openstack.org/show/606862/

Please tell me how I can fix it.
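For anyone trying to reproduce, the guide boils down to pointing a
domain-specific identity backend at AD. A minimal sketch of that file
(the path, hostname, DNs, and attribute names below are placeholders for
your own environment, not values from Chason's deployment):

```ini
# /etc/keystone/domains/keystone.LDAP.conf  (illustrative values only)
[identity]
driver = ldap

[ldap]
url = ldap://ad.example.com
user = CN=svc-keystone,OU=Services,DC=example,DC=com
password = SECRET
suffix = DC=example,DC=com
user_tree_dn = OU=Users,DC=example,DC=com
user_objectclass = person
user_id_attribute = sAMAccountName
user_name_attribute = sAMAccountName
```

Also make sure `domain_specific_drivers_enabled = true` is set in
keystone.conf and that the AD users have a role assignment on a project;
without a role, the dashboard login fails even when the LDAP bind succeeds.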

-- 
Regards,
Chason Chan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][deployment] FYI: changes to cells v2 setup guide (pike only)

2017-04-16 Thread Alex Xu
Isn't it strange that 'nova service-list' and 'nova host-list' return hosts
that don't have a host mapping yet?

How is a user supposed to know whether a host has been added to a cell or not?

2017-04-14 23:45 GMT+08:00 Matt Riedemann :

> Nova is working on adding multi-cell aware support to the compute API. A
> side effect of this is we can now have a chicken-and-egg situation during
> deployment such that if your tooling is depending on the compute API to
> list compute hosts, for example, before running the discover_hosts command,
> nothing will actually show up. This is because to list compute hosts, like
> using 'nova hypervisor-list', we get those from the cells now and until you
> run discover_hosts, they aren't mapped to a cell.
>
> The solution is to use "nova service-list --binary nova-compute" instead
> of "nova hypervisor-list" since we can pull services from all cells before
> the hosts are mapped using discover_hosts.
>
> I have a patch up to update our docs and add a release note:
>
> https://review.openstack.org/#/c/456923/
>
> I'll be updating the official install guide docs later.
>
> Note that this is master branch (Pike) only, this does not impact Ocata.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer]

2017-04-16 Thread Andres Alvarez
Hello everyone

I am trying to get acquainted with the Ceilometer code base. I was reading
through the Telemetry Admin Guide regarding publishers and noticed that many
publishers, such as *direct*, *kafka*, and *database*, are deprecated. Yet on
the source code's master branch I can still see some of these publishers.

Is there a reason for this?

Sorry if it's a dumb question.

Wish you all a good day.
Andres
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][trove] deleted instances appearing in nova's server-group list

2017-04-16 Thread Amrith Kumar
TL;DR

Nova's v2 server group API list() call returns a list of members which
includes deleted instances. Is this valid behavior? Should requestors make
the determination that these are deleted instances and act accordingly, or
is this something that changed in Nova and that Nova will fix?

The whole story:

Trove's gate has begun to fail with a very repeatable and specific pattern
where a test finds that there are left-over members in a server group and
therefore does not delete the server group. See [1].

The tests in question create a server group (anti-affinity) and add multiple
instances to it. The instances are then deleted, and the tests wait to ensure
that the instances are in fact deleted. The tests then attempt to delete the
server group. The code (in trove) that determines whether or not to delete
the server group has contained (since its creation) a check to ensure that
the group is only deleted if it is empty (has no members) or has a single
instance (the last instance being deleted).

Now, however, since the server group's membership shows deleted instances,
Trove's code isn't deleting the server group and our tests are failing.

This is illustrated by the CLI-based example (attached).

[1] https://bugs.launchpad.net/bugs/1682845
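Trove's check amounts to something like the following sketch (a hypothetical
helper, not Trove's actual code). With Nova now reporting already-deleted
instances as members, the "empty or last instance" condition never holds:

```python
# Sketch of the deletion check described above (hypothetical helper,
# not Trove's actual code). The group is deleted only when it has no
# members, or one member -- the last instance, itself being deleted.

def should_delete_server_group(members):
    """Decide whether the server group can be deleted."""
    return len(members) <= 1

# Expected case: the last instance is being torn down.
print(should_delete_server_group(["uuid-1"]))            # True -> group deleted

# Failure mode from the bug: deleted instances still listed as
# members, so the group never looks empty and is leaked.
print(should_delete_server_group(["uuid-1", "uuid-2"]))  # False -> group leaks
```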

--
Amrith Kumar
amrith.ku...@gmail.com


amrith@amrith-work:/opt/stack/trove$ openstack server group create affinity-policy --policy affinity
+----------+--------------------------------------+
| Field    | Value                                |
+----------+--------------------------------------+
| id       | e74ce429-9f12-48f6-a156-6222930f733e |
| members  |                                      |
| name     | affinity-policy                      |
| policies | affinity                             |
+----------+--------------------------------------+

amrith@amrith-work:/opt/stack/trove$ nova server-group-list
/usr/local/lib/python2.7/dist-packages/novaclient/client.py:278: UserWarning: The 'tenant_id' argument is deprecated in Ocata and its use may result in errors in future releases. As 'project_id' is provided, the 'tenant_id' argument will be ignored.
  warnings.warn(msg)
+--------------------------------------+-----------------+----------------------------------+----------------------------------+---------------+---------+----------+
| Id                                   | Name            | Project Id                       | User Id                          | Policies      | Members | Metadata |
+--------------------------------------+-----------------+----------------------------------+----------------------------------+---------------+---------+----------+
| e74ce429-9f12-48f6-a156-6222930f733e | affinity-policy | a2701e81c2214285b12f86921426d12b | ac9d7a717aa34456984ad2d0c9d21255 | [u'affinity'] | []      | {}       |
+--------------------------------------+-----------------+----------------------------------+----------------------------------+---------------+---------+----------+

amrith@amrith-work:/opt/stack/trove$ nova boot --image 2c5becd4-4d27-4567-b6c8-5d5cb984ee7b --flavor 1 --hint "group=e74ce429-9f12-48f6-a156-6222930f733e" instance-1

amrith@amrith-work:/opt/stack/trove$ nova boot --image 2c5becd4-4d27-4567-b6c8-5d5cb984ee7b --flavor 1 --hint "group=e74ce429-9f12-48f6-a156-6222930f733e" instance-2

amrith@amrith-work:/opt/stack/trove$ nova server-group-get e74ce429-9f12-48f6-a156-6222930f733e
/usr/local/lib/python2.7/dist-packages/novaclient/client.py:278: UserWarning: The 'tenant_id' argument is deprecated in Ocata and its use may result in errors in future releases. As 'project_id' is provided, the 'tenant_id' argument will be ignored.
  warnings.warn(msg)
+--------------------------------------+-----------------+----------------------------------+----------------------------------+---------------+------------------------------------------------------------------------------------+----------+
| Id                                   | Name            | Project Id                       | User Id                          | Policies      | Members                                                                            | Metadata |
+--------------------------------------+-----------------+----------------------------------+----------------------------------+---------------+------------------------------------------------------------------------------------+----------+
| e74ce429-9f12-48f6-a156-6222930f733e | affinity-policy | a2701e81c2214285b12f86921426d12b | ac9d7a717aa34456984ad2d0c9d21255 | [u'affinity'] | [u'b4d45181-030b-491d-8266-4be7f412c59a', u'7af84322-4d3f-4c11-9f86-a52d34f393c4'] | {}       |
+--------------------------------------+-----------------+----------------------------------+----------------------------------+---------------+------------------------------------------------------------------------------------+----------+

amrith@amrith-work:/opt/stack/trove$ nova list

Re: [openstack-dev] [tc][elections] questions about one platform vision

2017-04-16 Thread Neil Jerram
FWIW, I think the Lego analogy is not actually helpful for another reason: it 
has vastly too many ways of combining, and (hence) no sense at all of 
consistency / interoperability between the different things that you can 
construct with it. Whereas for OpenStack I believe you are also aiming for some 
forms of consistency and interoperability. 


  Original Message  
From: Thierry Carrez
Sent: Friday, 14 April 2017 09:34

[...]
I like the Lego analogy. While it is, in the end, a bunch of building
blocks of various shape and function, we don't describe Lego as a
"collection of blocks". We describe it as "the Lego system" (one
platform) that lets you build stacks in various ways. What makes the
value of the platform is the common experience of operating the blocks,
and the fact that Lego stacks built by someone else are interoperable
with your stacks.

Like all analogies, this one is not perfect (in particular it horribly
fails to capture the "open" nature of the stack), but I think it's still
useful to inform on what OpenStack is.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][ironic][octavia] oslo.config 4.0 will break projects' unit test

2017-04-16 Thread ChangBo Guo
As I expected, there have been some failures in the periodic jobs recently [1]
with enforce_type set to True by default; we'd better fix them before we
release oslo.config 4.0. Several people have been working on this:
Nova: https://review.openstack.org/455534 should fix the failures
Tempest: https://review.openstack.org/456445 fixed
Keystone: https://review.openstack.org/455391 waiting for oslo.config 4.0

We still need help from Glance/Ironic/Octavia:
Glance: https://review.openstack.org/#/c/455522/ needs review
Ironic: needs a fix for the failure in
http://logs.openstack.org/periodic/periodic-ironic-py27-with-oslo-master/680abfe/testr_results.html.gz
Octavia: needs a fix for the failure in
http://logs.openstack.org/periodic/periodic-octavia-py35-with-oslo-master/80fee03/testr_results.html.gz

[1]
http://status.openstack.org/openstack-health/#/?groupKey=build_name=hour=-with-oslo
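The gap ChangBo describes below can be shown with a standalone sketch (this is
NOT oslo.config's actual implementation, just the shape of the problem):
runtime code always converts the value, while a test-time override with
enforcement off historically accepted anything.

```python
# Standalone illustration of the enforce_type gap (NOT oslo.config's
# implementation). Runtime code always type-checks a config value;
# test-time overrides with enforce_type=False did not.

class IntOpt:
    """Stand-in for an integer-typed config option."""
    def convert(self, value):
        return int(value)  # what the runtime path always does

def set_override(opt, value, enforce_type=False):
    if enforce_type:
        return opt.convert(value)  # checked, as with enforce_type=True
    return value                   # old default: accept anything

opt = IntOpt()

# With enforce_type=False a wrong-typed override slips into the test...
print(set_override(opt, "not-a-number"))  # passes silently

# ...with enforce_type=True the same bad test fails immediately.
try:
    set_override(opt, "not-a-number", enforce_type=True)
except ValueError:
    print("rejected at set_override() time")
```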

2017-04-04 0:01 GMT+08:00 ChangBo Guo :

> Hi all,
>
> oslo_config provides the method CONF.set_override [1], which developers
> usually use to change a config option's value in tests. That's convenient.
> By default (enforce_type=False) it does not check the type or value of the
> override; with enforce_type=True it does. In production (runtime) code,
> oslo_config always checks a config option's value. In short, we test and
> run code in different ways, so there is a gap: a config option with the
> wrong type or an invalid value can pass tests while enforce_type=False in
> consuming projects. That means some invalid or wrong tests are in our code
> base.
>
> We began warning users about the change in September 2016 in [2]. This
> change pushes consuming projects to write correct test cases for config
> options. We will make enforce_type=True by default in [3]; that may break
> some projects' tests, but it also surfaces wrong unit tests. The failures
> are easy to fix, which is the recommended approach.
>
> [1] https://github.com/openstack/oslo.config/blob/efb287a94645b15b634e8c344352696ff85c219f/oslo_config/cfg.py#L2613
> [2] https://review.openstack.org/#/c/365476/
> [3] https://review.openstack.org/328692
>
> --
> ChangBo Guo(gcb)
>



-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev