Re: [openstack-dev] [nova] FFE Request: v3 setting v3 API core

2013-09-08 Thread Alex Xu

On 2013-09-08 21:34, Christopher Yeoh wrote:

Hi,

The following 3 changesets in the queue:

https://review.openstack.org/#/c/43274/
https://review.openstack.org/#/c/43278/
https://review.openstack.org/#/c/43280/

make keypairs, scheduler hints and console output part of the V3 core
API. This essentially changes just two things:

- the v3 API server will refuse to start up if they are not loaded (this
   is the definition of a feature being part of core, given that all of
   the API is an 'extension' now)
- the resource is accessed slightly differently - /v3/keypairs rather
   than /v3/os-keypairs, for example
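The "refuse to start up" behaviour could be sketched roughly as below. This is purely illustrative; the extension names and function names are assumptions, not the actual nova code:

```python
# Hypothetical sketch of the "fail fast without core extensions" idea.
# Extension names here are illustrative assumptions, not nova's real ones.
API_V3_CORE_EXTENSIONS = {'keypairs', 'scheduler-hints', 'console-output'}


class CoreExtensionMissing(Exception):
    """Raised when a core v3 extension failed to load."""


def check_core_extensions(loaded_extensions):
    """Refuse to start the v3 API server if any core extension is absent."""
    missing = API_V3_CORE_EXTENSIONS - set(loaded_extensions)
    if missing:
        raise CoreExtensionMissing(
            "v3 API cannot start; missing core extensions: %s"
            % ', '.join(sorted(missing)))
```

With all three extensions loaded the check passes silently; if any is absent, startup aborts with the missing names.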

In terms of risk, the change is limited to the V3 API only. Although
the V3 API will be experimental in Havana and subject to some change
anyway, I would like to include this because of the resource name
changes, to minimise any hassle for people who start using the V3 API
in Havana and then want to use it with Icehouse. It has similar
downstream effects on documentation.
For the same reason, I propose that this patch
https://review.openstack.org/#/c/41349/
get an FFE; it has already received one +2. It also makes some resource
name changes.


The patches are ready to go (recently it has mostly just been rebase
updates with some very minor fixups).

Regards,

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







Re: [openstack-dev] [qa] How to do nova v3 tests in tempest

2013-09-08 Thread Christopher Yeoh
On Sat, Sep 7, 2013 at 3:22 AM, David Kranz  wrote:

> On 09/04/2013 09:11 AM, Zhu Bo wrote:
>
>> hi,
>>   I'm working on bp:nova-v3-tests in tempest. The nova tests in
>> tempest have mostly been ported to v3 and sent off for review,
>> but we got feedback that there was massive code duplication, with a
>> suggestion to do this via inheritance.
>> So I have sent another patch that does this via inheritance. But that
>> way, it is not easy to later drop the v2 client and tests.
>> I want to get more feedback on this blueprint to make sure we do
>> this the right way: which is the better approach, or is there
>> another, better way? I'd appreciate every suggestion and comment.
>>
>> the first way, to do this in separate files:
>> https://review.openstack.org/#/c/39609/ and
>> https://review.openstack.org/#/c/39621/6
>>
>> the second way, to do this by inheritance:
>> https://review.openstack.org/#/c/44876/
>>
>> Thanks & Best Regards
>>
>> Ivan
>>
>>
> Ivan, I took a look at this. My first thought was that subclassing would
> be good because it could avoid code duplication. But when I looked at the
> patch I saw that although there are subclasses, most of the changes are
> version "ifs" inside the base class code. IMO that gives us the worst of
> both worlds and we would be better off just copying as we did with the new
> image api. It is not great, but I think that is the least of evils here.
> Anyone else have a different view?
>
>
I agree. The copy-and-modify technique for V3 is the lesser of two evils.
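The trade-off discussed above can be illustrated with a hypothetical sketch (these are made-up classes, not the actual tempest code): version "ifs" leak into every shared method of a common base class, whereas copy-and-modify keeps each API version self-contained:

```python
# Illustrative sketch only, not tempest code.

# The rejected approach: version "ifs" scattered through shared base code.
class ServersTestMixed:
    def __init__(self, version):
        self.version = version

    def server_url(self, server_id):
        # Every method that touches the API grows branches like this one.
        if self.version == 3:
            return '/v3/servers/%s' % server_id
        return '/v2/servers/%s' % server_id


# The preferred approach: duplicate and modify, one class per API version,
# so the v2 class can later be dropped without untangling shared code.
class ServersTestV2:
    def server_url(self, server_id):
        return '/v2/servers/%s' % server_id


class ServersTestV3:
    def server_url(self, server_id):
        return '/v3/servers/%s' % server_id
```

The duplication costs some maintenance, but dropping v2 later is a simple file deletion rather than a rewrite of the shared base class.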

Chris


[openstack-dev] [nova] FFE Request: v3 setting v3 API core

2013-09-08 Thread Christopher Yeoh
Hi,

The following 3 changesets in the queue:

https://review.openstack.org/#/c/43274/
https://review.openstack.org/#/c/43278/
https://review.openstack.org/#/c/43280/

make keypairs, scheduler hints and console output part of the V3 core
API. This essentially changes just two things:

- the v3 API server will refuse to start up if they are not loaded (this
  is the definition of a feature being part of core, given that all of
  the API is an 'extension' now)
- the resource is accessed slightly differently - /v3/keypairs rather
  than /v3/os-keypairs, for example

In terms of risk, the change is limited to the V3 API only. Although
the V3 API will be experimental in Havana and subject to some change
anyway, I would like to include this because of the resource name
changes, to minimise any hassle for people who start using the V3 API
in Havana and then want to use it with Icehouse. It has similar
downstream effects on documentation.

The patches are ready to go (recently it has mostly just been rebase
updates with some very minor fixups).

Regards,

Chris



Re: [openstack-dev] Confused about GroupAntiAffinityFilter and GroupAffinityFilter

2013-09-08 Thread Gary Kotton
Hi Simon,
I have found the problem. A nice optimization was added a while ago -
https://review.openstack.org/#/c/33720/. This ensures that a filter will
only be run once per request. The problem is that if an instance cannot
be run on the scheduled host, a new host needs to be selected. The
rescheduling after a failed attempt would not invoke the anti-affinity
filter, which may lead to an invalid host being selected.
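A minimal sketch of the failure mode, with illustrative names rather than the real scheduler code:

```python
# Rough sketch (illustrative names only) of why a "run once per request"
# filter breaks rescheduling: on the retry after a failed boot the filter
# is skipped, so a host violating anti-affinity can be chosen.

class GroupAntiAffinityFilter:
    run_filter_once_per_request = True  # the optimization in question

    def host_passes(self, host, group_hosts):
        # Reject hosts that already run an instance of the group.
        return host not in group_hosts


def filter_hosts(flt, hosts, group_hosts, already_ran):
    if flt.run_filter_once_per_request and already_ran:
        return list(hosts)  # filter skipped entirely on the retry pass
    return [h for h in hosts if flt.host_passes(h, group_hosts)]


flt = GroupAntiAffinityFilter()
hosts = ['compute1', 'compute3']
group_hosts = ['compute1']  # the group already has an instance here

first = filter_hosts(flt, hosts, group_hosts, already_ran=False)
# first == ['compute3'] -- anti-affinity enforced

retry = filter_hosts(flt, hosts, group_hosts, already_ran=True)
# retry == ['compute1', 'compute3'] -- anti-affinity no longer enforced
```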
I am going to post a patch soon.
Thanks
Gary

On 9/6/13 3:18 PM, "Gary Kotton"  wrote:

>Hi,
>Sorry for the delayed response (it is new year on my side of the world
>and I have some family obligations).
>Would it be possible for you to provide the nova configuration file
>(I would like to see if you have the group anti-affinity filter in your
>filter list) and, if it is there, at least a trace showing that the
>filter has been invoked?
>I have tested this with the patches that I mentioned below and it works. I
>will invest some time on this on Sunday to make sure that it is all
>working with the latest code.
>Thanks
>Gary
>
>On 9/6/13 10:31 AM, "Simon Pasquier"  wrote:
>
>>Gary (or others), did you have some time to look at my issue?
>>FYI, I opened a bug [1] on Launchpad. I'll update it with the outcome of
>>this discussion.
>>Cheers,
>>Simon
>>
>>[1] https://bugs.launchpad.net/nova/+bug/1218878
>>
>>Le 03/09/2013 15:54, Simon Pasquier a écrit :
>>> I've done a wrong copy&paste, see correction inline.
>>>
>>> Le 03/09/2013 12:34, Simon Pasquier a écrit :
 Hello,

 Thanks for the reply.

 First of all, do you agree that the current documentation for these
 filters is inaccurate?

 My test environment has 2 compute nodes: compute1 and compute3. First, I
 launch 1 instance (not tied to any group) on each node:
 $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name local --availability-zone nova:compute1 vm-compute1-nogroup
 $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name local --availability-zone nova:compute3 vm-compute3-nogroup

 So far so good, everything's active:
 $ nova list
 
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+
 | ID                                   | Name                | Status | Task State | Power State | Networks         |
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+
 | 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
 | c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+



 Then I try to launch one instance in group 'foo' but it fails:
 $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name local --availability-zone nova:compute3 vm-compute3-nogroup
>>>
>>> The command is:
>>>
>>> $ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-uec --key-name
>>> local --hint group=foo vm1-foo
>>>
 $ nova list
 
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+
 | ID                                   | Name                | Status | Task State | Power State | Networks         |
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+
 | 3a465024-85e7-4e80-99a9-ccef3a4f41d5 | vm-compute1-nogroup | ACTIVE | None       | Running     | private=10.0.0.3 |
 | c838e0c4-3b4f-4030-b2a2-b21305c0f3ea | vm-compute3-nogroup | ACTIVE | None       | Running     | private=10.0.0.4 |
 | 743fa564-f38f-4f44-9913-d8adcae955a0 | vm1-foo             | ERROR  | None       | NOSTATE     |                  |
 +--------------------------------------+---------------------+--------+------------+-------------+------------------+



 I've pasted the scheduler logs [1] and my nova.conf file [2]. As you
 will see, the log message is there but it looks like group_hosts() [3]
 is returning all my hosts instead of only the ones that run instances
 from the group.

 Concerning GroupAffinityFilter, I understood that it couldn't work
 simultaneously with GroupAntiAffinityFilter, but since I missed the
 multiple schedulers, I couldn't figure out how it would be useful. So I
 got it now.

 Best regards,

 Simon

 [1] http://paste.openstack.org/show/45672/
 [2] http://paste.openstack.org/show/45671/
 

Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a non-shared network in another tenant

2013-09-08 Thread Avishay Balderman
Hi
I have opened two bugs that are related to the topic below:

https://bugs.launchpad.net/neutron/+bug/1221315

https://bugs.launchpad.net/nova/+bug/1221320

Thanks

Avishay

From: Samuel Bercovici
Sent: Wednesday, August 07, 2013 1:05 PM
To: OpenStack Development Mailing List; gong...@unitedstack.com
Subject: Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a 
non-shared network in another tenant

Hi Yong,

Gary has recommended that I send you the following:

In: /opt/stack/nova/nova/network/neutronv2/api.py
In the _get_available_networks function, the developer added a specific
line of code filtering networks by the tenant_id.
Around line 123: search_opts = {"tenant_id": project_id, 'shared': False}

As far as I understand, Neutron already filters non-shared networks by
the tenant ID, so why do we need this explicit filter? Moreover, I think
that by default Neutron will also return the shared networks in addition
to the private ones, so instead of making two calls the code could make
a single call to Neutron, filtering by net_ids if needed.
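The single-call alternative suggested above might look roughly like this. This is an illustrative sketch, not the actual nova code; it assumes a neutron client whose list_networks accepts filter kwargs and returns a {'networks': [...]} dict:

```python
# Illustrative sketch of the single-call suggestion (not nova's real code).
# Assumption: the neutron client scopes results to the caller's tenant and
# includes shared networks by default, so no explicit tenant_id / shared
# filters are needed.

def get_available_networks(neutron, net_ids=None):
    search_opts = {}
    if net_ids:
        search_opts['id'] = net_ids  # optionally narrow to requested networks
    # One call returns the tenant's own networks plus shared ones.
    return neutron.list_networks(**search_opts).get('networks', [])
```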

Do you see a reason why the code should remain as is?

Thanks,
-Sam.



From: Samuel Bercovici
Sent: Thursday, August 01, 2013 10:58 AM
To: OpenStack Development Mailing List; 
sorla...@nicira.com
Subject: Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a 
non-shared network in another tenant

There was another patch needed:
In: /opt/stack/nova/nova/network/neutronv2/api.py
In the _get_available_networks function, the developer added a specific
line of code filtering networks by the tenant_id.
In general, as far as I understand, this might be unneeded, as Quantum
will already filter the networks based on the tenant_id in the context,
while if is_admin it will elevate and return all networks, which I
believe is the behavior we want.

Do you think this can somehow be solved only on the Neutron side, or
must it also be done by removing the tenant_id filter on the Nova side?

When removing the tenant_id filter plus the patch below, I get the
behavior that as admin I can create VMs connected to another tenant's
private network, but as non-admin I am not able to do so.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Wednesday, July 31, 2013 7:32 PM
To: OpenStack Development Mailing List; 
sorla...@nicira.com
Subject: Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a 
non-shared network in another tenant

Hi Salvatore,

I thought that creating a qport would be enough, but it looks like I am
still missing something else.
I have commented out the ._validate_network_tenant_ownership call in the
create function in /opt/stack/quantum/neutron/api/v2/base.py.
Now, as an admin user, I can create a qport from tenant-a that is mapped
to a private network in tenant-b.

The following still fails with ERROR: The resource could not be found. (HTTP 
404) ...
nova boot --flavor 1 --image  --nic port-id=
Where  is the one I got from the port-create

Any ideas where I should look next?

Regards,
-Sam.


From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Wednesday, July 31, 2013 5:42 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Neutron]Connecting a VM from one tenant to a 
non-shared network in another tenant

Hi Sam,

is what you're trying to do tantamount to creating a port whose
tenant_id is different from the network's tenant_id?
At the moment we have a fairly strict ownership check, which does not
allow even admin users to do this operation.
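The relaxation under discussion might look roughly like this. This is purely illustrative (made-up function shape and context fields, not the actual neutron check):

```python
# Hypothetical sketch of relaxing the ownership check for admins; the
# function name and context/network shapes are assumptions, not neutron code.

def validate_network_tenant_ownership(context, network):
    """Allow admins to attach ports to any network; restrict non-admins
    to networks owned by their own tenant."""
    if context['is_admin']:
        return  # admins may create ports on any tenant's network
    if network['tenant_id'] != context['tenant_id']:
        raise PermissionError(
            "cannot create a port on another tenant's network")
```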

I do not have a strong opinion against relaxing the check and allowing
admin users to create ports on any network. I don't think this would
constitute a potential vulnerability: if someone manages to impersonate
an admin user in Neutron, he/she can do much more damage.

Salvatore

On 31 July 2013 16:11, Samuel Bercovici 
mailto:samu...@radware.com>> wrote:
Hi All,

We are providing load balancing services via virtual machines running
under an admin tenant; these need to be connected to VMs attached to a
non-shared/private tenant network.
The virtual machine fails to be provisioned connected to the private
tenant network even if it is provisioned using the admin user, which has
the admin role on both tenants.
Please advise.

Best Regards,
-Sam.

