Re: [Openstack] nova+vmware doesn't create the cached image in ipaddress_base folder

2016-08-03 Thread Vaidyanath Manogaran
Could the SRM "recompute datastore groups" operation have something to do
with this?
I have tried setting up the cache_prefix, but no luck.
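For reference, here is roughly how I set it on the compute node (a minimal
sketch; the option names are the real [vmware] options in nova.conf, but
the values are illustrative, not from my actual setup):

---cut here---
[vmware]
host_ip = VCENTER_IP
cluster_name = CLUSTER_NAME
# folder name prefix used for the image cache instead of the vCenter IP,
# to avoid the x.x.x.x_base naming
cache_prefix = image-cache
---cut here---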



On Tue, Aug 2, 2016 at 7:39 AM, Vaidyanath Manogaran  wrote:

> Hi Gary,
> Thanks, I have set the cache_prefix.
> It still fails with a file-not-found error. The image is not created inside
> the specified folder; rather, it is created outside the folder, directly in
> the datastore.
>
> http://paste.openstack.org/show/545253/
>
> Regards,
> Vaidyanath
>
> On Mon, Aug 1, 2016 at 7:00 PM, Gary Kotton  wrote:
>
>> Hi,
>>
>> I suggest that you make use of the variable cache_prefix.
>>
>>
>> https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vmops.py#L156
>>
>> The IP address just creates havoc…
>>
>> Thanks
>>
>> Gary
>>
>>
>>
>> From: Vaidyanath Manogaran
>> Date: Monday, August 1, 2016 at 2:25 PM
>> To: "openstack@lists.openstack.org", "commun...@lists.openstack.org"
>> Subject: [Openstack] nova+vmware doesn't create the cached image in
>> ipaddress_base folder
>>
>>
>>
>> nova boot fails because the image created via oslo_vmware/api.py ends up
>> outside the base folder. Though the x.x.x.x_base folder is present, the
>> image is not created inside it. My configuration is as follows:
>>
>> 1. The controller and compute node are in one vCenter server.
>>
>> 2. The compute node manages a different vCenter server.
>>
>> 3. All the hosts in the cluster share the same datastore, along with many
>> other datastores.
>>
>> 4. I have many clusters apart from the one I am currently managing; the
>> others will be managed later.
>>
>> Any help here would be appreciated.
>>
>>
>>
>> --
>>
>> Regards,
>>
>> Vaidyanath
>>
>
>
>
> --
> Regards,
>
> Vaidyanath
> +91-9483465528(M)
>



-- 
Regards,

Vaidyanath
+91-9483465528(M)


[Openstack] [Keystone] List group members with policy.v3cloudsample.json

2016-08-03 Thread 林自均
Hi all,

My OpenStack version is Mitaka. I updated my /etc/keystone/policy.json to
policy.v3cloudsample.json. Most functions work as expected.

However, when I wanted to list members in a group as a domain admin, an
error occurred: “You are not authorized to perform the requested action:
identity:list_users_in_group (HTTP 403)”.

The reproduce steps are:

- As cloud admin:
  - openstack domain create taiwan
  - openstack user create --domain taiwan --password 5ecret taiwan-president
  - openstack role add --user taiwan-president --domain taiwan admin
- As taiwan-president:
  - openstack group create --domain taiwan indigenous
  - openstack user create --domain taiwan margaret
  - openstack group add user --group-domain taiwan indigenous margaret
  - openstack user list --group indigenous --domain taiwan

The last command will generate the 403 error.

The rule for identity:list_users_in_group is "rule:cloud_admin or
rule:admin_and_matching_target_group_domain_id". I can successfully list
group members if I change it to "rule:admin_required".
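For reference, this is the exact line in /etc/keystone/policy.json. The
shipped v3cloudsample rule, which gives me the 403:

---cut here---
"identity:list_users_in_group": "rule:cloud_admin or rule:admin_and_matching_target_group_domain_id"
---cut here---

and the change that lists group members successfully:

---cut here---
"identity:list_users_in_group": "rule:admin_required"
---cut here---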

Am I doing anything wrong? Or did I run into some kind of bug? Thanks for
the help.

John


Re: [Openstack] [neutron] - vlan-aware-vms

2016-08-03 Thread Martinx - ジェームズ
On 2 August 2016 at 17:49, Armando M.  wrote:

>
>
> On 29 July 2016 at 12:59, Martinx - ジェームズ 
> wrote:
>
>> Quick question:
>>
>> Can I start testing Newton VLAN Aware VMs now (Beta 2)?
>>
>> Thanks,
>> Thiago
>>
>>
> If you're paying close attention, the LinuxBridge version is almost
> functional, and the OVS one is coming along. I'd advise waiting a tad
> longer. I am trying to keep [1] up to date, so you might want to check that
> out before pulling down the code.
>
> [1] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
>

Whee! That's great news!

I had to give up on Mitaka for a big project (almost 300 Instances) just
because it did not support VLAN Aware VMs.

We deployed Ubuntu 16.04 with OpenvSwitch and DPDK on the hosts... and we
used shell scripts to manage everything, super creepy...

I can't wait to give it a try!

Thank you,
Thiago


Re: [Openstack] [networking-sfc] Flow classifier conflict logic

2016-08-03 Thread Artem Plakunov

100.*
$ neutron port-show 429fdb89-1bfa-4dc1-bb89-25373501ebde | grep tenant_id
| tenant_id | 0dafd2d782f4445798363ba9b27e104f |
$ neutron port-show ca7f8fdf-a1ff-4cd7-8897-9f6ca5220be6 | grep tenant_id
| tenant_id | 0dafd2d782f4445798363ba9b27e104f |
$ neutron port-show df8ce9a2-eddd-4b86-8d1c-705f9c96ddb6 | grep tenant_id
| tenant_id | 0dafd2d782f4445798363ba9b27e104f |

200.*
$ neutron port-show 2c6f6f67-6241-4661-977c-3fe5da864c95 | grep tenant_id
| tenant_id | ddf01417a9b74648a3a20c2b818a52ca |
$ neutron port-show 9b20c466-f62c-4c49-a074-91a088ebb0f6 | grep tenant_id
| tenant_id | ddf01417a9b74648a3a20c2b818a52ca |
$ neutron port-show f95f2509-d27d-4b3a-b62a-b9bdb69085bf | grep tenant_id
| tenant_id | ddf01417a9b74648a3a20c2b818a52ca |

On 02.08.2016 20:00, Farhad Sunavala wrote:

Please send the tenant ids of all six neutron ports.

From admin:
neutron port-show  | grep tenant_id

Thanks,
Farhad.


On Monday, August 1, 2016 7:44 AM, Artem Plakunov wrote:



Thanks.

You said, though, that the classifier must be unique within a tenant. I
tried creating chains in two different tenants by different users, without
any RBAC rules. So there are two tenants; each has one network, two VMs
(source, service), and an admin user. I used a different openrc config for
each user, yet I still get the same conflict.


Info about the test is in the attachment
On 31.07.2016 5:25, Farhad Sunavala wrote:


Yes, this was intentionally done.
The logical-source-port is important only at the point of classification.
All successive classifications rely only on the 5-tuple and the MPLS
label (chain ID).


Consider an extension of the scenario you mention below.

Sources: (similar to your case)
a
b

Port-pairs: (added ppe and ppf)
ppc
ppd
ppe
ppf

Port-pair-groups: (added ppge and ppgf)
ppgc
ppgd
ppge
ppgf

Flow-classifiers:
fc1: logical-source-port of a && tcp
fc2: logical-source-port of b && tcp

Port-chains:
pc1: fc1 && (ppgc + ppge)
pc2: fc2 && (ppgd + ppgc + ppgf)



The flow-classifier has a logical-src-port and protocol=tcp.
The logical-src-port has no relevance in the middle of the chain.

In the middle of the chain, the only relevant flow-classifier criterion
is protocol=tcp.


If we allow it, we cannot distinguish whether TCP traffic coming out of
ppgc (and subsequently ppc) should be marked with the label for pc1 or
the label for pc2.

In other words, within a tenant the flow-classifiers need to be unique
with respect to the 5-tuple.
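To make that concrete, a rough CLI sketch of the collision (the port names
are illustrative, assuming the networking-sfc neutron CLI of this era):

---cut here---
# fc1 and fc2 differ only in logical-source-port; their 5-tuples are identical
neutron flow-classifier-create --logical-source-port PORT_A --protocol tcp fc1
neutron flow-classifier-create --logical-source-port PORT_B --protocol tcp fc2

# pc1 sends fc1-matched traffic through ppgc and ppge
neutron port-chain-create --port-pair-group ppgc --port-pair-group ppge \
  --flow-classifier fc1 pc1

# pc2 also traverses ppgc; TCP traffic leaving ppgc could belong to either
# chain, so this create is rejected with the flow-classifier conflict error
neutron port-chain-create --port-pair-group ppgd --port-pair-group ppgc \
  --port-pair-group ppgf --flow-classifier fc2 pc2
---cut here---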


thanks,
Farhad.

Date: Fri, 29 Jul 2016 18:01:05 +0300
From: Artem Plakunov <art...@lvk.cs.msu.su>
To: openstack@lists.openstack.org
Subject: [Openstack] [networking-sfc] Flow classifier conflict logic

Hello.
We have two deployments with networking-sfc:
Mirantis 8.0 (Liberty) and Mirantis 9.0 (Mitaka).

I noticed a difference in how flow classifiers conflict with each other
which I do not understand. I'm not sure if it is a bug or not.

I did the following on mitaka:
1. Create tenant 1 and network 1
2. Launch vms A and B in network 1
3. Create tenant 2, share network 1 to it with RBAC policy, launch vm C
in network 1
4. Create tenant 3, share network 1 to it with RBAC policy, launch vm D
in network 1
5. Setup sfc:
   - create two port pairs for vm C and vm D with a bidirectional port
   - create two port pair groups with these pairs (one pair in one group)
   - create flow classifier 1: logical-source-port = vm A port, protocol = tcp
   - create flow classifier 2: logical-source-port = vm B port, protocol = tcp
   - create chain with group 1 and classifier 1
   - create chain with group 2 and classifier 2 - this step gives the
     following error:

Flow Classifier 7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow
Classifier 4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain
d1070955-fae9-4483-be9e-0e30f2859282.
Neutron server returns request_ids:
['req-9d0eecec-2724-45e8-84b4-7ccf67168b03']

The only thing neutron logs have is this from server.log:
2016-07-29 14:15:57.889 18917 INFO neutron.api.v2.resource
[req-9d0eecec-2724-45e8-84b4-7ccf67168b03
0b807c8616614b84a4b16a318248d28c 9de9dcec18424398a75a518249707a61 - - -]
create failed (client error): Flow Classifier
7f37c1ba-abe6-44a0-9507-5b982c51028b conflicts with Flow Classifier
4e97a8a5-cb22-4c21-8e30-65758859f501 in port chain
d1070955-fae9-4483-be9e-0e30f2859282.

I tried the same in Liberty and it works; sfc successfully routes traffic
from both vms to their respective port groups.

Liberty setup:
neutron version 7.0.4
neutronclient version 3.1.1
networking-sfc version 1.0.0 (from pip package)

Mitaka setup:
neutron version 8.1.1
neutronclient version 5.0.0 (tried using 3.1.1 with same outcome)
networking-sfc version 1.0.1.dev74 (from master branch commit
6730b6810355761cf55f04a40cd645f065f15752)

I'll attac

Re: [Openstack] [OpenStack] Glance: Unable to create image.

2016-08-03 Thread Eugen Block

So you're running Juno; I'm not sure I can help here...
Are the other openstack services running? What is the output of:
nova service-list
cinder service-list
keystone user-list

Can you create an empty volume? If this works, then your cinder service
is probably configured correctly and glance is not; in that case, you
should check the docs against your actual configuration again.
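For example, a quick test (the volume name is arbitrary; Juno-era clients
take --display-name):

---cut here---
cinder create --display-name test-empty 1
cinder list
---cut here---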


Are there any other error messages from other services?


Quoting shivkumar gupta:


Hello Eugen,
I am following the attached guide.

Regards
Shiv

On Tuesday, 2 August 2016 6:19 PM, Eugen Block  wrote:


Which guide are you using?
I don't see any domains in your glance-api.conf or
glance-registry.conf; here is an excerpt from the Mitaka guide:

---cut here---
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = GLANCE_PASS
---cut here---

Did you set the environment variables in your scripts correctly
(OS_IMAGE_API_VERSION=2)? There are several points that could lead to
the authorization error. I don't use SSL in my test environment, so I
don't know if that is another point to check.
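For reference, a minimal openrc sketch of the variables I mean (the values
are placeholders; adjust them to your Juno setup):

---cut here---
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
export OS_IMAGE_API_VERSION=2
---cut here---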

Regards.
Eugen

Quoting shivkumar gupta:


Thanks Trinath,
I already verified the configuration against the document. Can you please
tell me what exactly I should verify? Also, what is the authentication
flow while creating an image in glance?

    On Monday, 1 August 2016 3:09 PM, Trinath Somanchi 
 wrote:


Hi Shiv-

The error clearly indicates a keystone misconfiguration.

Reverify your glance configuration for the keystone-glance authentication
credentials, i.e. the ones you created while installing and configuring
glance.

/ Trinath

From: shivkumar gupta [mailto:shivkumar_gupt...@yahoo.com]
Sent: Monday, August 01, 2016 2:42 PM
To: OpenStack Mailing List
Subject: Re: [Openstack] [OpenStack] Glance: Unable to create image.

Hello Experts,

Please suggest and help to proceed further.

Regards
Shiv

On Sunday, 31 July 2016 5:04 PM, shivkumar gupta wrote:

Hello Experts,

I am unable to create an image during glance installation and am getting
the following error:

glance image-create --name "Cirros" \
  --file /tmp/images/cirros-0.3.3-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --is-public True --progress
[=>] 100%
Request returned failure status. Invalid OpenStack Identity credentials.

From api.log I can see the following errors were present:

2016-07-30 21:36:17.135 7114 INFO urllib3.connectionpool [-] Starting new HTTPS connection (1): 127.0.0.1
2016-07-30 21:36:17.145 7114 WARNING keystoneclient.middleware.auth_token [-] Retrying on HTTP connection exception: [Errno 1] _ssl.c:510: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
2016-07-30 21:36:17.648 7114 INFO urllib3.connectionpool [-] Starting new HTTPS connection (1): 127.0.0.1
2016-07-30 21:36:17.671 7114 WARNING keystoneclient.middleware.auth_token [-] Retrying on HTTP connection exception: [Errno 1] _ssl.c:510: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
2016-07-30 21:36:18.673 7114 INFO urllib3.connectionpool [-] Starting new HTTPS connection (1): 127.0.0.1
2016-07-30 21:36:18.686 7114 WARNING keystoneclient.middleware.auth_token [-] Retrying on HTTP connection exception: [Errno 1] _ssl.c:510: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
2016-07-30 21:36:20.690 7114 INFO urllib3.connectionpool [-] Starting new HTTPS connection (1): 127.0.0.1
2016-07-30 21:36:20.724 7114 ERROR keystoneclient.middleware.auth_token [-] HTTP connection exception: [Errno 1] _ssl.c:510: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown prot

[Openstack] [Cinder][vmware] unable to attach the volume to an instance

2016-08-03 Thread Vaidyanath Manogaran
I have a controller node and a compute node.
The cinder service and cinder volume service are running on the controller.
The controller and compute nodes are in one vCenter server, and there is
another vCenter server which manages the instances.

I get the following error when I try to attach a volume.

Details: {'obj': 'vm-107'}). to caller
2016-08-03 19:08:05.170 1019 ERROR oslo_messaging._drivers.common
[req-95710170-bf28-4c53-894f-a28094c650d1 a7c5b2526f0546c890e8cbb4b90c58d7
ce581005def94bb1947eac9ac15f15ea - - -] ['Traceback (most recent call
last):\n', '  File
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line
138, in _dispatch_and_reply\nincoming.message))\n', '  File
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line
185, in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt,
args)\n', '  File
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line
127, in _do_dispatch\nresult = func(ctxt, **new_args)\n', '  File
"/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 1473, in
initialize_connection\nraise
exception.VolumeBackendAPIException(data=err_msg)\n',
"VolumeBackendAPIException: Bad or unexpected response from the storage
volume backend API: Driver initialize connection failed (error: The object
has already been deleted or has not been completely created\nCause: Server
raised fault: 'The object has already been deleted or has not been
completely created'\nFaults: [ManagedObjectNotFound]\nDetails: {'obj':
'vm-107'}).\n"]
2016-08-03 19:08:05.171 1019 DEBUG oslo_messaging._drivers.amqpdriver
[req-95710170-bf28-4c53-894f-a28094c650d1 a7c5b2526f0546c890e8cbb4b90c58d7
ce581005def94bb1947eac9ac15f15ea - - -] sending reply msg_id:
7b20588a5b2d4a44a8de90c3208acc99 reply queue:
reply_4fc7381f67744af4aa40876d65adae2d time elapsed: 0.135911167134s
_send_reply
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:74
2016-08-03 19:08:05.267 1019 DEBUG oslo_messaging._drivers.amqpdriver [-]
received message msg_id: 0e855fc9726c4f2692f6e4d148d077aa reply to
reply_4fc7381f67744af4aa40876d65adae2d __call__
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:201
2016-08-03 19:08:05.310 1019 INFO cinder.volume.manager
[req-82ca077f-47d1-4004-bda6-84a86da77bfa a7c5b2526f0546c890e8cbb4b90c58d7
ce581005def94bb1947eac9ac15f15ea - - -] Terminate volume connection
completed successfully.
2016-08-03 19:08:05.311 1019 DEBUG oslo_messaging._drivers.amqpdriver
[req-82ca077f-47d1-4004-bda6-84a86da77bfa a7c5b2526f0546c890e8cbb4b90c58d7
ce581005def94bb1947eac9ac15f15ea - - -] sending reply msg_id:
0e855fc9726c4f2692f6e4d148d077aa reply queue:
reply_4fc7381f67744af4aa40876d65adae2d time elapsed: 0.0429566651583s
_send_reply
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:74
2016-08-03 19:08:08.533 1019 DEBUG oslo_service.periodic_task
[req-b95791b7-495a-4eb7-8beb-64e12683c508 - - - - -] Running periodic task
VolumeManager._publish_service_capabilities run_periodic_tasks
/usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215
2016-08-03 19:08:08.534 1019 DEBUG cinder.manager
[req-b95791b7-495a-4eb7-8beb-64e12683c508 - - - - -] Notifying Schedulers
of capabilities ... _publish_service_capabilities
/usr/lib/python2.7/dist-packages/cinder/manager.py:168
2016-08-03 19:08:08.536 1019 DEBUG oslo_messaging._drivers.amqpdriver
[req-b95791b7-495a-4eb7-8beb-64e12683c508 - - - - -] CAST unique_id:
d63c362bdeb24ab6b8bf1c25ce0a147a FANOUT topic 'cinder-scheduler' _send
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:443
2016-08-03 19:08:08.540 1019 DEBUG oslo_service.periodic_task
[req-b95791b7-495a-4eb7-8beb-64e12683c508 - - - - -] Running periodic task
VolumeManager._report_driver_status run_periodic_tasks
/usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215
2016-08-03 19:08:34.671 1019 DEBUG oslo_messaging._drivers.amqpdriver [-]
received message msg_id: 8ce1c2ceaf3d4f0396584155d9a919c2 reply to
reply_4fc7381f67744af4aa40876d65adae2d __call__
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:201
2016-08-03 19:08:34.751 1019 DEBUG oslo_vmware.api
[req-9f9a5b30-f8c8-414b-8cf2-580286ee6dcb a7c5b2526f0546c890e8cbb4b90c58d7
ce581005def94bb1947eac9ac15f15ea - - -] Waiting for function
oslo_vmware.api._invoke_api to return. func
/usr/lib/python2.7/dist-packages/oslo_vmware/api.py:122
2016-08-03 19:08:34.869 1019 DEBUG oslo_vmware.api
[req-9f9a5b30-f8c8-414b-8cf2-580286ee6dcb a7c5b2526f0546c890e8cbb4b90c58d7
ce581005def94bb1947eac9ac15f15ea - - -] Waiting for function
oslo_vmware.api._invoke_api to return. func
/usr/lib/python2.7/dist-packages/oslo_vmware/api.py:122
2016-08-03 19:08:34.870 1019 DEBUG cinder.volume.drivers.vmware.volumeops
[req-9f9a5b30-f8c8-414b-8cf2-580286ee6dcb a7c5b2526f0546c890e8cbb4b90c58d7
ce581005def94bb1947eac9ac15f15ea - - -] Did not find any backing with name:
volume-e59961aa-62a0-4576-b370-6