[Openstack] ephemeral disks location

2017-01-24 Thread Manuel Sopena Ballesteros
Hi,

I have been searching on the internet and could not find an answer to this 
question.

I understand that ephemeral disks live until the VM is destroyed, but where 
are the ephemeral disks stored? On the local host running the VM, in centralized 
storage (e.g. Ceph), or can I have both options?

If ephemeral disks can be stored on the same host the instance is running on, 
what would happen if that host fails and the instance is migrated to another one? 
Will the ephemeral disk be moved across? Will the data persist?
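
For reference, the ephemeral disk backend is selected per compute node in
nova.conf, so both options exist; a rough sketch (values illustrative, not from
this message):

  # default: file-backed ephemeral disks on the local compute host
  [DEFAULT]
  state_path = /var/lib/nova
  instances_path = $state_path/instances

  # alternative: the libvirt rbd backend keeps ephemeral disks in Ceph
  [libvirt]
  images_type = rbd
  images_rbd_pool = vms
  images_rbd_ceph_conf = /etc/ceph/ceph.conf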

Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: 
manuel...@garvan.org.au

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] GlusterFS and Openstack Newton

2017-01-24 Thread James Fleet
That is a good idea, Peter, for troubleshooting the disk access. It's not making
sense: I am able to mount the volume and write files to it, but it seems to be a
permission issue between libvirt and Nova.


James Fleet

James R. Fleet
Innovative Solutions Technology
484 Williamsport Pike  #135
Martinsburg, WV 25404
 888.809.0223 ext.702

On Tue, Jan 24, 2017 at 2:44 PM, Peter Kirby 
wrote:

> If you use audit2why on your audit log, is there anything in there
> suggesting SELinux is blocking the disk access?
>
>
> On Tue, Jan 24, 2017 at 12:12 PM, James Fleet 
> wrote:
>
>> Hello Peter,
>>
>>
>> Yes, the command to allow it in SELinux is setsebool -P virt_use_fusefs on,
>> which allows the client to connect with SELinux enabled.
>>
>> James R. Fleet
>> Innovative Solutions Technology
>> 484 Williamsport Pike  #135
>> Martinsburg, WV 25404
>>  888.809.0223 ext.702
>>
>> On Tue, Jan 24, 2017 at 11:17 AM, Peter Kirby <
>> peter.ki...@objectstream.com> wrote:
>>
>>> Hi James,
>>>
>>>
>>> I'm pretty new to OpenStack, but I'm working on setting up exactly the
>>> same thing right now.  I'm having some other issues a little before where
>>> you are with my stonith device so I don't really have any insight on your
>>> exact problem.  If I get mine to work I'll share what I did.
>>>
>>> However, my first thought is SELinux.  If you've checked file
>>> permissions and they look ok, is SELinux Enforcing?  If so, you might try
>>> to temporarily set it to permissive.  If that fixes the problem then check
>>> audit logs for what you're missing.  It could be a missing context.
>>>
>>> Just my two cents.
>>>
>>>
>>> On Tue, Jan 24, 2017 at 9:51 AM, James Fleet 
>>> wrote:
>>>
 Hello,

 We have a new build going up in our DC of Openstack Newton. We wanted
 to build in a shared storage solution and really liked the simplicity as
 well as functions of glusterFS. This would allow us to perform live
 migrations along with Geo replication. The issue we have been having is
 getting nova-libvirt instances to run on the compute nodes with the
 glusterfs mount point of /var/lib/nova/instances.

 We have added all the required permissions on the volume share :

 Volume Name: gfsimgstore

 Type: Replicate

 Volume ID: 768d161f-78ca-40dd-befc-ddf9de2ccb38

 Status: Started

 Snapshot Count: 0

 Number of Bricks: 1 x 2 = 2

 Transport-type: tcp

 Bricks:

 Brick1: cloud304-node1:/bricks/imgstore1

 Brick2: cloud304-node2:/bricks/imgstore1

 Options Reconfigured:

 cluster.data-self-heal-algorithm: full

 features.shard: on

 cluster.server-quorum-type: server

 cluster.quorum-type: auto

 network.remote-dio: enable

 cluster.eager-lock: enable

 performance.stat-prefetch: off

 performance.io-cache: off

 performance.read-ahead: off

 performance.quick-read: off

 server.allow-insecure: on

 storage.owner-gid: 162

 storage.owner-uid: 162

 transport.address-family: inet

 performance.readdir-ahead: on

 nfs.disable: on


 We have modified permissions following what documentation we were able
 to locate, but we still get errors when we try to create a VM. The errors
 are a lot but this is the final error that stands out:

  2017-01-23 18:29:25.798 12184 ERROR nova.compute.manager [instance:
 c6634e67-b293-4424-96ec-f0c58b2bf081] libvirtError: Unable to open
 file: 
 /var/lib/nova/instances/c6634e67-b293-4424-96ec-f0c58b2bf081/console.log:
 Permission denied 2017-01-23 18:29:25.798 12184 ERROR


 I am hoping I can find someone running glusterfs and can offer some
 insight to our issue.



 James Fleet









 James R. Fleet
 Innovative Solutions Technology
 888.809.0223 ext.702

 ___
 Mailing list: http://lists.openstack.org/cgi
 -bin/mailman/listinfo/openstack
 Post to : openstack@lists.openstack.org
 Unsubscribe : http://lists.openstack.org/cgi
 -bin/mailman/listinfo/openstack


>>>
>>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack Marketing] [OpenStack Foundation] OpenStack 2016 Annual Report

2017-01-24 Thread Melvin Hillsman
+1 @gkevorki

On Tue, Jan 24, 2017 at 2:47 PM, Sriram Subramanian 
wrote:

> Amen! Here's  to yet another great year for OpenStack!
>
> Thanks,
> -Sriram
> www.CloudDon.com 
> Enabling Modern Enterprise IT Transformations
>
> --
> *From:* Gary Kevorkian (gkevorki) 
> *Sent:* Tuesday, January 24, 2017 11:59:34 AM
> *To:* Lauren Sell; foundation-bo...@lists.openstack.org mailing list;
> foundat...@lists.openstack.org; market...@lists.openstack.org;
> openstack@lists.openstack.org
> *Subject:* Re: [OpenStack Foundation] [OpenStack Marketing] OpenStack
> 2016 Annual Report
>
> Congrats, everyone! It's great to be a part of this community.
>
> It's more like family, actually... but with less fighting at the dinner table.
>
> Here's to a great 2017!
>
> GK
>
>
> Gary Kevorkian
> MARKETING COMMUNICATIONS MANAGER - EVENTS
> Marketing and Communications
> gkevo...@cisco.com
> Tel: +3237912058
> Cisco.com
>
>
>
>
>
>
>
> On 1/24/17, 11:44 AM, "Lauren Sell"  wrote:
>
> >Hi everyone,
> >
> >Today we published the OpenStack Foundation 2016 Annual Report:
> >https://www.openstack.org/assets/reports/OpenStack-2016-Annual-Report-final-draft.pdf
> >
> >Thank you to all of the community members who contributed to the report,
> >and to Heidi Joy for putting it together. The report includes key stats,
> >as well as updates from the Board of Directors, Foundation staff,
> >Technical Committee, Board Committees and Working Groups. We had a busy
> >year in 2016, and it's great to see our collective efforts documented.
> >
> >2016 was a big year for OpenStack. We had our most substantial software
> >releases, our best-attended Summits, our broadest set of global OpenStack
> >Days, reached our largest footprint of public cloud providers, and
> >learned about bigger deployments and projects using OpenStack. We saw
> >organizations as varied as AT&T, Volkswagen Group, China Mobile, Banco
> >Santander, and Cambridge University give keynote presentations about
> >driving innovation with the software our community develops. And we saw
> >new companies invest in our community, especially from China and the
> >telecom sector. We started to see the power of the "one platform" concept
> >with virtualization, bare metal and containers working together to run
> >versatile workloads. From the OpenStack and Kubernetes demos in Austin to
> >the OpenStack and OPNFV telecom demo in Barcelona, we saw how
> >collaboration and technology integrations deliver practical value.
> >
> >But 2016 was also a year of changes in the industry and in our community,
> >whether it be emerging technologies or shifts in the ecosystem. As we
> >look forward to 2017, more change is on the horizon and collaboration
> >within our community and beyond is more important than ever. The way we
> >react and adapt in this period of change will have a huge impact on the
> >role OpenStack plays in the future of IT. With a strong, diverse
> >community and clear mission, there's nothing we can't achieve together.
> >
> >Today we kick off 2017 with the first Board meeting, in which we'll seat
> >new members from the recent election. We look forward to supporting and
> >working closely with all of you this year!
> >
> >Best,
> >Lauren
> >
> >
> >
> >
> >
> >___
> >Marketing mailing list
> >market...@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/marketing
>
>
> ___
> Foundation mailing list
> foundat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation
>
> ___
> Marketing mailing list
> market...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/marketing
>
>


-- 
Kind regards,

Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
http://osic.org

Learner | Ideation | Belief | Responsibility | Command
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] GlusterFS and Openstack Newton

2017-01-24 Thread Peter Kirby
If you use audit2why on your audit log, is there anything in there
suggesting SELinux is blocking the disk access?
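
A rough sketch of that check (paths and filters illustrative, not from this
thread):

  grep denied /var/log/audit/audit.log | audit2why
  # or, scoped to recent AVC denials only:
  ausearch -m avc -ts recent | audit2why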

On Tue, Jan 24, 2017 at 12:12 PM, James Fleet 
wrote:

> Hello Peter,
>
>
> Yes, the command to allow it in SELinux is setsebool -P virt_use_fusefs on,
> which allows the client to connect with SELinux enabled.
>
> James R. Fleet
> Innovative Solutions Technology
> 484 Williamsport Pike  #135
> Martinsburg, WV 25404
>  888.809.0223 ext.702
>
> On Tue, Jan 24, 2017 at 11:17 AM, Peter Kirby <
> peter.ki...@objectstream.com> wrote:
>
>> Hi James,
>>
>>
>> I'm pretty new to OpenStack, but I'm working on setting up exactly the
>> same thing right now.  I'm having some other issues a little before where
>> you are with my stonith device so I don't really have any insight on your
>> exact problem.  If I get mine to work I'll share what I did.
>>
>> However, my first thought is SELinux.  If you've checked file permissions
>> and they look ok, is SELinux Enforcing?  If so, you might try to
>> temporarily set it to permissive.  If that fixes the problem then check
>> audit logs for what you're missing.  It could be a missing context.
>>
>> Just my two cents.
>>
>>
>> On Tue, Jan 24, 2017 at 9:51 AM, James Fleet 
>> wrote:
>>
>>> Hello,
>>>
>>> We have a new build going up in our DC of Openstack Newton. We wanted to
>>> build in a shared storage solution and really liked the simplicity as well
>>> as functions of glusterFS. This would allow us to perform live migrations
>>> along with Geo replication. The issue we have been having is getting
>>> nova-libvirt instances to run on the compute nodes with the glusterfs mount
>>> point of /var/lib/nova/instances.
>>>
>>> We have added all the required permissions on the volume share :
>>>
>>> Volume Name: gfsimgstore
>>>
>>> Type: Replicate
>>>
>>> Volume ID: 768d161f-78ca-40dd-befc-ddf9de2ccb38
>>>
>>> Status: Started
>>>
>>> Snapshot Count: 0
>>>
>>> Number of Bricks: 1 x 2 = 2
>>>
>>> Transport-type: tcp
>>>
>>> Bricks:
>>>
>>> Brick1: cloud304-node1:/bricks/imgstore1
>>>
>>> Brick2: cloud304-node2:/bricks/imgstore1
>>>
>>> Options Reconfigured:
>>>
>>> cluster.data-self-heal-algorithm: full
>>>
>>> features.shard: on
>>>
>>> cluster.server-quorum-type: server
>>>
>>> cluster.quorum-type: auto
>>>
>>> network.remote-dio: enable
>>>
>>> cluster.eager-lock: enable
>>>
>>> performance.stat-prefetch: off
>>>
>>> performance.io-cache: off
>>>
>>> performance.read-ahead: off
>>>
>>> performance.quick-read: off
>>>
>>> server.allow-insecure: on
>>>
>>> storage.owner-gid: 162
>>>
>>> storage.owner-uid: 162
>>>
>>> transport.address-family: inet
>>>
>>> performance.readdir-ahead: on
>>>
>>> nfs.disable: on
>>>
>>>
>>> We have modified permissions following what documentation we were able
>>> to locate, but we still get errors when we try to create a VM. The errors
>>> are a lot but this is the final error that stands out:
>>>
>>>  2017-01-23 18:29:25.798 12184 ERROR nova.compute.manager [instance:
>>> c6634e67-b293-4424-96ec-f0c58b2bf081] libvirtError: Unable to open
>>> file: 
>>> /var/lib/nova/instances/c6634e67-b293-4424-96ec-f0c58b2bf081/console.log:
>>> Permission denied 2017-01-23 18:29:25.798 12184 ERROR
>>>
>>>
>>> I am hoping I can find someone running glusterfs and can offer some
>>> insight to our issue.
>>>
>>>
>>>
>>> James Fleet
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> James R. Fleet
>>> Innovative Solutions Technology
>>>  888.809.0223 ext.702
>>>
>>> ___
>>> Mailing list: http://lists.openstack.org/cgi
>>> -bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi
>>> -bin/mailman/listinfo/openstack
>>>
>>>
>>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] OpenStack 2016 Annual Report

2017-01-24 Thread Lauren Sell
Hi everyone,

Today we published the OpenStack Foundation 2016 Annual Report: 
https://www.openstack.org/assets/reports/OpenStack-2016-Annual-Report-final-draft.pdf

Thank you to all of the community members who contributed to the report, and to 
Heidi Joy for putting it together. The report includes key stats, as well as 
updates from the Board of Directors, Foundation staff, Technical Committee, 
Board Committees and Working Groups. We had a busy year in 2016, and it’s great 
to see our collective efforts documented.

2016 was a big year for OpenStack. We had our most substantial software 
releases, our best-attended Summits, our broadest set of global OpenStack Days, 
reached our largest footprint of public cloud providers, and learned about 
bigger deployments and projects using OpenStack. We saw organizations as varied 
as AT&T, Volkswagen Group, China Mobile, Banco Santander, and Cambridge 
University give keynote presentations about driving innovation with the 
software our community develops. And we saw new companies invest in our 
community, especially from China and the telecom sector. We started to see the 
power of the “one platform” concept with virtualization, bare metal and 
containers working together to run versatile workloads. From the OpenStack and 
Kubernetes demos in Austin to the OpenStack and OPNFV telecom demo in 
Barcelona, we saw how collaboration and technology integrations deliver 
practical value.

But 2016 was also a year of changes in the industry and in our community, 
whether it be emerging technologies or shifts in the ecosystem. As we look 
forward to 2017, more change is on the horizon and collaboration within our 
community and beyond is more important than ever. The way we react and adapt in 
this period of change will have a huge impact on the role OpenStack plays in 
the future of IT. With a strong, diverse community and clear mission, there’s 
nothing we can’t achieve together. 

Today we kick off 2017 with the first Board meeting, in which we'll seat new 
members from the recent election. We look forward to supporting and working 
closely with all of you this year!

Best,
Lauren





___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Problem while adding a security group to a VM in newly deployed system

2017-01-24 Thread Rafael Weingärtner
Hi OpenStack Community,
 I am having a problem in a newly deployed OpenStack environment. Whenever
I try to add a security group to a VM, I get the following message:

> “Port security must be enabled and port must have an IP address in order
> to use security groups. Neutron server returns request_ids:”.
>

I checked the file /etc/neutron/plugins/ml2/ml2_conf.ini on both the
controller and the nova compute node, and they both seem ok. The security group
configuration is the following:

> [securitygroup]
> firewall_driver =
> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
> enable_security_group = True
>

Is there anything else I am missing?

The OpenStack version is Mitaka.
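
For anyone hitting the same message, the attributes the error refers to can be
checked (and, if needed, enabled) roughly like this; a sketch with a placeholder
port ID, assuming the ML2 port_security extension driver is what is in use:

  # /etc/neutron/plugins/ml2/ml2_conf.ini -- port security only exists if the
  # extension driver is loaded:
  [ml2]
  extension_drivers = port_security

  # then, per port:
  neutron port-show <port-id> -c port_security_enabled -c fixed_ips
  neutron port-update <port-id> --port-security-enabled=true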

Thanks in advance for your help.
-- 
Rafael Weingärtner
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] GlusterFS and Openstack Newton

2017-01-24 Thread James Fleet
Hello Peter,


Yes, the command to allow it in SELinux is setsebool -P virt_use_fusefs on,
which allows the client to connect with SELinux enabled.
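
For reference, checking the boolean before and after setting it looks roughly
like this (a sketch):

  getsebool virt_use_fusefs         # show the current value
  setsebool -P virt_use_fusefs on   # -P makes it persist across reboots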

James R. Fleet
Innovative Solutions Technology
484 Williamsport Pike  #135
Martinsburg, WV 25404
 888.809.0223 ext.702

On Tue, Jan 24, 2017 at 11:17 AM, Peter Kirby 
wrote:

> Hi James,
>
>
> I'm pretty new to OpenStack, but I'm working on setting up exactly the
> same thing right now.  I'm having some other issues a little before where
> you are with my stonith device so I don't really have any insight on your
> exact problem.  If I get mine to work I'll share what I did.
>
> However, my first thought is SELinux.  If you've checked file permissions
> and they look ok, is SELinux Enforcing?  If so, you might try to
> temporarily set it to permissive.  If that fixes the problem then check
> audit logs for what you're missing.  It could be a missing context.
>
> Just my two cents.
>
>
> On Tue, Jan 24, 2017 at 9:51 AM, James Fleet 
> wrote:
>
>> Hello,
>>
>> We have a new build going up in our DC of Openstack Newton. We wanted to
>> build in a shared storage solution and really liked the simplicity as well
>> as functions of glusterFS. This would allow us to perform live migrations
>> along with Geo replication. The issue we have been having is getting
>> nova-libvirt instances to run on the compute nodes with the glusterfs mount
>> point of /var/lib/nova/instances.
>>
>> We have added all the required permissions on the volume share :
>>
>> Volume Name: gfsimgstore
>>
>> Type: Replicate
>>
>> Volume ID: 768d161f-78ca-40dd-befc-ddf9de2ccb38
>>
>> Status: Started
>>
>> Snapshot Count: 0
>>
>> Number of Bricks: 1 x 2 = 2
>>
>> Transport-type: tcp
>>
>> Bricks:
>>
>> Brick1: cloud304-node1:/bricks/imgstore1
>>
>> Brick2: cloud304-node2:/bricks/imgstore1
>>
>> Options Reconfigured:
>>
>> cluster.data-self-heal-algorithm: full
>>
>> features.shard: on
>>
>> cluster.server-quorum-type: server
>>
>> cluster.quorum-type: auto
>>
>> network.remote-dio: enable
>>
>> cluster.eager-lock: enable
>>
>> performance.stat-prefetch: off
>>
>> performance.io-cache: off
>>
>> performance.read-ahead: off
>>
>> performance.quick-read: off
>>
>> server.allow-insecure: on
>>
>> storage.owner-gid: 162
>>
>> storage.owner-uid: 162
>>
>> transport.address-family: inet
>>
>> performance.readdir-ahead: on
>>
>> nfs.disable: on
>>
>>
>> We have modified permissions following what documentation we were able to
>> locate, but we still get errors when we try to create a VM. The errors are
>> a lot but this is the final error that stands out:
>>
>>  2017-01-23 18:29:25.798 12184 ERROR nova.compute.manager [instance:
>> c6634e67-b293-4424-96ec-f0c58b2bf081] libvirtError: Unable to open file:
>> /var/lib/nova/instances/c6634e67-b293-4424-96ec-f0c58b2bf081/console.log:
>> Permission denied 2017-01-23 18:29:25.798 12184 ERROR
>>
>>
>> I am hoping I can find someone running glusterfs and can offer some
>> insight to our issue.
>>
>>
>>
>> James Fleet
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> James R. Fleet
>> Innovative Solutions Technology
>>  888.809.0223 ext.702
>>
>> ___
>> Mailing list: http://lists.openstack.org/cgi
>> -bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi
>> -bin/mailman/listinfo/openstack
>>
>>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Setting up another compute node

2017-01-24 Thread Jose Manuel Ferrer Mosteiro
 

Hi 

Some months ago I found this bug:
https://bugs.launchpad.net/nova/+bug/1467734/comments/2

I worked around the bug by setting vif_plugging_is_fatal=false in the nova.conf
of the compute nodes.

Look for the string WTF here:
https://github.com/paradigmadigital/ansible-openstack-vcenter/blob/master/etc_ansible/roles/kvm-hypervisor/templates/nova.conf_centos7.j2
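
A minimal sketch of that workaround in nova.conf on the compute nodes (note it
only stops nova from failing the boot when the vif-plugged event never arrives;
the underlying neutron problem remains):

  [DEFAULT]
  vif_plugging_is_fatal = false
  vif_plugging_timeout = 300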

Maybe this can help. 

Jose Manuel 

On 2017-01-23 21:32, Peter Kirby wrote: 

> I agree. But I can't figure out why the port isn't getting created. Those 
> lines are the only ones that show up in neutron logs.
> 
> Here's what shows up in the nova logs:
> 
> Jan 23 14:09:21 vhost2 nova-compute[8936]: Traceback (most recent call last):
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 115, in wait
> Jan 23 14:09:21 vhost2 nova-compute[8936]: listener.cb(fileno)
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
> Jan 23 14:09:21 vhost2 nova-compute[8936]: result = function(*args, **kwargs)
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/nova/utils.py", line 1159, in 
> context_wrapper
> Jan 23 14:09:21 vhost2 nova-compute[8936]: return func(*args, **kwargs)
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1587, in 
> _allocate_network_async
> Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(*exc_info)
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1570, in 
> _allocate_network_async
> Jan 23 14:09:21 vhost2 nova-compute[8936]: bind_host_id=bind_host_id)
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 685, 
> in allocate_for_instance
> Jan 23 14:09:21 vhost2 nova-compute[8936]: self._delete_ports(neutron, 
> instance, created_port_ids)
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
> __exit__
> Jan 23 14:09:21 vhost2 nova-compute[8936]: self.force_reraise()
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
> force_reraise
> Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(self.type_, 
> self.value, self.tb)
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 674, 
> in allocate_for_instance
> Jan 23 14:09:21 vhost2 nova-compute[8936]: security_group_ids, 
> available_macs, dhcp_opts)
> Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
> "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 261, 
> in _create_port
> Jan 23 14:09:21 vhost2 nova-compute[8936]: raise 
> exception.PortBindingFailed(port_id=port_id)
> Jan 23 14:09:21 vhost2 nova-compute[8936]: PortBindingFailed: Binding failed 
> for port e1058d22-9a7b-4988-9644-d0f476a01015, please check neutron logs for 
> more information.
> Jan 23 14:09:21 vhost2 nova-compute[8936]: Removing descriptor: 21
> 
> Peter Kirby / Infrastructure and Build Engineer
> Magento Certified Developer Plus [1]
> peter.ki...@objectstream.com
> OBJECTSTREAM, INC. 
> Office: 405-942-4477 [2] / Fax: 866-814-0174 [3] 
> 7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127 
> http://www.objectstream.com/ [4] 
> 
> On Mon, Jan 23, 2017 at 2:21 PM, Trinath Somanchi  
> wrote:
> 
> The port doesn't exists at all. 
> 
> Port e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
> 
> Get Outlook for iOS [5] 
> -
> 
> FROM: Peter Kirby 
> SENT: Tuesday, January 24, 2017 1:43:36 AM 
> 
> TO: Trinath Somanchi
> CC: OpenStack
> SUBJECT: Re: [Openstack] Setting up another compute node 
> 
> I just did another attempt at this so I'd have fresh logs.
> 
> There are all the lines produced in the neutron openvswitch-agent.log file 
> when I attempt that previous command.
> 
> 2017-01-23 14:09:20.918 8097 INFO neutron.agent.securitygroups_rpc 
> [req-a9ab1e05-cf41-44ce-8762-d7f0f72e7ba3 582643be48c04603a09250a1be6e6cf3 
> 1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated 
> [u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
> 2017-01-23 14:09:21.132 8097 INFO neutron.agent.securitygroups_rpc 
> [req-b8cc3ab8-d4f3-4c96-820d-148ae6fd47af 582643be48c04603a09250a1be6e6cf3 
> 1dd7b6481aa34ef7ba105a7336845369 - - -] Security group member updated 
> [u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
> 2017-01-23 14:09:22.057 8097 INFO neutron.agent.common.ovs_lib 
> [req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Port 
> e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
> 2017-01-23 14:09:22.058 8097 INFO 
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
> [req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] port

Re: [Openstack] GlusterFS and Openstack Newton

2017-01-24 Thread Peter Kirby
Hi James,


I'm pretty new to OpenStack, but I'm working on setting up exactly the same
thing right now.  I'm hitting some other issues a little earlier in the process
than where you are, with my stonith device, so I don't really have any insight
into your exact problem.  If I get mine to work I'll share what I did.

However, my first thought is SELinux.  If you've checked file permissions
and they look ok, is SELinux Enforcing?  If so, you might try to
temporarily set it to permissive.  If that fixes the problem then check
audit logs for what you're missing.  It could be a missing context.
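
A rough sketch of that check (commands illustrative):

  getenforce        # Enforcing / Permissive / Disabled
  setenforce 0      # temporarily switch to permissive
  # ...retry the VM creation, then look for denials:
  grep denied /var/log/audit/audit.log
  setenforce 1      # switch back when done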

Just my two cents.


On Tue, Jan 24, 2017 at 9:51 AM, James Fleet 
wrote:

> Hello,
>
> We have a new build going up in our DC of Openstack Newton. We wanted to
> build in a shared storage solution and really liked the simplicity as well
> as functions of glusterFS. This would allow us to perform live migrations
> along with Geo replication. The issue we have been having is getting
> nova-libvirt instances to run on the compute nodes with the glusterfs mount
> point of /var/lib/nova/instances.
>
> We have added all the required permissions on the volume share :
>
> Volume Name: gfsimgstore
>
> Type: Replicate
>
> Volume ID: 768d161f-78ca-40dd-befc-ddf9de2ccb38
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 1 x 2 = 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: cloud304-node1:/bricks/imgstore1
>
> Brick2: cloud304-node2:/bricks/imgstore1
>
> Options Reconfigured:
>
> cluster.data-self-heal-algorithm: full
>
> features.shard: on
>
> cluster.server-quorum-type: server
>
> cluster.quorum-type: auto
>
> network.remote-dio: enable
>
> cluster.eager-lock: enable
>
> performance.stat-prefetch: off
>
> performance.io-cache: off
>
> performance.read-ahead: off
>
> performance.quick-read: off
>
> server.allow-insecure: on
>
> storage.owner-gid: 162
>
> storage.owner-uid: 162
>
> transport.address-family: inet
>
> performance.readdir-ahead: on
>
> nfs.disable: on
>
>
> We have modified permissions following what documentation we were able to
> locate, but we still get errors when we try to create a VM. The errors are
> a lot but this is the final error that stands out:
>
>  2017-01-23 18:29:25.798 12184 ERROR nova.compute.manager [instance:
> c6634e67-b293-4424-96ec-f0c58b2bf081] libvirtError: Unable to open file:
> /var/lib/nova/instances/c6634e67-b293-4424-96ec-f0c58b2bf081/console.log:
> Permission denied 2017-01-23 18:29:25.798 12184 ERROR
>
>
> I am hoping I can find someone running glusterfs and can offer some
> insight to our issue.
>
>
>
> James Fleet
>
>
>
>
>
>
>
>
>
> James R. Fleet
> Innovative Solutions Technology
>  888.809.0223 ext.702
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack
>
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] GlusterFS and Openstack Newton

2017-01-24 Thread James Fleet
Hello,

We have a new build going up in our DC of Openstack Newton. We wanted to
build in a shared storage solution and really liked the simplicity as well
as functions of glusterFS. This would allow us to perform live migrations
along with Geo replication. The issue we have been having is getting
nova-libvirt instances to run on the compute nodes with the glusterfs mount
point of /var/lib/nova/instances.

We have added all the required permissions on the volume share:

Volume Name: gfsimgstore

Type: Replicate

Volume ID: 768d161f-78ca-40dd-befc-ddf9de2ccb38

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: cloud304-node1:/bricks/imgstore1

Brick2: cloud304-node2:/bricks/imgstore1

Options Reconfigured:

cluster.data-self-heal-algorithm: full

features.shard: on

cluster.server-quorum-type: server

cluster.quorum-type: auto

network.remote-dio: enable

cluster.eager-lock: enable

performance.stat-prefetch: off

performance.io-cache: off

performance.read-ahead: off

performance.quick-read: off

server.allow-insecure: on

storage.owner-gid: 162

storage.owner-uid: 162

transport.address-family: inet

performance.readdir-ahead: on

nfs.disable: on


We have modified permissions following what documentation we were able to
locate, but we still get errors when we try to create a VM. There are a lot of
errors, but this is the final one that stands out:

 2017-01-23 18:29:25.798 12184 ERROR nova.compute.manager [instance:
c6634e67-b293-4424-96ec-f0c58b2bf081] libvirtError: Unable to open file:
/var/lib/nova/instances/c6634e67-b293-4424-96ec-f0c58b2bf081/console.log:
Permission denied 2017-01-23 18:29:25.798 12184 ERROR
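
One generic check worth running on each compute node (a sketch, not from the
original post; uid/gid 162 is what the storage.owner-* options above point at,
i.e. the nova user on RDO packaging):

  mount | grep /var/lib/nova/instances     # confirm the fuse.glusterfs mount is there
  ls -ldnZ /var/lib/nova/instances         # numeric owner/group and SELinux context
  sudo -u nova touch /var/lib/nova/instances/permtest \
      && sudo -u nova rm /var/lib/nova/instances/permtest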


I am hoping I can find someone running GlusterFS who can offer some insight
into our issue.



James Fleet









James R. Fleet
Innovative Solutions Technology
 888.809.0223 ext.702
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Kuryr] [Neutron] nested containers - using Linux bridge for iptables rather than Openvswitch firewall

2017-01-24 Thread Liping Mao (limao)
Hi Agmon,

Thanks for being among the first to try this feature.
We discussed this with Jakub Libosvar from the Neutron team and confirmed that
a VM-nested trunk can't work with the iptables_hybrid firewall driver in
neutron. More detail is in the IRC log [1].

Part of the log:
limao   jlibosva: trunk port do not work with iptables_hybrid , Do we have any 
bug about this or it is by design?   15:27
jlibosvalimao: that's by design 15:27
jlibosvalimao: "Obviously this solution is not compliant with iptables 
firewall." from 
https://github.com/openstack/neutron/blob/master/doc/source/devref/openvswitch_agent.rst#tackling-the-network-trunking-use-case
  15:29
jlibosvalimao: at "To summarize:" section, B solution   15:29

[1]http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-11-22.log.html

Loop jlibosva and add [Neutron] Tag in mail title.
Thanks.

Regards,
Liping Mao

On 17/1/24 18:00, "Agmon, Gideon (Nokia - IL)" wrote:

Hi,

Environment:
 - Centos 7.3 , kernel 3.10 (!)
 - devstack mid Jan 2017 master
 - kuryr-libnetworks
 - NOT using the openvswitch firewall as shown e.g. in 
https://github.com/openstack/kuryr-libnetwork#how-to-try-out-nested-containers-locally
 
   because Linux kernel 3.10 doesn't support it, so Linux bridge is used 
instead! 

Question: Must I use Openvswitch firewall instead of linux bridge for 
proper operation of trunk bridge ?


The phenomenon:
===
When ARPing from container A to container B, both nested within a VM, the ping 
fails:
 - ARP request (broadcast) succeeds to pass via the Linux bridge to the OVS 
and back to the VM via the Linux bridge.
 - ARP reply (unicast) succeeds to pass via the Linux bridge to the OVS (it 
learned the MAC from the request coming back from the OVS).
 - this ARP reply is not forwarded by the Linux bridge to the VM ! Note 
that it learned this MAC from the OVS side (although with a different Vlan). 

I suspect:

The Linux bridge works in SVL mode (Shared-Vlan-Learning).   

Thanks in advance
Gideon

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Setting up another compute node

2017-01-24 Thread Trinath Somanchi
Cool! It was a misconfiguration at the controller; it happens when you configure the 
environment.

Congratulations!

/ Trinath

From: Peter Kirby [mailto:peter.ki...@objectstream.com]
Sent: Tuesday, January 24, 2017 8:17 PM
To: Eugen Block 
Cc: OpenStack 
Subject: Re: [Openstack] Setting up another compute node

Thank you all for the suggestions.  I just copied all the neutron config files 
from the working compute node and now the new one is working.  Unfortunately, I 
didn't make a backup copy of the files that were there so I can't compare and 
see what I had done wrong, but everything is working now.
Thanks again.

On Tue, Jan 24, 2017 at 2:02 AM, Eugen Block 
mailto:ebl...@nde.ag>> wrote:
Have you tried to create a new port manually? I assume this will also fail, 
then you have to check your neutron configuration. If it works, try to attach 
that port to a new instance and see what nova says.


Zitat von Trinath Somanchi 
mailto:trinath.soman...@nxp.com>>:
PortBindingFailed: Binding failed for port e1058d22-9a7b-4988-9644-d0f476a01015,


Please reverify neutron configuration


Get Outlook for iOS



From: Peter Kirby 
mailto:peter.ki...@objectstream.com>>
Sent: Tuesday, January 24, 2017 2:02:57 AM
To: Trinath Somanchi
Cc: OpenStack
Subject: Re: [Openstack] Setting up another compute node

I agree.  But I can't figure out why the port isn't getting created.  Those 
lines are the only ones that show up in neutron logs.

Here's what shows up in the nova logs:

Jan 23 14:09:21 vhost2 nova-compute[8936]: Traceback (most recent call last):
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 115, in wait
Jan 23 14:09:21 vhost2 nova-compute[8936]: listener.cb(fileno)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
Jan 23 14:09:21 vhost2 nova-compute[8936]: result = function(*args, **kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/utils.py", line 1159, in context_wrapper
Jan 23 14:09:21 vhost2 nova-compute[8936]: return func(*args, **kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1587, in 
_allocate_network_async
Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(*exc_info)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1570, in 
_allocate_network_async
Jan 23 14:09:21 vhost2 nova-compute[8936]: bind_host_id=bind_host_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 685, in 
allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]: self._delete_ports(neutron, 
instance, created_port_ids)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
Jan 23 14:09:21 vhost2 nova-compute[8936]: self.force_reraise()
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(self.type_, self.value, 
self.tb)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 674, in 
allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]: security_group_ids, available_macs, 
dhcp_opts)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 261, in 
_create_port
Jan 23 14:09:21 vhost2 nova-compute[8936]: raise 
exception.PortBindingFailed(port_id=port_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]: PortBindingFailed: Binding failed 
for port e1058d22-9a7b-4988-9644-d0f476a01015, please check neutron logs for 
more information.
Jan 23 14:09:21 vhost2 nova-compute[8936]: Removing descriptor: 21



Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer 
Plus
peter.ki...@objectstream.com>

Objectstream, Inc.
Office: 405-942-4477> / 
Fax: 866-814-0174>
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 2:21 PM, Trinath Somanchi 
mailto:trinath.soman...@nxp.com>>>
 wrote:
The port doesn't exists at all.

Port e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int

Get Outlook for iOS


From: Peter Kirby 
mailto:peter.ki...@objectstream.com>>>
Sent: Tuesday, January 24, 2017 1:43:36 AM

To: Trin

Re: [Openstack] Setting up another compute node

2017-01-24 Thread Peter Kirby
Thank you all for the suggestions.  I just copied all the neutron config
files from the working compute node and now the new one is working.
Unfortunately, I didn't make a backup copy of the files that were there so
I can't compare and see what I had done wrong, but everything is working
now.

Thanks again.

On Tue, Jan 24, 2017 at 2:02 AM, Eugen Block  wrote:

> Have you tried to create a new port manually? I assume this will also
> fail, then you have to check your neutron configuration. If it works, try
> to attach that port to a new instance and see what nova says.
>
>
> Zitat von Trinath Somanchi :
>
> PortBindingFailed: Binding failed for port e1058d22-9a7b-4988-9644-d0f476
>> a01015,
>>
>>
>> Please reverify neutron configuration
>>
>>
>> Get Outlook for iOS
>>
>>
>> 
>> From: Peter Kirby 
>> Sent: Tuesday, January 24, 2017 2:02:57 AM
>> To: Trinath Somanchi
>> Cc: OpenStack
>> Subject: Re: [Openstack] Setting up another compute node
>>
>> I agree.  But I can't figure out why the port isn't getting created.
>> Those lines are the only ones that show up in neutron logs.
>>
>> Here's what shows up in the nova logs:
>>
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: Traceback (most recent call
>> last):
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 115, in
>> wait
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: listener.cb(fileno)
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in
>> main
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: result = function(*args,
>> **kwargs)
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/nova/utils.py", line 1159, in
>> context_wrapper
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: return func(*args, **kwargs)
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1587,
>> in _allocate_network_async
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(*exc_info)
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1570,
>> in _allocate_network_async
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: bind_host_id=bind_host_id)
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line
>> 685, in allocate_for_instance
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: self._delete_ports(neutron,
>> instance, created_port_ids)
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
>> __exit__
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: self.force_reraise()
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
>> force_reraise
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(self.type_,
>> self.value, self.tb)
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line
>> 674, in allocate_for_instance
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: security_group_ids,
>> available_macs, dhcp_opts)
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: File
>> "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line
>> 261, in _create_port
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: raise
>> exception.PortBindingFailed(port_id=port_id)
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: PortBindingFailed: Binding
>> failed for port e1058d22-9a7b-4988-9644-d0f476a01015, please check
>> neutron logs for more information.
>> Jan 23 14:09:21 vhost2 nova-compute[8936]: Removing descriptor: 21
>>
>>
>>
>> Peter Kirby / Infrastructure and Build Engineer
>> Magento Certified Developer Plus> ce.com/certification/directory/dev/2215598/>
>> peter.ki...@objectstream.com
>>
>> Objectstream, Inc.
>> Office: 405-942-4477 / Fax: 866-814-0174> 866-814-0174>
>> 7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
>> http://www.objectstream.com/
>>
>> On Mon, Jan 23, 2017 at 2:21 PM, Trinath Somanchi <
>> trinath.soman...@nxp.com> wrote:
>> The port doesn't exists at all.
>>
>> Port e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int
>>
>> Get Outlook for iOS
>>
>> 
>> From: Peter Kirby > peter.ki...@objectstream.com>>
>> Sent: Tuesday, January 24, 2017 1:43:36 AM
>>
>> To: Trinath Somanchi
>> Cc: OpenStack
>> Subject: Re: [Openstack] Setting up another compute node
>>
>> I just did another attempt at this so I'd have fresh logs.
>>
>> There are all the lines produced in the neutron openvswitch-agent.log
>> file when I attempt that previous command.
>>
>> 2017-01-23 14:09:20.918 8097 INFO neutron.

[Openstack] [Kuryr] nested containers - using Linux bridge for iptables rather than Openvswitch firewall

2017-01-24 Thread Agmon, Gideon (Nokia - IL)
Hi,

Environment:
 - CentOS 7.3, kernel 3.10 (!)
 - devstack mid-Jan 2017 master
 - kuryr-libnetwork
 - NOT using the openvswitch firewall as shown e.g. in 
https://github.com/openstack/kuryr-libnetwork#how-to-try-out-nested-containers-locally
   because Linux kernel 3.10 doesn't support it, so the Linux bridge is used 
instead!

Question: Must I use the Openvswitch firewall instead of the Linux bridge for 
proper operation of the trunk bridge?
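
For reference, the native OVS firewall (which removes the intermediate Linux
bridge) is selected in the agent config roughly like this; a sketch only, and it
needs a kernel with OVS conntrack support, which the stock 3.10 kernel mentioned
above lacks:

  # /etc/neutron/plugins/ml2/openvswitch_agent.ini
  [securitygroup]
  firewall_driver = openvswitch
  enable_security_group = true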


The phenomenon:
===
When ARPing from container A to container B, both nested within a VM, the ping 
fails:
 - The ARP request (broadcast) succeeds in passing via the Linux bridge to the OVS 
and back to the VM via the Linux bridge.
 - The ARP reply (unicast) succeeds in passing via the Linux bridge to the OVS (it 
learned the MAC from the request coming back from the OVS).
 - This ARP reply is not forwarded by the Linux bridge to the VM! Note that the 
bridge learned this MAC from the OVS side (although with a different VLAN).

I suspect the Linux bridge works in SVL mode (Shared VLAN Learning).

Thanks in advance
Gideon

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Unable Upload Image

2017-01-24 Thread Bjorn Mork
Hi,

Thanks for your support and for pointing out the culprit. I had multiple glance
services registered on the controller; I removed the duplicates manually and
re-verified the Glance configuration, which was already okay. With that, the
issue is resolved.


[root@Controller ~]# openstack service list
WARNING: openstackclient.common.utils is deprecated and will be removed
after Jun 2017. Please use osc_lib.utils
+--+--+--+
| ID   | Name | Type |
+--+--+--+
| 16a01e55aded47a6baa977a8afba8e19 | glance   | image|
| 328e27c225834469b6340f0c09bf118a | glance   | image|
| 583d44a1a4834223bbec2435d1cd7a12 | neutron  | network  |
| 5a6717672d034aaca24a7575f5ac1d4d | manilav2 | sharev2  |
| 9094446d95c0435d8f379aefaa5c0964 | cinder   | volume   |
| a03a1faebfd74daba077ce30a3058e41 | manila   | share|
| a505184b9ac348c3acd81a055cb73f4a | keystone | identity |
| ec45b879f9e0449bb72ad8dcd42e075c | glance   | image|
| f778fdd0ee484a96a682f8bb40166875 | cinderv2 | volumev2 |
| fd30560ff857410dbbd4299b8dc3b164 | nova | compute  |
+--+--+--+
[root@Controller ~]# openstack service delete ec45b879f9e0449bb72ad8dcd42e075c
WARNING: openstackclient.common.utils is deprecated and will be removed
after Jun 2017. Please use osc_lib.utils
[root@Controller ~]#
[root@Controller ~]#
[root@Controller ~]# openstack service delete 328e27c225834469b6340f0c09bf118a
WARNING: openstackclient.common.utils is deprecated and will be removed
after Jun 2017. Please use osc_lib.utils
[root@Controller ~]#
[root@Controller ~]#
[root@Controller ~]#
[root@Controller ~]# openstack service list
WARNING: openstackclient.common.utils is deprecated and will be removed
after Jun 2017. Please use osc_lib.utils
+--+--+--+
| ID   | Name | Type |
+--+--+--+
| 16a01e55aded47a6baa977a8afba8e19 | glance   | image|
| 583d44a1a4834223bbec2435d1cd7a12 | neutron  | network  |
| 5a6717672d034aaca24a7575f5ac1d4d | manilav2 | sharev2  |
| 9094446d95c0435d8f379aefaa5c0964 | cinder   | volume   |
| a03a1faebfd74daba077ce30a3058e41 | manila   | share|
| a505184b9ac348c3acd81a055cb73f4a | keystone | identity |
| f778fdd0ee484a96a682f8bb40166875 | cinderv2 | volumev2 |
| fd30560ff857410dbbd4299b8dc3b164 | nova | compute  |
+--+--+--+
[root@Controller ~]#
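
If duplicate registrations show up again, the corresponding endpoints are worth
a look as well; a sketch (not from this thread):

  openstack endpoint list | grep -i image   # expect one set of endpoints per service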

Thanks for your support.

Regards,
B~Mork




On Sun, Jan 22, 2017 at 2:57 PM, Trinath Somanchi 
wrote:

> Very Good.
>
>
>
> Here is the culprit, ServiceCatalogException: Invalid service catalog
> service: image
>
>
>
> Please debug it.
>
>
>
> / Trinath
>
>
>
> *From:* Bjorn Mork [mailto:bjron.m...@gmail.com]
> *Sent:* Sunday, January 22, 2017 4:24 PM
>
> *To:* Trinath Somanchi 
> *Cc:* openstack@lists.openstack.org
> *Subject:* Re: [Openstack] Unable Upload Image
>
>
>
> Thanks for sharing link. Logs given below;
>
>
>
> * Error_log *[Sun Jan 22 10:44:40.804142 2017] [:error] [pid 4817]
> DEBUG:keystoneauth.session:REQ: curl -g -i -X GET
> http://controller:5000/v3/users/bac999c177474e08976c54a362ba6a70/projects
> -H "User-Agent: python-keystoneclient" -H "Accept: application/json" -H
> "X-Auth-Token: {SHA1}cbccf767336b5e5e11dfd1eb113243f01893cc16"
> [Sun Jan 22 10:44:40.882088 2017] [:error] [pid 4817]
> DEBUG:keystoneauth.session:RESP: [200] Date: Sun, 22 Jan 2017 10:44:40
> GMT Server: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 Vary:
> X-Auth-Token x-openstack-request-id: req-4cb2d8ce-8894-40b2-afcf-1ae68bfec228
> Content-Length: 460 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive
> Content-Type: application/json
> [Sun Jan 22 10:44:40.882121 2017] [:error] [pid 4817] RESP BODY: {"links":
> {"self": "http://controller:5000/v3/users/bac999c177474e08976c54a362ba6a
> 70/projects", "previous": null, "next": null}, "projects": [{"is_domain":
> false, "description": "Admin Project", "links": {"self": "
> http://controller:5000/v3/projects/1c53d92609ca40a290a8a552b466b30a"},
> "enabled": true, "id": "1c53d92609ca40a290a8a552b466b30a", "parent_id": "
> 183a01ba69194a9fac15d02b4c9aa118", "domain_id": "
> 183a01ba69194a9fac15d02b4c9aa118", "name": "admin"}]}
> [Sun Jan 22 10:44:40.882141 2017] [:error] [pid 4817]
> [Sun Jan 22 10:44:42.707364 2017] [:error] [pid 4817] HTTP exception with
> no status/code
> [Sun Jan 22 10:44:42.707418 2017] [:error] [pid 4817] Traceback (most
> recent call last):
> [Sun Jan 22 10:44:42.707426 2017] [:error] [pid 4817]   File
> "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/rest/utils.py",
> line 126, in _wrapped
> [Sun Jan 22 10:44:42.707431 2017] [:error] [pid 4817] data =
> function(self, request, *args, **kw)
> [Sun Jan 22 10:44:42.707435 2017] [:error] [pid 4817]   File
>

Re: [Openstack] Unable Upload Image

2017-01-24 Thread Trinath Somanchi
Congratulations!

Nice learning 😀

Get Outlook for iOS


From: Bjorn Mork 
Sent: Tuesday, January 24, 2017 3:24:26 PM
To: Trinath Somanchi
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Unable Upload Image

Hi,

Thanks for your support and pointing the culprit. Actually, I had multiple 
services configured on controller, which I removed manually and re-verified 
Glance configuration side. Which was already okay...at last issue is resolved.


[root@Controller ~]# openstack service list
WARNING: openstackclient.common.utils is deprecated and will be removed after 
Jun 2017. Please use osc_lib.utils
+--+--+--+
| ID   | Name | Type |
+--+--+--+
| 16a01e55aded47a6baa977a8afba8e19 | glance   | image|
| 328e27c225834469b6340f0c09bf118a | glance   | image|
| 583d44a1a4834223bbec2435d1cd7a12 | neutron  | network  |
| 5a6717672d034aaca24a7575f5ac1d4d | manilav2 | sharev2  |
| 9094446d95c0435d8f379aefaa5c0964 | cinder   | volume   |
| a03a1faebfd74daba077ce30a3058e41 | manila   | share|
| a505184b9ac348c3acd81a055cb73f4a | keystone | identity |
| ec45b879f9e0449bb72ad8dcd42e075c | glance   | image|
| f778fdd0ee484a96a682f8bb40166875 | cinderv2 | volumev2 |
| fd30560ff857410dbbd4299b8dc3b164 | nova | compute  |
+--+--+--+
[root@Controller ~]# openstack service delete ec45b879f9e0449bb72ad8dcd42e075c
WARNING: openstackclient.common.utils is deprecated and will be removed after 
Jun 2017. Please use osc_lib.utils
[root@Controller ~]#
[root@Controller ~]#
[root@Controller ~]# openstack service delete 328e27c225834469b6340f0c09bf118a
WARNING: openstackclient.common.utils is deprecated and will be removed after 
Jun 2017. Please use osc_lib.utils
[root@Controller ~]#
[root@Controller ~]#
[root@Controller ~]#
[root@Controller ~]# openstack service list
WARNING: openstackclient.common.utils is deprecated and will be removed after 
Jun 2017. Please use osc_lib.utils
+--+--+--+
| ID   | Name | Type |
+--+--+--+
| 16a01e55aded47a6baa977a8afba8e19 | glance   | image|
| 583d44a1a4834223bbec2435d1cd7a12 | neutron  | network  |
| 5a6717672d034aaca24a7575f5ac1d4d | manilav2 | sharev2  |
| 9094446d95c0435d8f379aefaa5c0964 | cinder   | volume   |
| a03a1faebfd74daba077ce30a3058e41 | manila   | share|
| a505184b9ac348c3acd81a055cb73f4a | keystone | identity |
| f778fdd0ee484a96a682f8bb40166875 | cinderv2 | volumev2 |
| fd30560ff857410dbbd4299b8dc3b164 | nova | compute  |
+--+--+--+
[root@Controller ~]#

Thanks for your support.

Regards,
B~Mork




On Sun, Jan 22, 2017 at 2:57 PM, Trinath Somanchi 
mailto:trinath.soman...@nxp.com>> wrote:
Very Good.

Here is the culprit, ServiceCatalogException: Invalid service catalog service: 
image

Please debug it.

/ Trinath

From: Bjorn Mork [mailto:bjron.m...@gmail.com]
Sent: Sunday, January 22, 2017 4:24 PM

To: Trinath Somanchi mailto:trinath.soman...@nxp.com>>
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Unable Upload Image

Thanks for sharing link. Logs given below;

Error_log

[Sun Jan 22 10:44:40.804142 2017] [:error] [pid 4817] 
DEBUG:keystoneauth.session:REQ: curl -g -i -X GET 
http://controller:5000/v3/users/bac999c177474e08976c54a362ba6a70/projects -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}cbccf767336b5e5e11dfd1eb113243f01893cc16"
[Sun Jan 22 10:44:40.882088 2017] [:error] [pid 4817] 
DEBUG:keystoneauth.session:RESP: [200] Date: Sun, 22 Jan 2017 10:44:40 GMT 
Server: Apache/2.4.6 (CentOS) mod_wsgi/3.4 Python/2.7.5 Vary: X-Auth-Token 
x-openstack-request-id: req-4cb2d8ce-8894-40b2-afcf-1ae68bfec228 
Content-Length: 460 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive 
Content-Type: application/json
[Sun Jan 22 10:44:40.882121 2017] [:error] [pid 4817] RESP BODY: {"links": 
{"self": 
"http://controller:5000/v3/users/bac999c177474e08976c54a362ba6a70/projects";, 
"previous": null, "next": null}, "projects": [{"is_domain": false, 
"description": "Admin Project", "links": {"self": 
"http://controller:5000/v3/projects/1c53d92609ca40a290a8a552b466b30a"}, 
"enabled": true, "id": "1c53d92609ca40a290a8a552b466b30a", "parent_id": 
"183a01ba69194a9fac15d02b4c9aa118", "domain_id": 
"183a01ba69194a9fac15d02b4c9aa118", "name": "admin"}]}
[Sun Jan 22 10:44:40.882141 2017] [:error] [pid 4817]
[Sun Jan 22 10:44:42.707364 2017] [:error] [pid 4817] HTTP exception with no 
status/code
[Sun Jan 22 10:44:42.707418 2017] [:error] [pid 4817] Traceback (most recent 
call last):
[Sun Jan 22 10:44:42.707426 2017] [:error] [pid 

Re: [Openstack] Setting up another compute node

2017-01-24 Thread Eugen Block
Have you tried to create a new port manually? I assume this will also fail; in  
that case you have to check your neutron configuration. If it works, try to  
attach that port to a new instance and see what nova says.
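
A sketch of what that manual test could look like (names and IDs are
placeholders, not from this thread):

  neutron port-create <network-name-or-id> --name port-bind-test
  # if the port is created cleanly (status DOWN), try booting an instance with it:
  nova boot --flavor m1.tiny --image cirros \
      --nic port-id=<port-id-from-above> port-bind-test-vm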



Zitat von Trinath Somanchi :

PortBindingFailed: Binding failed for port  
e1058d22-9a7b-4988-9644-d0f476a01015,



Please reverify neutron configuration


Get Outlook for iOS


From: Peter Kirby 
Sent: Tuesday, January 24, 2017 2:02:57 AM
To: Trinath Somanchi
Cc: OpenStack
Subject: Re: [Openstack] Setting up another compute node

I agree.  But I can't figure out why the port isn't getting created.  
 Those lines are the only ones that show up in neutron logs.


Here's what shows up in the nova logs:

Jan 23 14:09:21 vhost2 nova-compute[8936]: Traceback (most recent call last):
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 115,  
in wait

Jan 23 14:09:21 vhost2 nova-compute[8936]: listener.cb(fileno)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line  
214, in main

Jan 23 14:09:21 vhost2 nova-compute[8936]: result = function(*args, **kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/nova/utils.py", line 1159, in  
context_wrapper

Jan 23 14:09:21 vhost2 nova-compute[8936]: return func(*args, **kwargs)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line  
1587, in _allocate_network_async

Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(*exc_info)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line  
1570, in _allocate_network_async

Jan 23 14:09:21 vhost2 nova-compute[8936]: bind_host_id=bind_host_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py",  
line 685, in allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]:  
self._delete_ports(neutron, instance, created_port_ids)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220,  
in __exit__

Jan 23 14:09:21 vhost2 nova-compute[8936]: self.force_reraise()
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196,  
in force_reraise
Jan 23 14:09:21 vhost2 nova-compute[8936]: six.reraise(self.type_,  
self.value, self.tb)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py",  
line 674, in allocate_for_instance
Jan 23 14:09:21 vhost2 nova-compute[8936]: security_group_ids,  
available_macs, dhcp_opts)
Jan 23 14:09:21 vhost2 nova-compute[8936]: File  
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py",  
line 261, in _create_port
Jan 23 14:09:21 vhost2 nova-compute[8936]: raise  
exception.PortBindingFailed(port_id=port_id)
Jan 23 14:09:21 vhost2 nova-compute[8936]: PortBindingFailed:  
Binding failed for port e1058d22-9a7b-4988-9644-d0f476a01015, please  
check neutron logs for more information.

Jan 23 14:09:21 vhost2 nova-compute[8936]: Removing descriptor: 21



Peter Kirby / Infrastructure and Build Engineer
Magento Certified Developer  
Plus

peter.ki...@objectstream.com

Objectstream, Inc.
Office: 405-942-4477 / Fax: 866-814-0174
7725 W Reno Avenue, Suite 307 Oklahoma City, OK 73127
http://www.objectstream.com/

On Mon, Jan 23, 2017 at 2:21 PM, Trinath Somanchi  
mailto:trinath.soman...@nxp.com>> wrote:

The port doesn't exists at all.

Port e1058d22-9a7b-4988-9644-d0f476a01015 not present in bridge br-int

Get Outlook for iOS


From: Peter Kirby  
mailto:peter.ki...@objectstream.com>>

Sent: Tuesday, January 24, 2017 1:43:36 AM

To: Trinath Somanchi
Cc: OpenStack
Subject: Re: [Openstack] Setting up another compute node

I just did another attempt at this so I'd have fresh logs.

There are all the lines produced in the neutron  
openvswitch-agent.log file when I attempt that previous command.


2017-01-23 14:09:20.918 8097 INFO neutron.agent.securitygroups_rpc  
[req-a9ab1e05-cf41-44ce-8762-d7f0f72e7ba3  
582643be48c04603a09250a1be6e6cf3 1dd7b6481aa34ef7ba105a7336845369 -  
- -] Security group member updated  
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:21.132 8097 INFO neutron.agent.securitygroups_rpc  
[req-b8cc3ab8-d4f3-4c96-820d-148ae6fd47af  
582643be48c04603a09250a1be6e6cf3 1dd7b6481aa34ef7ba105a7336845369 -  
- -] Security group member updated  
[u'a52a5f37-e0dd-4810-a719-2555f348bc1c']
2017-01-23 14:09:22.057 8097 INFO neutron.agent.common.ovs_lib  
[req-d4d61032-5071-4792-a2a1-3d645d44ccfa - - - - -] Port  
e1058d22-9a7b-4988-9644-d0f