Re: [Openstack-operators] [openstack-operators] [large deployments] [neutron] [rfc] Floating IP idea solicitation and collaboration

2014-12-10 Thread Alberto Rodriguez-Natal
Hi all,

We came across this thread recently and thought you might appreciate a
heads-up regarding the status of LISPmob. I'm one of the LISPmob
developers.

As has been said, LISPmob is an open-source implementation of the LISP
protocol that runs on Linux, Android and OpenWRT. It's been active for the
last four years and nowadays offers well-tested, debugged support for
LISP Tunnel Routers and LISP Mobile Nodes. Other LISP infrastructure
components are implemented as well, but still with experimental status.
Current development involves NETCONF support for remote configuration and
integration with Intel's DPDK for a performance boost.

Let us know if we can be of any help. We'll be glad to collaborate with
other open-source initiatives.

Best,
Alberto
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] automatically evacuate an instance when a compute node dies

2014-12-10 Thread Tzach Shefi
Hi, 

First of all, for this to work (if it's possible at all), I'm guessing you need to configure 
live migration; otherwise the instance's disk is saved on hostA's local disk under 
/var/lib/nova/instances. 
If hostA is down you can't reach the instance's disk - end of story :( 
On the other hand, if you configure live migration with remote shared storage, 
the instance's disk is saved on shared NFS (or another backend, maybe 
Ceph), so even if hostA is down the disk is still accessible from 
hostB. 

https://www.mirantis.com/blog/tutorial-openstack-live-migration-with-kvm-hypervisor-and-nfs-shared-storage/
http://blog.zhaw.ch/icclab/setting-up-live-migration-in-openstack-icehouse/
And many more - just google "configure nova live migration".
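
Roughly, the shared-storage part boils down to mounting the same NFS export at
/var/lib/nova/instances on every compute node. A minimal sketch (server name and
export path are placeholders; the nova.conf / libvirt live-migration settings are
covered in the tutorials above and vary a bit per release):

  # on every compute node (hostA, hostB, ...)
  yum install -y nfs-utils    # or: apt-get install nfs-common
  echo "nfs-server:/export/nova_instances /var/lib/nova/instances nfs defaults 0 0" >> /etc/fstab
  mount /var/lib/nova/instances
  chown nova:nova /var/lib/nova/instances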

Maybe you could set up Pacemaker to check access to hostA; if it can't be reached, 
then issue "nova evacuate instanceID hostB" followed by "nova reboot --hard instanceID".
http://serverfault.com/questions/413566/openstack-is-it-possible-to-migrate-an-instance-from-a-dead-compute-server-to-a
No idea how to go about doing this from Pacemaker's side.
Interesting to check - I'll try to play around with it. 
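
As a rough, untested sketch of what that watchdog could look like (host names are
placeholders, Pacemaker would do the detection properly, and --on-shared-storage
assumes the NFS setup above - check "nova help evacuate" on your release):

  #!/bin/bash
  FAILED_HOST=hostA
  TARGET_HOST=hostB
  if ! ping -c 3 -W 2 "$FAILED_HOST" > /dev/null; then
      # rebuild every instance from the dead host on the target host
      for ID in $(nova list --all-tenants --host "$FAILED_HOST" | awk '/ACTIVE|SHUTOFF|ERROR/ {print $2}'); do
          nova evacuate --on-shared-storage "$ID" "$TARGET_HOST"
          nova reboot --hard "$ID"
      done
  fi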


One question comes to mind though - we are talking about OpenStack, right? :)
By design, if an instance (or its host) dies, you basically just boot up 
other instances on other hosts in its place. 
Remember, instances should be regarded as temporary, dispensable resources. 

Recall that OpenStack can, but isn't meant to, replace virtualization solutions like 
oVirt/RHEV/VMware.
These solutions have built-in automatic virtual machine failover - just what 
you're looking for. 
You'll still need shared storage, and the proper licenses, for such enterprise 
options. 

Understanding when you need an instance vs. a virtual machine is hard to 
grasp but fundamental to selecting the proper tool for the job at hand.
Not to mention it may save you lots of hassle and frustration later on. 

Tshefi

- Original Message -
From: Pedro Sousa pgso...@gmail.com
To: openstack-operators@lists.openstack.org
Sent: Tuesday, December 9, 2014 4:46:59 PM
Subject: [Openstack-operators] automatically evacuate an instance when a
compute node dies

Hi all, 

Is there a working solution in Nova to automatically restart an instance on a 
healthy node when its compute node dies? 

I've heard about Pacemaker; is there any good howto to help with this? 

Regards 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Packaging sample config versions

2014-12-10 Thread Anne Gentle
On Tue, Dec 9, 2014 at 11:25 AM, Michael Dorman mdor...@godaddy.com wrote:

 Well I think we can all agree this is an irritation.  But how are others
 actually dealing with this problem?  (Maybe it’s less complicated in
 Ubuntu.)

 The sense I get is that most people using Anvil, or other custom-ish
 packaging tools, are also running config management which handles
 generating the config files, anyway.  So you don’t so much care about the
 contents of the config file shipped with the package.

 Is that accurate for most people?  Or are folks doing some other magic to
 get a good config file in the packages?


The docs team -- really, Matt Kassawara -- regularly logs bugs for
packagers to put in better, working default config files.

We do generate documentation for all the configuration options across projects
that use oslo.config (and even for Swift, which doesn't), so you can rely
on this reference:
http://docs.openstack.org/juno/config-reference/content/

You can also see new, updated, and deprecated options for each service,
such as:
http://docs.openstack.org/juno/config-reference/content/nova-conf-changes-juno.html

I don't believe our reference document was what encouraged devs to take
sample config generation out of tree, but I want to let you know about your best
option besides troubleshooting the generation yourself.
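
As an aside, for anyone stuck on the tox wrapper itself: newer oslo.config releases
also ship a standalone oslo-config-generator command that some projects call
directly. A rough sketch - the namespaces below are just examples, and the project's
own namespaces (listed in its setup.cfg or tox genconfig environment) must be added:

  # run inside an environment where the project and its requirements are installed
  pip install "oslo.config>=1.4.0"
  oslo-config-generator --namespace oslo.messaging \
                        --namespace oslo.db \
                        --output-file myproject.conf.sample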

Anne



 Mike





 On 12/9/14, 5:02 PM, Kris G. Lindgren klindg...@godaddy.com wrote:

 So, more to my point, on the latest version of RHEL, after doing 'yum install
 tox' and then running 'tox -egenconfig':
 
 ceilometer-2014.2.1]# tox -egenconfig
 ERROR: tox version is 1.4.2, required is at least 1.6
 
 
 nova-2014.2.1]# tox -egenconfig
 ERROR: tox version is 1.4.2, required is at least 1.6
 
 
 glance-2014.2.1]# tox -egenconfig
 ERROR: tox version is 1.4.2, required is at least 1.6
 
 
 [root@localhost ~]# pip install --upgrade tox
 (Updated tox to 1.8.1, upgraded virtualenv to 1.10.1 and upgraded py to
 1.4.14)
 
 glance-2014.2.1]# tox -egenconfig
 genconfig create: /root/rpmbuild/BUILD/glance-2014.2.1/.tox/genconfig
 genconfig installdeps:
 -r/root/rpmbuild/BUILD/glance-2014.2.1/requirements.txt,
 -r/root/rpmbuild/BUILD/glance-2014.2.1/test-requirements.txt
 ERROR: invocation failed (exit code 1), logfile:
 /root/rpmbuild/BUILD/glance-2014.2.1/.tox/genconfig/log/genconfig-1.log
 ERROR: actionid=genconfig
 SNIP
 
 Running setup.py install for MySQL-python
 SNIP
/usr/include/mysql/my_config_x86_64.h:654:2: error: #error
 my_config.h MUST be included first!
  #error my_config.h MUST be included first!
   ^
 error: command 'gcc' failed with exit status 1
 snip
 __ summary
 __
 ERROR:   genconfig: could not install deps
 [-r/root/rpmbuild/BUILD/glance-2014.2.1/requirements.txt,
 -r/root/rpmbuild/BUILD/glance-2014.2.1/test-requirements.txt]; v =
 InvocationError('/root/rpmbuild/BUILD/glance-2014.2.1/.tox/genconfig/bin/p
 i
 p install --allow-all-external --allow-insecure netaddr -U
 -r/root/rpmbuild/BUILD/glance-2014.2.1/requirements.txt
 -r/root/rpmbuild/BUILD/glance-2014.2.1/test-requirements.txt (see
 /root/rpmbuild/BUILD/glance-2014.2.1/.tox/genconfig/log/genconfig-1.log)',
 1)
 
 
 
 So, a few things to point out: 1) In order to even run tox -egenconfig I had to
 update the system package versions using pip. Since we have other python
 packages using virtualenv, I have no idea if the updated virtualenv
 package is going to break those systems or not. So the included
 script/command is already a barrier to getting a sample config. 2) tox
 fails to even build all the deps - it happens to be failing at exactly
 MySQL-python in nova/cinder/glance/keystone. 3) It's installing its own
 versions of python libraries to satisfy the dependencies, which are then
 used to generate the configuration. If the configuration is
 so dynamic that getting a different version of oslo.config could generate
 a sample configuration that won't work on my system, then how am I supposed
 to deal with:
 Tox installed version:
 oslo.config-1.5.0
 
 System installed version:
 python-oslo-config-1.3.0
 
 
 Also, python-libvirt failed to build because I don't have libvirt installed
 on this system. So am I to assume that there are no libvirt options
 (which we both know is false)?
 Now I can get an example config - one that won't work on my system - per what
 everyone else has been saying. Also, at what point would the average user
 just say forget it? Because at this point, I feel like if I were an average
 user, I would be there right now.
 
 
 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.
 
 
 On 12/9/14, 8:14 AM, Mathieu Gagné mga...@iweb.com wrote:
 
 On 2014-12-08 11:01 PM, Kris G. Lindgren wrote:
 
 I don't think it's too much to ask for each project to include a script
 that will build a venv that includes tox and the other relevant 

Re: [Openstack-operators] Any good info on GRE tunneling on Icehouse?

2014-12-10 Thread Marcos Garcia

Hello Alex

I've always found the RDO documentation very easy to follow (but it can 
sometimes be outdated, like instructions using 'quantum' instead of 'neutron'):

https://openstack.redhat.com/Using_GRE_Tenant_Networks
https://openstack.redhat.com/Configuring_Neutron_with_OVS_and_GRE_Tunnels_using_quickstack
https://openstack.redhat.com/NeutronLibvirtMultinodeDevEnvironment
and many others

Most of the docs will refer to the controller node and the network node 
as the same machine, but the Packstack configuration will let you split them 
if you really need to.


All RDO-related docs will describe how to use Packstack on CentOS, so 
you should be OK if you use both. Or do you have to use Ubuntu or another 
distro?


Regards

PS: for a detailed view of what the network node will do and Neutron in 
general: https://openstack.redhat.com/Networking_in_too_much_detail
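
For what it's worth, on Icehouse the GRE-specific part mostly comes down to a
handful of ML2/OVS settings on the network and compute nodes, roughly like the
following (the local_ip value is per node and a placeholder here; option names
shifted a little between releases, so double-check against the Icehouse guide):

  # /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  type_drivers = flat,gre
  tenant_network_types = gre
  mechanism_drivers = openvswitch

  [ml2_type_gre]
  tunnel_id_ranges = 1:1000

  [ovs]
  local_ip = DATA_NET_IP_OF_THIS_NODE
  tunnel_type = gre
  enable_tunneling = True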


On 2014-12-10 2:48 PM, Alex Leonhardt wrote:


Hi All,

I'm failing to find a good tutorial on how to set up a 3+ node cluster 
using GRE tunneling.


Does anyone have an idea / link / blog ?

We're looking at 1x controller, 1x network node, 3x compute nodes for a 
PoC running GRE. Our current setup is a FlatNetwork.


Thanks!
Alex



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


--

Marcos Garcia
Technical Sales Engineer; RHCE, RHCVA, ITIL

PHONE: (514) 907-0068 - EMAIL: marcos.gar...@enovance.com - SKYPE: enovance-marcos.garcia
ADDRESS: 127 St-Pierre, Montréal (QC) H2Y 2L6, Canada - WEB: www.enovance.com




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Any good info on GRE tunneling on Icehouse?

2014-12-10 Thread Alex Leonhardt
Thanks, I found these too, but we're not using Packstack - we already have an
Icehouse install, but it uses flat networking.

The docs are all a bit too high-level (I need details :D) or outdated... :/

If nothing else, we'll try to make this work this week or next, and it may result in
a blog post on how to do this from scratch without Packstack.

Unless anyone has other links / blogs ?

Thanks!
Alex

On Wed, 10 Dec 2014 20:18 Marcos Garcia marcos.gar...@enovance.com wrote:

  Hello Alex

 I've always found the RDO documentation very easy to follow (but it can sometimes
 be outdated, like instructions using 'quantum' instead of 'neutron'):
 https://openstack.redhat.com/Using_GRE_Tenant_Networks

 https://openstack.redhat.com/Configuring_Neutron_with_OVS_and_GRE_Tunnels_using_quickstack
 https://openstack.redhat.com/NeutronLibvirtMultinodeDevEnvironment
 and many others

 Most of the docs will refer to the controller node and the network node as
 the same machine, but the Packstack configuration will let you split them if
 you really need to.

 All RDO-related docs will describe how to use Packstack on CentOS, so you
 should be OK if you use both. Or do you have to use Ubuntu or another
 distro?

 Regards

 PS: for a detailed view of what the network node will do and Neutron in
 general: https://openstack.redhat.com/Networking_in_too_much_detail


 On 2014-12-10 2:48 PM, Alex Leonhardt wrote:

 Hi All,

 I'm failing to find a good tutorial on how to set up a 3+ node cluster using
 GRE tunneling.

 Does anyone have an idea / link / blog ?

 We're looking at 1x controller, 1x network node, 3x compute nodes for a PoC
 running GRE. Our current setup is a FlatNetwork.

 Thanks!
 Alex


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


 --


 Marcos Garcia
 Technical Sales Engineer; RHCE, RHCVA, ITIL

 PHONE: (514) 907-0068 - EMAIL: marcos.gar...@enovance.com - SKYPE: enovance-marcos.garcia
 ADDRESS: 127 St-Pierre, Montréal (QC) H2Y 2L6, Canada - WEB: www.enovance.com



  ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Announcing the openstack ansible deployment repo

2014-12-10 Thread Kevin Carter
Hello all,


The RCBOPS team at Rackspace has developed a repository of Ansible roles, 
playbooks, scripts, and libraries to deploy Openstack inside containers for 
production use. We’ve been running this deployment for a while now,
and at the last OpenStack summit we discussed moving the repo into Stackforge 
as a community project. Today, I’m happy to announce that the 
os-ansible-deployment repo is online within Stackforge. This project is a 
work in progress and we welcome anyone who’s interested in contributing.

This project includes:
  * Ansible playbooks for deployment and orchestration of infrastructure 
resources.
  * Isolation of services using LXC containers.
  * Software deployed from source using python wheels.

Where to find us:
  * IRC: #openstack-ansible
  * Launchpad: https://launchpad.net/openstack-ansible
  * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The 
meeting schedule is not fully formalized and may be subject to change.)
  * Code: https://github.com/stackforge/os-ansible-deployment

Thanks and we hope to see you in the channel.

—

Kevin



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Announcing the openstack ansible deployment repo

2014-12-10 Thread Alex Leonhardt
This is great! FWIW, I'd also suggest looking at SaltStack, which also supports
and is working on features for OpenStack.

Cheers!
Alex

On Wed, 10 Dec 2014 22:18 Kevin Carter kevin.car...@rackspace.com wrote:

 Hello all,


 The RCBOPS team at Rackspace has developed a repository of Ansible roles,
 playbooks, scripts, and libraries to deploy Openstack inside containers for
 production use. We’ve been running this deployment for a while now,
 and at the last OpenStack summit we discussed moving the repo into
 Stackforge as a community project. Today, I’m happy to announce that the
 os-ansible-deployment repo is online within Stackforge. This project is a
 work in progress and we welcome anyone who’s interested in contributing.

 This project includes:
   * Ansible playbooks for deployment and orchestration of infrastructure
 resources.
   * Isolation of services using LXC containers.
   * Software deployed from source using python wheels.

 Where to find us:
   * IRC: #openstack-ansible
   * Launchpad: https://launchpad.net/openstack-ansible
   * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC.
 (The meeting schedule is not fully formalized and may be subject to change.)
   * Code: https://github.com/stackforge/os-ansible-deployment

 Thanks and we hope to see you in the channel.

 —

 Kevin

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Telco][NFV] Meeting Minutes and Logs from Dec. 10

2014-12-10 Thread Steve Gordon
Hi all,

Minutes and logs from today's OpenStack Telco Working Group meeting are 
available at the locations below:

* Meeting ended Wed Dec 10 22:59:12 2014 UTC.  Information about MeetBot at 
http://wiki.debian.org/MeetBot . (v 0.1.4)
* Minutes:
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-10-22.00.html
* Minutes (text): 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-10-22.00.txt
* Log:
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-10-22.00.log.html

Thanks,

Steve

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Announcing the openstack ansible deployment repo

2014-12-10 Thread Kevin Carter
Hey John,

We too ran into the same issue with iSCSI, and after a lot of digging and 
chasing red herrings we found that the cinder-volume service wasn't the cause 
of the issues; it was "iscsiadm login" that caused the problem, and it was 
happening from within the nova-compute container. If we weren't running Cinder, 
there were no issues with nova-compute running VMs from within a container; 
however, once we attempted to attach a volume to a running VM, iscsiadm would 
simply refuse to initiate. We followed up on an existing upstream bug regarding 
the issue, but it's gotten little traction at present: 
https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855. In testing we've 
found that if we give the compute container the raw device, instead of using a 
bridge on a veth-type interface, we don't see the same issues; however, doing 
that was less than ideal, so we opted to simply leave compute nodes as physical 
hosts. From within the playbooks we can set any service to run on bare metal as 
the "container" type, so that's what we've done with nova-compute, but hopefully 
sometime soon-ish we'll be able to move nova-compute back into a container, 
assuming the upstream bugs are fixed.
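
For anyone who wants to try the raw-device variant mentioned above, with LXC 1.x
that is roughly a container config along these lines (the interface name is a
placeholder; the device disappears from the host while the container runs):

  # /var/lib/lxc/nova_compute/config - sketch only
  lxc.network.type = phys
  lxc.network.link = eth2
  lxc.network.name = eth2
  lxc.network.flags = up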

I’d love to chat some more on this or anything else, hit me up anytime; I’m 
@cloudnull in the channel.

—

Kevin


 On Dec 10, 2014, at 19:01, John Griffith john.griffi...@gmail.com wrote:
 
 On Wed, Dec 10, 2014 at 3:16 PM, Kevin Carter
 kevin.car...@rackspace.com wrote:
 Hello all,
 
 
 The RCBOPS team at Rackspace has developed a repository of Ansible roles, 
 playbooks, scripts, and libraries to deploy Openstack inside containers for 
 production use. We’ve been running this deployment for a while now,
 and at the last OpenStack summit we discussed moving the repo into 
 Stackforge as a community project. Today, I’m happy to announce that the 
 os-ansible-deployment repo is online within Stackforge. This project is a 
 work in progress and we welcome anyone who’s interested in contributing.
 
 This project includes:
  * Ansible playbooks for deployment and orchestration of infrastructure 
 resources.
  * Isolation of services using LXC containers.
  * Software deployed from source using python wheels.
 
 Where to find us:
  * IRC: #openstack-ansible
  * Launchpad: https://launchpad.net/openstack-ansible
  * Meetings: #openstack-ansible IRC channel every Tuesday at 14:30 UTC. (The 
 meeting schedule is not fully formalized and may be subject to change.)
  * Code: https://github.com/stackforge/os-ansible-deployment
 
 Thanks and we hope to see you in the channel.
 
 —
 
 Kevin
 
 
 ___
 OpenStack-dev mailing list
 openstack-...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 Hey Kevin,
 
 Really cool!  I have some questions, though. I've been trying to do
 this exact sort of thing on my own with Cinder but can't get the iscsi
 daemon running in a container. In fact, I've run into a few weird
 networking problems that I haven't sorted out, but the storage piece seems
 to be a big stumbling point for me, even when I cut out some of the extra
 stuff I was trying to do with devstack.
 
 Anyway, are you saying that this enables running the reference LVM
 impl c-vol service in a container as well?  I'd love to hear/see more
 and play around with this.
 
 Thanks,
 John
 
 ___
 OpenStack-dev mailing list
 openstack-...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators