[Openstack-operators] [HA] follow-up from HA discussion at Boston Forum

2017-05-15 Thread Adam Spiers

Hi all,

Sam P  wrote:

This is a quick reminder for the HA Forum session at the Boston Summit.
Thank you all for your comments and effort to make this happen at the Boston Summit.

Time: Thu 11, 11:00am-11:40am
Location: Hynes Convention Center - Level One - MR 103
Etherpad: https://etherpad.openstack.org/p/BOS-forum-HA-in-openstack

Please join and let's discuss the HA issues in OpenStack...

--- Regards,
Sampath


Thanks to everyone who came to the High Availability Forum session in
Boston last week!  To me, the great turn-out proved that there is
enough general interest in HA within OpenStack to justify allocating
space for discussion on those topics not only at each summit, but in
between the summits too.

To that end, I'd like to a) remind everyone of the weekly HA IRC
meetings:

   https://wiki.openstack.org/wiki/Meetings/HATeamMeeting

and also b) highlight an issue that we most likely need to solve:
currently these weekly IRC meetings are held at 0900 UTC on Wednesday:

   http://eavesdrop.openstack.org/#High_Availability_Meeting

which is pretty much useless for anyone in the Americas.  This time
was previously chosen because the most regular attendees were based in
Europe or Asia, but I'm now looking for suggestions on how to make
this fairer for all continents.  Some options:

- Split the 60 minutes in half, and hold two 30-minute meetings
 each week at different times, so that every timezone has convenient
 access to at least one of them.

- Alternate the timezone every other week.  This might make it hard to
 build any kind of momentum.

- Hold two meetings each week.  I'm not sure we'd have enough traffic
 to justify this, but we could try.

Any opinions, or better ideas?  Thanks!

Adam

P.S. Big thanks to Sampath for organising the Boston Forum session
and managing to attract such a healthy audience :-)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Newton lbaas v2 remote_addr

2017-05-15 Thread Kris G. Lindgren
HAProxy should be adding an X-Forwarded-For header.  You should be able to 
adjust your Apache logs and/or enable mod_remoteip to see this (I believe it is 
also made available to other modules within Apache, and to code that is being 
run by Apache, e.g. PHP).

https://httpd.apache.org/docs/current/mod/mod_remoteip.html
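
For example, a minimal sketch of the Apache side (the directives below are
from the mod_remoteip docs linked above; 10.0.0.0/24 is a made-up placeholder
for wherever your load balancer lives):

   LoadModule remoteip_module modules/mod_remoteip.so

   # Take the client address from the X-Forwarded-For header added by haproxy,
   RemoteIPHeader X-Forwarded-For
   # but only trust that header on requests arriving from the LB itself.
   RemoteIPInternalProxy 10.0.0.0/24

   # Then log %a (the client IP as rewritten by mod_remoteip) instead of %h.
   LogFormat "%a %l %u %t \"%r\" %>s %b" common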


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: Ignazio Cassano 
Date: Monday, May 15, 2017 at 11:05 AM
To: OpenStack Operators 
Subject: [Openstack-operators] Newton lbaas v2 remote_addr

Hi All, I installed Newton with LBaaS v2 (haproxy).
Creating an HTTP load balancer, the remote_addr shown by each balanced Apache 
instance is always the load balancer IP. Is there any option to show the 
client address?
Regards
Ignazio


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] preferred option to fix long-standing user-visible bug in nova?

2017-05-15 Thread Chris Friesen

Hi,

In Mitaka nova introduced the "cpu_thread_policy" which can be specified in 
flavor extra-specs.  In the original spec, and in the original implementation, 
not specifying the thread policy in the flavor was supposed to be equivalent to 
specifying a policy of "prefer", and in both cases if the image set a policy 
then nova would use the image policy.


In Newton, the code was changed to fix a bug, but there was an unforeseen side 
effect.  Now the behaviour is different depending on whether the flavor 
specifies no policy at all or specifies a policy of "prefer".  Specifically, if 
the flavor doesn't specify a policy at all and the image does, then we'll use 
the image policy.  However, if the flavor specifies a policy of "prefer" and 
the image specifies a different policy, then we'll use the flavor policy.
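
For concreteness, a hypothetical sketch of the two cases (the flavor and image
names are made up; the keys are the standard hw:cpu_policy/hw:cpu_thread_policy
flavor extra specs and the hw_cpu_thread_policy image property):

   # Case 1: flavor sets no thread policy, image does -> the image policy
   # ("isolate" here) is applied.
   openstack flavor set --property hw:cpu_policy=dedicated m1.pinned
   openstack image set --property hw_cpu_thread_policy=isolate my-image

   # Case 2: flavor explicitly says "prefer", image says otherwise -> the
   # flavor policy wins and the image policy is ignored.
   openstack flavor set --property hw:cpu_policy=dedicated \
       --property hw:cpu_thread_policy=prefer m1.pinned
   openstack image set --property hw_cpu_thread_policy=isolate my-image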


This is clearly a bug (tracked as part of bug #1687077), but it's now been out 
in the wild for two releases (Newton and Ocata).


What do operators think we should do?  I see two options, neither of which is 
really ideal:


1) Decide that the "new" behaviour has been out in the wild long enough to 
become the de facto standard, and update the docs to reflect this.  This breaks 
the "None and 'prefer' are equivalent" model that was originally intended.


2) Fix the bug to revert to the original behaviour and backport the fix to 
Ocata.  Backporting to Newton might not happen since it's in phase II 
maintenance.  This could potentially break anyone who has come to rely on the 
"new" behaviour.


Either change is trivial from a dev standpoint, so it's really an operator 
issue--what makes the most sense for operators/users?


Chris

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Remote pacemaker on compute nodes

2017-05-15 Thread Ignazio Cassano
Hello,
adding the remote compute node with:

   pcs resource create computenode1 remote server=10.102.184.91

instead of:

   pcs resource create computenode1 ocf:pacemaker:remote reconnect_interval=60 \
       op monitor interval=20

SOLVES the issue when an unexpected compute node reboot happens.
The node returns online and works fine.
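
(A quick way to confirm the remote node has rejoined after a reboot, assuming
the resource is named computenode1 as above, is:

   pcs status

which should list computenode1 among the online remote nodes, e.g. on a
"RemoteOnline:" line, once it has reconnected.)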
Thanks to all for the help.
Regards

2017-05-13 16:55 GMT+02:00 Sam P :

> Hi,
>
>  This might not be exactly what you are looking for... but you may extend
> this.
>  In Masakari [0], we use pacemaker-remote in masakari-monitors [1] to
> monitor node failures.
>  In [1], there is hostmonitor.sh, which is going to be deprecated in the
> next cycle, but it is a straightforward way to do this.
>  [0] https://wiki.openstack.org/wiki/Masakari
>  [1] https://github.com/openstack/masakari-monitors/tree/master/masakarimonitors/hostmonitor
>
>  Then there are the pacemaker resource agents:
>  https://github.com/openstack/openstack-resource-agents/tree/master/ocf
>
> > I have already tried "pcs resource cleanup" but it cleans all resources
> > fine, but not the remote nodes.
> > In any case, on Monday I'll send what you requested.
> Hope we can get more details on Monday.
>
> --- Regards,
> Sampath
>
>
>
> On Sat, May 13, 2017 at 9:52 PM, Ignazio Cassano
>  wrote:
> > Thanks Curtis.
> > I have already tried "pcs resource cleanup" but it cleans all resources
> > fine, but not the remote nodes.
> > In any case, on Monday I'll send what you requested.
> > Regards
> > Ignazio
> >
> > On 13 May 2017 at 14:27, Curtis wrote:
> >
> > On Fri, May 12, 2017 at 10:23 PM, Ignazio Cassano
> >  wrote:
> >> Hi Curtis, at this time I am using remote pacemaker only for controlling
> >> openstack services on compute nodes (neutron openvswitch-agent,
> >> nova-compute, ceilometer compute). I wrote my own ansible playbooks to
> >> install and configure all components.
> >> A second step could be to expand it for VM high availability.
> >> I did not find any procedure for cleaning up a compute node after
> >> rebooting, and I googled a lot without luck.
> >
> > Can you paste some output of something like "pcs status" and I can try
> > to take a look?
> >
> > I've only used pacemaker a little, but I'm fairly sure it's going to
> > be something like "pcs resource cleanup "
> >
> > Thanks,
> > Curtis.
> >
> >> Regards
> >> Ignazio
> >>
> >> On 13 May 2017 at 00:32, Curtis wrote:
> >>
> >> On Fri, May 12, 2017 at 8:51 AM, Ignazio Cassano
> >>  wrote:
> >>> Hello All,
> >>> I installed openstack newton
> >>> with a pacemaker cluster made up of 3 controllers and 2 compute nodes.
> >>> All computers have CentOS 7.3.
> >>> Compute nodes are provided with the remote pacemaker OCF resource.
> >>> If, before shutting down a compute node, I disable the compute node
> >>> resource in the cluster and enable it when the compute node comes back
> >>> up, it works fine and the cluster shows it online.
> >>> If the compute node goes down before the compute node resource is
> >>> disabled in the cluster, it remains offline even after it is powered
> >>> up.
> >>> The only solution I found is to remove the compute node resource from
> >>> the cluster and add it again with a different name (adding this new name
> >>> to the /etc/hosts file on all controllers).
> >>> With the above workaround it returns online for the cluster, and all its
> >>> resources (openstack-nova-compute etc.) return to working fine.
> >>> Please, does anyone know a better solution?
> >>
> >> What are you using pacemaker for on the compute nodes? I have not done
> >> that personally, but my impression is that sometimes people do that in
> >> order to have virtual machines restarted somewhere else should the
> >> compute node go down outside of a maintenance window (ie. "instance
> >> high availability"). Is that your use case? If so, I would imagine
> >> there is some kind of clean up procedure to put the compute node back
> >> into use when pacemaker thinks it has failed. Did you use some kind of
> >> openstack distribution or follow a particular installation document to
> >> enable this pacemaker setup?
> >>
> >> It sounds like everything is working as expected (if my guess is
> >> right) and you just need the right steps to bring the node back into
> >> the cluster.
> >>
> >> Thanks,
> >> Curtis.
> >>
> >>
> >>> Regards
> >>> Ignazio
> >>>
> >>>
> >>> ___
> >>> OpenStack-operators mailing list
> >>> OpenStack-operators@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >>>
> >>
> >>
> >>
> >> --
> >> Blog: serverascode.com
> >>
> >>
> >
> >
> >
> > --
> > Blog: serverascode.com
> >
> >
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators