Re: [Openstack-operators] [nfv-telecom] Suspending meetings while we consider moving to being a sub group in LCOO

2017-08-16 Thread Jay Pipes

On 08/16/2017 09:25 PM, Curtis wrote:

On Wed, Aug 16, 2017 at 12:03 AM, Jay Pipes  wrote:

Hi Curtis, Andrew U, Jamie M,

May I request that, if the telco working group merges with the LCOO, we
get regular updates to the openstack[-operator|-dev] mailing list with
information about the goings-on of LCOO? It would be good to get a bi-weekly or
even monthly summary.

Other working groups host their meetings on IRC and publish status reports
to the mailing lists fairly regularly. I personally find this information
quite useful, and it would be great to see a similar effort from LCOO.


If we do merge I will see what I can do. :)


Cool, thanks Curtis. The publish-to-ML summaries of working groups have 
been very helpful for me.


Best,
-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Pacemaker / Corosync in guests on OpenStack

2017-08-16 Thread Hauke Bruno Wollentin
+1 to John's answer.


We also run Pacemaker/Corosync clusters inside OpenStack instances (in 
project/self-service networks). Our clusters consist of 3 instances each and 
currently run in production. We haven't seen any problems with migrations, 
whether manual or triggered by Pacemaker.


I also recommend using unicast for the cluster communication, plus the default 
ocf:heartbeat:IPaddr2 resource agent, to keep things simple.
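A minimal corosync.conf sketch for the unicast transport (cluster name and 
node addresses are placeholders):

totem {
    version: 2
    cluster_name: mycluster
    # udpu = UDP unicast, avoids relying on multicast inside Neutron networks
    transport: udpu
}

nodelist {
    node {
        ring0_addr: 192.168.1.11
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.1.12
        nodeid: 2
    }
    node {
        ring0_addr: 192.168.1.13
        nodeid: 3
    }
}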


For the VIP we use a _dummy_ port (created via 'neutron port create') and allow 
its IP address on all cluster members' ports via 'neutron port update' (allowed 
address pairs). That port is never attached to any instance; the members just 
use its IP address on their default ports.
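Roughly, the CLI side looks like this (network/port names and the VIP address 
are examples):

# create the dummy port that reserves the VIP
neutron port-create --name cluster-vip private-net

# allow the VIP (say 10.0.0.100) on each member's real port
neutron port-update <member-port-id> \
    --allowed-address-pairs type=dict list=true ip_address=10.0.0.100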


The idea of fencing via the API sounds pretty neat, so I will have a look at 
that ;)


best regards,

hauke



From: John Petrini 
Sent: Wednesday, August 16, 2017 12:55 PM
To: Tim Bell
Cc: openstack-operators
Subject: Re: [Openstack-operators] Pacemaker / Corosync in guests on OpenStack

I just did this recently and had no issues. I used a provider network, so I don't 
have experience using it with project networks, but I believe the only issue you 
might run into with project networks is multicast. You can work around this by 
using unicast instead.

If you do use multicast, you need to enable IGMP in your security groups. 
You can do this in Horizon by selecting "Other Protocol" and setting the IP 
protocol number to 2.

I hit a minor issue setting up a VIP because port security wouldn't allow 
traffic destined for that address to reach the instance, but all I had to do 
was add the VIP as an allowed address pair on the port of each instance. Also, 
I attached an additional interface to one of the instances to allocate the VIP; 
I just didn't configure the interface within the instance. Since we use DHCP 
this was a simple way to reserve the IP. I'm sure I could have created a 
Pacemaker resource that would move the port using the OpenStack API, but I 
prefer the simplicity and speed of Pacemaker's ocf:heartbeat:IPaddr2 resource.

I set up fencing of the instances via the OpenStack API to avoid any chance of a 
duplicate IP when moving the VIP. I borrowed this script 
https://github.com/beekhof/fence_openstack/blob/master/fence_openstack and made 
a few minor changes.

Overall there weren't many differences between setting up Pacemaker in 
OpenStack vs on iron (bare metal), but I hope this is helpful.


Regards,


John Petrini




On Wed, Aug 16, 2017 at 6:06 AM, Tim Bell wrote:

Has anyone had experience setting up a cluster of VM guests running Pacemaker / 
Corosync? Any recommendations?

Tim


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nfv-telecom] Suspending meetings while we consider moving to being a sub group in LCOO

2017-08-16 Thread Curtis
On Wed, Aug 16, 2017 at 12:03 AM, Jay Pipes  wrote:
> Hi Curtis, Andrew U, Jamie M,
>
> May I request that if the telco working group merges with the LCOO, that we
> get regular updates to the openstack[-operator|-dev] mailing list with
> information about the goings-on of LCOO? Would be good to get a bi-weekly or
> even monthly summary.
>
> Other working groups host their meetings on IRC and publish status reports
> to the mailing lists fairly regularly. I personally find this information
> quite useful, and it would be great to see a similar effort from LCOO.

If we do merge I will see what I can do. :)

Thanks,
Curtis.

>
> All the best,
> -jay
>
>
> On 08/15/2017 11:49 AM, Curtis wrote:
>>
>> Hi All,
>>
>> We've been having NFV-Telecom meetings for OpenStack Operators for a
>> while now, but never quite reached a tipping point in terms of
>> attendance to "get things done" so to speak.
>>
>> The point of the group had been to find OpenStack Operators who are
>> tasked with designing/operating NFV deployments and help them, but
>> were never really able to surface enough of us to move forward. I want
>> to be super clear that this group was for OpenStack Operators, and
>> isn't a comment on any other NFV related groups in and around
>> OpenStack...simply just people like me who deploy NFV clouds based on
>> OpenStack.
>>
>> We are currently considering moving to be a subgroup in LCOO, but
>> that's just one idea. In the future, if we can surface more OpenStack
>> Operators doing NFV work, maybe we'll revisit.
>>
>> If anyone has strong feelings around this, please do reply and let me
>> know of any ideas/comments/criticism, otherwise we'll probably move to
>> discussing with LCOO.
>>
>> Thanks to all those who attended meetings in the past, it's much
>> appreciated,
>> Curtis.
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Blog: serverascode.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific] HPCSYSPROS17 opportunity at SC17

2017-08-16 Thread Randles, Timothy C
For anyone planning to attend SC17 in Denver, the HPC Sys Pros group[1] is 
still looking for papers and lightning talk proposals[2].  The paper submission 
deadline has been extended.  Note that one of their topics of interest is 
"Virtualization/Clouds."


Tim


[1]http://hpcsyspros.lsu.edu/

[2]http://hpcsyspros.lsu.edu/HPCSystems-CFP.pdf

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [User-committee] [publiccloud-wg] Reminder meeting PublicCloudWorkingGroup

2017-08-16 Thread Tobias Rydberg
For those of you who couldn't attend today's meeting: read the logs below, 
comment on the notes in the agenda, and continue the discussion in the IRC 
channel #openstack-publiccloud.

http://eavesdrop.openstack.org/meetings/publiccloud_wg/2017/publiccloud_wg.2017-08-16-14.00.log.html

https://etherpad.openstack.org/p/publiccloud-wg

Regards,
Tobias

> On 16 Aug 2017, at 11:08, Tobias Rydberg wrote:
> 
> Hi everyone, 
> 
> Don't forget today's meeting of the PublicCloudWorkingGroup. 
> 1400 UTC in IRC channel #openstack-meeting-3 
> 
> Etherpad: https://etherpad.openstack.org/p/publiccloud-wg 
> 
> Regards, 
> Tobias Rydberg
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nfv-telecom] Suspending meetings while we consider moving to being a sub group in LCOO

2017-08-16 Thread Chris Morgan
A model I have been using is: informal chat AND official meetings on IRC,
but for the latter I also email the link to the minutes and log to the
mailing list so that agreements are easy to find (e.g. actions and #agreed
items). If nothing else it helps me prepare for the next meeting each time!

Chris

On Wed, Aug 16, 2017 at 1:55 PM, Edgar Magana 
wrote:

> Jay,
>
> Your request has indeed been a topic that we have discussed in the past.
> During the last UC IRC meeting, the UC asked all Working Groups and
> Teams under its umbrella to report their activities via IRC.
> Please take a look at last Monday's meeting minutes:
> http://eavesdrop.openstack.org/meetings/uc/2017/uc.2017-08-14-18.02.log.html
>
> So, do not worry: all WG/Teams will use IRC to report their activities.
> They may still use any other communication mechanism or technology to make
> progress on their goals if they wish. Finally, we have also enabled
> the #openstack-uc channel to record all conversations.
>
> Thanks,
>
> Edgar
>
> On 8/15/17, 11:03 PM, "Jay Pipes"  wrote:
>
> Hi Curtis, Andrew U, Jamie M,
>
> May I request that, if the telco working group merges with the LCOO,
> we get regular updates to the openstack[-operator|-dev] mailing list
> with information about the goings-on of LCOO? It would be good to get a
> bi-weekly or even monthly summary.
>
> Other working groups host their meetings on IRC and publish status
> reports to the mailing lists fairly regularly. I personally find this
> information quite useful, and it would be great to see a similar effort
> from LCOO.
>
> All the best,
> -jay
>
> On 08/15/2017 11:49 AM, Curtis wrote:
> > Hi All,
> >
> > We've been having NFV-Telecom meetings for OpenStack Operators for a
> > while now, but never quite reached a tipping point in terms of
> > attendance to "get things done" so to speak.
> >
> > The point of the group had been to find OpenStack Operators who are
> > tasked with designing/operating NFV deployments and help them, but
> > were never really able to surface enough of us to move forward. I
> want
> > to be super clear that this group was for OpenStack Operators, and
> > isn't a comment on any other NFV related groups in and around
> > OpenStack...simply just people like me who deploy NFV clouds based on
> > OpenStack.
> >
> > We are currently considering moving to be a subgroup in LCOO, but
> > that's just one idea. In the future, if we can surface more OpenStack
> > Operators doing NFV work, maybe we'll revisit.
> >
> > If anyone has strong feelings around this, please do reply and let me
> > know of any ideas/comments/criticism, otherwise we'll probably move
> to
> > discussing with LCOO.
> >
> > Thanks to all those who attended meetings in the past, it's much
> appreciated,
> > Curtis.
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> >
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nfv-telecom] Suspending meetings while we consider moving to being a sub group in LCOO

2017-08-16 Thread Edgar Magana
Jay,

Your request has indeed been a topic that we have discussed in the past. During 
the last UC IRC meeting, the UC asked all Working Groups and Teams under 
its umbrella to report their activities via IRC. Please take a look at 
last Monday's meeting minutes:
http://eavesdrop.openstack.org/meetings/uc/2017/uc.2017-08-14-18.02.log.html

So, do not worry: all WG/Teams will use IRC to report their activities. They 
may still use any other communication mechanism or technology to make progress 
on their goals if they wish. Finally, we have also enabled the 
#openstack-uc channel to record all conversations.

Thanks,

Edgar

On 8/15/17, 11:03 PM, "Jay Pipes"  wrote:

Hi Curtis, Andrew U, Jamie M,

May I request that, if the telco working group merges with the LCOO, we 
get regular updates to the openstack[-operator|-dev] mailing list 
with information about the goings-on of LCOO? It would be good to get a 
bi-weekly or even monthly summary.

Other working groups host their meetings on IRC and publish status 
reports to the mailing lists fairly regularly. I personally find this 
information quite useful, and it would be great to see a similar effort from 
LCOO.

All the best,
-jay

On 08/15/2017 11:49 AM, Curtis wrote:
> Hi All,
> 
> We've been having NFV-Telecom meetings for OpenStack Operators for a
> while now, but never quite reached a tipping point in terms of
> attendance to "get things done" so to speak.
> 
> The point of the group had been to find OpenStack Operators who are
> tasked with designing/operating NFV deployments and help them, but
> were never really able to surface enough of us to move forward. I want
> to be super clear that this group was for OpenStack Operators, and
> isn't a comment on any other NFV related groups in and around
> OpenStack...simply just people like me who deploy NFV clouds based on
> OpenStack.
> 
> We are currently considering moving to be a subgroup in LCOO, but
> that's just one idea. In the future, if we can surface more OpenStack
> Operators doing NFV work, maybe we'll revisit.
> 
> If anyone has strong feelings around this, please do reply and let me
> know of any ideas/comments/criticism, otherwise we'll probably move to
> discussing with LCOO.
> 
> Thanks to all those who attended meetings in the past, it's much 
appreciated,
> Curtis.
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Ocata glance configuration

2017-08-16 Thread Abel Lopez
I'm having a real head-scratcher with Ocata glance right now.

The v1 API is deprecated and marked for removal in Pike, so creating images 
from a remote URL is not as easy as it used to be.

When using Horizon to create an image, setting the location to a remote URL 
fails to create the image, because we're not allowed to set locations anymore. 
The error in the glance-api logs is:

403 Forbidden
It's not allowed to add locations
if locations are invisible.


OK, so I learned that `show_multiple_locations` now defaults to False and is 
marked as a deprecated option.
Setting it to `True` makes Horizon create-image work, makes Ceph instance 
snapshots work, etc.
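For reference, the workaround is a one-liner in glance-api.conf (restart 
glance-api afterwards; placement as in the sample config):

[DEFAULT]
# deprecated in Ocata, but without it locations are invisible
show_multiple_locations = True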

But the glance team says this is wrong. They suggest configuring this via RBAC.
The default policy file has

"delete_image_location": "",
"get_image_location": "",
"set_image_location": "",

Which should mean "no restrictions"; however, without 
`show_multiple_locations=True`, creating an image from a URL or creating an 
instance snapshot with a Ceph backend errors out:

 glance.api.v2.images There is not available location for image

So, according to https://review.openstack.org/#/c/313936/ we should be able to 
do this without this option, but how?

Anyone else using Ocata glance with a Ceph backend?


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-operators][Ceilometer vs Monasca] Alarms: Ceilometer vs Monasca

2017-08-16 Thread Pedro Sousa
Hi,

I use Aodh + Gnocchi for autoscaling. I also use Mistral + Zaqar for
auto-healing. See the example below, hope it helps.


Main template:

(...)
  mongocluster:
    type: OS::Heat::AutoScalingGroup
    properties:
      cooldown: 60
      desired_capacity: 2
      max_size: 3
      min_size: 1
      resource:
        type: ./mongocluster.yaml
        properties:
          network: { get_attr: [ voicis_network, be_om_net ] }
          flavor: { get_param: flavor }
          image: { get_param: image }
          key_name: { get_param: key_name }
          base_mgmt_security_group: { get_attr: [ security_groups, base_mgmt ] }
          mongodb_security_group: { get_attr: [ security_groups, mongodb ] }
          root_stack_id: { get_param: "OS::stack_id" }
          metadata: { "metering.server_group": { get_param: "OS::stack_id" } }

  mongodb_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: mongocluster }
      cooldown: 60
      scaling_adjustment: 1

  mongodb_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: mongocluster }
      cooldown: 60
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      description: Scale up if the average CPU > 80% over a 300s period
      metric: cpu_util
      aggregation_method: mean
      granularity: 300
      evaluation_periods: 1
      threshold: 80
      resource_type: instance
      comparison_operator: gt
      alarm_actions:
        - str_replace:
            template: trust+url
            params:
              url: { get_attr: [ mongodb_scaleup_policy, signal_url ] }
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: { get_param: "OS::stack_id" }

  cpu_alarm_low:
    type: OS::Aodh::GnocchiAggregationByResourcesAlarm
    properties:
      metric: cpu_util
      aggregation_method: mean
      granularity: 300
      evaluation_periods: 1
      threshold: 5
      resource_type: instance
      comparison_operator: lt
      alarm_actions:
        - str_replace:
            template: trust+url
            params:
              url: { get_attr: [ mongodb_scaledown_policy, signal_url ] }
      query:
        str_replace:
          template: '{"=": {"server_group": "stack_id"}}'
          params:
            stack_id: { get_param: "OS::stack_id" }

outputs:
  mongo_stack_id:
    description: UUID of the cluster nested stack
    value: { get_resource: mongocluster }
  scale_up_url:
    description: >
      This URL is the webhook to scale up the autoscaling group.  You
      can invoke the scale-up operation by doing an HTTP POST to this
      URL; no body nor extra headers are needed.
    value: { get_attr: [ mongodb_scaleup_policy, alarm_url ] }
  scale_dn_url:
    description: >
      This URL is the webhook to scale down the autoscaling group.
      You can invoke the scale-down operation by doing an HTTP POST to
      this URL; no body nor extra headers are needed.
    value: { get_attr: [ mongodb_scaledown_policy, alarm_url ] }
  ceilometer_query:
    value:
      str_replace:
        template: >
          ceilometer statistics -m cpu_util
          -q metadata.user_metadata.stack=stackval -p 60 -a avg
        params:
          stackval: { get_param: "OS::stack_id" }
    description: >
      This is a Ceilometer query for statistics on the cpu_util meter
      Samples about OS::Nova::Server instances in this stack.  The -q
      parameter selects Samples according to the subject's metadata.
      When a VM's metadata includes an item of the form metering.X=Y,
      the corresponding Ceilometer resource has a metadata item of the
      form user_metadata.X=Y and samples about resources so tagged can
      be queried with a Ceilometer query term of the form
      metadata.user_metadata.X=Y.  In this case the nested stacks give
      their VMs metadata that is passed as a nested stack parameter,
      and this stack passes a metadata of the form metering.stack=Y,
      where Y is this stack's ID.
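
Once the stack is up, you can also exercise the policies by hand; a quick 
sketch, assuming the stack is named mongo-stack (any HTTP POST to the webhook 
works, no body needed):

# grab the scale-up webhook from the stack outputs and fire it
SCALE_UP_URL=$(openstack stack output show mongo-stack scale_up_url -f value -c output_value)
curl -X POST "$SCALE_UP_URL"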




mongocluster.yaml

heat_template_version: ocata

description: >
  MongoDB cluster node


parameters:

  metadata:
    type: json

  root_stack_id:
    type: string
    default: ""

conditions:
  is_standalone: { equals: [ { get_param: root_stack_id }, "" ] }

resources:

  mongodbserver:
    type: OS::Nova::Server
    properties:
      name: { str_replace: { params: { random_string: { get_resource: random_str }, __zone__: { get_param: zone } }, template: mongodb-random_string.__zone__ } }
      image: { get_param: image }
      flavor: { get_param: flavor }
      metadata: { get_param: metadata }
      key_name: { get_param: key_name }
      networks:
        - port: { get_resource: om_port }
      user_data_format: SOFTWARE_CONFIG
      user_data: { get_resource: server_clu_init 

[Openstack-operators] [openstack-operators][Ceilometer vs Monasca] Alarms: Ceilometer vs Monasca

2017-08-16 Thread Krzysztof Świątek
Hi,

I have a question about alarms in OpenStack.

I want autoscaling with Heat, and I'm looking for a metric/alarm project
that I can use with Heat.
I found that I can use Monasca or Ceilometer (with Aodh).
My question is:
Is any of you using Heat (autoscaling) in production?
If yes, what are you using for metrics and alarms (Monasca, Ceilometer,
other), and why?

-- 
Regards,
Krzysztof Świątek

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Pacemaker / Corosync in guests on OpenStack

2017-08-16 Thread John Petrini
I just did this recently and had no issues. I used a provider network, so I
don't have experience using it with project networks, but I believe the only
issue you might run into with project networks is multicast. You can work
around this by using unicast instead.

If you do use multicast, you need to enable IGMP in your security groups.
You can do this in Horizon by selecting "Other Protocol" and setting the IP
protocol number to 2.
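From the CLI that's something like (security group name is an example):

# IGMP is IP protocol 2; allow it between cluster members
neutron security-group-rule-create --direction ingress \
    --protocol 2 --remote-group-id cluster-sg cluster-sg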

I hit a minor issue setting up a VIP because port security wouldn't allow
traffic destined for that address to reach the instance, but all I had to do
was add the VIP as an allowed address pair on the port of each instance.
Also, I attached an additional interface to one of the instances to allocate
the VIP; I just didn't configure the interface within the instance. Since we
use DHCP this was a simple way to reserve the IP. I'm sure I could have
created a Pacemaker resource that would move the port using the OpenStack API,
but I prefer the simplicity and speed of Pacemaker's ocf:heartbeat:IPaddr2
resource.
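For example, with pcs (the VIP and netmask are placeholders):

pcs resource create cluster-vip ocf:heartbeat:IPaddr2 \
    ip=10.0.0.100 cidr_netmask=24 op monitor interval=10s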

I set up fencing of the instances via the OpenStack API to avoid any chance
of a duplicate IP when moving the VIP. I borrowed this script
https://github.com/beekhof/fence_openstack/blob/master/fence_openstack and
made a few minor changes.

Overall there weren't many differences between setting up Pacemaker in
OpenStack vs on iron (bare metal), but I hope this is helpful.


Regards,

John Petrini


On Wed, Aug 16, 2017 at 6:06 AM, Tim Bell  wrote:

>
>
> Has anyone had experience setting up a cluster of VM guests running
> Pacemaker / Corosync? Any recommendations?
>
>
>
> Tim
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Pacemaker / Corosync in guests on OpenStack

2017-08-16 Thread Tim Bell

Has anyone had experience setting up a cluster of VM guests running Pacemaker / 
Corosync? Any recommendations?

Tim

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nfv-telecom] Suspending meetings while we consider moving to being a sub group in LCOO

2017-08-16 Thread lebre . adrien

Thanks, Curtis, for your involvement in the community. 
I hope we will continue to interact and find common interests between NFV 
use cases and Fog/Edge ones. 

ad_rien_
- Original Message -
> From: "Curtis" 
> To: openstack-operators@lists.openstack.org
> Sent: Tuesday, August 15, 2017 17:49:20
> Subject: [Openstack-operators] [nfv-telecom] Suspending meetings while we 
> consider moving to being a sub group in LCOO
> 
> Hi All,
> 
> We've been having NFV-Telecom meetings for OpenStack Operators for a
> while now, but never quite reached a tipping point in terms of
> attendance to "get things done" so to speak.
> 
> The point of the group had been to find OpenStack Operators who are
> tasked with designing/operating NFV deployments and help them, but
> were never really able to surface enough of us to move forward. I
> want
> to be super clear that this group was for OpenStack Operators, and
> isn't a comment on any other NFV related groups in and around
> OpenStack...simply just people like me who deploy NFV clouds based on
> OpenStack.
> 
> We are currently considering moving to be a subgroup in LCOO, but
> that's just one idea. In the future, if we can surface more OpenStack
> Operators doing NFV work, maybe we'll revisit.
> 
> If anyone has strong feelings around this, please do reply and let me
> know of any ideas/comments/criticism, otherwise we'll probably move
> to
> discussing with LCOO.
> 
> Thanks to all those who attended meetings in the past, it's much
> appreciated,
> Curtis.
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [FEMDC] Meeting Cancelled - Next IRC meeting August, the 30th

2017-08-16 Thread lebre . adrien
Dear All, 


Today's IRC meeting is cancelled (due to the holiday period, most of us are 
unavailable). 
The next meeting will be held on August the 30th (agenda: TBD on the etherpad 
as usual).

Last but not least, if you are interested in FEMDC's challenges and you are 
located close to SF, do not hesitate to attend the OpenDev event next 
September: http://www.opendevconf.com/schedule/


Best, 
Ad_rien_

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [publiccloud-wg] Reminder meeting PublicCloudWorkingGroup

2017-08-16 Thread Tobias Rydberg

Hi everyone,

Don't forget today's meeting of the PublicCloudWorkingGroup.
1400 UTC in IRC channel #openstack-meeting-3

Etherpad: https://etherpad.openstack.org/p/publiccloud-wg

Regards,
Tobias Rydberg


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nfv-telecom] Suspending meetings while we consider moving to being a sub group in LCOO

2017-08-16 Thread Jay Pipes

Hi Curtis, Andrew U, Jamie M,

May I request that, if the telco working group merges with the LCOO, we 
get regular updates to the openstack[-operator|-dev] mailing list 
with information about the goings-on of LCOO? It would be good to get a 
bi-weekly or even monthly summary.


Other working groups host their meetings on IRC and publish status 
reports to the mailing lists fairly regularly. I personally find this 
information quite useful, and it would be great to see a similar effort from 
LCOO.


All the best,
-jay

On 08/15/2017 11:49 AM, Curtis wrote:

Hi All,

We've been having NFV-Telecom meetings for OpenStack Operators for a
while now, but never quite reached a tipping point in terms of
attendance to "get things done" so to speak.

The point of the group had been to find OpenStack Operators who are
tasked with designing/operating NFV deployments and help them, but
were never really able to surface enough of us to move forward. I want
to be super clear that this group was for OpenStack Operators, and
isn't a comment on any other NFV related groups in and around
OpenStack...simply just people like me who deploy NFV clouds based on
OpenStack.

We are currently considering moving to be a subgroup in LCOO, but
that's just one idea. In the future, if we can surface more OpenStack
Operators doing NFV work, maybe we'll revisit.

If anyone has strong feelings around this, please do reply and let me
know of any ideas/comments/criticism, otherwise we'll probably move to
discussing with LCOO.

Thanks to all those who attended meetings in the past, it's much appreciated,
Curtis.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators