Re: [Openstack-operators] the url for moderator request

2016-10-14 Thread Patricia Dugan
Thanks, EM. I was confirming the URL, i.e. the place to view which sessions need 
moderators and to list our availability. 

We’ve added our information to the URL I provided. Let me know if that’s not 
the right place to list our availability to contribute. Thx. See 
you soon! 

> On Oct 14, 2016, at 11:05 AM, Edgar Magana  wrote:
> 
> Patricia,
>  
> I think we are announcing which sessions we would like to moderate and adding 
> our names in the respective etherpad.
>  
> Thanks,
>  
> Edgar
>  
> From: Patricia Dugan 
> Date: Thursday, October 13, 2016 at 7:33 AM
> To: "openstack-operators@lists.openstack.org" 
> 
> Subject: [Openstack-operators] the url for moderator request
>  
> Is this the url for the moderator request: 
> https://etherpad.openstack.org/p/BCN-ops-meetup 
> 
>  << at the bottom, is that where we are supposed to fill it in? 
Re: [Openstack-operators] the url for moderator request

2016-10-14 Thread Edgar Magana
Patricia,

I think we are announcing which sessions we would like to moderate and adding 
our names in the respective etherpad.

Thanks,

Edgar

From: Patricia Dugan 
Date: Thursday, October 13, 2016 at 7:33 AM
To: "openstack-operators@lists.openstack.org" 

Subject: [Openstack-operators] the url for moderator request

Is this the url for the moderator request: 
https://etherpad.openstack.org/p/BCN-ops-meetup
 << at the bottom, is that where we are supposed to fill it in?


On Oct 13, 2016, at 12:19 AM, 
openstack-operators-requ...@lists.openstack.org
 wrote:

Send OpenStack-operators mailing list submissions to
openstack-operators@lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

or, via email, send a message with subject or body 'help' to
openstack-operators-requ...@lists.openstack.org

You can reach the person managing the list at
openstack-operators-ow...@lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of OpenStack-operators digest..."


Today's Topics:

  1. Re: [openstack-operators][ceph][nova] How do you handle Nova
 on Ceph? (Adam Kijak)
  2. Ubuntu package for Octavia (Lutz Birkhahn)
  3. Re: [openstack-operators][ceph][nova] How do you handle Nova
 on Ceph? (Adam Kijak)
  4. [nova] Does anyone use the os-diagnostics API? (Matt Riedemann)
  5. OPNFV delivered its new Colorado release (Ulrich Kleber)
  6. Re: [nova] Does anyone use the os-diagnostics API? (Tim Bell)
  7. Re: [nova] Does anyone use the os-diagnostics API? (Joe Topjian)
  8. Re: OPNFV delivered its new Colorado release (Jay Pipes)
  9. glance, nova backed by NFS (Curtis)
 10. Re: glance, nova backed by NFS (Kris G. Lindgren)
 11. Re: glance, nova backed by NFS (Tobias Schön)
 12. Re: glance, nova backed by NFS (Kris G. Lindgren)
 13. Re: glance, nova backed by NFS (Curtis)
 14. Re: glance, nova backed by NFS (James Penick)
 15. Re: glance, nova backed by NFS (Curtis)
 16. Re: Ubuntu package for Octavia (Xav Paice)
 17. Re: [openstack-operators][ceph][nova] How do you handle Nova
 on Ceph? (Warren Wang)
 18. Re: [openstack-operators][ceph][nova] How do you handle Nova
 on Ceph? (Clint Byrum)
 19. Disable console for an instance (Blair Bethwaite)
 20. host maintenance (Juvonen, Tomi (Nokia - FI/Espoo))
 21. Ops@Barcelona - Call for Moderators (Tom Fifield)


--

Message: 1
Date: Wed, 12 Oct 2016 12:23:41 +
From: Adam Kijak 
To: Xav Paice ,
"openstack-operators@lists.openstack.org"

Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova]
How do you handle Nova on Ceph?
Message-ID: <839b3aba73394bf9aae56c801687e...@corp.ovh.com>
Content-Type: text/plain; charset="iso-8859-1"



From: Xav Paice 
Sent: Monday, October 10, 2016 8:41 PM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-operators][ceph][nova] How do you 
handle Nova on Ceph?

On Mon, 2016-10-10 at 13:29 +, Adam Kijak wrote:

Hello,

We use a Ceph cluster for Nova (Glance and Cinder as well) and over time,
more and more data is stored there. We can't keep the cluster so big because
of Ceph's limitations. Sooner or later it needs to be closed for adding new
instances, images and volumes. Not to mention it's a big failure domain.

I'm really keen to hear more about those limitations.

Basically it's all related to the failure domain ("blast radius") and risk
management. A bigger Ceph cluster means more users. Growing the Ceph cluster
temporarily slows it down, so many users will be affected. There are bugs in
Ceph which can cause data corruption. It's rare, but when it happens it can
affect many (maybe all) users of the Ceph cluster.
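One common way to limit the slowdown described above is to throttle
recovery/backfill traffic while new OSDs are being added. A ceph.conf sketch
(values are illustrative, not a recommendation; option names are from
Jewel-era Ceph):

```ini
# /etc/ceph/ceph.conf (fragment) -- deprioritize rebalancing so client
# I/O keeps most of the bandwidth during cluster expansion
[osd]
osd_max_backfills = 1          ; concurrent backfill operations per OSD
osd_recovery_max_active = 1    ; concurrent recovery operations per OSD
osd_recovery_op_priority = 1   ; recovery priority (client ops default higher)
```

The same knobs can also be changed at runtime with `ceph tell osd.* injectargs`
for the duration of the expansion.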



How do you handle this issue?
What is your strategy to divide Ceph clusters between compute nodes?
How do you solve VM snapshot placement and migration issues then
(snapshots will be left on older Ceph)?

Having played with Ceph and compute on the same hosts, I'm a big fan of
separating them and having dedicated Ceph hosts, and dedicated compute
hosts.  That allows me a lot more flexibility with hardware
configuration and maintenance, easier troubleshooting for resource
contention, and also allows scaling at different rates.

Exactly, I consider it the best practice as well.
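If compute nodes are split across several smaller Ceph clusters, each compute
group only needs its own client configuration. A minimal nova.conf sketch,
assuming a cluster nicknamed "ceph-a" (pool, user, and paths are examples,
not prescriptions):

```ini
# nova.conf fragment on a compute node bound to cluster "ceph-a"
[libvirt]
images_type = rbd
images_rbd_pool = vms                        ; pool name is an example
images_rbd_ceph_conf = /etc/ceph/ceph-a.conf ; per-cluster config file
rbd_user = cinder                            ; cephx user, site-specific
rbd_secret_uuid = <libvirt secret uuid>      ; placeholder
```

With one such file per compute group, snapshots and ephemeral disks stay on
that group's cluster, which is part of why cross-cluster migration remains
the hard problem raised above.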




--

Message: 2
Date: Wed, 12 Oct 2016 12:25:38 +
From: Lutz Birkhahn 
To: "openstack-operators@lists.openstack.org