[ceph-users] Fwd: large concurrent rbd operations block for over 15 mins!

2019-10-21 Thread Void Star Nill
Apparently the graph is too big, so my last post is stuck. Resending without the graph. Thanks

-- Forwarded message --
From: Void Star Nill
Date: Mon, Oct 21, 2019 at 4:41 PM
Subject: large concurrent rbd operations block for over 15 mins!
To: ceph-users

Hello, I have been

Re: [ceph-users] enterprise support

2019-07-17 Thread Void Star Nill
.io/) and they were really good.

> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>
>
> On Mon, Jul 15, 2019 at 12:53 PM Void Star Nill wrote:
>
>> Hello,
>>
>> Other than Red Hat and SUSE, are there other

[ceph-users] enterprise support

2019-07-15 Thread Void Star Nill
Hello, Other than Red Hat and SUSE, are there other companies that provide enterprise support for Ceph? Thanks, Shridhar

[ceph-users] Ceph block storage cluster limitations

2019-03-29 Thread Void Star Nill
Hello, I wanted to know if there are any max limitations on:
- Max number of Ceph data nodes
- Max number of OSDs per data node
- Global max on number of OSDs
- Any limitations on the size of each drive managed by an OSD?
- Any limitation on number of client nodes?
- Any limitation on maximum number
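As a practical aside to these questions, the current scale of a running cluster (for example, how many OSDs exist today) can be checked programmatically. Below is a minimal sketch using the python-rados bindings; the conffile path and the presence of a client keyring are assumptions, not something from the original thread.

```python
import json
import rados

# Minimal sketch: count the OSDs in a running cluster via a monitor
# command. Assumes /etc/ceph/ceph.conf and a client keyring exist
# (placeholders for illustration).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    # 'osd ls' returns the list of OSD ids; format=json makes the
    # output machine-readable.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'osd ls', 'format': 'json'}), b'')
    if ret == 0:
        print('OSDs in cluster:', len(json.loads(outbuf)))
finally:
    cluster.shutdown()
```

The same mon_command channel can issue commands such as 'osd df' to inspect per-OSD drive sizes and utilization.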

Re: [ceph-users] block storage over provisioning

2019-01-30 Thread Void Star Nill
Thanks Wido. Appreciate the quick response.

On Wed, 30 Jan 2019 at 12:27, Wido den Hollander wrote:
>
>
> On 1/30/19 9:12 PM, Void Star Nill wrote:
> > Hello,
> >
> > When a Ceph block device is created with a given size, does Ceph
> > allocate all that spac

[ceph-users] block storage over provisioning

2019-01-30 Thread Void Star Nill
Hello, When a Ceph block device is created with a given size, does Ceph allocate all that space right away, or is it allocated as the user starts storing data? I want to know if we can over-provision the Ceph cluster. For example, if we have a cluster with 10G available space, am I allowed
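RBD images are thin-provisioned: the size given at creation is a virtual size, and cluster space is consumed only as data is actually written, which is what makes over-provisioning possible. A minimal sketch with the python-rbd bindings, assuming a pool named 'rbd' and a standard /etc/ceph/ceph.conf (both placeholders):

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')
    try:
        # Provision a 100 GiB image; RBD allocates no data blocks up
        # front, so this succeeds even if the cluster has less free space.
        rbd.RBD().create(ioctx, 'overprovisioned-img', 100 * 1024**3)
        with rbd.Image(ioctx, 'overprovisioned-img') as img:
            print('virtual size:', img.size())  # reports the full 100 GiB
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

Actual consumption can be checked afterwards with `rbd du`, which reports provisioned versus used bytes per image.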

Re: [ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-22 Thread Void Star Nill
at 04:10, Ilya Dryomov wrote:
> On Fri, Jan 18, 2019 at 11:25 AM Mykola Golub wrote:
> >
> > On Thu, Jan 17, 2019 at 10:27:20AM -0800, Void Star Nill wrote:
> > > Hi,
> > >
> > > We are trying to use Ceph in our products to address some of the use

[ceph-users] read-only mounts of RBD images on multiple nodes for parallel reads

2019-01-17 Thread Void Star Nill
Hi,

We are trying to use Ceph in our products to address some of our use cases, and we think the Ceph block device is a good fit for us. One of the use cases is that we have a number of jobs running in containers that need read-only access to shared data. The data is written once and consumed multiple times.
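For this write-once, read-many pattern, here is a minimal sketch of how each reader node might open the same image read-only using the python-rbd bindings; the pool and image names are placeholders, not from the original thread:

```python
import rados
import rbd

# Each reader node can run this independently: opening the image with
# read_only=True does not take the exclusive lock, so many nodes can
# consume the write-once data in parallel.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')
    try:
        with rbd.Image(ioctx, 'shared-dataset', read_only=True) as img:
            chunk = img.read(0, 4096)  # read the first 4 KiB
            print(len(chunk), 'bytes read')
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

To guarantee readers a consistent view, writers should be quiesced first, or readers should open a snapshot taken after the write completes.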