Hi,
I don't suppose anyone ever managed to look at or fix this issue with
rbd-fuse? Or does anyone know what I might be doing wrong?
Best regards
Graeme
On 07/02/14 12:20, Graeme Lambert wrote:
Hi,
Does anyone know what the issue is with this?
Thanks
*Graeme*
On 06/02/14 13:21, Graeme Lambert wrote:
Hi,
I've got a few VMs in Ceph RBD that are running very slowly - presumably
down to a backfill after increasing the pg_num of a big pool.
Would RBD caching resolve that issue? If so, how do I enable it? The
documentation states that setting rbd cache = true in [global] enables
it, but
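Since the question above is cut off: for librbd the cache is client-side, so the options conventionally live in the [client] section of ceph.conf rather than [global] (though [global] works too). A minimal sketch, with option names as I understand them from the Ceph docs (verify against your release):

```ini
; Sketch only: enable the librbd client-side cache.
[client]
rbd cache = true
; Safer default: act as writethrough until the guest issues its first
; flush, so an OS that never flushes cannot lose acknowledged writes.
rbd cache writethrough until flush = true
```

As far as I know, when the RBD image is attached to a VM through QEMU/libvirt, the disk's cache mode in the libvirt XML (e.g. cache='writeback') also has to permit caching for this to take effect.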
Hi all,
Can anyone advise what the problem below is with rbd-fuse? From
http://mail.blameitonlove.com/lists/ceph-devel/msg14723.html it looks
like this has happened before but should've been fixed way before now?
rbd-fuse -d -p libvirt-pool -c /etc/ceph/ceph.conf ceph
FUSE library version:
Hi,
I've got 6 OSDs and I want 3 replicas per object, so following the
formula that's 200 PGs per OSD, which is 1,200 overall.
I've got two RBD pools and the .rgw.buckets pool, which hold considerably
more objects than the others (given
that RADOS gateway
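As a cross-check on the arithmetic above: the rule of thumb in the Ceph docs is total PGs = (OSDs x 100) / replicas, rounded up to a power of two. A sketch assuming that formula (note it gives a rather smaller number than the 1,200 above, so it's worth double-checking which rule is being applied):

```python
import math

def recommended_pg_count(num_osds, replicas, target_pgs_per_osd=100):
    """Rule-of-thumb total PG count from the Ceph docs:
    (OSDs * target_per_osd) / replicas, rounded up to a power of two."""
    raw = num_osds * target_pgs_per_osd / replicas
    return 2 ** math.ceil(math.log2(raw))

print(recommended_pg_count(6, 3))  # 6 OSDs, 3 replicas → 256
```

The raw value for this cluster is 6 x 100 / 3 = 200, which rounds up to 256, not 1,200.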
Hi,
I'm using the aws-sdk-for-php classes for the Ceph RADOS gateway, but I'm
getting an intermittent issue when uploading files.
I'm attempting to upload an array of objects to Ceph one by one using
the create_object() function. It appears to stop randomly when
attempting to do them.
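One way to cope with uploads stopping randomly (whatever the underlying cause) is to retry each object a few times before giving up. A minimal sketch in Python rather than PHP, where `upload_one` is a hypothetical stand-in for whatever SDK call performs the upload (create_object() in the aws-sdk-for-php case):

```python
import time

def upload_with_retry(upload_one, key, data, attempts=3, backoff=1.0):
    """Call upload_one(key, data), retrying on failure.

    upload_one is a placeholder for the real SDK upload call; it is
    expected to raise an exception when an upload fails.
    """
    for attempt in range(1, attempts + 1):
        try:
            return upload_one(key, data)
        except Exception:
            if attempt == attempts:
                raise  # out of retries, surface the error
            time.sleep(backoff * attempt)  # back off between attempts
```

The same loop-with-backoff shape translates directly to PHP around the create_object() call.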
Best regards
Graeme
On 22/01/14 16:28, Yehuda Sadeh wrote:
On Wed, Jan 22, 2014 at 8:05 AM, Graeme Lambert glamb...@adepteo.net wrote:
Hi,
I'm using the aws-sdk-for-php classes for the Ceph RADOS gateway but I'm
getting an intermittent issue with the uploading files.
I'm attempting
objects per pg (76219) is more than 55.4723 times
cluster average (1374)
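For what it's worth, the multiplier in that warning is simply the pool's objects-per-PG divided by the cluster-wide average, and the numbers check out:

```python
# Ceph's reported multiplier = pool objects-per-PG / cluster average.
ratio = 76219 / 1374
print(round(ratio, 4))  # → 55.4723, matching the warning
```

So the warning is really saying this one pool holds a wildly disproportionate share of the objects relative to its pg_num.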
Ignore the cloudstack pool; I was using CloudStack but not any more, so
it's an inactive pool.
Best regards
Graeme
On 22/01/14 16:38, Graeme Lambert wrote:
Hi,
Following discussions with people on IRC, I set debug_ms
different between them but the level of disk
read across them does seem rather high?
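For reference, a sketch of where debug_ms is usually set (the option name is real; placement shown is the common convention, so verify against your release):

```ini
; Sketch: raise messenger (debug_ms) logging in ceph.conf; takes
; effect after a daemon restart. It can also be injected at runtime,
; e.g.:  ceph tell osd.* injectargs '--debug-ms 1'
[global]
debug ms = 1
```

Higher debug_ms levels can themselves add noticeable I/O from log writes, which is worth keeping in mind when comparing disk read/write levels across OSDs.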
Best regards
Graeme
On 22/01/14 16:55, Graeme Lambert wrote:
Hi Yehuda,
With regards to the health status of the cluster, it isn't healthy but
I haven't found any way of fixing the placement group errors. Looking