Anyone know why this happens? What datastore fills up specifically?
2014-04-04 17:01:51.277954 mon.0 [WRN] reached concerning levels of available space on data store (16% free)
2014-04-04 17:03:51.279801 7ffd0f7fe700 0 monclient: hunting for new mon
2014-04-04 17:03:51.280844 7ffd0d6f9700 0 --
/var/lib/ceph/mon/$cluster-$id
On 2014-04-04, 1:22 PM, Joao Eduardo Luis joao.l...@inktank.com wrote:
Well, that's no mon crash.
On 04/04/2014 06:06 PM, Karol Kozubal wrote:
Anyone know why this happens? What datastore fills up specifically?
The monitor's. Your monitor is sitting on a disk
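For anyone hitting the same warning: the monitor watches free space on the filesystem holding its data store (by default under /var/lib/ceph/mon/$cluster-$id) and warns when the free percentage drops below a configurable threshold. A minimal sketch of an equivalent check — the default path and 30% warning threshold are assumptions here, so substitute your actual mon directory and configured `mon data avail warn` value:

```python
import shutil

def mon_store_free_percent(path="/var/lib/ceph/mon"):
    """Percent of free space on the filesystem holding the mon data store.

    The default path is an assumption; point this at your actual
    /var/lib/ceph/mon/$cluster-$id directory.
    """
    usage = shutil.disk_usage(path)
    return 100 * usage.free // usage.total

def check_mon_store(path="/var/lib/ceph/mon", warn_percent=30):
    """Mimic the monitor's warning when free space drops below the threshold."""
    free = mon_store_free_percent(path)
    if free <= warn_percent:
        print("reached concerning levels of available space "
              "on data store (%d%% free)" % free)
    return free
```

Freeing space on that filesystem (or moving the mon store to a bigger disk) makes the warning go away once the free percentage rises back above the threshold.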
Hi All,
I am curious to know: what is the largest known Ceph production deployment?
I am looking for information regarding:
* number of nodes
* number of OSDs
* total capacity
And, if available, details regarding IOPS, types of disks, types of network interfaces, switches and
Hi Everyone,
I am just wondering if any of you are running a ceph cluster with an iSCSI target front end? I know this isn't available out of the box; unfortunately, in one particular use case we are looking at providing iSCSI access and it's a necessity. I am liking the idea of having rbd
On 03/15/2014 04:11 PM, Karol Kozubal wrote:
Hi Everyone,
I am just wondering if any of you are running a ceph cluster with an iSCSI target front end? I know this isn't available out of the box, unfortunately in one particular use case we are looking at providing iSCSI access and it's
wrote:
On 03/15/2014 05:40 PM, Karol Kozubal wrote:
Hi Wido,
I will have some new hardware for running tests in the next two weeks or so and will report my findings once I get a chance to run some tests. I will disable writeback on the target side as I will be attempting to configure an ssd
http://ceph.com/docs/master/rados/operations/placement-groups/
It's provided in the example calculation on that page.
Karol
On 2014-03-14, 10:37 AM, Christian Kauhaus k...@gocept.com wrote:
On 12.03.2014 18:54, McNamara, Bradley wrote:
Round up your pg_num and pgp_num to the next power of
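The rule of thumb from that docs page works out to roughly (number of OSDs × 100) / replica count, rounded up to the next power of two. A quick illustrative calculation — the function name and defaults are mine, not from the docs:

```python
def suggested_pg_num(num_osds, pool_size, pgs_per_osd=100):
    """Heuristic from the placement-groups docs:
    (OSDs * pgs_per_osd) / pool_size, rounded up to a power of two."""
    raw = num_osds * pgs_per_osd / float(pool_size)
    power = 1
    while power < raw:
        power *= 2  # round up to the next power of two
    return power

# e.g. 100 OSDs with 3 replicas: 100 * 100 / 3 ~= 3333 -> 4096
```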
Correction: sorry, min_size is at 1 everywhere.
Thank you.
Karol Kozubal
From: Karol Kozubal karol.kozu...@elits.com
Date: Wednesday, March 12, 2014 at 12:06 PM
To: ceph-users@lists.ceph.com
acceptable?
3. Is it possible to scale down the number of PGs?
Thank you for your input.
Karol Kozubal
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
From what I understand of Ceph's architecture, you would be creating a bottleneck for your Ceph traffic. Ceph's advantage is the potential concurrency of the traffic and the decentralization of the client-facing interfaces, which increases scale-out capability.
Can you give a bit more detail about your
Karol
From: McNamara, Bradley bradley.mcnam...@seattle.gov
Date: Wednesday, March 12, 2014 at 7:01 PM
To: Karol Kozubal karol.kozu...@elits.com, ceph-users@lists.ceph.com