Hi,
my OSD's current folder has a size of ~360MB, but I do not have any
objects inside the corresponding pool; ceph status reports '8 bytes
data'. Even with 'rados -p mypool ls --all' I do not see any objects.
But there are a few current/12._head folders with files consuming
disk space.
How to
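For the disk-usage puzzle above, a few commands can cross-check what Ceph accounts for against what the filesystem holds. A sketch; the pool name mypool comes from the mail, while OSD id 0 and the default /var/lib/ceph path are assumptions:

```shell
# Per-pool usage as Ceph accounts it.
ceph df

# List objects, including snapshots/clones, in the pool.
rados -p mypool ls --all

# On the OSD host: which PG directories of pool 12 hold the space?
# (assumes the default path layout and OSD id 0)
du -sh /var/lib/ceph/osd/ceph-0/current/12.*_head | sort -h
```

If du reports real data but rados lists nothing, the space is often leftover from deleted objects that have not been cleaned up yet, or belongs to PG metadata rather than user objects.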
Ceph is cool software, but from time to time it gives me gray hairs.
And I hope that's down to a misunderstanding on my part. This time I
want to balance the load between three OSDs evenly (same usage %). Two
OSDs are 2GB, one is 4GB (test environment). By the way: the pool is
erasure coded.
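The usual way to even out percentage usage across differently sized OSDs is to give each a CRUSH weight proportional to its capacity. A sketch, assuming OSD ids 0-2 with the 2GB/2GB/4GB sizes from the mail:

```shell
# CRUSH weight proportional to capacity: the 4GB OSD gets twice the
# weight of the 2GB ones, so it receives roughly twice the data.
ceph osd crush reweight osd.0 2
ceph osd crush reweight osd.1 2
ceph osd crush reweight osd.2 4

# Verify the resulting weights and placement.
ceph osd tree
```

Note that with few PGs and tiny OSDs the statistical balance CRUSH aims for can still be visibly off; more PGs per OSD smooth it out.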
On 03.01.2015 at 00:36, Dyweni - Ceph-Users wrote:
Your OSDs are full. The cluster will block until space is freed up and
both OSDs leave the full state.
Okay, I did not know that an rbd map alone is too much for a full
cluster. That makes things a bit hard to work around, because reducing
the
In my test environment I changed the reweight of an OSD. After this,
some PGs got stuck in the 'active+remapped' state. I could only repair
it by stepping back to the old reweight value.
Here is my ceph tree:
# id    weight  type name       up/down reweight
-1      12      root default
-4
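Before rolling a reweight back, it can help to see which PGs are stuck and why. A sketch; the PG id 12.4 is a placeholder to replace with one from the actual output:

```shell
# Which PGs are stuck, and in which state?
ceph pg dump_stuck unclean
ceph health detail

# Ask one affected PG for its full state, acting set, and recovery info
# (12.4 is a hypothetical PG id -- substitute a real one).
ceph pg 12.4 query
```

'active+remapped' after a reweight often means CRUSH can no longer find enough distinct OSDs for some PGs under the new weights, which is easy to hit in very small test clusters.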
After I tried to copy some files onto an rbd device, I ran into an OSD
full state. So I restarted my server and wanted to remove some files
from the filesystem again. But now I cannot execute rbd map anymore, and
I do not know why.
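One common escape from this chicken-and-egg situation (the cluster is full, but freeing space requires mapping the image) is to temporarily raise the full threshold. A sketch for the pre-Luminous releases discussed here; the pool/image names are placeholders, and the ratio should be put back afterwards:

```shell
# Raise the full ratio slightly so the cluster unblocks (default 0.95).
ceph pg set_full_ratio 0.97

# Now the image can be mapped and data deleted.
rbd map mypool/myimage
mount /dev/rbd0 /mnt
rm -rf /mnt/some-big-files
umount /mnt

# Restore the default threshold once usage has dropped.
ceph pg set_full_ratio 0.95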
This all happened in my testing environment and this is
Hi, my cluster setup would be much easier if I used CephFS on it (instead
of a block device with OCFS2 or something else). But it is said everywhere
that it is not ready for production use at this time. I wonder what this
is all about? Does it mean that there are a few features missing, or is the
I tried to install v0.90 on Debian 8 (Jessie). I ran into a problem with a
dependency of the main package 'ceph'. It depends on libboost-system-1.49.0
and libboost-thread-1.49.0. Jessie ships with Boost 1.55, which is not
recognized by the package because it has different package names.
What is the
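The cleanest fix is rebuilding the package against Jessie's Boost, but as a stopgap one can satisfy the stale dependency with a dummy package via equivs. A sketch at your own risk; the exact Debian package names (libboost-system1.49.0 vs. 1.55.0) are assumptions to verify against apt-cache:

```shell
sudo apt-get install equivs

# Generate a control-file template for the missing package name.
equivs-control libboost-system1.49.0
# Edit it: set "Package: libboost-system1.49.0" and
# "Depends: libboost-system1.55.0" so the dummy pulls in the real library.

# Build and install the dummy package.
equivs-build libboost-system1.49.0
sudo dpkg -i libboost-system1.49.0_*.deb
```

This only papers over the package metadata; the binaries still have to be ABI-compatible with the Boost version actually installed, which is not guaranteed across 1.49 and 1.55.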
I understand that the OSD full status should never be reached. As I am new
to Ceph, I want to be prepared for this case. I tried two different
scenarios, and here are my experiences:
The first one is to completely fill the storage (for me: writing files to a
rados block device). I discovered that
I am trying to set up a small VM Ceph cluster to practice on before creating
a real cluster. Currently there are two OSDs on the same host. I wanted to
create an erasure coded pool with k=1 and m=1 (yes, I know it's stupid, but
it is a test case). On top of it there is a cache tier (writeback), and I
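For reference, the setup described above can be built with a handful of commands. A sketch for that era of Ceph; the profile and pool names are placeholders, and the failure domain is set to osd because both OSDs share one host (the default, host, would leave PGs undersized):

```shell
# Erasure profile with k=1, m=1, spreading shards across OSDs
# rather than hosts (both OSDs live on the same machine).
ceph osd erasure-code-profile set testprofile k=1 m=1 \
    ruleset-failure-domain=osd

# The erasure coded base pool.
ceph osd pool create ecpool 64 64 erasure testprofile

# A replicated pool used as writeback cache tier in front of it.
ceph osd pool create cachepool 64
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool
```

With the overlay set, clients talk to ecpool as usual and objects are promoted into and flushed out of cachepool transparently.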
Ilya Dryomov ilya.dryo...@inktank.com wrote on 12 December 2014 at 18:00:
Just a note, discard support went into 3.18, which was released a few
days ago.
I recently compiled 3.18 on Debian 7, and what can I say... It works
perfectly well. The used memory goes up and down
At the moment I am a bit confused about how to configure my journals, and
where. I will start my first Ceph experience with a small home cluster made
of two nodes. Both nodes will get around three to five hard disks and one
SSD each. The hard disks are XFS-formatted and each one represents an OSD.
The
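The usual layout for this kind of setup is one journal partition per OSD on the shared SSD. A ceph.conf sketch; the journal size and partition labels are example values, not recommendations from the thread:

```ini
[osd]
; journal size in MB; one journal partition per OSD on the SSD
osd journal size = 5120

[osd.0]
; each OSD points at its own SSD partition (example device paths)
osd journal = /dev/disk/by-partlabel/journal-0

[osd.1]
osd journal = /dev/disk/by-partlabel/journal-1
```

The trade-off: the SSD absorbs the double-write penalty of the FileStore journal, but it also becomes a single point of failure for every OSD journaling on it.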
J-P Methot jpmet...@gtcomm.net wrote on 15 December 2014 at 16:05:
I must admit, I have a bit of difficulty understanding your diagram.
I was under the illusion that a cache tier also has a journal, but it does
not. Sounds less complex now.
But the XFS journals on the devices (as they are
I am new to Ceph and am starting to discover its features. I used ext4
partitions (also mounted with -o discard) to place several OSDs on. Then I
created an erasure coded pool in this cluster. On top of this there is the
rados block device, which also holds an ext4 filesystem (of course mounted
with
Wido den Hollander w...@42on.com wrote on 12 December 2014 at 12:53:
It depends. Kernel RBD does not support discard/trim yet. Qemu does
under certain situations and with special configuration.
Ah, thank you. So this is my problem. I use rbd with the kernel module.
I think I
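As noted earlier in the thread, kernel RBD gained discard support in 3.18, so on a new enough kernel the freed filesystem space can be returned to the pool. A sketch; the pool/image names and mount point are placeholders:

```shell
# With kernel >= 3.18, discard works on kernel-mapped RBD images.
rbd map mypool/myimage
mount -o discard /dev/rbd0 /mnt   # online discard on every delete

# Alternatively, mount without -o discard and trim in batches,
# which is usually cheaper than per-delete discards.
fstrim /mnt
```

On older kernels neither path releases space back to the cluster; deleted files remain allocated in the pool until the image is removed.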