On Tue, Mar 24, 2015 at 12:13 AM, Christian Balzer <ch...@gol.com> wrote:
On Tue, 24 Mar 2015 09:41:04 +0300 Kamil Kuramshin wrote:
Yes, I read it and do not understand what you mean when you say *verify
this*. All 3335808 inodes are definitely files and directories created by
the ceph OSD process:
What I mean
Directory Hash Seed: 148ee5dd-7ee0-470c-a08a-b11c318ff90b
Journal backup: inode blocks
*fsck.ext4 /dev/sda1*
e2fsck 1.42.5 (29-Jul-2012)
/dev/sda1: clean, 3335808/3335808 files, 7668840/13342945 blocks
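For reference, a quick way to back up that claim is to count the entries on the OSD's data mount and compare with the filesystem's own inode accounting (the ceph-45 / /dev/sdb1 pairing is taken from the df -i output further down this thread):

superuser@node02:~$ find /var/lib/ceph/osd/ceph-45 -xdev | wc -l
superuser@node02:~$ df -i /var/lib/ceph/osd/ceph-45
superuser@node02:~$ sudo tune2fs -l /dev/sdb1 | grep -iE 'inode count|free inodes'

If the find count roughly matches IUsed, the inodes really are consumed by the objects (plus the hashed directory tree) that the filestore OSD keeps as individual files.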
On 23.03.2015 17:09, Christian Balzer wrote:
On Mon, 23 Mar 2015 15:26:07 +0300 Kamil Kuramshin wrote:
or change
the block size when you build the filesystem. Unfortunately, the only
way to fix the problem that I have seen is to reformat
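For reference, a minimal sketch of the reformat with more inodes: either lower the bytes-per-inode ratio or request an absolute inode count when running mkfs (the device and values below are illustrative assumptions; the ext4 default of one inode per 16384 bytes is what yields the 3335808 limit seen above):

superuser@node02:~$ sudo mkfs.ext4 -i 4096 /dev/sdb1       # one inode per 4 KiB, roughly 4x the default count
superuser@node02:~$ sudo mkfs.ext4 -N 13000000 /dev/sdb1   # or ask for an explicit number of inodes

Either way the OSD has to be taken out, reformatted and backfilled, so in practice this is done one OSD at a time.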
On Mon, Mar 23, 2015 at 5:51 AM, Kamil Kuramshin <kamil.kurams...@tatar.ru> wrote:
In my case there was a cache pool for an EC pool:
.0.23a8f.238e1f29.00027632__head_C4F3D517__3
What's your use case, RBD, CephFS, RadosGW?
Regards,
Christian
On Mon, 23 Mar 2015 10:32:55 +0300 Kamil Kuramshin wrote:
Recently I got a problem with OSDs based on SSD disks used in a cache tier
for an EC-pool:
superuser@node02:~$ df -i
Filesystem       Inodes   IUsed  *IFree* IUse% Mounted on
...
/dev/sdb1       3335808 3335808     *0*  100% /var/lib/ceph/osd/ceph-45
/dev/sda1
For example, here is my configuration:
superuser@admin:~$ ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    242T     209T      20783G       8.38
POOLS:
    NAME                 ID     USED      %USED     MAX AVAIL     OBJECTS
    ec_backup-storage    4      9629G     3.88
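For reference, since a filestore OSD keeps every object as at least one file (and therefore one inode), the per-pool object counts are worth putting next to the df -i numbers; a hedged sketch with the standard rados tool (the pool name 'cache' is taken from the evict command below):

superuser@admin:~$ rados df                     # per-pool object counts and usage
superuser@admin:~$ rados -p cache ls | wc -l    # object count in the cache pool; can be slow on large pools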
hi, folks! I'm testing a cache tier for an erasure-coded pool with an
RBD image on it. And now I'm facing a problem with a full cache pool:
objects are not evicted automatically, only if I run manually
*rados -p cache cache-flush-evict-all*
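If eviction never triggers on its own, one thing worth checking first (a guess from the symptom, not something confirmed in the thread) is whether the cache pool has any eviction targets configured at all, since the tiering agent does nothing without them:

superuser@admin:~$ ceph osd pool get cache target_max_bytes
superuser@admin:~$ ceph osd pool get cache target_max_objects
superuser@admin:~$ ceph osd pool get cache cache_target_full_ratio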
client side is:
superuser@share:~$ uname
of objects.
[14:52] <stannum> Be-El: and nothing is said about setting both options
[14:52] <Be-El> stannum: yes, the documentation is somewhat lacking. ceph cannot determine the amount of available space (and thus the maximum possible size of a pool)
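For reference, a minimal sketch of setting both limits explicitly, which is what the exchange above is getting at: because ceph cannot work out the maximum possible size of the pool on its own, the absolute targets must be supplied by hand, and the flush/evict ratios are then applied relative to them (the numbers below are illustrative assumptions, not values from this thread):

superuser@admin:~$ ceph osd pool set cache target_max_bytes 1099511627776   # ~1 TiB, example value
superuser@admin:~$ ceph osd pool set cache target_max_objects 3000000       # example value
superuser@admin:~$ ceph osd pool set cache cache_target_dirty_ratio 0.4     # start flushing at 40% of the target
superuser@admin:~$ ceph osd pool set cache cache_target_full_ratio 0.8      # start evicting at 80% of the target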
On 10.03.2015 14:41, Kamil Kuramshin wrote:
What did you mean when you said "ceph client"?
The log piece that you posted seems to be about the kernel that you are
using not supporting some features of ceph. Try to update your kernel if
your 'client' is a Rados Block Device client.
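If those log lines are the usual "feature set mismatch" messages from the kernel RBD client (an assumption here, since the actual log is not quoted), the alternative to a kernel upgrade is usually to drop the cluster's CRUSH tunables to a profile the old kernel understands, e.g.:

superuser@share:~$ uname -r                         # confirm which kernel the client is really running
superuser@admin:~$ ceph osd crush tunables legacy   # or 'bobtail', depending on which features the kernel lacks

Be aware that changing tunables causes data movement across the cluster.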
On 06.03.2015 00:48, Sonal Dubey wrote:
Hi,
I am a newbie to ceph, and
Can't find out why this can happen:
Got a HEALTH_OK cluster, ceph version 0.87; all nodes are Debian Wheezy
with a stable kernel 3.2.65-1+deb7u1. ceph df shows me this:
*$ ceph df*
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    *242T    221T      8519G        3.43*
POOLS: