I'm trying out the megacarbon branch so that I can use ceres. I run 8 writer
instances, and the clients sending the metrics (~400k/min across all 8
instances) consistent-hash directly onto them. That part appears to work
without problems.
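
(For context, the routing the clients do is roughly like the sketch below.
This is a generic stand-in with made-up instance names and hash choice, not
our actual client code and not carbon's own hash ring.)

import hashlib
from bisect import bisect

# Hypothetical writer instances; the real list is host:port pairs.
INSTANCES = ["writer-%d:2003" % i for i in range(8)]
REPLICAS = 100  # virtual nodes per instance to smooth the distribution

def _hash(key):
    # small integer hash over the md5 of the key
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 65536

ring = sorted(
    (_hash("%s:%d" % (inst, r)), inst)
    for inst in INSTANCES
    for r in range(REPLICAS)
)
positions = [pos for pos, _ in ring]

def route(metric):
    """Return the writer instance responsible for a metric name."""
    i = bisect(positions, _hash(metric)) % len(ring)
    return ring[i][1]

print(route("servers.web01.cpu.user"))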

However, after a while I run into inode exhaustion:

# df -i
Filesystem       Inodes     IUsed IFree IUse% Mounted on
/dev/sdb1     183042048 183042048     0  100% /srv/graphite



Of course, 183 million is a lot of inodes. Looking into a random ceres
directory, I see:

-rw-r--r-- 1 user  136 Oct 28 07:46 <start>@<step>.slice
-rw-r--r-- 1 user  264 Oct 28 07:46 <start>@<step>.slice
-rw-r--r-- 1 user   64 Oct 28 07:46 <start>@<step>.slice
-rw-r--r-- 1 user  224 Oct 28 07:46 <start>@<step>.slice
-rw-r--r-- 1 user  824 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user  224 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user   32 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user  136 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user  160 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user   16 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user  528 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user  544 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user  728 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user  256 Oct 28 13:34 <start>@<step>.slice
-rw-r--r-- 1 user 1.2K Oct 28 19:12 <start>@<step>.slice
-rw-r--r-- 1 user 1.1K Oct 28 19:12 <start>@<step>.slice
-rw-r--r-- 1 user  344 Oct 28 19:12 <start>@<step>.slice
-rw-r--r-- 1 user   16 Oct 28 19:12 <start>@<step>.slice
-rw-r--r-- 1 user  160 Oct 28 19:12 <start>@<step>.slice
-rw-r--r-- 1 user   16 Oct 28 19:12 <start>@<step>.slice
-rw-r--r-- 1 user  248 Oct 28 19:12 <start>@<step>.slice
-rw-r--r-- 1 user  600 Oct 28 19:12 <start>@<step>.slice
-rw-r--r-- 1 user 1.2K Oct 29 00:48 <start>@<step>.slice
-rw-r--r-- 1 user 1.1K Oct 29 00:48 <start>@<step>.slice
-rw-r--r-- 1 user  472 Oct 29 00:48 <start>@<step>.slice
-rw-r--r-- 1 user  192 Oct 29 00:48 <start>@<step>.slice
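
(A quick way to see how those slices add up across the whole tree is
something like the script below; the storage root path is only my guess at
where it lives on this box.)

import os
from collections import Counter

CERES_ROOT = '/srv/graphite/storage/ceres'  # assumed; adjust to the real root

counts = Counter()
for dirpath, dirnames, filenames in os.walk(CERES_ROOT):
    n = sum(1 for name in filenames if name.endswith('.slice'))
    if n:
        counts[dirpath] = n

print('total slice files: %d' % sum(counts.values()))
print('worst offenders:')
for path, n in counts.most_common(10):
    print('%6d  %s' % (n, path))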



In my carbon-daemons/<instance>/db.conf I have

MAX_SLICE_GAP = 120

So from the comments I would expect new slices to start at least 2 hours
(7200s) apart if there is no data coming in - but just from a random sample
of the above, I see many that are not; the gaps range from 180s to 4000s.

Am I reading the configs wrong? Shouldn't ceres just reuse the same slice and
blank out the missing data as long as the gap is within 120 * 60 seconds?
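
(To make sure I'm asking the right question, this is how I *think* the
slice-gap decision works, written out as a toy function; the names and the
exact comparison are my guesses, not the real ceres code.)

def starts_new_slice(last_point_ts, time_step, max_slice_gap, new_ts):
    # gap measured in datapoints between the end of the slice and the new point
    gap_in_points = (new_ts - last_point_ts) // time_step
    return gap_in_points > max_slice_gap

# With MAX_SLICE_GAP = 120 and a 60s resolution, only a gap longer than
# 120 * 60 = 7200 seconds should force a new slice file; anything shorter
# should just pad the existing slice with blank datapoints.
print(starts_new_slice(1000000000, 60, 120, 1000000000 + 3600))   # False -> reuse slice
print(starts_new_slice(1000000000, 60, 120, 1000000000 + 10000))  # True  -> new slice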

Am I hammering the system too much? The timestamps on the files suggest they
were created in batches (some contention issue?).

Is there a way to 'merge' the slices back together into one big slice so I
can reclaim my inodes? There is a ceres-maintenance script... but I have no
idea where the plugins are.
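
(To be concrete about what I mean by 'merge': something along the lines of
the sketch below, i.e. read a node's whole history and rewrite it in one pass
into a scratch tree. The CeresTree/CeresNode calls and attribute names are
from memory and may well be wrong, and I haven't verified that this actually
ends up as a single slice.)

from ceres import CeresTree

SRC_ROOT = '/srv/graphite/storage/ceres'          # assumed current storage root
DST_ROOT = '/srv/graphite/storage/ceres-merged'   # scratch tree to rewrite into

def rewrite_node(src_tree, dst_tree, metric_path):
    # read the node's entire history in one go
    node = src_tree.getNode(metric_path)
    slices = list(node.slices)
    start = min(s.startTime for s in slices)
    end = max(s.endTime for s in slices)
    data = node.read(start, end)

    # keep only real datapoints; gaps come back as None
    timestamps = range(data.startTime, data.endTime, data.timeStep)
    points = [(ts, v) for ts, v in zip(timestamps, data.values) if v is not None]

    # write the whole history back in a single pass
    new_node = dst_tree.createNode(metric_path)
    new_node.write(points)

src = CeresTree(SRC_ROOT)
dst = CeresTree.createTree(DST_ROOT)
rewrite_node(src, dst, 'servers.web01.cpu.user')   # hypothetical metric path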

Cheers,

Yee. 


