Hey,

Given what I can see on our primary collector running ext4 (2 TB / 134 million
inodes), it appears to have been allocated roughly 64 inodes per 1 MB of
storage. If yours is similar, I'm guessing your average size per capture file
is < 16 KB if you've run out of inodes before space.
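If it helps, here's a rough sanity check of that ratio you can run on the
collector itself (just a sketch, assuming Python is available; the mount point
path is an example, substitute your own):

    import os

    def inode_report(path="/data/nfsen"):          # example path, adjust
        st = os.statvfs(path)
        capacity = st.f_blocks * st.f_frsize       # total bytes on the FS
        bytes_per_inode = capacity / float(st.f_files)
        print("capacity        : %.1f GB" % (capacity / 1e9))
        print("inodes total    : %d" % st.f_files)
        print("inodes free     : %d" % st.f_ffree)
        print("bytes per inode : %.0f" % bytes_per_inode)
        # if your average capture file is smaller than this, inodes run out
        # before space does
        print("inode exhaustion threshold: avg file < %.0f bytes"
              % bytes_per_inode)

    inode_report()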

You can override the inode count when the filesystem is formatted (mke2fs -N),
though this may come with other dangers; I'm not a filesystem expert. If you
can shift your data somewhere else temporarily, reformatting and restoring
might be an option for you.

If you can live with managing older flows outside nfsen (which kinda defeats
the point), and you don't need instant access to the older captures regularly
(i.e. you're just keeping them for compliance), you could tar them up and
delete the individual files off disk.
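Something along these lines would do it (a sketch only; it assumes the usual
nfcapd.YYYYMMDDhhmm file naming and an example profile path, so adjust and
test on a copy before deleting anything):

    import os, tarfile, time

    PROFILE_DIR = "/data/nfsen/profiles-data/live/router1"   # example path
    KEEP_DAYS   = 90
    cutoff      = time.time() - KEEP_DAYS * 86400

    # capture files older than the cutoff
    old = [f for f in os.listdir(PROFILE_DIR)
           if f.startswith("nfcapd.")
           and os.path.getmtime(os.path.join(PROFILE_DIR, f)) < cutoff]

    if old:
        archive = os.path.join(PROFILE_DIR,
                               "nfcapd-archive-%d.tar.gz" % int(cutoff))
        with tarfile.open(archive, "w:gz") as tar:
            for f in sorted(old):
                tar.add(os.path.join(PROFILE_DIR, f), arcname=f)
        # only remove the originals once the archive is written
        for f in old:
            os.remove(os.path.join(PROFILE_DIR, f))
        print("archived %d files into %s" % (len(old), archive))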

Otherwise, you could merge devices into fewer sources (one collector source
for many devices), though you'll lose the ability to reference them
individually, which may be a huge issue for you. Depending on your use case, a
per-layer collector may not be a bad option; for example, we run Border, Core,
Aggregation and Edge collectors. Given we have over 1k edge devices which can
export sFlow, an individual profile per device would be crazy (and a royal PIA
for ad-hoc monitoring).

I'm guessing the least preferable choice is to reduce your retention period to
fit within your inode count, at 1 file / 5 min / device. Remember to leave
some headroom for the directory structure and any other misc FS data
(directories consume inodes too).
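The back-of-envelope budget looks something like this (numbers are made-up
examples, not your setup):

    # inode budget for "1 file / 5 min / device"
    devices        = 50
    days_retention = 365
    files_per_day  = (24 * 60) // 5          # 288 nfcapd files/device/day

    data_inodes = devices * files_per_day * days_retention
    overhead    = int(data_inodes * 0.02)    # rough slack for directories etc.

    print("inodes needed: %d (+%d slack)" % (data_inodes, overhead))
    # compare against the total-inode figure from statvfs / df -i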

Out of interest, what is your file system capacity, max inodes and filesystem?

Cheers,

P. 
