On Mon, Feb 17, 2014 at 12:29 AM, Alex Pearson <[email protected]> wrote:
> Hi All,
> I've been looking, but haven't been able to find any detailed documentation
> about journal usage on OSDs. Does anyone have any detailed docs they
> could share? My initial questions are:

Hmm, I'm not sure we have any serious layman's material on this. Did you run a
search through the docs at ceph.com/docs?
>
> Is the journal always write-only (except under recovery)?

Yes.

> I'm using BTRFS in the default layout, which I suspect is very inefficient,
> as it basically forces the discs to seek all the time (the journal
> partition is at the start of the disc).
> Is there a documented process to relocate the journal without re-creating
> the OSD?

I believe that's in the docs. In short: stop the OSD, flush the journal
with ceph-osd's --flush-journal flag, change the journal location in the
config, create the new journal with --mkjournal, and start the OSD again.
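
Something like this should work for osd.0 (a sketch, untested; assumes a
sysvinit-style deployment and that the new journal path already exists):

    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal    # write anything still in the journal to the store
    # then in ceph.conf, under [osd.0] (or [osd]):
    #   osd journal = /path/to/new/journal
    ceph-osd -i 0 --mkjournal        # initialize a journal at the new location
    service ceph start osd.0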

> What have other people done to optimize the journal without purchasing SSDs?

I'll let people who have done this talk about it in detail, but I
think it tends to involve placing the journal on a partition at the
outside edge of the disk and possibly giving the volume different
properties in the RAID controller if you're using one.
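
If you want to try the outer-edge approach, here's a sketch with parted
(untested; assumes /dev/sdb is a fresh OSD disk and a 10 GB journal; on most
spinning drives the fast outer tracks map to the lowest LBAs):

    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart journal 1MiB 10GiB    # journal on the fast outer tracks
    parted -s /dev/sdb mkpart data 10GiB 100%       # rest of the disk for the OSD data

Then point "osd journal" at /dev/sdb1 in ceph.conf.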
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com

>
>
> On another point, I'm running on HP Microservers (slow CPU - two cores) with
> 5 discs - 1x OS, 4x OSD... I currently have separate OSDs, but I'm seeing
> high load because there are more OSDs than cores in the box. I'm thinking of
> JBOD'ing the OSD discs into pairs using LVM (the disks are different sizes)
> so I have only two OSDs. Does anyone have any opinions on the merits of this?
>
> Also, has anyone seen any CPU usage comparisons of XFS vs EXT4 vs BTRFS?
>
> Obviously I know I'm running an enterprise system on a shoestring, but I'm
> keen to use this as a test bed to get comfortable with Ceph before
> recommending it in a real production environment, and I think optimizing and
> understanding it here could have great benefits when I scale out.
>
> Lots of questions, and as ever, any insight on any of these points would be
> appreciated!
>
> Regards
>
> Alex
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
