No work has yet been done to optimize V4 for SSD devices. It would be straightforward to do.
Hans

Jure Pečar wrote:

> On Thu, 23 Feb 2006 04:21:46 -0600
> David Masover <[EMAIL PROTECTED]> wrote:
>
>> Where can I find the paper on why this makes sense? Because offhand,
>> it doesn't, unless you're hoping that the majority of transactions
>> can be flushed on boot, rather than unrolled.
>
> Can't point you to any specific paper, but you can imagine running a
> large mailserver for hundreds of thousands of users. Plenty of small,
> random I/O, almost as many writes as reads. That's where an SSD for
> the journal makes sense.
>
>> I'm going to assume you aren't talking about v4, since this sounds
>> like a mission-critical production-style environment. As I
>> understand it, v4 has a completely different way of doing journaling.
>
> Right and right.
>
>> I'm replying to you, not because I actually have an answer for you,
>> but because your case seems interesting, and I'm curious how Reiser4
>> handles it.
>
> Check namesys.com on "wandering logs" :)
>
>> Problem is, I see nowhere for this to fit in the current model of
>> Reiser4. As I understand it, there is no concept of a separate
>> "journal" device, or of writing a file twice, because the vast
>> majority of writes are simply written out to disk in the new
>> location, and then the "commit" is updating the metadata to point to
>> the new location and free the old.
>
> I suppose this "wandering logs" concept is going to be much better than
> the "journal file/device" concept ext3 uses, but right now it sounds
> like it needs some more optimization work.
>
> The cost here we all want to avoid is called seek time. Even today,
> it's still measured in milliseconds, and that's several orders of
> magnitude slower than the rate at which gigahertz CPUs tick. Reiser4 is
> well on its way to reducing this cost by spending some more CPU ticks,
> but because I need a solution "yesterday" (welcome to the real world...
> or so they say :) ), I'm looking for a more traditional approach: an
> SSD journal.
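To put the seek-time point in rough numbers: with an assumed 8 ms average seek and a 1 GHz clock (both illustrative figures, not measurements of any particular drive or CPU), a single seek costs millions of CPU cycles, which is why spending extra CPU work to avoid seeks is usually a win. A minimal back-of-envelope sketch in C:

    /* Back-of-envelope cost of one disk seek in CPU cycles.
     * The 8 ms seek time and 1 GHz clock are assumed, illustrative
     * numbers, not measurements. */
    #include <stdio.h>

    int main(void)
    {
        const double seek_time_s = 0.008;  /* ~8 ms average seek (assumed) */
        const double cpu_hz      = 1.0e9;  /* 1 GHz clock (assumed)        */

        printf("one seek ~= %.0f CPU cycles\n", seek_time_s * cpu_hz);
        return 0;
    }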
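The contrast David describes (a fixed journal area that every block is copied through versus a wandering log, where data is written once to a new location and "commit" is just repointing metadata) can be made concrete with a toy in-memory model. This is only a sketch of the general idea; the block numbers and helpers such as block_write(), commit_fixed_journal() and commit_wandering() are invented for illustration and are not the actual ext3 or reiser4 code:

    /* Toy model contrasting two commit styles: an ext3-style fixed
     * journal (data written twice) versus a wandering-log style commit
     * (data written once to a fresh block, commit is a pointer switch).
     * Purely illustrative; nothing here is real filesystem code. */
    #include <stdio.h>
    #include <string.h>

    #define NBLOCKS 16
    #define BLKSIZE 64

    static char disk[NBLOCKS][BLKSIZE];   /* pretend disk            */
    static int  writes;                   /* count block writes made */

    static void block_write(int blk, const char *data)
    {
        strncpy(disk[blk], data, BLKSIZE - 1);
        writes++;
    }

    /* Fixed-journal style: copy into the journal area first, then
     * into the block's home location. */
    static void commit_fixed_journal(int journal_blk, int home_blk,
                                     const char *data)
    {
        block_write(journal_blk, data);   /* first copy: journal        */
        block_write(home_blk, data);      /* second copy: home location */
    }

    /* Wandering-log style: write once to a free block, then "commit"
     * by switching the pointer that names the live block and letting
     * the old block become free space. */
    static void commit_wandering(int *live_blk, int free_blk,
                                 const char *data)
    {
        block_write(free_blk, data);      /* only copy of the data */
        *live_blk = free_blk;             /* pointer switch        */
    }

    int main(void)
    {
        int live = 3;

        commit_fixed_journal(15, 3, "fixed-journal update");
        printf("fixed journal: %d writes\n", writes);           /* 2 */

        writes = 0;
        commit_wandering(&live, 4, "wandering-log update");
        printf("wandering log: %d write(s), live block now %d\n",
               writes, live);                                   /* 1, 4 */
        return 0;
    }

The point of the sketch is that the fixed-journal path touches two on-disk locations per committed block (typically with a seek between the journal area and the home location), while the wandering-log path writes the data once and commits by repointing metadata, which is where the seek savings come from.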
