On Thu, June 3, 2010 13:04, Garrett D'Amore wrote:
> On Thu, 2010-06-03 at 11:49 -0500, David Dyer-Bennet wrote:
>> hot spares in place, but I have the bays reserved for that use.
>>
>> In the latest upgrade, I added 4 2.5" hot-swap bays (which got the
>> system
>> disks out of the 3.5" hot-swap bays).  I have two free, and that's the
>> form-factor SSDs come in these days, so if I thought it would help I
>> could
>> add an SSD there.  Have to do quite a bit of research to see which uses
>> would actually benefit me, and how much.  It's not obvious that either
>> l2arc or zil on SSD would help my program loading, image file loading,
>> or
>> image file saving cases that much.  There may be more uses than I
>> realize, though.
>
> It really depends on the working sets these programs deal with.
>
> zil is useful primarily when doing lots of writes, especially lots of
> writes to small files or to data scattered throughout a file.  I view it
> as a great solution for database acceleration, and for accelerating the
> filesystems I use for hosting compilation workspaces.  (In retrospect,
> since by definition the results of compilation are reproducible, maybe I
> should just turn off synchronous writes for build workspaces... provided
> that they do not contain any modifications to the sources themselves.
> I'm going to have to play with this.)

I suspect there are more cases here than I immediately think of.  For
example, sitting here thinking, I wonder if the web cache would benefit a
lot?  And all those email files?

RAW files from my camera are 12-15MB, and the resulting Photoshop files
are around 50MB (depending on compression, and they get bigger fast if I
add layers).  Those aren't small, and I don't read the same files over and
over much.

Build spaces definitely should be reproducible from source.  A
classic production build starts with checking out a tagged version from
source control, and builds from there.
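If Garrett's idea of skipping synchronous writes for build spaces pans out, it might be as simple as a per-dataset property.  (A sketch, assuming a ZFS build recent enough to have the per-dataset "sync" property; the dataset name is a made-up example.)

```shell
# Disable synchronous writes for a build-workspace filesystem only;
# the rest of the pool keeps normal ZIL behavior.
# "tank/build" is a hypothetical dataset name.
zfs set sync=disabled tank/build

# Check what got set
zfs get sync tank/build
```

Older builds without that property only had the pool-wide zil_disable tunable, which is much blunter.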

> l2arc is useful for data that is read back frequently but is too large
> to fit in buffer cache.  I can imagine that it would be useful for
> hosting storage associated with lots of programs that are called
> frequently. You can think of it as a logical extension of the buffer
> cache in this regard... if your working set doesn't fit in RAM, then
> l2arc can prevent going back to rotating media.

I don't think I'm going to benefit much from this.
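For reference, though, if I did want to try it, attaching one of those 2.5" SSDs as an L2ARC device looks like a one-liner.  (Pool and device names here are hypothetical.)

```shell
# Add an SSD as a cache (L2ARC) device to an existing pool;
# "tank" and "c2t0d0" are hypothetical names.
zpool add tank cache c2t0d0

# The device then shows up under a "cache" section in the status output
zpool status tank
```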

> All other things being equal, I'd increase RAM before I'd worry too much
> about l2arc.  The exception to that would be if I knew I had working
> sets that couldn't possibly fit in RAM... 160GB of SSD is a *lot*
> cheaper than 160GB of RAM. :-)

I just did increase RAM, same upgrade as the 2.5" bays and the additional
controller and the third mirrored vdev.  I increased it all the way to
4GB!  And I can't feasibly increase it further (4GB sticks of ECC RAM are
hard to find and extremely pricey, and I'd have to displace some of my
existing memory).

Since this is a 2006 system, in another couple of years it'll be time to
replace the motherboard, processor, and memory, and I'm sure it'll have a
lot more memory next time.

I'm desperately waiting for Solaris 2010.$Q2 ("Q2" since it was pointed
out last time that "Spring" was wrong on half the Earth), since I hope it
will resolve my backup problems so I can get incremental backups happening
nightly (the intention is to use zfs send/receive with incremental
replication streams, to keep external drives up-to-date with data and all
snapshots).  The age of the system, and especially of the drives, makes
this more urgent, though of course it's important in general.  I do manage
a full backup that completes now and then, and they'll finish overnight if
they don't hang.  The problem is, when they hang, I have to reboot the
Solaris box and every Windows box using it.
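The plan, roughly sketched: an initial full replication stream, then nightly incrementals between recursive snapshots.  (Pool and snapshot names are hypothetical placeholders; "backup" would be the pool on the external drive.)

```shell
# Initial full replication: send the pool's filesystems and all
# snapshots to the external drive.  Names are hypothetical.
zfs snapshot -r tank@backup1
zfs send -R tank@backup1 | zfs receive -F backup/tank

# Nightly thereafter: an incremental replication stream between
# the previous snapshot and a fresh one.
zfs snapshot -r tank@backup2
zfs send -R -i @backup1 tank@backup2 | zfs receive -F backup/tank
```

The -R flag makes it a replication stream (descendant filesystems, properties, and intermediate snapshots come along), which is what keeps the external copy a full mirror of data plus snapshots.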

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
