On 6/5/2010 1:30 PM, zfsnoob4 wrote:
I was talking about a write cache (slog/ZIL, I suppose). This is just a media 
server for home. The idea is that when I copy an HD video from my camera to the 
network drive it is always several GBs. So if it could copy the file to the SSD 
first and then have it slowly copy to the normal HDs, that would be very good, 
since it would essentially saturate the GigE network.

Having a read cache for me is not useful, because all the files are huge and 
you can't really predict which one someone will watch.

I guess it's not that useful to begin with, and without TRIM the write 
performance will start to drop off anyway.

Thanks for the clarification.

Now, wait a minute. Are you copying the data from the camera to a local hard drive first, then to a network drive? If that's the scenario, then you'll likely be bottlenecked on the speed of your camera->PC connection, which is probably 400Mb/s or 800Mb/s FireWire. 3-4 hard drives can easily keep up with a large sequential write such as that. In this case, a local SSD isn't going to be any faster than a hard drive - for large-file sequential writes, there's no real difference in speed between an SSD and a HD.

If you're copying the data from your camera straight to a network-mounted drive, then Gigabit Ethernet is your bottleneck, and there's no real benefit to an SSD on the server side at all - even a single HD should be able to keep up with Gigabit speeds for a sequential write. Having somewhere local to copy the data to first doesn't really buy you anything - you still have to push it through the GigE bottleneck.
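
To put rough numbers on that: those interface speeds are quoted in megabits per second, so divide by 8 to get the raw MB/s ceiling. Here's a quick back-of-the-envelope sketch - the ~100 MB/s single-drive sequential-write figure is an assumption for a typical 7200 RPM disk, not something I've measured:

# Rough unit conversion for the interface speeds mentioned above.
# Line rates are nominal megabits per second; dividing by 8 gives the
# raw megabytes-per-second ceiling, ignoring protocol overhead
# (real-world GigE usually lands around 110-115 MB/s).

nominal_mbit = {
    "FireWire 400": 400,
    "FireWire 800": 800,
    "Gigabit Ethernet": 1000,
}

# Assumed sequential-write rate for a single 7200 RPM drive;
# an illustrative figure, not a measurement.
hdd_seq_write_mb_s = 100

for name, mbit in nominal_mbit.items():
    ceiling_mb_s = mbit / 8
    print(f"{name}: ~{ceiling_mb_s:.0f} MB/s ceiling "
          f"(single HDD sequential write ~{hdd_seq_write_mb_s} MB/s)")

In other words, even one drive's sequential write rate is in the same ballpark as the fastest of those links, so an SSD landing zone doesn't widen the pipe.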

And, ZFS isn't a network file system - it's not going to be able to cache something on the client side. That's up to the client.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
