On Tue, Jul 21, 2009 at 4:20 PM, chris no-re...@opensolaris.org wrote:
Thanks for your reply.
What if I wrap the ram in a sheet of lead?;-)
(hopefully the lead itself won't be radioactive)
I found these four AM3 motherboards with optional ECC memory support. I don't
know whether this means
The i7 and Xeon 3300 m/b that say they have ECC support have exactly this
problem as well.
On Wed, Jul 22, 2009 at 4:53 PM, Nicholas Lee emptysa...@gmail.com wrote:
On Tue, Jul 21, 2009 at 4:20 PM, chris no-re...@opensolaris.org wrote:
Thanks for your reply.
What if I wrap the ram
What is your NFS window size? 32kb * 120 * 7 should get you 25MB/s. Have you
considered getting an Intel X25-E? Going from 840 sync nfs iops to 3-5k+
iops is not overkill for an SSD slog device.
In fact, it would probably be cheaper to have one or two fewer vdevs and a
single slog device.
Nicholas
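(For anyone following along, attaching a mirrored slog is a one-liner; a sketch, with placeholder device names for the two SSDs:)

    # attach a mirrored pair of SSDs as a separate log (slog) device
    zpool add tank log mirror c2t0d0 c2t1d0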
On Tue, Jul
On Fri, Jun 26, 2009 at 4:11 AM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
True. In $ per sequential GB/s, rotating rust still wins by far.
However, your comment about all flash being slower than rotating at
sequential writes was mistaken. Even at 10x the price, if you're
working with
On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson
ander...@ligo.caltech.edu wrote:
However, it is a bit disconcerting to have to run with reduced data
protection for an entire week. While I am certainly not going back to
UFS, it seems like it should be at least theoretically possible to do this
With XenServer 4 and NFS you had to grow the disks (modified manually from
thin to fat) in order to get decent performance.
On Fri, Jun 19, 2009 at 7:06 AM, lawrence ho no-re...@opensolaris.org wrote:
We have a 7110 on a try-and-buy program.
We tried using the 7110 with XEN Server 5 over iSCSI
IDE flash DOM?
On Tue, Jun 2, 2009 at 8:46 AM, Ray Van Dolson rvandol...@esri.com wrote:
Obviously we could throw in a couple smaller drives internally, or
elsewhere... but are there any other options here?
Not sure if this is a wacky question. Given that a slog device does not really
need much more than 10 GB: if I were to use a pair of X25-Es (or STEC devices
or whatever) in a mirror as the boot device and then either 1. created a
loopback file vdev or 2. a separate mirrored slice for the slog, would this
Does Solaris flush a slog device before it powers down? If so, removal
during a shutdown cycle wouldn't lose any data.
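For the slice variant mentioned above, the attach step would presumably be something like this (pool, device and slice names are made up):

    # use a small mirrored slice on each boot SSD as the slog
    zpool add tank log mirror c1t0d0s4 c1t1d0s4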
On Wed, May 20, 2009 at 7:57 AM, Dave dave-...@dubkat.com wrote:
If you don't have mirrored slogs and the slog fails, you may lose any data
that was in a txg group waiting
So the txg is synced to the slog device but retained in memory, and then,
rather than being read back from the slog into memory, it is copied to the
pool from memory?
With the txg being the working set of the active commit, it might be a set of
NFS iops?
On Wed, May 20, 2009 at 3:43 PM, Eric
I guess this also means the relative value of a slog is also limited by the
amount of memory that can be allocated to the txg.
On Wed, May 20, 2009 at 4:03 PM, Eric Schrock eric.schr...@sun.com wrote:
Yes, that is correct. It is best to think of the ZIL and the txg sync
process as orthogonal
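If I've understood Eric correctly, the flow is roughly:

    sync write  -> ZIL record on the slog + dirty data held in memory (txg)
    txg commit  -> dirty data written to the pool straight from memory
    slog        -> only read back during crash recovery, never in normal operation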
I've gotten Nexenta installed onto a USB stick on an SS4200-E. Getting it
installed required a PCI-E flex adapter. If you can reconfigure EON for boot
from a USB stick and a serial console it might be possible. I've got two
SS4200s and I might try EON on the second.
Nicholas
On Mon, Apr 20, 2009 at 8:39 PM,
2009/4/14 Miles Nordin car...@ivy.net
well that's not what I meant though. The battery RAM cache's behavior
can't be determined by RTFS whether you use ZFS or not, and the
behavior matters to both ZFS users and non ZFS users. The advantage I
saw to ZFS slogs, is that you can inspect the
On Tue, Apr 14, 2009 at 5:57 AM, Will Murnane will.murn...@gmail.com wrote:
Has anyone done any specific testing with SSD devices and solaris other
than
the FISHWORKS stuff? Which is better for what - SLC and MLC?
My impression is that the flash controllers make a much bigger
difference
On Thu, Apr 16, 2009 at 11:28 AM, Richard Elling
richard.ell...@gmail.com wrote:
As for space, 18GBytes is much, much larger than 99.9+% of workloads
require for slog space. Most measurements I've seen indicate that 100
MBytes
will be quite satisfactory for most folks. Unfortunately, there
On Thu, Apr 16, 2009 at 12:11 PM, Nicholas Lee emptysa...@gmail.com wrote:
Let me see if I understand this: an SSD slog can handle, say, 5000 (4k)
transactions per second (20MB/s) vs maybe 300 (4k) iops for a single HDD. The
slog can then batch and dump, say, 30s worth of transactions - 600MB.
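Back-of-envelope check of those numbers:

    5000 iops x 4 KB = 20 MB/s of sync writes
    20 MB/s x 30 s   = 600 MB per batch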
On Sun, Apr 12, 2009 at 7:24 PM, Miles Nordin car...@ivy.net wrote:
nl Supermicro have several LSI controllers. AOC-USASLP-L8i with
nl the LSI 1068E
That's what I'm using. It uses the proprietary mpt driver.
nl and AOC-USASLP-H8iR with the LSI 1078.
I'm not using this.
The standard controller that has been recommended in the past is the
AOC-SAT2-MV8 - an 8-port card with a Marvell chipset. There have been several
mentions of LSI-based controllers on the mailing lists and I'm wondering
about them.
One obvious difference is that the Marvell controller is PCI-X and the
Forgot to include links. See below.
Thanks.
On Sat, Apr 11, 2009 at 8:35 PM, Nicholas Lee emptysa...@gmail.com wrote:
Supermicro have several LSI controllers. AOC-USASLP-L8i with the LSI 1068E
and AOC-USASLP-H8iR with the LSI 1078.
http://www.supermicro.com/products/accessories/addon/AOC
On Mon, Feb 23, 2009 at 11:33 AM, Blake blake.ir...@gmail.com wrote:
I think that's legitimate so long as you don't change ZFS versions.
Personally, I'm more comfortable doing a 'zfs send | zfs recv' than I
am storing the send stream itself. The problem I have with the stream
is that I may
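For reference, the pattern I mean is just the following (dataset names invented):

    # replicate the snapshot instead of archiving the raw stream
    zfs snapshot tank/data@backup
    zfs send tank/data@backup | zfs recv backup/data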
A useful article about long term use of the Intel SSD X25-M:
http://www.pcper.com/article.php?aid=669 - Long-term performance analysis
of Intel Mainstream SSDs.
Would a zfs cache (ZIL or ARC) based on an SSD device see this kind of issue?
Maybe a periodic scrub via a full disk erase would be a
On Fri, Feb 6, 2009 at 11:29 AM, Richard Elling richard.ell...@gmail.com
wrote:
Seriously, is it so complicated that a best practice page is needed?
While you might be right about that, I think there is a need for a good
shared experiences site, howtos, etc.
For example, I want to put a new
...@gmail.com wrote:
Nicholas Lee wrote:
On Fri, Feb 6, 2009 at 11:29 AM, Richard Elling richard.ell...@gmail.com wrote:
Seriously, is it so complicated that a best practice page is needed?
While you might be right about that, I think there is a need for a good
Not sure where is best to put something like this.
There are wikis like
http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
http://wiki.genunix.org/wiki/index.php/WhiteBox_ZFSStorageServer
But I haven't seen anything which has an active community like
Is it possible for someone to put up a wiki page somewhere with the various
SSD, ZIL and L2ARC options, with pros, cons and benchmarks?
Especially with notes like the below.
Given this is a key area of interest for zfs at the moment, seems like it
would be a useful resource.
On Wed, Feb 4, 2009 at
Another option to look at is:
set zfs:zfs_nocacheflush=1
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
Best option is to get a fast ZIL log device.
Depends on your pool as well. NFS+ZFS means zfs will wait for writes to
complete before responding to sync NFS write ops.
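If you do go the tuning route, the Evil Tuning Guide approach is an /etc/system entry - only sane when the storage has a nonvolatile (battery-backed) write cache:

    * /etc/system: disable ZFS cache flushes
    * ONLY safe with a nonvolatile write cache
    set zfs:zfs_nocacheflush=1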
I've got mine sitting on the floor at the moment. Need to find the time to
try out the install.
Do you know why it would not work with the DOM? I'm planning to use a spare
4GB DOM and keep the EMC one for backup if nothing works.
Did you use a video card to install?
On Fri, Jan 9, 2009 at 10:46
Since zfs is so smart in other areas, is there a particular reason why a high
water mark is not calculated and the available space not reset to this?
I'd far rather have a zpool of 1000GB that said it only had 900GB but did
not have corruption as it ran out of space.
Nicholas
Has anyone tried running zfs on the Intel SS4200-E [1],[2]?
Doesn't have a video port, but you could replace the IDE flash DOM with a
pre-installed system.
I'm interested in this as a four-disk, smallish (34x41x12) portable ZFS
appliance.
Seems that people have got it running with
On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling [EMAIL PROTECTED] wrote:
In short, separate logs with rotating rust may reduce sync write latency by
perhaps 2-10x on an otherwise busy system. Using write optimized SSDs
will reduce sync write latency by perhaps 10x in all cases. This is one of
On 4/19/07, Adam Lindsay [EMAIL PROTECTED] wrote:
16x hot swap SATAII hard drives (plus an internal boot drive)
Tyan S2895 (K8WE) motherboard
Dual GigE (integral nVidia ports)
2x Areca 8-port PCIe (8-lane) RAID controllers
2x AMD Opteron 275 CPUs (2.2GHz, dual core)
8 GiB RAM
The supplier is used
On 4/13/07, Eric Schrock [EMAIL PROTECTED] wrote:
You want:
6421958 want recursive zfs send ('zfs send -r')
Which is actively being worked on.
Exactly. :D
Perhaps they all have to have the same snapnames (which will be easier with
'zfs snapshot -r').
Maybe just assume that anyone who
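Once that integrates, the workflow presumably becomes something like this (dataset names made up; the final flag spelling may differ from the RFE):

    # snapshot the whole tree atomically, then send it as one stream
    zfs snapshot -r tank@nightly
    zfs send -R tank@nightly | zfs recv -d backup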
On 3/30/07, Wee Yeh Tan [EMAIL PROTECTED] wrote:
Careful consideration of the layout of your file
system applies regardless of which type of file system it is (zfs,
ufs, etc.).
True. ZFS does open up a whole new can of worms/flexibility.
How do hard-links work across zfs
On 3/30/07, Shawn Walker [EMAIL PROTECTED] wrote:
Maybe, but they're far better at doing versioning and providing a
history of changes.
I'd have to agree. I track 6000 blobs (OOo gzip files, pdfs and other stuff)
in svn; even with 1300 changesets over 3 years there is a marginal disk cost
on
On 3/30/07, Atul Vidwansa [EMAIL PROTECTED] wrote:
Lets say I reorganized my zpools. Now there are 2 pools:
Pool1:
Production data, combination of binary and text files. Only few files
change at a time. Average file sizes are around 1MB. Does it make
sense to take zfs snapshots of the pool?
On 3/29/07, Robert Milkowski [EMAIL PROTECTED] wrote:
1. Instructions for Manual set up:
http://fs.central/projects/zfsboot/zfsboot_manual_setup.html
2. Instructions for Netinstall set up:
http://fs.central/projects/zfsboot/how_to_netinstall_zfsboot
I think those documents should be
On 3/29/07, Malachi de Ælfweald [EMAIL PROTECTED] wrote:
Could I get your opinion then? I have just downloaded and burnt the b60
ISO. I was just getting ready to follow Tabriz and Tim's instructions from
last year in order to get the ZFS root boot. Seeing the Heads Up, it says
that the old
On 3/29/07, Robert Milkowski [EMAIL PROTECTED] wrote:
BFU - just for testing I guess. I would rather propose waiting for SXCE
b62.
Is there a release date for this? I note that the install iso for b60 seems
to have only been released in the last week.
Nicholas
On 3/23/07, John-Paul Drawneek [EMAIL PROTECTED] wrote:
I've got the same consideration at the moment.
Should I do a 9-disk raidz2 with a spare, or could I do two raidz2 vdevs to
get a bit of performance?
Only done tests with striped mirrors, which seem to give a boost, so is it
worth it with a
On 3/23/07, John-Paul Drawneek [EMAIL PROTECTED] wrote:
Can I do a raidz2 over 5 and a raidz2 over 4 with a spare for them all?
Or two raidz2 over 4 with 2 spares?
This is a question I was planning to ask as well.
Does zfs allow a hot spare to be allocated to multiple pools or as a system
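For what it's worth, the 5+4 layout with a spare would be expressed roughly like this (device names are placeholders); as far as I know the same disk can also be added as a spare to a second pool:

    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
        spare c3t0d0
    # spares can be shared between pools on the same host
    zpool add tank2 spare c3t0d0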
Has anyone run Solaris on one of these:
http://acmemicro.com/estore/merchant.ihtml?pid=4014&step=4
2U with 12 hotswap SATA disks. Supermicro motherboard; would have to add a
second Supermicro SATA2 controller to cover all the disks, since the onboard
Intel controller can only handle 6.
Nicholas
Just installed Nexenta and I've been playing around with zfs.
[EMAIL PROTECTED]:/tank# uname -a
SunOS hzsilo 5.11 NexentaOS_20070105 i86pc i386 i86pc Solaris
[EMAIL PROTECTED]:/tank# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
home  89.5K   219G
On 2/25/07, Ian Collins [EMAIL PROTECTED] wrote:
Is the Gigabyte SATA2 controller recognised by Solaris?
Nexenta v6 seems to work. Based on the Nforce 55 chipset, I believe. I
assume OpenSolaris will work, since Nexenta is based on it.
I couldn't tell you if NCQ works, as Solaris is pretty new
Note also I have the BIOS set to AHCI mode for the SATA controllers, not
IDE.
Nicholas
On 2/22/07, Gregory Shaw [EMAIL PROTECTED] wrote:
I was thinking of something similar to a scrub. An ongoing process
seemed too intrusive. I'd envisioned a cron job similar to a scrub (or
defrag) that could be run periodically to show any differences in disk
performance over time.
On 2/20/07, Jason J. W. Williams [EMAIL PROTECTED] wrote:
Ah. We looked at them for some Windows DR. They do have a nice product.
Just waiting for them to get iSCSI and VLAN support. Supposedly sometime in
the next couple of months. Combined with zfs/iscsi it will make a very nice
small data
On 2/19/07, Robert Milkowski [EMAIL PROTECTED] wrote:
5. there's no simple answer to this question as it greatly depends on
workload and data.
One thing you should keep in mind - Solaris *has* to boot in 64-bit mode if
you want to use all that memory as a cache for zfs, so old x86 32-bit
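A quick way to confirm which mode the kernel booted in:

    # prints e.g. "64-bit amd64 kernel modules" on a 64-bit boot
    isainfo -kv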
Is there a best practice guide for using zfs as a basic rackable small
storage solution?
I'm considering zfs with a 2U 12 disk Xeon based server system vs
something like a second hand FAS250.
Target environment is a mixture of Xen or VI hosts via iSCSI and nfs/cifs.
Being able to take snapshots
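On the OpenSolaris builds of that era, exporting to the Xen/VI hosts is mostly a matter of properties (dataset names invented here):

    # a zvol exported over iSCSI for VM disks
    zfs create -V 200g tank/xenlun
    zfs set shareiscsi=on tank/xenlun
    # a filesystem exported over NFS
    zfs set sharenfs=on tank/vmstore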