On Thu, Jul 24, 2008 at 1:28 AM, Steve [EMAIL PROTECTED] wrote:
Booting from CF is interesting, but it seems it is possible to boot from the raidz, and I would go for that!
It's not possible to boot from a raidz volume yet. You can only boot
from a single drive or a mirror.
-B
--
Brandon High
exporting the individual drives and using zfs to handle
the mirroring? It might have better performance in your situation.
to the motherboard.
http://blog.flowbuzz.com/search/label/NAS
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs
-8087) to (4) x1 Serial ATA
(controller based) fan-out cable with SFF-8448 sideband signals.
second peak due to fsflush invocation.
However, each peak is about 5 ms. Our application cannot recover from such high latency.
Is the pool using raidz, raidz2, or mirroring? How many drives are you using?
of ECC should do it. I believe all the AMD CPUs support
ECC, but you should verify this before buying.
of the LSI, which would give
me exactly 8 SATA ports and save about $250. I may still go this route
but given the overall cost it's not that big of a deal.
seriously doubt it will happen
with new drives.
My new workstation in the office had its (sole) 400GB drive die after
about 2 months. It does happen, and drives from the same production
lot share failure characteristics.
, who are the initial target for ZFS.
Most enterprise users would just attach a new drive tray and add that
as another raid-z to the zpool.
That being said, there is an RFE for expanding the width of a raidz:
http://bugs.opensolaris.org/view_bug.do?bug_id=6718209
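As a sketch of the enterprise approach described above — the pool name `tank` and the device names are hypothetical:

```shell
# Add a new tray of disks as a second raidz vdev; subsequent writes
# will stripe across both vdevs.
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Verify the new vdev appears alongside the original one.
zpool status tank
```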
?
How do you manage redundancy (e.g. mirror) for that boot device?
4GB is enough to hold a minimal system install. /var will go to a file
system on the raidz pool.
ZFS mirroring can be used on boot devices for redundancy.
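A minimal sketch of that, assuming a root pool named `rpool` and hypothetical device names; on x86 the new mirror side also needs boot blocks (SPARC uses installboot instead):

```shell
# Attach a second disk to the root pool, turning it into a mirror.
zpool attach rpool c0t0d0s0 c0t1d0s0

# Install GRUB boot blocks on the new mirror side so either disk can boot.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```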
It looks like the LSI SAS3081E-R, but probably at 1/2 the cost.
is that if this enclosure can assume 0 and the other assume 1, will
the zfs pool come up that way?
Are you doing a zpool export / zpool import between taking the enclosures
down and bringing them back up?
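The clean hand-off would look like this — the pool name `tank` is hypothetical:

```shell
# Before taking the enclosure down: cleanly detach the pool from the host.
zpool export tank

# After reconnecting (device paths may have changed): rescan and import.
zpool import tank
```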
It also looks like they are not identical drives.
, but it could explain things a little.
How much of the memory is in use, and how much of that is used by the ARC cache?
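One way to answer that on Solaris, assuming the standard ZFS kstats are available:

```shell
# Current ARC size and its configured ceiling, in bytes.
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max

# Overall kernel/user memory breakdown (requires root).
echo ::memstat | mdb -k
```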
?Item=N82E16813128335
On Mon, Jun 9, 2008 at 10:44 PM, Robert Thurlow [EMAIL PROTECTED] wrote:
Brandon High wrote:
AFAIK, you're doing the best that you can within the constraints of
ZFS. If you want to use NFSv3 with your clients, you'll need to use
UFS as the back end.
Just a clarification: NFSv3
memtest, swapping the
memory for known good (preferably ECC) memory is one option to
diagnose it.
for my Windows gaming
system and after trying to get my 1066 memory to run stably at speed,
I gave up and run it at 800. You should try reducing the memory speed
and relaxing the timing to 5-5-5-15 to see if it helps.
What about the LSISAS3081E-R? Does it use the same drivers as the
other LSI controllers?
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3081er/index.html
to address the performance problems that can be caused by the
ARC cache. Limiting the cache size can also help, but shouldn't be
needed in recent builds. I'm not sure if the write throttling has been
put back to Solaris 10u5 or if it's scheduled for 10u6 though.
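For reference, capping the ARC is done with a tunable in /etc/system and takes effect after a reboot; the 1 GB value here is just an example:

```
* /etc/system fragment: limit the ZFS ARC to 1 GB.
set zfs:zfs_arc_max = 0x40000000
```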
If you just want to take a shot in the dark and this is the only
filesystem in your zpool, either reduce the size of the ZFS ARC cache
or reduce the size of the UFS cache.
in the fs at /mnt. Provided your shell has large
file support, it should work just fine.
is that Ghost and Drive Snapshot
can create images of known filesystems (NTFS, FAT, ext2/3, reiserfs)
that aren't raw images. zfs send is probably closest to that, except
both of the imaging tools allow you to mount images and browse them.
improved. (I think it was Legato's product running
under Linux, but I'm not certain.)
I can't think of any reason that something like this wouldn't work
with ZFS, though the ACLs may not get saved.
Windows and Linux systems. It
might work for Opensolaris as well. It would create a block level
backup, and the restore might not work on a system which isn't
identical. http://www.drivesnapshot.de/en/
There was some discussion about it recently; I think the reason is
that the GUI for SXDE is not open sourced so it was more
difficult/political to add. The 2008.05 installer should be able to do
it when they sync up to b90 or beyond.
(7050) is $70.
I believe both have only 4 SATA ports, but that should be ok for your
build.
case that came with a PSU and it's been reliable
for 2 years. I believe the case and PSU was about $100.
For my most recent build I looked at Silent PC Review and went with a
Corsair 520W PSU based on their testing.
On Mon, Jun 2, 2008 at 2:17 PM, Scott L. Burson [EMAIL PROTECTED] wrote:
Would still like advice on the 1420SA.
It's been mentioned before. The 1420SA does not work.
on the case.
The same feature can be enabled on WD's consumer SATA drives. Google
for wdtler.zip.
a SiI3132 chip (driver: si3124).
I had hoped to get a system with on board ports, but hadn't found one
with more than 6. Thanks for the pointer!
(and instructions on how to resurrect any pre build 36 streams)
can be found here:
http://opensolaris.org/os/community/on/flag-days/pages/2008042301
Caviar GP WD10EACS 1TB 5400 to 7200 RPM SATA
3.0Gb/s Hard Drive
Subtotal: $2,386.88
I may get another drive for the OS as well, or boot off of a
CF-card/IDE adapter like this one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16812186038
S2881UG2NR at $419.
Call it $960 (with a single 285 cpu) vs. $399 for the AM2 pieces.
I'd check prices on a single socket 939 Opteron with a suitable
motherboard, but neither appear to be available anymore.
into my car's trunk as I
leave work one day, but that's not something I'd consider either.
/6376021.stm
Full results here: http://research.google.com/archive/disk_failures.pdf
are only supported on Linux.
I remember there being an application in the Windows 95/98 timeframe
that did what you want, but I have no idea what it was called, how well
it worked, or if it still exists.
the zpool.
that are actually in use and works with the I/O scheduler, so it
should have a lower impact on performance.
On Wed, Apr 16, 2008 at 3:19 PM, Richard Elling [EMAIL PROTECTED] wrote:
Brandon High wrote:
The stripe size will be across all vdevs that have space. For each
stripe written, more data will land on the empty vdev. Once the
previously existing vdevs fill up, writes will go to the new vdev
if ZFS would have worked for him, but it
sounds like he's a Windows guy.
... and to threadjack, has there been any talk of a Windows ZFS driver?
On Tue, Apr 15, 2008 at 12:12 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Tue, 15 Apr 2008, Brandon High wrote:
I think RAID-Z is different, since the stripe needs to spread across
all devices for protection. I'm not sure how it's done.
My understanding is that RAID-Z is indeed
code to work with SCST. The SCST
project *claims* their code is better. I haven't used either, and it
may very well be a better solution, but I'd recommend testing both to
see.
members and the rest written to the new device.
I did a quick search for references and couldn't find any, so take
this with a grain of salt.
the config doesn't matter, but having
the configuration tied to the filesystem would be nice. You would
inherit a snapshot schedule and retention policy, just like other
filesystem properties.
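ZFS user properties (any property name containing a colon) could carry such a policy today, with an external tool doing the actual snapshotting; the property names and dataset here are made up:

```shell
# Store a hypothetical schedule and retention policy on the filesystem;
# child filesystems inherit them like any other property.
zfs set com.example:snap-schedule=hourly tank/home
zfs set com.example:snap-keep=24 tank/home

# A snapshot tool could read the policy back with:
zfs get -H -o value com.example:snap-schedule tank/home
```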
key and allow this.
project
at macosforge.com so I'm guessing support for v9 isn't right around the
corner.
I'm not sure if it would work, but did you try to do zfs send / zfs
recv? If it's just sending the filesystem data, you may be able to get
around the zpool version problem.
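The experiment would look something like this (host and dataset names hypothetical). Since zfs send serializes filesystem contents rather than the pool layout, it can move data between pools of different zpool versions, as long as the receiving side understands the stream:

```shell
# On the newer system: snapshot the filesystem and stream it to the
# older pool on the Mac over ssh.
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | ssh mac zfs recv olderpool/data
```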
is causing the hiccup on a larger payload, not the RS690 PCIe
controller.
Of course, without more detailed spec on either component this is pure
conjecture but it seems to match the behavior you observed.
is 4096 bytes.
This doesn't explain why increasing max_payload_size over 512 causes a
drop in throughput, but at least you can safely run the card with a
payload greater than 128.
On Thu, Mar 13, 2008 at 1:50 AM, Marc Bevand [EMAIL PROTECTED] wrote:
integrated AHCI controller (SB600 chipset), 2 disks on a 2-port $20 PCI-E 1x
SiI3132 controller, and the 7th disk on a $65 4-port PCI-X SiI3124 controller
Do you have access to a Sil3726 port multiplier? I'd like to see how
On Mon, Mar 17, 2008 at 2:09 PM, Tim [EMAIL PROTECTED] wrote:
On 3/17/08, Brandon High [EMAIL PROTECTED] wrote:
easier to use an external disk box like the CFI 8-drive eSATA tower
than find a reasonable server case that can hold that many drives.
Woah, why would you spend 1600
On Thu, Mar 13, 2008 at 1:50 AM, Marc Bevand [EMAIL PROTECTED] wrote:
PCI-X card...). The rest is also dirt cheap: $65 Asus M2A-VM motherboard,
$60 dual-core Athlon 64 X2 4000+, with 1GB of DDR2 800, and a 400W PSU.
Apologies for the threadjack (um, again) but did you know that the
RS690