On Thu, Mar 13, 2008 at 1:50 AM, Marc Bevand [EMAIL PROTECTED] wrote:
integrated AHCI controller (SB600 chipset), 2 disks on a 2-port $20 PCI-E 1x
SiI3132 controller, and the 7th disk on a $65 4-port PCI-X SiI3124 controller
Do you have access to a Sil3726 port multiplier? I'd like to see how
On Mon, Mar 17, 2008 at 2:09 PM, Tim [EMAIL PROTECTED] wrote:
On 3/17/08, Brandon High [EMAIL PROTECTED] wrote:
easier to use an external disk box like the CFI 8-drive eSATA tower
than find a reasonable server case that can hold that many drives.
Woah, why would you spend 1600
On Thu, Mar 13, 2008 at 1:50 AM, Marc Bevand [EMAIL PROTECTED] wrote:
PCI-X card...). The rest is also dirt cheap: a $65 Asus M2A-VM motherboard, a $60
dual-core Athlon 64 X2 4000+, 1GB of DDR2 800, and a 400W PSU.
Apologies for the threadjack (um, again) but did you know that the
RS690
is 4096 bytes.
This doesn't explain why increasing max_payload_size over 512 causes
a drop in throughput, but at least you can safely run the card with a
payload greater than 128.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best
is causing the hiccup on a larger payload, not the RS690 PCIe
controller.
Of course, without more detailed spec on either component this is pure
conjecture but it seems to match the behavior you observed.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
project
at macosforge.com so I'm guessing support for v9 isn't right around the
corner.
I'm not sure if it would work, but did you try to do zfs send / zfs
recv? If it's just sending the filesystem data, you may be able to get
around the zpool version problem.
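In case it helps, a minimal sketch of that approach (pool and filesystem names are placeholders):

```shell
# Snapshot the source filesystem, then replicate it into the other pool.
# zfs send moves the filesystem data, not the pool metadata, so it may
# sidestep a zpool version mismatch that blocks a direct import.
zfs snapshot oldpool/data@migrate
zfs send oldpool/data@migrate | zfs recv newpool/data

# Across hosts, pipe the stream over ssh instead:
zfs send oldpool/data@migrate | ssh otherhost zfs recv newpool/data
```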
-B
--
Brandon High [EMAIL
key and allow this.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
the config doesn't matter, but having
the configuration tied to the filesystem would be nice. You would
inherit a snapshot schedule and retention policy, just like other
filesystem properties.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
code to work with SCST. The SCST
project *claims* their code is better. I haven't used either, and it
may very well be a better solution, but I'd recommend testing both to
see.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
members and the rest written to the new device.
I did a quick search for references and couldn't find any, so take this
with a grain of salt.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
if ZFS would have worked for him, but it
sounds like he's a Windows guy.
... and to threadjack, has there been any talk of a Windows ZFS driver?
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
On Tue, Apr 15, 2008 at 12:12 PM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
On Tue, 15 Apr 2008, Brandon High wrote:
I think RAID-Z is different, since the stripe needs to spread across
all devices for protection. I'm not sure how it's done.
My understanding is that RAID-Z is indeed
that are actually in use and works with the i/o
scheduler, so should have a lower impact on performance.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
On Wed, Apr 16, 2008 at 3:19 PM, Richard Elling [EMAIL PROTECTED] wrote:
Brandon High wrote:
The stripe size will be across all vdevs that have space. For each
stripe written, more data will land on the empty vdev. Once the
previously existing vdevs fill up, writes will go to the new vdev
the zpool.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
are only supported on Linux.
I remember there being an application in the Windows 95/98 timeframe
that did what you want, but I have no idea what it was called, how well
it worked, or if it still exists.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
/6376021.stm
Full results here: http://research.google.com/archive/disk_failures.pdf
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
(and instructions on how to resurrect any pre build 36 streams)
can be found here:
http://opensolaris.org/os/community/on/flag-days/pages/2008042301
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
Caviar GP WD10EACS 1TB 5400 to 7200 RPM SATA
3.0Gb/s Hard Drive
Subtotal: $2,386.88
I may get another drive for the OS as well, or boot off of a
CF-card/IDE adapter like this one:
http://www.newegg.com/Product/Product.aspx?Item=N82E16812186038
-B
--
Brandon High [EMAIL PROTECTED
S2881UG2NR at $419.
Call it $960 (with a single 285 cpu) vs. $399 for the AM2 pieces.
I'd check prices on a single socket 939 Opteron with a suitable
motherboard, but neither appear to be available anymore.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
into my car's trunk as I
leave work one day, but that's not something I'd consider either.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
, the drive's
The same feature can be enabled on WD's consumer SATA drives. Google
for wdtler.zip.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
a
SiI3132 chip (driver: si3124).
I had hoped to get a system with on board ports, but hadn't found one
with more than 6. Thanks for the pointer!
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
(7050) is $70.
I believe both have only 4 SATA ports, but that should be ok for your
build.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
case that came with a PSU and it's been reliable
for 2 years. I believe the case and PSU was about $100.
For my most recent build I looked at Silent PC Review and went with a
Corsair 520W PSU based on their testing.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best
On Mon, Jun 2, 2008 at 2:17 PM, Scott L. Burson [EMAIL PROTECTED] wrote:
Would still like advice on the 1420SA.
It's been mentioned before. The 1420SA does not work.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
on the case.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
.)
There was some discussion about it recently; I think the reason is
that the GUI for SXDE is not open sourced, so it was more
difficult/political to add. The 2008.05 installer should be able to do
it when they sync up to b90 or beyond.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best
Windows and Linux systems. It
might work for Opensolaris as well. It would create a block level
backup, and the restore might not work on a system which isn't
identical. http://www.drivesnapshot.de/en/
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
in the fs at /mnt. Provided your shell has large
file support, it should work just fine.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
is that Ghost and Drive Snapshot
can create images of known filesystems (NTFS, FAT, ext2/3, reiserfs)
that aren't raw images. zfs send is probably closest to that, except
both of the imaging tools allow you to mount images and browse them.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy
improved. (I think it was Legato's product running
under Linux, but I'm not certain.)
I can't think of any reason that something like this wouldn't work
with ZFS, though the ACLs may not get saved.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
to address the performance problems that can be caused by the
ARC cache. Limiting the cache size can also help, but shouldn't be
needed in recent builds. I'm not sure if the write throttling has been
put back to Solaris 10u5 or if it's scheduled for 10u6 though.
-B
--
Brandon High [EMAIL PROTECTED
. If
you just want to take a shot in the dark, and if this is the only
filesystem in your zpool, either reduce the size of the zfs ARC cache,
or reduce the size of the UFS cache.
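If you do want to cap the ARC, the usual knob on Solaris is the `zfs_arc_max` tunable in `/etc/system` (the 1GB value below is only an example; a reboot is required):

```
* Cap the ZFS ARC at 1 GB (value is in bytes).
set zfs:zfs_arc_max = 0x40000000
```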
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
On Mon, Jun 9, 2008 at 10:44 PM, Robert Thurlow [EMAIL PROTECTED] wrote:
Brandon High wrote:
AFAIK, you're doing the best that you can within the
constraints of ZFS. If you want to use NFS v3 with your clients,
you'll need to use UFS as the back end.
Just a clarification: NFSv3
memtest, swapping the
memory for known good (preferably ECC) memory is one option to
diagnose it.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
for my Windows gaming
system and after trying to get my 1066 memory to run stably at speed,
I gave up and run it at 800. You should try reducing the memory speed
and relaxing the timing to 5-5-5-15 to see if it helps.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best
--
What about the LSISAS3081E-R? Does it use the same drivers as the
other LSI controllers?
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3081er/index.html
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
?Item=N82E16813128335
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
, but it could explain things a little.
How much of the memory is in use, and how much of that is used by the ARC cache?
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
It also looks like they are not identical drives.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
is that if this encl can assume 0 and the other assume 1 and
the zfs pool will come up that way?
Are you doing a zpool export / zpool import between taking the enclosures
down and bringing them back up?
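For reference, the usual cycle looks like this (pool name `tank` is a placeholder):

```shell
# Before powering down or moving an enclosure:
zpool export tank

# After reconnecting; import rescans the devices, so it should tolerate
# the disks coming back under different controller numbers:
zpool import tank
```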
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
.
It looks like the LSI SAS3081E-R, but probably at 1/2 the cost.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
?
How do you manage redundancy (e.g. mirror) for that boot device?
4gb is enough to hold a minimal system install. /var will go to a file
system on the raidz pool.
ZFS mirroring can be used on boot devices for redundancy.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best
, who are the initial target for ZFS.
Most enterprise users would just attach a new drive tray and add that
as another raid-z to the zpool.
That being said, there is an RFE for expanding the width of a raidz:
http://bugs.opensolaris.org/view_bug.do?bug_id=6718209
-B
--
Brandon High [EMAIL
second peak due to fsflush invocation.
However each peak is about ~5ms.
Our application can not recover from such higher latency.
Is the pool using raidz, raidz2, or mirroring? How many drives are you using?
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
of ECC should do it. I believe all the AMD CPUs support
ECC, but you should verify this before buying.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
of the LSI, which would give
me exactly 8 SATA ports and save about $250. I may still go this route
but given the overall cost it's not that big of a deal.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
seriously doubt it will happen
with new drives.
My new workstation in the office had its (sole) 400GB drive die after
about 2 months. It does happen. Production lots share failure
characteristics.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
On Thu, Jul 24, 2008 at 1:28 AM, Steve [EMAIL PROTECTED] wrote:
And interesting of booting from CF, but it seems is possible to boot from the
zraid and I would go for it!
It's not possible to boot from a raidz volume yet. You can only boot
from a single drive or a mirror.
-B
--
Brandon High
exporting the individual drives and using zfs to handle
the mirroring? It might have better performance in your situation.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
to the motherboard.
http://blog.flowbuzz.com/search/label/NAS
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
-8087) to (4) x1 Serial ATA
(controller based) fan-out cable with SFF-8448 sideband signals.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
On Fri, Jul 25, 2008 at 9:17 AM, David Collier-Brown [EMAIL PROTECTED] wrote:
And do you really have 4-sided raid 1 mirrors, not 4-wide raid-0 stripes???
Or perhaps 4 RAID1 mirrors concatenated?
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
since people are more
likely to report an error than success.
One of the Sun guys could probably set the record straight.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
that the drivers in Solaris should be relatively stable.
If that's not the case, then I'd think Sun would want to address it.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
disappointed that there is no support for power management on the
K8, which is a bit of a shock since Sun's been selling K8 based
systems for a few years now. The cost of an X3 ($125) and AM2+ mobo
($80) is about the same as an Intel chip ($80) and motherboard ($150)
that supports ECC.
-B
--
Brandon High
the
combination doesn't exist.
The AMD 790GX boards are starting to show up:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813128352
Dual 8x PCIe slots, integrated video and 6 AHCI SATA ports.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
stuff.
Socket 939 has been phased out for 2-3 years now; it's unlikely that new
motherboards will be available.
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
happened on the
Intel side when the 65nm Core 2 came out (E6xxx and Q6xxx), and again
with the 45nm Core 2 (E8xxx and Q8xxx).
-B
--
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche
to know but may be an improvement. I'd recommend an LSI 1068e
based HBA like the Supermicro AOC-USAS-L8i.
You may want to put an Intel NIC into the AMD system, since support
with other ethernet solutions seems spotty at best.
-B
--
Brandon High [EMAIL PROTECTED]
You can't blow things up
in the 2510, etc)?
-B
--
Brandon High [EMAIL PROTECTED]
You can't blow things up with schools and hospitals. -Stephen Dailey
attachments, document header pages and common user files.
-B
--
Brandon High [EMAIL PROTECTED]
I'm not against the police; I'm just afraid of them. -Alfred Hitchcock
source
community as well...
-B
--
Brandon High : [EMAIL PROTECTED]
come up as degraded, since one of its
vdevs is missing.
6. Copy your files onto the zpool.
7. replace the file vdev with the 5th disk.
Like I said, I haven't tried this but it might work. I'd love to hear
if it does.
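A rough sketch of those steps, with placeholder device names and sizes — untested, as noted above:

```shell
# Create a sparse file the same size as the real disks.
mkfile -n 1500g /var/tmp/fakedisk

# Build the raidz across the four real disks plus the file vdev,
# then offline the file so no data is actually written to it.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 /var/tmp/fakedisk
zpool offline tank /var/tmp/fakedisk

# The pool is now degraded; copy your files in, then swap the real
# fifth disk in for the file vdev and let it resilver.
zpool replace tank /var/tmp/fakedisk c1t4d0
```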
-B
--
Brandon High : [EMAIL PROTECTED
connectors. I have a dying drive in the array (hereafter
drive N). Obviously I should replace it. But how?
Use a USB enclosure for the new drive, and do:
zpool replace bad_disk new_disk
You should be able to export the volume and physically replace the
disk at that point.
-B
--
Brandon High : bh
On Tue, Dec 30, 2008 at 12:18 AM, Brandon High bh...@freaks.com wrote:
Use a USB enclosure for the new drive, and do:
zpool replace bad_disk new_disk
You should be able to export the volume and physically replace the
disk at that point.
It was late when I wrote that, so let me clarify a few
://www.siliconmechanics.com/) (or even Dell or HP) is almost
always a better deal on hardware.
-B
--
Brandon High : bh...@freaks.com
expander is different than a SATA port multiplier (PMP). I'm
not sure if the SAS expander is supported, but it might be.
-B
--
Brandon High : bh...@freaks.com
with 8 or more
drive bays may be $300+. This puts the cost below something like the
ReadyNAS or HP home server. While you gain more functionality, it
forces you to handle the build and administration overhead.
-B
--
Brandon High : bh...@freaks.com
On Fri, Jan 16, 2009 at 2:47 AM, Nick Smith nick.sm...@techop.ch wrote:
meh
meh
You should ignore JZ, he seems to just be trolling the list.
-B
--
Brandon High : bh...@freaks.com
, but it is an OpenSolaris problem. The
drivers for Realtek hardware and other consumer NICs are ... not so great.
-B
--
Brandon High : bh...@freaks.com
that there are
problems with the driver (and most likely the hardware).
-B
--
Brandon High : bh...@freaks.com
On Thu, Feb 5, 2009 at 8:15 AM, Karl Rossing ka...@barobinson.com wrote:
Would there be an advantage to using 4GB USB memory sticks on a home
system for zil and l2arc?
Probably not. Most USB devices are slower than SATA disks.
-B
--
Brandon High : bh...@freaks.com
something akin to T10 DIF (which others mentioned) would fit the
bill. You could also tunnel the traffic over a transit layer such as
TLS or SSH that provides a measure of validation. Latency should be
fun to deal with however.
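For the SSH option, the simplest form is a local port forward (host names and the iSCSI port are illustrative):

```shell
# Forward local port 3260 to the target through an encrypted,
# integrity-checked SSH channel, then point the initiator at localhost.
ssh -N -L 3260:target.example.com:3260 gateway.example.com
```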
-B
--
Brandon High : bh...@freaks.com
to have high
bandwidth and low latency access to memory. The IBM POWER 6 has on-die
memory controllers as well, which is less likely to be due to any
market pressure caused by AMD since the two firms' products don't
directly compete. It's just a reasonable engineering decision.
-B
--
Brandon
On Thu, Feb 26, 2009 at 8:35 AM, Blake blake.ir...@gmail.com wrote:
A big issue with running a VM is that ZFS prefers direct access to storage.
VMWare can give VMs direct access to the actual disks. This should
avoid the overhead of using virtual disks.
-B
--
Brandon High : bh...@freaks.com
not be in the
data path at all. You could take the drive out and install them in a
new machine and it would look just like a native disk.
-B
--
Brandon High : bh...@freaks.com
The keynote was given on Wednesday. Any more willingness to discuss
dedup on the list now?
-B
--
Brandon High : bh...@freaks.com
the chip set lies about 64 bit
support.
I'm not sure where I read that, however, so you should verify on your own.
-B
--
Brandon High : bh...@freaks.com
off starting out with 6 drives. If you
can't buy them all now, use 3x1.5 in a raidz and add another 3x1.5
raidz to your pool later.
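That growth path is straightforward (device names are placeholders):

```shell
# Start with a single 3-disk raidz ...
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

# ... and later stripe a second 3-disk raidz into the same pool.
zpool add tank raidz c1t3d0 c1t4d0 c1t5d0
```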
-B
--
Brandon High : bh...@freaks.com
a
filesystem that should have been created with utf8only and
normalization enabled, but ... well wasn't.
These can only be changed when the fs is created, so do I need to
create a new fs and rsync, losing all my snapshots, or will a send |
recv pipe help?
-B
--
Brandon High : bh...@freaks.com
port multiplier
When is this going to show up in the repo at
http://pkg.opensolaris.org/dev/ ? Is it already there?
Sorry if it's a dumb question, but I'm not sure where to look so the
release process is a bit opaque to me.
-B
--
Brandon High : bh...@freaks.com
,
which gives surprisingly good performance. There is a 4-bay
version available but lack of SATA ports on the motherboard kept me
from using it.
http://www.cooldrives.com/siseata5pomu.html
http://www.newegg.com/Product/Product.aspx?Item=N82E16811123122
-B
--
Brandon High : bh...@freaks.com
board from ASUS has a BIOS option to scrub memory, outside of
the OS. Check that?
-B
--
Brandon High : bh...@freaks.com
the
Windows side these folders show that HOMESERVER\Kids group has full control.
I think the CIFS password and group files are in /var/smb/smbpasswd
and /var/smb/smbgroup.db . The latter is a SQLite 2 database that you
can view with /lib/svc/bin/sqlite
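Assuming those paths are right for your build, you can poke at the group database interactively:

```shell
# Open the CIFS group database with the bundled SQLite 2 binary,
# then list its tables at the sqlite> prompt with .tables
/lib/svc/bin/sqlite /var/smb/smbgroup.db
```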
-B
--
Brandon High : bh...@freaks.com
to match the uid on some existing CentOS systems. Maybe a hook to PAM
would work, but you'd have to make the passwd and group file changes
via a PAM interface.
-B
--
Brandon High : bh...@freaks.com
If it wasn't for pacifists, we could achieve peace.
but
never got around to it. Netcell is defunct or got bought out, so the
controller is no longer available.
-B
--
Brandon High : bh...@freaks.com
Always try to do things in chronological order; it's less confusing that way.
5 x 3.5 drives. This
doesn't leave space for an optical drive, but I used a USB drive to
install the OS and don't need it anymore.
-B
--
Brandon High : bh...@freaks.com
If it wasn't for pacifists, we could achieve peace.
enterprise users could find an
application for nearline storage where available space trumps
performance.
-B
--
Brandon High : bh...@freaks.com
Always try to do things in chronological order; it's less confusing that way.
and are
pretty nice at times. I think that's what Edward is looking for.
-B
--
Brandon High : bh...@freaks.com
If violence doesn't solve your problem, you're not using enough of it.
in the zpool, you should be
able to move the data to other vdevs and shrink the degraded one.
Unless bprewrite doesn't allow data to move between vdevs, that is.
-B
--
Brandon High : bh...@freaks.com
).
Linux's NFS v4 (especially the one in CentOS 5.3, which is a little
older) is not a complete implementation. It might be worth seeing if
NFS v3 has better performance.
-B
--
Brandon High : bh...@freaks.com
If violence doesn't solve your problem, you're not using enough
which provides 133MB/s at half duplex. This is much less than the
full bandwidth from the number of drives you have
on the AOC card.
Getting a mobo with a PCI-X slot, getting a PCIe controller, or
leaving as many drives as you can on the ICH will help performance.
-B
--
Brandon High : bh
stripe will improve read performance
for raidz. Writes will generally be limited to the throughput of your
slowest device. On average, writes will still be faster than
RAID5/6, since there is no read / re-write penalty for partial writes.
-B
--
Brandon High : bh...@freaks.com
If violence doesn't
replaced the partitions with real devices, you'd have less
protection than raidz2 would normally afford. You'd still be better
off replacing the 500GB drives and adding additional drives now and
avoid migration and rebuilds later.
-B
--
Brandon High : bh...@freaks.com
1 - 100 of 455 matches