Is RFE 4852783 (need for an equivalent to LVM2's pvmove) likely to
happen within the next year?
My use-case is home user. I have 16 disks spinning, two towers of
eight disks each, exporting some of them as iSCSI targets. Four disks
are 1TB disks already in ZFS mirrors, and 12 disks are 180 -
ph == Peter Hawkins [EMAIL PROTECTED] writes:
ph Tried zpool replace. Unfortunately that takes me back into the
ph cycle where as soon as the resilver starts the system hangs,
ph not even CAPS Lock works. When I reset the system I have about
ph a 10 second window to detach the
jb == Jeff Bonwick [EMAIL PROTECTED] writes:
jb If you say 'zpool online pool disk' that should tell ZFS
jb that the disk is healthy again and automatically kick off a
jb resilver.
jb Of course, that should have happened automatically.
with b71 I find that it does sometimes
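For reference, the sequence Jeff describes looks like this; pool and device names here are placeholders, not from the thread:

```shell
# Tell ZFS the disk is healthy again; this should automatically
# kick off a resilver (names are hypothetical examples):
zpool online tank c1t2d0
# Watch for "resilver in progress" in the status output:
zpool status tank
```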
mo == Mertol Ozyoney [EMAIL PROTECTED] writes:
mo One of our customers suffered from the FS being corrupted
mo after an unattended shutdown due to a power problem.
mo They want to switch to ZFS.
mo From what I read on, ZFS will most probably not be corrupted
mo from the same
re == Richard Elling [EMAIL PROTECTED] writes:
kb == Keith Bierman [EMAIL PROTECTED] writes:
re the disk lies about the persistence of the data. ZFS knows
re disks lie, so it sends sync commands when necessary
(1) i don't think ``lie'' is a correct characterization given that the
et == Erik Trimble [EMAIL PROTECTED] writes:
et SSD used to refer strictly to standard DRAM backed with a
et battery (and, maybe some sort of a fancy enclosure with a hard
et drive to write all DRAM data to after a power outage).
et * 3.5" LP disk form factor, SCSI hotswap/SATA2
bf == Bob Friesenhahn [EMAIL PROTECTED] writes:
re == Richard Elling [EMAIL PROTECTED] writes:
re If you run out of space, things fail. Pinwheels are a symptom
re of running out of RAM, not running out of swap.
okay. But what is the point?
Pinwheels are a symptom of thrashing.
bf == Bob Friesenhahn [EMAIL PROTECTED] writes:
bf What is the relationship between the size of the memory
bf reservation and thrashing?
The problem is that size-capping is the only control we have over
thrashing right now. Maybe there are better ways to predict thrashing
than through
bf == Bob Friesenhahn [EMAIL PROTECTED] writes:
bf sequential access to virtual memory causes reasonably
bf sequential I/O requests to disk.
no, thrashing is not when memory is accessed randomly instead of
sequentially. It's when the working set of pages is too big to fit in
physical
djm == Darren J Moffat [EMAIL PROTECTED] writes:
bf == Bob Friesenhahn [EMAIL PROTECTED] writes:
djm Why are you planning on using RAIDZ-2 rather than mirroring ?
isn't MTTDL sometimes shorter for mirroring than raidz2? I think that
is the biggest point of raidz2, is it not?
bf The
r == Ross [EMAIL PROTECTED] writes:
np == Neil Perrin [EMAIL PROTECTED] writes:
np 2. I received the board and driver from another group within
np Sun. It would be better to contact Micro Memory (or whoever
np took them over) directly, as it's not my place to give out 3rd
np
ah == Al Hopper [EMAIL PROTECTED] writes:
ah I've had bad experiences with the Seagate products.
I've had bad experiences with all of them.
(maxtor, hgst, seagate, wd)
ah My guess is that it's related to duty cycle -
Recently I've been getting a lot of drives from companies like
r == Ross [EMAIL PROTECTED] writes:
r I think the problem Miles is that this isn't Sun hardware
In this case it's not, but please do not muddle my point: Marvell SATA
and LSI Logic mpt SATARAID and many other (most?) drivers have the
same problem.
Right now there are, AIUI:
*
bf == Bob Friesenhahn [EMAIL PROTECTED] writes:
bf since the dawn of time
since the dawn of time Sun has been playing these games with hard
drive ``sleds''. I still have sparc32 stuff on the shelf with
missing/extra sleds.
bf POTS line
bf cell phone
bf You are free to select
jh == Johan Hartzenberg [EMAIL PROTECTED] writes:
jh To be even MORE safe, you want the two disks to be on separate
jh controllers, so that you can survive a controller failure too.
or a controller-driver-failure. At least on Linux, when a disk goes
bad, Linux starts resetting
r == Ross [EMAIL PROTECTED] writes:
r the benefit of mirroring that CF drive would be minimal.
rather short-sighted. What if you want to replace the CF with a
bigger or faster one without shutting down?
et == Erik Trimble [EMAIL PROTECTED] writes:
et Dedup Advantages:
et (1) save space
(2) coalesce data which is frequently used by many nodes in a large
cluster into a small nugget of common data which can fit into RAM
or L2 fast disk
(3) back up non-ZFS filesystems that don't
mh == Matt Harrison [EMAIL PROTECTED] writes:
mh http://breden.org.uk/2008/03/02/home-fileserver-zfs-hardware/
that's very helpful. I'll reshop for nForce 570 boards. i think my
untested guess was an nForce 630 or something, so it probably won't
work.
I would add:
1. do not get three
ic == Ian Collins [EMAIL PROTECTED] writes:
ic I'd use mirrors rather than raidz2. You should see better
ic performance
the problem is that it's common for a very large drive to have
unreadable sectors. This can happen because the drive is so big that
its bit-error-rate matters. But
s == Steve [EMAIL PROTECTED] writes:
s About freedom: I for sure would prefer open source drivers
s availability, let's account for it!
There is source for the Intel gigabit cards in the source browser.
bh == Brandon High [EMAIL PROTECTED] writes:
bh a system built around the Marvell or LSI chipsets
according to The Blogosphere, source of all reliable information,
there's some issue with LSI, too. The driver is not available in
stable Solaris nor OpenSolaris, or there are two drivers, or
jcm == James C McPherson [EMAIL PROTECTED] writes:
jcm I'm not convinced that this is a valid test; yanking a disk
it is the ONLY valid test. it's just testing more than ZFS.
___
zfs-discuss mailing
re == Richard Elling [EMAIL PROTECTED] writes:
re I will submit that this failure mode is often best
re solved by door locks, not software.
First, not just door locks, but:
* redundant power supplies
* sleds and Maintain Me, Please lights
* high-strung extremely conservative
mp == Mattias Pantzare [EMAIL PROTECTED] writes:
This is a big one: ZFS can continue writing to an unavailable
pool. It doesn't always generate errors (I've seen it copy
over 100MB before erroring), and if not spotted, this *will*
cause data loss after you reboot.
s == Steve [EMAIL PROTECTED] writes:
s http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
no ECC:
http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets
r == Ross [EMAIL PROTECTED] writes:
r This is a big step for us, we're a 100% windows company and
r I'm really going out on a limb by pushing Solaris.
I'm using it in anger. I'm angry at it, and can't afford anything
that's better.
Whatever I replaced ZFS with, I would make sure it
cs == Chris Siebenmann [EMAIL PROTECTED] writes:
cs (Some versions of syslog let you turn this off for specific
cs log files, which is very useful for high volume, low
cs importance ones.)
To ensure that kernel messages are written to disk promptly,
syslogd(8)
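The knob Chris mentions: in classic BSD/Linux syslogd, each line written to a log file is synced to disk unless the path is prefixed with `-`. A hypothetical /etc/syslog.conf fragment (selectors and paths are examples, not from the thread):

```shell
# /etc/syslog.conf fragment (classic syslogd convention):
# kernel messages: fsync()ed to disk after every line
kern.*          /var/log/kern.log
# high-volume, low-importance log: leading '-' disables the
# per-line sync, trading durability for throughput
mail.info       -/var/log/mail.info
```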
tn == Thomas Nau [EMAIL PROTECTED] writes:
tn Nevertheless during the first hour of operation after onlining
tn we recognized numerous checksum errors on the formerly
tn offlined device. We decided to scrub the pool and after
tn several hours we got about 3500 errors in 600GB of
tn == Thomas Nau [EMAIL PROTECTED] writes:
tn I never experienced that one but we usually don't touch any of
tn the iSCSI settings as long as a device is offline. At least
tn as long as we don't have to for any reason
Usually I do 'zpool offline' followed by 'iscsiadm remove
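Sketched out, that procedure would look something like the following; the pool, device, and target names are placeholders, and the exact iscsiadm subcommand depends on how the target was configured:

```shell
# Take the iSCSI-backed vdev offline first, so ZFS stops issuing
# I/O to it before the initiator loses the session (all names
# are hypothetical):
zpool offline tank c4t1d0
iscsiadm remove static-config iqn.1986-03.com.example:target0
# ...later, re-add the target and bring the vdev back online,
# which should trigger a resilver of anything written meanwhile:
iscsiadm add static-config iqn.1986-03.com.example:target0,192.168.1.10
zpool online tank c4t1d0
```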
c == Miles Nordin [EMAIL PROTECTED] writes:
tn == Thomas Nau [EMAIL PROTECTED] writes:
c 'zpool status' should not be touching the disk at all.
I found this on some old worklog:
http://web.Ivy.NET/~carton/oneNightOfWork/20061119-carton.html
-8-
Also, zpool status takes forEVer
mh == Matt Harrison [EMAIL PROTECTED] writes:
mh I'm worried about is if the entire batch is failing slowly
mh and will all die at the same time.
If you can download smartctl, you can use the approach described here:
http://web.Ivy.NET/~carton/rant/ml/raid-findingBadDisks-0.html
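For reference, the sort of smartctl invocation that approach relies on; device paths (and any `-d` type option) vary by platform and are assumptions here:

```shell
# Dump the SMART vendor attribute table for each suspect drive;
# rising Reallocated_Sector_Ct or Raw_Read_Error_Rate across the
# whole batch is the early-warning signal:
smartctl -A /dev/rdsk/c1t0d0
# Full health report, including the self-test log:
smartctl -a /dev/rdsk/c1t0d0
```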
bh == Brandon High [EMAIL PROTECTED] writes:
nk == Nathan Kroenert [EMAIL PROTECTED] writes:
nk And I can certainly vouch for that series of chipsets... I
nk have a 750a-sli chipset (the one below the 790)
um...what?
750a is an nVidia chip
np == Neal Pollack [EMAIL PROTECTED] writes:
wj == wan jm [EMAIL PROTECTED] writes:
np Yes, it's too easy to administer. This makes it rough to
np charge a lot as a sysadmin.
yeah, sure, until you get a simple question like this:
wj there are two disks in one ZFS pool used as
re == Richard Elling [EMAIL PROTECTED] writes:
pf == Paul Fisher [EMAIL PROTECTED] writes:
re I was able to reproduce this in b93, but might have a
re different interpretation
You weren't able to reproduce the hang of 'zpool status'?
Your 'zpool status' was after the FMA fault kicked
re == Richard Elling [EMAIL PROTECTED] writes:
re This was fixed some months ago, and it should be hard to find
re the old B2 chips anymore (not many were made or sold). --
well, they all ended up on newegg. :)
em == Evert Meulie [EMAIL PROTECTED] writes:
em OpenSolaris+ZFS+RAIDZ+VirtualBox.
I'm using snv b83 + ZFS-unredundant + 32bit CPU + VirtualBox.
It's stable, but not all the features like USB and RDP are working for
me. Also it is being actively developed, so that's good.
I'm planning to
re == Richard Elling [EMAIL PROTECTED] writes:
tb == Tom Bird [EMAIL PROTECTED] writes:
tb There was a problem with the SAS bus which caused various
tb errors including the inevitable kernel panic, the thing came
tb back up with 3 out of 4 zfs mounted.
re In general, ZFS can
re == Richard Elling [EMAIL PROTECTED] writes:
c If that's really the excuse for this situation, then ZFS is
c not ``always consistent on the disk'' for single-VDEV pools.
re I disagree with your assessment. The on-disk format (any
re on-disk format) necessarily assumes no
re == Richard Elling [EMAIL PROTECTED] writes:
re If your pool is not redundant, the chance that data
re corruption can render some or all of your data inaccessible is
re always present.
1. data corruption != unclean shutdown
2. other filesystems do not need a mirror to recover
nw == Nicolas Williams [EMAIL PROTECTED] writes:
nw Without ZFS the OP would have had silent, undetected (by the
nw OS that is) data corruption.
It sounds to me more like the system would have paniced as soon as he
pulled the cord, and when it rebooted, it would have rolled the UFS
log
r == Ross [EMAIL PROTECTED] writes:
r Tom wrote There was a problem with the SAS bus which caused
r various errors including the inevitable kernel panic. It's
r the various errors part that catches my eye,
yeah, possibly, but there are checksums on the SAS bus, and its
t == Tim [EMAIL PROTECTED] writes:
t Why would you have to buy smaller disks? You can replace the
t 320's with 1tb drives and after the last 320 is out of the
t raidgroup, it will grow automatically.
This does work for me to grow a mirrored vdev on nevada b71. The way
I found
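A sketch of that replace-then-grow cycle for a two-way mirror; device names are placeholders, and note the autoexpand property only exists on later builds (older builds grew the vdev automatically):

```shell
# Swap each 320GB side of the mirror for a 1TB drive, letting the
# resilver finish between steps (names are hypothetical):
zpool replace tank c1t0d0 c2t0d0    # first side
zpool status tank                   # wait for resilver to complete
zpool replace tank c1t1d0 c2t1d0    # second side
# Once the last small drive leaves the vdev, capacity grows;
# on builds with the autoexpand property, enable it explicitly:
zpool set autoexpand=on tank
```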
ff I have check the drives with smartctl:
ff ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
ff 1 Raw_Read_Error_Rate 0x000f 115 075 006 Pre-fail Always - 94384069
ff 5 Reallocated_Sector_Ct 0x0033
mp == Mattias Pantzare [EMAIL PROTECTED] writes:
mp Or the file was corrupted when you transfered it.
he stored the backup streams on ZFS, so obviously they couldn't
possibly be corrupt. :p
Jonathan, does 'zfs receive -nv' also detect the checksum error, or is
it only detected when you
cs == Cromar Scott [EMAIL PROTECTED] writes:
cs It appears that the metadata on that pool became corrupted
cs when the processor failed. The exact mechanism is a bit of a
cs mystery,
[...]
cs We were told that the probability of metadata corruption would
cs have been
jw == Jonathan Wheeler [EMAIL PROTECTED] writes:
mp == Mattias Pantzare [EMAIL PROTECTED] writes:
jw Miles: zfs receive -nv works ok
one might argue 'zfs receive' should validate checksums with the -n
option, so you can check if a just-written dump is clean before
counting on it. Without
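The check being argued for, as I understand it, would look like this; pool and dataset names are hypothetical:

```shell
# Save a send stream to a file, then dry-run receive it to verify
# the stream before counting on it as a backup (names assumed):
zfs send tank/home@backup > /backup/home.zfs
# -n: describe what would be received without actually writing it
zfs receive -nv spare/restored < /backup/home.zfs
```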
cs == Cromar Scott [EMAIL PROTECTED] writes:
cs We opened a call with Sun support. We were told that the
cs corruption issue was due to a race condition within ZFS. We
cs were also told that the issue was known and was scheduled for
cs a fix in S10U6.
nice. Is there a bug
jw == Jonathan Wheeler [EMAIL PROTECTED] writes:
jw A common example used all over the place is zfs send | ssh
jw $host. In these examples is ssh guaranteeing the data delivery
jw somehow?
it is really all just apologetics. It sounds like a zfs bug to me.
The only alternative is
mb == Marc Bevand [EMAIL PROTECTED] writes:
mb Ask your hardware vendor. The hardware corrupted your data,
mb not ZFS.
You absolutely do NOT have adequate basis to make this statement.
I would further argue that you are probably wrong, and that I think
based on what we know that the
j == John [EMAIL PROTECTED] writes:
j There is also the human error factor. If someone accidentally
j grows a zpool
or worse, accidentally adds an unredundant vdev to a redundant pool.
Once you press return, all you can do is scramble to find mirrors for
it.
vdev removal is also
vf == Vincent Fox [EMAIL PROTECTED] writes:
vf Because arrays drives can suffer silent errors in the data
vf that are not found until too late. My zpool scrubs
vf occasionally find FIX errors that none of the array or
vf RAID-5 stuff caught.
well, just to make it clear again:
m == mike [EMAIL PROTECTED] writes:
m can you combine two zpools together?
no. You can have many vdevs in one pool. for example you can have a
mirror vdev and a raidz2 vdev in the same pool. You can also destroy
pool B, and add its (now empty) devices to pool A. but once two
separate
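For instance, one pool mixing vdev types, then absorbing pool B's freed disks; all names are placeholders, and `-f` is needed because zpool warns about mismatched replication levels:

```shell
# One pool, two vdevs of different types (names are hypothetical):
zpool create tank mirror c1t0d0 c1t1d0
zpool add -f tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0
# Pools themselves can't be merged; you can only destroy B and
# add its now-empty devices to A:
zpool destroy poolB
zpool add tank mirror c3t0d0 c3t1d0
```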
m == mike [EMAIL PROTECTED] writes:
m that could only be accomplished through combinations of pools?
m i don't really want to have to even think about managing two
m separate partitions - i'd like to group everything together
m into one large 13tb instance
You're not
jcm == James C McPherson [EMAIL PROTECTED] writes:
thp == Todd H Poole [EMAIL PROTECTED] writes:
mh == Matt Harrison [EMAIL PROTECTED] writes:
js == John Sonnenschein [EMAIL PROTECTED] writes:
re == Richard Elling [EMAIL PROTECTED] writes:
cg == Carson Gaspar [EMAIL PROTECTED] writes:
cm == Chris Murray [EMAIL PROTECTED] writes:
cm The next issue is that when the pool is actually imported
cm (zpool import -f zp), it too hangs the whole system, albeit
cm after a minute or so of disk activity.
could it be #6573681?
r == Ross [EMAIL PROTECTED] writes:
r I've just gotten a pool back online after the server booted
r with it unavailable, but found that NFS shares were not
r automatically restarted when the pool came online.
``me, too.'' in b44, in b71.
for workarounds, export/import can
re == Richard Elling [EMAIL PROTECTED] writes:
re unrecoverable read as the dominant disk failure mode. [...]
re none of the traditional software logical volume managers nor
re the popular open source file systems (other than ZFS :-)
re address this problem.
Other LVM's should
re == Richard Elling [EMAIL PROTECTED] writes:
re not all devices return error codes which indicate
re unrecoverable reads.
What you mean is, ``devices sometimes return bad data instead of an
error code.''
If you really mean there are devices out there which never return
error codes,
m == MC [EMAIL PROTECTED] writes:
m file another bug about how solaris recognizes your AHCI SATA
m hardware as old ide hardware.
I don't have that board but AIUI the driver attachment's choosable in
the BIOS Blue Screen of Setup, by setting the controller to
``Compatibility'' mode
thp == Todd H Poole [EMAIL PROTECTED] writes:
Would try this with
your pci/pci-e cards in this system? I think not.
thp Unplugging one of them seems like a fine test to me
I've done it, with 32-bit 5 volt PCI, I forget why. I might have been
trying to use a board, but bypass the
vk == Vikas Kakkar [EMAIL PROTECTED] writes:
vk Actually customer wants to reduce the pool size, I guess we
vk cannot do this today... there is a pending RFE on this.
RFE 4852783 is the one for decreasing.
There was maybe some recent activity about INcreasing a pool size
which you can do already,
re == Richard Elling [EMAIL PROTECTED] writes:
If you really mean there are devices out there which never
return error codes, and always silently return bad data, please
tell us which one and the story of when you encountered it,
re I blogged about one such case.
re
t == Tim [EMAIL PROTECTED] writes:
t Solaris does not do this.
yeah but the locators for local disks are still based on
pci/controller/channel not devid, so the disk will move to a different
device name if he changes BIOS from pci-ide to AHCI because it changes
the driver attachment.
t == Tim [EMAIL PROTECTED] writes:
t Except he was, and is referring to a non-root disk.
wait, what? his root disk isn't plugged into the pci-ide controller?
t LVM hardly changes the way devices move around in Linux,
fine, be pedantic. It makes systems boot and mount all their
re == Richard Elling [EMAIL PROTECTED] writes:
re I really don't know how to please you.
dd from the raw device instead of through ZFS would be better. If you
could show that you can write data to a sector, and read back
different data, without getting an error, over and over, I'd be
re == Richard Elling [EMAIL PROTECTED] writes:
re There is no error in my math. I presented a failure rate for
re a time interval,
What is a ``failure rate for a time interval''?
AIUI, the failure rate for a time interval is 0.46% / yr, no matter how
many drives you have.
rm == Robert Milkowski [EMAIL PROTECTED] writes:
rm Please look for slides 23-27 at
rm http://unixdays.pl/i/unixdays-prezentacje/2007/milek.pdf
yeah, ok, ONCE AGAIN, I never said that checksums are worthless.
relling: some drives don't return errors on unrecoverable read events.
es == Eric Schrock [EMAIL PROTECTED] writes:
es Finally, imposing additional timeouts in ZFS is a bad idea.
es [...] As such, it doesn't have the necessary context to know
es what constitutes a reasonable timeout.
you're right in terms of fixed timeouts, but there's no reason it
jl == Jonathan Loran [EMAIL PROTECTED] writes:
jl Fe = 46% failures/month * 12 months = 5.52 failures
the original statistic wasn't of this kind. It was ``likelihood a
single drive will experience one or more failures within 12 months''.
so, you could say, ``If I have a thousand drives,
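The distinction can be sketched with the 0.46%/yr per-drive figure from the thread and an assumed fleet size (the 1000-drive count is illustrative, not from the original post):

```shell
# A per-drive annual failure probability doesn't change with fleet
# size; only the expected number of failures across the fleet does.
p=0.0046   # 0.46%/yr per-drive failure probability (from the thread)
n=1000     # fleet size, assumed for illustration
awk -v p="$p" -v n="$n" 'BEGIN {
    printf "expected failures/yr across %d drives: %.1f\n", n, n * p
    printf "chance a given single drive fails:     %.2f%%\n", p * 100
}'
```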
es == Eric Schrock [EMAIL PROTECTED] writes:
es I don't think you understand how this works. Imagine two
es I/Os, just with different sd timeouts and retry logic - that's
es B_FAILFAST. It's quite simple, and independent of any
es hardware implementation.
AIUI the main timeout
bf == Bob Friesenhahn [EMAIL PROTECTED] writes:
bf If the system or device is simply overwhelmed with work, then
bf you would not want the system to go haywire and make the
bf problems much worse.
None of the decisions I described it's making based on performance
statistics are
es == Eric Schrock [EMAIL PROTECTED] writes:
es The main problem with exposing tunables like this is that they
es have a direct correlation to service actions, and
es mis-diagnosing failures costs everybody (admin, companies,
es Sun, etc) lots of time and money. Once you expose
re == Richard Elling [EMAIL PROTECTED] writes:
re if you use Ethernet switches in the interconnect, you need to
re disable STP on the ports used for interconnects or risk
re unnecessary cluster reconfigurations.
RSTP/802.1w plus setting the ports connected to Solaris as ``edge'' is
dc == David Collier-Brown [EMAIL PROTECTED] writes:
dc one discovers latency growing without bound on disk
dc saturation,
yeah, ZFS needs the same thing just for scrub.
I guess if the disks don't let you tag commands with priorities, then
you have to run them at slightly below max
bs == Bill Sommerfeld [EMAIL PROTECTED] writes:
bs In an ip network, end nodes generally know no more than the
bs pipe size of the first hop -- and in some cases (such as true
bs CSMA networks like classical ethernet or wireless) only have
bs an upper bound on the pipe size.
rm == Robert Milkowski [EMAIL PROTECTED] writes:
rm What bothers me is why did I get CKSUM errors?
I think they accumulated latently while you had the pool imported on
Node 2 with half of the mirror missing. ZFS seems to count unexpected
resilvering as CKSUM errors sometimes.
mb == Matt Beebe [EMAIL PROTECTED] writes:
mb Anyone know of a SATA and/or SAS HBA with battery backed write
mb cache?
I've never heard of a battery that's used for anything but RAID
features. It's an interesting question, if you use the controller in
``JBOD mode'' will it use the
mp == Matthew Plumb [EMAIL PROTECTED] writes:
mp how best to use the disk I have to ensure I have a safe backup
mp strategy?
continue using rsync between a ZFS pool and an LVM2 pool. At the very
least, have two ZFS pools.
For ZFS over iSCSI, have some zpool-layer redundancy because
kp == Karl Pielorz [EMAIL PROTECTED] writes:
kp Thinking about it - perhaps I should have detached ad4 (the
kp failing drive) before attaching another device?
no, I think ZFS should be fixed.
1. the procedure you used is how hot spares are used, so anyone who
says it's wrong for any
ps == Peter Schuller [EMAIL PROTECTED] writes:
ps The software raid in Linux does not support [write barriers]
ps with raid5/raid6,
yeah i read this warning also and think it's a good argument for not
using it.
http://lwn.net/Articles/283161/
With RAID5 or RAID6 there is of course
mb == Matt Beebe [EMAIL PROTECTED] writes:
mb When using AVS's Async replication with memory queue, am I
mb guaranteed a consistent ZFS on the distant end? The assumed
mb failure case is that the replication broke, and now I'm trying
mb to promote the secondary replicate with
Did you guys ever fix this, or get a bug number, or anything? Should
I avoid that release? I was about to install b96 for ZFS fixes but
this 'zpool import -f' problem looks bad.
Corey
-8-
pr1# zpool offline tank c5t0d0s0
pr1# zpool status
pool: rpool
state: ONLINE
scrub: none
c == Miles Nordin [EMAIL PROTECTED] writes:
c Did you guys ever fix this, or get a bug number, or
c anything?
I found two bugs about this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6736213
http://bugs.opensolaris.org/view_bug.do?bug_id=6739532
I don't think either one fits
s == Solaris [EMAIL PROTECTED] writes:
s Point being that even if you can't run OpenSolaris due to
s support issues, you may still be able to use OpenSolaris to
s help resolve ZFS issues that you might run into in Solaris 10.
glad ZFS is improving, but this sentence is a
jd == Jim Dunham [EMAIL PROTECTED] writes:
jd If at the time the SNDR replica is deleted the set was
jd actively replicating, along with ZFS actively writing to the
jd ZFS storage pool, I/O consistency will be lost, leaving ZFS
jd storage pool in an indeterministic state on the
djm == Darren J Moffat [EMAIL PROTECTED] writes:
djm If c0t6d0 and c0t7d0 both fail (ie both sides of the same
djm mirror vdev) then the pool will be unable to retrieve all the
djm data stored in it.
won't be able to retrieve ANY of the data stored on it. It's correct
as you wrote it,
t == Tomas Ögren [EMAIL PROTECTED] writes:
t I recall some issue with 'zpool status' as root restarting
t resilvering.. Doing it as a regular user will not..
is there an mdb command similar to zpool status? maybe it's safer.
bi == Blake Irvin [EMAIL PROTECTED] writes:
bi running 'zpool status' or 'zpool status -xv'
bi during a resilver as a non-privileged user has no adverse
bi effect, but if i do the same as root, the resilver restarts.
I have this in my ZFS bug notes:
From: Thomas Bleek [EMAIL
np == Neal Pollack [EMAIL PROTECTED] writes:
np No attempt to acknowledge or recall defective silicon. No
np interest in customer data loss. Well, this customer has no
np further interest in Silicon Image. I refuse to acknowledge
np that they exist.
1. too bad Sil is the only
mk == Mikael Karlsson [EMAIL PROTECTED] writes:
mk Anyone with experience with the SIL3124 chipset? Does it work
mk good?
In Solaris, I believe Sil3124 has a SATA framework driver, while
Sil3114 uses the old IDE framework.
There is more than one version of the 3124, but I've not heard
js == Joerg Schilling [EMAIL PROTECTED] writes:
js If it works for your system, be happy. I mentioned that the
js controller may not be usable in all systems as it hangs up the
js BIOS in my machine if there is a disk connected to the card.
There are three different chips under
tf == Tim Foster [EMAIL PROTECTED] writes:
tf anyone else have an opinion?
keep the number of snapshots small until the performance problems with
booting/importing/scrubbing while having lots of snapshots are
resolved.
wm == Will Murnane [EMAIL PROTECTED] writes:
wm I'd rather have a working closed blob than a driver that is
wm Free Software for a device that is faulty. Ideals are very
wm nice, but broken hardware isn't.
except,
1. part of the reason the closed Solaris drivers are (also)
c 2. if the .vmdk's were stored in ZFS why was the corruption not
c flagged as a CKSUM error?
wm They were. From the OP:
NAME      STATE   READ WRITE CKSUM
testing   ONLINE     0     0    16
mirror    ONLINE     0     0    16
jcm == James C McPherson [EMAIL PROTECTED] writes:
jcm I assume you're referring to mpt(7d) here?
jcm Since we started shipping it at all, with Solaris _8_, it's
jcm definitely been available in Solaris 10.
no, I was mistaken then.
My perhaps mistaken understanding though was that
sc == Srinivas Chadalavada [EMAIL PROTECTED] writes:
rr == Ralf Ramge [EMAIL PROTECTED] writes:
sc I see the first disk as unavailble, How do i make it online?
rr By replacing it with a non-broken one.
Ralf, aren't you missing this obstinence-error:
sc the following errors must
t == Tim [EMAIL PROTECTED] writes:
t http://www.supermicro.com/products/accessories/addon/AOC-USASLP-L8i.cfm
I'm not sure. A different thing is wrong with it depending on what
driver attaches to it. I can't tell for sure because this page:
http://linuxmafia.com/faq/Hardware/sas.html
jcm == James C McPherson [EMAIL PROTECTED] writes:
t == Tim [EMAIL PROTECTED] writes:
jcm find out from Miles why mega_sas is new and unproven
jcm given that it's been in NV since build 88.
This has degenerated to an argument over definition, so if you like I
can retract ``new and
jl == Jonathan Loran [EMAIL PROTECTED] writes:
jl the single drive speed is in line with the raidz2 vdev,
reviewing the OP
UFS single drive: 50MB/s write 70MB/s read
ZFS 1-drive: 42MB/s write 43MB/s read
raidz2 11-drive: 40MB/s write 40MB/s read
so,
jcm == James C McPherson [EMAIL PROTECTED] writes:
jcm Can I assume that my 2008-07-26 post was in fact two
jcm messages that were sent to you and cc'd to zfs-discuss:
jcm http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/049605.html
jcm and
jcm