On 09/23/10 19:08, Peter Jeremy wrote:
The downsides are generally that it'll be slower and less power-
efficient than a current generation server, and the I/O interfaces will
also be last generation (so you are more likely to be stuck with
parallel SCSI and PCI or PCI-X rather than SAS/SATA
On 09/23/10 03:01, Ian Collins wrote:
So, I wonder - what's the recommendation, or rather, experience as far
as home users are concerned? Is it safe enough now to use ZFS on
non-ECC-RAM systems (if backups are around)?
It's as safe as running any other OS.
The big difference is ZFS will tell
/09/10 17:00, Frank Middleton wrote:
This is a hypothetical question that could actually happen:
Suppose a root pool is a mirror of c0t0d0s0 and c0t1d0s0
and for some reason c0t0d0s0 goes off line, but comes back
on line after a shutdown. The primary boot disk would then
be c0t0d0s0 which would have much older data than c0t1d0s0.
On 07/19/10 07:26, Andrej Podzimek wrote:
I run ArchLinux with Btrfs and OpenSolaris with ZFS. I haven't had a
serious issue with any of them so far.
Moblin/Meego ships with btrfs by default. COW file system on a
cell phone :-). Unsurprisingly for a read-mostly file system it
seems pretty
On 07/18/10 17:39, Packet Boy wrote:
What I can not find is how to take an existing Fedora image and copy
its contents into a ZFS volume so that I can migrate this image
from my existing Fedora iScsi target to a Solaris iScsi target (and
of course get the advantages of having that disk
On 05/27/10 05:16 PM, Dennis Clarke wrote:
I just tried this with a UFS based filesystem just for a lark.
It never failed on UFS, regardless of the contents of /etc/dfs/dfstab.
Guess I must now try this with a ZFS fs under that iso file.
Just tried it again with b134 *with* share /mnt in
Many many moons ago, I submitted a CR into bugs about a
highly reproducible panic that occurs if you try to re-share
a lofi mounted image. That CR has AFAIK long since
disappeared - I even forget what it was called.
This server is used for doing network installs. Let's say
you have a 64 bit iso
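For reference, the setup in question, as a minimal sketch with
hypothetical paths (lofiadm prints the loopback device it assigns):
# lofiadm -a /export/install/sol-dvd.iso
# mount -F hsfs -o ro /dev/lofi/1 /mnt
# share -F nfs -o ro /mnt
The panic described above reportedly occurs when the image is shared
again after this point.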
On 05/ 4/10 05:37 PM, Vadim Comanescu wrote:
I'm wondering: is there a way to actually delete a zvol, ignoring the fact
that it has an attached LU?
You didn't say what version of what OS you are running. As of b134
or so it seems to be impossible to delete a zfs iscsi target. You might
look at the
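If this is COMSTAR, a commonly suggested order of operations is to
remove the logical unit before destroying the zvol. A minimal sketch,
with a placeholder GUID and a hypothetical zvol name:
# stmfadm delete-lu <lu-guid>
# zfs destroy tank/vol01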
On 04/20/10 11:06 AM, Don wrote:
Who else, besides STEC, is making write optimized drives and what
kind of IOP performance can be expected?
Just got a distributor email about Texas Memory Systems' RamSan-630,
one of a range of huge non-volatile SAN products they make. Other
than that this
On 04/16/10 07:41 PM, Brandon High wrote:
1. Attach the new drives.
2. Reboot from LiveCD.
3. zpool create new_rpool on the ssd
Is step 2 actually necessary? Couldn't you create a new BE
# beadm create old_rpool
# beadm activate old_rpool
# reboot
# beadm destroy rpool
It's the same number
On 04/16/10 08:57 PM, Frank Middleton wrote:
AFAIK the official syntax for installing the MBR is
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk
/dev/rdsk/ssd
Sorry, that's for SPARC. You had the installgrub down correctly
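For reference, the two variants side by side, as minimal sketches with
hypothetical device names:
SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0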
On 04/16/10 09:53 PM, Brandon High wrote:
Right now, my boot environments are named after the build it's
running. I'm guessing that by 'rpool' you mean the current BE above.
No, I didn't :-(. Please ignore that part - too much caffeine :-).
I figure that by booting to a live cd / live usb,
On 04/ 7/10 03:09 PM, Jason S wrote:
I was actually already planning to get another 4 gigs of ram for the
box right away anyway, but thank you for mentioning it! As there
appear to be a couple of ways to skin the cat here, I think I am going
to try both a 14 spindle RaidZ2 and 2 X 7 RaidZ2
On 04/ 4/10 10:00 AM, Willard Korfhage wrote:
What should I make of this? All the disks are bad? That seems
unlikely. I found another thread
http://opensolaris.org/jive/thread.jspa?messageID=399988
where it finally came down to bad memory, so I'll test that. Any
other suggestions?
It could
On 03/31/10 12:21 PM, lori.alt wrote:
The problem with splitting a root pool goes beyond the issue of the
zpool.cache file. If you look at the comments for 6939334
http://monaco.sfbay.sun.com/detail.jsf?cr=6939334, you will see other
files whose content is not correct when a root pool is
Our backup system has a couple of datasets used for iscsi
that have somehow lost their baseline snapshots with the
live system. In fact zfs list -t snapshots doesn't show
any snapshots at all for them. We rotate backup and live
every now and then, so these datasets have been shared
at some time.
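A quick way to confirm what snapshots a given dataset still has, as a
sketch with a hypothetical dataset name:
# zfs list -t snapshot -r tank/iscsi01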
Thanks to everyone who made suggestions! This machine has run
memtest for a week and VTS for several days with no errors. It
does seem that the problem is probably in the CPU cache.
On 03/24/10 10:07 AM, Damon Atkins wrote:
You could try copying the file to /tmp (i.e. swap/RAM) and do a
continues
Zpool split is a wonderful feature and it seems to work well,
and the choice of which disk got which name was perfect!
But there seems to be an odd anomaly (at least with b132).
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
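For reference, the sequence under discussion, as a minimal sketch
(pool and device names as in the post; the new pool name is
hypothetical):
# zpool attach rpool c0t1d0s0 c0t0d0s0
  (wait for the resilver to complete, e.g. watching zpool status)
# zpool split rpool rpool2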
On 03/22/10 11:50 PM, Richard Elling wrote:
Look again, the checksums are different.
Whoops, you are correct, as usual. Just 6 bits out of 256 different...
Last year
expected 4a027c11b3ba4cec bf274565d5615b7b 3ef5fe61b2ed672e ec8692f7fd33094a
actual 4a027c11b3ba4cec bf274567d5615b7b
On 03/21/10 03:24 PM, Richard Elling wrote:
I feel confident we are not seeing a b0rken drive here. But something is
clearly amiss and we cannot rule out the processor, memory, or controller.
Absolutely no question of that, otherwise this list would be flooded :-).
However, the purpose of
On 03/15/10 01:01 PM, David Dyer-Bennet wrote:
This sounds really bizarre.
Yes, it is. But CR 6880994 is bizarre too.
One detail suggestion on checking what's going on (since I don't have a
clue towards a real root-cause determination): Get an md5sum on a clean
copy of the file, say from a
Can anyone say what the status of CR 6880994 (kernel/zfs Checksum failures on
mirrored drives) might be?
Setting copies=2 has mitigated the problem, which manifests itself
consistently at boot by flagging libdlpi.so.1, but two recent power
cycles in a row with no normal shutdown have resulted
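For reference, the mitigation mentioned above, as a sketch with a
hypothetical dataset name; note that copies=2 only applies to blocks
written after the property is set:
# zfs set copies=2 rpool/ROOT/snv_132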
On 02/17/10 02:38 PM, Miles Nordin wrote:
copies=2 has proven to be mostly useless in practice.
Not true. Take an ancient PC with a mirrored root pool, no
bus error checking and non-ECC memory, that flawlessly
passes every known diagnostic (SMC included).
Reboot with copies=1 and the same
On 02/ 6/10 11:21 AM, Thorsten Hirsch wrote:
I wonder where ~10G have gone. All the subdirs in / use ~4.5G only
(that might be the size of REFER in opensolaris-7), and my $HOME uses
38.5M, that's correct. But since rpool has a size of 15G there must
be more than 10G somewhere.
Do you have
On 02/ 6/10 11:50 AM, Thorsten Hirsch wrote:
Uhmm... well, no, but there might be something left over.
When I was doing an image-update last time, my / ran out of space. I
even couldn't beadm destroy any old boot environment, because beadm
told me that there's no space left. So what I did was
On 01/30/10 05:33 PM, Ross Walker wrote:
On Jan 30, 2010, at 2:53 PM, Mark white...@gmail.com wrote:
I have a 1U server that supports 2 SATA drives in the chassis. I have
2 750 GB SATA drives. When I install opensolaris, I assume it will
want to use all or part of one of those drives for the
On 01/20/10 04:27 PM, Cindy Swearingen wrote:
Hi Frank,
I couldn't reproduce this problem on SXCE build 130 by failing a disk in
a mirrored pool and then immediately running a scrub on the pool. It works
as expected.
The disk has to fail whilst the scrub is running. It has happened twice now,
On 01/20/10 05:55 PM, Cindy Swearingen wrote:
Hi Frank,
We need both files.
The vmcore is 1.4GB. An http upload is never going to complete.
Is there an ftp-able place to send it, or can you download it if I
post it somewhere?
Cheers -- Frank
On 01/20/10 04:27 PM, Cindy Swearingen wrote:
Hi Frank,
I couldn't reproduce this problem on SXCE build 130 by failing a disk in
a mirrored pool and then immediately running a scrub on the pool. It works
as expected.
As noted, the disk mustn't go offline until well after the scrub has started.
This is probably unreproducible, but I just got a panic whilst
scrubbing a simple mirrored pool on scxe snv124. Evidently
one of the disks went offline for some reason and shortly
thereafter the panic happened. I have the dump and the
/var/adm/messages containing the trace.
Is there any point in
On 11/23/09 10:10 AM, David Dyer-Bennet wrote:
Is there enough information available from system configuration utilities
to make an automatic HCL (or unofficial HCL competitor) feasible? Someone
could write an application people could run which would report their
opinion on how well it works,
Got some out-of-curiosity questions for the gurus if they
have time to answer:
Isn't dedupe in some ways the antithesis of setting copies > 1?
We go to a lot of trouble to create redundancy (n-way mirroring,
raidz-n, copies=n, etc) to make things as robust as possible and
then we reduce
On 10/28/09 10:18 AM, Tim Cook wrote:
If Nexenta was too expensive, there's nothing Sun will ever offer that
will fit your price profile. Home electronics is not their business
model and never will be.
True, but this was discussed on a different thread some time
ago. Sun's prices on X86s
On 10/13/09 18:35, Albert Chin wrote:
Maybe this will help:
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html
Well, it does seem to explain the scrub problem. I think it might
also explain the slow boot and startup problem - the VM only has
564M available,
On 10/15/09 23:31, Cameron Jones wrote:
by cross-mounting do you mean mounting the drives on 2 running OS's?
that wasn't really what I was looking for, but nice to know the option
is there, even tho not recommended!
No, since you really can't run two OSs at the same time unless you use
zones.
On 10/16/09 09:29, I wrote:
I assume the id is ignored on the root pool at boot time or it
wouldn't be able to boot at all. Undoubtedly a guru will chip in here
if this is incorrect :-)
Of course this was hogwash. You create the pool before receiving
the snapshot, so the ID is local. One of
IIRC the trigger for this thread was the suggestion that
primarycache=none be set on datasets used for swap.
Presumably swap only gets used when memory is low or
exhausted, so it would it be correct to say that it wouldn't
make any sense for swap to be in /any/ cache? If this isn't
what
On 10/15/09 20:36, Cameron Jones wrote:
My question is tho, since I can boot into either OpenSolaris or
Solaris (but not both at the same time obviously :) I'd like to be
able to mount the other disks into whatever host OS i boot into.
Is this possible/recommended?
Definitely possible.
After a recent upgrade to b124, decided to switch to COMSTAR
for iscsi targets for VirtualBox hosted on AMD64 Fedora C10. Both
target and initiator are running zfs under b124. This combination
seems unbelievably slow compared to the old iscsi subsystem.
A scrub of a local 20GB disk on the
In an attempt to recycle some old PATA disks, we bought some
really cheap PATA/SATA adapters, some of which actually work
to the point where it is possible to boot from a ZFS installation
(e.g., c1t2d0s0). Not all PATA disks work, just Seagates, it would
seem, but not Maxtors. I wonder why?
On 10/01/09 05:08 AM, Darren J Moffat wrote:
In the future there will be a distinction between the local and the
received values see the recently (yesterday) approved case PSARC/2009/510:
http://arc.opensolaris.org/caselog/PSARC/2009/510/20090924_tom.erickson
Currently non-recursive
On 09/29/09 10:23 PM, Marc Bevand wrote:
If I were you I would format every 1.5TB drive like this:
* 6GB slice for the root fs
As noted in another thread, 6GB is way too small. Based on
actual experience, an upgradable rpool must be more than
20GB. I would suggest at least 32GB; out of 1.5TB
On 09/30/09 12:59 PM, Marc Bevand wrote:
It depends on how minimal your install is.
Absolutely minimalist install from live CD subsequently updated
via pkg to snv111b. This machine is an old 32 bit PC used now
as an X-terminal, so doesn't need any additional software. It
now has a bigger
On 09/28/09 12:40 AM, Ron Watkins wrote:
Thus, I'm at a loss as to how to get the root pool set up as a 20GB
slice
20GB is too small. You'll be fighting for space every time
you use pkg. From my considerable experience installing to a
20GB mirrored rpool, I would go for 32GB if you can.
Trying to move this to a new thread, although I don't think it
has anything to do with ZFS :-)
On 09/28/09 08:54 AM, Chris Gerhard wrote:
TMPFS was not in the first release of 4.0. It was introduced to boost
the performance of diskless clients which no longer had the old
network disk for their
On 09/28/09 01:22 PM, David Dyer-Bennet wrote:
That seems truly bizarre. Virtualbox recommends 16GB, and after doing an
install there's about 12GB free.
There's no way Solaris will install in 4GB if I understand what
you are saying. Maybe fresh off a CD when it doesn't have to
download a
On 09/27/09 03:05 AM, Joerg Schilling wrote:
BTW: Solaris has had tmpfs since late 1987.
Could you fix the Wikipedia article? http://en.wikipedia.org/wiki/TMPFS
it first appeared in SunOS 4.1, released in March 1990
It is a de-facto standard since then as it e.g. helps to reduce compile
On 09/27/09 11:25 AM, Joerg Schilling wrote:
Frank Middletonf.middle...@apogeect.com wrote:
Could you fix the Wikipedia article? http://en.wikipedia.org/wiki/TMPFS
it first appeared in SunOS 4.1, released in March 1990
It appeared with SunOS-4.0. The official release was probably February
On 09/25/09 09:58 PM, David Magda wrote:
The contents of /var/tmp can be expected to survive between boots (e.g.,
/var/tmp/vi.recover); /tmp is nuked on power cycles (because it's just
memory/swap):
Yes, but does mapping it to /tmp have any issues regarding booting
or image-update in the
On 09/26/09 12:11 PM, Toby Thain wrote:
Yes, but unless they fixed it recently (>= RHFC11), Linux doesn't
actually nuke /tmp, which seems to be mapped to disk. One side
effect is that (like MSWindows) AFAIK there isn't a native tmpfs,
...
Are you sure about that? My Linux systems do.
On 09/26/09 05:25 PM, Ian Collins wrote:
Most of /opt can be relocated
There isn't much in there on a vanilla install (X86 snv111b)
# ls /opt
DTT SUNWmlib
http://www.sun.com/bigadmin/features/articles/nvm_boot.jsp
You pretty much answered the OP with this link. Thanks for
posting it!
On 09/25/09 11:08 AM, Travis Tabbal wrote:
... haven't heard if it's a known
bug or if it will be fixed in the next version...
Out of courtesy to our host, Sun makes some quite competitive
X86 hardware. I have absolutely no idea how difficult it is
to buy Sun machines retail, but it seems they
On 09/25/09 04:44 PM, Lori Alt wrote:
rpool
rpool/ROOT
rpool/ROOT/snv_124 (or whatever version you're running)
rpool/ROOT/snv_124/var (you might not have this)
rpool/ROOT/snv_121 (or whatever other BEs you still have)
rpool/dump
rpool/export
rpool/export/home
rpool/swap
Unless your machine is
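For reference, the dataset layout of a running system can be compared
against the list above with:
# zfs list -r -o name rpool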
On 09/20/09 03:20 AM, dick hoogendijk wrote:
On Sat, 2009-09-19 at 22:03 -0400, Jeremy Kister wrote:
I added a disk to the rpool of my zfs root:
# zpool attach rpool c1t0d0s0 c1t1d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
I waited for the resilver to complete,
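For reference, resilver progress after the attach can be watched with:
# zpool status rpool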
A while back I posted a script that does individual send/recvs
for each file system, sending incremental streams if the remote
file system exists, and regular streams if not.
The reason for doing it this way rather than a full recursive
stream is that there's no way to avoid sending certain file
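A minimal sketch of that approach, not the original script; the pool
names (tank locally, backup on the remote side), the host name, and
the snapshot names @prev/@now are hypothetical, and the snapshots are
assumed to already exist:

#!/bin/sh
# For each file system in the local pool, send an incremental stream
# if the remote copy already exists, otherwise a full stream.
for fs in $(zfs list -H -o name -r tank); do
    if ssh remotehost zfs list "backup/$fs" >/dev/null 2>&1; then
        zfs send -i "$fs@prev" "$fs@now" | ssh remotehost zfs recv "backup/$fs"
    else
        zfs send "$fs@now" | ssh remotehost zfs recv "backup/$fs"
    fi
done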
[Originally posted to indiana-discuss]
On certain X86 machines there's a hardware/software glitch
that causes odd transient checksum failures that always seem
to affect the same files even if you replace them. This has
been submitted as a bug:
Bug 11201 - Checksum failures on mirrored drives -
Absent any replies to the list, submitted as a bug:
http://defect.opensolaris.org/bz/show_bug.cgi?id=11358
Cheers -- Frank
On 09/11/09 03:20 PM, Brandon Mercer wrote:
They are so well known that simply by asking if you were using them
suggests that they suck. :) There are actually pretty hit-or-miss
issues with all 1.5TB drives, but that particular manufacturer has had
a few more than others.
FWIW I have a few
Is there any reason why an iscsi disk could not be used
to extend an rpool? It would be pretty amazing if it
could but I thought I'd try it anyway :-)
The 20GB disk I am using to try ZFS booting on SPARC
ran out of space doing an image update to snv122, so I
thought I'd try extending it with an
On 09/07/09 07:29 PM, David Dyer-Bennet wrote:
Is anybody doing this [zfs send/recv] routinely now on 2009-6
OpenSolaris, and if so can I see your commands?
Wouldn't a simple recursive send/recv work in your case? I
imagine all kinds of folks are doing it already. The only problem
with it,
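A minimal sketch of such a recursive send/recv, with hypothetical
pool, snapshot, and host names:
# zfs snapshot -r tank@backup
# zfs send -R tank@backup | ssh remotehost zfs recv -Fd backup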
An attempt to pkg image-update from snv111b to snv122 failed
miserably for a number of reasons which are probably out of
scope here. Suffice it to say that it ran out of disk space
after the third attempt.
Before starting, I was careful to make a baseline snapshot,
but rolling back to that
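For reference, a baseline of that kind can be taken with a recursive
snapshot before the update, and an individual file system rolled back
to it later (the snapshot and BE dataset names here are hypothetical;
zfs rollback needs -r if later snapshots exist):
# zfs snapshot -r rpool@baseline
# zfs rollback rpool/ROOT/opensolaris@baseline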
Correction
On 09/06/09 12:00 PM, I wrote:
(there are no hidden directories in / ),
Well, there is .zfs, of course, but it is normally hidden,
apparently by default on SPARC rpool, but not on X86 rpool
or non-rpool pools on either. Hmmm. I don't recollect setting
the snapdir property on any
Near Success! After 5 (yes, five) attempts, managed to do
an update of snv111b to snv122, until it ran out of space
again. Looks like I need to get a bigger disk...
Sorry about the monolog, but there might be someone on this
list trying to use pkg on SPARC who, like me, has been
unable to
It was someone from Sun that recently asked me to repost here
about the checksum problem on mirrored drives. I was reluctant
to do so because you and Bob might start flames again, and you
did! You both sound very defensive, but of course I would never
make an unsubstantiated speculation that you
On 09/02/09 05:40 AM, Henrik Johansson wrote:
For those of us who have already upgraded and written data to our
raidz pools, are there any risks of inconsistency, wrong checksums in
the pool? Is there a bug id?
This may not be a new problem insofar as it may also affect mirrors.
As part of
On 09/02/09 10:01 AM, Gaëtan Lehmann wrote:
I see the same problem on a workstation with ECC RAM and disks in mirror.
The host is a Dell T5500 with 2 cpus and 24 GB of RAM.
Would you know if it has ECC on the buses? I have no idea if or what
Solaris does on X86 to check or correct bus errors,
On 09/02/09 10:34 AM, Simon Breden wrote:
I too see checksum errors occurring for the first time using OpenSolaris 2009.06
on the /dev package repository at version snv_121.
I see the problem occur within a mirrored boot pool (rpool) using SSDs.
Hardware is AMD BE-2350 (ECC) processor with 4GB
On Sep 2, 2009, at 7:14 PM, rarok wrote:
I'm just a casual ZFS user, but you want something that doesn't exist
yet. Most consumers want this, but Sun is not interested in that
market. Growing an existing RAIDZ just by adding more disks to the RAIDZ
would be great, but at this moment there
On 09/02/09 12:31 PM, Richard Elling wrote:
I believe this is a different problem. Adam, was this introduced in b120?
Doubtless you are correct as usual. However, if this is a new problem,
how did it get through Sun's legendary testing process unless it is
(as you have always maintained)
Great to hear a few success stories! We have been experimentally
running ZFS on really crappy hardware and it has never lost a
pool. Running on VB with ZFS/iscsi raw disks we have yet to see
any errors at all. On sun4u with lsi sas/sata it is really rock
solid. And we've been going out of our way
On 07/27/09 01:27 PM, Eric D. Mudama wrote:
Everyone on this list seems to blame lying hardware for ignoring
commands, but disks are relatively mature and I can't believe that
major OEMs would qualify disks or other hardware that willingly ignore
commands.
You are absolutely correct, but if
On 07/25/09 04:30 PM, Carson Gaspar wrote:
No. You'll lose unwritten data, but won't corrupt the pool, because
the on-disk state will be sane, as long as your iSCSI stack doesn't
lie about data commits or ignore cache flush commands. Why is this so
difficult for people to understand? Let me
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record is
underneath the corrupted data in the tree then it won't be able to be
reached.
Can you comment on if/how mirroring or raidz mitigates this, or tree
corruption in general? I have yet
On 07/24/09 04:35 PM, Bob Friesenhahn wrote:
Regardless, it [VirtualBox] has committed a crime.
But ZFS is a journalled file system! Any hardware can lose a flush;
it's just more likely in a VM, especially when anything Microsoft
is involved, and the whole point of journalling is to prevent
On 07/21/09 01:21 PM, Richard Elling wrote:
I never win the lottery either :-)
Let's see. Your chance of winning a 49-ball lottery is apparently
around 1 in 14*10^6, although it's much better than that because of
submatches (smaller payoffs for matches on less than 6 balls).
There are about
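For reference, the arithmetic behind that figure: a pick-6-of-49
lottery has C(49,6) = 49!/(6! * 43!) = 13,983,816 possible draws, hence
jackpot odds of roughly 1 in 14*10^6.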
On 07/19/09 06:10 PM, Richard Elling wrote:
Not that bad. Uncommitted ZFS data in memory does not tend to
live that long. Writes are generally out to media in 30 seconds.
Yes, but memory hits are instantaneous. On a reasonably busy
system there may be buffers in queue all the time. You may
On 07/19/09 05:00 AM, dick hoogendijk wrote:
(i.e. non ECC memory should work fine!) / mirroring is a -must- !
Yes, mirroring is a must, although it doesn't help much if you
have memory errors (see several other threads on this topic):
On 06/03/09 09:10 PM, Aurélien Larcher wrote:
PS: for the record I roughly followed the steps of this blog entry =
http://blogs.sun.com/edp/entry/moving_from_nevada_and_live
Thanks for posting this link! Building pkg with gdb was an
interesting exercise, but it worked, with the additional
On 06/03/09 09:10 PM, Aurélien Larcher wrote:
PS: for the record I roughly followed the steps of this blog entry =
http://blogs.sun.com/edp/entry/moving_from_nevada_and_live
Thanks for posting this link! Building pkg with gcc 4.3.2 was an
interesting exercise, but it worked, with the
On 06/04/09 06:44 PM, cindy.swearin...@sun.com wrote:
Hi Noz,
This problem was reported recently and this bug was filed:
6844090 zfs should be able to mirror to a smaller disk
Is this filed on bugs or defects? I had the exact same problem,
and it turned out to be a rounding error in Solaris
On 05/26/09 13:07, Kjetil Torgrim Homme wrote:
also thank you, all ZFS developers, for your great job :-)
I'll second that! A great achievement - puts Solaris in a league of
its own, so much so that you'd want to run it on all your hardware,
however crappy the hardware might be ;-)
There are too
On 05/23/09 10:21, Richard Elling wrote:
<preface>
This forum is littered with claims of zfs checksums are broken where
the root cause turned out to be faulty hardware or firmware in the data
path.
</preface>
I think that before you speculate on a redesign, we should get to
the root cause.
On 05/26/09 03:23, casper@sun.com wrote:
And where exactly do you get the second good copy of the data?
From the first. And if it is already bad, as noted previously, this
is no worse than the UFS/ext3 case. If you want total freedom from
this class of errors, use ECC.
If you copy the
On 05/22/09 21:08, Toby Thain wrote:
Yes, the important thing is to *detect* them, no system can run reliably
with bad memory, and that includes any system with ZFS. Doing nutty
things like calculating the checksum twice does not buy anything of
value here.
All memory is bad if it doesn't have
There have been a number of threads here on the reliability of ZFS in the
face of flaky hardware. ZFS certainly runs well on decent (e.g., SPARC)
hardware, but isn't it reasonable to expect it to run well on something
less well engineered? I am a real ZFS fan, and I'd hate to see folks
trash it
On 04/17/09 12:37, casper@sun.com wrote:
I'd like to submit an RFE suggesting that data + checksum be copied for
mirrored writes, but I won't waste anyone's time doing so unless you
think there is a point. One might argue that a machine this flaky should
be retired, but it is actually
Experimenting with OpenSolaris on an elderly PC with equally
elderly drives, zpool status shows errors after a pkg image-update
followed by a scrub. It is entirely possible that one of these
drives is flaky, but surely the whole point of a zfs mirror is
to avoid this? It seems unlikely that both
On 04/15/09 14:30, Bob Friesenhahn wrote:
On Wed, 15 Apr 2009, Frank Middleton wrote:
zpool status shows errors after a pkg image-update
followed by a scrub.
If a corruption occurred in the main memory, the backplane, or the disk
controller during the writes to these files, then the original
These problems both occur when accessing a ZFS dataset from
Linux (FC10) via NFS.
Jigdo is a fairly new bit-torrent-like downloader. It is not
entirely bug free, and the one time I tried it, it recursively
downloaded one directory's worth until ZFS eventually sort
of died. It put all the disks
On 03/29/09 11:58, David Magda wrote:
On Mar 29, 2009, at 00:41, Michael Shadle wrote:
Well I might back up the more important stuff offsite. But in theory
it's all replaceable. Just would be a pain.
And what is the cost of the time to replace it versus the price of a
hard disk? Time ~
On 03/28/09 20:01, Harry Putnam wrote:
Finding a sataII card is proving to be very difficult. The reason is
that I only have PCI, no PCI Express. I haven't seen a single one
listed as SATA II compatible and have spent a bit of time googling.
It's even worse if you have an old SPARC system. We've