I have had the same problem too, but managed to work around it by setting the
mountpoint to none before performing the ZFS send. But that only works on
filesystems you can quiesce.
How about making a clone of your snapshot, then setting the mountpoint of the
clone to none, and taking a snapshot of the
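For readers following along, here is a minimal sketch of both workarounds; the dataset, snapshot, and remote host names are all hypothetical:

# Workaround 1: unmount the filesystem (quiesce it) before sending.
zfs set mountpoint=none tank/data
zfs snapshot tank/data@backup
zfs send tank/data@backup | ssh remotehost zfs receive backuppool/data
zfs set mountpoint=/data tank/data    # restore the original mountpoint afterwards

# Workaround 2 (the clone variant): leaves the original mounted the whole time.
zfs clone tank/data@backup tank/sendclone
zfs set mountpoint=none tank/sendclone
zfs snapshot tank/sendclone@tosend
zfs send tank/sendclone@tosend | ssh remotehost zfs receive backuppool/data
zfs destroy -r tank/sendclone         # clean up the clone when done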
So we finally got around the problem; after replacing almost everything, it
seems that the memory was the devil. I pulled it out and replaced it with ECC
memory, and now everything has been working fine for 14 days already.
Knowing this, I will never put non-ECC memory in my boxes again.
Thanks for all the
Trevor Watson wrote:
I have had the same problem too, but managed to work around it by
setting the mountpoint to none before performing the ZFS send. But that
only works on filesystems you can quiesce.
Yeah, and / is always going to be a bit of a problem ;-)
How about making a clone of
Hi,

a

zfs create -V 1M pool/foo
dd if=/dev/random of=/dev/zvol/rdsk/pool/foo bs=1k count=1k

(using Nevada b94) yields

zfs get all pool/foo
NAME      PROPERTY    VALUE  SOURCE
pool/foo  used        1,09M  -
pool/foo  referenced  1,09M  -
pool/foo  volsize     1M     -
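(If it helps: the ~90K of "used" beyond volsize is presumably the volume's own metadata, indirect blocks and the like, which ZFS charges against the zvol's used and referenced space.)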
Indeed, that's one of the nice things: ZFS is picky about data and alerts you
immediately. Before, some files became corrupt and one was left wondering what
happened and how it was possible, since everything had seemed fine for months :)
The more I use Solaris, the more I love it :)
Knowing this, I will never put non-ECC memory in my boxes again.
What's your mainboard and CPU? I've looked up the thread on the forum
and there's no hardware information. Don't be fooled just because the
RAM's ECC; the mainboard (and the CPU, in the case of AMDs) have to support it.
There are two
I am trying to find any statistics on the amount of time an upgrade from
one version of ZFS to another takes. I recently updated my system and my zpool is
showing that I need to upgrade it. I have a large pool, around 3 TB, divided
into three 1 TB LUNs in a raidz configuration. I want to do the
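For reference, the upgrade itself is driven by zpool upgrade (the pool name below is hypothetical), and as far as I know it only rewrites pool metadata, so it should be quick regardless of pool size:

zpool upgrade        # list pools still on an older on-disk version
zpool upgrade -v     # show the ZFS versions this system supports
zpool upgrade tank   # upgrade pool "tank" to the latest supported version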
Dear All,
I will try to post the DTool source code ASAP. DTool depends on our
patented middleware, so I need one or two days to clarify :-P
Very sorry.
Bob,
I have tried your PDF but did not get good latency numbers even after
array tuning...
cheers
tharindu
Bob Friesenhahn wrote:
On
Ron Warner II wrote:
I am trying to find any statistics on the amount of time an upgrade from
one version of ZFS to another takes. I recently updated my system and my zpool is
showing that I need to upgrade it. I have a large pool, around 3 TB, divided
into three 1 TB LUNs in a raidz
Marc,
Thanks - you were right - I had two identical drives and I mixed them
up. It's going through the resilver process now... I expect it will
run all night.
Breandan
On Jul 27, 2008, at 11:20 PM, Marc Bevand wrote:
It looks like you *think* you are trying to add the new drive, when
On Mon, 28 Jul 2008, BG wrote:
Indeed, that's one of the nice things: ZFS is picky about data and
alerts you immediately. Before, some files became corrupt and one was
left wondering what happened and how it was possible, since everything
had seemed fine for months :)
Unfortunately, ZFS does not
On Mon, 28 Jul 2008, Tharindu Rukshan Bamunuarachchi wrote:
I have tried your PDF but did not get good latency numbers even after array
tuning...
Right. And since I observed only slightly less optimal performance
from a mirrored pair of USB drives, it seems that your requirement is not
4. While reading an offline disk causes errors, writing does not!
*** CAUSES DATA LOSS ***
This is a big one: ZFS can continue writing to an unavailable pool. It
doesn't always generate errors (I've seen it copy over 100 MB
before erroring), and if not spotted, this *will* cause data loss after you reboot.
On Mon, 28 Jul 2008, Ross wrote:
TEST1: Opened File Browser, copied the test data to the pool.
Halfway through the copy I pulled the drive. THE COPY COMPLETED
WITHOUT ERROR. zpool list reports the pool as online; however, zpool
status hung as expected.
Are you sure that this reference
File Browser is the name of the program that Solaris opens when you open
Computer on the desktop. It's the default graphical file manager.
It does eventually stop copying with an error, but it takes a good long while
for ZFS to throw up that error, and even when it does, the pool doesn't
snv_91. I downloaded snv_94 today so I'll be testing with that tomorrow.
Date: Mon, 28 Jul 2008 09:58:43 -0700
From: [EMAIL PROTECTED]
Subject: Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed
To: [EMAIL PROTECTED]
Which OS and revision?
-- richard
Ross wrote: Ok,
Bob Friesenhahn wrote:
On Mon, 28 Jul 2008, BG wrote:
Indeed, that's one of the nice things: ZFS is picky about data and
alerts you immediately. Before, some files became corrupt and one was
left wondering what happened and how it was possible, since everything
had seemed fine for months :)
Charles Emery wrote:
New server build with Solaris-10 u5/08, on a SunFire t5220, and this is our first
Can you try on a later release? The enhanced FMA for disks did not
make the Solaris 10 5/08 release.
http://www.opensolaris.org/os/community/on/flag-days/pages/2007080901/
-- richard
On Mon, 28 Jul 2008, Richard Elling wrote:
But ZFS can do better. I filed CR6674679 which basically says
that if redundant copies of data have the same, wrong checksum,
then ZFS should issue an e-report to that effect. This will allow
you to move suspicion away from the disks as a root cause.
The mainboard is:
KFN4-DRE
More info you can find here:
http://www.asus.com/products.aspx?l1=9&l2=39&l3=174&l4=0&model=1844&modelmenu=2
CPU:
2x AMD Opteron 2350 2.0GHz HT 4MB SF
The memory was cheap non-ECC stuff; I replaced it with Kingston ECC memory (KVR667D2D8P5/2G).
In the meantime we have 4x 500GB in
Bob Friesenhahn wrote:
On Mon, 28 Jul 2008, Richard Elling wrote:
But ZFS can do better. I filed CR6674679 which basically says
that if redundant copies of data have the same, wrong checksum,
then ZFS should issue an e-report to that effect. This will allow
you to move suspicion away from
On Mon, 28 Jul 2008, Richard Elling wrote:
It is not clear to me where ARC validation occurs. Perhaps someone
who deals with the ARC code could shed some light.
More than likely, ARC data is not stored using original filesystem
blocks so the existing filesystem block checksums are not
The mainboard is:
KFN4-DRE
More info you can find here:
http://www.asus.com/products.aspx?l1=9&l2=39&l3=174&l4=0&model=1844&modelmenu=2
CPU:
2x AMD Opteron 2350 2.0GHz HT 4MB SF
You'll be fine with that. Just had to make sure.
Regards,
-mg
We already have memory scrubbers which check memory. Actually,
we've had these for about 10 years, but they only work for ECC
memory... if you have only parity memory, then you can't fix anything
at the hardware level, and the best you can hope for is that FMA will do
the right thing.
In
From the information obtained, it seems that the better choice is the ASUS M2A-VM:
happily tested, cheap enough (47€), not bad performance, 4 SATA ports, gigabit Ethernet,
DVI, FireWire, etc. The only concern was a possible DMA bug in the south bridge,
but it seems not so important. (!)
Now the options will
Mario Goebbels wrote:
We already have memory scrubbers which check memory. Actually,
we've had these for about 10 years, but they only work for ECC
memory... if you have only parity memory, then you can't fix anything
at the hardware level, and the best you can hope for is that FMA will do
the
I'd like to extend my ZFS root pool by adding the old swap and root slices
left over from the previous LU BE (Live Upgrade boot environment).
Are there any known issues with concatenating slices from the same drive?
Cheers,
Ian.
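For what it's worth, a sketch of what the attempt would look like (device names are made up); note that root pools have traditionally been limited to a single top-level vdev or mirror, so zpool add may refuse the concatenation:

zpool status rpool         # check the current layout first
zpool add rpool c0t0d0s1   # old swap slice (hypothetical name)
zpool add rpool c0t0d0s3   # old root slice (hypothetical name)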
I have built mine the last few days, and it seems to be running fine right now.
Originally I wanted Solaris 10, but switched to using SXCE (Nevada build 94,
the latest right now) because I wanted the new CIFS support and some additional
ZFS features.
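In case it is useful to others, a minimal sketch of the new CIFS bits (the pool and share names are invented):

svcadm enable -r smb/server                        # start the in-kernel CIFS service
zfs create -o sharesmb=name=storage tank/storage   # create and share a dataset over SMB
sharemgr show -vp                                  # verify the share is exported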
Here's my setup. These were my goals:
-
On Mon, Jul 28, 2008 at 04:13:54PM -0700, Steve wrote:
From the information obtained, it seems that the better choice is the ASUS
M2A-VM: happily tested, cheap enough (47€), not bad performance, 4 SATA ports,
gigabit Ethernet, DVI, FireWire, etc. The only concern was a possible DMA bug in the
south bridge,
I have built mine the last few days, and it seems to
be running fine right now.
Originally I wanted Solaris 10, but switched to using
SXCE (Nevada build 94, the latest right now) because
I wanted the new CIFS support and some additional ZFS
features.
Here's my setup. These were my
I would love to go back to using Shuttles.
Actually, my ideal setup would be:
Shuttle XPC w/ 2x PCI-e x8 or x16 lanes
2x PCI-e eSATA cards (each with 4 eSATA port multiplier ports)
Then I could chain up to 8 enclosures off a single small, nearly silent host
machine.
8 enclosures x 5 drives = 40 drives
"mp" == Mattias Pantzare <[EMAIL PROTECTED]> writes:
This is a big one: ZFS can continue writing to an unavailable
pool. It doesn't always generate errors (I've seen it copy
over 100 MB before erroring), and if not spotted, this *will*
cause data loss after you reboot.
W. Wayne Liauh wrote:
As to cases, our experience is that unless you have good air conditioning or
a means to nicely enclose your machine (like the BlackBox :-) ), you should get
a box as big as your space allows. We have had enough bad experiences with mini
cases, especially those Shuttle-type
Holy crap! That sounds cool. Firmware-based VPN connectivity!
At Intel we're getting better too I suppose.
Anyway... I don't know where you're at in the company but you should rattle
some cages about my idea :)