On 18/09/10 15:25, George Wilson wrote:
Tom Bird wrote:
In my case, other than an hourly snapshot, the data is not
significantly changing.
It'd be nice to see a response other than "you're doing it wrong";
rebuilding 5x the data on a drive relative to its capacity is clearly
erratic behaviour.
On 18/09/10 09:02, Ian Collins wrote:
On 09/18/10 06:47 PM, Carsten Aulbert wrote:
Does anyone have an idea how it is possible to resilver 678G of data on a 500G
drive?
I see this all the time on a troublesome Thumper. I believe this happens
because the data in the pool is continuously changing.
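For reference, the resilver figures quoted in this thread come from the scan
line of zpool status. A rough sketch of how to watch it (pool name "tank" is
hypothetical); on a pool that is continuously changing, the scanned total can
grow past the device size because ongoing writes keep extending the work:

```shell
# Check resilver progress and per-device state; the scanned figure
# can exceed the disk's capacity when data keeps changing underneath:
zpool status -v tank

# Snapshot churn during the resilver (e.g. the hourly snapshots
# mentioned above) can be listed with:
zfs list -t snapshot -r tank
```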
On 18/09/10 13:06, Edho P Arief wrote:
On Sat, Sep 18, 2010 at 7:01 PM, Tom Bird t...@marmot.org.uk wrote:
All said and done though, we will have to live with snv_134's bugs from now
on, or perhaps I could try Sol 10.
or OpenIllumos. Or Nexenta. Or FreeBSD. Or <insert osol distro name>.
...
Morning,
c7t5000CCA221F4EC54d0 is a 2T disk, how can it resilver 5.63T of it?
This is actually an old capture of the status output; it got to nearly
10T before deciding that there was an error and not completing. Reseat the
disk and it's doing it all again.
It's happened on another pool as
r...@cs6:~# zpool import
  pool: content3
    id: 14184872052409584084
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
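Following the action line above, a forced import can be attempted. A hedged
sketch of the usual escalation (pool name taken from the output above; the
recovery-mode flags exist in later builds of zpool import):

```shell
# Plain forced import; '-f' overrides the "may be active on another
# system" check mentioned in the action text:
zpool import -f content3

# If the metadata itself is corrupted, recovery mode ('-F') rewinds
# to the last consistent transaction group, discarding the most
# recent writes:
zpool import -F content3

# '-n' together with '-F' previews what would be discarded without
# actually performing the rewind:
zpool import -Fn content3
```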
Afternoon,
I note to my dismay that I can't get the community edition any more
past snv_129; this version was closest to the normal way of doing things
that I am used to with Solaris <= 10. The standard OpenSolaris releases
seem only to have this horrible Gnome based installer that gives you
Richard Elling wrote:
It is not true that there is only a horrible Gnome based installer. Try the
Automated Installation (AI) version instead of the LiveCD if you've used
JumpStart previously.
But if you just want a text-based installer and AI is overkill, then b131 is
available with the
Ross wrote:
Yup, that one was down to a known (and fixed) bug though, so it isn't
the normal story of ZFS problems.
Got a bug ID or anything for that, just out of interest?
As an update on my storage situation, I've got some JBODs now, see how
that goes.
--
Tom
// www.portfast.co.uk
Victor Latushkin wrote:
This issue (and previous one reported by Tom) has got some publicity
recently - see here
http://www.uknof.org.uk/uknof13/Bird-Redux.pdf
So I feel like I need to provide a little bit more information about the
outcome (sorry that it is delayed and not as full as
Hi guys,
I've been having trouble with my archival kit, in the performance
department rather than data loss this time (phew!).
At the point when I took these stats there was about 250 Mbit of traffic
outbound on an ixgb NIC on the thing, also about 100 Mbit of new stuff
incoming.
As you
Toby Thain wrote:
On 18-Jan-09, at 6:12 PM, Nathan Kroenert wrote:
Hey, Tom -
Correct me if I'm wrong here, but it seems you are not allowing ZFS any
sort of redundancy to manage.
Every other file system out there runs fine on a single LUN; when things
go wrong you have a fsck utility that
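The redundancy point above is the usual advice: with a single LUN, ZFS can
detect corruption through its checksums but has no second copy to repair from.
A hedged sketch of the two common ways to give it one (device and pool names
hypothetical):

```shell
# Give ZFS whole disks in a mirror (or raidz) vdev so it can
# self-heal checksum errors from the redundant copy:
zpool create tank mirror c7t0d0 c7t1d0

# Alternatively, on an existing single-device pool, store two copies
# of each new block; this protects against localized corruption but
# not loss of the whole device:
zfs set copies=2 tank
```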
Morning,
For those of you who remember last time, this is a different Solaris,
different disk box and different host, but the epic nature of the fail
is similar.
The RAID box that is the 63T LUN has a hardware fault and has been
crashing; up to now the box and host got restarted and both came up
Tim wrote:
On Sun, Jan 18, 2009 at 8:02 AM, Tom Bird t...@marmot.org.uk wrote:
errors: Permanent errors have been detected in the following files:
        content:0x0
        content:0x2c898
r...@cs4:~# find /content
/content
r...@cs4
Bob Friesenhahn wrote:
On Sun, 11 Jan 2009, Eric D. Mudama wrote:
My impression is not that other OS's aren't interested in ZFS, they
are; it's that the licensing restrictions limit native support to
Solaris, BSD, and OS X.
Perhaps the philosophical issues of the other OS's (i.e. Linux) are
Victor Latushkin wrote:
Hi Tom and all,
[EMAIL PROTECTED]:~# uname -a
SunOS cs3.kw 5.10 Generic_127127-11 sun4v sparc SUNW,Sun-Fire-T200
Btw, have you considered opening support call for this issue?
As a follow up to the whole story, with the fantastic help of Victor,
the failed pool is
Victor Latushkin wrote:
Hi Tom and all,
Tom Bird wrote:
Hi,
Have a problem with a ZFS on a single device, this device is 48 1T SATA
drives presented as a 42T LUN via hardware RAID 6 on a SAS bus which had
a ZFS on it as a single device.
There was a problem with the SAS bus which caused
Hi,
Have a problem with a ZFS on a single device, this device is 48 1T SATA
drives presented as a 42T LUN via hardware RAID 6 on a SAS bus which had
a ZFS on it as a single device.
There was a problem with the SAS bus which caused various errors
including the inevitable kernel panic, the thing