Dear Cindy and Edward,
Many thanks for your input. Indeed there is something wrong with the SSD.
Smartmontools also confirmed a couple of errors.
So I opened a case and hopefully they will replace the SSD. What did I learn?
- Be careful with special offers
- Use rock-solid components for your home server, too
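For anyone hitting the same thing, here is a minimal sketch of pulling the logged error count out of smartmontools output. The sample line is inlined so the snippet is self-contained; on a live system you would pipe `smartctl -l error /dev/rdsk/c8d0s0` instead, and that device path is an assumption, not confirmed from this thread.

```shell
# Hedged sketch: extract the logged ATA error count from smartctl output.
# SAMPLE stands in for real `smartctl -l error <device>` output.
SAMPLE='ATA Error Count: 4 (device log contains only the most recent five errors)'

# Pull out the numeric count after "ATA Error Count:"
ERRORS=$(printf '%s\n' "$SAMPLE" | sed -n 's/^ATA Error Count: \([0-9]*\).*/\1/p')
echo "logged ATA errors: $ERRORS"
```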
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Benjamin Grogg
>
> When I scrub my pool I get a lot of checksum errors:
>
> NAME        STATE     READ WRITE CKSUM
> rpool       DEGRADED     0     0     5
>   c8d0s0    DEGRA
Hi Benjamin,
I'm not familiar with this disk, but you can see from the fmstat output that
the disk, system-event, and ZFS-related diagnostic engines are working
overtime on something, and it's probably this disk.
You can get further details from fmdump -eV and you will probably
see lots of checksum errors on this di
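As a rough sketch of what to look for, the ZFS checksum ereports in the FMA error log can be counted like this. Sample lines are inlined (the real command needs a live log); the class names follow the usual `ereport.fs.zfs.*` scheme.

```shell
# Hedged sketch: count ZFS checksum ereports.
# SAMPLE stands in for `fmdump -e` output on a live system.
SAMPLE='Apr 14 02:57:01.0000 ereport.fs.zfs.checksum
Apr 14 02:57:01.0000 ereport.fs.zfs.checksum
Apr 14 02:58:12.0000 ereport.fs.zfs.io'

# grep -c counts the matching lines
CKSUM=$(printf '%s\n' "$SAMPLE" | grep -c 'ereport\.fs\.zfs\.checksum')
echo "checksum ereports: $CKSUM"
```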
Dear Forum
I use a KINGSTON SNV125-S2/30GB SSD on an ASUS M3A78-CM motherboard (AMD SB700
chipset).
SATA type (in BIOS) is SATA.
OS: SunOS homesvr 5.11 snv_134 i86pc i386 i86pc
When I scrub my pool I get a lot of checksum errors:
NAME        STATE     READ WRITE CKSUM
rpool       DEGRA
[this seems to be the question of the day, today...]
On Apr 14, 2010, at 2:57 AM, bonso wrote:
> Hi all,
> I recently experienced a disk failure on my home server and observed checksum
> errors while resilvering the pool and on the first scrub after the resilver
> had completed. Now everything
Hi all,
I recently experienced a disk failure on my home server and observed checksum
errors while resilvering the pool and on the first scrub after the resilver had
completed. Now everything seems fine but I'm posting this to get help with
calming my nerves and detect any possible future fault
Hi,
One of my colleagues was confused by the output of 'zpool status' on a pool
where a hot spare is being resilvered in after a drive failure:
$ zpool status data
pool: data
state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
continue to function,
Hi UNIX admin,
I would check fmdump -eV output to see if this error is isolated or
persistent.
If fmdump says this error is isolated, then you might just monitor the
status. For example, if fmdump says that these errors occurred on 6/15
and you moved this system on that date or you know that som
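The "isolated vs. persistent" check described above can be sketched as counting the distinct days on which ereports were logged: one day suggests a one-off incident (a move, a power event), many days suggests failing hardware. Sample input is inlined; a live system would feed in `fmdump -e` output instead.

```shell
# Hedged sketch: one distinct day => likely an isolated incident;
# many days => a persistent problem worth replacing hardware over.
SAMPLE='Jun 15 10:01:33.0000 ereport.fs.zfs.checksum
Jun 15 10:01:34.0000 ereport.fs.zfs.checksum
Jun 15 11:20:00.0000 ereport.fs.zfs.checksum'

# Keep only the month/day columns, deduplicate, count
DAYS=$(printf '%s\n' "$SAMPLE" | awk '{print $1, $2}' | sort -u | wc -l)
echo "distinct days with ereports: $DAYS"
```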
pool: space01
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the dev
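A small sketch of reading the state field out of a `zpool status` block (sample text inlined, since the real command needs a live pool). If the errors turn out to be transient, the action line's suggestion amounts to a `zpool clear` followed by a fresh scrub to confirm the device is clean.

```shell
# Hedged sketch: extract the pool state from zpool status output.
# SAMPLE stands in for `zpool status space01` on a live system.
SAMPLE='  pool: space01
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.'

# Strip the "state:" label and any leading spaces
STATE=$(printf '%s\n' "$SAMPLE" | sed -n 's/^ *state: *//p')
echo "pool state: $STATE"
# If the errors were transient:  zpool clear space01 && zpool scrub space01
```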
Hi Jay,
Jay Anderson schrieb:
> I have b105 running on a Sun Fire X4500, and I am constantly seeing checksum
> errors reported by zpool status. The errors are showing up over time on every
> disk in the pool. In normal operation there might be errors on two or three
> disks each day, and someti
I have b105 running on a Sun Fire X4500, and I am constantly seeing checksum
errors reported by zpool status. The errors are showing up over time on every
disk in the pool. In normal operation there might be errors on two or three
disks each day, and sometimes there are enough errors so it repor
> "tn" == Thomas Nau <[EMAIL PROTECTED]> writes:
tn> I never experienced that one but we usually don't touch any of
tn> the iSCSI settings as long as a devices is offline. At least
tn> as long as we don't have to for any reason
Usually I do 'zpool offline' followed by 'iscsiadm re
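The offline-then-online sequence can be scripted as a dry run first, so the exact command order is reviewable before anything touches a live pool. The pool and device names below are placeholders, not taken from this thread.

```shell
# Hedged sketch: generate (not execute) the offline/patch/online plan
# for the mirror halves hosted on the iSCSI server being patched.
POOL=data
DEVICES='c2t1d0 c2t2d0'

PLAN=$(
  for dev in $DEVICES; do echo "zpool offline $POOL $dev"; done
  echo "# ... patch / reboot the iSCSI target server here ..."
  for dev in $DEVICES; do echo "zpool online $POOL $dev"; done
)
printf '%s\n' "$PLAN"
```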
Miles
On Sat, 2 Aug 2008, Miles Nordin wrote:
>> "tn" == Thomas Nau <[EMAIL PROTECTED]> writes:
>
>tn> Nevertheless during the first hour of operation after onlining
>tn> we recognized numerous checksum errors on the formerly
>tn> offlined device. We decided to scrub the pool and a
> "tn" == Thomas Nau <[EMAIL PROTECTED]> writes:
tn> Nevertheless during the first hour of operation after onlining
tn> we recognized numerous checksum errors on the formerly
tn> offlined device. We decided to scrub the pool and after
tn> several hours we got about 3500 error i
Dear all
As we wanted to patch one of our iSCSI Solaris servers we had to offline
the ZFS submirrors on the clients connected to that server. The devices
connected to the second server stayed online so the pools on the clients
were still available but in degraded mode. When the server came back
I wrote:
> Bill Sommerfeld wrote:
> > On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
> > > > I ran a scrub on a root pool after upgrading to snv_94, and got
> > > > checksum errors:
> > >
> > > Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> > > on a system that is
Bill Sommerfeld wrote:
> On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
> > > I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> > > errors:
> >
> > Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> > on a system that is running post snv_94 bi
Rustam wrote:
> I'm living with this error for almost 4 months and probably have record
> number of checksum errors:
> # zpool status -xv
> pool: box5
...
> errors: Permanent errors have been detected in the
> following files:
>
> box5:<0x0>
>
> I've Sol 10 U5 though.
I suspect that
Bill Sommerfeld wrote:
> On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
> > > I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> > > errors:
> >
> > Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> > on a system that is running post snv_94 b
Miles Nordin wrote:
> "jk" == Jürgen Keil <[EMAIL PROTECTED]> writes:
> jk> And a zpool scrub under snv_85 doesn't find checksum errors, either.
> how about a second scrub with snv_94? are the checksum errors gone
> the second time around?
Nope.
I've now seen this problem on 4 zpools on three
On Sun, 20 Jul 2008 11:26:16 -0700
Bill Sommerfeld <[EMAIL PROTECTED]> wrote:
> once is accident. twice is coincidence. three times is enemy
> action :-)
I have no access to b94 yet, but as it is, it is probably better to
skip this build when it comes out, then.
--
Dick Hoogendijk -- PGP/GnuPG k
On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
> > I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> > errors:
>
> Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> on a system that is running post snv_94 bits: It also found checksum errors
> "jk" == Jürgen Keil <[EMAIL PROTECTED]> writes:
jk> And a zpool scrub under snv_85 doesn't find checksum errors,
jk> either.
how about a second scrub with snv_94? are the checksum errors gone
the second time around?
I get checksum errors counted all the time when it is really just
I'm living with this error for almost 4 months and probably have record
number of checksum errors:
core# zpool status -xv
pool: box5
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in
> > I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> > errors:
>
> Hmm, after reading this, I started a zpool scrub on my mirrored pool,
> on a system that is running post snv_94 bits: It also found checksum errors
...
> OTOH, trying to verify checksums with zdb -c did
> I ran a scrub on a root pool after upgrading to snv_94, and got checksum
> errors:
Hmm, after reading this, I started a zpool scrub on my mirrored pool,
on a system that is running post snv_94 bits: It also found checksum errors
# zpool status files
pool: files
state: DEGRADED
status: One
I ran a scrub on a root pool after upgrading to snv_94, and got checksum
errors:
pool: r00t
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are
unaffected.
action: Determine if the device needs to
In the meantime, the Sun support engineer figured out that zdb does not work
because zdb uses the information from /etc/zfs/zpool.cache. However,
I had used "zpool import -R" to import the pool, which does not update
/etc/zfs/zpool.cache. Is there another method to map a dataset
number to a filesystem?
Han
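One avenue (an assumption on my part, not something Sun support confirmed): the DATASET column in the persistent-error report is a hexadecimal object-set ID, and `zdb -d <pool>` lists each dataset together with its decimal ID, so converting the hex value lets you grep for it. The ID below is taken from the listing in this thread.

```shell
# Hedged sketch: convert the hex dataset id from the error report to
# decimal so it can be matched against `zdb -d <pool>` output.
HEXID=4c0c    # DATASET column value from the error listing
DECID=$(printf '%d\n' "0x$HEXID")
echo "dataset $HEXID (hex) = ID $DECID (decimal)"
# Live system (with an up-to-date zpool.cache):
#   zdb -d <pool> | grep "ID $DECID"
```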
Hi,
I am using ZFS under Solaris 10u3.
After the failure of a 3510 RAID controller, I have several storage pools
with defective objects. "zpool status -xv" prints a long list:
DATASET  OBJECT  RANGE
4c0c     5dd     lvl=0 blkid=2
28       b346    lvl=0 blkid=9
errors: The following persistent errors have been detected:
DATASET OBJECT RANGE
z_tsmsun1_pool/tsmsrv1_pool 2620 8464760832-8464891904
It looks like possibly a single file is corrupted. My question is: how do I find that file? Is it as si
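One possible answer, hedged because it assumes the damaged object is a plain file in a mounted ZPL dataset: the OBJECT column is the file's object number, which for ZPL files equals the inode number, so a hex-to-decimal conversion plus `find -inum` on the mounted filesystem should locate it. The mountpoint below is a placeholder.

```shell
# Hedged sketch: map a hex object number from the error report to an
# inode number, then locate the file with find(1).
HEXOBJ=5dd    # OBJECT column from the error listing
INUM=$(printf '%d\n' "0x$HEXOBJ")
echo "object $HEXOBJ (hex) = inode $INUM (decimal)"
# Live system, assuming the dataset is mounted at /tsm:
#   find /tsm -xdev -inum "$INUM" -print
```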
Background:
Large ZFS pool built on a couple of Sun 3511 SATA arrays. RAID-5 is done in the
3511s. ZFS is non-redundant. We have been using this setup for a couple of
months now with no issues.
Problem:
Yesterday afternoon we started getting checksum errors. There have been no
hardware errors