Hi,
I think you are saying that you copied the data on this system from a
previous system with hardware problems. It looks like the data that was
copied was corrupt, which is causing the permanent errors on the new
system (?)
The manual removal of the corrupt files, zpool scrub and zpool clear m
Cindy, thank you for your answer, but I need to explain some details. This pool
is new hardware for my system (2x1TB WD Green hard drives), but the data on this
pool was copied from an old 9x300GB hard drive pool with hardware problems.
While I copied the data there were many errors, but at the end I see this pictur
Hi--
The best approach is to correct the issues that are causing these
problems in the first place. The fmdump -eV command will identify
the hardware problems that caused the checksum errors and the corrupted
files.
You might be able to use some combination of zpool scrub, zpool clear,
and remo
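The sequence Cindy outlines can be sketched as a small script. This is a dry run that only prints the commands it would execute; the pool name "green" comes from the thread, but the corrupt file path is hypothetical:

```shell
#!/bin/sh
# Dry-run sketch of the recovery sequence: inspect fault reports, remove
# (or restore) the corrupt files, scrub, then clear the error counters.
# Pool name "green" is from the thread; the file path is made up.
POOL=green
CMDS=""
run() { CMDS="$CMDS$* ; "; echo "+ $*"; }   # print instead of executing

run fmdump -eV                   # review the fault events behind the checksum errors
run rm /green/some/corrupt/file  # handle each file "zpool status -v" lists
run zpool scrub "$POOL"          # re-verify every block in the pool
run zpool status -v "$POOL"      # wait for the scrub, then re-check the error list
run zpool clear "$POOL"          # reset the error counters once the list is clean
```

Drop the `run` wrapper (or make it execute `"$@"`) to perform the steps for real, and restore from backup rather than `rm` where you still need the file.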
Hello !
Can anybody help me with some trouble:
j...@opensolaris:~# zpool status -v
pool: green
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possi
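For scripting, the file list in the "Permanent errors" section at the bottom of `zpool status -v` can be pulled out with a little awk. A minimal sketch against canned output (the paths are invented; on a damaged dataset the entries may be `<pool/dataset:0xNN>` object numbers instead of paths):

```shell
#!/bin/sh
# Extract the entries from the "Permanent errors" section of
# "zpool status -v". The sample output below is canned; on a live
# system you would pipe the real `zpool status -v green` in instead.
sample='  pool: green
 state: ONLINE
errors: Permanent errors have been detected in the following files:

        /green/docs/report.odt
        green/data:<0x2d>'

# After the "Permanent errors" header, print every non-blank field.
errs=$(printf '%s\n' "$sample" | awk '/Permanent errors/{f=1; next} f && NF {print $1}')
printf '%s\n' "$errs"
```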
On Sat, Dec 05, 2009 at 01:52:12AM +0300, Victor Latushkin wrote:
> On Dec 5, 2009, at 0:52, Cindy Swearingen wrote:
>
> >The zpool status -v command will generally print out filenames, dnode
> >object numbers, or identify metadata corruption problems. These look
> >like object numbers, becau
On Fri, Dec 04, 2009 at 02:52:47PM -0700, Cindy Swearingen wrote:
>
> If space/dcc is a dataset, is it mounted? ZFS might not be able to
> print the filenames if the dataset is not mounted, but I'm not sure
> if this is why only object numbers are displayed.
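When `zpool status -v` shows only an object number like `<space/dcc:0x2d>`, `zdb` can sometimes map it back to a path. One wrinkle: the status output prints the object number in hex, while `zdb -dddd` takes it in decimal. A sketch (the dataset name is from the thread; the object number 0x2d is invented):

```shell
#!/bin/sh
# Convert the hex object number from a "zpool status -v" entry such as
# <space/dcc:0x2d> into the decimal form zdb expects, then print the
# zdb command (dry run). The object number here is hypothetical.
entry='space/dcc:0x2d'
ds=${entry%%:*}                   # dataset  -> space/dcc
obj_hex=${entry##*:}              # hex id   -> 0x2d
obj_dec=$(printf '%d' "$obj_hex") # decimal  -> 45
cmd="zdb -dddd $ds $obj_dec"      # dumps the object, often including its path
echo "$cmd"
```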
Yes, it's mounted and is quite an acti
On Dec 5, 2009, at 0:52, Cindy Swearingen wrote:
Hi Gary,
To answer your questions, the hardware read some data and ZFS detected
a problem with the checksums in this dataset and reported this
problem.
ZFS can do this regardless of ZFS redundancy.
I don't think a scrub will fix these perm
Hi Gary,
To answer your questions, the hardware read some data and ZFS detected
a problem with the checksums in this dataset and reported this problem.
ZFS can do this regardless of ZFS redundancy.
I don't think a scrub will fix these permanent errors, but it depends
on the corruption. If its da
I just noticed this today:
# zpool status -v
pool: space
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
Thanks. From what I read on the forum, it also seems to be a problem on
physical installs where a drive prematurely reports its cache as flushed to
disk to improve benchmark results.
Following advice from Sun, I lodged a bug report because of the core dumps on
failed assertion (#5949).
Hopefully a zdb bug f
Can't help with recovering your data, but I can shed some light on how this may
have happened; it's in another old thread.
This problem may happen if ZFS thought the data had been written but it
wasn't! It can happen in a virtual machine environment, as the VM has to go
through host OS buffers, which ma
Reading through the post, the error message didn't come through properly. It is
"tank/mail:0x0" (with a less-than and a greater-than sign on either side of the 0x0).
Also, the 4 disks (2 vdevs x 2 for raid-z) are physical SATA disks dedicated to
the VMware image.
Thanks.
--
This message posted from ope
Hi
I am looking for guidance on the following zfs setup and error:
- opensolaris 2008.05 running as guest in vmware server - ubuntu host
- system has run flawlessly as an NFS file server for some months now. Single
zpool (called 'tank'), 2 vdevs each as raid-Z, about 10 filesystems (one of
them
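A setup like the one described (one pool "tank", two two-disk raid-z vdevs, around ten filesystems, served over NFS) could have been created roughly like this. Shown as a dry run; the device names are hypothetical, and only the `tank/mail` filesystem name is taken from the thread:

```shell
#!/bin/sh
# Dry-run sketch of creating the pool described in the thread: one pool
# "tank" with two raid-z vdevs of two disks each, plus one of its
# filesystems shared over NFS. Device names c1t0d0..c1t3d0 are made up.
CMDS=""
run() { CMDS="$CMDS$* ; "; echo "+ $*"; }   # print instead of executing

run zpool create tank \
    raidz c1t0d0 c1t1d0 \
    raidz c1t2d0 c1t3d0
run zfs create tank/mail
run zfs set sharenfs=on tank/mail   # the thread says it serves NFS
```

Note that a two-disk raid-z vdev tolerates one failure, like a mirror, but a mirror would normally be the more conventional choice at that width.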