On Mon, Nov 9, 2009 at 2:51 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi,
I can't find any bug-related issues with marvell88sx2 in b126.
I looked over Dave Hollister's shoulder while he searched for
marvell in his webrevs of this putback and nothing came up:
driver change with
Does this mean that there are no driver changes in marvell88sx2, between b125
and b126? If there were no driver changes, then it means that we were both
extremely unlucky with our drives, because we both had checksum errors? And my
disks were brand new.
How probable is this? Something is weird here. What is
I believe the best practice is to use separate disks/zpools for Oracle database
files, as the record size needs to be set the same as the db block size when
using a JBOD or internal disks.
If the server is using a large SAN LUN, can anybody see any issues if there is
only one zpool and the
Apparently went live on 6th November. This isn't FreeBSD 8.x zfs,
but at least raidz2 is there.
http://www.freenas.org/
FreeNAS 0.7 (Khasadar)
Sunday, 21 June 2009
Major changes:
* Add ability to configure the login shell for a user.
* Upgrade Samba to 3.0.37.
* Upgrade
On 01/09/09 08:26, James Andrewartha wrote:
Jorgen Lundman wrote:
The mv8 is a Marvell-based chipset, and it appears there are no
Solaris drivers for it. There doesn't appear to be any movement from
Sun or Marvell to provide any, either.
Do you mean specifically Marvell 6480 drivers? I use
Hi Orvar,
Correct, I don't see any marvell88sx2 driver changes between b125 and b126.
So far, only you and Tim are reporting these issues.
Generally, we see bugs filed by the internal test teams if they see
similar problems.
I will try to reproduce the RAIDZ checksum errors separately from the
So, I currently have a pool with 12 disks raid-z2 (12+2). As you may have
seen in the other thread, I've been having on and off issues with b126
randomly dropping drives. Well, I think after changing several cables, and
doing about 20 reboots plugging one drive in at a time (I only booted to the
James C. McPherson wrote, On 09-11-09 04:40 PM:
Roman Naumenko wrote:
Interesting stuff.
By the way, is there a place to watch the latest news like this on zfs/opensolaris?
RSS maybe?
You could subscribe to onnv-not...@opensolaris.org...
James C. McPherson
--
Senior Kernel Software
Toby Thain wrote:
On 8-Nov-09, at 12:20 PM, Joe Auty wrote:
Tim Cook wrote:
On Sun, Nov 8, 2009 at 2:03 AM, besson3c j...@netmusician.org wrote:
...
Why not just convert the VMs to run in VirtualBox and run Solaris
directly on the hardware?
That's
On Nov 10, 2009, at 5:32 AM, Ian Garbutt wrote:
I believe the best practice is to use separate disks/zpools for
Oracle database files, as the record size needs to be set the same as
the db block size when using a JBOD or internal disks.
recordsize is not a pool property, it is a dataset property.
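For example, something like this (the pool/dataset names and the 8K size are
illustrative only, not from this thread):

  zfs create tank/oradata
  zfs set recordsize=8k tank/oradata   # match db_block_size
  zfs get recordsize tank/oradata

Because recordsize is per dataset, the Oracle data files can live in their own
filesystem on a shared pool instead of needing a separate pool.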
On Nov 10, 2009, at 1:25 AM, Orvar Korvar wrote:
Does this mean that there are no driver changes in marvell88sx2,
between b125 and b126? If there were no driver changes, then it means
that we were both extremely unlucky with our drives, because we both had
checksum errors? And my disks were brand new.
Hi,
*everybody* is interested in the flag days page. Including me.
Asking me to raise the priority is not helpful.
From my perspective, it's a surprise that 'everybody' is interested, as I'm
not seeing a lot of people complaining that the flag day page is not being updated.
Only a couple of
On Nov 10, 2009, at 10:23 AM, Andrew Daugherity wrote:
For example:
rsync -avn --delete-before /export/ims/.zfs/snapshot/zfs-auto-snap_daily-2009-11-09-1900/ \
    /export/ims/.zfs/snapshot/zfs-auto-snap_daily-2009-11-08-1900/
[...]
If you cared to see changes within files (I don't),
toss
Thanks for the info, although the audit system seems a lot more complex than what I
need. Would still be nice if they fixed bart to work on large filesystems,
though.
Turns out the solution was right under my nose -- rsync in dry-run mode works
quite well as a snapshot diff tool. I'll share this
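Concretely, it looks like this, reusing the paths from the earlier example;
rsync's standard -i/--itemize-changes flag is my own addition to get a
per-file change summary:

  rsync -avni --delete-before /export/ims/.zfs/snapshot/zfs-auto-snap_daily-2009-11-09-1900/ \
      /export/ims/.zfs/snapshot/zfs-auto-snap_daily-2009-11-08-1900/

The -n keeps it a dry run, so nothing in either (read-only) snapshot is touched.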
Say I end up with a handful of unrecoverable bad blocks that just so happen to
be referenced by ALL of my snapshots (in some file that's been around forever).
Say I don't care about the file or two in which the bad blocks exist. Is
there any way to purge those blocks from the pool (and all
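For what it's worth, the usual workaround is to find the damaged files and
destroy every snapshot that still references them; the pool, dataset, and
snapshot names here are made up for illustration:

  zpool status -v tank        # lists files with unrecoverable errors
  zfs destroy tank/fs@snap1   # repeat for each snapshot referencing the file
  rm /tank/fs/damaged-file    # remove the live copy, then scrub again

There is no way to free individual blocks out of an existing snapshot.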
On Tue, Nov 10, 2009 at 2:40 PM, BJ Quinn bjqu...@seidal.com wrote:
Say I end up with a handful of unrecoverable bad blocks that just so happen
to be referenced by ALL of my snapshots (in some file that's been around
forever). Say I don't care about the file or two in which the bad blocks
On Tue, Nov 10, 2009 at 03:04:24PM -0600, Tim Cook wrote:
No. The whole point of a snapshot is to keep a consistent on-disk state
from a certain point in time. I'm not entirely sure how you managed to
corrupt blocks that are part of an existing snapshot though, as they'd be
read-only.
On Tue, Nov 10, 2009 at 3:19 PM, A Darren Dunham ddun...@taos.com wrote:
On Tue, Nov 10, 2009 at 03:04:24PM -0600, Tim Cook wrote:
No. The whole point of a snapshot is to keep a consistent on-disk state
from a certain point in time. I'm not entirely sure how you managed to
corrupt blocks
On Tue, Nov 10, 2009 at 03:33:22PM -0600, Tim Cook wrote:
You're telling me a scrub won't actively clean up corruption in snapshots?
That sounds absolutely absurd to me.
Depends on how much redundancy you have in your pool. If you have no
mirrors, no RAID-Z, and no ditto blocks for data, well,
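On a redundant pool the check looks something like this (the pool name is a
placeholder):

  zpool scrub tank
  zpool status -v tank

A repairable pool reports something like "scrub repaired ...", while damage
with no good copy shows up under "errors: Permanent errors have been detected".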
Greetings folks,
Something funny happened to my amd64 box last night. I shut it down while a
scrub was running on rpool. This was not a fast reboot or anything like that.
Since then, the system does not come up any more. I still can boot in single
user mode but /sbin/zfs mount -va hangs while
Via a posting on the zfs-fuse mailing list, I came across ZLE compression, which
seems to be part of the dedup commit from some days ago:
http://hg.genunix.org/onnv-gate.hg/diff/e2081f502306/usr/src/uts/common/fs/zfs/zle.c
--snip--
 * Zero-length encoding. This is a fast and simple algorithm to
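For anyone curious, the whole idea fits in a few lines of C. This is a
hand-rolled sketch of zero-length encoding, not the actual zle.c code -- the
token layout here (zero-run count, literal count, literal bytes) is invented
purely for illustration:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Encode src into dst; only runs of zero bytes are compressed. */
static size_t
zle_sketch(const unsigned char *src, size_t slen,
    unsigned char *dst, size_t dcap)
{
	size_t si = 0, di = 0;

	while (si < slen) {
		size_t zeros = 0, lits = 0, start;

		/* Count a run of zeros, capped at 255 per token. */
		while (si < slen && src[si] == 0 && zeros < 255) {
			si++;
			zeros++;
		}
		/* Count the following non-zero (literal) bytes. */
		start = si;
		while (si < slen && src[si] != 0 && lits < 255) {
			si++;
			lits++;
		}
		if (di + 2 + lits > dcap)
			return (0);	/* output would not fit */
		dst[di++] = (unsigned char)zeros;
		dst[di++] = (unsigned char)lits;
		memcpy(dst + di, src + start, lits);
		di += lits;
	}
	return (di);
}

int
main(void)
{
	unsigned char in[64] = { 0 }, out[128];
	size_t n;

	in[60] = 0xab;	/* a mostly-zero 64-byte block */
	n = zle_sketch(in, sizeof (in), out, sizeof (out));
	printf("64 bytes -> %zu bytes\n", n);	/* prints 5 */
	return (0);
}

Decoding just replays the tokens in order, which is why it is so fast: no
dictionary, no bit twiddling, just memset() and memcpy().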
On Fri, 2009-09-11 at 13:51 -0400, Will Murnane wrote:
On Thu, Sep 10, 2009 at 13:06, Will Murnane will.murn...@gmail.com wrote:
On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld sommerf...@sun.com wrote:
Any suggestions?
Let it run for another day.
I'll let it keep running as long as it
Hi Tim,
I'm not sure I understand this output completely, but have you
tried detaching the spare?
Cindy
On 11/10/09 09:21, Tim Cook wrote:
So, I currently have a pool with 12 disks raid-z2 (12+2). As you may
have seen in the other thread, I've been having on and off issues with
b126
On Tue, Nov 10, 2009 at 4:38 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi Tim,
I'm not sure I understand this output completely, but have you
tried detaching the spare?
Cindy
Hey Cindy,
Detaching did in fact solve the issue. During my previous issues when the
spare kicked in,
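For the archives, the command in question is just this; pool and device names
are placeholders, not Tim's actual config:

  zpool detach tank c2t3d0

Detaching a hot spare that is no longer needed returns it to the AVAIL state
in the pool's spares list.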
On Tue, Nov 10, 2009 at 10:55 AM, Richard Elling
richard.ell...@gmail.com wrote:
On Nov 10, 2009, at 1:25 AM, Orvar Korvar wrote:
Does this mean that there are no driver changes in marvell88sx2, between
b125 and b126? If there were no driver changes, then it means that we were both
extremely unlucky with
I believe it was physical corruption of the media. The strange thing is that the
last time it happened to me, it also managed to replicate the bad blocks over to
my backup server, replicated with SNDR...
And yes, it IS read-only, and a scrub will NOT actively clean up corruption in
snapshots. It will
I've been following the use of SSD with ZFS and HSPs for some time now, and I
am working (in an architectural capacity) with one of our IT guys to set up our
own ZFS HSP (using a J4200 connected to an X2270).
The best practice seems to be to use an Intel X25-M for the L2ARC (Readzilla)
and an
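For reference, both kinds of device are attached with zpool add; the pool and
device names below are placeholders:

  zpool add tank cache c4t0d0   # L2ARC (Readzilla)
  zpool add tank log c4t1d0     # separate intent log (Logzilla)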
Hi,
I was discussing the common practice of disk eradication used by many firms for
security. I was thinking this may be a useful feature for ZFS: an option to
eradicate data as it's removed, meaning that after the last reference/snapshot is
gone and a block is freed, then write the
On Tue, Nov 10, 2009 at 6:51 PM, George Janczuk
geor...@objectconsulting.com.au wrote:
I've been following the use of SSD with ZFS and HSPs for some time now, and
I am working (in an architectural capacity) with one of our IT guys to set
up our own ZFS HSP (using a J4200 connected to an
Typically this is called Sanitization and could be done as part of
an evacuation of data from the disk in preparation for removal.
You would want to specify the patterns to write and the number of
passes.
-- mark
Brian Kolaci wrote:
Hi,
I was discussing the common practice of disk
This didn't occur on a production server, but I thought I'd post this anyway
because it might be interesting.
I'm currently testing a ZFS NAS machine consisting of a Dell R710 with two Dell
5/E SAS HBAs. Right now I'm in the middle of torture testing the system,
simulating drive failures,
On Nov 10, 2009, at 20:55, Mark A. Carlson wrote:
Typically this is called Sanitization and could be done as part of
an evacuation of data from the disk in preparation for removal.
You would want to specify the patterns to write and the number of
passes.
See also remanence:
Excuse me for mentioning it, but why not just use the format command?
From format(1M):
analyze - Run read, write, compare tests, and data purge. The data purge
function implements the National Computer Security Center Guide to
Understanding Data Remanence (NCSC-TG-025 version 2) Overwriting
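From memory, the interactive session runs roughly like this; the prompts are
abbreviated and approximate, so check format(1M) on your release:

  # format
  (select the target disk from the menu)
  format> analyze
  analyze> purge

Note the purge writes multiple patterns across the whole disk, so it destroys
all data and can take many hours on a large drive.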
Upgrading to the latest dev release fixed the problem for me.
On Tue, Nov 10, 2009 at 5:15 PM, Tim Cook t...@cook.ms wrote:
On Tue, Nov 10, 2009 at 10:55 AM, Richard Elling richard.ell...@gmail.com
wrote:
On Nov 10, 2009, at 1:25 AM, Orvar Korvar wrote:
Does this mean that there are no driver changes in marvell88sx2, between
b125 and b126? If no