On Wed, February 11, 2009 18:16, Uwe Dippel wrote:
I need to disappoint you here: an LED that is inactive for a few seconds is a very
bad indicator of pending writes. I used to experience this with a stick on
Ubuntu, which was silent until the 'umount' and then started writing
for some 10 seconds.
We use several X4540's over here as well. What type of workload do you
have, and how much of a performance increase did you see by disabling the
write caches?
We see the difference between our tests completing in around 2.5 minutes
(with write caches) and around a minute and a half without them,
This sounds like exactly the kind of problem I've been shouting about for 6
months or more. I posted a huge thread on availability on these forums because
I had concerns over exactly this kind of hanging.
ZFS doesn't trust hardware or drivers when it comes to your data - everything
is
After all the statements read here, I just want to highlight another issue regarding
ZFS.
It has been recommended here many times to set copies=2.
When installing Solaris 10 10/2008 or snv_107 you can choose either UFS or
ZFS.
If you choose ZFS, the rpool will be created by default with
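For reference, a minimal sketch of setting that property (the dataset name is a placeholder, and note that copies=2 only applies to blocks written after it is set; it is no substitute for a redundant pool):

    zfs set copies=2 rpool/export/home
    zfs get copies rpool/export/home    # verify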
Are you sure that the write cache is back on after a restart?
Yes, I've checked with format -e, on each drive.
When disabling the write cache with format, it also gives a warning
stating this is the case.
What I'm looking for is a faster way to do this than format -e -d disk
-f script, for all
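A minimal sketch of one way to script that, with two assumptions: the disk list is scraped from format's numbered output, and /root/wcd.cmds holds the answers for format -e's cache menu (verify the exact menu items on your build before trusting this):

    # assumed contents of /root/wcd.cmds, one menu answer per line:
    #   cache
    #   write_cache
    #   disable
    #   y
    #   quit
    #   quit
    for disk in `format </dev/null 2>/dev/null | nawk '/^ *[0-9]+\./ {print $2}'`
    do
        format -e -d $disk -f /root/wcd.cmds
    done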
All that and yet the fact
remains: I've never "ejected" a USB
drive from OS X or Windows, I simply pull it and go,
and I've never once lost data, or had it become
unrecoverable or even corrupted.
And yes, I do keep checksums of all the data
sitting on them and periodically
On Thu, Feb 12, 2009 at 9:25 AM, Ross myxi...@googlemail.com wrote:
This sounds like exactly the kind of problem I've been shouting about for 6
months or more. I posted a huge thread on availability on these forums
because I had concerns over exactly this kind of hanging.
ZFS doesn't trust
Hello Bob,
Wednesday, February 11, 2009, 11:25:12 PM, you wrote:
BF I agree. ZFS apparently syncs uncommitted writes every 5 seconds.
BF If there has been no filesystem I/O (including read I/O due to atime)
BF for at least 10 seconds, and there has not been more data
BF burst-written into
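The 5-second figure matches the txg sync interval on builds of this vintage. A hedged way to inspect it on a live system, assuming the tunable is named zfs_txg_timeout on your build (some older builds used txg_time):

    # print the current txg sync interval, in seconds, from the running kernel
    echo "zfs_txg_timeout/D" | mdb -k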
On Thu, Feb 12, 2009 at 10:33:40AM -0500, Greg Mason wrote:
What I'm looking for is a faster way to do this than format -e -d disk
-f script, for all 48 disks.
Is the speed critical? I mean, do you have to pause startup while the
script runs, or does it interfere with data transfer?
--
Ross wrote:
I can also state with confidence that very, very few of the 100 staff working
here will even be aware that it's possible to unmount a USB volume in Windows.
They will all just pull the plug when their work is saved, and since they all
come to me when they have problems, I think I
I upgraded my 280R system to yesterday's nightly build, and when I
rebooted, this happened:
Boot device:
/p...@8,60/SUNW,q...@4/f...@0,0/d...@w212037e9abe4,0:a File and args:
SunOS Release 5.11 Version snv_108 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
On Thu, February 12, 2009 10:10, Ross wrote:
Of course, that does assume that devices are being truthful when they say
that data has been committed, but a little data loss from badly designed
hardware is, I feel, acceptable, so long as ZFS can have a go at recovering
corrupted pools when it
On 02/11/09 12:14, Jonny Gerold wrote:
I have a non-bootable disk and need to recover files from /root...
When I import the disk via zpool import, /root isn't mounted...
Thanks, Jonny
Heh, yeah, I've thought the same kind of thing in the past. The
problem is that the argument doesn't really work for system admins.
As far as I'm concerned, the 7000 series is a new hardware platform,
with relatively untested drivers, running a software solution that I
know is prone to locking
On Thu, Feb 12, 2009 at 19:02, Brandon High bh...@freaks.com wrote:
There's a post there from a guy using two of the AOC-USAS-L8i in his
system here:
http://hardforum.com/showthread.php?p=1033321345
Read again---he's using the AOC-SAT2-MV8, which is PCI-X. That is
known to work fine, even in
The problem was with the shell. For whatever reason,
/usr/bin/ksh can't rejoin the files correctly. When
I switched to /sbin/sh, the rejoin worked fine, the
cksums matched, ...
The ksh I was using is:
# what /usr/bin/ksh
/usr/bin/ksh:
Version M-11/16/88i
SunOS 5.10 Generic
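For anyone who wants to poke at this, a generic sketch of the split/rejoin/verify pattern being described (file names are placeholders, and the original poster's exact rejoin method isn't shown, so this may not reproduce the ksh88 behavior):

    split -b 1m original part.
    cat part.* > rejoined
    cksum original rejoined    # the two checksums should match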
On Thu, Feb 12, 2009 at 11:31 AM, David Dyer-Bennet d...@dd-b.net wrote:
On Thu, February 12, 2009 10:10, Ross wrote:
Of course, that does assume that devices are being truthful when they say
that data has been committed, but a little data loss from badly designed
hardware is, I feel
On Thu, Feb 12, 2009 at 1:22 PM, Will Murnane will.murn...@gmail.comwrote:
On Thu, Feb 12, 2009 at 19:02, Brandon High bh...@freaks.com wrote:
There's a post there from a guy using two of the AOC-USAS-L8i in his
system here:
http://hardforum.com/showthread.php?p=1033321345
Read
Well, since the write cache flush command is disabled, I would like this
to happen as early as practically possible in the bootup process, as ZFS
will not be issuing cache flush commands to the disks.
I'm not really sure what happens in the case where the write flush
command is disabled,
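For context, the usual way the flush command gets disabled (an assumption about this particular setup) is a kernel tunable in /etc/system, which takes effect from boot:

    * /etc/system fragment: stop ZFS issuing cache flush requests
    set zfs:zfs_nocacheflush = 1

That is exactly why the disks' own write caches then have to be turned off before ZFS starts writing, hence wanting the format script to run as early as possible.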
Mark,
I believe creating an older-version pool is supported:
zpool create -o version=vers whirl c0t0d0
I'm not sure what version of ZFS in Solaris 10 you are running.
Try running zpool upgrade and replacing vers above with that version number.
Neil.
: trasimene ; zpool create -o version=11
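Concretely, the workflow being suggested looks something like this (the version number and device are placeholders; check the target machine first):

    # on the Solaris 10 target: list the pool versions it supports
    zpool upgrade -v
    # on the OpenSolaris source: create the pool at (or below) that version
    zpool create -o version=10 whirl c0t0d0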
Mark Winder wrote:
We’ve been experimenting with zfs on OpenSolaris 2008.11. We created a
pool in OpenSolaris and filled it with data. Then we wanted to move it
to a production Solaris 10 machine (generic_137138_09) so I “zpool
exported” in OpenSolaris, moved the storage, and “zpool
On Thu, Feb 12, 2009 at 11:53:40AM -0500, Greg Palmer wrote:
Ross wrote:
I can also state with confidence that very, very few of the 100 staff
working here will even be aware that it's possible to unmount a USB volume
in Windows. They will all just pull the plug when their work is saved,
Right, well I can't imagine it's impossible to write a small app that can
test whether or not drives are honoring flushes correctly by issuing a commit
and immediately reading back to see if it was indeed committed or not. Like a
zfs test cXtX. Of course, then you can't just blame the hardware
That would be the ideal, but really I'd settle for just improved error
handling and recovery for now. In the longer term, disabling write
caching by default for USB or Firewire drives might be nice.
On Thu, Feb 12, 2009 at 8:35 PM, Gary Mills mi...@cc.umanitoba.ca wrote:
On Thu, Feb 12, 2009
Is this the crux of the problem?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6424510
'For usb devices, the driver currently ignores DKIOCFLUSHWRITECACHE.
This can cause catastrophic data corruption in the event of power loss,
even for filesystems like ZFS that are designed to
Hello,
I need advice on how to import an unformatted partition. I split my 150GB disk
into 3 partitions:
1. 50GB windows
2. 50GB Opensolaris
3. 50GB unformatted
I would like to import the 3rd partition as another pool but I can't see
this partition.
sh-3.2# format -e
Searching for disks...done
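One point worth making: an unformatted partition isn't imported, a new pool is created on it. Assuming the disk is the c7t0d0 seen later in the thread and the third fdisk partition shows up as p3 (both assumptions), something like:

    zpool create datapool c7t0d0p3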
On Thu, Feb 12, 2009 at 20:05, Tim t...@tcsac.net wrote:
Are you selectively ignoring responses to this thread or something? Dave
has already stated he *HAS IT WORKING TODAY*.
No, I saw that post. However, I saw one unequivocal "it doesn't work"
earlier (even if I can't show it to you), which
On Thu, Feb 12, 2009 at 21:59, Jan Hlodan jh231...@mail-emea.sun.com wrote:
I would like to import the 3rd partition as another pool but I can't see this
partition.
sh-3.2# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c7t0d0 <drive type unknown>
Will Murnane wrote:
On Thu, Feb 12, 2009 at 20:05, Tim t...@tcsac.net wrote:
Are you selectively ignoring responses to this thread or something? Dave
has already stated he *HAS IT WORKING TODAY*.
No, I saw that post. However, I saw one unequivocal "it doesn't work"
earlier (even if I can't
For what it's worth, I know that at least one person is using a LSI SAS3081E
card which I believe is based on exactly the same chipset:
http://www.opensolaris.org/jive/message.jspa?messageID=186415
That does look like the issue being discussed.
It's a little alarming that the bug was reported against snv54 and is
still not fixed :(
Does anyone know how to push for resolution on this? USB is pretty
common, like it or not, for storage purposes - especially amongst the
laptop-using dev crowd
On Thu, February 12, 2009 14:02, Tim wrote:
Right, well I can't imagine it's impossible to write a small app that can
test whether or not drives are honoring flushes correctly by issuing a commit
and immediately reading back to see if it was indeed committed or not. Like a
zfs test cXtX. Of
I think you could try clearing the pool - however, consulting the
fault management tools (fmdump and its kin) might be smart first.
It's possible this is an error in the controller.
The output of 'cfgadm' might be of use also.
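A sketch of those checks, in that order (the pool name is a placeholder):

    fmdump -eV | more      # FMA error telemetry; look for disk/controller events
    cfgadm -al             # attachment-point status for controllers and disks
    zpool status -v tank   # current pool state and per-device errors
    zpool clear tank       # then try clearing the errors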
On Wed, Feb 11, 2009 at 7:12 PM, Jens Elkner
Thanks Nathan,
I want to test the underlying performance; of course the problem is that I want
to test the 16 or so disks in the stripe, rather than individual devices.
Thanks
Rob
On 28/01/2009 22:23, Nathan Kroenert nathan.kroen...@sun.com wrote:
Also - My experience with a very small ARC is
I just tried putting a pool on a USB flash drive, writing a file to it, and
then yanking it. I did not lose any data or the pool, but I had to reboot
before I could get any zpool command to complete without freezing. I also had
the OS reboot once on its own, when I tried to issue a zpool command
Hi,
Can anyone explain the following to me?
Two zpool devices point at the same data. I was installing osol
2008.11 in xVM when I saw that there was already a partition on the
installation disk. An old dataset that I deleted since I gave it a
slightly different name than I intended is
On Thu, 2009-02-12 at 17:35 -0500, Blake wrote:
That does look like the issue being discussed.
It's a little alarming that the bug was reported against snv54 and is
still not fixed :(
bugs.opensolaris.org's information about this bug is out of date.
It was fixed in snv_54:
changeset:
On 12-Feb-09, at 3:02 PM, Tim wrote:
On Thu, Feb 12, 2009 at 11:31 AM, David Dyer-Bennet d...@dd-b.net
wrote:
On Thu, February 12, 2009 10:10, Ross wrote:
Of course, that does assume that devices are being truthful when
they say
that data has been committed, but a little data loss
On Thu, 12 Feb 2009, Ross Smith wrote:
As far as I'm concerned, the 7000 series is a new hardware platform,
You are joking right? Have you ever looked at the photos of these
new systems or compared them to other Sun systems? They are just
re-purposed existing systems with a bit of extra
I'm sure it's very hard to write good error handling code for hardware
events like this.
I think, after skimming this thread (a pretty wild ride), we can at
least decide that there is an RFE for a recovery tool for zfs -
something to allow us to try to pull data from a failed pool. That
seems
On Thu, Feb 12, 2009 at 5:16 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Thu, 12 Feb 2009, Ross Smith wrote:
As far as I'm concerned, the 7000 series is a new hardware platform,
You are joking right? Have you ever looked at the photos of these new
systems or compared them
On Thu, Feb 12 at 21:45, Mattias Pantzare wrote:
A read of data in the disk cache will be read from the disk cache. You
can't tell the disk to ignore its cache and read directly from the
platter.
The only way to test this is to write and then remove the power from
the disk. Not easy in software.
I tried to export the zpool also, and I got this, the strange part is
that it sometimes still thinks that the ubuntu-01-dsk01 dataset exists:
# zpool export zpool01
cannot open 'zpool01/xvm/dsk/ubuntu-01-dsk01': dataset does not exist
cannot unmount '/zpool01/dump': Device busy
But:
# zfs
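On the 'Device busy' part, two hedged things to check before retrying the export (paths from the output above): whether a process holds the mountpoint open, and whether the dataset is configured as the system dump device:

    fuser -c /zpool01/dump    # list PIDs with files open under the mountpoint
    dumpadm                   # show the configured dump device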
Blake wrote:
I'm sure it's very hard to write good error handling code for hardware
events like this.
I think, after skimming this thread (a pretty wild ride), we can at
least decide that there is an RFE for a recovery tool for zfs -
something to allow us to try to pull data from a failed pool.
Henrik Johansson wrote:
I tried to export the zpool also, and I got this, the strange part is
that it sometimes still thinks that the ubuntu-01-dsk01 dataset exists:
# zpool export zpool01
cannot open 'zpool01/xvm/dsk/ubuntu-01-dsk01': dataset does not exist
cannot unmount '/zpool01/dump':
On 12-Feb-09, at 7:02 PM, Eric D. Mudama wrote:
On Thu, Feb 12 at 21:45, Mattias Pantzare wrote:
A read of data in the disk cache will be read from the disk cache.
You
can't tell the disk to ignore its cache and read directly from the
platter.
The only way to test this is to write and then
Hey Tim,
I've been happily using the AOC-USAS-L8i since we started talking about it a
while ago. I have it stuck in a generic motherboard from eBay in a PCI-Express
x16 slot, since I wasn't going to have a 3D card in my NAS device or anything.
Using 8 SATA drives across its two ports with
Blake,
On Thu, Feb 12, 2009 at 05:35:14PM -0500, Blake wrote:
That does look like the issue being discussed.
It's a little alarming that the bug was reported against snv54 and is
still not fixed :(
Looks like the bug-report is out of sync.
I see that the bug has been fixed in B54. Here is
bcirvin,
you proposed "something to allow us to try to pull data from a failed pool".
Yes and no. 'Yes' as a pragmatic solution; 'no' for what ZFS was 'sold' to be:
the last filesystem mankind would need. It was conceived as a filesystem that
does not need recovery, due to its guaranteed
Thanks for all the help guys. Based on the success reports, I'll give it a shot
in my Intel S3210SHLC board next week when the UIO card arrives. I'll report
back on the success or destruction that follows... now I just hope Solaris 10
10/08 supports it, but it sounds like it should.
Cheers,
Brent
On February 12, 2009 1:44:34 PM -0800 bdebel...@intelesyscorp.com wrote:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6424510
...
Dropping a flush-cache command is just as bad as dropping a write.
Not that it matters, but it seems obvious that this is wrong or
anyway an