Hello Greg,
Thursday, February 12, 2009, 8:24:38 PM, you wrote:
GM well, since the write cache flush command is disabled, I would like this
GM to happen as early as practically possible in the bootup process, as ZFS
GM will not be issuing the cache flush commands to the disks.
GM I'm not really
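For reference, the behavior being discussed is usually controlled via an /etc/system tunable so it takes effect early in boot, before any pools are imported. A minimal sketch; the tunable name zfs_nocacheflush is the standard Solaris one, but verify it against your release before relying on it:

```
# /etc/system -- read early in boot, before pools are imported.
# Tells ZFS not to issue cache-flush commands to the disks.
# Only safe when the write cache is non-volatile (battery-backed).
set zfs:zfs_nocacheflush = 1
```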
Hello Peter,
Friday, February 13, 2009, 10:41:54 AM, you wrote:
PT I'm moving some data off an old machine to something reasonably new.
PT Normally, the new machine performs better, but I have one case just now
PT where the new system is terribly slow.
PT Old machine - V880 (Solaris 8) with SVM
Hello Bob,
Saturday, February 14, 2009, 6:16:54 PM, you wrote:
BF If you do use ZFS's redundancy features, it is important to consider
BF resilver time. Try to keep volume size small enough that it can be
BF resilvered in a reasonable amount of time.
Well, in most cases resilver in ZFS
On Sun, Feb 15, 2009 at 12:37 PM, Robert Milkowski mi...@task.gda.pl wrote:
Hello Peter,
Friday, February 13, 2009, 10:41:54 AM, you wrote:
PT I'm moving some data off an old machine to something reasonably new.
PT Normally, the new machine performs better, but I have one case just now
PT
Hello Peter,
Sunday, February 15, 2009, 12:54:40 PM, you wrote:
PT On Sun, Feb 15, 2009 at 12:37 PM, Robert Milkowski mi...@task.gda.pl
wrote:
Hello Peter,
Friday, February 13, 2009, 10:41:54 AM, you wrote:
PT I'm moving some data off an old machine to something reasonably new.
PT
Hello Brent,
Friday, February 13, 2009, 8:15:55 AM, you wrote:
BJ Sad to report that I am seeing the slow zfs recv issue cropping up
BJ again while running b105 :(
BJ Not sure what has triggered the change, but I am seeing the same
BJ behavior again: massive amounts of reads on the receiving
On Sun, 15 Feb 2009, Robert Milkowski wrote:
Well, in most cases resilver in ZFS should be quicker than resilver in
a disk array because ZFS will resilver only blocks which are actually
in use while most disk arrays will blindly resilver full disk drives.
So assuming you still have plenty
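The per-block resilver described above can be observed directly. A hedged sketch using the standard zpool commands; the pool name "tank" and the device names are placeholders, not from the original messages:

```shell
# Replace a failed disk, which starts a resilver (placeholder names).
zpool replace tank c1t2d0 c1t3d0

# The "resilver in progress" line reports only blocks actually in use,
# so on a lightly filled pool this completes much sooner than a
# full-disk rebuild on a traditional array.
zpool status -v tank
```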
On Sun, Feb 15, 2009 at 5:00 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sun, 15 Feb 2009, Robert Milkowski wrote:
Well, in most cases resilver in ZFS should be quicker than resilver in
a disk array because ZFS will resilver only blocks which are actually
in use while most
On Sun, 15 Feb 2009, Colin Raven wrote:
Pardon me for jumping into this discussion. I invariably lurk and keep my mouth
firmly shut. In this case, however, curiosity and a degree of alarm bade me
to jump in. Could you elaborate on 'fragmentation'? The only context in which
I know the term is Windows. Now
Hi Jan,
I tried what you suggested a while ago, but zfs fails on pool creation.
That is, when I issue zpool create trunk /dev/dsk/c9d0p3, the command
fails, saying that there's no such file or directory. And the disk is
correct!
What I think is that /dev/dsk/c9d0p3 is a symbolic name used by
I can mount those partitions well using ext2fs, so I assume I won't need
to run gparted at all.
This is what prtpart says about my stuff.
Kind regards,
Antonio
r...@antonio:~# prtpart /dev/rdsk/c3d0p0 -ldevs
Fdisk information for device /dev/rdsk/c3d0p0
** NOTE **
/dev/dsk/c3d0p0 -
r...@antonio:~# zpool create -f test /dev/dsk/c3d0p9
cannot open '/dev/dsk/c3d0p9': No such file or directory
Jan Hlodan wrote:
Antonio wrote:
I can mount those partitions well using ext2fs, so I assume I won't
need to run gparted at all.
This is what prtpart says about my stuff.
Kind
On Sun, Feb 15, 2009 at 8:02 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Sun, 15 Feb 2009, Colin Raven wrote:
Pardon me for jumping into this discussion. I invariably lurk and keep
my mouth
firmly shut. In this case, however, curiosity and a degree of alarm bade me
to jump
Thanks, I've filed your message where I can easily get at it even if I'm
having trouble with the server. I hope I never get the chance to use it,
but if something weird does go on, I'm happy to have the procedure to
capture information that might help get the problem identified and
fixed.
On Sun, 15 Feb 2009, Colin Raven wrote:
As a followup; is there any ongoing sensible way to defend against the
dreaded fragmentation? A [shudder] defrag routine of some kind perhaps?
Forgive the silly questions from the sidelines. Ignorance knows no
bounds, apparently :)
There is no
On Sun, 15 Feb 2009 01:27:10 +0100, Antonio
anto...@antonioshome.net wrote:
r...@antonio:~# zpool create -f test /dev/dsk/c3d0p9
cannot open '/dev/dsk/c3d0p9': No such file or directory
I would:
Clean obsolete device links:
  devfsadm -Cv
Make sure all sensed devices have a correct link:
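The two steps above might look like this in practice. The partition name c3d0p9 is taken from the error message earlier in the thread; the verification step is an addition, not from the original advice:

```shell
# 1. Remove stale /dev links left over from old device configurations
#    and build links for newly sensed devices (-C = cleanup, -v = verbose).
devfsadm -Cv

# 2. Confirm the partition now has device nodes before retrying
#    the zpool create.
ls -lL /dev/dsk/c3d0p9 /dev/rdsk/c3d0p9
```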
Sendai,
On Fri, Feb 13, 2009 at 03:21:25PM -0800, Andras Spitzer wrote:
Hi,
When I read the ZFS manual, it generally recommends configuring redundancy at
the ZFS layer, mainly because some features work only with a redundant
configuration (such as correction of corrupted data); also
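As one concrete illustration of "redundancy at the ZFS layer" (device and pool names here are hypothetical): with a mirrored pool, ZFS keeps two copies of every block and can use the good copy to repair one whose checksum fails:

```shell
# Create a mirrored pool -- redundancy managed by ZFS itself,
# so checksum failures can be self-healed from the other side.
zpool create tank mirror c0t0d0 c0t1d0

# A scrub reads every in-use block and repairs any that fail
# their checksum using the mirror copy.
zpool scrub tank
```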
So I did this:
zfs send -R rpool/export/h...@bup-20090216-044512utc | zfs receive\
-dv bup-ruin/fsfs
It indicated no errors and said it was creating the various expected
filesystems (pool bup-ruin was brand new at the time of this test).
Afterward, bup-ruin seemed to have reasonable filesystems in it:
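A sketch of the same replication pipeline with a follow-up check. The snapshot name below is a placeholder (the original is truncated in the message); the zfs list step is an added verification, not part of the original command:

```shell
# Recursively send a snapshot of the dataset tree into a backup pool.
# -R replicates descendants and properties; -d on the receive side
# reconstructs the dataset hierarchy under the target; -v is verbose.
zfs send -R rpool/export/home@bup-snap | zfs receive -dv bup-ruin/fsfs

# Confirm the expected filesystems arrived in the backup pool.
zfs list -r bup-ruin
```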
Sriram,
On Mon, Feb 16, 2009 at 11:12:42AM +0530, Sriram Narayanan wrote:
On Mon, Feb 16, 2009 at 9:11 AM, Sanjeev sanjeev.bagew...@sun.com wrote:
Sendai,
On Fri, Feb 13, 2009 at 03:21:25PM -0800, Andras Spitzer wrote:
Hi,
When I read the ZFS manual, it usually recommends to