Thanks,
yes, that is what I want to see.
On Sun, Jan 11, 2009 at 11:40 AM, Marvin Wang, Min wrote:
> sorry, my question might be misleading, I mean the starting datablock of a
> file
"starting" is somewhat misleading, since zfs will allocate a new block
whenever a block is updated, so the physical blocks for a file is not
necessarily a
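A hedged illustration of that copy-on-write behaviour (the pool name, file
path, and object number here are all invented): overwrite part of a file in
place and its block pointer moves to a fresh DVA.

zdb -ddddd tank/fs 1234 | grep L0 | head -1    # note the DVA
dd if=/dev/urandom of=/tank/fs/somefile bs=128k count=1 conv=notrunc
sync
zdb -ddddd tank/fs 1234 | grep L0 | head -1    # same block, new DVA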
I need to identify where the data blocks of a file are located in ZFS.
How can I get that info for a file or directory (not a file system
dataset) in a ZFS file system?
PS: is zdb able to show that stuff?
Thanks,
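Not an authoritative answer, but one way to see a file's block locations
with zdb; the dataset name, path, and object number are invented for the
example:

ls -i /tank/fs/somefile    # the inode number doubles as the ZFS object id
zdb -ddddd tank/fs 1234    # dumps the object, including block pointers
                           # with DVAs in vdev:offset:asize form

Each L0 entry in the dump corresponds to one data block of the file.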
Here are the zdb -l results:
bash-3.2# zdb -l /dev/dsk/c1d1
LABEL 0
failed to unpack label 0
LABEL 1
failed to unpack label 1
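One guess worth checking: if the pool was created on a slice rather than
the whole disk, the labels live on that slice, so pointing zdb at the
slice device (the slice name here is an assumption) may work:

zdb -l /dev/dsk/c1d1s0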
Hi Gold,
9987988 sounds factual to me...
IMHO,
z
- Original Message -
From: "Steve Goldthorpe"
To:
Sent: Saturday, January 10, 2009 6:59 PM
Subject: Re: [zfs-discuss] can't import zpool after upgrade to solaris 10u6
There's definitely something strange going on, as these are the only
uberblocks I can find by scanning /dev/dsk/c0t0d0s7 - nothing to conflict
with my theory so far:
TXG: 106052 TIME: 2009-01-04:11:06:12 BLK: 0e29000 (14848000) VER: 10 GUID_SUM:
9f8d9ef301489223 (11497020190282519075)
TXG: 10605
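For anyone wanting to reproduce such a scan, a rough sketch: every ZFS
uberblock starts with the magic number 0x00bab10c, so dumping the device
and searching for that value turns up candidates (the exact output format
depends on your od and on endianness):

od -A x -t x8 /dev/dsk/c0t0d0s7 | grep bab10c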
Many thanks for your time, for you made me stronger.
[if misleading, please excuse my French...]
;-)
z
- Original Message -
From: "Bob Friesenhahn"
To: "Dmitry Razguliaev"
Cc:
Sent: Saturday, January 10, 2009 10:28 AM
Subject: Re: [zfs-discuss] ZFS poor performance on Areca
> Are you sure this isn't a case of CR 6433264, which was fixed long ago,
> but arrived in patch 118833-36 to Solaris 10?
It certainly looks similar, but this system already had 118833-36 when the
error occurred, so if this bug is truly fixed, it must be something else. Then
again, I wasn't
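As an aside, a quick way to confirm that a given patch is installed on
Solaris 10 (patch id taken from the message above):

showrev -p | grep 118833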
Problem solved... after the resilvers completed, the status reported that
the filesystem needed an upgrade.
I did a zpool upgrade -a, and once that completed and no resilvering was
going on, the zpool add ran successfully.
I would like to suggest, however, that the behavior be fixed --
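Roughly the sequence described above, with an invented pool name and
devices:

zpool status tank       # wait until no resilver is in progress
zpool upgrade -a        # upgrade all pools to the current on-disk version
zpool add tank mirror c2t0d0 c2t1d0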
Tried with my own 8GB DOM first (with different OSes) but could only get
some Linux working (boot driver), so I went for a USB memory stick based
solution. I just added a serial console cable with a DB9 connector on one
side, which fits nicely in the back of the box, and set up the BIOS to do
console redirection.
My current solution is a -d option that takes a colon-separated pair of
arguments, min:max, giving the minimum and maximum depth, so
zfs list -d 1:1 tank
behaves like zfs list -c is described and only lists the direct children
of tank.
zfs list -d 1: tank
will list all the descendants of tank.
zfs list -
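Just to make the proposed min:max semantics concrete with commands that
exist today, depth can be emulated by counting the '/' separators in
dataset names ('tank' is one field, a direct child such as 'tank/home' is
two):

zfs list -rH -o name tank | awk -F/ 'NF == 2'    # depth 1:1
zfs list -rH -o name tank | awk -F/ 'NF >= 2'    # depth 1: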
At the time of writing that post, no, I didn't run zpool iostat -v 1.
However, I ran it after that. The operations figures in the iostat output
changed from 1 for every device in the raidz to somewhere between 20 and
400 for the raidz volume, and from 3 to somewhere between 200 and 450 for
a singl
Hmm... that's a tough one. To me, it's a trade-off either way; using
a -r parameter to specify the depth for zfs list feels more intuitive
than adding extra commands to modify the -r behaviour, but I can see
your point.
But then, using -c or -d means there's an optional parameter for zfs
list tha
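To make the two styles being weighed explicit, hypothetical syntax for
each (neither exists in this form):

zfs list -r 2 tank      # -r taking an optional depth argument
zfs list -d 1:2 tank    # a separate -d min:max option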