Hello List,
I am trying to fetch the data/hole information of a sparse file through the
lseek(SEEK_HOLE/SEEK_DATA) interface. The result of
fpathconf(..., _PC_MIN_HOLE_SIZE) is ok, so I think this interface is
supported on my testing ZFS, but SEEK_HOLE always returns the sparse file's
total size instead of
On Apr 18, 2011, at 11:22 AM, jeff.liu wrote:
Victor Latushkin wrote:
On Apr 18, 2011, at 11:22 AM, jeff.liu wrote:
jeff.liu jeff@oracle.com wrote:
Hi Tuomas,
Before you run 'zpool clear', please make sure that the OS device name
appears in the output of 'iscsiadm list target -S'.
# iscsiadm list target -S
Target: iqn.1986-03.com.sun:02:test
        Alias: -
        TPGT: 1
        ISID: 402a
        Connections: 1
        LUN: 0
             Vendor: SUN
On 2011-4-18, at 6:03 PM, Joerg Schilling wrote:
jeff.liu jeff@oracle.com wrote:
Hi,
As I understand it, there were restrictions on a bootable root pool: it
cannot be defined to use whole-disk configurations, either for a single disk
or for multiple disks which are mirrored.
Does it still apply that you need to define such pools as using slices, i.e. by
either defining a partition
Hi Darren,
Yes, a bootable root pool must be created on a disk slice.
You can use a cache device, but not a log device, and the cache device
must be a disk slice.
See the output below.
Thanks,
Cindy
# zpool add rpool log c0t2d0s0
cannot add to 'rpool': root pool can not have multiple vdevs
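For reference, a sketch of the slice-based layout Cindy describes (device names are hypothetical; the trailing s0 denotes a slice rather than a whole disk):

```shell
# A bootable root pool must be created on a disk slice, not a whole disk:
zpool create rpool c0t0d0s0

# A cache device is allowed, but it must also be a slice:
zpool add rpool cache c0t2d0s0

# A separate log device is rejected on a root pool, as shown above:
# zpool add rpool log c0t2d0s0
#   -> cannot add to 'rpool': root pool can not have multiple vdevs
```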
So I figured out, after a couple of scrubs and 'fmadm faulty', that drive
c9t15d0 was bad.
I then replaced the drive using:
-bash-3.2$ pfexec /usr/sbin/zpool offline vdipool c9t15d0
-bash-3.2$ pfexec /usr/sbin/zpool replace vdipool c9t15d0 c9t19d0
The drive resilvered and I rebooted the server.
I'm going to replace c9t15d0 with a new drive.
I find it odd that ZFS needed to resilver the drive after the reboot.
Shouldn't the resilvered information be kept across reboots?
The iostat data, as returned from 'iostat -en', are not kept over a reboot. I
don't know if it's possible to keep
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Karl Rossing
So I figured out, after a couple of scrubs and 'fmadm faulty', that drive
c9t15d0 was bad.
My pool now looks like this:
NAME STATE READ WRITE CKSUM