Roch - PAE wrote:
Manoj Nayak writes:
Hi All,
The ZFS documentation says ZFS schedules its I/O in such a way that it manages to
saturate a single disk's bandwidth using enough concurrent 128K I/Os.
The number of concurrent I/Os is decided by vq_max_pending; the default value
Hi All,
Is any DTrace script available to figure out the vdev_cache (or
software track buffer) reads in kilobytes?
The document says the default size of the read is 128k; however, the
vdev_cache source code implementation says the default size is 64k.
Thanks
Manoj Nayak
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Manoj Nayak wrote:
Hi All,
The ZFS documentation says ZFS schedules its I/O in such a way that it manages to
saturate a single disk's bandwidth using enough concurrent 128K I/Os.
The number of concurrent I/Os is decided by vq_max_pending. The default value
for vq_max_pending is 35.
We have created 4
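The queue-depth tunable can be inspected and adjusted on a live system with mdb; a minimal sketch, assuming the kernel variable is named zfs_vdev_max_pending (the name used by the vdev queue code of that era; verify against your build before relying on it):

```shell
# Print the current per-vdev queue depth (run as root on Solaris):
echo 'zfs_vdev_max_pending/D' | mdb -k

# Lower it to 10 (the 0t prefix means decimal), e.g. to reduce per-disk queuing:
echo 'zfs_vdev_max_pending/W 0t10' | mdb -kw
```

Changes made this way do not survive a reboot; a persistent setting would go in /etc/system.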
Manoj Nayak writes:
Hi All,
Is any DTrace script available to figure out the vdev_cache (or
software track buffer) reads in kilobytes?
The document says the default size of the read is 128k; however, the
vdev_cache source code implementation says the default size is 64k.
'zfs_vdev_cache_max' 1
Thanks
Manoj Nayak
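For the vdev_cache question above, a short D sketch can histogram the read sizes the software track buffer actually issues; this is a sketch, assuming the fbt provider can probe vdev_cache_read() and that zio_t exposes io_size, as in the OpenSolaris source of that era:

```d
#!/usr/sbin/dtrace -s
/* Histogram of vdev_cache read sizes, in bytes. */
fbt::vdev_cache_read:entry
{
        @sizes["vdev_cache read size (bytes)"] = quantize(args[0]->io_size);
}
```

If the inflated reads are 64k, the histogram should cluster at 65536, which matches the 1 << zfs_vdev_cache_bshift (16) default in vdev_cache.c rather than the 128k in the document.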
Manoj Nayak
,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a R43520 50062 0 none
disk_io sd13 /devices/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a R87552 49463 0 none
Thanks
Manoj Nayak
Do check out:
http
Roch - PAE wrote:
Manoj Nayak writes:
Roch - PAE wrote:
Why do you want greater-than-128K records?
A single-parity RAID-Z pool is created on a Thumper; it consists of four
disks. Solaris 10 update 4 runs on the Thumper. Then a ZFS filesystem is created in
the pool. 1 MB data
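On that release the largest recordsize ZFS accepts is 128K, so a 1 MB write is split into eight 128K records regardless of the requested value; a minimal sketch, using the mstor0 pool named later in the thread and an illustrative filesystem name:

```shell
# recordsize is capped at 128K on Solaris 10 update 4; larger values are rejected.
zfs set recordsize=128k mstor0/fs
zfs get recordsize mstor0/fs
```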
How can I destroy the following pool?
pool: mstor0
id: 5853485601755236913
state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
see: http://www.sun.com/msg/ZFS-8000-5E
config:
mstor0 UNAVAIL
error: Bad file number
Abort - core dumped
#
Thanks
Manoj Nayak
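One way out of a pool stuck in this state is to destroy it by name and, if the pool cannot be imported at all, to wipe the ZFS labels on the former member disks; a hedged sketch, where c1t0d0s0 is an illustrative device name that must be replaced with the actual members:

```shell
# Try the normal path first (only works if the pool can be imported):
zpool destroy -f mstor0

# Last resort for an unimportable FAULTED pool on Solaris 10: overwrite the
# ZFS labels on each former member disk. DESTRUCTIVE; double-check the device.
dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=512 count=1024
```

Note that ZFS keeps two label copies at the end of the device as well as two at the start, so clearing only the start may leave the pool partially discoverable.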
#!/bin/ksh
# This script generates a Solaris ramdisk image for worker nodes
PKGADD=/usr/sbin/pkgadd
PKGLOG=/tmp/packages.log
PKGADMIN=/tmp/pkgadmin
ROOTDIR=/tmp/miniroot
OPTDIR=$ROOTDIR/opt
HOMEDIR=$ROOTDIR/home/kealia
USRDIR
Hi ,
I am using s10u3 on an x64 AMD Opteron Thumper.
Thanks
Manoj Nayak
Manoj Nayak wrote:
Hi ,
I am getting the following error message when I run any zfs command. I have
attached the script I use to create the ramdisk image for the Thumper.
# zfs volinit
internal error: Bad file number
Abort - core
when
writing to preallocated space, since extra filesystem transactions are
required to convert extent flags on the range of the file written.
Thanks
Manoj Nayak