Re: [zfs-discuss] ZFS vq_max_pending value ?

2008-01-23 Thread Manoj Nayak
Roch - PAE wrote: Manoj Nayak writes: Hi All. The ZFS documentation says ZFS schedules its I/O in such a way that it manages to saturate a single disk's bandwidth using enough concurrent 128K I/Os. The number of concurrent I/Os is decided by vq_max_pending. The default value
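A minimal sketch of how one might inspect or lower that queue depth on a live system. It assumes a build that exposes the global zfs_vdev_max_pending tunable (later Nevada/OpenSolaris builds); on Solaris 10 update 4 the value lives in each vdev's vq_max_pending field and has to be patched per vdev instead, so treat the commands below as an illustration, not a recommendation.

  # Print the current global setting (32-bit decimal)
  echo "zfs_vdev_max_pending/D" | mdb -k
  # Lower it to 10 in the running kernel; 0t marks a decimal value
  echo "zfs_vdev_max_pending/W0t10" | mdb -kw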

[zfs-discuss] ZFS vdev_cache

2008-01-22 Thread Manoj Nayak
Hi All, Is any DTrace script available to figure out the vdev_cache (or software track buffer) reads in kilobytes? The document says the default size of the read is 128k; however, the vdev_cache source code implementation says the default size is 64k. Thanks Manoj Nayak
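No ready-made script to point to, but a rough one-liner along these lines should answer the kilobyte question. The fbt probe name and the io_size field of zio_t are assumptions to verify against the running kernel's CTF data.

  # Sum the bytes requested through vdev_cache_read(), reported in KB on Ctrl-C
  dtrace -n '
  fbt::vdev_cache_read:entry
  {
          @kb["KB requested via vdev_cache"] = sum(args[0]->io_size / 1024);
  }'

The 64k seen in the source is consistent with zfs_vdev_cache_bshift being 16 in that era's vdev_cache.c (2^16 bytes is the size the cache inflates small reads to).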

[zfs-discuss] ZFS vq_max_pending value ?

2008-01-22 Thread Manoj Nayak

Re: [zfs-discuss] ZFS vq_max_pending value ?

2008-01-22 Thread manoj nayak
Manoj Nayak wrote: Hi All. The ZFS documentation says ZFS schedules its I/O in such a way that it manages to saturate a single disk's bandwidth using enough concurrent 128K I/Os. The number of concurrent I/Os is decided by vq_max_pending. The default value for vq_max_pending is 35. We have created 4

Re: [zfs-discuss] ZFS vdev_cache

2008-01-22 Thread manoj nayak
Manoj Nayak writes: Hi All, Is any DTrace script available to figure out the vdev_cache (or software track buffer) reads in kilobytes? The document says the default size of the read is 128k; however, the vdev_cache source code implementation says the default size is 64k

Re: [zfs-discuss] ZFS vq_max_pending value ?

2008-01-22 Thread Manoj Nayak
'zfs_vdev_cache_max' 1 Thanks Manoj Nayak
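For anyone searching the archive later: zfs_vdev_cache_max is the cutoff below which a read is inflated to a vdev-cache-sized read, so setting it to 1 effectively turns that inflation off. A sketch of the two usual ways to work with the value quoted above, assuming the tunable exists in your build:

  # Persistent, takes effect at next boot (add to /etc/system)
  set zfs:zfs_vdev_cache_max = 1

  # Check the value currently in the running kernel
  echo "zfs_vdev_cache_max/D" | mdb -k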

[zfs-discuss] ZFS recordsize

2008-01-18 Thread Manoj Nayak

Re: [zfs-discuss] ZFS recordsize

2008-01-18 Thread Manoj Nayak
,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a R43520 50062 0 none disk_io sd13 /devices/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a R87552 49463 0 none Thanks Manoj Nayak Do Check out : http

Re: [zfs-discuss] ZFS recordsize

2008-01-18 Thread Manoj Nayak
Roch - PAE wrote: Manoj Nayak writes: Roch - PAE wrote: Why do you want greater than 128K records? A single-parity RAID-Z pool is created on the thumper; it consists of four disks. Solaris 10 update 4 runs on the thumper. Then a zfs filesystem is created in the pool. 1 MB data
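A quick note on the mechanics behind the question, with the commands involved; the pool/filesystem names below are placeholders. recordsize is a per-filesystem property capped at 128K in these releases, so a 1 MB write is stored as eight 128K records, and on a four-disk single-parity RAID-Z each record is in turn split across the three data disks (roughly 43K per disk, plus parity), which lines up with the ~43K reads in the trace quoted above.

  # Show and set the record size; 128k is already the default and the maximum here
  zfs get recordsize tank/fs
  zfs set recordsize=128k tank/fs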

[zfs-discuss] How to destory a faulted pool

2007-11-16 Thread Manoj Nayak
How can I destroy the following pool? pool: mstor0 id: 5853485601755236913 state: FAULTED status: One or more devices contains corrupted data. action: The pool cannot be imported due to damaged devices or data. see: http://www.sun.com/msg/ZFS-8000-5E config: mstor0 UNAVAIL
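A sketch of the workarounds usually suggested for a pool in this state, assuming the data is not needed and the disks can be reused; releases of this vintage have no command to clear the labels of an unimportable pool, and the device names below are placeholders.

  # If the pool can still be forced in, destroy it normally
  zpool import -f mstor0 && zpool destroy -f mstor0

  # If import keeps failing, overwrite the old labels by building a throwaway
  # pool over the same disks, then destroy it
  zpool create -f scratch c0t0d0 c0t1d0 c0t2d0 c0t3d0
  zpool destroy scratch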

[zfs-discuss] internal error: Bad file number

2007-11-14 Thread Manoj Nayak
error: Bad file number Abort - core dumped # Thanks Manoj Nayak #!/bin/ksh # This script generates the Solaris ramdisk image for worker nodes PKGADD=/usr/sbin/pkgadd PKGLOG=/tmp/packages.log PKGADMIN=/tmp/pkgadmin ROOTDIR=/tmp/miniroot OPTDIR=$ROOTDIR/opt HOMEDIR=$ROOTDIR/home/kealia USRDIR
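A hedged guess about the attached script rather than a confirmed fix: zfs(1M) talks to the kernel through /dev/zfs, so if that node or the zfs driver files never make it into the miniroot, every zfs command fails oddly. A quick sanity check one could drop into the script, reusing the $ROOTDIR variable it already defines:

  for f in dev/zfs kernel/drv/zfs.conf kernel/drv/amd64/zfs
  do
          [ -e "$ROOTDIR/$f" -o -L "$ROOTDIR/$f" ] || echo "missing from miniroot: $f"
  done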

Re: [zfs-discuss] internal error: Bad file number

2007-11-14 Thread Manoj Nayak
Hi, I am using s10u3 on an x64 AMD Opteron thumper. Thanks Manoj Nayak Manoj Nayak wrote: Hi, I am getting the following error message when I run any zfs command. I have attached the script I use to create the ramdisk image for Thumper. # zfs volinit internal error: Bad file number Abort - core

[zfs-discuss] XFS_IOC_FSGETXATTR XFS_IOC_RESVSP64 like options in ZFS ?

2007-10-12 Thread Manoj Nayak
when writing to preallocated space, since extra filesystem transactions are required to convert extent flags on the range of the file written. Thanks Manoj Nayak
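For comparison with the XFS side of the question: XFS_IOC_RESVSP64 reserves the byte range as unwritten extents, and the same reservation can be made from the shell with xfs_io (assuming xfsprogs is installed; the path below is a placeholder). ZFS, being copy-on-write, has no per-file equivalent; arguably the closest knob is the per-dataset reservation property.

  # XFS only, shown for comparison; no zfs(1M) subcommand does this
  xfs_io -f -c "resvsp 0 1g" /xfs/mnt/prealloc.dat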