Re: [zfs-discuss] making sense of arcstat.pl output

2010-10-01 Thread Christian Meier
 Hello Mike,
thank you for your update.

r...@s0011 # ./arcstat.pl 3
time      read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz  c
11:23:38  197K  7.8K      3  5.7K    3  2.1K    4  6.1K    5   511M  1.5G
11:23:41    70     0      0     0    0     0    0     0    0   511M  1.5G
11:23:44    76     0      0     0    0     0    0     0    0   511M  1.5G
11:23:47    76     0      0     0    0     0    0     0  *1.4210854715202e-14*  511M  1.5G
11:23:50    71     0      0     0    0     0    0     0    0   511M  1.5G
11:23:53    74     0      0     0    0     0    0     0  *1.4210854715202e-14*  511M  1.5G
11:23:56    74     0      0     0    0     0    0     0    0   511M  1.5G
11:23:59    79     0      0     0    0     0    0     0    0   511M  1.5G
11:24:02    76     0      0     0    0     0    0     0    0   511M  1.5G
11:24:05    74     0      0     0    0     0    0     0  *1.4210854715202e-14*  511M  1.5G
11:24:08    93     0  1.4210854715202e-14  0  1.4210854715202e-14  0  0  0  0  511M  1.5G
11:24:11    75     0      0     0    0     0  1.4210854715202e-14  0  0  511M  1.5G
11:24:14    77     0      0     0    0     0    0     0  1.4210854715202e-14  511M  1.5G

It would be nice if the highlighted values were also human-readable.

thank you
Christian
 For posterity, I'd like to point out the following:

 Neel's original arcstat.pl uses a crude scaling routine that results in a 
 large loss of precision as numbers cross from kilobytes to megabytes to 
 gigabytes.  The 1G reported ARC size described here could actually be 
 anywhere between 1,000MB and 1,999MB.  Use 'kstat zfs::arcstats' to 
 read the ARC size directly from the kstats (for comparison). 
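
 For example, a small sketch: 'size' and 'c' are the relevant statistics in
 the arcstats kstat (instance 0 here), both reported in bytes:

 kstat -p zfs:0:arcstats:size    # current ARC size, in bytes
 kstat -p zfs:0:arcstats:c       # ARC target size ("c"), in bytes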

 I've updated arcstat.pl with a better scaling routine that returns more 
 appropriate results (similar to df -h human-readable output).  I've also 
 added support for L2ARC stats.  The updated version can be found here:

 http://github.com/mharsch/arcstat
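
 For anyone who wants to see what 'df -h'-style scaling amounts to, here is
 a rough sketch of the idea (an illustration only, not the exact code in the
 script) that turns the raw byte count into a one-decimal human-readable figure:

 kstat -p zfs:0:arcstats:size | awk '{
     v = $2
     split("B K M G T", u, " ")
     i = 1
     while (v >= 1024 && i < 5) { v /= 1024; i++ }
     printf "%.1f%s\n", v, u[i]
 }'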



Re: [zfs-discuss] Good benchmarking software?

2011-03-22 Thread Christian Meier
Hello Roy,
depending on the data you have, you could use gnuplot to visualize it.
Usually plotting time on the X axis and the data on the Y axis is enough.
I did this once with CPU and memory usage.
RRDtool is also a nice tool for visualization (most open-source monitoring
tools use it), but for me gnuplot was the easier way to do it.
If you like, you could send some example reports.
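
If it helps, here is a minimal sketch of the kind of gnuplot script I mean
(the file name, columns, and time format are only assumptions; adjust them to
whatever your benchmark actually logs):

gnuplot <<'EOF'
set xdata time
set timefmt "%H:%M:%S"
set format x "%H:%M"
set xlabel "time"
set ylabel "MB/s"
set terminal png size 800,480
set output "throughput.png"
plot "run.dat" using 1:2 with lines title "sequential write"
EOF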

Regards Christian
On Mar 22, 2011 8:13 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
 Hi all

 I've been doing some testing on a test box to try to generate good
performance reports. I've been trying bonnie++ and iozone, and while both
give me lots and lots of statistics and numbers, I can't find a good way to
visualize them. I see there are some spreadsheets available for iozone, but
I'm still rather confused. I'm not a statistics/math guy, I just want to
show, clearly and once and for all, the differences between a set of striped
mirrors and a raidz2. Also, I want to visualize the performance gain from
using L2ARC/SLOG in normal operation or during a resilver/scrub.

 So - question is quite simple - where can I find a good way to do this
without too much hassle?

 Vennlige hilsener / Best regards

 roy
 --
 Roy Sigurd Karlsbakk
 (+47) 97542685
 r...@karlsbakk.net
 http://blogg.karlsbakk.net/
 --
 In all pedagogy it is essential that the curriculum be presented
intelligibly. It is an elementary imperative for all educators to avoid
excessive use of idioms of foreign origin. In most cases adequate and
relevant synonyms exist in Norwegian.


Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-26 Thread Christian Meier
Hi

 1) I am using 'zpool create -f <pool name> <devid>' to create the zpool.
 2) I am using both 'reboot -- -r' and 'shutdown -i6 -y -g0' to reboot the
 machine.
 3) I have already force-loaded my drivers in /etc/system.
 4) And finally, I am using the FC LUNs to create the zpools.
Please provide more information:

- Are the FC LUNs available after the reboot?
- Which OS level are you using?
- After 'zpool create', is the pool available? (zpool status poolname)
- After the reboot, is the pool available for import? (zpool import)
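
For example, something along these lines right after the reboot (the pool
name and controller number are just placeholders):

echo | format | grep c5     # do the FC LUNs show up at all?
zpool status poolname       # state of the pool the system imported at boot
zpool import                # does the pool show up as importable?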

thank you
Christian


Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-26 Thread Christian Meier
Hi Sudheer

 1) The FC LUNs are available after reboot.
OK, so you are using FC LUNs without MPxIO enabled?
 2) The OS level of the machine: Oracle Solaris 10 8/11
 (s10x_u10wos_17b, x86).


 3) bash-3.2# zpool status
      pool: pool name
     state: UNAVAIL
    status: One or more devices could not be opened.  There are insufficient
            replicas for the pool to continue functioning.
    action: Attach the missing device and online it using 'zpool online'.
       see: http://www.sun.com/msg/ZFS-8000-3C
      scan: none requested
    config:
            NAME         STATE    READ WRITE CKSUM
            pool name    UNAVAIL     0     0     0  insufficient replicas
              c5t1d1     UNAVAIL     0     0     0  cannot open


 4) At the time of reboot the machine itself imports the zpool, so there is
 no need to import it again.
 bash-3.2# zpool import pool name
 cannot import 'pool name': a pool with that name is already
 created/imported,
 and no additional pools with that name were found.


 And the important thing is: when I export and import the zpool, I am
 able to access it again.
As Gary and Bob mentioned, I have seen this issue with iSCSI devices.
Instead of export/import, does a 'zpool clear' also work?
Also, what do the following show?

mpathadm list LU
mpathadm show LU /dev/rdsk/c5t1d1s2
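
If MPxIO really is off, something like this should show and change its state
(Solaris 10 commands from memory; please double-check on your release first,
since enabling MPxIO renames the device paths and requires a reboot):

stmsboot -L          # list non-STMS to STMS device name mappings
stmsboot -D fp -e    # enable MPxIO on the FC (fp) ports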

Regards
Christian



Re: [zfs-discuss] Disk failing? High asvc_t and %b.

2012-02-03 Thread Christian Meier
Hello Jan
 BTW: Can someone explain why this:
    8. c6t72d0 ATA-WDC WD6400AAKS--3B01 cyl 38909 alt 2 hd 255 sec 126
 is not shown the same way as this:
    4. c6t68d0 ATA-WDC WD6400AAKS-2-3B01-596.17GB

 Why the cylinder/sector in line 8?
As far as I know, this depends on the type of disk label you have:
SMI or EFI.

What does prtvtoc show you?

S0013(root)#~ prtvtoc /dev/dsk/disknames2
* /dev/dsk/disknames2 partition map
*
* Dimensions:
* 512 bytes/sector
* 2097152 sectors
* 2097085 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*          First      Sector     Last
*          Sector     Count      Sector
*              34        222        255
*
*                           First      Sector     Last
* Partition  Tag  Flags     Sector     Count      Sector   Mount Directory
       0      4    00          256    2080479    2080734
       8     11    00      2080735      16384    2097118   <- indicates an EFI label

S0013(root)#~ prtvtoc /dev/dsk/c1t0d0s2
* /dev/dsk/c1t0d0s2 (volume ROOTDISK) partition map
*
* Dimensions:
* 512 bytes/sector
* 255 sectors/track
*  16 tracks/cylinder
*4080 sectors/cylinder
*   38309 cylinders
*   38307 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                           First      Sector     Last
* Partition  Tag  Flags     Sector     Count      Sector   Mount Directory
       0      2    00            0  156292560  156292559
       2      5    00            0  156292560  156292559   <- indicates an SMI label


The corresponding entries in the format disk list look like this:

  19. c0tdisknamed0 SUN-SOLARIS-1-1.00GB
  24. c1tdisknamed0 DEFAULT cyl 38307 alt 2 hd 16 sec 255  ROOTDISK
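
And if you ever want to switch a disk from one label type to the other,
'format -e' (expert mode) lets you pick SMI or EFI when relabeling. A rough
sketch (the disk name is just an example, and relabeling destroys the
existing partition table):

format -e c6t72d0
# at the format> prompt, run: label
# then choose between the SMI and EFI label types when prompted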


Regards Christian

