Re: [dtrace-discuss] DTrace (API) version?

2008-11-30 Thread Alex Peng
Hi Adam,

Perhaps that "zfs upgrade -v" example is not very clear, here what I need is a 
glue showing me in which version, what actions are supported, e.g. if I am at 
DTrace 1.5, then I don't have "stddev()", but "inet_ntoa()" is there, right?

So maybe the DTrace community could set up a page at 
http://www.opensolaris.org/os/community/dtrace/version 
that just lists the actions added in each version.

Then the "dtrace -V" is like this:

# dtrace -V
dtrace: Sun D 1.6.2
For more information on a particular version, including supported
releases, see:

http://www.opensolaris.org/os/community/dtrace/version


Re: [dtrace-discuss] more friendly "dtrace -x" output

2008-11-30 Thread Alex Peng
Just filed a bug, CR 6777997.

In short, I would prefer to have a "dtrace -xv" or "dtrace --options"
(or "dtrace --flags").

BTW, I also want a "dtrace --actions" and a "dtrace --aggregations":
yes, again, a short built-in manual.

-Alex


Re: [dtrace-discuss] more friendly "dtrace -x" output

2008-11-30 Thread Adam Leventhal
On Nov 30, 2008, at 9:40 PM, Alex Peng wrote:
> Here is my "dtrace" only output,  it is [-x opt[=val]],   but what I  
> really want is:
>
> - which options are there?
> - for each option, which value is valid?


Hey Alex,

That would be a great RFE. Please file it if you haven't already.

Adam

--
Adam Leventhal, Fishworks                      http://blogs.sun.com/ahl



Re: [dtrace-discuss] DTrace (API) version?

2008-11-30 Thread Adam Leventhal
Hey Alex,

The ZFS version number has a very different use, since new versions of
ZFS must continue to work with disks containing an old format. The
DTrace version number is useful to track which fixes and features are
present across different operating systems. You can see how the version
number is used here:

   http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libdtrace/common/dt_open.c#_dtrace_globals

In addition you can find out more about versioning in the documentation:

   http://wikis.sun.com/display/DTrace/Versioning
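As a concrete illustration (my sketch, not part of Adam's message): the
versioning documentation describes a "version" option that a D program can
use to pin the API level it needs, so an older libdtrace fails at compile
time with an explicit version error instead of an obscure complaint about
an unknown function:

#pragma D option version=1.6

BEGIN
{
        /* if libdtrace predates D API 1.6, compilation aborts before this runs */
        trace("D API 1.6 or later is available");
        exit(0);
}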

Adam

On Nov 30, 2008, at 7:38 PM, Alex Peng wrote:

> Hi all,
>
> I want to know which DTrace abilities are available on my Solaris
> box, so I use:
>
> # dtrace -V
> dtrace: Sun D 1.6.2
>
> But what does that "Sun D 1.6.2" mean? I guess there could be an
> RFE. What I expect is something like this:
>
>
> ===
> # zpool upgrade -v
> This system is currently running ZFS pool version 13.
>
> The following versions are supported:
>
> VER  DESCRIPTION
> ---  --------------------------------------------------------
>  1   Initial ZFS version
>  2   Ditto blocks (replicated metadata)
>  3   Hot spares and double parity RAID-Z
>  4   zpool history
>  5   Compression using the gzip algorithm
>  6   bootfs pool property
>  7   Separate intent log devices
>  8   Delegated administration
>  9   refquota and refreservation properties
>  10  Cache devices
>  11  Improved scrub performance
>  12  Snapshot properties
>  13  snapused property
> For more information on a particular version, including supported  
> releases, see:
>
> http://www.opensolaris.org/os/community/zfs/version/N
>
> Where 'N' is the version number.
> ===
>
> Yes, it's better to have a short description and a long, detailed
> webpage. Then at least I could know which (new) providers were
> added, and when.
>
>
> Thanks,
> -Alex


--
Adam Leventhal, Fishworks                      http://blogs.sun.com/ahl



Re: [dtrace-discuss] DIF content is invalid?

2008-11-30 Thread Alex Peng
As I said in my previous message,

http://www.opensolaris.org/jive/thread.jspa?threadID=84228&tstart=0

if the DTrace version information were more informative, then this kind of
problem would be much easier to diagnose (even without the OS info).

-Alex


Re: [dtrace-discuss] more friendly "dtrace -x" output

2008-11-30 Thread Alex Peng
Here is my "dtrace" only output,  it is [-x opt[=val]],   but what I really 
want is:

- which options are there?
- for each option, which value is valid?

I do prefer following the "zfs set all" way, just list option and its possible 
value.

-Alex

# dtrace
Usage: dtrace [-32|-64] [-aACeFGhHlqSvVwZ] [-b bufsz] [-c cmd] [-D name[=def]]
[-I path] [-L path] [-o output] [-p pid] [-s script] [-U name]
[-x opt[=val]] [-X a|c|s|t]

[-P provider [[ predicate ] action ]]
[-m [ provider: ] module [[ predicate ] action ]]
[-f [[ provider: ] module: ] func [[ predicate ] action ]]
[-n [[[ provider: ] module: ] func: ] name [[ predicate ] action ]]
[-i probe-id [[ predicate ] action ]] [ args ... ]

predicate -> '/' D-expression '/'
   action -> '{' D-statements '}'

-32 generate 32-bit D programs and ELF files
-64 generate 64-bit D programs and ELF files

-a  claim anonymous tracing state
-A  generate driver.conf(4) directives for anonymous tracing
-b  set trace buffer size
-c  run specified command and exit upon its completion
-C  run cpp(1) preprocessor on script files
-D  define symbol when invoking preprocessor
-e  exit after compiling request but prior to enabling probes
-f  enable or list probes matching the specified function name
-F  coalesce trace output by function
-G  generate an ELF file containing embedded dtrace program
-h  generate a header file with definitions for static probes
-H  print included files when invoking preprocessor
-i  enable or list probes matching the specified probe id
-I  add include directory to preprocessor search path
-l  list probes matching specified criteria
-L  add library directory to library search path
-m  enable or list probes matching the specified module name
-n  enable or list probes matching the specified probe name
-o  set output file
-p  grab specified process-ID and cache its symbol tables
-P  enable or list probes matching the specified provider name
-q  set quiet mode (only output explicitly traced data)
-s  enable or list probes according to the specified D script
-S  print D compiler intermediate code
-U  undefine symbol when invoking preprocessor
-v  set verbose mode (report stability attributes, arguments)
-V  report DTrace API version
-w  permit destructive actions
-x  enable or modify compiler and tracing options
-X  specify ISO C conformance settings for preprocessor
-Z  permit probe descriptions that match zero probes
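
For reference, here is how options are set with -x today; "quiet" and
"bufsize" are documented options from the Solaris Dynamic Tracing Guide,
while the probe clause is only an illustrative example of mine:

# dtrace -x quiet -x bufsize=8m -n 'io:::start { printf("%s\n", args[1]->dev_statname); }'

A "dtrace --options" listing would presumably enumerate names like these
together with their legal values.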


Re: [dtrace-discuss] Concurrent Disk Activity from "Solaris Performance & Tools" Chap. 4.17.4

2008-11-30 Thread Clayton, Paul D
Jim, James..

My thanks for the info! The fog is starting to lift.

It is that averaging in iostat and other utilities that has made it so
hard to track down the actual activity taking place, and that has forced
me to DTrace.

One last question about the graph in Figure 4.7. I understand the bit
about dealing with start/end times to figure out the length of time; I
did that in my spreadsheet. The part of the graph that still confuses me
is the offset of the 'strategy' line from 0 (zero) time. What is
involved in figuring out what that ~100us offset from 0 represents for a
bunch of block addresses? Where would THAT time offset come from in the
DTrace script?

Take care.

pdc

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 30, 2008 11:35 PM
To: Jim Mauro
Cc: Clayton, Paul D; dtrace-discuss@opensolaris.org
Subject: Re: [dtrace-discuss] Concurrent Disk Activity from "Solaris
Performance & Tools" Chap. 4.17.4

Keep in mind that the service times are *averages* over the interval.
The average for the data in his spreadsheet would be pretty nice, but it
would obscure the two pretty bad times he saw...

Jim
---


Jim Mauro wrote:
> Hey Paul - I should add that "iostat -xnz 1" is a great method
> for determining how well the SAN is performing.
> The asvc_t times are disk IO service times in milliseconds.
> I usually start there to sanity check disk IO times...
>
> Thanks,
> /jim 



Re: [dtrace-discuss] DIF content is invalid?

2008-11-30 Thread James Litchfield
It might help if I told you what OS this is and all that. The bits are
standard, from onnv.sfbay...

# uname -a
SunOS jlaptop 5.11 onnv-gate:2008-11-28 i86pc i386 i86pc
# cat /etc/motd
Sun Microsystems Inc.   SunOS 5.11      onnv-gate:2008-11-28    January 2008
bfu'ed from /export/home/archives/i386/nightly-nd on 2008-11-28
Sun Microsystems Inc.   SunOS 5.11      snv_103         November 2008
# cat /etc/release
  Solaris Express Community Edition snv_103 X86
   Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 17 November 2008

Jim
---
James Litchfield wrote:
> What's going on?
>
> # dtrace -s iotime_all.d 100
> dtrace: failed to enable 'iotime_all.d': DIF program content is invalid
>
> The errant script:
>
> #pragma D option quiet
>
> BEGIN
> {
>         stime = timestamp;
>         io_count = 0;
> }
>
> io:::start
> /args[2]->fi_pathname != ""/
> {
>         start[pid, args[2]->fi_pathname, args[0]->b_edev,
>             args[0]->b_blkno, args[0]->b_bcount] = timestamp;
>         self->pid = pid;
>         self->name = args[2]->fi_pathname;
>         self->size = args[0]->b_bcount;
> }
>
> io:::start
> /args[2]->fi_pathname != ""/
> {
>         start[pid, args[1]->dev_pathname, args[0]->b_edev,
>             args[0]->b_blkno, args[0]->b_bcount] = timestamp;
>         self->pid = pid;
>         self->name = args[1]->dev_pathname;
>         self->size = args[0]->b_bcount;
> }
>
> io:::done
> /start[self->pid, self->name, args[0]->b_edev, args[0]->b_blkno,
>     self->size]/
> {
>         this->elapsed = timestamp - start[self->pid, self->name,
>             args[0]->b_edev, args[0]->b_blkno, self->size];
>         printf("%5u %10s %58s %2s %8u %8u %3d.%03d\n", self->pid,
>             args[1]->dev_statname, self->name,
>             args[0]->b_flags & B_READ ? "R" : "W",
>             args[0]->b_bcount, self->size,
>             /* nanoseconds -> ms.us; divisors reconstructed from the format */
>             this->elapsed / 1000000, (this->elapsed / 1000) % 1000);
>         start[self->pid, self->name, args[0]->b_edev, args[0]->b_blkno,
>             self->size] = 0;
>         self->pid = 0;
>         self->name = 0;
>         self->size = 0;
>         io_count++;
> }
>
> io:::done
> /io_count > $1/
> {
>         /* nanoseconds -> seconds; divisor reconstructed */
>         printf("Elapsed Time: %u seconds\n\n",
>             (timestamp - stime) / 1000000000);
>         exit(0);
> }



[dtrace-discuss] DIF content is invalid?

2008-11-30 Thread James Litchfield
What's going on?

# dtrace -s iotime_all.d 100
dtrace: failed to enable 'iotime_all.d': DIF program content is invalid

The errant script:

#pragma D option quiet

BEGIN
{
        stime = timestamp;
        io_count = 0;
}

io:::start
/args[2]->fi_pathname != ""/
{
        start[pid, args[2]->fi_pathname, args[0]->b_edev,
            args[0]->b_blkno, args[0]->b_bcount] = timestamp;
        self->pid = pid;
        self->name = args[2]->fi_pathname;
        self->size = args[0]->b_bcount;
}

io:::start
/args[2]->fi_pathname != ""/
{
        start[pid, args[1]->dev_pathname, args[0]->b_edev,
            args[0]->b_blkno, args[0]->b_bcount] = timestamp;
        self->pid = pid;
        self->name = args[1]->dev_pathname;
        self->size = args[0]->b_bcount;
}

io:::done
/start[self->pid, self->name, args[0]->b_edev, args[0]->b_blkno,
    self->size]/
{
        this->elapsed = timestamp - start[self->pid, self->name,
            args[0]->b_edev, args[0]->b_blkno, self->size];
        printf("%5u %10s %58s %2s %8u %8u %3d.%03d\n", self->pid,
            args[1]->dev_statname, self->name,
            args[0]->b_flags & B_READ ? "R" : "W",
            args[0]->b_bcount, self->size,
            /* nanoseconds -> ms.us; divisors reconstructed from the format */
            this->elapsed / 1000000, (this->elapsed / 1000) % 1000);
        start[self->pid, self->name, args[0]->b_edev, args[0]->b_blkno,
            self->size] = 0;
        self->pid = 0;
        self->name = 0;
        self->size = 0;
        io_count++;
}

io:::done
/io_count > $1/
{
        /* nanoseconds -> seconds; divisor reconstructed */
        printf("Elapsed Time: %u seconds\n\n",
            (timestamp - stime) / 1000000000);
        exit(0);
}
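
A possible way to narrow this down (a suggestion of mine, not from the
original thread; both flags appear in the usage summary earlier in this
digest): compile the script without enabling it (-e) and dump the D
intermediate code (-S), so you can see what DIF each clause produced:

# dtrace -e -S -s iotime_all.d 100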



Re: [dtrace-discuss] Concurrent Disk Activity from "Solaris Performance & Tools" Chap. 4.17.4

2008-11-30 Thread James Litchfield
Keep in mind that the service times are *averages* over the interval.
The average for the data in his spreadsheet would be pretty nice but would
obscure the two pretty bad times he saw...

Jim
---


Jim Mauro wrote:
> Hey Paul - I should add that "iostat -xnz 1" is a great method
> for determining how well the SAN is performing.
> The asvc_t times are disk IO service times in milliseconds.
> I usually start there to sanity check disk IO times...
>
> Thanks,
> /jim
>
>
> Paul Clayton wrote:
>   
>> Hello..
>>
>> Due to  growing performance problems on numerous systems, I have been 
>> reading through a lot of information about DTrace and what it can find out 
>> about a system. It is a great tool, and while there is a learning curve to 
>> using it successfully, getting useful information quickly is helped by such 
>> books as the ‘Solaris Performance & Tools’ book.
>>
>> In light of us also ramping up new SANs and ever larger SAN fabrics, I have 
>> long been wondering what the times are for getting data in/out of the SANs. 
>> If we talk with the SAN vendor, they say the SAN is not a problem and we 
>> should look elsewhere. Talk with the fabric folks and they say no problems, 
>> look elsewhere.
>>
>> So, it was with high interest that I have been reading Chapter 4 in the book 
>> about disk activity multiple times and trying various commands out on 
>> systems here.
>>
>> One thing I have been puzzled by a lot this weekend is the information and 
>> plot in Figure 4.7. This section, if I understand it correctly, offers the 
>> means to track the actual times from when an IO starts in the kernel to when 
>> it completes, implying the time to either read or write from disk or memory 
>> cache. 
>>
>> I have been using a data file for an Oracle database as the test subject for 
>> this work.
>>
>> I have several confusion points with this section.
>>
>> •The graph mentions ‘strategy’ and ‘biodone’ which seem to imply TNF 
>> based data, not output from the DTrace script above Figure 4.7. 
>>
>> •In looking at the data gotten from the DTrace script I see no way to 
>> generate the graph of Figure 4.7. Specifically the time difference between 
>> ‘0’ and the points for ‘strategy’. With the DTrace script we have the start 
>> time of the IO. I see no way to determine some amount of time between 
>> ‘start’ and something earlier. The time values on the ‘X’ axis also don’t 
>> fall out of the data generated by the DTrace script.
>>
>> •How can it be determined if the IO completed from a memory cache or 
>> required an I/O to a physical disk? I have a lot of times less than 0.5 ms 
>> but also have a fair number that are in the range of 1 ms to 300 ms.
>>
>> I modified the script to dump out the size of the I/O being done and that 
>> was interesting to see some unexpected lengths.  Also added ‘start’ and 
>> ‘end’ to the appropriate lines as a sanity check to make it easier to pair 
>> up the entries. There should always be one start/end pair for a block address.
>>
>> I have attached an Excel spreadsheet with what I was able to create based on 
>> the data collected.
>>
>> My thanks for any clarifications to these confusions.
>>
>> pdc



Re: [dtrace-discuss] Concurrent Disk Activity from "Solaris Performance & Tools" Chap. 4.17.4

2008-11-30 Thread Jim Mauro
Hey Paul - I should add that "iostat -xnz 1" is a great method
for determining how well the SAN is performing.
The asvc_t times are disk IO service times in milliseconds.
I usually start there to sanity check disk IO times...

Thanks,
/jim


Paul Clayton wrote:
> Hello..
>
> Due to  growing performance problems on numerous systems, I have been reading 
> through a lot of information about DTrace and what it can find out about a 
> system. It is a great tool, and while there is a learning curve to using it 
> successfully, getting useful information quickly is helped by such books as 
> the ‘Solaris Performance & Tools’ book.
>
> In light of us also ramping up new SANs and ever larger SAN fabrics, I have 
> long been wondering what the times are for getting data in/out of the SANs. 
> If we talk with the SAN vendor, they say the SAN is not a problem and we 
> should look elsewhere. Talk with the fabric folks and they say no problems, 
> look elsewhere.
>
> So, it was with high interest that I have been reading Chapter 4 in the book 
> about disk activity multiple times and trying various commands out on systems 
> here.
>
> One thing I have been puzzled by a lot this weekend is the information and 
> plot in Figure 4.7. This section, if I understand it correctly, offers the 
> means to track the actual times from when an IO starts in the kernel to when 
> it completes, implying the time to either read or write from disk or memory 
> cache. 
>
> I have been using a data file for an Oracle database as the test subject for 
> this work.
>
> I have several confusion points with this section.
>
> • The graph mentions ‘strategy’ and ‘biodone’ which seem to imply TNF 
> based data, not output from the DTrace script above Figure 4.7. 
>
> • In looking at the data gotten from the DTrace script I see no way to 
> generate the graph of Figure 4.7. Specifically the time difference between 
> ‘0’ and the points for ‘strategy’. With the DTrace script we have the start 
> time of the IO. I see no way to determine some amount of time between ‘start’ 
> and something earlier. The time values on the ‘X’ axis also don’t fall out of 
> the data generated by the DTrace script.
>
> • How can it be determined if the IO completed from a memory cache or 
> required an I/O to a physical disk? I have a lot of times less than 0.5 ms 
> but also have a fair number that are in the range of 1 ms to 300 ms.
>
> I modified the script to dump out the size of the I/O being done and that was 
> interesting to see some unexpected lengths.  Also added ‘start’ and ‘end’ to 
> the appropriate lines as a sanity check to make it easier to pair up the 
> entries. There should always be one start/end pair for a block address.
>
> I have attached an Excel spreadsheet with what I was able to create based on 
> the data collected.
>
> My thanks for any clarifications to these confusions.
>
> pdc


Re: [dtrace-discuss] Concurrent Disk Activity from "Solaris Performance & Tools" Chap. 4.17.4

2008-11-30 Thread James Litchfield
Paul Clayton wrote:
> Hello..
>
> Due to  growing performance problems on numerous systems, I have been reading 
> through a lot of information about DTrace and what it can find out about a 
> system. It is a great tool, and while there is a learning curve to using it 
> successfully, getting useful information quickly is helped by such books as 
> the ‘Solaris Performance & Tools’ book.
>
> In light of us also ramping up new SANs and ever larger SAN fabrics, I have 
> long been wondering what the times are for getting data in/out of the SANs. 
> If we talk with the SAN vendor, they say the SAN is not a problem and we 
> should look elsewhere. Talk with the fabric folks and they say no problems, 
> look elsewhere.
>
>   
They always do until you show them otherwise.
> So, it was with high interest that I have been reading Chapter 4 in the book 
> about disk activity multiple times and trying various commands out on systems 
> here.
>
> One thing I have been puzzled by a lot this weekend is the information and 
> plot in Figure 4.7. This section, if I understand it correctly, offers the 
> means to track the actual times from when an IO starts in the kernel to when 
> it completes, implying the time to either read or write from disk or memory 
> cache. 
>
> I have been using a data file for an Oracle database as the test subject for 
> this work.
>
> I have several confusion points with this section.
>
> • The graph mentions ‘strategy’ and ‘biodone’ which seem to imply TNF 
> based data, not output from the DTrace script above Figure 4.7. 
>   
Simply put, when a physical IO is started (which may be connected to a
read or write from a program, or from some other place), the 'strategy'
routine is called for the driver underlying the file system the program
is writing to. The driver may be for a real device (e.g., ssd) or for a
pseudo device (e.g., a Solaris Volume Manager device). The strategy
routine figures out what to do to satisfy the IO request. If the device
is a pseudo device, the strategy routine calls into underlying strategy
routines, which may themselves belong to pseudo devices; tracking IOs on
a clustered system with SVM devices takes you through the md layer, the
did layer, and the underlying device layer. biodone is a kernel
convenience function typically invoked when the physical IO is actually
finished. TNF has its hooks in the same places, but you should ignore TNF.
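
A quick way to watch that layering (a hedged sketch of mine; bdev_strategy
is the generic block-device strategy entry point): print the kernel stack
each time it is called, and the pseudo-device layers show up as frames
above the real driver:

# dtrace -n 'fbt::bdev_strategy:entry { @[stack()] = count(); }'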
> • In looking at the data gotten from the DTrace script I see no way to 
> generate the graph of Figure 4.7. Specifically the time difference between 
> ‘0’ and the points for ‘strategy’. With the DTrace script we have the start 
> time of the IO. I see no way to determine some amount of time between ‘start’ 
> and something earlier. The time values on the ‘X’ axis also don’t fall out of 
> the data generated by the DTrace script.
>
> • How can it be determined if the IO completed from a memory cache or 
> required an I/O to a physical disk? I have a lot of times less than 0.5 ms 
> but also have a fair number that are in the range of 1 ms to 300 ms.
>   
DTrace provides the io provider, which has hooks in various places in
the kernel functions that handle physical disk IO. io:::start is invoked
when a *physical* IO is started; it is not invoked when the IO request
is satisfied from cache.
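
To make that concrete, here is a minimal latency sketch of my own (not
the book's script) that keys on the buf pointer instead of pid and
pathname; because io:::start fires only for physical IO, cache hits
never show up in the distribution:

#pragma D option quiet

io:::start
{
        /* arg0 is the buf pointer, a convenient unique key */
        iostart[arg0] = timestamp;
}

io:::done
/iostart[arg0]/
{
        @lat[args[1]->dev_statname] = quantize(timestamp - iostart[arg0]);
        iostart[arg0] = 0;
}

tick-30s
{
        /* aggregations print automatically on exit */
        exit(0);
}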
> I modified the script to dump out the size of the I/O being done and that was 
> interesting to see some unexpected lengths.  Also added ‘start’ and ‘end’ to 
> the appropriate lines as a sanity check to make it easier to pair up the 
> entries. There should always be one start/end pair for a block address.
>
> I have attached an Excel spreadsheet with what I was able to create based on 
> the data collected.
>
> My thanks for any clarifications to these confusions.
>
> pdc


Re: [dtrace-discuss] Concurrent Disk Activity from "Solaris Performance & Tools" Chap. 4.17.4

2008-11-30 Thread Jim Mauro

Hi Paul -
>
> One thing I have been puzzled by a lot this weekend is the information and 
> plot in Figure 4.7. This section, if I understand it correctly, offers the 
> means to track the actual times from when an IO starts in the kernel to when 
> it completes, implying the time to either read or write from disk or memory 
> cache. 
>
> I have been using a data file for an Oracle database as the test subject for 
> this work.
>
> I have several confusion points with this section.
>
> • The graph mentions ‘strategy’ and ‘biodone’ which seem to imply TNF 
> based data, not output from the DTrace script above Figure 4.7. 
>   

The DTrace script uses the io provider's io:::start and io:::done
probes. io:genunix::start enables several probes, including one in the
bdev_strategy kernel routine:

nv98> dtrace -n 'io:genunix::start'
dtrace: description 'io:genunix::start' matched 3 probes

CPU ID FUNCTION:NAME
0 22920 bdev_strategy:start
0 22920 bdev_strategy:start
0 22920 bdev_strategy:start

io:genunix::done enables a probe in biodone:

nv98> dtrace -n 'io:genunix::done'
dtrace: description 'io:genunix::done' matched 1 probe
CPU ID FUNCTION:NAME
0 22908 biodone:done


The actions in the dtrace script gather timestamps and block numbers at 
each firing
(assuming the predicates evaluate true).

> • In looking at the data gotten from the DTrace script I see no way to 
> generate the graph of Figure 4.7. Specifically the time difference between 
> ‘0’ and the points for ‘strategy’. With the DTrace script we have the start 
> time of the IO. I see no way to determine some amount of time between ‘start’ 
> and something earlier. The time values on the ‘X’ axis also don’t fall out of 
> the data generated by the DTrace script.
>   
The script in the book generates output that looks like this:
nv98> ./iot.d
122065039977,2100,
122065039980,,2100
122070310632,72,
122070310637,,72
122070310833,81,
122070310836,,81
. . .

The value on the left is the timestamp; the value on the right is the
block number. The data was imported into a spreadsheet, and the math was
done on the start time and stop (done) time for each block, resulting in
IO times on a per-block basis.
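
For reference, a sketch of a script that would emit output in exactly
that shape (the book's actual iot.d may differ in detail):

#pragma D option quiet

io:::start
{
        /* timestamp, start block, empty done column */
        printf("%d,%d,\n", timestamp, args[0]->b_blkno);
}

io:::done
{
        /* timestamp, empty start column, done block */
        printf("%d,,%d\n", timestamp, args[0]->b_blkno);
}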

> • How can it be determined if the IO completed from a memory cache or 
> required an I/O to a physical disk? I have a lot of times less than 0.5 ms 
> but also have a fair number that are in the range of 1 ms to 300 ms.
>   

The lookup in the file system memory cache happens above the bio layer. 
Put another way,
if we're hitting bdev_strategy, we need to do a disk IO to get the data 
(we already missed
in the cache).

HTH,

Thanks,
/jim


> I modified the script to dump out the size of the I/O being done and that was 
> interesting to see some unexpected lengths.  Also added ‘start’ and ‘end’ to 
> the appropriate lines as a sanity check to make it easier to pair up the 
> entries. There should always be one start/end pair for a block address.
>
> I have attached an Excel spreadsheet with what I was able to create based on 
> the data collected.
>
> My thanks for any clarifications to these confusions.
>
> pdc


Re: [dtrace-discuss] more friendly "dtrace -x" output

2008-11-30 Thread Boyd Adamson
Alex Peng <[EMAIL PROTECTED]> writes:

> Hi,
>
> Right now, to learn all the DTrace runtime options or D compiler options,
> the only way is the user guide. Is it possible to just *list* all options
> from the dtrace command line? Either like "cc" or like "zfs set all":
>
> # cc -x
> cc: Warning: illegal option -x
> usage: cc [ options] files.  Use 'cc -flags' for details
> # cc -flags
> -#                    Verbose mode
> -###                  Show compiler commands built by driver, no compilation
> -A                    Preprocessor predicate assertion
> -B<[static|dynamic]>  Specify dynamic or static binding
> . [output is snipped]
>
> # zfs set all
> missing dataset name
> usage:
>   set <property=value> <filesystem|volume|snapshot> ...
>
> The following properties are supported:
>
>   PROPERTY   EDIT  INHERIT   VALUES
>
>   available      NO       NO     <size>
>   compressratio  NO       NO     <1.00x or higher if compressed>
>   creation       NO       NO     <date>
>   mounted        NO       NO     yes | no
>   origin         NO       NO     <snapshot>
> .. [output is snipped]

Did you try just "dtrace"?

Boyd


[dtrace-discuss] more friendly "dtrace -x" output

2008-11-30 Thread Alex Peng
Hi,

Right now, to learn all the DTrace runtime options or D compiler options,
the only way is the user guide. Is it possible to just *list* all options
from the dtrace command line? Either like "cc" or like "zfs set all":

# cc -x
cc: Warning: illegal option -x
usage: cc [ options] files.  Use 'cc -flags' for details
# cc -flags
-#                    Verbose mode
-###                  Show compiler commands built by driver, no compilation
-A                    Preprocessor predicate assertion
-B<[static|dynamic]>  Specify dynamic or static binding
. [output is snipped]

# zfs set all
missing dataset name
usage:
set <property=value> <filesystem|volume|snapshot> ...

The following properties are supported:

PROPERTY   EDIT  INHERIT   VALUES

available      NO       NO     <size>
compressratio  NO       NO     <1.00x or higher if compressed>
creation       NO       NO     <date>
mounted        NO       NO     yes | no
origin         NO       NO     <snapshot>
.. [output is snipped]


Thanks,
-Alex


[dtrace-discuss] DTrace (API) version?

2008-11-30 Thread Alex Peng
Hi all,

I want to know which DTrace abilities are available on my Solaris box, so I use:

# dtrace -V
dtrace: Sun D 1.6.2

But what does that "Sun D 1.6.2" mean? I guess there could be an RFE. What I
expect is something like this:


===
# zpool upgrade -v
This system is currently running ZFS pool version 13.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
===

Yes, it's better to have a short description and a long, detailed webpage.
Then at least I could know which (new) providers were added, and when.


Thanks,
-Alex


[dtrace-discuss] Triaging CR #6753139 ('ksh(1) does not document the "==" operator in [[ ]]') ... / was: Re: Round three: Re: code review req: 6750659 drti.o crashes app due to corrupt environment

2008-11-30 Thread Roland Mainz
James Carlson wrote:
> Roland Mainz writes:
> > > ksh(1) and test(1) could use an update.
> >
> > Known problem - http://bugs.opensolaris.org/view_bug.do?bug_id=6753139
> > ('ksh(1) does not document the "==" operator in [[ ]]') was filed to
> > address this documentation problem but currently the bug is "stuck" in
> > "4-Defer:No Plan to Fix" (somehow I feel someone thought this is a ksh88
> > code bug and not a documentation issue and did the wrong bug triage in
> > this case... ;-( ).
> 
> Is this just a man page issue?

Yes, IMO this is just a manpage issue. Technically the whole manual page
_may_ need an update to match ksh88 version 'i', which has shipped since
(at least) Solaris 2.6.

> If so, I'll recategorize there.

Thanks! ... :-)
... but please keep [EMAIL PROTECTED] in the
bugster "interests" field...



Bye,
Roland

P.S.: Reply-To: set to ksh93-integration-discuss
<[EMAIL PROTECTED]>

-- 
  __ .  . __
 (o.\ \/ /.o) [EMAIL PROTECTED]
  \__\/\/__/  MPEG specialist, C&&JAVA&&Sun&&Unix programmer
  /O /==\ O\  TEL +49 641 3992797
 (;O/ \/ \O;)