Thomas Maier-Komor wrote:
Hi,

> I am once again looking at a kstat output and trying to understand what
> some of these fields might mean and what their unit might be.
> Unfortunately the units aren't documented anywhere, are they?

Use the source, Luke!


> biostats is probably the statistics for the ddi I/O buffers of Solaris
> that are accessible via bioinit(9f). So lookup and cache hits and misses
> are counted here. Unit is probably "each".

http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/os/bio.c#biostats

Further down in the same file you can see where each of those statistics is incremented. But I don't think biostats is of much interest any more.


> Concerning ufs_inode_cache and hsfs_hsnode_cache, I'd like to know what
> unit buf_inuse has. Is it kB or pages or something else?

The unit is the number of elements, as with all kstat entries of class "kmem_cache".


> BTW: is the result of sysconf(_SC_CPUID_MAX) the maximum id a processor
> can have or the maximum id no processor will ever have?

Well, sysconf(_SC_CPUID_MAX) eventually ends up in the kernel:

http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/syscall/sysconfig.c#165

max_cpuid is initialized to a default value of (NCPU - 1); some architectures may reset max_cpuid:

http://cvs.opensolaris.org/source/xref/on/usr/src/uts/common/os/cpu.c#max_cpuid


So _SC_CPUID_MAX returns the maximum possible value. If an architecture supports cpuids from 0..31 (32 CPUs total), then sysconf(_SC_CPUID_MAX) will return 31. So you should iterate over

for(cpuid = 0; cpuid <= cpuid_max; ++cpuid)
  ...

But for a performance-gathering tool I wrote a few years ago, I didn't bother with _SC_CPUID_MAX at all. I just iterated over all kstat entries, searching for the right kstat module:

  ncpu = 0;
  for(ksp = kc->kc_chain; ksp != NULL; ksp = ksp->ks_next)
  {
    cpu_stat_t *cp;

    /* Only raw kstats from the "cpu_stat" module are of interest. */
    if((ksp->ks_type != KSTAT_TYPE_RAW) ||
       (strncmp(ksp->ks_module, "cpu_stat", 8)))
      continue;
    /* Skip entries that went away between chain update and read. */
    if(kstat_read(kc, ksp, NULL) == -1)
      continue;
    ++ncpu;
    [...]
  }

It really isn't that inefficient. Even on large machines with lots of RAM, disks and CPUs, the cumulative running time was ~60 minutes over a period of >200 days. The program fetched performance counters from disks, network, memory and CPUs every 60 seconds. Only on Solaris 2.6 did accessing the kstat system_misc module block for a few seconds on machines with large memory.



Daniel
_______________________________________________
opensolaris-discuss mailing list
[email protected]
