Re: Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)

2006-03-25 Thread Peter Jeremy
On Fri, 2006-Mar-24 10:00:20 -0800, Matthew Dillon wrote:
Ok.  The next test is to NOT do umount/remount and then use a data set
that is ~2x system memory (but can still be mmap'd by grep).  Rerun
the data set multiple times using grep and grep --mmap.

The results here are weird.  With 1GB RAM and a 2GB dataset, the
timings seem to depend on the sequence of operations: reading is
significantly faster, but only when the data was mmap'd previously.
There's one outlier that I can't easily explain.

hw.physmem: 932249600
hw.usermem: 815050752
+ ls -l /6_i386/var/tmp/test
-rw-r--r--  1 peter  wheel  2052167894 Mar 25 05:44 /6_i386/var/tmp/test
+ /usr/bin/time -l grep dfhfhdsfhjdsfl /6_i386/var/tmp/test
+ /usr/bin/time -l grep --mmap dfhfhdsfhjdsfl /6_i386/var/tmp/test

This was done in multi-user on a VTY using a script.  X was running
(and I forgot to kill an xclock) but there shouldn't have been anything
else happening.

grep --mmap followed by grep --mmap:
mm 77.94 real 1.65 user 2.08 sys
mm 78.22 real 1.53 user 2.21 sys
mm 78.34 real 1.55 user 2.21 sys
mm 79.33 real 1.48 user 2.37 sys

grep --mmap followed by grep/read
mr 56.64 real 0.77 user 2.45 sys
mr 56.73 real 0.67 user 2.53 sys
mr 56.86 real 0.68 user 2.60 sys
mr 57.64 real 0.64 user 2.63 sys
mr 57.71 real 0.62 user 2.68 sys
mr 58.04 real 0.63 user 2.59 sys
mr 58.83 real 0.78 user 2.50 sys
mr 59.15 real 0.74 user 2.50 sys

grep/read followed by grep --mmap
rm 75.98 real 1.56 user 2.19 sys
rm 76.06 real 1.50 user 2.29 sys
rm 76.50 real 1.40 user 2.38 sys
rm 77.35 real 1.47 user 2.30 sys
rm 77.49 real 1.39 user 2.44 sys
rm 79.14 real 1.56 user 2.19 sys
rm 88.88 real 1.57 user 2.27 sys

grep/read followed by grep/read
rr 78.00 real 0.69 user 2.74 sys
rr 78.34 real 0.67 user 2.74 sys
rr 79.64 real 0.69 user 2.71 sys
rr 79.69 real 0.73 user 2.75 sys

free and cache pages.  The system would only be allocating ~60MB/s
(or whatever your disk can do), so the pageout thread ought to be able
to keep up.

This is a laptop so the disk can only manage a bit over 25 MB/sec.

-- 
Peter Jeremy
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: nve timeout (and down) regression?

2006-03-25 Thread JoaoBR
On Saturday 25 March 2006 02:29, Ion-Mihai Tetcu wrote:

  I updated my system (which was happy on Feb. 15 code) to March 13 code
  and I am still running fine. No errors at all. Also, another system was
  updated to RELENG_6 yesterday and it is also running clean.
 
  Again, all systems are identical dual core AMD64 systems running i386
  code. (We would like to run amd64, but OpenOffice.org still does not run
  on it and we need that.)

 Both my systems are single core single CPU.


It appears to be a pattern: the machines with the problem are all SMP; UP
machines do not show the nve timeout or any other problem with it.
Likewise, the same with sk: on SMP the system crashes and with UP it's ok.

João



Re: Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)

2006-03-25 Thread Peter Jeremy
On Fri, 2006-Mar-24 15:18:00 -0500, Mikhail Teterin wrote:
which there is not with the read. Read also requires fairly large buffers in 
the user space to be efficient -- *in addition* to the buffers in the kernel. 

I disagree.  With a filesystem read, the kernel is solely responsible
for handling physical I/O with an efficient buffer size.  The userland
buffers simply amortise the cost of the system call and copyout
overheads.
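To illustrate that point, here is a minimal sketch (hypothetical code, not from the thread; the name count_bytes_read is invented): the userland buffer in a read() loop exists only to amortise per-syscall cost, while the kernel drives physical I/O at whatever size it considers efficient.

```c
#include <fcntl.h>
#include <unistd.h>

/* Count the bytes in a file with plain read().  The 64 KiB buffer
 * only amortises syscall and copyout overhead; the kernel still
 * performs the physical I/O at its own preferred block size. */
long long count_bytes_read(const char *path)
{
    char buf[64 * 1024];
    long long total = 0;
    ssize_t n;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;
    close(fd);
    return n < 0 ? -1 : total;
}
```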

I'm also quite certain that fulfilling my demands would add quite a bit of
complexity to the mmap support in the kernel, but hey, that's what the kernel
is there for :-)

Unfortunately, your patches to implement this seem to have become detached
from your e-mail. :-)

Unlike grep, which seems to use only 32k buffers anyway (and does not use 
madvise -- see attachment), my program mmaps gigabytes of the input file at 
once, trusting the kernel to do a better job at reading the data in the most 
efficient manner :-)

mmap can lend itself to cleaner implementations because there's no
need to have a nested loop to read buffers and then process them.  You
can mmap the entire file and process it.  One downside is that on a
32-bit architecture, this limits you to processing files that are
somewhat less than 2GB.  Another downside is that touching an uncached
page triggers a trap, which may not be as efficient as reading a block
of data through the filesystem interface, and I/O errors are delivered
via signals (which may not be as easy to handle).
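A minimal sketch of that whole-file pattern (illustrative only; count_newlines_mmap is an invented name, and the SIGBUS handling for I/O errors is deliberately omitted, which is exactly the downside noted above):

```c
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Map a whole file and scan it in one pass -- no nested read loop. */
long long count_newlines_mmap(const char *path)
{
    struct stat st;
    long long nl = 0;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return -1;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }
    if (st.st_size == 0) {
        close(fd);
        return 0;
    }
    char *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                    /* the mapping outlives the descriptor */
    if (p == MAP_FAILED)
        return -1;
    for (off_t i = 0; i < st.st_size; i++)
        if (p[i] == '\n')
            nl++;
    munmap(p, (size_t)st.st_size);
    return nl;
}
```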

Peter Jeremy wrote:
 On an amd64 system running about 6-week old -stable, both ['grep' and 'grep 
 --mmap' -mi] behave pretty much identically.

Peter, I read grep's source -- it is not using madvise (because it hurts 
performance on SunOS-4.1!) and reads in chunks of 32k anyway. Would you care 
to look at my program instead? Thanks:

   http://aldan.algebra.com/mzip.c

fetch: http://aldan.algebra.com/mzip.c: Not Found

I tried writing a program that just mmap'd my entire (2GB) test file
and summed all the longwords in it.  This gave me similar results to
grep.  Setting MADV_SEQUENTIAL and/or MADV_WILLNEED made no noticeable
difference.  I suspect something about your code or system is disabling
the mmap read-ahead functionality.

What happens if you simulate read-ahead yourself?  Have your main
program fork and the child access pages slightly ahead of the parent
but do nothing else.
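A rough sketch of that experiment (hypothetical code; a real test would throttle the child so it stays only slightly ahead of the parent instead of running free):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stddef.h>

/* Fork a child that touches one byte per page across the buffer and
 * does nothing else, so its page faults act as a crude read-ahead
 * while the parent does the real work. */
long long scan_with_prefetch(const char *p, size_t len)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    long long sum = 0;
    pid_t child = fork();

    if (child == 0) {                 /* child: fault pages in, then exit */
        volatile char sink = 0;
        for (size_t i = 0; i < len; i += (size_t)pagesz)
            sink += p[i];
        (void)sink;
        _exit(0);
    }
    for (size_t i = 0; i < len; i++)  /* parent: the real work */
        sum += (unsigned char)p[i];
    if (child > 0)
        waitpid(child, NULL, 0);
    return sum;
}
```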

-- 
Peter Jeremy


Re: nve timeout (and down) regression?

2006-03-25 Thread David Xu
On Saturday 25 March 2006 18:04, JoaoBR wrote:

 It appears to be a pattern: the machines with the problem are all SMP; UP
 machines do not show the nve timeout or any other problem with it.
 Likewise, the same with sk: on SMP the system crashes and with UP it's ok.
 
 João
 
Mine is UP, chipset is NForce3 250Gb; currently it shows the TIMEOUT error and
the system freezes while resetting the NIC, but still works.

David Xu


Re: nve timeout (and down) regression?

2006-03-25 Thread David G. Lawrence
 This happens w/o any real activity on that interface (which goes into
 an Allied Telesyn switch):
 ...
 Mar 24 19:39:54 worf kernel: nve0: device timeout (1)
 Mar 24 19:39:54 worf kernel: nve0: link state changed to DOWN
 Mar 24 19:39:55 worf kernel: nve0: link state changed to UP
 Mar 24 19:40:14 worf kernel: nve0: device timeout (1)

   The problem is the watchdog timeout itself.  I've attached an email that
I sent a few months ago which describes the problem, along with a simple
patch which disables the watchdog timer.

-DG

David G. Lawrence
President
Download Technologies, Inc. - http://www.downloadtech.com - (866) 399 8500
The FreeBSD Project - http://www.freebsd.org
Pave the road of life with opportunities.

Date: Wed, 4 Jan 2006 16:21:03 -0800
Subject: Re: nve(4) patch - please test!

 Since I sent the mail below, I have discovered that the new driver
 has a problem when no cable is plugged in, at least on my Asus board.
 
 It doesn't only run into timeouts; during some of these timeouts the
 machine, or at least the keyboard, hangs for about a minute.
 
 Is there anything I can do to help debug this?

   I ran into this problem recently as well and spent some time diagnosing
it. It's not that the cable isn't plugged in - rather it happens whenever
the traffic levels are low.
   The problem is that the nvidia-supplied portion of the driver is deferring
the release of the completed transmit buffers, and this occasionally
results in if_timer expiring, causing the driver watchdog routine to be
called (device timeout).  The watchdog routine resets the card, and the
nvidia-supplied code sits in a high-priority loop waiting for the card
to reset.  This can take many seconds, and your system will be hung until
it completes.
   I have a work-around patch for the problem that I've attached to this
email. It simply disables the watchdog. A real fix would involve accounting
for the outstanding transmit buffers differently (or perhaps not at all -
e.g. always attempt to call the nvidia-supplied code and if a queue-full
error occurs, then wait for an interrupt before trying to queue more
transmit packets).

-DG

David G. Lawrence
President
Download Technologies, Inc. - http://www.downloadtech.com - (866) 399 8500
The FreeBSD Project - http://www.freebsd.org
Pave the road of life with opportunities.

Index: if_nve.c
===
RCS file: /home/ncvs/src/sys/dev/nve/if_nve.c,v
retrieving revision 1.7.2.8
diff -c -r1.7.2.8 if_nve.c
*** if_nve.c	25 Dec 2005 21:57:03 -0000	1.7.2.8
--- if_nve.c	5 Jan 2006 00:12:45 -0000
***************
*** 943,949 ****
return;
}
/* Set watchdog timer. */
!   ifp->if_timer = 8;
  
/* Copy packet to BPF tap */
BPF_MTAP(ifp, m0);
--- 943,949 ----
return;
}
/* Set watchdog timer. */
!   ifp->if_timer = 0;
  
/* Copy packet to BPF tap */
BPF_MTAP(ifp, m0);


Re: nve timeout (and down) regression?

2006-03-25 Thread Bjoern A. Zeeb

On Sat, 25 Mar 2006, David Xu wrote:


On Saturday 25 March 2006 18:04, JoaoBR wrote:


It appears to be a pattern: the machines with the problem are all SMP; UP
machines do not show the nve timeout or any other problem with it.
Likewise, the same with sk: on SMP the system crashes and with UP it's ok.


For sk, please try the new driver Pyun is regularly posting and will
commit once the 5/6 releases are done.



Mine is UP, chipset is NForce3 250Gb; currently it shows the TIMEOUT error and
the system freezes while resetting the NIC, but still works.


Yes, people are seeing this with Nf4 too. Could you give me the full
details as asked earlier in this thread or as questioned in
http://www.freebsd.org/cgi/query-pr.cgi?pr=94524 ?

--
Bjoern A. Zeeb  bzeeb at Zabbadoz dot NeT

Usb Router

2006-03-25 Thread Maher Mohamed
How can I connect to my USB router?

--
Mohamed M. Maher


Re: Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)

2006-03-25 Thread Mikhail Teterin
On Saturday 25 March 2006 05:39 am, Peter Jeremy wrote:
= On Fri, 2006-Mar-24 15:18:00 -0500, Mikhail Teterin wrote:
= which there is not with the read. Read also requires fairly large
= buffers in the user space to be efficient -- *in addition* to the
= buffers in the kernel. 
= 
= I disagree.  With a filesystem read, the kernel is solely responsible
= for handling physical I/O with an efficient buffer size. The userland
= buffers simply amortise the cost of the system call and copyout
= overheads.

I don't see a disagreement in the above :-) The mmap API can be slightly faster
than read -- the kernel is still responsible for handling physical I/O with an
efficient buffer size, but instead of copying the data out after reading, it
can read it directly into the process' memory.

= I'm also quite certain that fulfilling my demands would add quite a
= bit of complexity to the mmap support in the kernel, but hey, that's what the
=  kernel is there for :-)
= 
= Unfortunately, your patches to implement this seem to have become detached
= from your e-mail. :-)

If I manage to *convince* someone that there is a problem to solve, I'll
consider it a good contribution to the project...

= mmap can lend itself to cleaner implementations because there's no
= need to have a nested loop to read buffers and then process them.  You
= can mmap the entire file and process it.  One downside is that on a
= 32-bit architecture, this limits you to processing files that are
= somewhat less than 2GB.

First, only one of our architectures is 32-bit :-) On 64-bit systems, the 
addressable memory (kind of) matches the maximum file size. Second, even with 
the loop reading/processing chunks at a time, the implementation is cleaner, 
because it does not need to allocate any memory, nor try to guess which 
buffer size to pick for optimal performance, nor align the buffers on pages 
(which grep does, for example, rather hairily).

= Another downside is that touching an uncached page triggers a trap, which may
= not be as efficient as reading a block of data through the filesystem
= interface, and I/O errors are delivered via signals (which may not be as
= easy to handle).

My point exactly. It does seem to be less efficient *at the moment* and I
am trying to have the kernel support for this cleaner method of reading 
*improved*. By convincing someone with a clue to do it, that is... :-)

= Would you care to look at my program instead? Thanks:
= 
=  http://aldan.algebra.com/mzip.c

I'm sorry, that should be  http://aldan.algebra.com/~mi/mzip.c -- I checked 
this time :-(

= I tried writing a program that just mmap'd my entire (2GB) test file
= and summed all the longwords in it.

The files I'm dealing with are database dumps -- 10-80Gb :-) Maybe that's
what triggers some pessimal case?..

Thanks! Yours,

-mi


Fwd: Usb Router

2006-03-25 Thread Dennis Melentyev
AFAIK, it's still impossible to connect any human body to the Internet via
USB routers. :)

Be more descriptive: show your 'uname -sr', the usb-related parts of dmesg,
the router model, what you wish to have as a result, etc.

2006/3/25, Maher Mohamed [EMAIL PROTECTED]:

 How can I connect to my USB router?

 --
 Mohamed M. Maher


--
Dennis Melentyev

RCS file error in cvsup

2006-03-25 Thread Troy
In doing a cvsup I've run into the following error. I have seen this same
error show up over the last few days.  

Thoughts?

-Troy


Server warning: RCS file error in
/usr/local/etc/cvsup/prefixes/FreeBSD.cvs/src/sys/modules/i2c/controllers/nfsmb/Makefile,v:
1: head expected


Re: Temperature monitoring in FreeBSD 4/5/6

2006-03-25 Thread Jim King
Marian Hettwer [EMAIL PROTECTED] wrote:
 Doug Ambrisko wrote:
  Stephan Koenig writes:
  | Does anyone know of an easy way to get temperature information out of
  | a Dell PowerEdge 1550/1650/1750/1850/2650/2850 running FreeBSD4/5/6?
  |
  | Something that has a very simple CLI that just outputs the temperature
  | without any formatting, or a library/sysctl, would be ideal.

 Since it wasn't mentioned yet, the OpenBSD folks have a driver for the
 Dell OMSA, esm(4).
 http://www.openbsd.org/cgi-bin/man.cgi?query=esm&apropos=0&sektion=0&manpath=OpenBSD+Current&arch=i386&format=html
 It'll do the job you want, and some more :)

 I was wondering whether this could be ported to FreeBSD... hm... would be
 great :)

 I know this doesn't help you now, but anyway...

 regards,
 Marian

Interesting!  I have a couple Dell 2450's - I'd LOVE to see this
driver ported to FreeBSD.  If anybody is interested in working on it I
can probably provide access to one of the boxes for testing, and I
might even be able to contribute some cash.

Jim


Re: Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)

2006-03-25 Thread Matthew Dillon

:The results here are weird.  With 1GB RAM and a 2GB dataset, the
:timings seem to depend on the sequence of operations: reading is
:significantly faster, but only when the data was mmap'd previously.
:There's one outlier that I can't easily explain.
:...
:Peter Jeremy

Really odd.  Note that if your disk can only do 25 MBytes/sec, the
calculation is: 2052167894 / 25MB = ~80 seconds, not ~60 seconds 
as you would expect from your numbers.

So that would imply that the 80 second numbers represent read-ahead,
and the 60 second numbers indicate that some of the data was retained
from a prior run (and not blown out by the sequential reading in the
later run).

This type of situation *IS* possible as a side effect of other
heuristics.  It is particularly possible when you combine read() with
mmap because read() uses a different heuristic than mmap() to
implement the read-ahead.  There is also code in there which depresses
the page priority of 'old' already-read pages in the sequential case.
So, for example, if you do a linear grep of 2GB you might end up with
a cache state that looks like this:

l = low priority page
m = medium priority page
h = high priority page

FILE: [---m]

Then when you rescan using mmap,

FILE: [l--m]
  [--lm]
  [-l-m]
  [l--m]
  [---l---m]
  [--lm]
  [-llHHHmm]
  [lllHHmmm]
  [---H]
  [---mmmHm]

The low priority pages don't bump out the medium priority pages
from the previous scan, so the grep winds up doing read-ahead
until it hits the large swath of pages already cached from the
previous scan, without bumping out those pages.

There is also a heuristic in the system (FreeBSD and DragonFly)
which tries to randomly retain pages.  It clearly isn't working :-)
I need to change it to randomly retain swaths of pages, the
idea being that it should take repeated runs to rebalance the VM cache
rather than allowing a single run to blow it out or allowing a
static set of pages to be retained indefinitely, which is what your
tests seem to show is occurring.

-Matt
Matthew Dillon 
[EMAIL PROTECTED]


Re: Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)

2006-03-25 Thread John-Mark Gurney
Mikhail Teterin wrote this message on Sat, Mar 25, 2006 at 09:20 -0500:
 = The downside is that touching an uncached page triggers a trap which may
 = not be as efficient as reading a block of data through the filesystem
 = interface, and I/O errors are delivered via signals (which may not be as
 = easy to handle).
 
 My point exactly. It does seem to be less efficient *at the moment* and I
 am trying to have the kernel support for this cleaner method of reading 
 *improved*. By convincing someone with a clue to do it, that is... :-)

I think the thing is that there isn't an easy way to speed up the
faulting of the page, and that is why you are having such trouble
making people believe that there is a problem...

To convince people that there is a problem, you need to run benchmarks,
and make code modifications to show that yes, something can be done to
improve the performance...

The other useful/interesting number would be to compare system time
between the mmap case and the read case to see how much work the
kernel is doing in each case...
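One way to take that measurement from inside the test program itself (an illustrative helper, not from the thread; sys_cpu_seconds is an invented name) is getrusage():

```c
#include <sys/time.h>
#include <sys/resource.h>

/* Return the system CPU time this process has consumed so far, in
 * seconds.  Sampling it before and after a read or mmap pass gives
 * the kernel-work comparison suggested above. */
double sys_cpu_seconds(void)
{
    struct rusage ru;

    if (getrusage(RUSAGE_SELF, &ru) != 0)
        return -1.0;
    return (double)ru.ru_stime.tv_sec + (double)ru.ru_stime.tv_usec / 1e6;
}
```

Sampling it around each pass mirrors the "sys" column that /usr/bin/time -l reports, but lets one program measure both paths in one run.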

-- 
  John-Mark Gurney  Voice: +1 415 225 5579

 All that I will do, has been done, All that I have, has not.


Re: Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)

2006-03-25 Thread Peter Jeremy
On Sat, 2006-Mar-25 10:29:17 -0800, Matthew Dillon wrote:
Really odd.  Note that if your disk can only do 25 MBytes/sec, the
calculation is: 2052167894 / 25MB = ~80 seconds, not ~60 seconds 
as you would expect from your numbers.

systat was reporting 25-26 MB/sec.  dd'ing the underlying partition gives
27MB/sec (with 24 and 28 for adjacent partitions).

This type of situation *IS* possible as a side effect of other
heuristics.  It is particularly possible when you combine read() with
mmap because read() uses a different heuristic than mmap() to
implement the read-ahead.  There is also code in there which depresses
the page priority of 'old' already-read pages in the sequential case.
So, for example, if you do a linear grep of 2GB you might end up with
a cache state that looks like this:

If I've understood you correctly, this also implies that the timing
depends on the previous two scans, not just the previous scan.  I
didn't test all combinations of this but would have expected to see
two distinct sets of mmap/read timings - one for read/mmap/read and
one for mmap/mmap/read.

I need to change it to randomly retain swaths of pages, the
idea being that it should take repeated runs to rebalance the VM cache
rather than allowing a single run to blow it out or allowing a
static set of pages to be retained indefinitely, which is what your
tests seem to show is occurring.

I don't think this sort of test is a clear indication that something is
wrong.  There's only one active process at any time and it's performing
a sequential read of a large dataset.  In this case, evicting already
cached data to read new data is not necessarily productive (a simple-
minded algorithm will be evicting data that is going to be accessed in
the near future).

Based on the timings, the mmap/read case manages to retain ~15% of the file
in cache.  Given the amount of RAM available, the theoretical limit is
about 40% so this isn't too bad.  It would be nicer if both read and
mmap managed this gain, irrespective of how the data had been previously
accessed.

-- 
Peter Jeremy


Re: Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)

2006-03-25 Thread Peter Jeremy
On Sat, 2006-Mar-25 09:20:13 -0500, Mikhail Teterin wrote:
I'm sorry, that should be  http://aldan.algebra.com/~mi/mzip.c -- I checked 
this time :-(

It doesn't look like it's doing anything especially weird.  As Matt
pointed out, creating files with mmap() is not a good idea because the
syncer can cause massive fragmentation when allocating space.  I can't
test it as-is because it insists on mmap'ing its output, I only
have one disk, and you can't mmap /dev/null.

Since your program is already written to mmap the input and output in
pieces, it would be trivial to convert it to use read/write.
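For illustration, the suggested read/write structure might look like this (a plain copy stands in for mzip's per-chunk compression step; copy_readwrite is an invented name):

```c
#include <fcntl.h>
#include <unistd.h>

/* Process a file chunk-at-a-time with read()/write() through a fixed
 * userland buffer instead of mapping windows of it.  Here the
 * "processing" is a straight copy. */
int copy_readwrite(const char *src, const char *dst)
{
    char buf[1 << 16];
    ssize_t n;
    int out;
    int in = open(src, O_RDONLY);

    if (in < 0)
        return -1;
    out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) {
        close(in);
        return -1;
    }
    while ((n = read(in, buf, sizeof buf)) > 0) {
        if (write(out, buf, n) != n) {   /* short write: treat as error */
            n = -1;
            break;
        }
    }
    close(in);
    close(out);
    return n < 0 ? -1 : 0;
}
```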

= I tried writing a program that just mmap'd my entire (2GB) test file
= and summed all the longwords in it.

The files I'm dealing with are database dumps -- 10-80Gb :-) Maybe, that's,
what triggers some pessimal case?..

I tried generating an 11GB test file and got results consistent with my
previous tests: grep using read or mmap, as well as mmap'ing the entire
file, gives similar times, with the disk mostly saturated.

I suggest you try converting mzip.c to use read/write and see if the
problem is still present.

-- 
Peter Jeremy


Need help with isp driver and disk arrary

2006-03-25 Thread kreios
I am having a problem with the isp driver seeing a StorageTek disk
array's LUNs.  I am directly attaching to the array and everything works
fine as long as I plug into the A controller on the array.  When I
connect up to the B controller, I cannot see any of the LUNs
advertised.  The QLogic firmware can see the LUNs.  The only difference
I can see is that the LUNs start at 0 on the A side and at 3 on the B
side.

Kernel messages for the isp driver are:

isp0: Qlogic ISP 2312 PCI FC-AL Adapter port 0xa000-0xa0ff mem
0xec06-0xec060fff irq 17 at device 0.0 on pci1
isp0: Reserved 0x1000 bytes for rid 0x14 type 3 at 0xec06
isp0: using Memory space register mapping
isp0: [GIANT-LOCKED]
isp0: Board Type 2312, Chip Revision 0x2, loaded F/W Revision 3.3.6
isp0: 839 max I/O commands supported
isp0: NVRAM Port WWN 0x21e08b1e4a71
isp0: bad execution throttle of 0- using 16
isp0: LIP Received
isp0: Loop UP
isp0: Port Database Changed
isp0: LIP Received
isp0: Port Database Changed
isp0: Firmware State Config Wait-Ready
isp0: 2Gb link speed/s
isp0: Loop ID 0, Port ID 0xef, Loop State 0x2, Topology 'Private Loop'
isp0: Target 0 (Loop 0x0) Port ID 0xef (role Initiator) Arrived
isp0: Target 2 (Loop 0x2) Port ID 0xe4 (role Target) Arrived

camcontrol devlist -v shows the following when connected to port A:

scbus0 on isp0 bus 0:
<STK BladeCtlr B210 0612>  at scbus0 target 0 lun 0 (da0,pass0)
<STK BladeCtlr B210 0612>  at scbus0 target 0 lun 1 (da1,pass1)
<STK BladeCtlr B210 0612>  at scbus0 target 0 lun 2 (da2,pass2)
<>  at scbus0 target -1 lun -1 ()
scbus-1 on xpt0 bus 0:
<>  at scbus-1 target -1 lun -1 (xpt0)

camcontrol devlist -v shows the following when connected to port B:

scbus0 on isp0 bus 0:
<>  at scbus0 target -1 lun -1 ()
scbus-1 on xpt0 bus 0:
<>  at scbus-1 target -1 lun -1 (xpt0)

Thanks,
--
Dave


Re: Reading via mmap stinks (Re: weird bugs with mmap-ing via NFS)

2006-03-25 Thread Peter Jeremy
On Fri, 2006-Mar-24 15:18:00 -0500, Mikhail Teterin wrote:
On the machine where both mzip and the disk run at only 50%, the disk is a
plain SATA drive (mzip's state goes from RUN to vnread and back).
...
   18 usersLoad  0.46  0.53  0.60  24 ??? 15:15

Mem:KBREALVIRTUAL VN PAGER  SWAP PAGER
Tot   Share  TotShareFree in  out in  out
Act 18338645880 2775855245268   92216 count  240
All 18811885992 1432466k52864 pages 3413
 Interrupts
Proc:r  p  d  s  wCsw  Trp  Sys  Int  Sof  Fltcow2252 total
 1 2101  1605 2025  197  4222 2018 251432 wireirq1: 
 atkb
   506156 act irq6: 
 fdc0
 3.0%Sys   0.0%Intr 45.2%User  0.0%Nice 51.9%Idl  1038216 inact   irq15: 
 ata
||||||||||  89252 cache   irq17: 
fwo
= 2964 freeirq20: 
nve
  daefr   irq21: 
 ohc
Namei Name-cacheDir-cache prcfr   241 irq22: 
ehc
Calls hits% hits% 951 react11 irq25: 
 em0
  pdwak   irq29: 
 amr
  618 zfodpdpgs  2000 cpu0: 
 time
Disks   ad4 amrd0 ofodintrn
KB/t  56.79  0.00 %slo-z   200816 buf
tps 241 05143 tfree 8 dirtybuf
MB/s  13.38  0.00  10 desiredvnodes
% busy   47 0   34717 numvnodes
24991 freevnodes

OK.  I _can_ see something like this when I try to compress a big file using
either your program or gzip.  In my case, both the disk % busy and system idle
vary widely, but there's typically 50-60% disk utilisation and 30-40% CPU idle.
However, systat is reporting 23-25MB/sec (whereas dd peaks at ~30MB/sec), so the
time to gzip the datafile isn't that much different from the time to just read it.

My guess is that the read-ahead algorithms are working but aren't doing enough
read-ahead to cope with a "read a bit, do some cpu-intensive processing, repeat"
pattern at 25MB/sec, so you're winding up with a degree of serialisation where
the I/O and compressing aren't overlapped.  I'm not sure how tunable the
read-ahead is.

-- 
Peter Jeremy


6.1BETA4 sysinstall

2006-03-25 Thread ian j hart
Run sysinstall, custom, distributions
select minimal

No [x] appears.

Selecting custom indicates items were selected.

Seems to be one of the March 8th commits (sam)