Auto bridge for qemu network [was: kqemu support: not compiled]

2008-05-15 Thread bazzoola

Thanks for updating the port!

I have a few suggestions:

#cat /etc/rc.conf
[...]
#KQEMU for qemu
kqemu_enable="YES"

#Bridge for qemu
cloned_interfaces="bridge0"
ifconfig_bridge0="up addm sk0"
autobridge_interfaces="bridge0"
autobridge_bridge0="tap*"

This should take care of the network connection between the qemu guest and the
host automatically, instead of setting it up manually, assuming that qemu uses
a tap interface and that the host's default interface is sk0.
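
For example, the guest could then be started with a tap NIC that autobridge
will pick up (an untested sketch; the disk image name and tap unit are
placeholders, and the if_bridge/if_tap modules must be available):

# kldload if_bridge if_tap          (if not compiled into the kernel)
# ifconfig tap0 create
# qemu -hda guest.img -net nic -net tap,ifname=tap0

As soon as tap0 appears, autobridge should add it to bridge0, so the guest
ends up on the same segment as sk0.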


Also, is it possible to update this page? It has some outdated info:
http://people.freebsd.org/~maho/qemu/qemu.html
It is the first Google result for a "freebsd qemu" search.

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.

2008-05-15 Thread Evren Yurtesen

Jeremy Chadwick wrote:

On Wed, May 14, 2008 at 11:39:10PM +0300, Evren Yurtesen wrote:
Approaching the limit on PV entries, consider increasing either the 
vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.
Approaching the limit on PV entries, consider increasing either the 
vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.


I've seen this message on one of our i386 RELENG_7 boxes, which has a
medium load (webserver with PHP) and 2GB RAM.  Our counters, for
comparison:

vm.pmap.pmap_collect_active: 0
vm.pmap.pmap_collect_inactive: 0
vm.pmap.pv_entry_spare: 7991
vm.pmap.pv_entry_allocs: 807863761
vm.pmap.pv_entry_frees: 807708792
vm.pmap.pc_chunk_tryfail: 0
vm.pmap.pc_chunk_frees: 2580082
vm.pmap.pc_chunk_allocs: 2580567
vm.pmap.pc_chunk_count: 485
vm.pmap.pv_entry_count: 154969
vm.pmap.shpgperproc: 200
vm.pmap.pv_entry_max: 1745520



I guess one good question is, how can one see the number of PV entries used by a
process? Shouldn't these appear in the output of the ipcs -a command?


Another good question: in many places there are references to rebooting after
putting a new vm.pmap.shpgperproc value in loader.conf. However, I just changed
it on a running system; has it really taken effect, or was I supposed to reboot?
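
(For reference, the boot-time form would be something like the following in
/boot/loader.conf, with purely illustrative values:

vm.pmap.shpgperproc="400"
vm.pmap.pv_entry_max="2000000"

and the current usage can be compared against the limit with
"sysctl vm.pmap.pv_entry_count vm.pmap.pv_entry_max".)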


In either case, I already increased vm.pmap.shpgperproc to 2000 (from 200) and
the error still occurs. There is not much load on this box; maybe there is a
leak somewhere?


Thanks,
Evren
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Andriy Gapon

on 15/05/2008 07:22 David Xu said the following:

In fact, libthr tries to avoid this handoff. If thread #1 hands off the
ownership to thread #2, it will cause lots of context switches. In an ideal
world, I would let thread #1 run until it exhausts its time slice, and at the
end of its time slice thread #2 would get the mutex ownership. Of course it is
difficult to make this work on SMP, but on UP I would expect the result to be
close enough if the thread scheduler is sane. So we don't raise priority in the
kernel umtx code when a thread is blocked; this gives thread #1 some time to
re-acquire the mutex without context switches, which increases throughput.


Brent, David,

thank you for the responses.
I think I incorrectly formulated my original concern.
It is not about behavior at mutex unlock but about behavior at mutex 
re-lock. You are right that waking waiters at unlock would hurt 
performance. But I think that it is not fair that at re-lock former 
owner gets the lock immediately and the thread that waited on it for 
longer time doesn't get a chance.


Here's a more realistic example than the mock up code.
Say you have a worker thread that processes queued requests and the load 
is such that there is always something on the queue. Thus the worker 
thread doesn't ever have to block waiting on it.
And let's say that there is a GUI thread that wants to convey some 
information to the worker thread. And for that it needs to acquire some 
mutex and do something.
With current libthr behavior the GUI thread would never have a chance to 
get the mutex as worker thread would always be a winner (as my small 
program shows).
Or even more realistic: there should be a feeder thread that puts things 
on the queue, it would never be able to enqueue new items until the 
queue becomes empty if worker thread's code looks like the following:


while (1)
{
        pthread_mutex_lock(&work_mutex);
        while (queue.is_empty())
                pthread_cond_wait(...);
        //dequeue item
        ...
        pthread_mutex_unlock(&work_mutex);
        //perform some short and non-blocking processing of the item
        ...
}

Because the worker thread (while the queue is not empty) would never 
enter cond_wait and would always re-lock the mutex shortly after 
unlocking it.


So while improving performance on small scale this mutex re-acquire-ing 
unfairness may be hurting interactivity and thread concurrency and thus 
performance in general. E.g. in the above example queue would always be 
effectively of depth 1.

Something about lock starvation comes to mind.

So, yes, this is not about standards, this is about reasonable 
expectations about thread concurrency behavior in a particular 
implementation (libthr).
I see now that performance advantage of libthr over libkse came with a 
price. I think that something like queued locks is needed. They would 
clearly reduce raw throughput performance, so maybe that should be a new 
(non-portable?) mutex attribute.


--
Andriy Gapon
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: bin/40278: mktime returns -1 for certain dates/timezones when it should normalize

2008-05-15 Thread Marc Olzheim
With the testcode I put on
http://www.stack.nl/~marcolz/FreeBSD/pr-bin-40278/40278.c I can
reproduce it on FreeBSD 4.11:

output on 4.11-STABLE
--

Init: mktime: 1014944400 Fri Mar  1 02:00:00 CET 2002
1: mktime: 4294967295 Fri Apr  0 02:00:00 CET 2002

Init: mktime: 1014944400 Fri Mar  1 02:00:00 CET 2002
2a: mktime: 1017622800 Mon Apr  1 03:00:00 CEST 2002
2b: mktime: 1017536400 Sun Mar 31 03:00:00 CEST 2002

Init: mktime: 1014944400 Fri Mar  1 02:00:00 CET 2002
3a: mktime: 1014858000 Thu Feb 28 02:00:00 CET 2002
3b: mktime: 1017277200 Thu Mar 28 02:00:00 CET 2002
--

But it is fixed on my FreeBSD 6.x and up systems:

output on 6.3-PRERELEASE:
--

Init: mktime: 1014944400 Fri Mar  1 02:00:00 CET 2002
1: mktime: 1017536400 Sun Mar 31 03:00:00 CEST 2002

Init: mktime: 1014944400 Fri Mar  1 02:00:00 CET 2002
2a: mktime: 1017622800 Mon Apr  1 03:00:00 CEST 2002
2b: mktime: 1017536400 Sun Mar 31 03:00:00 CEST 2002

Init: mktime: 1014944400 Fri Mar  1 02:00:00 CET 2002
3a: mktime: 1014858000 Thu Feb 28 02:00:00 CET 2002
3b: mktime: 1017277200 Thu Mar 28 02:00:00 CET 2002
--

So it looks like it has been fixed in the meantime and that this PR can
be closed.

Marc
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: how much memory does increasing max rules for IPFW take up?

2008-05-15 Thread Andrey V. Elsukov

Vivek Khera wrote:
I had a box run out of dynamic state space yesterday.  I found I can 
increase the number of dynamic rules by increasing the sysctl parameter 
net.inet.ip.fw.dyn_max.  I can't find, however, how this affects memory 
usage on the system.  Is it dynamically allocated and de-allocated, or
is it a static memory buffer?


Each dynamic rule is allocated dynamically. Be careful: with too many dynamic
rules, things will get very slow.
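
For example (untested; the value choices below are only illustrative):

# sysctl net.inet.ip.fw.dyn_count          - dynamic rules currently allocated
# sysctl net.inet.ip.fw.dyn_max=16384      - raise the ceiling
# sysctl net.inet.ip.fw.dyn_buckets=2048   - bigger hash table (power of two)

Raising dyn_buckets together with dyn_max keeps the per-bucket chains short.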


--
WBR, Andrey V. Elsukov
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread David Xu

Andriy Gapon wrote:


Brent, David,

thank you for the responses.
I think I incorrectly formulated my original concern.
It is not about behavior at mutex unlock but about behavior at mutex 
re-lock. You are right that waking waiters at unlock would hurt 
performance. But I think that it is not fair that at re-lock former 
owner gets the lock immediately and the thread that waited on it for 
longer time doesn't get a chance.


Here's a more realistic example than the mock up code.
Say you have a worker thread that processes queued requests and the load 
is such that there is always something on the queue. Thus the worker 
thread doesn't ever have to block waiting on it.
And let's say that there is a GUI thread that wants to convey some 
information to the worker thread. And for that it needs to acquire some 
mutex and do something.
With current libthr behavior the GUI thread would never have a chance to 
get the mutex as worker thread would always be a winner (as my small 
program shows).
Or even more realistic: there should be a feeder thread that puts things 
on the queue, it would never be able to enqueue new items until the 
queue becomes empty if worker thread's code looks like the following:


while (1)
{
        pthread_mutex_lock(&work_mutex);
        while (queue.is_empty())
                pthread_cond_wait(...);
        //dequeue item
        ...
        pthread_mutex_unlock(&work_mutex);
        //perform some short and non-blocking processing of the item
        ...
}

Because the worker thread (while the queue is not empty) would never 
enter cond_wait and would always re-lock the mutex shortly after 
unlocking it.


So while improving performance on small scale this mutex re-acquire-ing 
unfairness may be hurting interactivity and thread concurrency and thus 
performance in general. E.g. in the above example queue would always be 
effectively of depth 1.

Something about lock starvation comes to mind.

So, yes, this is not about standards, this is about reasonable 
expectations about thread concurrency behavior in a particular 
implementation (libthr).
I see now that performance advantage of libthr over libkse came with a 
price. I think that something like queued locks is needed. They would 
clearly reduce raw throughput performance, so maybe that should be a new 
(non-portable?) mutex attribute.




You forgot that the default scheduling policy is time-sharing. After thread
#2 has blocked on the mutex for a while, when thread #1 unlocks the
mutex and unblocks thread #2, thread #2's priority will be raised
and it preempts thread #1; thread #2 then acquires the mutex.
That's how it balances fairness and performance.

Regards,
David Xu

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: syslog console log not logging SCSI problems

2008-05-15 Thread Jeremy Chadwick
On Thu, May 15, 2008 at 10:14:01AM +0100, Nick Barnes wrote:
 One of our FreeBSD boxes has a SCSI controller and disk, which showed
 problems earlier this week.  There was a lot of chatter from the
 SCSI driver in /var/log/messages and to the console.  However, the
 console is unattended and we only discovered the problem subsequently
 because /var/log/console.log didn't show any of the chatter.
 
 console.log is otherwise working, and very helpful (e.g. it shows
 /etc/rc output at boot which lets us spot daemon failures).
 
 We've rebuilt the machine now (fan failure leading to boot disk
 failure), so I can't report the SCSI chatter in question, but here is
 the dmesg and syslog.conf.  Any suggestions?

/boot/loader.conf, /boot.config, and /etc/make.conf would also be
useful.

-- 
| Jeremy Chadwickjdc at parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: syslog console log not logging SCSI problems

2008-05-15 Thread Nick Barnes
At 2008-05-15 10:02:03+, Jeremy Chadwick writes:

 Another thing I can think of would be your kernel configuration.  Can
 you provide it?

Just GENERIC.

By the way, the need to set kern.maxdsiz - for really big processes -
doesn't seem to be documented anywhere.  I could have sworn it used to
be in the Handbook, but I don't see it there.  I guess it should be
both there and in tuning(7). Rediscovering this switch took me 30
minutes yesterday.
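
(For the archives: it is a loader tunable, so the setting belongs in
/boot/loader.conf, e.g. with a purely illustrative value:

kern.maxdsiz="1073741824"   # allow 1 GB data segments

and it only takes effect after a reboot.)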

Nick B
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: syslog console log not logging SCSI problems

2008-05-15 Thread Jeremy Chadwick
On Thu, May 15, 2008 at 10:55:50AM +0100, Nick Barnes wrote:
 At 2008-05-15 09:46:32+, Jeremy Chadwick writes:
  On Thu, May 15, 2008 at 10:14:01AM +0100, Nick Barnes wrote:
   One of our FreeBSD boxes has a SCSI controller and disk, which showed
   problems earlier this week.  There was a lot of chatter from the
   SCSI driver in /var/log/messages and to the console.  However, the
   console is unattended and we only discovered the problem subsequently
   because /var/log/console.log didn't show any of the chatter.
   
   console.log is otherwise working, and very helpful (e.g. it shows
   /etc/rc output at boot which lets us spot daemon failures).
   
   We've rebuilt the machine now (fan failure leading to boot disk
   failure), so I can't report the SCSI chatter in question, but here is
   the dmesg and syslog.conf.  Any suggestions?
  
  /boot/loader.conf, /boot.config, and /etc/make.conf would also be
  useful.
 
 All empty, except setting kern.maxdsiz in /boot/loader.conf.  Running
 GENERIC.
 
 If this happens again, I will retain a copy of /var/log/messages so I
 can report the SCSI messages in question, but they aren't as
 interesting to me as the fact that they didn't appear in console.log.

I've seen odd behaviour with syslog before, but it's hard to explain.  I
don't use the console.info entry in /etc/syslog.conf, so I can't tell
you what's going on there.

Another thing I can think of would be your kernel configuration.  Can
you provide it?

-- 
| Jeremy Chadwickjdc at parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: how much memory does increasing max rules for IPFW take up?

2008-05-15 Thread Bruce M. Simpson

Andrey V. Elsukov wrote:

Vivek Khera wrote:
I had a box run out of dynamic state space yesterday.  I found I can 
increase the number of dynamic rules by increasing the sysctl 
parameter net.inet.ip.fw.dyn_max.  I can't find, however, how this 
affects memory usage on the system.  Is it dynamically allocated and 
de-allocated, or is it a static memory buffer?


Each dynamic rule allocated dynamically. Be careful, too many dynamic 
rules will work very slow.


Got any figures for this? I took a quick glance and it looks like it 
just uses a hash over dst/src/dport/sport. If there are a lot of raw IP 
or ICMP flows then that's going to result in hash collisions.


It might be a good project for someone to optimize if it isn't scaling 
for folks. Bloomier filters are probably worth a look -- Bloom filters 
are a class of probabilistic hash which may return false positives; 
Bloomier filters are a refinement which tries to limit the false 
positives.


Having said that, the default tunable of 256 state entries is probably 
quite low for use cases other than a home/small-office NAT gateway.


cheers
BMS
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Andriy Gapon

on 15/05/2008 12:05 David Xu said the following:

Andriy Gapon wrote:


Brent, David,

thank you for the responses.
I think I incorrectly formulated my original concern.
It is not about behavior at mutex unlock but about behavior at mutex 
re-lock. You are right that waking waiters at unlock would hurt 
performance. But I think that it is not fair that at re-lock former 
owner gets the lock immediately and the thread that waited on it for 
longer time doesn't get a chance.


Here's a more realistic example than the mock up code.
Say you have a worker thread that processes queued requests and the 
load is such that there is always something on the queue. Thus the 
worker thread doesn't ever have to block waiting on it.
And let's say that there is a GUI thread that wants to convey some 
information to the worker thread. And for that it needs to acquire 
some mutex and do something.
With current libthr behavior the GUI thread would never have a chance 
to get the mutex as worker thread would always be a winner (as my 
small program shows).
Or even more realistic: there should be a feeder thread that puts 
things on the queue, it would never be able to enqueue new items until 
the queue becomes empty if worker thread's code looks like the following:


while (1)
{
        pthread_mutex_lock(&work_mutex);
        while (queue.is_empty())
                pthread_cond_wait(...);
        //dequeue item
        ...
        pthread_mutex_unlock(&work_mutex);
        //perform some short and non-blocking processing of the item
        ...
}

Because the worker thread (while the queue is not empty) would never 
enter cond_wait and would always re-lock the mutex shortly after 
unlocking it.


So while improving performance on small scale this mutex 
re-acquire-ing unfairness may be hurting interactivity and thread 
concurrency and thus performance in general. E.g. in the above example 
queue would always be effectively of depth 1.

Something about lock starvation comes to mind.

So, yes, this is not about standards, this is about reasonable 
expectations about thread concurrency behavior in a particular 
implementation (libthr).
I see now that performance advantage of libthr over libkse came with a 
price. I think that something like queued locks is needed. They would 
clearly reduce raw throughput performance, so maybe that should be a 
new (non-portable?) mutex attribute.




You forgot that the default scheduling policy is time-sharing. After thread
#2 has blocked on the mutex for a while, when thread #1 unlocks the
mutex and unblocks thread #2, thread #2's priority will be raised
and it preempts thread #1; thread #2 then acquires the mutex.
That's how it balances fairness and performance.


Maybe. But that's not what I see with my small example program. One 
thread releases and re-acquires a mutex 10 times in a row while the 
other doesn't get it a single time.
I think that there is a very slim chance of a blocked thread preempting 
a running thread in these circumstances, especially if the execution 
time between unlock and re-lock is very small.
I'd rather have an option for FIFO fairness in mutex locking than always 
avoiding a context switch at all costs and depending on the scheduler to 
eventually do priority magic.


--
Andriy Gapon
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


syslog console log not logging SCSI problems

2008-05-15 Thread Nick Barnes
One of our FreeBSD boxes has a SCSI controller and disk, which showed
problems earlier this week.  There was a lot of chatter from the
SCSI driver in /var/log/messages and to the console.  However, the
console is unattended and we only discovered the problem subsequently
because /var/log/console.log didn't show any of the chatter.

console.log is otherwise working, and very helpful (e.g. it shows
/etc/rc output at boot which lets us spot daemon failures).

We've rebuilt the machine now (fan failure leading to boot disk
failure), so I can't report the SCSI chatter in question, but here is
the dmesg and syslog.conf.  Any suggestions?

(yes, I know this is 6.2-RELEASE; I'm partway through cvsupping; the
failure was under 6.2p11).

Thanks in advance,

Nick Barnes

dmesg:

Copyright (c) 1992-2007 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 6.2-RELEASE #0: Fri Jan 12 11:05:30 UTC 2007
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/SMP
Timecounter i8254 frequency 1193182 Hz quality 0
CPU: Intel(R) Pentium(R) 4 CPU 3.00GHz (3006.01-MHz 686-class CPU)
  Origin = GenuineIntel  Id = 0xf4a  Stepping = 10
  
Features=0xbfebfbffFPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE
  Features2=0x649dSSE3,RSVD2,MON,DS_CPL,EST,CNTX-ID,CX16,b14
  AMD Features=0x2010NX,LM
  AMD Features2=0x1LAHF
  Logical CPUs per core: 2
real memory  = 2137440256 (2038 MB)
avail memory = 2086498304 (1989 MB)
ACPI APIC Table: INTEL  D945GTP 
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
 cpu0 (BSP): APIC ID:  0
 cpu1 (AP): APIC ID:  1
ioapic0: Changing APIC ID to 2
ioapic0 Version 2.0 irqs 0-23 on motherboard
kbd1 at kbdmux0
ath_hal: 0.9.17.2 (AR5210, AR5211, AR5212, RF5111, RF5112, RF2413, RF5413)
acpi0: INTEL D945GTP on motherboard
acpi0: Power Button (fixed)
Timecounter ACPI-fast frequency 3579545 Hz quality 1000
acpi_timer0: 24-bit timer at 3.579545MHz port 0x408-0x40b on acpi0
cpu0: ACPI CPU on acpi0
acpi_perf0: ACPI CPU Frequency Control on cpu0
acpi_perf0: failed in PERF_STATUS attach
device_attach: acpi_perf0 attach returned 6
acpi_perf0: ACPI CPU Frequency Control on cpu0
acpi_perf0: failed in PERF_STATUS attach
device_attach: acpi_perf0 attach returned 6
cpu1: ACPI CPU on acpi0
acpi_perf1: ACPI CPU Frequency Control on cpu1
acpi_perf1: failed in PERF_STATUS attach
device_attach: acpi_perf1 attach returned 6
acpi_perf1: ACPI CPU Frequency Control on cpu1
acpi_perf1: failed in PERF_STATUS attach
device_attach: acpi_perf1 attach returned 6
acpi_button0: Sleep Button on acpi0
pcib0: ACPI Host-PCI bridge port 0xcf8-0xcff on acpi0
pci0: ACPI PCI bus on pcib0
pci0: display, VGA at device 2.0 (no driver attached)
pci0: multimedia at device 27.0 (no driver attached)
pcib1: ACPI PCI-PCI bridge at device 28.0 on pci0
pci1: ACPI PCI bus on pcib1
pcib2: ACPI PCI-PCI bridge at device 28.2 on pci0
pci2: ACPI PCI bus on pcib2
pcib3: ACPI PCI-PCI bridge at device 28.3 on pci0
pci3: ACPI PCI bus on pcib3
uhci0: UHCI (generic) USB controller port 0x2080-0x209f irq 23 at device 29.0 
on pci0
uhci0: [GIANT-LOCKED]
usb0: UHCI (generic) USB controller on uhci0
usb0: USB revision 1.0
uhub0: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
uhci1: UHCI (generic) USB controller port 0x2060-0x207f irq 19 at device 29.1 
on pci0
uhci1: [GIANT-LOCKED]
usb1: UHCI (generic) USB controller on uhci1
usb1: USB revision 1.0
uhub1: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub1: 2 ports with 2 removable, self powered
uhci2: UHCI (generic) USB controller port 0x2040-0x205f irq 18 at device 29.2 
on pci0
uhci2: [GIANT-LOCKED]
usb2: UHCI (generic) USB controller on uhci2
usb2: USB revision 1.0
uhub2: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub2: 2 ports with 2 removable, self powered
uhci3: UHCI (generic) USB controller port 0x2020-0x203f irq 16 at device 29.3 
on pci0
uhci3: [GIANT-LOCKED]
usb3: UHCI (generic) USB controller on uhci3
usb3: USB revision 1.0
uhub3: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub3: 2 ports with 2 removable, self powered
ehci0: Intel 82801GB/R (ICH7) USB 2.0 controller mem 0x901c4000-0x901c43ff 
irq 23 at device 29.7 on pci0
ehci0: [GIANT-LOCKED]
usb4: EHCI version 1.0
usb4: companion controllers, 2 ports each: usb0 usb1 usb2 usb3
usb4: Intel 82801GB/R (ICH7) USB 2.0 controller on ehci0
usb4: USB revision 2.0
uhub4: Intel EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub4: 8 ports with 8 removable, self powered
pcib4: ACPI PCI-PCI bridge at device 30.0 on pci0
pci4: ACPI PCI bus on pcib4
ahc0: Adaptec 29160 Ultra160 SCSI adapter port 0x1000-0x10ff mem 
0x9000-0x9fff irq 21 at device 0.0 on pci4
ahc0: [GIANT-LOCKED]
aic7892: Ultra160 Wide Channel A, SCSI Id=7, 32/253 SCBs
fxp0: 

Re: syslog console log not logging SCSI problems

2008-05-15 Thread Nick Barnes
At 2008-05-15 09:46:32+, Jeremy Chadwick writes:
 On Thu, May 15, 2008 at 10:14:01AM +0100, Nick Barnes wrote:
  One of our FreeBSD boxes has a SCSI controller and disk, which showed
  problems earlier this week.  There was a lot of chatter from the
  SCSI driver in /var/log/messages and to the console.  However, the
  console is unattended and we only discovered the problem subsequently
  because /var/log/console.log didn't show any of the chatter.
  
  console.log is otherwise working, and very helpful (e.g. it shows
  /etc/rc output at boot which lets us spot daemon failures).
  
  We've rebuilt the machine now (fan failure leading to boot disk
  failure), so I can't report the SCSI chatter in question, but here is
  the dmesg and syslog.conf.  Any suggestions?
 
 /boot/loader.conf, /boot.config, and /etc/make.conf would also be
 useful.

All empty, except setting kern.maxdsiz in /boot/loader.conf.  Running
GENERIC.

If this happens again, I will retain a copy of /var/log/messages so I
can report the SCSI messages in question, but they aren't as
interesting to me as the fact that they didn't appear in console.log.

Nick B
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Andriy Gapon

David Schwartz wrote:

Are you out of your mind?! You are specifically asking for the absolute
worst possible behavior!

If you have fifty tiny things to do on one side of the room and fifty
tiny things to do on the other side, do you cross the room after each
one? Of course not. That would be *ludicrous*.

If you want/need strict alternation, feel free to code it. But it's the
maximally inefficient scheduler behavior, and it sure as hell had better
not be the default.


David,

what if you have an infinite number of items on one side and a finite 
number on the other, and you want to process them all (in infinite time, 
of course)? Would you still try to finish everything on one side (the 
infinite one), or would you try to look at what you have on the other side?


I am sorry about the fuzzy wording of my original report; I should have 
mentioned starvation somewhere in it.


--
Andriy Gapon
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread David Xu

Andriy Gapon wrote:


Maybe. But that's not what I see with my small example program. One 
thread releases and re-acquires a mutex 10 times in a row while the 
other doesn't get it a single time.
I think that there is a very slim chance of a blocked thread 
preempting a running thread in this circumstances. Especially if 
execution time between unlock and re-lock is very small.
It does not depend on how many times your thread acquires or re-acquires 
the mutex, or on how small the region the mutex is protecting: as long as 
the current thread runs too long, other threads will get higher priorities 
and the ownership will definitely be transferred, though there will be 
some extra context switches.

I'd rather prefer to have an option to have FIFO fairness in mutex 
lock rather than always avoiding context switch at all costs and 
depending on scheduler to eventually do priority magic.



It is better to implement this behavior in your application code. If it
is implemented in the thread library, you still cannot control how many
acquires and re-acquires are allowed for a thread without a context
switch, and a simple FIFO, as you describe here, will cause dreadful
performance problems.


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Daniel Eischen

On Thu, 15 May 2008, Andriy Gapon wrote:

Or even more realistic: there should be a feeder thread that puts things on 
the queue, it would never be able to enqueue new items until the queue 
becomes empty if worker thread's code looks like the following:


while (1)
{
        pthread_mutex_lock(&work_mutex);
        while (queue.is_empty())
                pthread_cond_wait(...);
        //dequeue item
        ...
        pthread_mutex_unlock(&work_mutex);
        //perform some short and non-blocking processing of the item
        ...
}

Because the worker thread (while the queue is not empty) would never enter 
cond_wait and would always re-lock the mutex shortly after unlocking it.


Well in theory, the kernel scheduler will let both threads run fairly
with regards to their cpu usage, so this should even out the enqueueing
and dequeueing threads.

You could also optimize the above a little bit by dequeueing everything
in the queue instead of one at a time.

So while improving performance on small scale this mutex re-acquire-ing 
unfairness may be hurting interactivity and thread concurrency and thus 
performance in general. E.g. in the above example queue would always be 
effectively of depth 1.

Something about lock starvation comes to mind.

So, yes, this is not about standards, this is about reasonable expectations 
about thread concurrency behavior in a particular implementation (libthr).
I see now that performance advantage of libthr over libkse came with a price. 
I think that something like queued locks is needed. They would clearly reduce 
raw throughput performance, so maybe that should be a new (non-portable?) 
mutex attribute.


--
DE
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Upgrade path from 5.5?

2008-05-15 Thread Jason Porter
I have an old sandbox (obviously if it's 5.5) that I was thinking of
upgrading.  Is there an upgrade path I should think about taking, or would
it be best to backup my /home directory and install from scratch?  Note, I'm
currently not subscribed to the list.

-- 
--Jason Porter
Real Programmers think better when playing Adventure or Rogue.

PGP key id: 926CCFF5
PGP fingerprint: 64C2 C078 13A9 5B23 7738 F7E5 1046 C39B 926C CFF5
PGP key available at: keyserver.net, pgp.mit.edu
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: bin/40278: mktime returns -1 for certain dates/timezones when it should normalize

2008-05-15 Thread Gavin Atkinson
On Thu, 2008-05-15 at 10:51 +0200, Marc Olzheim wrote:
 With the testcode I put on
 http://www.stack.nl/~marcolz/FreeBSD/pr-bin-40278/40278.c I can
 reproduce it on FreeBSD 4.11:

[snip]

 But it is fixed on my FreeBSD 6.x and up systems: 

[snip]

Many thanks for going to the effort of testing this.  I've closed the
PR.

Gavin
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: how much memory does increasing max rules for IPFW take up?

2008-05-15 Thread Vivek Khera


On May 15, 2008, at 6:03 AM, Bruce M. Simpson wrote:

Having said that the default tunable of 256 state entries is  
probably quite low for use cases other than home/small office NAT  
gateway.


The default on my systems seems to be 4096.  My steady state on a  
pretty popular web server is about 400, on a busy inbound mail server,  
around 800 states.  I need to account for peaks much higher, though.   
Luckily most of my connections are short-lived.


Thanks for the answers!

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: how much memory does increasing max rules for IPFW take up?

2008-05-15 Thread Jeremy Chadwick
On Thu, May 15, 2008 at 11:03:53AM +0100, Bruce M. Simpson wrote:
 Andrey V. Elsukov wrote:
 Vivek Khera wrote:
 I had a box run out of dynamic state space yesterday.  I found I can 
 increase the number of dynamic rules by increasing the sysctl parameter 
 net.inet.ip.fw.dyn_max.  I can't find, however, how this affects memory 
 usage on the system.  Is it dynamically allocated and de-allocated, or 
 is it a static memory buffer?

 Each dynamic rule allocated dynamically. Be careful, too many dynamic 
 rules will work very slow.

 Got any figures for this? I took a quick glance and it looks like it just 
 uses a hash over dst/src/dport/sport. If there are a lot of raw IP or ICMP 
 flows then that's going to result in hash collisions.

 It might be a good project for someone to optimize if it isn't scaling for 
 folk. Bloomier filters are probably worth a look -- bloom filters are a 
 class of probabilistic hash which may return a false positive, bloomier 
 filters are a refinement which tries to limit the false positives.

 Having said that the default tunable of 256 state entries is probably quite 
 low for use cases other than home/small office NAT gateway.

It's far too low for home/small office.  Standard Linux NAT routers,
such as the Linksys WRT54G/GL, come with a default state table count of
2048, and it is often raised by third-party firmware to 8192 out of
justified necessity.  Search for conntrack below:

http://www.polarcloud.com/firmware

256 can easily be exhausted by more than one user loading multiple HTTP
1.0 web pages at one time (as is the case now that many users have
browsers that load 7-8 web pages into separate tabs during startup).

And if that's not enough reason, consider torrents, which is quite often
what results in a home or office router exhausting its state table.

Bottom line: the 256 default is too low.  It needs to be increased to at
least 2048.

-- 
| Jeremy Chadwickjdc at parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Upgrade path from 5.5?

2008-05-15 Thread Sergey N. Voronkov
On Thu, May 15, 2008 at 08:31:06AM -0600, Jason Porter wrote:
 I have an old sandbox (obviously if it's 5.5) that I was thinking of
 upgrading.  Is there an upgrade path I should think about taking, or would
 it be best to backup my /home directory and install from scratch?  Note, I'm
 currently not subscribed to the list.

Source upgrade 5.5 -> 6.2 -> 6.3 works fine for me.
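
For each hop the sequence was the usual one from the Handbook and
/usr/src/UPDATING, roughly (a sketch only; always read UPDATING for the
release you are moving to):

# cd /usr/src        (with the target release's sources checked out)
# make buildworld
# make buildkernel KERNCONF=GENERIC
# make installkernel
# reboot into single-user mode
# mergemaster -p
# make installworld
# mergemaster
# reboot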

Serg N. Voronkov,
Sibitex JSC.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


cron hanging on to child processes

2008-05-15 Thread Pete French
I have a process which is run daily from cron that stops mysql, does
some stuff, and starts it again. The script outputs a number of lines
which are emailed to me in the output of the cron job.

What I have noticed is that my emails actually lag by a day - it turns out
that the cron job appears to not send the email until mysql is shut down the
following day. I can only assume that when mysql is restarted, cron sees it
as a child process, and thus does not terminate until that process does. Which
happens when a new cron job shuts it down again 24 hours later.

Any suggestions on fixing this? I wouldn't have thought that stopping
and starting a daemon was a particularly unusual thing to want to
do from a cron job.
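
(I suppose one workaround would be to make sure the restarted daemon does not
inherit the script's stdout/stderr, so nothing keeps cron's output pipe open,
e.g. something like

    /usr/local/etc/rc.d/mysql-server start > /dev/null 2>&1

with whatever rc.d path actually applies here, or wrapping the start in
daemon(8) - but that feels more like working around it than fixing it.)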

-pete.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Daniel Eischen

On Thu, 15 May 2008, Daniel Eischen wrote:


On Thu, 15 May 2008, Andriy Gapon wrote:

Or even more realistic: there should be a feeder thread that puts things on 
the queue, it would never be able to enqueue new items until the queue 
becomes empty if worker thread's code looks like the following:


while (1)
{
        pthread_mutex_lock(&work_mutex);
        while (queue.is_empty())
                pthread_cond_wait(...);
        //dequeue item
        ...
        pthread_mutex_unlock(&work_mutex);
        //perform some short and non-blocking processing of the item
        ...
}

Because the worker thread (while the queue is not empty) would never enter 
cond_wait and would always re-lock the mutex shortly after unlocking it.


Well in theory, the kernel scheduler will let both threads run fairly
with regards to their cpu usage, so this should even out the enqueueing
and dequeueing threads.

You could also optimize the above a little bit by dequeueing everything
in the queue instead of one at a time.


I suppose you could also enforce your own scheduling with
something like the following:

pthread_cond_t  writer_cv;
pthread_cond_t  reader_cv;
pthread_mutex_t q_mutex;
...
thingy_q_t  thingy_q;
int writers_waiting = 0;
int readers_waiting = 0;
...

void
enqueue(thingy_t *thingy)
{
        pthread_mutex_lock(&q_mutex);
        /* Insert into thingy q */
        ...
        if (readers_waiting > 0) {
                pthread_cond_broadcast(&reader_cv);
                readers_waiting = 0;
        }
        while (thingy_q.size > ENQUEUE_THRESHOLD_HIGH) {
                writers_waiting++;
                pthread_cond_wait(&writer_cv, &q_mutex);
        }
        pthread_mutex_unlock(&q_mutex);
}

thingy_t *
dequeue(void)
{
        thingy_t *thingy;

        pthread_mutex_lock(&q_mutex);
        while (thingy_q.size == 0) {
                readers_waiting++;
                pthread_cond_wait(&reader_cv, &q_mutex);
        }
        /* Dequeue thingy */
        ...

        if ((writers_waiting > 0) &&
            (thingy_q.size < ENQUEUE_THRESHOLD_LOW)) {
                /* Wake up the writers. */
                pthread_cond_broadcast(&writer_cv);
                writers_waiting = 0;
        }
        pthread_mutex_unlock(&q_mutex);
        return (thingy);
}

The above is completely untested and probably contains some
bugs ;-)

You probably shouldn't need anything like that if the kernel
scheduler is scheduling your threads fairly.

--
DE
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


RE: thread scheduling at mutex unlock

2008-05-15 Thread David Schwartz

 what if you have an infinite number of items on one side and finite 
 number on the other, and you want to process them all (in infinite time, 
 of course). Would you still try to finish everything on one side (the 
 infinite one) or would you try to look at what you have on the other side?
 
 I am sorry about fuzzy wording of my original report, I should have 
 mentioned starvation somewhere in it.

There is no such thing as a fair share when comparing an infinite quantity to 
a finite quantity. It is just as sensible to do 1 then 1 as 10 then 10 or a 
billion then 1.

What I would do in this case is work on one side for one timeslice then the 
other side for one timeslice, continuing until either side was finished, then 
I'd work exclusively on the other side. This is precisely the purpose for 
having timeslices in a scheduler.

The timeslice is carefully chosen so that it's not so long that you ignore a 
side for too long. It's also carefully chosen so that it's not so short that 
you spend all your time switching sides.

What sane schedulers do is assume that you want to make as much forward 
progress as quickly as possible. This means getting as many work units done per 
unit time as possible. This means as few context switches as possible.

A scheduler that switches significantly more often than once per timeslice with 
a load like this is *broken*. The purpose of the timeslice is to place an upper 
bound on the number of context switches in cases where forward progress can be 
made on more than one process. An ideal scheduler would not switch more often 
than once per timeslice unless it could not make further forward progress.

Real-world schedulers actually may allow one side to pre-empt the other, and 
may switch a bit more often than a scheduler that's ideal in the sense 
described above. This is done in an attempt to boost interactive performance.

But your basic assumption that strict alternation is desirable is massively 
wrong. That's the *worst* *possible* outcome.

DS


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Brent Casavant
On Thu, 15 May 2008, Andriy Gapon wrote:

 With current libthr behavior the GUI thread would never have a chance to get
 the mutex as worker thread would always be a winner (as my small program
 shows).

The example you gave indicates an incorrect mechanism being used for the
GUI to communicate with this worker thread.  For the behavior you desire,
you need a common condition that lets both the GUI and the work item
generator indicate that there is something for the worker to do, *and*
you need seperate mechanisms for the GUI and work item generator to add
to their respective queues.

Something like this (could be made even better with a little effort):

struct worker_queues_s {
        pthread_mutex_t     work_mutex;
        struct work_queue_s work_queue;

        pthread_mutex_t     gui_mutex;
        struct gui_queue_s  gui_queue;

        pthread_mutex_t     stuff_mutex;
        int                 stuff_todo;
        pthread_cond_t      stuff_cond;
};
struct worker_queues_s wq;

int
main(int argc, char *argv[]) {
        // blah blah
        init_worker_queue(&wq);
        // blah blah
}

void
gui_callback(...) {
        // blah blah

        // Set up GUI message

        pthread_mutex_lock(&wq.gui_mutex);
        // Add GUI message to queue
        pthread_mutex_unlock(&wq.gui_mutex);

        pthread_mutex_lock(&wq.stuff_mutex);
        wq.stuff_todo++;
        pthread_cond_signal(&wq.stuff_cond);
        pthread_mutex_unlock(&wq.stuff_mutex);

        // blah blah
}

void*
work_generator_thread(void *arg) {
        // blah blah

        while (1) {
                // Set up work to do

                pthread_mutex_lock(&wq.work_mutex);
                // Add work item to queue
                pthread_mutex_unlock(&wq.work_mutex);

                pthread_mutex_lock(&wq.stuff_mutex);
                wq.stuff_todo++;
                pthread_cond_signal(&wq.stuff_cond);
                pthread_mutex_unlock(&wq.stuff_mutex);
        }

        // blah blah
}

void*
worker_thread(void *arg) {
        // blah blah

        while (1) {
                // Wait for there to be something to do
                pthread_mutex_lock(&wq.stuff_mutex);
                while (wq.stuff_todo < 1) {
                        pthread_cond_wait(&wq.stuff_cond,
                                          &wq.stuff_mutex);
                }
                pthread_mutex_unlock(&wq.stuff_mutex);

                // Handle GUI messages
                pthread_mutex_lock(&wq.gui_mutex);
                while (!gui_queue_empty(&wq.gui_queue)) {
                        // dequeue and process GUI messages
                        pthread_mutex_lock(&wq.stuff_mutex);
                        wq.stuff_todo--;
                        pthread_mutex_unlock(&wq.stuff_mutex);
                }
                pthread_mutex_unlock(&wq.gui_mutex);

                // Handle work items
                pthread_mutex_lock(&wq.work_mutex);
                while (!work_queue_empty(&wq.work_queue)) {
                        // dequeue and process work item
                        pthread_mutex_lock(&wq.stuff_mutex);
                        wq.stuff_todo--;
                        pthread_mutex_unlock(&wq.stuff_mutex);
                }
                pthread_mutex_unlock(&wq.work_mutex);
        }

        // blah blah
}

This should accomplish what you desire.  Caution that I haven't
compiled, run, or tested it, but I'm pretty sure it's a solid
solution.

The key here is unifying the two input sources (the GUI and work queues)
without blocking on either one of them individually.  The value of
(wq.stuff_todo < 1) becomes a proxy for the value of
(gui_queue_empty(...) && work_queue_empty(...)).

I hope that helps,
Brent

-- 
Brent Casavant  Dance like everybody should be watching.
www.angeltread.org
KD5EMB, EN34lv
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


RE: thread scheduling at mutex unlock

2008-05-15 Thread David Schwartz

 Brent, David,

 thank you for the responses.
 I think I incorrectly formulated my original concern.
 It is not about behavior at mutex unlock but about behavior at mutex
 re-lock. You are right that waking waiters at unlock would hurt
 performance. But I think that it is not fair that at re-lock former
 owner gets the lock immediately and the thread that waited on it for
 longer time doesn't get a chance.

You are correct, but fairness is not the goal, performance is. If you want
fairness, you are welcome to code it. But threads don't file union
grievances, and it would be absolute foolishness for a scheduler to
sacrifice performance to make threads happier.

The scheduler decides which thread runs; you decide what the running thread
does. You are expected to use your control over the latter to exercise
whatever your application's notion of fairness is.

Your test program is a classic example of a case where the use of a mutex is
inappropriate.

 Here's a more realistic example than the mock up code.
 Say you have a worker thread that processes queued requests and the load
 is such that there is always something on the queue. Thus the worker
 thread doesn't ever have to block waiting on it.
 And let's say that there is a GUI thread that wants to convey some
 information to the worker thread. And for that it needs to acquire some
 mutex and do something.
 With current libthr behavior the GUI thread would never have a chance to
 get the mutex as worker thread would always be a winner (as my small
 program shows).

Nonsense. The worker thread would be doing work most of the time and
wouldn't be holding the mutex.

 Or even more realistic: there should be a feeder thread that puts things
 on the queue, it would never be able to enqueue new items until the
 queue becomes empty if worker thread's code looks like the following:

 while (1)
 {
         pthread_mutex_lock(&work_mutex);
         while (queue.is_empty())
                 pthread_cond_wait(...);
         //dequeue item
         ...
         pthread_mutex_unlock(&work_mutex);
         //perform some short and non-blocking processing of the item
         ...
 }

 Because the worker thread (while the queue is not empty) would never
 enter cond_wait and would always re-lock the mutex shortly after
 unlocking it.

So what? The feeder thread could get the mutex after the mutex is unlocked
before the worker thread goes to do work. The only reason your test code
encountered a problem was because you yielded the CPU while you held the
mutex and never used up a timeslice.

 So while improving performance on small scale this mutex re-acquire-ing
 unfairness may be hurting interactivity and thread concurrency and thus
 performance in general. E.g. in the above example queue would always be
 effectively of depth 1.
 Something about lock starvation comes to mind.

Nope. You have to create a situation where the mutex is held much more often
than not held to get this behavior. That's a pathological case where the use
of a mutex is known to be inappropriate.

 So, yes, this is not about standards, this is about reasonable
 expectations about thread concurrency behavior in a particular
 implementation (libthr).
 I see now that performance advantage of libthr over libkse came with a
 price. I think that something like queued locks is needed. They would
 clearly reduce raw throughput performance, so maybe that should be a new
 (non-portable?) mutex attribute.

If you want queued locks, feel free to code them and use them. But you have
to work very hard to create a case where they are useful. If you find you're
holding the mutex more often than not, you're doing something *very* wrong.

DS


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Andriy Gapon

on 15/05/2008 15:57 David Xu said the following:

Andriy Gapon wrote:


Maybe. But that's not what I see with my small example program. One 
thread releases and re-acquires a mutex 10 times in a row while the 
other doesn't get it a single time.
I think that there is a very slim chance of a blocked thread 
preempting a running thread in this circumstances. Especially if 
execution time between unlock and re-lock is very small.
It does not depend on how many times your thread acquires or re-acquires 
the mutex, or on how small the region the mutex is protecting: as long as 
the current thread runs too long, other threads will get higher priorities 
and the ownership will definitely be transferred, though there will be 
some extra context switches.


David,

did you examine or try the small program that I sent before?
The lucky thread slept for 1 second each time it held the mutex. So in 
total it spent about 8 seconds sleeping and holding the mutex. And the 
unlucky thread, consequently, spent 8 seconds blocked waiting for that 
mutex. And it didn't get lucky.
Yes, technically the lucky thread was not running while holding the 
mutex, so probably that is why the scheduling algorithm didn't kick in 
immediately.


I did more testing and I see that the unlucky thread eventually gets a 
chance (eventually meaning after very many lock/unlock cycles), but I 
think that it is still penalized too much.
I wonder if, with the current code, it is possible and easy to make this 
behavior more deterministic.

Maybe something like the following:
if (oldest_waiter.wait_time > X)
        do what we do now...
else
        go into the kernel for a possible switch

I have very little idea about unit and value of X.

I'd rather prefer to have an option to have FIFO fairness in mutex 
lock rather than always avoiding context switch at all costs and 
depending on scheduler to eventually do priority magic.



It is better to implement this behavior in your application code. If it
is implemented in the thread library, you still cannot control how many
acquires and re-acquires are allowed for a thread without a context
switch, and a simple FIFO, as you describe here, will cause dreadful
performance problems.


I almost agree. But I still wouldn't take your last statement for a 
fact. Dreadful performance - on a micro scale maybe, but not necessarily 
on a macro scale.
After all, never switching context would give the best performance for a 
single CPU-bound task, but you wouldn't think that this is the best 
performance for the whole system.


As a data point: it seems that the current Linux threading library is not 
significantly worse than libthr, but my small test program behaves as I 
would expect on Fedora 7.


--
Andriy Gapon
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Andrew Snow

But I think that it is not fair that at re-lock former
owner gets the lock immediately and the thread that waited on it for
longer time doesn't get a chance.


I believe this is what yield() is for.  Before attempting a re-lock you 
should call yield() to allow other threads a chance to run.


(Side note: On FreeBSD, I believe only high priority threads will run 
when you yield().  As a workaround, I think you have to lower the 
thread's priority before yield() and then raise it again afterwards.)



- Andrew
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Andriy Gapon

on 15/05/2008 22:29 David Schwartz said the following:

what if you have an infinite number of items on one side and finite
 number on the other, and you want to process them all (in infinite
time, of course). Would you still try to finish everything on one
side (the infinite one) or would you try to look at what you have
on the other side?

I am sorry about fuzzy wording of my original report, I should have
 mentioned starvation somewhere in it.


There is no such thing as a fair share when comparing an infinite
quantity to a finite quantity. It is just as sensible to do 1 then 1
as 10 then 10 or a billion then 1.

What I would do in this case is work on one side for one timeslice
then the other side for one timeslice, continuing until either side
was finished, then I'd work exclusively on the other side. This is
precisely the purpose for having timeslices in a scheduler.

The timeslice is carefully chosen so that it's not so long that you
ignore a side for too long. It's also carefully chosen so that it's
not so short that you spend all your time switching sides.

What sane schedulers do is assume that you want to make as much
forward progress as quickly as possible. This means getting as many
work units done per unit time as possible. This means as few context
switches as possible.

A scheduler that switches significantly more often than once per
timeslice with a load like this is *broken*. The purpose of the
timeslice is to place an upper bound on the number of context
switches in cases where forward progress can be made on more than one
process. An ideal scheduler would not switch more often than once per
timeslice unless it could not make further forward progress.

Real-world schedulers actually may allow one side to pre-empt the
other, and may switch a bit more often than a scheduler that's
ideal in the sense described above. This is done in an attempt to
boost interactive performance.

But your basic assumption that strict alternation is desirable is
massively wrong. That's the *worst* *possible* outcome.


David,

thank you for the tutorial, it is quite enlightening.
But first of all, did you take a look at my small test program?
There are 1-second sleeps in it; this is not about timeslices and 
scheduling at that level at all. This is about a basic expectation of 
fairness in acquiring a lock at the macro level. I know that when one 
thread acquires, releases and re-acquires a mutex over 10 seconds while 
the other thread is blocked on that mutex for those 10 seconds, it is not 
about timeslices.


--
Andriy Gapon
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: thread scheduling at mutex unlock

2008-05-15 Thread Andriy Gapon

on 15/05/2008 22:51 Brent Casavant said the following:

On Thu, 15 May 2008, Andriy Gapon wrote:


With current libthr behavior the GUI thread would never have a chance to get
the mutex as worker thread would always be a winner (as my small program
shows).


The example you gave indicates an incorrect mechanism being used for the
GUI to communicate with this worker thread.  For the behavior you desire,
you need a common condition that lets both the GUI and the work item
generator indicate that there is something for the worker to do, *and*
you need separate mechanisms for the GUI and work item generator to add
to their respective queues.



Brent,

that was just an example. Probably quite a bad example.
I should limit myself to the program that I sent, and I should repeat 
that the result it produces is not what I would call reasonably expected. 
And I will repeat that I understand that the behavior is not prohibited 
by the standards (well, never letting other threads run is probably not 
prohibited either).




Something like this (could be made even better with a little effort):

struct worker_queues_s {
        pthread_mutex_t     work_mutex;
        struct work_queue_s work_queue;

        pthread_mutex_t     gui_mutex;
        struct gui_queue_s  gui_queue;

        pthread_mutex_t     stuff_mutex;
        int                 stuff_todo;
        pthread_cond_t      stuff_cond;
};
struct worker_queues_s wq;

int
main(int argc, char *argv[]) {
        // blah blah
        init_worker_queue(&wq);
        // blah blah
}

void
gui_callback(...) {
        // blah blah

        // Set up GUI message

        pthread_mutex_lock(&wq.gui_mutex);
        // Add GUI message to queue
        pthread_mutex_unlock(&wq.gui_mutex);

        pthread_mutex_lock(&wq.stuff_mutex);
        wq.stuff_todo++;
        pthread_cond_signal(&wq.stuff_cond);
        pthread_mutex_unlock(&wq.stuff_mutex);

        // blah blah
}

void*
work_generator_thread(void *arg) {
        // blah blah

        while (1) {
                // Set up work to do

                pthread_mutex_lock(&wq.work_mutex);
                // Add work item to queue
                pthread_mutex_unlock(&wq.work_mutex);

                pthread_mutex_lock(&wq.stuff_mutex);
                wq.stuff_todo++;
                pthread_cond_signal(&wq.stuff_cond);
                pthread_mutex_unlock(&wq.stuff_mutex);
        }

        // blah blah
}

void*
worker_thread(void *arg) {
        // blah blah

        while (1) {
                // Wait for there to be something to do
                pthread_mutex_lock(&wq.stuff_mutex);
                while (wq.stuff_todo < 1) {
                        pthread_cond_wait(&wq.stuff_cond,
                                          &wq.stuff_mutex);
                }
                pthread_mutex_unlock(&wq.stuff_mutex);

                // Handle GUI messages
                pthread_mutex_lock(&wq.gui_mutex);
                while (!gui_queue_empty(&wq.gui_queue)) {
                        // dequeue and process GUI messages
                        pthread_mutex_lock(&wq.stuff_mutex);
                        wq.stuff_todo--;
                        pthread_mutex_unlock(&wq.stuff_mutex);
                }
                pthread_mutex_unlock(&wq.gui_mutex);

                // Handle work items
                pthread_mutex_lock(&wq.work_mutex);
                while (!work_queue_empty(&wq.work_queue)) {
                        // dequeue and process work item
                        pthread_mutex_lock(&wq.stuff_mutex);
                        wq.stuff_todo--;
                        pthread_mutex_unlock(&wq.stuff_mutex);
                }
                pthread_mutex_unlock(&wq.work_mutex);
        }

        // blah blah
}

This should accomplish what you desire.  Caution that I haven't
compiled, run, or tested it, but I'm pretty sure it's a solid
solution.

The key here is unifying the two input sources (the GUI and work queues)
without blocking on either one of them individually.  The value of
(wq.stuff_todo < 1) becomes a proxy for the value of
(gui_queue_empty(...) && work_queue_empty(...)).

I hope that helps,
Brent




--
Andriy Gapon

Re: thread scheduling at mutex unlock

2008-05-15 Thread Brent Casavant
On Fri, 16 May 2008, Andriy Gapon wrote:

 that was just an example. Probably a quite bad example.
 I should only limit myself to the program that I sent and I should repeat that
 the result that it produces is not what I would call reasonably expected. And
 I will repeat that I understand that the behavior is not prohibited by
 standards (well, never letting other threads to run is probably not prohibited
 either).

Well, I don't know what to tell you at this point.  I believe I
understand the nature of the problem you're encountering, and I
believe there are perfectly workable mechanisms in Pthreads to
allow you to accomplish what you desire without depending on
implementation-specific details.  Yes, it's more work on your
part, but if done well it's one-time work.

The behavior you desire is useful only in limited situations,
and can be implemented at the application level through the
use of Pthreads primitives.  If Pthreads behaved as you apparently
expect, it would be impossible to implement the current behavior
at the application level.

Queueing mutexes are inappropriate in the majority of code designs.
I'll take your word that it is appropriate in your particular case,
but that does not make it appropriate for more typical designs.

Several solutions have been presented, including one from me.  If
you choose not to implement such solutions, then best of luck to you.

OK, I'm a sucker for punishment.  So use this instead of Pthreads
mutexes.  This should work on both FreeBSD and Linux (FreeBSD has
some convenience routines in the sys/queue.h package that Linux doesn't):

#include <sys/queue.h>
#include <pthread.h>

struct thread_queue_entry_s {
    TAILQ_ENTRY(thread_queue_entry_s) tqe_list;
    pthread_cond_t tqe_cond;
    pthread_mutex_t tqe_mutex;
    int tqe_wakeup;
};
TAILQ_HEAD(thread_queue_s, thread_queue_entry_s);

typedef struct {
    struct thread_queue_s qm_queue;
    pthread_mutex_t qm_queue_lock;
    unsigned int qm_users;
} queued_mutex_t;

int
queued_mutex_init(queued_mutex_t *qm) {
    TAILQ_INIT(&qm->qm_queue);
    qm->qm_users = 0;
    return pthread_mutex_init(&qm->qm_queue_lock, NULL);
}

int
queued_mutex_lock(queued_mutex_t *qm) {
    struct thread_queue_entry_s waiter;

    pthread_mutex_lock(&qm->qm_queue_lock);
    qm->qm_users++;
    if (1 == qm->qm_users) {
        /* Nobody was waiting for mutex, we own it.
         * Fast path out.
         */
        pthread_mutex_unlock(&qm->qm_queue_lock);
        return 0;
    }

    /* There are others waiting for the mutex. Slow path. */

    /* Initialize this thread's wait structure */
    pthread_cond_init(&waiter.tqe_cond, NULL);
    pthread_mutex_init(&waiter.tqe_mutex, NULL);
    pthread_mutex_lock(&waiter.tqe_mutex);
    waiter.tqe_wakeup = 0;

    /* Add this thread's wait structure to queue */
    TAILQ_INSERT_TAIL(&qm->qm_queue, &waiter, tqe_list);
    pthread_mutex_unlock(&qm->qm_queue_lock);

    /* Wait for somebody to hand the mutex to us */
    while (!waiter.tqe_wakeup) {
        pthread_cond_wait(&waiter.tqe_cond,
                          &waiter.tqe_mutex);
    }

    /* Destroy this thread's wait structure */
    pthread_mutex_unlock(&waiter.tqe_mutex);
    pthread_mutex_destroy(&waiter.tqe_mutex);
    pthread_cond_destroy(&waiter.tqe_cond);

    /* We own the queued mutex (handed to us by unlock) */
    return 0;
}

int
queued_mutex_unlock(queued_mutex_t *qm) {
    struct thread_queue_entry_s *waiter;

    pthread_mutex_lock(&qm->qm_queue_lock);
    qm->qm_users--;
    if (0 == qm->qm_users) {
        /* No waiters to wake up.  Fast path out. */
        pthread_mutex_unlock(&qm->qm_queue_lock);
        return 0;
    }

    /* Wake up first waiter. Slow path. */

    /* Remove the first waiting thread. */
    waiter = qm->qm_queue.tqh_first;
    TAILQ_REMOVE(&qm->qm_queue, waiter, tqe_list);
    pthread_mutex_unlock(&qm->qm_queue_lock);

    /* Wake up the thread. */
    pthread_mutex_lock(&waiter->tqe_mutex);
    waiter->tqe_wakeup = 1;
    pthread_cond_signal(&waiter->tqe_cond);
    pthread_mutex_unlock(&waiter->tqe_mutex);

    return 0;
}

int

RE: thread scheduling at mutex unlock

2008-05-15 Thread David Schwartz

 David,
 
 thank you for the tutorial, it is quite enlightening.
 But first of all, did you take a look at my small test program?

Yes. It demonstrates the classic example of mutex abuse. A mutex is not an 
appropriate synchronization mechanism when it's going to be held most of the 
time and released briefly.
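
To be concrete, here is a minimal sketch (mine, not taken from your 
program) of the kind of use a mutex is meant for: the lock covers only a 
brief update of shared data, and the long-running work happens with the 
lock released, so neither thread can starve the other for long no matter 
which of them the kernel wakes first:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter = 0;        /* the data the mutex protects */

static void *
worker(void *arg)
{
    const char *name = arg;
    int i;

    for (i = 0; i < 10; i++) {
        pthread_mutex_lock(&m);
        shared_counter++;             /* brief critical section */
        pthread_mutex_unlock(&m);

        usleep(100000);               /* the "real work" runs unlocked */
    }
    printf("%s finished\n", name);
    return NULL;
}

int
main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, "worker 1");
    pthread_create(&t2, NULL, worker, "worker 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}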

 There are 1 second sleeps in it, this is not about timeslices and 
 scheduling at that level at all. This is about basic expectation about 
 fairness of acquiring a lock at macro level. I know that when one thread 
 acquires and releases and reacquires a mutex during 10 seconds while the 
 other thread is blocked on that mutex for 10 seconds, then this is not 
 about timeslices.

I guess it comes down to what your test program is supposed to test. Threading 
primitives can always be made to look bad in toy test programs that don't even 
remotely reflect real-world use cases. No sane person optimizes for such toys.

The reason your program behaves the way it does is that the thread that 
holds the mutex relinquishes the CPU while it holds it. As a result, it 
appears to be very nice and its dynamic priority rises. In a real-world 
case, the threads waiting for the mutex will have their priorities rise 
while the thread holding the mutex uses up its timeslice doing work.

This is simply not an appropriate use of a mutex. It would be absolute 
foolishness to encumber the platform's default mutex implementation with 
any attempt to make such abuses behave more the way you happen to expect 
them to.

In fact, the behavior I expect is the behavior seen. So the defect is in your 
unreasonable expectations. The scheduler's goal is to allow the running thread 
to make forward progress, and it does this perfectly.

DS




RE: cvsup.uk.FreeBSD.org

2008-05-15 Thread Dr Josef Karthauser
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:owner-freebsd-
 [EMAIL PROTECTED] On Behalf Of Tony Finch
 Sent: 12 May 2008 17:06
 To: Dr Joe Karthauser
 Cc: [EMAIL PROTECTED]; freebsd-stable@freebsd.org
 Subject: Re: cvsup.uk.FreeBSD.org
 
 On Sun, 11 May 2008, Dr Joe Karthauser wrote:
 
  I have reclassified this faulty mirror as cvsup1 and made cvsup a
 cname to
  cvsup3, which is the most recent addition and best hardware
 available. In
  the future we will always point to the most available machine in this
 way.
 
 Looks like I'm getting a bit more traffic than before - peaking at over
 100 logins an hour.

As a matter of interest, do you know what the peak bandwidth usage is?

Joe 




today's build is causing errors for me

2008-05-15 Thread Rob Lytle
First, I am running 7.0-STABLE and just cvsup'd today; I then built the
system and a new kernel and installed them.  Second, I am using the
GENERIC kernel, and /etc/sysctl.conf is empty.  I will put my /etc/rc.conf
at the end.  I tried to do a very careful job of merging /usr/src/etc with
/etc, and I didn't touch any files that I or the computer had configured.

But I am getting these errors upon bootup:

1.  eval: /etc/rc.d/cleanvar: Permission Denied
2.  syslogd: bind: Can't assign requested address.  (repeated twice)
3.  syslogd: child pid 134 exited with return code 1
4.  /etc/rc:  Warning:  Dump device does not exist.  Savecore will not
run.   (this always worked before)
5.  /etc/rc.d/securelevel:  /etc/rc.d/sysctl:  Permission denied.
6.  My computer says "Amnesiac" even though the hostname is clearly set
in rc.conf.
7.  My WiFi no longer starts up by itself.  I have to bring it up manually
using ifconfig and dhclient.

Any help would be appreciated.  I'm kind of lost as some of it makes no
sense to me, esp #6 and 7.
Has the default rc.conf format changed???

Thanks,

Sincerely,  Rob

--
My rc.conf file

# -- sysinstall generated deltas -- # Sun Oct 28 11:36:26 2007
# Created: Sun Oct 28 11:36:26 2007
# Enable network daemons for user convenience.
# Please make all changes to this file, not to /etc/defaults/rc.conf.
# This file now contains just the overrides from /etc/defaults/rc.conf.
hostname="xenon"   #  for DHCP
linux_enable="YES"
#moused_enable="YES"
sshd_enable="YES"
usbd_enable="YES"
lpd_enable="YES"
kern_securelevel_enable="NO"
dumpdev="AUTO"
dumpdir="/var/crash"
cron_enable="YES"
performance_cx_lowest="LOW"
performance_cpu_freq="HIGH"
economy_cx_lowest="LOW"
economy_cpu_freq="LOW"
ipfilter_enable="YES"
ipfilter_rules="/etc/ipfw.rules"
ipmon_enable="YES"
ipmon_flags="-Ds"
watchdogd_enable="YES"
powerd_enable="YES"
mixer_enable="YES"

#ifconfig_ath0="WPA DHCP channel 3"
#ifconfig_msk0="DHCP"
ifconfig_ath0="DHCP ssid leighmorlock channel 6"
# added by mergebase.sh
local_startup="/usr/local/etc/rc.d"


Re: how much memory does increasing max rules for IPFW take up?

2008-05-15 Thread Ian Smith
On Thu, 15 May 2008, Jeremy Chadwick wrote:
  On Thu, May 15, 2008 at 11:03:53AM +0100, Bruce M. Simpson wrote:
   Andrey V. Elsukov wrote:
   Vivek Khera wrote:
   I had a box run out of dynamic state space yesterday.  I found I can 
   increase the number of dynamic rules by increasing the sysctl parameter 
   net.inet.ip.fw.dyn_max.  I can't find, however, how this affects memory 
   usage on the system.  Is it dyanamically allocated and de-allocated, or 
   is it a static memory buffer?
  
   Each dynamic rule allocated dynamically. Be careful, too many dynamic 
   rules will work very slow.
  
   Got any figures for this? I took a quick glance and it looks like it just 
   uses a hash over dst/src/dport/sport. If there are a lot of raw IP or ICMP 
   flows then that's going to result in hash collisions.
  
   It might be a good project for someone to optimize if it isn't scaling for 
   folk. Bloomier filters are probably worth a look -- bloom filters are a 
   class of probabilistic hash which may return a false positive, bloomier 
   filters are a refinement which tries to limit the false positives.
  
   Having said that the default tunable of 256 state entries is probably 
   quite 
   low for use cases other than home/small office NAT gateway.
  
  It's far too low for home/small office.  Standard Linux NAT routers,
  such as the Linksys WRT54G/GL, come with a default state table count of
  2048, and often is increased by third-party firmwares to 8192 based on
  justified necessity.  Search for conntrack below:
  
  http://www.polarcloud.com/firmware
  
  256 can easily be exhausted by more than one user loading multiple HTTP
  1.0 web pages at one time (such is the case with many users now have
  browsers that load 7-8 web pages into separate tabs during startup).
  
  And if that's not enough reason, consider torrents, which is quite often
  what results in a home or office router exhausting its state table.
  
  Bottom line: the 256 default is too low.  It needs to be increased to at
  least 2048.

I think there may be some confusion in terms.  Looking at defaults on my
older 5.5 system - sure, call it a home/small office NAT gateway: 

 net.inet.ip.fw.dyn_buckets: 256
 net.inet.ip.fw.curr_dyn_buckets: 256
 net.inet.ip.fw.dyn_count: 212
 net.inet.ip.fw.dyn_max: 4096
 net.inet.ip.fw.static_count: 153

What defaults to 256 is the number of hash table buckets, not the max
number of dynamic rules, here 4096 (though the 5.5 manual says 8192).

On hash collisions, a linked list is used for duplicate hashes of:

i = (id->dst_ip) ^ (id->src_ip) ^ (id->dst_port) ^ (id->src_port);
i &= (curr_dyn_buckets - 1);

So while 256 may well be too few buckets for many systems (and, like
Bruce, I wonder about the effectiveness of the xor hash for raw IP and
ICMP, and wouldn't mind seeing some stats on bucket use vs. linked-list
lengths for various workloads), it doesn't determine the maximum number
of dynamic rules available, which can be raised without any apparent
static memory allocation and is moderated by the various expiry timeout
sysctls. 
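
To make the collision point concrete, below is a little userland mock-up
of the bucket calculation quoted above (illustrative only, not the kernel
code; the addresses and bucket count are made up).  With no port numbers
in play, every raw-IP/ICMP state between a given pair of addresses lands
in the same bucket, while TCP/UDP flows between the same pair spread out
as the ports vary:

#include <stdio.h>
#include <stdint.h>

static unsigned int
dyn_bucket(uint32_t dst_ip, uint32_t src_ip,
    uint16_t dst_port, uint16_t src_port, unsigned int curr_dyn_buckets)
{
    unsigned int i;

    /* same arithmetic as the two kernel lines above;
     * curr_dyn_buckets must be a power of two */
    i = dst_ip ^ src_ip ^ dst_port ^ src_port;
    i &= (curr_dyn_buckets - 1);
    return (i);
}

int
main(void)
{
    uint32_t src = 0xc0a80102;        /* 192.168.1.2 */
    uint32_t dst = 0x0a000001;        /* 10.0.0.1    */
    unsigned int buckets = 256;
    unsigned int p;

    /* TCP flows between the two hosts spread over many buckets... */
    for (p = 1024; p < 1029; p++)
        printf("tcp sport %u -> bucket %u\n", p,
            dyn_bucket(dst, src, 80, (uint16_t)p, buckets));

    /* ...but every ICMP/raw-IP state between them shares one bucket. */
    printf("icmp -> bucket %u\n", dyn_bucket(dst, src, 0, 0, buckets));
    return (0);
}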

For reference, I admin a 4.8 filtering bridge with up to 20 boxes behind
it that has only very rarely reported exceeding the maximum number of
dynamic rules with the (4.8) default net.inet.ip.fw.dyn_max of 1000.
However, it only keeps state for UDP connections (and yes, it only ever
hits that limit on torrents or Skype, which are generally admin. prohib. :)

cheers, Ian  (not subscribed to -ipfw)



Re: how much memory does increasing max rules for IPFW take up?

2008-05-15 Thread Andrey V. Elsukov

Bruce M. Simpson wrote:
Got any figures for this? I took a quick glance and it looks like it 
just uses a hash over dst/src/dport/sport. If there are a lot of raw IP 
or ICMP flows then that's going to result in hash collisions.


That's my guess; I don't have any figures.
Yes, hash collisions will trigger a lot of searching in the bucket
lists, and increasing only dyn_max without also increasing dyn_buckets
will increase collisions.

It might be a good project for someone to optimize if it isn't scaling 
for folk. Bloomier filters are probably worth a look -- bloom filters 
are a class of probabilistic hash which may return a false positive, 
bloomier filters are a refinement which tries to limit the false 
positives.


There were some ideas from Vadim Goncharov about rewriting the dynamic
rules implementation.

--
WBR, Andrey V. Elsukov


Re: today's build is causing errors for me

2008-05-15 Thread Jeremy Chadwick
On Thu, May 15, 2008 at 04:44:57PM -0700, Rob Lytle wrote:
 First I am running 7.0-Stable and just cvsup'd today.  Then I built the
 system, new kernel, and installed them.  Second I am using the GENERIC
 KERNEL.  Sysctl.conf is empty.  I will put my /etc/rc.conf at the end.  I
 tried to do a very careful job of merging /usr/src/etc with /etc.  I didn't
 touch any files that I or the computer configured.
 
 But I am getting these errors upon bootup:
 
 1.  eval: /etc/rc.d/cleanvar: Permission Denied
 2.  syslogd: bind: Can't assign requested address.  (repeated twice)
 3.  syslogd: child pid 134 exited with return code 1
 4.  /etc/rc:  Warning:  Dump device does not exist.  Savecore will not
 run.   (this always worked before)
 5.  /etc/rc.d/securelevel:  /etc/rc.d/sysctl:  Permission denied.
 6.  My computer says Amnesiac yet the host name is clearly in rc.conf
 7.  My WiFi no longer starts up by myself.  I have to do it all manually
 using ifconfig and dhclient.

You should have followed the instructions in /usr/src/Makefile.  You
don't merge things by hand.  You can use mergemaster for that.  Please
use it, as I'm willing to bet there's a portion of your rc framework
which is broken in some way.

-- 
| Jeremy Chadwickjdc at parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |
