Re: ATA APM and NCQ support in FreeBSD atacontrol

2008-05-14 Thread Bruce Cran

Ian Smith wrote:

I take Jonathan's point that it would be nice to have this functionality
in atacontrol, though perhaps the BUGS section in ataidle(8) precludes
merging that?  cc'ing Bruce Cran in case he wants to add something ..


ataidle is at the moment quite dumb about sending commands: it doesn't
check that the drive actually supports APM/AAM before sending them, and
that's an easy check to do.  If this were being added to atacontrol I
think I'd want to do quite a bit of work first to make it more robust.
However, I don't think the code from ataidle could ever just be merged
into atacontrol, because the code styles are quite different; but since
the interface to the ATA driver is quite straightforward, it should be
trivial to re-implement or copy the bits needed.


--
Bruce
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Panic after hung rsync, probably zfs related

2008-05-14 Thread Kris Kennaway

Xin LI wrote:


Ben Stuyts wrote:
| Hi,
|
| While doing an rsync from a zfs filesystem to an external usb hd (also
| zfs), the rsync processes hung in zfs state. I could not kill these
| processes, although the rest of the server seemingly continued to run
| fine. The reboot command did not work. Next I tried a shutdown now
| command. This caused a panic:

Sounds like you somehow ran out of memory; there is a known issue with
ZFS that causes a livelock under memory pressure.  Which rsync
version are you using?  With rsync 3.x the memory usage drops
drastically, which would help prevent this from happening.


This isn't known to cause a double fault though.  Unfortunately we 
probably need more information than was available in order to proceed.


Kris



Re: Panic after hung rsync, probably zfs related

2008-05-14 Thread Ben Stuyts


On 13 mei 2008, at 21:53, Xin LI wrote:

| While doing an rsync from a zfs filesystem to an external usb hd (also
| zfs), the rsync processes hung in zfs state. I could not kill these
| processes, although the rest of the server seemingly continued to run
| fine. The reboot command did not work. Next I tried a shutdown now
| command. This caused a panic:

Sounds like you somehow ran out of memory; there is a known issue with
ZFS that causes a livelock under memory pressure.  Which rsync
version are you using?  With rsync 3.x the memory usage drops
drastically, which would help prevent this from happening.


I'm running rsync 3.0.0. Two weeks ago I already increased the memory  
in this machine from 4 to 8 GB, but I did not change any loader.conf  
variables.


BTW, the wired memory goes all over the place in this machine: even
when it's mostly idle it varies between 300 MB and 3.5 GB. That's why
I added 4 more GB.


Ben



Re: Panic after hung rsync, probably zfs related

2008-05-14 Thread Kostik Belousov
On Wed, May 14, 2008 at 10:39:36AM +0200, Ben Stuyts wrote:
 
 On 13 mei 2008, at 21:53, Xin LI wrote:
 
 | While doing an rsync from a zfs filesystem to an external usb hd  
 (also
 | zfs), the rsync processes hung in zfs state. I could not kill these
 | processes, although the rest of the server seemingly continued to  
 run
 | fine. The reboot command did not work. Next I tried a shutdown now
 | command. This caused a panic:
 
 Sounds like you somehow ran out of memory; there is a known issue with
 ZFS that causes a livelock under memory pressure.  Which rsync
 version are you using?  With rsync 3.x the memory usage drops
 drastically, which would help prevent this from happening.
 
 I'm running rsync 3.0.0. Two weeks ago I already increased the memory  
 in this machine from 4 to 8 GB, but I did not change any loader.conf  
 variables.
 
 BTW, The wired memory goes all over the place in this machine: even  
 when it's mostly idle it varies between 300 MB and 3.5 GB. That's why  
 I added 4 more GB.
3.5 GB wired? Do you run amd64?

Does wired memory drop lower after you change the load?




Re: Panic after hung rsync, probably zfs related

2008-05-14 Thread Christian Baer

Kostik Belousov wrote:

BTW, The wired memory goes all over the place in this machine: even  
when it's mostly idle it varies between 300 MB and 3.5 GB. That's why  
I added 4 more GB.



3.5 GB wired? Do you run amd64?
Does wired memory drop lower after you change the load?


Either that or some other 64-bit platform (alpha, sparc64, etc.).

Regards
Chris



Re: Panic after hung rsync, probably zfs related

2008-05-14 Thread Ben Stuyts


On 14 mei 2008, at 10:50, Kostik Belousov wrote:


BTW, The wired memory goes all over the place in this machine: even
when it's mostly idle it varies between 300 MB and 3.5 GB. That's why
I added 4 more GB.

3.5 GB wired? Do you run amd64?

Does wired memory drop lower after you change the load?


Yes, it was in my original message.

Ben



Re: Panic after hung rsync, probably zfs related

2008-05-14 Thread Ben Stuyts


On 14 mei 2008, at 10:15, Kris Kennaway wrote:

| While doing an rsync from a zfs filesystem to an external usb hd (also
| zfs), the rsync processes hung in zfs state. I could not kill these
| processes, although the rest of the server seemingly continued to run
| fine. The reboot command did not work. Next I tried a shutdown now
| command. This caused a panic:

Sounds like you somehow ran out of memory; there is a known issue with
ZFS that causes a livelock under memory pressure.  Which rsync
version are you using?  With rsync 3.x the memory usage drops
drastically, which would help prevent this from happening.


This isn't known to cause a double fault though.  Unfortunately we  
probably need more information than was available in order to proceed.


Yes, I agree. But I don't know what to do in this sort of case where  
the machine is completely wedged.


Ben



Re: Panic after hung rsync, probably zfs related

2008-05-14 Thread Ben Stuyts


On 14 mei 2008, at 10:50, Kostik Belousov wrote:


3.5 GB wired? Do you run amd64?


Yes.


Does wired memory drop lower after you change the load?



Sorry, forgot to answer this in my previous msg.

It is very unpredictable, and I have not found a pattern. It is a  
small business server, and both during non-office and regular office  
hours the wired memory varies between the numbers above.


Ben



Automounting USB sticks - questions

2008-05-14 Thread Michael Voorhis
 bms == Bruce M Simpson [EMAIL PROTECTED] writes:
 bms It would be great if we could ship FreeBSD out of the box, ready
 bms to automount removable media. This would be useful to all users,
 bms but particularly for novices and people who just wanna get on
 bms and use the beast.

I think this would be nice, so long as it isn't enabled by default.
I've always enjoyed how FreeBSD shuns the feature-creep that infects
other operating systems.  Wouldn't it also be nice to have automatic
printer discovery, automatic WEP/WPA network setup via a nice GUI, etc
etc.

FreeBSD's documentation is really wonderful, and newbies should be
encouraged to read the rich documentation to find out how to (among
other things) enable automounting.  They'll find out many other great
things on the way.

   Mike V.



nscd strange behaviour

2008-05-14 Thread Anton - Valqk

Hi there group,

I have nscd running on 6.3 with backported patches, but maybe this also
applies to 7.0?

What's the problem:

I have NSS set up with the nss_pg module, and authentication passes
through the PAM pg module.
I run nscd so that fewer queries hit the pg server when the system
resolves uid-to-username mappings.
But when I change a password in the pg database, I can no longer
authenticate on the machine running nscd.
When I restart nscd, the new password is retrieved as it should be and
authentication succeeds.


my config is:
moser# cat /etc/nscd.conf
#
# Default caching daemon configuration file
# $FreeBSD: src/etc/nscd.conf,v 1.2.2.1 2007/10/19 00:09:54 bushman Exp $
#
enable-cache passwd yes
enable-cache group yes
enable-cache hosts no
enable-cache services no
enable-cache protocols no
enable-cache rpc no
enable-cache networks no
#custom
threads 25

So my question:
Is this the intended behaviour? Shouldn't nscd cache everything in the
passwd struct except the password itself?


any suggestions, opinions and comments are welcome!

cheers,
valqk.





thread scheduling at mutex unlock

2008-05-14 Thread Andriy Gapon

I am trying the small attached program on FreeBSD 6.3 (amd64,
SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the
threads library, and on both it produces the BROKEN message.

I compile this program as follows:
cc sched_test.c -o sched_test -pthread

I believe that the behavior I observe is broken because: if thread #1
releases a mutex and then tries to re-acquire it while thread #2 was
already blocked waiting on that mutex, then thread #1 should be queued
after thread #2 in mutex waiter's list.

Is there any option (thread scheduler, etc) that I could try to achieve
good behavior?

P.S. I understand that all this is subject to (thread) scheduler policy,
but I think that what I expect is more reasonable, at least it is more
reasonable for my application.

-- 
Andriy Gapon

how much memory does increasing max rules for IPFW take up?

2008-05-14 Thread Vivek Khera
I had a box run out of dynamic state space yesterday.  I found I can
increase the number of dynamic rules by raising the sysctl parameter
net.inet.ip.fw.dyn_max.  I can't find, however, how this affects memory
usage on the system.  Is the state table dynamically allocated and
de-allocated, or is it a static memory buffer?


Thanks!



RELENG_6 regression: ums0: X report 0x0002 not supported

2008-05-14 Thread Ulrich Spoerlein
Hi,

after updating an Intel S5000PAL system from 6.2 to 6.3, ums(4) is no
longer attaching correctly.

Here's a dmesg diff between 6.2 and 6.3:

 uhub3: Intel UHCI root hub, class 9/0, rev 1.00/1.00, addr 1
 uhub3: 2 ports with 2 removable, self powered
 ehci0: EHCI (generic) USB 2.0 controller mem 0xe8d0-0xe8d003ff
irq 23 at device 29.7 on pci0
 ehci0: [GIANT-LOCKED]
 usb4: EHCI version 1.0
 usb4: companion controllers, 2 ports each: usb0 usb1 usb2 usb3
 usb4: EHCI (generic) USB 2.0 controller on ehci0
 usb4: USB revision 2.0
 uhub4: Intel EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
 uhub4: 8 ports with 8 removable, self powered
 ukbd0: Avocent Avocent Embedded DVC 1.0, rev 2.00/0.00, addr 2, iclass 3/1
 kbd2 at ukbd0
 ums0: Avocent Avocent Embedded DVC 1.0, rev 2.00/0.00, addr 2, iclass 3/1
-ums0: 3 buttons and Z dir.
-uhid0: Avocent Avocent Embedded DVC 1.0, rev 2.00/0.00, addr 2, iclass 3/1
-uhid0: could not read endpoint descriptor
-device_attach: uhid0 attach returned 6
+ums0: X report 0x0002 not supported
+device_attach: ums0 attach returned 6

Attached is the full 6.3 dmesg. Looks weird to me, anything I can try
on this hardware?

Uli



RELENG_6 regression: panic: vm_fault on nofault entry, addr: c8000000

2008-05-14 Thread Ulrich Spoerlein
Hi,

there's a regression going from 6.2 to 6.3: the kernel panics in
vm_fault during boot.  This problem has been discussed before, but I'm
seeing it reliably on a RELENG_6 checkout from the 5th of May.

It affects multiple (but identical) systems; below is a verbose boot
leading to the panic.  Please note that 6.2 was running fine on these
machines, and they also boot normally if I disable ACPI (but that is
not really an option).

SMAP type=01 base= len=0009d800
SMAP type=02 base=0009d800 len=2800
SMAP type=02 base=000ce000 len=2000
SMAP type=02 base=000e4000 len=0001c000
SMAP type=01 base=0010 len=cfe6
SMAP type=03 base=cff6 len=9000
SMAP type=04 base=cff69000 len=00017000
SMAP type=02 base=cff8 len=0008
SMAP type=02 base=e000 len=1000
SMAP type=02 base=fec0 len=0001
SMAP type=02 base=fee0 len=1000
SMAP type=02 base=ff00 len=0100
SMAP type=01 base=0001 len=3000
786432K of memory above 4GB ignored
Copyright (c) 1992-2008 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 6.3-20080505-SNAP #0: Mon May  5 11:42:32 UTC 2008
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/GENERIC
Preloaded elf kernel /boot/kernel/kernel at 0xc1051000.
Preloaded mfs_root /boot/mfsroot at 0xc10511e8.
Preloaded elf module /boot/modules/acpi.ko at 0xc105122c.
MP Configuration Table version 1.4 found at 0xc009dd71
Table 'FACP' at 0xcff68e48
Table 'APIC' at 0xcff68ebc
MADT: Found table at 0xcff68ebc
APIC: Using the MADT enumerator.
MADT: Found CPU APIC ID 0 ACPI ID 0: enabled
MADT: Found CPU APIC ID 4 ACPI ID 1: enabled
MADT: Found CPU APIC ID 2 ACPI ID 2: enabled
MADT: Found CPU APIC ID 6 ACPI ID 3: enabled
ACPI APIC Table: PTLTD  APIC  
Calibrating clock(s) ... i8254 clock: 1193204 Hz
CLK_USE_I8254_CALIBRATION not specified - using default frequency
Timecounter i8254 frequency 1193182 Hz quality 0
Calibrating TSC clock ... TSC clock: 3000122064 Hz
CPU: Intel(R) Xeon(TM) CPU 3.00GHz (3000.12-MHz 686-class CPU)
  Origin = GenuineIntel  Id = 0xf64  Stepping = 4
  
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0xe4bd<SSE3,RSVD2,MON,DS_CPL,VMX,EST,CNXT-ID,CX16,xTPR,PDCM>
  AMD Features=0x2010<NX,LM>
  AMD Features2=0x1<LAHF>
  Cores per package: 2
  Logical CPUs per core: 2
real memory  = 3489005568 (3327 MB)
Physical memory chunk(s):
0x1000 - 0x0009cfff, 638976 bytes (156 pages)
0x0010 - 0x003f, 3145728 bytes (768 pages)
0x01425000 - 0xcc488fff, 3406184448 bytes (831588 pages)
avail memory = 3405979648 (3248 MB)
bios32: Found BIOS32 Service Directory header at 0xc00f5960
bios32: Entry = 0xfd520 (c00fd520)  Rev = 0  Len = 1
pcibios: PCI BIOS entry at 0xfd520+0x247
pnpbios: Found PnP BIOS data at 0xc00f59e0
pnpbios: Entry = f:af28  Rev = 1.0
Other BIOS signatures found:
APIC: CPU 0 has ACPI ID 0
MADT: Found IO APIC ID 8, Interrupt 0 at 0xfec0
ioapic0: Routing external 8259A's - intpin 0
MADT: Found IO APIC ID 9, Interrupt 24 at 0xfec8
lapic0: Routing NMI - LINT1
lapic0: LINT1 trigger: edge
lapic0: LINT1 polarity: high
lapic4: Routing NMI - LINT1
lapic4: LINT1 trigger: edge
lapic4: LINT1 polarity: high
lapic2: Routing NMI - LINT1
lapic2: LINT1 trigger: edge
lapic2: LINT1 polarity: high
lapic6: Routing NMI - LINT1
lapic6: LINT1 trigger: edge
lapic6: LINT1 polarity: high
MADT: Interrupt override: source 0, irq 2
ioapic0: Routing IRQ 0 - intpin 2
MADT: Interrupt override: source 9, irq 9
ioapic0: intpin 9 trigger: level
ioapic0 Version 2.0 irqs 0-23 on motherboard
ioapic1 Version 2.0 irqs 24-47 on motherboard
cpu0 BSP:
 ID: 0x   VER: 0x00050014 LDR: 0xff00 DFR: 0x
  lint0: 0x00010700 lint1: 0x0400 TPR: 0x SVR: 0x01ff
  timer: 0x000100ef therm: 0x0200 err: 0x0001 pcm: 0x0001
ath_rate: version 1.2 SampleRate bit-rate selection algorithm
wlan: 802.11 Link Layer
null: null device, zero device
random: entropy source, Software, Yarrow
nfslock: pseudo-device
io: I/O
kbd: new array size 4
kbd1 at kbdmux0
mem: memory
Pentium Pro MTRR support enabled
ath_hal: 0.9.20.3 (AR5210, AR5211, AR5212, RF5111, RF5112, RF2413, RF5413)
rr232x: RocketRAID 232x controller driver v1.02 (May  5 2008 11:42:16)
hptrr: HPT RocketRAID controller driver v1.1 (May  5 2008 11:42:14)
npx0: INT 16 interface
acpi0: PTLTD   RSDT on motherboard
ioapic0: routing intpin 9 (ISA IRQ 9) to vector 48
acpi0: [MPSAFE]
pci_open(1):mode 1 addr port (0x0cf8) is 0x80008058

Re: RELENG_6 regression: panic: vm_fault on nofault entry, addr: c8000000

2008-05-14 Thread Gavin Atkinson
On Wed, 2008-05-14 at 17:32 +0200, Ulrich Spoerlein wrote:
 Hi,
 
 there's a regression going from 6.2 to 6.3: the kernel panics in
 vm_fault during boot.  This problem has been discussed before, but I'm
 seeing it reliably on a RELENG_6 checkout from the 5th of May.
 
 It affects multiple (but identical) systems; below is a verbose boot
 leading to the panic.  Please note that 6.2 was running fine on these
 machines, and they also boot normally if I disable ACPI (but that is
 not really an option).

[snip dmesg output]

 What to do?

If you don't get any suggestions from people as to what it may be, and
you have a system you can afford to reboot a few times, the easiest
thing to do is to take the system back to 6.2, and then update your
source to a date midway between 6.2 and 6.3 and see if that crashes.
Use this in your supfile:

*default tag=RELENG_6
*default date=2007.07.01.00.00.00

(For reference, 6.2 was released on 2007.01.15, with 6.3 on 2008.01.18)

From there, go halfway again, either forwards or backwards, to narrow
down the window in which the problem was introduced.  With only eight
or nine kernel recompiles you should be able to narrow it down to a
one-day window, and looking at the spec of the machine you should be
able to do that in a morning :).  Once you've got it down to a window
of a couple of days or less, give csup the -L 2 option, and it'll give
you a list of files changed between dates.

Obviously this is dependent on your being able to take one of the
affected machines down for a few hours, but if you can, this may well be
the quickest way of establishing when the problem was introduced.

Out of interest, what type of hardware is this?

Gavin


Re: thread scheduling at mutex unlock

2008-05-14 Thread Andriy Gapon

on 14/05/2008 18:17 Andriy Gapon said the following:

I am trying the small attached program on FreeBSD 6.3 (amd64,
SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the
threads library, and on both it produces the BROKEN message.

I compile this program as follows:
cc sched_test.c -o sched_test -pthread

I believe that the behavior I observe is broken because: if thread #1
releases a mutex and then tries to re-acquire it while thread #2 was
already blocked waiting on that mutex, then thread #1 should be queued
after thread #2 in mutex waiter's list.

Is there any option (thread scheduler, etc) that I could try to achieve
good behavior?

P.S. I understand that all this is subject to (thread) scheduler policy,
but I think that what I expect is more reasonable, at least it is more
reasonable for my application.




Daniel Eischen has just kindly notified me that the code (as an 
attachment) didn't make it to the list, so here it is inline.



#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <pthread.h>


pthread_mutex_t mutex;
int count = 0;

static void * thrfunc(void * arg)
{
	while (1) {
		pthread_mutex_lock(&mutex);

		count++;
		if (count > 10) {
			fprintf(stderr, "you have a BROKEN thread scheduler!!!\n");
			exit(1);
		}

		sleep(1);

		pthread_mutex_unlock(&mutex);
	}
}

int main(void)
{
	pthread_t thr;

#if 0
	pthread_mutexattr_t attr;
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE_NP);
	pthread_mutex_init(&mutex, &attr);
#else
	pthread_mutex_init(&mutex, NULL);
#endif

	pthread_create(&thr, NULL, thrfunc, NULL);

	sleep(2);

	pthread_mutex_lock(&mutex);
	count = 0;
	printf("you have good thread scheduler\n");
	pthread_mutex_unlock(&mutex);

	return 0;
}

--
Andriy Gapon


instability with gmirror and atacontrol spindown

2008-05-14 Thread Aragon Gouveia
Hi,

I eagerly started using atacontrol's new spindown command the other day.
There's a gmirror volume running on top of the two disks that get spun down.
I find that often, when the drives are spun back up to serve a disk request,
one of the ata devices times out and my system goes into a never-ending loop
of retrying.  At that point I'm lucky if I'm able to shut down gracefully.

This is what is logged on the console after the disk spin up message:

ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout - completing
request directly
ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout - completing 
request directly
ad8: WARNING - SETFEATURES ENABLE RCACHE taskqueue timeout - completing
request directly
ad8: WARNING - SETFEATURES ENABLE RCACHE taskqueue timeout - completing
request directly
ad8: WARNING - SET_MULTI taskqueue timeout - completing request directly
ad8: TIMEOUT - READ_DMA retrying (1 retry left) LBA=20047817
ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout - completing 
request directly
ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout - completing 
request directly
ad8: WARNING - SETFEATURES ENABLE RCACHE taskqueue timeout - completing
request directly
ad8: WARNING - SETFEATURES ENABLE RCACHE taskqueue timeout - completing
request directly
ad8: WARNING - SET_MULTI taskqueue timeout - completing request directly
ad8: TIMEOUT - READ_DMA retrying (0 retries left) LBA=20047817
rinse and repeat

I can only reproduce it when my gmirror volume is running.

Any ideas?


Thanks,
Aragon


Re: Status of ZFS in -stable?

2008-05-14 Thread UBM
On Tue, 13 May 2008 00:26:49 -0400
Pierre-Luc Drouin [EMAIL PROTECTED] wrote:

 Hi,
 
 I would like to know if the memory allocation problem with zfs has
 been fixed in -stable? Is zfs considered to be more stable now?
 
 Thanks!
 Pierre-Luc Drouin

We just set up a ZFS-based fileserver in our home. It's accessed via
samba and ftp, connected via an em(4) gigabit card.
FreeBSD is installed on an 80 GB ufs2 disk; the zpool consists of two
750 GB disks set up as raidz (my mistake, mirror would probably have
been the better choice).
We've been using it for about 2 weeks now and there have been no
problems (we've transferred lots of big and small files off/on it,
maxing out disk speed).

System is amd64, 4gb of ram.

FreeBSD blah 7.0-STABLE-200804 FreeBSD 7.0-STABLE-200804 #0: Thu Apr
10 16:32:11 UTC 2008
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/GENERIC  amd64


Bye
Marc

-- 
A sudden blow: the great wings beating still
Above the staggering girl, her thighs caressed
By the dark webs, her nape caught in his bill,
He holds her helpless breast upon his breast.

W.B. Yeats, Leda and the Swan


Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.

2008-05-14 Thread Evren Yurtesen

Hello,

I have a FreeBSD 7.0-STABLE amd64 box which gives this message with
apache 2.2 very often. Previously the contents of the box were on
6.3-STABLE x86 and I had no such problems. This started right away when
we moved to 7 on 64-bit.


FreeBSD web.X.com 7.0-STABLE FreeBSD 7.0-STABLE #0: Tue Apr 22 02:13:30 UTC 
2008 [EMAIL PROTECTED]:/usr/obj/usr/src/sys/WEB  amd64


Approaching the limit on PV entries, consider increasing either the 
vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.
Approaching the limit on PV entries, consider increasing either the 
vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.


I have increased vm.pmap.shpgperproc to 2000 and this seemed to stop
the complaints for a little while, but they occur again...


ipcs -a returns nothing:

web:/root#ipcs -a
Message Queues:
T   ID  KEY MODEOWNERGROUPCREATOR  CGROUP 
  CBYTES QNUM   QBYTESLSPID 
LRPID STIMERTIMECTIME


Shared Memory:
T   ID  KEY MODEOWNERGROUPCREATOR  CGROUP 
  NATTCHSEGSZ CPID LPID ATIMEDTIMECTIME


Semaphores:
T   ID  KEY MODEOWNERGROUPCREATOR  CGROUP 
   NSEMS OTIMECTIME


web:/root#

Here are the active sysctl values:

vm.pmap.pmap_collect_active: 0
vm.pmap.pmap_collect_inactive: 0
vm.pmap.pv_entry_spare: 7366
vm.pmap.pv_entry_allocs: 93953399357
vm.pmap.pv_entry_frees: 93953160939
vm.pmap.pc_chunk_tryfail: 0
vm.pmap.pc_chunk_frees: 559631374
vm.pmap.pc_chunk_allocs: 559632837
vm.pmap.pc_chunk_count: 1463
vm.pmap.pv_entry_count: 238418
vm.pmap.shpgperproc: 2000
vm.pmap.pv_entry_max: 13338058

The box is lightly loaded, it is an 8 core box with load average about 0.2-0.4

Any ideas about what to check or do next? I could only find a post
which suggests using:


kern.ipc.shm_use_phys: 1

But I already set it and it has no effect... the box has 4 GB of memory:

CPU:  0.1% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.9% idle
Mem: 180M Active, 1584M Inact, 467M Wired, 131M Cache, 214M Buf, 1578M Free
Swap: 8192M Total, 8548K Used, 8184M Free


I have the following in make.conf

CPUTYPE?=core2

CFLAGS= -O2 -fno-strict-aliasing -pipe
CXXFLAGS+= -fconserve-space
COPTFLAGS= -O -pipe

NO_GAMES=true
NO_PROFILE=true

WITHOUT_X11=true

below is the kernel config file:

cpu HAMMER
ident   WEB

options SCHED_ULE   # ULE scheduler
options PREEMPTION  # Enable kernel thread preemption
options INET# InterNETworking
#optionsINET6   # IPv6 communications protocols
#optionsSCTP# Stream Control Transmission Protocol
options FFS # Berkeley Fast Filesystem
options SOFTUPDATES # Enable FFS soft updates support
options UFS_DIRHASH # Improve performance on big directories
options UFS_GJOURNAL# Enable gjournal-based UFS journaling
options CD9660  # ISO 9660 Filesystem
options PROCFS  # Process filesystem (requires PSEUDOFS)
options PSEUDOFS# Pseudo-filesystem framework
options GEOM_PART_GPT   # GUID Partition Tables.
options GEOM_LABEL  # Provides labelization
options COMPAT_43TTY# BSD 4.3 TTY compat [KEEP THIS!]
options COMPAT_IA32 # Compatible with i386 binaries
options COMPAT_FREEBSD4 # Compatible with FreeBSD4
options COMPAT_FREEBSD5 # Compatible with FreeBSD5
options COMPAT_FREEBSD6 # Compatible with FreeBSD6
options SCSI_DELAY=5000 # Delay (in ms) before probing SCSI
options KTRACE  # ktrace(1) support
options STACK   # stack(9) support
options SYSVSHM # SYSV-style shared memory
options SYSVMSG # SYSV-style message queues
options SYSVSEM # SYSV-style semaphores
options _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time 
extensions
options KBD_INSTALL_CDEV# install a CDEV entry in /dev
options ADAPTIVE_GIANT  # Giant mutex is adaptive.
options STOP_NMI# Stop CPUS using NMI instead of IPI
#optionsAUDIT   # Security event auditing

# Make an SMP-capable kernel by default
options SMP # Symmetric MultiProcessor Kernel

# CPU frequency control
device  cpufreq

# Bus support.
device  acpi
device  pci

# Floppy drives
device  fdc

# ATA and ATAPI devices
device  ata
device  atadisk # ATA disk drives
device  atapicd # ATAPI CDROM drives
options ATA_STATIC_ID   # Static device numbering

# SCSI peripherals

RE: thread scheduling at mutex unlock

2008-05-14 Thread David Schwartz

 I am trying the small attached program on FreeBSD 6.3 (amd64,
 SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as threads
 library and on both it produces BROKEN message.
 
 I compile this program as follows:
 cc sched_test.c -o sched_test -pthread
 
 I believe that the behavior I observe is broken because: if thread #1
 releases a mutex and then tries to re-acquire it while thread #2 was
 already blocked waiting on that mutex, then thread #1 should be queued
 after thread #2 in mutex waiter's list.
 
 Is there any option (thread scheduler, etc) that I could try to achieve
 good behavior?
 
 P.S. I understand that all this is subject to (thread) scheduler policy,
 but I think that what I expect is more reasonable, at least it is more
 reasonable for my application.
 
 -- 
 Andriy Gapon

Are you out of your mind?! You are specifically asking for the absolute worst 
possible behavior!

If you have fifty tiny things to do on one side of the room and fifty tiny 
things to do on the other side, do you cross the room after each one? Of course 
not. That would be *ludicrous*.

If you want/need strict alternation, feel free to code it. But it's the 
maximally inefficient scheduler behavior, and it sure as hell had better not be 
the default.

DS




Re: instability with gmirror and atacontrol spindown

2008-05-14 Thread Jeremy Chadwick
On Wed, May 14, 2008 at 10:20:42PM +0200, Aragon Gouveia wrote:
 This is what is logged on the console after the disk spin up message:
 
 ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout - completing
 request directly
 ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout - completing 
 request directly
 ad8: WARNING - SETFEATURES ENABLE RCACHE taskqueue timeout - completing
 request directly
 ad8: WARNING - SETFEATURES ENABLE RCACHE taskqueue timeout - completing
 request directly
 ad8: WARNING - SET_MULTI taskqueue timeout - completing request directly
 ad8: TIMEOUT - READ_DMA retrying (1 retry left) LBA=20047817
 ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout - completing 
 request directly
 ad8: WARNING - SETFEATURES SET TRANSFER MODE taskqueue timeout - completing 
 request directly
 ad8: WARNING - SETFEATURES ENABLE RCACHE taskqueue timeout - completing
 request directly
 ad8: WARNING - SETFEATURES ENABLE RCACHE taskqueue timeout - completing
 request directly
 ad8: WARNING - SET_MULTI taskqueue timeout - completing request directly
 ad8: TIMEOUT - READ_DMA retrying (0 retries left) LBA=20047817
 rinse and repeat
 
 I can only reproduce it when my gmirror volume is running.
 
 Any ideas?

http://wiki.freebsd.org/JeremyChadwick/Commonly_reported_issues

-- 
| Jeremy Chadwickjdc at parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: Approaching the limit on PV entries, consider increasing either the vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.

2008-05-14 Thread Jeremy Chadwick
On Wed, May 14, 2008 at 11:39:10PM +0300, Evren Yurtesen wrote:
 Approaching the limit on PV entries, consider increasing either the 
 vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.
 Approaching the limit on PV entries, consider increasing either the 
 vm.pmap.shpgperproc or the vm.pmap.pv_entry_max sysctl.

I've seen this message on one of our i386 RELENG_7 boxes, which has a
medium load (webserver with PHP) and 2GB RAM.  Our counters, for
comparison:

vm.pmap.pmap_collect_active: 0
vm.pmap.pmap_collect_inactive: 0
vm.pmap.pv_entry_spare: 7991
vm.pmap.pv_entry_allocs: 807863761
vm.pmap.pv_entry_frees: 807708792
vm.pmap.pc_chunk_tryfail: 0
vm.pmap.pc_chunk_frees: 2580082
vm.pmap.pc_chunk_allocs: 2580567
vm.pmap.pc_chunk_count: 485
vm.pmap.pv_entry_count: 154969
vm.pmap.shpgperproc: 200
vm.pmap.pv_entry_max: 1745520
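
For reference, `vm.pmap.shpgperproc` is a boot-time tunable on i386, so raising it means editing /boot/loader.conf and rebooting rather than using sysctl(8) at runtime. A sketch of one way to do it — the value 400 is purely illustrative, not a recommendation:

```shell
# Raise the shared-pages-per-process tunable (boot-time only on i386).
# vm.pmap.pv_entry_max is derived from it unless set explicitly.
cat >> /boot/loader.conf <<'EOF'
vm.pmap.shpgperproc="400"
EOF

# After the next reboot, confirm the new limits:
sysctl vm.pmap.shpgperproc vm.pmap.pv_entry_max
```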

-- 
| Jeremy Chadwick                            jdc at parodius.com |
| Parodius Networking                   http://www.parodius.com/ |
| UNIX Systems Administrator              Mountain View, CA, USA |
| Making life hard for others since 1977.        PGP: 4BD6C0CB   |



Re: thread scheduling at mutex unlock

2008-05-14 Thread David Xu

Andriy Gapon wrote:

I am trying the small attached program on FreeBSD 6.3 (amd64,
SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both using libthr as the
threads library, and on both it produces the BROKEN message.

I compile this program as follows:
cc sched_test.c -o sched_test -pthread

I believe that the behavior I observe is broken because: if thread #1
releases a mutex and then tries to re-acquire it while thread #2 was
already blocked waiting on that mutex, then thread #1 should be queued
after thread #2 in mutex waiter's list.

Is there any option (thread scheduler, etc) that I could try to achieve
good behavior?

P.S. I understand that all this is subject to (thread) scheduler policy,
but I think that what I expect is more reasonable, at least it is more
reasonable for my application.
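[The attachment does not survive in this archive. A minimal reconstruction of the kind of test described — names and the 1000-iteration cap are guesses, not the original program — might look like this:]

```c
#include <pthread.h>
#include <unistd.h>

/* Thread #1 releases the mutex and immediately tries to re-acquire it
 * while thread #2 is already blocked on it.  If #1 keeps winning, the
 * blocked waiter is being starved (the "BROKEN" case described above). */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static volatile int t2_got_lock = 0;

static void *
waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);     /* blocks: the main thread holds m */
    t2_got_lock = 1;
    pthread_mutex_unlock(&m);
    return NULL;
}

/* Returns how many times thread #1 re-acquired the mutex before the
 * blocked waiter finally got it, capped at `limit`. */
int
count_reacquisitions(int limit)
{
    pthread_t t2;
    int reacquired = 0;

    pthread_mutex_lock(&m);
    pthread_create(&t2, NULL, waiter, NULL);
    sleep(1);                   /* give the waiter time to block on m */

    while (!t2_got_lock && reacquired < limit) {
        pthread_mutex_unlock(&m);
        pthread_mutex_lock(&m); /* often wins again, no context switch */
        reacquired++;
    }
    pthread_mutex_unlock(&m);
    pthread_join(t2, NULL);
    return reacquired;
}
```

A large return value means the releaser repeatedly barged past the queued waiter; a strictly fair implementation would return 0 or 1.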



In fact, libthr tries to avoid this convoying: if thread #1 handed
ownership directly to thread #2, it would cause lots of context
switches. In an ideal world, I would let thread #1 run until it
exhausts its time slice, and at the end of that slice thread #2 would
get the mutex ownership. Of course this is difficult to make work on
SMP, but on UP I would expect the result to be close enough if the
thread scheduler is sane. So we don't raise priority in the kernel
umtx code when a thread is blocked; this gives thread #1 some time to
re-acquire the mutex without context switches and increases throughput.

Regards,
David Xu



Re: thread scheduling at mutex unlock

2008-05-14 Thread Brent Casavant
On Wed, 14 May 2008, Andriy Gapon wrote:

 I believe that the behavior I observe is broken because: if thread #1
 releases a mutex and then tries to re-acquire it while thread #2 was
 already blocked waiting on that mutex, then thread #1 should be queued
 after thread #2 in mutex waiter's list.

The behavior of scheduling with respect to mutex contention (apart from
pthread_mutexattr_setprotocol()) is not specified by POSIX, to the best
of my knowledge, and thus is left to the discretion of the implementation.

 Is there any option (thread scheduler, etc) that I could try to achieve
 good behavior?

No portable mechanism, and not any mechanism in the operating systems
with which I am familiar.  That said, as the behavior is not specified
by POSIX, there would be nothing preventing an implementation from
providing this as an optional behavior through a custom
pthread_mutexattr_???_np() interface.

 P.S. I understand that all this is subject to (thread) scheduler policy,
 but I think that what I expect is more reasonable, at least it is more
 reasonable for my application.

As other responders have indicated, the behavior you desire is about as
suboptimal as possible for the general case.  If your application would
truly benefit from this sort of queuing behavior, I'd suggest either
implementing your own mechanism to accomplish the queueing (probably
the easier fix), or redesigning the threading architecture of your
application so that it avoids the problem (probably the more difficult
fix).
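[One portable shape for such a do-it-yourself queueing mechanism is a ticket lock layered on a pthread mutex and condition variable. A sketch, with no error checking and hypothetical names:]

```c
#include <pthread.h>

/* FIFO-fair lock: waiters are admitted strictly in ticket order. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cv;
    unsigned long   next_ticket;    /* next ticket to hand out */
    unsigned long   now_serving;    /* ticket currently admitted */
} fair_mutex;

void
fair_mutex_init(fair_mutex *fm)
{
    pthread_mutex_init(&fm->lock, NULL);
    pthread_cond_init(&fm->cv, NULL);
    fm->next_ticket = 0;
    fm->now_serving = 0;
}

void
fair_mutex_lock(fair_mutex *fm)
{
    pthread_mutex_lock(&fm->lock);
    unsigned long my_ticket = fm->next_ticket++;
    while (fm->now_serving != my_ticket)    /* wait for our turn */
        pthread_cond_wait(&fm->cv, &fm->lock);
    pthread_mutex_unlock(&fm->lock);
}

void
fair_mutex_unlock(fair_mutex *fm)
{
    pthread_mutex_lock(&fm->lock);
    fm->now_serving++;                      /* admit the next ticket */
    pthread_cond_broadcast(&fm->cv);
    pthread_mutex_unlock(&fm->lock);
}
```

With this, a releaser that immediately re-locks draws a fresh ticket and queues behind any already-blocked waiter — the behavior asked for at the top of the thread, at the cost of a context switch per hand-off.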

Brent

-- 
Brent Casavant  Dance like everybody should be watching.
www.angeltread.org
KD5EMB, EN34lv