Re: FreeBSD 6.1 Released

2006-05-12 Thread Avleen Vig
On Thu, May 11, 2006 at 11:42:26PM -0400, Mike Jakubik wrote:
 The *entire* errata page was from 6.0; it was a mistake.  This wasn't
 some put on the rose-colored glasses and gloss over major issues
 thing.  It was a long release cycle and something was forgotten.  C'est
 la vie.  It's always a good idea to check the most up-to-date version of
 the errata page on the web anyway, so it's *not* too late to update it.
 
 How convenient. These problems needed to be addressed in the release 
 notes, not some online version.

If they're THAT big of a deal, learn some C, and fix the problems
yourself, instead of sending e-mails that sound rude and ungrateful.

And if you can't do that, how about helping scott put together release
notes which you seemed to think were so awful?

All I hear from you is complaining and a serious lack of anything
constructive.
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: Heavy system load by pagedaemon

2006-05-12 Thread Peter Jeremy
On Thu, 2006-May-11 18:33:10 +0300, Iasen Kostov wrote:
   And another odd thing (to me at least):

#: vmstat -s | grep daemon\|fault
0 page daemon wakeups
0 pages examined by the page daemon
  4992608 copy-on-write faults
29021 copy-on-write optimized faults
75167 intransit blocking page faults
 66956262 total VM faults taken
0 pages freed by daemon

According to vmstat, the pagedaemon has never been awake.

'page daemon wakeups' counts the number of times that the pagedaemon
is woken up due to a page shortage.  It does not include the 5-sec
wakeups.

How about posting a complete 'vmstat -s'.  A look at the code suggests
that vm_pageout_page_stats() thinks there's a page shortage (since
that's the only piece of code that will actually do anything if page
daemon wakeups is zero).

Kamal R. Prasad suggested that Belady's anomaly might occur. If it's
true or possible, what could be done about it?

Belady's anomaly can only occur if paging really happens.

-- 
Peter Jeremy


Re: Heavy system load by pagedaemon

2006-05-12 Thread Iasen Kostov
On Fri, 2006-05-12 at 17:17 +1000, Peter Jeremy wrote:
 On Thu, 2006-May-11 18:33:10 +0300, Iasen Kostov wrote:
  And another odd thing (to me at least):
 
 #: vmstat -s | grep daemon\|fault
 0 page daemon wakeups
 0 pages examined by the page daemon
   4992608 copy-on-write faults
 29021 copy-on-write optimized faults
 75167 intransit blocking page faults
  66956262 total VM faults taken
 0 pages freed by daemon
 
 According to vmstat, the pagedaemon has never been awake.
 
 'page daemon wakeups' counts the number of times that the pagedaemon
 is woken up due to a page shortage.  It does not include the 5-sec
 wakeups.
 
 How about posting a complete 'vmstat -s'.  A look at the code suggests
 that vm_pageout_page_stats() thinks there's a page shortage (since
 that's the only piece of code that will actually do anything if page
 daemon wakeups is zero).

#: vmstat -s
 38479689 cpu context switches
  8311921 device interrupts
  2247328 software interrupts
 77805675 traps
352344722 system calls
   41 kernel threads created
16904  fork() calls
 1827 vfork() calls
0 rfork() calls
0 swap pager pageins
0 swap pager pages paged in
0 swap pager pageouts
0 swap pager pages paged out
32095 vnode pager pageins
   152280 vnode pager pages paged in
  549 vnode pager pageouts
  549 vnode pager pages paged out
0 page daemon wakeups
0 pages examined by the page daemon
51939 pages reactivated
  5311477 copy-on-write faults
11437 copy-on-write optimized faults
 25579287 zero fill pages zeroed
 25490864 zero fill pages prezeroed
39320 intransit blocking page faults
 74643308 total VM faults taken
0 pages affected by kernel thread creation
 71254993 pages affected by  fork()
  8872256 pages affected by vfork()
0 pages affected by rfork()
 48455025 pages freed
0 pages freed by daemon
 12139663 pages freed by exiting processes
   261655 pages active
   346637 pages inactive
39970 pages in VM cache
   108891 pages wired down
  1236812 pages free
 4096 bytes per page
236461588 total name lookups
  cache hits (97% pos + 2% neg) system 0% per-directory
  deletions 0%, falsehits 0%, toolong 0%


But there is something else:
collecting pv entries -- suggest increasing PMAP_SHPGPERPROC x 5 times
(which is the maximum number of these warnings).

When I checked sysctl vm.zone I saw PV ENTRY going near to its
maximum right before the lockup happens, and after the lockup the
pagedaemon brings it down to ~1000 ... and I saw this after a crash, which
could explain a lot of things. Most probably the lockup occurs when this
collecting of pv entries happens, and this could lead to a crash (which is
inherited from the 4.x series, as I saw mails about that from 2003 and 4.7).





Re: Heavy system load by pagedaemon

2006-05-12 Thread Peter Jeremy
On Fri, 2006-May-12 13:07:41 +0300, Iasen Kostov wrote:
On Fri, 2006-05-12 at 17:17 +1000, Peter Jeremy wrote:
 'page daemon wakeups' counts the number of times that the pagedaemon
 is woken up due to a page shortage.

I think I was not correct here.  It looks like the pagedaemon can be
woken up (via a wakeup on vm_pages_needed) even when vm_pages_needed is
still zero.

 How about posting a complete 'vmstat -s'.

I spoke too soon.  The relevant counters are in vm.stats.vm.* and not
reported by vmstat.

But there is something else:
collecting pv entries -- suggest increasing PMAP_SHPGPERPROC x 5 times
(which is the maximum number of these warnings).

The call tree for this is
vm_pageout()                    woken via wakeup(&vm_pages_needed), vm_pages_needed == 0
  vm_pageout_scan()
    vm_pageout_pmap_collect()   with pmap_pagedaemon_waken non-zero

vm_pageout_pmap_collect() runs with Giant held and, based on a very
brief check, looks quite expensive.

When I checked sysctl vm.zone I saw PV ENTRY going near to it's
maximum right before the lock happen and then after the lock by
pagedaemon it go down to ~1000

You mentioned eaccelerator and having lots of users and httpd.  I
suspect the comment in NOTES is relevant for you and you need to
increase PMAP_SHPGPERPROC as recommended:

# Set the number of PV entries per process.  Increasing this can
# stop panics related to heavy use of shared memory.  However, that can
# (combined with large amounts of physical memory) cause panics at
# boot time due the kernel running out of VM space.
#
# If you're tweaking this, you might also want to increase the sysctls
# vm.v_free_min, vm.v_free_reserved, and vm.v_free_target.
#
# The value below is one more than the default.
#
options PMAP_SHPGPERPROC=201
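As a rough sketch of what that option controls - assuming (my reading of
the 6.x i386 pmap, not stated in this thread) that the PV entry limit is
computed as shpgperproc * maxproc plus the physical page count - the KVA
cost can be estimated like this, with made-up illustrative numbers:

```python
# Assumed formula (6.x i386 pmap): pv_entry_max = shpgperproc * maxproc + page_count.
# All input values below are illustrative, not taken from this thread.
PV_ENTRY_SIZE = 24  # bytes per pv entry on i386 in this era

def pv_entry_limit(shpgperproc: int, maxproc: int, page_count: int) -> int:
    """Upper bound on PV entries the zone will allow."""
    return shpgperproc * maxproc + page_count

limit = pv_entry_limit(shpgperproc=200, maxproc=6000, page_count=500_000)
print(limit)                           # total PV entries allowed
print(limit * PV_ENTRY_SIZE // 10**6)  # rough KVA cost in MB
```

Raising shpgperproc scales the first term, which is why a big bump can
push the zone past what the kernel address space can hold.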

-- 
Peter Jeremy


Re: Heavy system load by pagedaemon

2006-05-12 Thread Iasen Kostov
On Fri, 2006-05-12 at 21:28 +1000, Peter Jeremy wrote:
 On Fri, 2006-May-12 13:07:41 +0300, Iasen Kostov wrote:
 On Fri, 2006-05-12 at 17:17 +1000, Peter Jeremy wrote:
  'page daemon wakeups' counts the number of times that the pagedaemon
  is woken up due to a page shortage.
 
 I think I was not correct here.  It looks like the pagedaemon can be
 woken up (via a wakeup on vm_pages_needed) even when vm_pages_needed is
 still zero.
 
  How about posting a complete 'vmstat -s'.
 
 I spoke too soon.  The relevant counters are in vm.stats.vm.* and not
 reported by vmstat.
 
 But there is something else:
 collecting pv entries -- suggest increasing PMAP_SHPGPERPROC x 5 times
 (which is the maximum number of these warnings).
 
 The call tree for this is
 vm_pageout()                    woken via wakeup(&vm_pages_needed), vm_pages_needed == 0
   vm_pageout_scan()
     vm_pageout_pmap_collect()   with pmap_pagedaemon_waken non-zero
 
 vm_pageout_pmap_collect() runs with Giant held and, based on a very
 brief check, looks quite expensive.
 
 When I checked sysctl vm.zone I saw PV ENTRY going near to it's
 maximum right before the lock happen and then after the lock by
 pagedaemon it go down to ~1000
 
 You mentioned eaccelerator and having lots of users and httpd.  I
 suspect the comment in NOTES is relevant for you and you need to
 increase PMAP_SHPGPERPROC as recommended:
 
 # Set the number of PV entries per process.  Increasing this can
 # stop panics related to heavy use of shared memory.  However, that can
 # (combined with large amounts of physical memory) cause panics at
 # boot time due the kernel running out of VM space.
 #
 # If you're tweaking this, you might also want to increase the sysctls
 # vm.v_free_min, vm.v_free_reserved, and vm.v_free_target.
 #
 # The value below is one more than the default.
 #
 options PMAP_SHPGPERPROC=201
Exactly what I did :). I set vm.pmap.shpgperproc=600 in loader.conf and
about 5 min after boot the system panicked, and I was not even able to see
the message (either because I was pressing Enter for the command or it
just doesn't wait for a key). Then I set it to 500 in the loader at boot
time and currently it works, but where it crashed with ~4,300,000 PV
entries in use, they now go to ~5,000,000 and it doesn't panic. Which
makes me think that the panic is not related to setting vm.pmap.shpgperproc
to 600 (which could probably lead to KVA exhaustion) but to something else.
I'll try to increase KVA_PAGES (why isn't there a tunable?) and then set
vm.pmap.shpgperproc to some higher value, but this will be after a fresh
make world (I cvsuped already :( ) some time soon.
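The two knobs being discussed live in different places; a sketch with
illustrative values (not a recommendation):

```
# /boot/loader.conf - boot-time tunable, no rebuild needed
vm.pmap.shpgperproc="500"

# kernel config file - compile-time options, require a kernel rebuild
options         KVA_PAGES=384        # enlarge i386 kernel virtual address space
options         PMAP_SHPGPERPROC=500
```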





Re: FreeBSD 6.1 Released

2006-05-12 Thread J. T. farmer

Mike Jakubik wrote:

Jonathan Noack wrote:

The *entire* errata page was from 6.0; it was a mistake.  This wasn't
some put on the rose-colored glasses and gloss over major issues
thing.  It was a long release cycle and something was forgotten.  C'est
la vie.  It's always a good idea to check the most up-to-date version of
the errata page on the web anyway, so it's *not* too late to update it.
How convenient. These problems needed to be addressed in the release
notes, not some online version.

*Plonk*

You've just entered my "do not read" bucket.  If you can't see
that errors occur, then I pity _anyone_ who codes for you...  Or
perhaps you never make a mistake.  That must be it.

John

--
John T. Farmer  Owner & CTO  GoldSword Systems
[EMAIL PROTECTED]   865-691-6498 Knoxville TN
   Consulting, Design, & Development of Networks & Software



Re: Heavy system load by pagedaemon

2006-05-12 Thread Mike Silbersack


On Fri, 12 May 2006, Iasen Kostov wrote:


Exactly what I did :). I set vm.pmap.shpgperproc=600 in loader.conf and
about 5 min after boot the system panicked, and I was not even able to see
the message (either because I was pressing Enter for the command or it
just doesn't wait for a key). Then I set it to 500 in the loader at boot
time and currently it works, but where it crashed with ~4,300,000 PV
entries in use, they now go to ~5,000,000 and it doesn't panic. Which
makes me think that the panic is not related to setting vm.pmap.shpgperproc
to 600 (which could probably lead to KVA exhaustion) but to something else.
I'll try to increase KVA_PAGES (why isn't there a tunable?) and then set
vm.pmap.shpgperproc to some higher value, but this will be after a fresh
make world (I cvsuped already :( ) some time soon.


Can you provide instructions on how to create a testbench that exhibits 
these same problems?  Can eAccelerator + PHP + Apache + some simple script 
+ apachebench do the trick?


If so, that would allow other people to work on the problem.  Kris 
Kennaway seems to like benchmarking; maybe you could pry him temporarily 
away from MySQL benchmarking to take a look at this.


Also note that Peter Wemm has been reducing the size of PV Entries in 
-current, as he was running out of KVA due to them too - maybe he could 
provide you with a patch for 6.x with the same feature.  Here's part of 
his description of the change:


---
  This is important because of the following scenario.   If you have a 1GB
  file (262144 pages) mmap()ed into 50 processes, that requires 13 million
  pv entries.  At 24 bytes per pv entry, that is 314MB of ram and kvm, while
  at 12 bytes it is 157MB.  A 157MB saving is significant.
---
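The arithmetic in that quoted scenario checks out; a quick sketch (page
size taken as the 4096 bytes shown in the vmstat output earlier in the
thread):

```python
PAGE_SIZE = 4096            # bytes per page
file_bytes = 1 << 30        # the 1GB file from the example
procs = 50                  # number of processes mmap()ing it

pages = file_bytes // PAGE_SIZE     # pages in the file
pv_entries = pages * procs          # one pv entry per mapping of each page

print(pv_entries)                       # 13107200, i.e. ~13 million
print(pv_entries * 24 // 10**6, "MB")   # at 24 bytes per entry: 314 MB
print(pv_entries * 12 // 10**6, "MB")   # at 12 bytes per entry: 157 MB
```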

HTH,

Mike Silby Silbersack


Re: Heavy system load by pagedaemon

2006-05-12 Thread Iasen Kostov
On Fri, 2006-05-12 at 10:27 -0500, Mike Silbersack wrote:
 On Fri, 12 May 2006, Iasen Kostov wrote:
 
  Exactly what I did :). I set vm.pmap.shpgperproc=600 in loader.conf and
  about 5 min after boot the system panicked, and I was not even able to see
  the message (either because I was pressing Enter for the command or it
  just doesn't wait for a key). Then I set it to 500 in the loader at boot
  time and currently it works, but where it crashed with ~4,300,000 PV
  entries in use, they now go to ~5,000,000 and it doesn't panic. Which
  makes me think that the panic is not related to setting vm.pmap.shpgperproc
  to 600 (which could probably lead to KVA exhaustion) but to something else.
  I'll try to increase KVA_PAGES (why isn't there a tunable?) and then set
  vm.pmap.shpgperproc to some higher value, but this will be after a fresh
  make world (I cvsuped already :( ) some time soon.
 
 Can you provide instructions on how to create a testbench that exhibits 
 these same problems?  Can eAccelerator + PHP + Apache + some simple script 
 + apachebench do the trick?
 
Nope, Apache probably needs to use many pages of shared memory to
exhaust the PV entries (as I understand it). eAccelerator uses shm when
it has something to put there and most probably Apache does the same. So
I think you'll need a lot of different scripts (and many Apache
processes) to make eAccelerator cache them, and probably some other means
to make Apache use shm on its own (I'm really not sure how Apache uses
shared memory, but it probably does, because this problem appears when
people are using forking Apache).

 If so, that would allow other people to work on the problem.  Kris 
 Kennaway seems to like benchmarking; maybe you could pry him temporarily 
 away from MySQL benchmarking to take a look at this.
 
 Also note that Peter Wemm has been reducing the size of PV Entries in 
 -current, as he was running out of KVA due to them too - maybe he could 
 provide you with a patch for 6.x with the same feature.  Here's part of 
 his description of the change:
 
 ---
This is important because of the following scenario.   If you have a 1GB
file (262144 pages) mmap()ed into 50 processes, that requires 13 million
pv entries.  At 24 bytes per pv entry, that is 314MB of ram and kvm, while
at 12 bytes it is 157MB.  A 157MB saving is significant.
 ---
 
That's really nice to hear. The interesting thing is this:

sysctl vm.zone | grep PV
PV ENTRY: 48,  5114880, 4039498, 564470, 236393602

The PV entry size is 48 here, which is even worse than the 24-byte case ... :)
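Decoding that vm.zone line - assuming (not stated in the thread) that the
columns after the zone name are item size, limit, used, free, and total
requests:

```python
line = "PV ENTRY: 48,  5114880, 4039498, 564470, 236393602"
name, rest = line.split(":")
size, limit, used, free, requests = (int(f.strip()) for f in rest.split(","))

print(f"{used / limit:.0%} of the {name} zone in use")  # 79%
print(f"{used * size // 10**6} MB of KVA held by it")   # 193 MB
```

Being ~79% of the limit in normal operation leaves little headroom before
the pagedaemon has to start collecting pv entries.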

Regards.




Re: Heavy system load by pagedaemon

2006-05-12 Thread Mike Silbersack


On Fri, 12 May 2006, Iasen Kostov wrote:


On Fri, 2006-05-12 at 10:27 -0500, Mike Silbersack wrote:

Can you provide instructions on how to create a testbench that exhibits
these same problems?  Can eAccelerator + PHP + Apache + some simple script
+ apachebench do the trick?


Nope, Apache probably needs to use many pages of shared memory to
exhaust the PV entries (as I understand it). eAccelerator uses shm when
it has something to put there and most probably Apache does the same. So
I think you'll need a lot of different scripts (and many Apache
processes) to make eAccelerator cache them, and probably some other means
to make Apache use shm on its own (I'm really not sure how Apache uses
shared memory, but it probably does, because this problem appears when
people are using forking Apache).


Well, let me restate what I said above.  If nobody else is running into
this, nobody else will be motivated to fix it.  On the other hand, if you
put in the time to figure out how others can reproduce it, others will
be able to try to help fix it.  If you don't show a way to reproduce
it, there's no way it can be fixed.



That's really nice to hear. The interesting thing is this:

sysctl vm.zone | grep PV
PV ENTRY: 48,  5114880, 4039498, 564470, 236393602

The PV entry size is 48 here, which is even worse than the 24-byte case ... :)

Regards.


Ah, I was quoting the i386 change.  On amd64, he reduced it from 48 bytes 
to 24 bytes. :)


Mike Silby Silbersack


Re: Heavy system load by pagedaemon

2006-05-12 Thread Iasen Kostov
On Fri, 2006-05-12 at 11:28 -0500, Mike Silbersack wrote:
 On Fri, 12 May 2006, Iasen Kostov wrote:
 
  On Fri, 2006-05-12 at 10:27 -0500, Mike Silbersack wrote:
  Can you provide instructions on how to create a testbench that exhibits
  these same problems?  Can eAccelerator + PHP + Apache + some simple script
  + apachebench do the trick?
 
  Nope, Apache probably needs to use many pages of shared memory to
  exhaust the PV entries (as I understand it). eAccelerator uses shm when
  it has something to put there and most probably Apache does the same. So
  I think you'll need a lot of different scripts (and many Apache
  processes) to make eAccelerator cache them, and probably some other means
  to make Apache use shm on its own (I'm really not sure how Apache uses
  shared memory, but it probably does, because this problem appears when
  people are using forking Apache).
 
 Well, let me restate what I said above.  If nobody else is running into
 this, nobody else will be motivated to fix it.  On the other hand, if you
 put in the time to figure out how others can reproduce it, others will
 be able to try to help fix it.  If you don't show a way to reproduce
 it, there's no way it can be fixed.
 
The problem is not really a tuning problem. I think the only way to fix
this (probably) is to have the VM zones auto-scale their sizes, and
KVA_PAGES should probably tune itself at boot time according to available
memory (those are of course just thoughts; I don't know anything about
the pros and cons of this situation). But maybe that is too fundamental,
as there is not even a tunable for KVA_PAGES.

And something else - on a heavily loaded machine the message about
collecting pv entries disappears too fast from the dmesg buffer, and
there are only 5 of them. They should probably be rate-limited to some
number per minute (maybe as part of some kernel logging system). When you
don't see the message you start wondering what is happening - plenty of
RAM, no swapping, and the machine freezes at 30 sec to 5 min intervals
for ~5 sec. Not fun, really :).

Regards.



Re: Exiting Xorg panics Core Duo laptop

2006-05-12 Thread Rick C. Petty
On Thu, May 11, 2006 at 09:07:03PM -0500, Eric Anderson wrote:
 
 I'd use the nv driver instead, however I'm not sure how to make it work 
 with this screen (1920x1200).

This is my entry for my Dell 2405FPW:

Section "Monitor"
Identifier  "Monitor1"
DisplaySize 520 330 # yes, I measured it
HorizSync   30.0-81.0
VertRefresh 56.0-76.0
Mode "1920x1200"
DotClock 154.0
HTimings 1920 1968 2000 2080
VTimings 1200 1203 1209 1235
EndMode
Option  "DPMS"
EndSection
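Those mode timings can be sanity-checked against the HorizSync/VertRefresh
ranges above; a quick sketch assuming the usual relations
hsync = DotClock / HTotal and refresh = DotClock / (HTotal * VTotal):

```python
dotclock_hz = 154.0e6   # DotClock 154.0 (MHz)
htotal = 2080           # last HTimings value
vtotal = 1235           # last VTimings value

hsync_khz = dotclock_hz / htotal / 1e3
refresh_hz = dotclock_hz / (htotal * vtotal)
print(f"{hsync_khz:.1f} kHz, {refresh_hz:.2f} Hz")  # 74.0 kHz, 59.95 Hz
```

Both values fall inside the 30.0-81.0 kHz and 56.0-76.0 Hz ranges
declared for the monitor, so the mode should be accepted.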


I can't remember if I needed the frequencies-- I had some initial problems
using the DVI input and nvidia not detecting things correctly, but then I
added this to the Device section:

Option  "ConnectedMonitor"  "DPF-1"

YMMV,

-- Rick C. Petty


qemu is unusable on my smp box

2006-05-12 Thread Paolo Pisati

With the latest qemu port (but I tried to downgrade it down to
7.2.something and it didn't change), on a 6.1 host (see the NEWLUXOR
kernel config file attached), running a 6.1 FreeSBIE image (see the IPFW
kernel config attached), qemu showed the following problems:

1) when I launch qemu, many times it goes directly to the monitor and
   sits there eating CPU (I have to kill it manually)

2) whenever it boots, I get a lot of 'calcru: going backwards...'
   (with or without kqemu loaded - no difference at all)

3) when I try to shutdown/reboot the FreeSBIE image, it hangs
   after killing syslogd, and after around 15-20 mins it starts
   syncing disks

4) generally, whenever it boots, it feels _really_ slow, i.e. if
   I type a character there's a visible delay before it shows
   up in the qemu window - unusable

All these problems occur on a dual-core Pentium D 920 (2.8GHz),
2GB RAM, Intel 945G (X in VESA mode - AGP doesn't attach)
(see dmesg attached), while the exact same FreeSBIE image runs
fine on a 6.1 [EMAIL PROTECTED] 512MB RAM and Intel 915G (X with i810
and DRI).
Am I the only one experiencing these problems?
Is anyone running qemu on an SMP host?

-- 

Paolo

machine i386
cpu I686_CPU
ident   NEWLUXOR

#optionsSCHED_ULE   # ULE scheduler
options SCHED_4BSD  # 4BSD scheduler
options PREEMPTION  # Enable kernel thread preemption
options INET# InterNETworking
options INET6   # IPv6 communications protocols
options FFS # Berkeley Fast Filesystem
options SOFTUPDATES # Enable FFS soft updates support
options UFS_DIRHASH # Improve performance on big directories
options NFSCLIENT   # Network Filesystem Client
options MSDOSFS # MSDOS Filesystem
options MSDOSFS_LARGE
options CD9660  # ISO 9660 Filesystem
options PROCFS  # Process filesystem (requires PSEUDOFS)
options PSEUDOFS# Pseudo-filesystem framework
options COMPAT_43   # Compatible with BSD 4.3 [KEEP THIS!]
options COMPAT_FREEBSD4 # Compatible with FreeBSD4
options COMPAT_FREEBSD5 # Compatible with FreeBSD5
options SCSI_DELAY=5000 # Delay (in ms) before probing SCSI
options KTRACE  # ktrace(1) support
options SYSVSHM # SYSV-style shared memory
options SYSVMSG # SYSV-style message queues
options SYSVSEM # SYSV-style semaphores
options _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions
options KBD_INSTALL_CDEV# install a CDEV entry in /dev
options ADAPTIVE_GIANT  # Giant mutex is adaptive.

options SMP # Symmetric MultiProcessor Kernel
device  apic# I/O APIC

# Bus support.
device  pci

# ATA and ATAPI devices
device  ata
device  atadisk # ATA disk drives
device  atapicd # ATAPI CDROM drives
options ATA_STATIC_ID   # Static device numbering

# SCSI peripherals
device  scbus   # SCSI bus (required for SCSI)
device  ch  # SCSI media changers
device  da  # Direct Access (disks)
device  sa  # Sequential Access (tape etc)
device  cd  # CD
device  pass# Passthrough device (direct SCSI access)
device  ses # SCSI Environmental Services (and SAF-TE)

# atkbdc0 controls both the keyboard and the PS/2 mouse
device  atkbdc  # AT keyboard controller
device  atkbd   # AT keyboard
device  psm # PS/2 mouse

device  kbdmux  # keyboard multiplexer

device  vga # VGA video card driver

device  splash  # Splash screen and screen saver support

# syscons is the default console driver, resembling an SCO console
device  sc

device  agp # support several AGP chipsets

# Power management support (see NOTES for more options)
#device apm
# Add suspend/resume support for the i8254.
device  pmtimer

# Serial (COM) ports
device  sio # 8250, 16[45]50 based serial ports

# Parallel port
device  ppc
device  ppbus   # Parallel port bus (required)
device  ppi # Parallel port interface device

device  miibus  # MII bus support
device  re  # RealTek 8139C+/8169/8169S/8110S

# Pseudo devices.
device  loop# Network loopback
device  random  # Entropy device
device  ether   # Ethernet support
device  sl  # 

no core file handler recognizes format

2006-05-12 Thread Avleen Vig
My 6.0 box[1], and now my 6.1 box[1] have been crashing almost daily,
due to page faults. Last night I was lucky enough to see it happen on
screen, where it reported a Page not present error.

When I try to look at the core file in GDB, I get this error:

warning: /usr/home/avleen/vmcore.0: no core file handler recognizes
format, using default
Can't fetch registers from this type of core file
Can't fetch registers from this type of core file
#0  0x in ?? ()


I did see a message from a few months ago which stated the userland
could be out of step with the kernel because the core file format changed
at some point, but I've been building the kernel and world together for
6.0 and 6.1.

Any suggestions before I downgrade to 5.4?

-- 
Avleen Vig
Systems Administrator
Personal: www.silverwraith.com

Wickedness is a myth invented by good people to account for the curious
 attractiveness of others.  - Oscar Wilde


Re: no core file handler recognizes format

2006-05-12 Thread Stanislav Sedov
On Fri, May 12, 2006 at 03:00:19PM -0700, Avleen Vig wrote:
 My 6.0 box[1], and now my 6.1 box[1] have been crashing almost daily,
 due to page faults. Last night I was lucky enough to see it happen on
 screen, where it reported a Page not present error.
 
 When I try to look at the core file in GDB, I get this error:
 
 warning: /usr/home/avleen/vmcore.0: no core file handler recognizes
 format, using default
 Can't fetch registers from this type of core file
 Can't fetch registers from this type of core file
 #0  0x in ?? ()
 
 
 I did see a message from a few months ago which stated the userland
 could be out of step with the kernel because the core file format changed
 at some point, but I've been building the kernel and world together for
 6.0 and 6.1.
 
 Any suggestions before I downgrade to 5.4?
 

You should use kgdb rather than gdb.  GDB doesn't recognize the kernel
dump format by default.

-- 
-
Stanislav Sedov MBSD labs, Inc. [EMAIL PROTECTED]
If the facts don't fit the theory, change the facts. -- A. Einstein
-


Re: help:How to map a physical address into a kernel address?

2006-05-12 Thread M. Warner Losh
In message: [EMAIL PROTECTED]@promisechina.com
[EMAIL PROTECTED] writes:
: To access sg_table in kernel address, I need to map the starting physical
: address of a segment into a kernel address. As I know that, we can use
: phystovirt()/bustovirt(), or kmap()/kmap_atomic() to map a bus/physical
: address or a physical page into a kernel address in Linux, but I did not
: find such a function in FreeBSD. Please help me on this, it is very urgent!

Use busdma.

Warner


Re: no core file handler recognizes format

2006-05-12 Thread Avleen Vig
On Sat, May 13, 2006 at 02:39:19AM +0400, Stanislav Sedov wrote:
 You should use kgdb rather than gdb.  GDB doesn't recognize the kernel
 dump format by default.

Ah thank you!

Here's the information I found.
Any help that anyone can provide will go into a nice little "crash
debugging for beginners" document which I've started working on :-)



Ok kgdb tells me:

Fatal trap 12: page fault while in kernel mode
fault virtual address   = 0x58
fault code  = supervisor write, page not present
instruction pointer = 0x20:0xc06005aa
stack pointer   = 0x28:0xd6c13ad0
frame pointer   = 0x28:0xd6c13b00
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, def32 1, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process = 25911 (python)
trap number = 12
panic: page fault
Uptime: 14h49m8s
Dumping 511 MB (2 chunks)
  chunk 0: 1MB (159 pages) ... ok
  chunk 1: 511MB (130800 pages) 495 479 463 447 431 415 399 383 367 351 335 319 
303 287 271 255 239 223 207 191 175 159 143 127 111 95 79 63 47 31 15

#0  doadump () at pcpu.h:165
165 pcpu.h: No such file or directory.
in pcpu.h

The few lines before trap() was called look like this:

#5  0xc071a38d in trap (frame=
  {tf_fs = 8, tf_es = 40, tf_ds = 40, tf_edi = 0, tf_esi = 0, tf_ebp = 
-691979520, tf_isp = -691979588, tf_ebx = -691979136, tf_edx = -691978864, 
tf_ecx = 0, tf_eax = 8, tf_trapno = 12, tf_err = 2, tf_eip = -1067448918, tf_cs 
= 32, tf_eflags = 66183, tf_esp = -691979136, tf_ss = -691979544})
at /usr/src/sys/i386/i386/trap.c:434
#6  0xc070814a in calltrap () at /usr/src/sys/i386/i386/exception.s:139
#7  0xc06005aa in ip_ctloutput (so=0x8, sopt=0xd6c13c80)
at /usr/src/sys/netinet/ip_output.c:1210
