David O'Brien wrote:
On Sun, Sep 23, 2001 at 04:05:27PM +0200, Cyrille Lefevre wrote:
David O'Brien wrote:
On Mon, Sep 17, 2001 at 05:42:23PM -0700, Jordan Hubbard wrote:
We're still waiting for 4.0's support footprint to widen
a bit more before subjecting people to it by default.
On Mon, Sep 24, 2001 at 08:56:08AM +0200, Cyrille Lefevre wrote:
What kind of issues? I'm using both XFree86-4 and ports in package form
(pre-compiled stuff) without any problems.
Please RTF /usr/ports/Mk/bsd.port.mk and look at what XFREE86_VERSION
does.
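For reference, that variable can also be pinned by hand; a minimal sketch of
what people typically put in /etc/make.conf (the value here is only an
example):

    # /etc/make.conf -- build X-related ports against XFree86 4.x
    XFREE86_VERSION=4

bsd.port.mk keys X dependencies and defaults off that value.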
--
-- David ([EMAIL PROTECTED])
Ok, here is the second set of results. I didn't run all the tests
because nothing I did appeared to really have much of an effect. In
this set of tests I set MAXMEM to 128M. As you can see the buildworld
took longer versus 512M (no surprise), and vmiodirenable still helped
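For anyone who wants to repeat the runs, the two knobs involved are the
MAXMEM kernel option and the vfs.vmiodirenable sysctl; roughly (values are
just the ones used here, and the exact syntax is from memory):

    # kernel config file: cap usable RAM at 128 MB (the value is in KB)
    options         MAXMEM="(128*1024)"

    # /etc/sysctl.conf, or at runtime via sysctl(8):
    vfs.vmiodirenable=1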
On Mon, Sep 24, 2001 at 01:07:00PM +0200, Attila Nagy wrote:
Hello,
I'm just curious: is it possible to set up an NFS server and a client
where the client has very big (28 GB maximum for FreeBSD?) swap area on
multiple disks and caches the NFS exported data on it?
This could save a lot of
Hi,
I have noticed some strange behaviour with 4.3-RELEASE and dump. I have
been dumping my filesystems through gzip into a compressed dumpfile.
Some
of the resulting dumps have been MUCH larger than I would expect.
As an example, I have just dumped my /home partition; note that lots
of
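For context, the kind of pipeline being described is roughly the following
(flags and paths are illustrative):

    # level-0 dump of /home to stdout, compressed on the fly
    dump -0a -f - /home | gzip > /backup/home.dump.gz

    # sanity-check the result by listing its table of contents
    zcat /backup/home.dump.gz | restore -tf - > /dev/null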
On Mon, Sep 24, 2001 at 01:07:00PM +0200, Attila Nagy wrote:
I'm just curious: is it possible to set up an NFS server and a client
where the client has very big (28 GB maximum for FreeBSD?) swap area on
multiple disks and caches the NFS exported data on it?
This could save a lot of bandwidth
Hello,
| In short, which program gives the microprocessor (?) enough knowledge
| to use kern.flp and mfsroot.flp in order to boot and get the
| operating system running?
your BIOS reads the first sector from your floppy, which contains
a boot loader, which usually loads
As a side note, Irix and Solaris provide cachefs for this purpose and use
NFS filesystems as examples (other examples may include CD-ROM, etc.).
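For the curious, the Solaris incantation is roughly the following (directory
and server names are made up; see cfsadmin(1M) and mount_cachefs(1M) for the
real details):

    # create a cache directory, then mount an NFS export through it
    cfsadmin -c /var/cachefs/cache0
    mount -F cachefs -o backfstype=nfs,cachedir=/var/cachefs/cache0 \
        server:/export/data /mnt/data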
Charles
-----Original Message-----
From: David Malone [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 24, 2001 8:26 AM
To: Attila Nagy
Cc: [EMAIL
Thanks for the responses; as expected, it was an operator head-space problem.
I didn't understand how the default queue and bandwidth settings would make
ping behave. Apparently, merely adding a pipe introduces enough delay that
the ping client times out waiting for the response. The response was
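For anyone who wants to reproduce the effect, a minimal dummynet setup looks
something like this (numbers are only an example; the kernel needs
options IPFIREWALL and options DUMMYNET):

    # create a pipe with limited bandwidth and 250 ms of extra latency,
    # then push ICMP through it
    ipfw pipe 1 config bw 64Kbit/s delay 250
    ipfw add 1000 pipe 1 icmp from any to any

With that much delay in each direction, the round trip easily produces the
timeouts described above.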
Seems to still have the S1G bug:
Connected to cvsup4.freebsd.org
Server cvsup4.freebsd.org has the S1G bug
--
Regards, Ulf.
-
Ulf Zimmermann, 1525 Pacific Ave., Alameda, CA-94501, #: 510-865-0204
++ 24/09/01 11:30 -0700 - Ulf Zimmermann:
| Seems to still have the S1G bug:
|
| Connected to cvsup4.freebsd.org
| Server cvsup4.freebsd.org has the S1G bug
This should go to the maintainer of cvsup4.freebsd.org, available at:
On 23-Sep-01 Evan Sarmiento wrote:
Hello,
After compiling a new kernel, installing it, when my laptop
tries to mount its drive, it panics with this message:
panic: lock (sleep mutex) vnode interlock not locked @
../../../kern/vfs_default.c:460
which is:
if (ap->a_flags
I saw a duplicate in one of the capabilities that were submitted to -bugs earlier.
This had me thinking. What happens when a duplicate capability exists in termcap?
Are there any other duplicates in termcap.src? If yes, which?
The first attachment is a perl script that strips all cruft from
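Not the attached script, but a very rough way to eyeball one entry for
repeated capability names is to join the continuation lines and look for
names that occur twice (the entry pattern and the path are only examples):

    sed -n -e :a -e '/\\$/N; s/\\\n//; ta' -e '/|vt100|/p' \
        /usr/share/misc/termcap |
        tr ':' '\n' | tr -d ' \t' | sed -e 's/[=#].*//' -e '/^$/d' |
        sort | uniq -d

Anything printed by uniq -d is a capability name that appears more than once
in that entry (tc= indirection is not expanded here).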
:
:In message [EMAIL PROTECTED], Matt Dillon writes:
:
:$8 = 58630
:(kgdb) print vm_page_buckets[$8]
:
:What is vm_page_hash_mask? The chunk of memory you printed out below
:looks alright; it is consistent with vm_page_array == 0xc051c000. Is
:it just the vm_page_buckets[] pointer that is
Remember that we hit almost exactly this problem with the KSE stuff during
debugging?
The pointers in the last few entries of the vm_page_buckets array got
corrupted when an argument to a function that manipulated whatever was next
in RAM was 0, and it turned out that it was 0 because
of some PTE
Tell me if I am wrong, but from the floppy, the files kern.flp and
mfsroot.flp are compressed and then uncompressed into memory.
If so, that means that the FreeBSD box is running these programs from
RAM and not from the floppy, right?
Correct. They're running with the root device set to a
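For what it's worth, it is the loader script on the kern floppy that pulls
those in; paraphrased from memory (so treat this as a sketch, not the exact
loader.rc), it does something like:

    load /kernel
    load -t mfs_root /mfsroot
    boot

The kernel then finds the preloaded mfsroot image and mounts it as /, so
everything after that runs out of RAM.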
The pointers in the last few entries of the vm_page_buckets array got
corrupted when an argument to a function that manipulated whatever was next
in RAM was 0, and it turned out that it was 0 because
of some PTE flushing thing (you are the one that found it... remember?)
I think I've also seen
:The pointers in the last few entries of the vm_page_buckets array got
:corrupted when an argument to a function that manipulated whatever was next
:in RAM was 0, and it turned out that it was 0 because
: of some PTE flushing thing (you are the one that found it... remember?)
:
:I think I've also
:
:Remember that we hit almost exactly this problem with the KSE stuff during
:debugging?
:
:The pointers in the last few entries of the vm_page_buckets array got
:corrupted when an argument to a function that manipulated whatever was next
:in RAM was 0, and it turned out that it was 0 because
: of some
Not in 4.x, I believe.
We do in 5.x.
On Mon, 24 Sep 2001, Matt Dillon wrote:
:The pointers in the last few entries of the vm_page_buckets array got
:corrupted when an argument to a function that manipulated whatever was next
:in RAM was 0, and it turned out that it was 0 because
: of some
In message [EMAIL PROTECTED], Matt Dillon writes:
Hmm. Do we have a guard page at the base of the per process kernel
stack?
As I understand it, no. In RELENG_4 there are UPAGES (== 2 on i386)
pages of per-process kernel state at p->p_addr. The stack grows
down from the top, and struct
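Schematically (as I understand it; this is a sketch, not dug out of the
source):

    p->p_addr + UPAGES*PAGE_SIZE   <- top of the kernel stack
           |   stack grows downward
           v
    p->p_addr                      <- per-process state (struct user/pcb)
                                      at the base, with no guard page
                                      below the stack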
:In message [EMAIL PROTECTED], Matt Dillon writes:
:
:Hmm. Do we have a guard page at the base of the per process kernel
:stack?
:
:As I understand it, no. In RELENG_4 there are UPAGES (== 2 on i386)
:pages of per-process kernel state at p->p_addr. The stack grows
:down from the top, and
:
:In message [EMAIL PROTECTED], Matt Dillon writes:
:
:Hmm. Do we have a guard page at the base of the per process kernel
:stack?
:
:As I understand it, no. In RELENG_4 there are UPAGES (== 2 on i386)
:pages of per-process kernel state at p->p_addr. The stack grows
:down from the top,
What happens on an ECC equipped PC when you have a multi-bit memory
error that hardware scrubbing can't fix? Will there be some sort of
NMI or something that will panic the box?
I'm used to alphas (where you'll get a fatal machine check panic) and
I am just wondering if PCs are as safe.
stack can be somewhat sparse depending on execution path, but it's not a
bad idea..
On Mon, 24 Sep 2001, Matt Dillon wrote:
:In message [EMAIL PROTECTED], Matt Dillon writes:
:
:Hmm. Do we have a guard page at the base of the per process kernel
:stack?
:
:As I understand it, no.
:What happens on an ECC equipped PC when you have a multi-bit memory
:error that hardware scrubbing can't fix? Will there be some sort of
:NMI or something that will panic the box?
:
:I'm used to alphas (where you'll get a fatal machine check panic) and
:I am just wondering if PCs are as safe.
Matt Dillon writes:
:What happens on an ECC equipped PC when you have a multi-bit memory
:error that hardware scrubbing can't fix? Will there be some sort of
:NMI or something that will panic the box?
:
:I'm used to alphas (where you'll get a fatal machine check panic) and
:I am
Andrew Gallatin wrote:
What happens on an ECC equipped PC when you have a multi-bit memory
error that hardware scrubbing can't fix? Will there be some sort of
NMI or something that will panic the box?
I'm used to alphas (where you'll get a fatal machine check panic) and
I am just
Andrew Gallatin wrote:
Matt Dillon writes:
:What happens on an ECC equipped PC when you have a multi-bit memory
:error that hardware scrubbing can't fix? Will there be some sort of
:NMI or something that will panic the box?
:
:I'm used to alphas (where you'll get a fatal
Matt Dillon wrote:
:The pointers in the last few entries of the vm_page_buckets array got
:corrupted when an argument to a function that manipulated whatever was next
:in RAM was 0, and it turned out that it was 0 because
: of some PTE flushing thing (you are the one that found it...
:
:I did it as part of the KSE work in 5.x. It would be quite easy to do it
:for 4.x as well, but it makes a.out coredumps problematic.
:
:Also, options UPAGES=4 is a pretty good defensive measure.
:
:Cheers,
:-Peter
:--
:Peter Wemm - [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
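For reference, that is an ordinary kernel config option; roughly, add
something like the following to your kernel config file and rebuild with the
usual config(8)/make cycle:

    options         UPAGES=4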
On Mon, 24 Sep 2001, Matt Dillon wrote:
Yowzer. How the hell did that happen! Yes, you're right, the
vm_page_array[] pointer has gotten corrupted. If we assume that
the vm_page_t is valid (0xc0842acc), then the vm_page_buckets[]
pointer should be that.
...
This is
Matt Dillon wrote:
:
:I did it as part of the KSE work in 5.x. It would be quite easy to do it
:for 4.x as well, but it makes a.out coredumps problematic.
:
:Also, options UPAGES=4 is a pretty good defensive measure.
:
:Cheers,
:-Peter
:--
:Peter Wemm - [EMAIL PROTECTED]; [EMAIL
:Oh, one other thing... When we had PCIBIOS active for PCI config space
:read/write support, we had stack overflows on many systems when the SSE
:stuff got MFC'ed. The simple act of trimming about 300 bytes from the
:pcb_save structure was enough to make the difference between it working or
This isn't perfect but it should be a good start in regards to
testing kstack use. This patch is against -stable. It reports
kernel stack use on process exit and will generate a 'Kernel stack
underflow' message if it detects an underflow. It doesn't panic,
so for a fun
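For anyone reading along without the patch in front of them: the usual trick
for this kind of check is to paint the unused stack with a known pattern when
the process is set up and, at exit, scan up from the base to see how much of
the pattern survived. A toy sketch of the idea in C (this is not Matt's
patch, just the general technique; the names and fill value are made up):

    #include <stddef.h>
    #include <stdint.h>

    #define STACK_FILL 0xdeadc0deU

    /* Paint the whole stack area with the fill pattern (done once, at setup). */
    static void
    kstack_paint(uint32_t *base, size_t nwords)
    {
            size_t i;

            for (i = 0; i < nwords; i++)
                    base[i] = STACK_FILL;
    }

    /*
     * Return the number of bytes ever used, assuming the stack grows down
     * from base[nwords] toward base[0]: the first word from the bottom that
     * no longer holds the pattern marks the deepest excursion.  If even
     * base[0] has been overwritten, the stack bottomed out (or worse).
     */
    static size_t
    kstack_used(const uint32_t *base, size_t nwords)
    {
            size_t i;

            for (i = 0; i < nwords; i++)
                    if (base[i] != STACK_FILL)
                            break;
            return ((nwords - i) * sizeof(uint32_t));
    }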
Matt Dillon wrote:
This isn't perfect but it should be a good start in regards to
testing kstack use. This patch is against -stable. It reports
kernel stack use on process exit and will generate a 'Kernel stack
underflow' message if it detects an underflow. It doesn't
:
:Matt Dillon wrote:
: This isn't perfect but it should be a good start in regards to
: testing kstack use. This patch is against -stable. It reports
: kernel stack use on process exit and will generate a 'Kernel stack
: underflow' message if it detects an underflow. It
:stack size = 4688
Sep 24 22:47:22 test1 /kernel: process 29144 exit kstackuse 4496
closer... :-)
-Matt
Matt Dillon wrote:
Yah... the test I ran was just a couple of seconds worth of playing
around over ssh. I expect the worst case to be a whole lot worse.
We're going to have to bump up UPAGES to 3 in 4.x, there's no question
about it. I'm going to do it tonight.
Heh. I