Inheriting the nodump flag

2001-02-26 Thread Dima Dorfman

Hello -hackers

Some time ago, on -arch, phk proposed that the nodump flag should be
inherited (see 'inheriting the "nodump" flag ?' around Dec. 2000).
This was generally considered a good idea; however, the patch to the
kernel he proposed was thought an ugly hack.  In addition, jeroen
pointed out that NetBSD had implemented this functionality the Right
Way(tm), in dump(8).

Attached below is a port of NetBSD's patch to FreeBSD's dump(8).
dump's tree walker is a little weird, so the patch is a little more
complicated than calling fts_set with FTS_SKIP.  For the technical
details of what it does, see:
http://lists.openresources.com/NetBSD/tech-kern/msg00453.html.

I've been using this on two of my hosts for a while, and it works as
expected.  Given the additional fact that NetBSD has had this for
almost two years, and that the patch below looks very similar to the
one they applied, I doubt it significantly breaks anything.
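For readers unfamiliar with the simpler approach mentioned above, this is
roughly what skipping nodump subtrees looks like in an fts(3)-based walker;
walk_skipping_nodump is a hypothetical illustration, not code from dump(8),
and the UF_NODUMP test is compiled only on systems whose <sys/stat.h>
defines the flag:

```c
#include <sys/types.h>
#include <sys/stat.h>
#include <fts.h>
#include <stdio.h>

/* Walk a tree, pruning any directory marked nodump (hypothetical helper). */
int
walk_skipping_nodump(char * const *root)
{
	FTS *ftsp = fts_open(root, FTS_PHYSICAL, NULL);
	FTSENT *e;

	if (ftsp == NULL)
		return (-1);
	while ((e = fts_read(ftsp)) != NULL) {
#ifdef UF_NODUMP
		/* On BSD, st_flags carries the nodump bit; prune here. */
		if (e->fts_info == FTS_D &&
		    (e->fts_statp->st_flags & UF_NODUMP) != 0) {
			fts_set(ftsp, e, FTS_SKIP);	/* never descend */
			continue;
		}
#endif
		if (e->fts_info == FTS_F)
			printf("would dump %s\n", e->fts_path);
	}
	return (fts_close(ftsp));
}
```

dump(8) can't do it this way because it walks the inode bitmaps rather than
the directory tree, which is why the patch threads tapesize and nodump
through dirindir() and searchdir() instead.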

Comments?

Thanks in advance

Dima Dorfman
[EMAIL PROTECTED]


Index: traverse.c
===================================================================
RCS file: /st/src/FreeBSD/src/sbin/dump/traverse.c,v
retrieving revision 1.11
diff -u -r1.11 traverse.c
--- traverse.c  2000/04/14 06:14:59 1.11
+++ traverse.c  2001/02/20 01:39:06
@@ -74,9 +74,11 @@
 typedef	long fsizeT;
 #endif
 
-static	int dirindir __P((ino_t ino, daddr_t blkno, int level, long *size));
+static	int dirindir __P((ino_t ino, daddr_t blkno, int level, long *size,
+    long *tapesize, int nodump));
 static	void dmpindir __P((ino_t ino, daddr_t blk, int level, fsizeT *size));
-static	int searchdir __P((ino_t ino, daddr_t blkno, long size, long filesize));
+static	int searchdir __P((ino_t ino, daddr_t blkno, long size, long filesize,
+    long *tapesize, int nodump));
 
 /*
  * This is an estimation of the number of TP_BSIZE blocks in the file.
@@ -152,10 +154,14 @@
 		dp = getino(ino);
 		if ((mode = (dp->di_mode & IFMT)) == 0)
 			continue;
-		SETINO(ino, usedinomap);
+		/*
+		 * All dirs go in dumpdirmap; only inodes that are to
+		 * be dumped go in usedinomap and dumpinomap, however.
+		 */
 		if (mode == IFDIR)
 			SETINO(ino, dumpdirmap);
 		if (WANTTODUMP(dp)) {
+			SETINO(ino, usedinomap);
 			SETINO(ino, dumpinomap);
 			if (mode != IFREG && mode != IFDIR && mode != IFLNK)
 				*tapesize += 1;
@@ -192,9 +198,10 @@
 	long *tapesize;
 {
 	register struct dinode *dp;
-	register int i, isdir;
+	register int i, isdir, nodump;
 	register char *map;
 	register ino_t ino;
+	struct dinode di;
 	long filesize;
 	int ret, change = 0;
 
@@ -204,24 +211,34 @@
 			isdir = *map++;
 		else
 			isdir >>= 1;
-		if ((isdir & 1) == 0 || TSTINO(ino, dumpinomap))
+		/*
+		 * If a directory has been removed from usedinomap, it
+		 * either has the nodump flag set, or has inherited
+		 * it.  Although a directory can't be in dumpinomap if
+		 * it isn't in usedinomap, we have to go through it to
+		 * propagate the nodump flag.
+		 */
+		nodump = (TSTINO(ino, usedinomap) == 0);
+		if ((isdir & 1) == 0 || (TSTINO(ino, dumpinomap) && !nodump))
 			continue;
 		dp = getino(ino);
-		filesize = dp->di_size;
+		di = *dp;	/* inode buf may change in searchdir(). */
+		filesize = di.di_size;
 		for (ret = 0, i = 0; filesize > 0 && i < NDADDR; i++) {
-			if (dp->di_db[i] != 0)
-				ret |= searchdir(ino, dp->di_db[i],
+			if (di.di_db[i] != 0)
+				ret |= searchdir(ino, di.di_db[i],
 				    (long)dblksize(sblock, dp, i),
-				    filesize);
+				    filesize, tapesize, nodump);
 			if (ret & HASDUMPEDFILE)
 				filesize = 0;
 			else
 				filesize -= sblock->fs_bsize;
 		}
 		for (i = 0; filesize > 0 && i < NIADDR; i++) {
-			if (dp->di_ib[i] == 0)
+			if (di.di_ib[i] == 0)
 				continue;
-			ret |= dirindir(ino, dp->di_ib[i], i, &filesize);
+			ret |= dirindir(ino, di.di_ib[i], i, &filesize,
+			    tapesize, nodump);
 		}
 		if (ret & HASDUMPEDFILE) {

Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], Julian Elischer writes:
I still think that in such a case it should be possible to
'test the commitment' by touching all the allocated memory 
while trapping page faults.

And what do you do if you get one?  There's no undo button for SIGSEGV.
Traditionally, you return from the signal handler right where you were.
Can you get out of this with longjmp()?  Probably.  It's not exactly
supported or guaranteed.

In any event:

1.  The C language spec doesn't require you to do this.
2.  Other implementations have provided this guarantee, at least as an
option.

It's odd that I see lots of people arguing that a segfault should kill a
process for accessing memory that was "successfully" allocated, but no one
arguing that a process should be killed when it exceeds a disk quota.

-s

To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], Matt Dillon writes:
The problem is a whole lot more complex than you think.  Dealing with
overcommit is not simply counting mapped pages, there are all sorts
of issues involved.  But the biggest gotcha is that putting in
overcommit protection will not actually save your system from
dying a terrible death.

No, but it may allow me to stop and save some work and then gracefully
suicide, rather than being segfaulted.

I have seen plenty of designs which are able to say "give me another 32MB of
memory.  Oh, you can't?  Okay, I'll do this the other way."
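That pattern, sketched; the sizes and the helper name are made up, and it
only helps on a system where a NULL from malloc(3) really means the memory
isn't there:

```c
#include <stdlib.h>

#define BIG_BUF   (32u * 1024 * 1024)	/* preferred: 32MB in core */
#define SMALL_BUF (64u * 1024)		/* fallback: work in 64KB chunks */

/*
 * Try the big in-core buffer first; fall back to a smaller chunked
 * strategy when malloc(3) reports failure.  The caller adapts its
 * algorithm to whichever size it actually got (via *sizep).
 */
void *
alloc_working_buffer(size_t *sizep)
{
	void *p;

	if ((p = malloc(BIG_BUF)) != NULL) {
		*sizep = BIG_BUF;
		return (p);
	}
	if ((p = malloc(SMALL_BUF)) != NULL) {
		*sizep = SMALL_BUF;	/* degraded mode */
		return (p);
	}
	*sizep = 0;
	return (NULL);		/* truly out of memory: save work and exit */
}
```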

Most importantly, though, I would rather be able to save my work and *THEN*
abort because there's insufficient memory.

Let's compare FreeBSD's handling of this situation to the handling we get
under one of the most unstable pieces of crap imaginable:  MacOS.

When I run out of memory on my Mac, you know what happens?  I get little
warning boxes saying there isn't enough memory to complete a given operation.
Almost all the time, I am then allowed to continue *using* the applications
I have open.  Indeed, the application which *gave* me the error message
is still able to run... long enough for me to save my work and close a
few windows.

Apparently, the "desirable" behavior under FreeBSD would be for the
application to believe it had memory, then suddenly abort without
warning, and with no chance to save my work.

It in fact makes it *more* likely that the
system will die a horrible death, because with overcommit protection
you wind up pre-reserving resources that would otherwise be reserved on
demand (and often might not ever be used by the programs mapping
those resources).

Sure.  But I want to know *UP FRONT* when I have problems.  If I am trying
to allocate too much memory, for God's sake, *TELL ME*.  Give me warnings
and errors.  Make my program thrash, so I start wondering where the swap
space went, and I can try to fix the problem, but don't just randomly kill
processes that may or may not have done anything "wrong".

Not only that, but overcommit protection does not
protect you from thrashing, and quite often the system will become
unusable long before it actually runs out of memory+swap.

I am okay with saving my work slowly.  I am not okay with not saving my
work.

A simple example of this is mmap(... PROT_READ|PROT_WRITE, MAP_PRIVATE),

I am not sure that any promises have ever been made about the stability of
mmap()'d memory.  :)

So before you get into trying to 'protect' yourself by implementing
overcommit protection, you need to think long and hard on what you
are actually protecting yourself from... and what you aren't.  People
seem to believe that edge cases such as allocating memory and then
the system faulting the process because it couldn't allocate the
actual backing store later is of paramount importance but they seem
to forget that by the time you reach that situation, you are generally
already past the point of no return.

This is quite possibly the case.  On the other hand, in theory, the system
could limit the total number of dirtyable pages such that it has a guarantee
of enough space to swap... or at least promise to only kill programs that
haven't had the "don't kill me" flag set.

There are real applications, if only a few, where this would make a
substantial difference.  There are lots where it would at least provide warm
fuzzies, and a checkbox conformance item that some one may have specified.

Overcommit protection doesn't 
even come close to being able to prevent your system from getting 
into an unrecoverable situation and it certainly is no substitute for
writing the software correctly (such that the memory failure can't
occur in the first place).

Unfortunately, as has been made abundantly clear, you can't write software
such that a memory failure can't occur in the first place.  If someone else
writes a program that allocates a giant chunk of memory, then slowly goes
through dirtying pages, it is quite likely that a correctly written program
will eventually be killed.  If, indeed, FreeBSD might also kill a program
for dirtying bss space, you can't write *ANY* program "correctly"; you can
always get a SIGSEGV for accessing an object that was in no way
distinguishable from an object you have permission to access.

-s




Re: Build timings - FreeBSD 4.2 vs. Linux

2001-02-26 Thread Robin Cutshaw

On Wed, Feb 21, 2001 at 10:44:42AM -0800, Peter Wemm wrote:
  
  There's a problem here.  I tried to configure an SMP kernel but when it
  booted the fxp0 (Compaq dual eepro100 adapter) got timeout errors and
  wouldn't work.  I went back and did the config/make on the GENERIC
  kernel and booted it.  Same thing.  The stock GENERIC kernel that came
  with the dist works just fine.  Any ideas?
  
  One other problem I've seen with the Compaq 8500 system.  FreeBSD doesn't
  see the pci adapter on the secondary bus.  I had to move the ethernet
  adapter to the primary bus for it to work.
 
 Perhaps the output of 'pciconf -l' and mptable(8) would be useful.
 dmesg also, after a verbose boot (boot -v at the loader).
 

I did a little more research.  The GENERIC kernel works fine, even when
rebuilt.  If I uncomment SMP/APIC, the kernel doesn't work with the
ethernet card.  It looks like interrupts aren't being processed.

Also, it looks like 4.2 is not scanning the complete PCI bus.  Here's
the output from pciconf:


pcib3@pci0:1:0: class=0x060400 card=0x00dc chip=0x00261011 rev=0x05 hdr=0x01
none0@pci0:11:0:	class=0x080400 card=0xa2f80e11 chip=0xa0f70e11 rev=0x11 hdr=0x00
none1@pci0:12:0:	class=0x088000 card=0xb0f30e11 chip=0xa0f00e11 rev=0x00 hdr=0x00
none2@pci0:13:0:	class=0x03 card=0x47561002 chip=0x47561002 rev=0x7a hdr=0x00
ida0@pci0:14:0: class=0x010400 card=0x40400e11 chip=0x00101000 rev=0x02 hdr=0x00
isab0@pci0:15:0:	class=0x060100 card=0x02001166 chip=0x02001166 rev=0x4d hdr=0x00
none3@pci0:20:0:	class=0x05 card=0x chip=0x1117118c rev=0x05 hdr=0x00
none4@pci0:20:1:	class=0x05 card=0x chip=0x1117118c rev=0x05 hdr=0x00
chip0@pci0:25:0:	class=0x06 card=0x chip=0x60100e11 rev=0x01 hdr=0x00
chip1@pci0:26:0:	class=0x06 card=0x chip=0x60100e11 rev=0x01 hdr=0x00
chip2@pci0:27:0:	class=0x06 card=0x chip=0x60100e11 rev=0x01 hdr=0x00
fxp0@pci1:4:0:  class=0x02 card=0xb0dd0e11 chip=0x12298086 rev=0x05 hdr=0x00
fxp1@pci1:5:0:  class=0x02 card=0xb0dd0e11 chip=0x12298086 rev=0x05 hdr=0x00


Here's the output from scanpci (a little program that I wrote for XFree86):

pci bus 0x0 cardnum 0x01 function 0x: vendor 0x1011 device 0x0026
 Digital  Device unknown

pci bus 0x0 cardnum 0x0b function 0x: vendor 0x0e11 device 0xa0f7
 Compaq  Device unknown

pci bus 0x0 cardnum 0x0c function 0x: vendor 0x0e11 device 0xa0f0
 Compaq  Device unknown

pci bus 0x0 cardnum 0x0d function 0x: vendor 0x1002 device 0x4756
 ATI Mach64 GV

pci bus 0x0 cardnum 0x0e function 0x: vendor 0x1000 device 0x0010
 NCR  Device unknown

pci bus 0x0 cardnum 0x0f function 0x: vendor 0x1166 device 0x0200
 Device unknown

pci bus 0x0 cardnum 0x14 function 0x: vendor 0x118c device 0x1117
 Device unknown

pci bus 0x0 cardnum 0x14 function 0x0001: vendor 0x118c device 0x1117
 Device unknown

pci bus 0x0 cardnum 0x19 function 0x: vendor 0x0e11 device 0x6010
 Compaq  Device unknown

pci bus 0x0 cardnum 0x1a function 0x: vendor 0x0e11 device 0x6010
 Compaq  Device unknown

pci bus 0x0 cardnum 0x1b function 0x: vendor 0x0e11 device 0x6010
 Compaq  Device unknown

pci bus 0x1 cardnum 0x04 function 0x: vendor 0x8086 device 0x1229
 Intel 82557/8/9 10/100MBit network controller

pci bus 0x1 cardnum 0x05 function 0x: vendor 0x8086 device 0x1229
 Intel 82557/8/9 10/100MBit network controller

pci bus 0x5 cardnum 0x01 function 0x: vendor 0x1011 device 0x0026
 Digital  Device unknown

pci bus 0x5 cardnum 0x0b function 0x: vendor 0x0e11 device 0xa0f7
 Compaq  Device unknown

pci bus 0x6 cardnum 0x04 function 0x: vendor 0x8086 device 0x1229
 Intel 82557/8/9 10/100MBit network controller

pci bus 0x6 cardnum 0x05 function 0x: vendor 0x8086 device 0x1229
 Intel 82557/8/9 10/100MBit network controller

pci bus 0xd cardnum 0x0b function 0x: vendor 0x0e11 device 0xa0f7
 Compaq  Device unknown


Note that the XFree86 3.3.6 version of scanpci stopped at bus 3 but
this new version scans the complete bus.

Here's the output from mptable (while running GENERIC):



===

MPTable, version 2.0.15

---

MP Floating Pointer Structure:

  location: BIOS
  physical address: 0x000f4fd0
  signature:'_MP_'
  length:   16 bytes
  version:  1.4
  checksum: 0x18
  mode: Virtual Wire

---

MP Config Table Header:

  physical address: 0x000ff485
  signature:'PCMP'
  base table length:668
  version:  1.4
  checksum: 0x28
  OEM ID:

Re: ata-disk ioctl and atactl patch

2001-02-26 Thread Scott Renfro

On Mon, Feb 26, 2001 at 08:28:56AM +0100, Soren Schmidt wrote:
 
 No, it's not safe at all, you risk trashing an already running command...

Thanks for the feedback; that's exactly what I was concerned about.

 Anyhow, I have an atacontrol thingy in the works for attach/detach,
 raid control etc, etc, I'll try to merge this functionality into that
 (the ioctl's will change etc, but the functionality is nice)...

Great.

thanks again,
-scott

-- 
Scott Renfro [EMAIL PROTECTED]  +1 650 906 9618




Re: Inheriting the nodump flag

2001-02-26 Thread Robert Watson


I won't have a chance to look at the patch below until later this week,
but had two comments--

1) This method of handling recursive nodump is far superior to any actual
   inheritance of the flag as part of file system operations, as currently
   no other file flags are inherited from the parent directory -- the only
   property that is inherited is the group.  With ACLs, the parent's
   default ACL will also play a role in the new access ACL.  In any case,
   there is no precedent for file flag inheritance.

2) Please run the patch by freebsd-audit -- there have been a fair number
   of vulnerabilities in the fts code in the past due to race conditions
   of various sorts, and it's important that any modifications be
   carefully scrutinized to prevent the reintroduction of vulnerabilities.

However, the general idea sounds very useful, and something that I'd find
applicable on a daily basis :-).

Robert N M Watson FreeBSD Core Team, TrustedBSD Project
[EMAIL PROTECTED]  NAI Labs, Safeport Network Services

On Mon, 26 Feb 2001, Dima Dorfman wrote:

 Hello -hackers
 
 Some time ago, on -arch, phk proposed that the nodump flag should be
 inherited (see 'inheriting the "nodump" flag ?' around Dec. 2000).
 This was generally considered a good idea, however, the patch to the
 kernel he proposed was thought an ugly hack.  In addition, jeroen
 pointed out that NetBSD had implemented this functionality the Right
 Way(tm), in dump(8).
 
 Attached below is a port of NetBSD's patch to FreeBSD's dump(8).
 dump's tree walker is a little weird, so the patch is a little more
 complicated than calling fts_set with FTS_SKIP.  For the technical
 details of what it does, see:
 http://lists.openresources.com/NetBSD/tech-kern/msg00453.html.
 
 I've been using this on two of my hosts for a while, and it works as
 expected.  Given the additional fact that NetBSD has had this for
 almost two years, and that the patch below looks very similar to the
 one they applied, I doubt it significantly breaks anything.
 
 Comments?
 
 Thanks in advance
 
   Dima Dorfman
   [EMAIL PROTECTED]
 
 
 Index: traverse.c
 ===================================================================
 RCS file: /st/src/FreeBSD/src/sbin/dump/traverse.c,v
 retrieving revision 1.11
 diff -u -r1.11 traverse.c
 --- traverse.c	2000/04/14 06:14:59	1.11
 +++ traverse.c	2001/02/20 01:39:06
 @@ -74,9 +74,11 @@
  typedef	long fsizeT;
  #endif
  
 -static	int dirindir __P((ino_t ino, daddr_t blkno, int level, long *size));
 +static	int dirindir __P((ino_t ino, daddr_t blkno, int level, long *size,
 +    long *tapesize, int nodump));
  static	void dmpindir __P((ino_t ino, daddr_t blk, int level, fsizeT *size));
 -static	int searchdir __P((ino_t ino, daddr_t blkno, long size, long filesize));
 +static	int searchdir __P((ino_t ino, daddr_t blkno, long size, long filesize,
 +    long *tapesize, int nodump));
  
  /*
   * This is an estimation of the number of TP_BSIZE blocks in the file.
 @@ -152,10 +154,14 @@
  		dp = getino(ino);
  		if ((mode = (dp->di_mode & IFMT)) == 0)
  			continue;
 -		SETINO(ino, usedinomap);
 +		/*
 +		 * All dirs go in dumpdirmap; only inodes that are to
 +		 * be dumped go in usedinomap and dumpinomap, however.
 +		 */
  		if (mode == IFDIR)
  			SETINO(ino, dumpdirmap);
  		if (WANTTODUMP(dp)) {
 +			SETINO(ino, usedinomap);
  			SETINO(ino, dumpinomap);
  			if (mode != IFREG && mode != IFDIR && mode != IFLNK)
  				*tapesize += 1;
 @@ -192,9 +198,10 @@
  	long *tapesize;
  {
  	register struct dinode *dp;
 -	register int i, isdir;
 +	register int i, isdir, nodump;
  	register char *map;
  	register ino_t ino;
 +	struct dinode di;
  	long filesize;
  	int ret, change = 0;
  
 @@ -204,24 +211,34 @@
  			isdir = *map++;
  		else
  			isdir >>= 1;
 -		if ((isdir & 1) == 0 || TSTINO(ino, dumpinomap))
 +		/*
 +		 * If a directory has been removed from usedinomap, it
 +		 * either has the nodump flag set, or has inherited
 +		 * it.  Although a directory can't be in dumpinomap if
 +		 * it isn't in usedinomap, we have to go through it to
 +		 * propagate the nodump flag.
 +		 */
 +		nodump = (TSTINO(ino, usedinomap) == 0);
 +		if ((isdir & 1) == 0 || (TSTINO(ino, dumpinomap) && !nodump))
  			continue;
  		dp = getino(ino);
 -		filesize = dp->di_size;
 +		di = *dp;	/* inode buf may change in searchdir(). */
 +		filesize = di.di_size;
  		for 

Re: Setting memory allocators for library functions.

2001-02-26 Thread Rik van Riel

On Sat, 24 Feb 2001, Peter Seebach wrote:
 In message 9820.983050024@critter, Poul-Henning Kamp writes:
 I think there is a language thing you don't understand here.

 No, I just disagree.  It is useful for the OS to provide a hook for
 memory which is *known to work* - and that is the environment C specifies.

Send patches.

Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml

Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/   http://distro.conectiva.com/





Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], 
Rik van Riel writes:
 No, I just disagree.  It is useful for the OS to provide a hook for
 memory which is *known to work* - and that is the environment C specifies.

Send patches.

I may some day.  It's not very high on my priority list; I'd probably try to
fix UVM first.  :)

-s




Re: ata-disk ioctl and atactl patch

2001-02-26 Thread Stephen Rose

A couple of us on the questions list have asked for a way to spin down ide
disks when idle.  Is there any chance that this utility could lead to
something useful there?

Steve Rose


It seems Scott Renfro wrote:
 As I promised on -mobile earlier this week, I've cleaned up my patches
 to port the {Net,Open}BSD atactl utility, including a simplistic
 ata-disk ioctl.  They apply cleanly against this afternoon's -stable
 (including Soren's latest commit bringing -stable up to date with
 -current).  I've been running them for some time and they ''work great
 here''.

 Before announcing this in a broader context, I wanted to get a bit of
 feedback on the ioctl implementation.  In particular, is it safe to
 just do an ata_command inside adioctl() without any further checking?
 (e.g., can this cause bad things to happen under heavy i/o load?)

No, it's not safe at all, you risk trashing an already running command...

Anyhow, I have an atacontrol thingy in the works for attach/detach,
raid control etc, etc, I'll try to merge this functionality into that
(the ioctl's will change etc, but the functionality is nice)...

-Søren


---






Re: Setting memory allocators for library functions.

2001-02-26 Thread Nate Williams

[ Memory overcommit ]

 One important way to gain confidence that your little box won't
 silently crash at the worst possible time for the customer is to
 be able to *prove* to yourself that it can't happen, given certain
 assumptions. Those assumptions usually include things like "the
 hardware is working properly" (e.g., no ECC errors) and "the compiler
 compiled my C code correctly".
 
 Given these basic assumptions, you go through and check that you've
 properly handled every possible case of input (malicious or otherwise)
 from the outside world. Part of the "proof" is verifying that you've
 checked all of your malloc(3) return values for NULL.. and assuming
 that if malloc(3) returns != NULL, then the memory is really there.
 
 Now, if malloc can return NULL and the memory *not* really be there,
 ^^^
I assume you meant 'can't' here, right?

 there is simply no way to prove that your code is not going to crash.

Even in this case, there's no way to prove your code is not going to
crash.

The kernel has bugs, your software will have bugs (unless you've proved
that it doesn't, and doing so on any significant piece of software will
probably take longer to do than the amount of time you've spent writing
and debugging it).

And what's to say that your correctly working software won't go bad
right in the middle of your program running?

There is no such thing as 100% fool-proof.

 This memory overcommit thing is the only case that I can think of
 where this happens, given the basic assumptions of correctly
 functioning hardware, etc. That is why it's especially annoying to
 (some) people.

If you need 99.999% fool-proof, memory-overcommit can be one of the
many classes of problems that bite you.  However, in embedded systems,
most folks design the system with particular software in mind.
Therefore, you know ahead of time how much memory should be used, and
can plan for how much memory is needed (overcommit or not) in your
hardware design.  (We're doing this right now in our 3rd generation
product at work.)

If the amount of memory is unknown (because of changing load conditions,
and/or lack-of-experience with newer hardware), then overcommit *can*
allow you to actually run 'better' than a non-overcommit system, though
it doesn't necessarily give you the same kind of predictability when you
'hit the wall' like a non-overcommit system will do.

Our embedded OS doesn't do memory-overcommit, but sometimes I wish it
did, because it would give us some things for free.  However, *IF* it
did, we'd need some sort of mechanism (i.e., AIX's SIGDANGER) to warn
that memory was getting tight, so the application could start dumping
unused memory, or at least have an idea that something bad was happening
so it could attempt to clean up before it got whacked. :)



Nate




Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], Nate Williams writes
:
Even in this case, there's no way to prove your code is not going to
crash.

Sure.  But you can at least prove that all crashes are the result of bugs,
not merely design "features".  From the point of view of proving that a
system is designed correctly, memory overcommit is a "bug".  ;)

-s




Re: Setting memory allocators for library functions.

2001-02-26 Thread Nate Williams

 Even in this case, there's no way to prove your code is not going to
 crash.
 
 Sure.  But you can at least prove that all crashes are the result of bugs,
 not merely design "features".

'Proving' something is correct is left as an exercise for the folks who
have way too much time on their hands.  At my previous job (SRI), we had
folks who worked full-time trying to prove algorithms.

In general, proving out simple algorithms takes months, when the
algorithm itself took 1-2 hours to design and write.

Another thing is that crashes may have occurred because of invalid
input, invalid output, valid but not expected input, etc...

Again, memory overcommit is only *one* class of bugs that is avoided.
The phrase "can't see the forest for the trees" jumps to mind. :)




Nate




Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], Nate Williams writes
:
Again, memory overcommit is only *one* class of bugs that is avoided.
The phrase "can't see the forest for the trees" jumps to mind. :)

Sure, and likewise, bugs in libc are only one *class* of bugs we can avoid,
but that doesn't mean we don't fix them.

-s




Re: Setting memory allocators for library functions.

2001-02-26 Thread Matt Dillon


:Matt Dillon wrote:
: 
:..
: the system runs out of memory, even *with* overcommit protection.
: In fact, all sorts of side effects occur even when the system
:...
:
:That's an assumption.

Ha.  Right.  Go through any piece of significant code and just see how
much goes flying out the window because the code wants to simply assume
things work.  Then try coding conditionals all the way through to fix
it... and don't forget you need to propagate the error condition back
up the procedure chain too so the original caller knows why it failed.
:
: There ain't no magic bullet here, folks.  By the time you get a memory
: failure, even with no overcommit, it is far too late to save the day.
:
:Scientific computation. If, at one point, no more memory can be
:allocated, you back off, save the present results, and try again later.

This is irrelevant to the conversation.  It has nothing to do with
overcommit in the context it is being discussed.

:
:You assume too much. Quite a few of the more efficient garbage
:collection algorithms depend on knowing when the memory has become full.

   This has nothing to do with overcommit in the context it is being
   discussed.  In fact, this has nothing to do with OS memory management
   at all -- all garbage collected languages have their own infrastructure
   to determine when memory pressure requires collecting.

:You keep allocating, and when malloc() returns NULL, *then* you run the
:garbage collector, free some space, and try again. If malloc() doesn't
:work, quite a few very efficient garbage collection algorithms become
:impossible to implement.

First, that's bullshit.  Most garbage collection implementations
require the memory 'size' to be hardwired.  Second, that's bullshit
because any program relying on that sort of operation had damn well
better have its datasize limit set to something reasonable, or the
garbage collector is going to run the system out of swap before it
decides it needs to do a run through.

:Just because *you* don't see how one thing can work, it doesn't mean it
:can't work. As I have trivially shown above.

You haven't shown anything above.  Garbage collectors do not work that
way and you really should know it.

:Honestly, I think non-overcommit is a mistake and your approach is much
:better, but it's not the only approach and there _are_ valid approaches
:that depend on not overcommitting, and I really hate having to defend
:non-overcommit against such bogus arguments.

You've completely ignored the point that overcommit has nothing whatsoever
to do with memory pressure.  You are assuming that overcommit is some
sort of magic bullet that will solve the memory pressure handling problem,
and it is nothing of the sort.

-Matt

:Daniel C. Sobral   (8-DCS)
:[EMAIL PROTECTED]





Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Dufault

 
 I still think that in such a case it should be possible to
 'test the commitment' by touching all the allocated memory 
 while trapping page faults, and fault all your memory from
 'potential' to 'allocated'.  As someone said, it is not certain which
 program fails when you run out of swap, but I think you might be able
 to somehow change this behaviour to "the program making the request"
 fails (or gets a fault).
 
 You could allocate your memory.
 trap faults.
 touch all of the allocated memory.
 if it faults, you can remap some file to that location to allow the 
 instruction to continue.. continue and abort the check..
 exit as needed, OR continue with secure knowledge that
 all your memory is there.
 Alternatively you could allocate your own on-disk swapspace for
 a program by telling malloc to use a file for all your memory needs.

I think the right way to implement this would be to define a POSIX P1003.1

"mlockall(MCL_CURRENT|MCL_FUTURE)"

along with a physical memory limit resource.  I think this could
be defined to give the requested malloc performance.  This is 
defined to be not inherited so you'd still need to modify your
program.  I absolutely agree this is a can of worms, but this would
be a way to proceed.
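A minimal sketch of that call, assuming only the standard mlockall(2)
interface; wire_all_memory is an illustrative wrapper, and the error path
matters because the call commonly fails against RLIMIT_MEMLOCK or without
privilege:

```c
#include <sys/mman.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * Wire the process's current and future pages, so that once an
 * allocation has succeeded and been touched, it can't be backed
 * out from under the program.  Failure is reported, not fatal.
 */
int
wire_all_memory(void)
{
	if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
		/* EPERM/ENOMEM: no privilege, or over RLIMIT_MEMLOCK. */
		fprintf(stderr, "mlockall: %s\n", strerror(errno));
		return (-1);
	}
	return (0);
}
```

Whether the memory-limit resource Peter proposes alongside it would make
this practical for ordinary processes is exactly the can of worms he
mentions.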

Peter

--
Peter Dufault ([EMAIL PROTECTED])   Realtime development, Machine control,
HD Associates, Inc.   Fail-Safe systems, Agency approval




Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], Matt Dillon writes:
   This has nothing to do with overcommit in the context it is being
   discussed.  In fact, this has nothing to do with OS memory management
   at all -- all garbage collected languages have their own infrastructure
   to determine when memory pressure requires collecting.

I think we were talking about C, and there are reasonable GC implementations
for C... that assume that they can detect an out-of-memory condition because
malloc returns a null pointer.
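Schematically, such a collector's allocator looks like this; gc_malloc and
the collect callback are illustrative, not taken from any particular GC:

```c
#include <stdlib.h>

/*
 * Allocation discipline that treats a NULL from malloc(3) as the
 * collection trigger: reclaim garbage, then retry once.  Under
 * overcommit, malloc may never return NULL, so the trigger never
 * fires -- which is the complaint being made here.
 */
void *
gc_malloc(size_t n, void (*collect)(void))
{
	void *p;

	if ((p = malloc(n)) != NULL)
		return (p);
	if (collect != NULL)
		collect();	/* reclaim garbage, then retry */
	return (malloc(n));	/* NULL here means genuinely full */
}
```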

First, that's bullshit.  Most garbage collection implementations
require the memory 'size' to be hardwired.  Second, that's bullshit
because any program relying on that sort of operation had damn well
better have its datasize limit set to something reasonable, or the
garbage collector is going to run the system out of swap before it
decides it needs to do a run through.

Fair enough, other circumstances might *also* cause a scan... but the fact
remains that a GC system will want to know when it's out of memory, and
will want some kind of warning other than a segfault.

You've completely ignored the point that overcommit has nothing whatsoever
to do with memory pressure.  You are assuming that overcommit is some
sort of magic bullet that will solve the memory pressure handling problem,
and it is nothing of the sort.

No one has said it solves all the problems.  It solves *one specific* problem;
the problem that you don't get *ANY* warning, or *ANY* chance to do anything,
if you actually run out of available memory.  Even if it's a transient failure
that will go away in five minutes.  Even if all you need to do is an fwrite
and an fclose.

-s




802.1q vlan patches for if_fxp

2001-02-26 Thread Nick Sayer


A colleague of mine has reported the necessity of adding this patch to 
if_fxp.c to support 802.1q vlans. I must admit that I am not familiar 
enough with the vlan code to understand what good this does. This patch 
is against 4.1-RELEASE. If it is a good thing, I would like to add it to 
-current and MFC it back in time for 4.3. If it isn't, I'd like to tell 
my friend why. Thanks in advance.



*** pci/if_fxp.c.orig   Wed Jul 19 09:36:36 2000
--- pci/if_fxp.cTue Aug  8 23:18:37 2000
***
*** 52,57 
--- 52,65 
  
  #include <net/bpf.h>
  
+ #include "vlan.h"
+ #if NVLAN > 0
+ #include <net/if_types.h>
+ #include <net/if_arp.h>
+ #include <net/ethernet.h>
+ #include <net/if_vlan_var.h>
+ #endif
+ 
  #if defined(__NetBSD__)
  
  #include <sys/ioctl.h>
***
*** 417,422 
--- 425,433 
ether_ifattach(ifp, enaddr);
bpfattach(&sc->sc_ethercom.ec_if.if_bpf, ifp, DLT_EN10MB,
sizeof(struct ether_header));
+ #if NVLAN > 0
+   ifp->if_data.ifi_hdrlen = sizeof(struct ether_vlan_header);
+ #endif
  
/*
 * Add shutdown hook so that DMA is disabled prior to reboot. Not
***
*** 599,604 
--- 610,618 
 * Attach the interface.
 */
ether_ifattach(ifp, ETHER_BPF_SUPPORTED);
+ #if NVLAN > 0
+   ifp->if_data.ifi_hdrlen = sizeof(struct ether_vlan_header);
+ #endif
/*
 * Let the system queue as many packets as we have available
 * TX descriptors.
*** modules/fxp/Makefile.orig   Fri Jan 28 05:26:29 2000
--- modules/fxp/MakefileTue Aug  8 23:28:25 2000
***
*** 2,7 
  
  .PATH:${.CURDIR}/../../pci
  KMOD  = if_fxp
! SRCS  = if_fxp.c opt_bdg.h device_if.h bus_if.h pci_if.h
  
  .include <bsd.kmod.mk>
--- 2,11 
  
  .PATH:${.CURDIR}/../../pci
  KMOD  = if_fxp
! SRCS  = if_fxp.c opt_bdg.h vlan.h device_if.h bus_if.h pci_if.h
! CLEANFILES= vlan.h
! 
! vlan.h:
!   touch vlan.h
  
  .include <bsd.kmod.mk>





Re: Setting memory allocators for library functions.

2001-02-26 Thread Daniel C. Sobral

Peter Seebach wrote:
 
 It's odd that I see lots of people arguing for segfaults killing the process
 accessing memory that has been "successfully" allocated, but no one arguing
 for the process getting killed when it exceeds a disk quota.

Disk quota is an artificial limit. If you recall each and every other
time this discussion came up, you *can* set artificial memory limits,
and that won't cause applications to be killed. But, of course, this
particular solution you do not accept.

Anyway, these are two very different situations, and comparing them is
silly.

If you want non-overcommit, code it and send the patches.

-- 
Daniel C. Sobral(8-DCS)
[EMAIL PROTECTED]
[EMAIL PROTECTED]
[EMAIL PROTECTED]

Equestrian artistry is finished. But the most-heard Brazilian excuse in
Sydney is that there are no more dumb horses out there.




Re: Setting memory allocators for library functions.

2001-02-26 Thread Daniel C. Sobral

Matt Dillon wrote:
 
 :Matt Dillon wrote:
 :
 :..
 : the system runs out of memory, even *with* overcommit protection.
 : In fact, all sorts of side effects occur even when the system
 :...
 :
 :That's an assumption.
 
 Ha.  Right.  Go through any piece of significant code and just see how
 much goes flying out the window because the code wants to simply assume
 things work.  Then try coding conditionals all the way through to fix
 it... and don't forget you need to propagate the error condition back
 up the procedure chain too so the original caller knows why it failed.

<sarcasm>Perhaps you should re-acquaint yourself with exception
handlers, as you seem to have forgotten them since you last worked with
your C compiler. You know, the kind of thing where you can put a
longjmp() in a proxy malloc(), and keep a setjmp() at the appropriate
functional level so that if _any_ allocation fails down that path the
whole function is cancelled?</sarcasm>
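The proxy-allocator pattern described above can be sketched as follows; xmalloc(), build_report(), and run_guarded() are illustrative names, not anything from a real codebase:

```c
#include <setjmp.h>
#include <stdlib.h>
#include <string.h>

static jmp_buf alloc_failed;

/* Proxy allocator: a failed malloc() unwinds straight back to the
 * recovery point instead of hand-propagating NULL up the call chain. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL)
        longjmp(alloc_failed, 1);
    return p;
}

/* Deeply nested work that never checks for NULL itself. */
static char *build_report(void)
{
    char *buf = xmalloc(64);
    strcpy(buf, "report body");
    return buf;
}

/* Returns the report, or NULL if any allocation on the path failed.
 * Caveat raised elsewhere in this thread: anything allocated between
 * setjmp() and the longjmp() leaks unless you track it yourself. */
static char *run_guarded(void)
{
    if (setjmp(alloc_failed) != 0)
        return NULL;            /* whole operation cancelled */
    return build_report();
}
```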

 :
 : There ain't no magic bullet here, folks.  By the time you get a memory
 : failure, even with no overcommit, it is far too late to save the day.
 :
 :Scientific computation. If, at one point, no more memory can be
 :allocated, you back off, save the present results, and try again later.
 
 This is irrelevant to the conversation.  It has nothing to do with
 overcommit in the context it is being discussed.

It has everything to do with overcommit in the context it is being
discussed. I might be mistaken on exactly who, but I think it was Peter
Seebach himself who once complained about his scientific application
dying suddenly under out-of-memory conditions on FreeBSD, though
working perfectly on Solaris. Said application allocated memory if
possible, and, if not, saved temporary results and went to do something
else until more memory became available.

Now, you say "By the time you get a memory failure, even with no
overcommit, it is far too late to save the day". This example shows
that, no, you are wrong, it is not far too late. It is possible, at this
point, to deal with the lack of more memory.

I'll give you one more example: protocol validation. It is often
impossible to test all possible permutations of a protocol's dialog,
but being able to go as deep as possible on the execution tree and
then, when you run out of memory, give up on that path, back down, and
continue elsewhere lets you get a partial validation, which is not
enough to prove a protocol is correct but might well be enough to prove
it is incorrect. This is a real application, and one in which an
out-of-memory condition is not only handled but even expected.

 :You assume too much. Quite a few of the more efficient garbage
 :collection algorithms depend on knowing when the memory has become full.
 
This has nothing to do with overcommit in the context it is being
discussed.  In fact, this has nothing to do with OS memory management
at all -- all garbage collected languages have their own infrastructure
to determine when memory pressure requires collecting.

And, of course, those whose infrastructure depends on a malloc()
returning NULL indicating the heap is full will not work on FreeBSD.
(<sarcasm>You do recall that many of these languages are written in C,
don't you?</sarcasm>)

It has everything to do with overcommit. In this particular case, not
only _is_ there something to do when the out-of-memory condition
arises, but the very algorithm depends on it arising.

And this is for some algorithms. Other algorithms require unbounded
space for the collection but can handle out of memory conditions if they
arise.

 :You keep allocating, and when malloc() returns NULL, *then* you run the
 :garbage collector, free some space, and try again. If malloc() doesn't
 :work, quite a few very efficient garbage collection algorithms become
 :impossible to implement.
 
 First, that's bullshit.  Most garbage collection implementations
 require the memory 'size' to be hardwired.  Second, that's bullshit

Garbage Collection: Algorithms for Automatic Memory Management, Richard
Jones and Rafael Lins. Bullshit is what you just said.

 because any program relying on that sort of operation had damn well
 better have its datasize limit set to something reasonable, or the
 garbage collector is going to run the system out of swap before it
 decides it needs to do a run through.

Yes, indeed that happens, and dealing with swapped-out objects is one
topic of study in GC algorithms. For some applications, it would result
in thrashing. For others, most of the swapped-out pages consist
entirely of garbage.

 :Just because *you* don't see how one thing can work, it doesn't mean it
 :can't work. As I have trivially shown above.
 
 You haven't shown anything above.  Garbage collectors do not work that
 way and you really should know it.

You are obviously acquainted with an all too small subset of garbage
collection algorithms.

 :Honestly, I 

Re: Setting memory allocators for library functions.

2001-02-26 Thread Matt Dillon


: :...
: :
: :That's an assumption.
: 
: Ha.  Right.  Go through any piece of significant code and just see how
: much goes flying out the window because the code wants to simply assume
: things work.  Then try coding conditionals all the way through to fix
: it... and don't forget you need to propagate the error condition back
: up the procedure chain too so the original caller knows why it failed.
:
:<sarcasm>Perhaps you should re-acquaint yourself with exception
:handlers, as you seem to have forgotten them since you last worked with
:your C compiler. You know, the kind of thing where you can put a
:longjmp() in a proxy malloc(), and keep a setjmp() at the appropriate
:functional level so that if _any_ allocation fails down that path the
:whole function is cancelled?</sarcasm>

   <sarcasm>... just try *proving* that sort of code.  Good luck!  I'm
   trying to imagine how to QA code that tries to deal with memory failures
   at any point and uses longjmp(), and I'm failing.</sarcasm>

:It has everything to do with overcommit in the context it is being
:discussed. I might be mistaken on exactly who, but I think it was Peter
:Seebach himself who once complained about his scientific application
:dieing suddenly under out of memory conditions on FreeBSD, though
:working perfectly on Solaris. Said application allocated memory if
:possible, and, if not, saved temporary results and went do something
:else until more memory became available.

Said application was poorly written, then.  Even on solaris if you
actually run the system out of memory you can blow up other unrelated
processes.  To depend on that sort of operation is just plain dumb.

At *best*, even on solaris, you can set the datasize limit for the
process and try to manage memory that way.  It is still a bad idea
though because there's a chance that any libc call you make inside
your core code, even something as innocuous as a printf(), may fail.

The proper way to manage memory with this sort of application is to
specify a hard limit in the program itself, and have the program
keep track of its own usage and save itself off if it hits the 
hard limit.  Alternatively you can monitor system load and install
a signal handler to cause the program to save itself off and exit if
the load gets too high... something that will occur long before
the system actually runs out of memory.
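The program-enforced hard limit described above can be sketched like this; budget_malloc(), save_and_trim(), and the 64 MB figure are all illustrative assumptions:

```c
#include <stdlib.h>

/* Program-enforced memory budget (the 64 MB figure is arbitrary).
 * The application tracks its own usage and checkpoints *itself*
 * long before the system feels any real memory pressure. */
#define MEM_BUDGET ((size_t)64 * 1024 * 1024)

static size_t mem_used;
static int checkpoints;

/* Placeholder: a real program would write partial results to disk
 * and release its working set here. */
static void save_and_trim(void)
{
    checkpoints++;
    mem_used = 0;
}

/* Allocate against the self-imposed budget, checkpointing first if
 * the request would push us over it. */
static void *budget_malloc(size_t n)
{
    if (mem_used + n > MEM_BUDGET)
        save_and_trim();
    void *p = malloc(n);
    if (p != NULL)
        mem_used += n;
    return p;
}
```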

:I'll give you one more example. Protocol validation. It is often
:impossible to test all possible permutations of a protocol's dialog, but
:being able to go as deep as possible on execution tree and then, when
:you go out of memory, giving up on that path, backing down and
:continuing elsewhere let you get a partial validation, which is not
:enough to prove a protocol is correct but might well be enough to prove
:it is incorrect. This is a real application, and one in which an out of
:memory condition is not only handled but even expected.

This has nothing to do with memory overcommit.  Nothing at all.  What
is your definition of out-of-memory?  When swap runs out, or when the
system starts to thrash?  What is the point of running a scientific
calculation if the machine turns into a sludge pile and would otherwise
cause the calculation to take years to complete instead of days?

You've got a whole lot more issues to deal with than simple memory
overcommit, and you are ignoring them completely.

:And, of course, those whose infrastructure depends on a malloc()
:returning NULL indicating the heap is full will not work on FreeBSD.
:(<sarcasm>You do recall that many of these languages are written in C,
:don't you?</sarcasm>)

Bullshit.  If you care, a simple wrapper will do what you want.  Modern
systems tend to have huge amounts of swap.  Depending on malloc to
fail with unbounded resources in an overcommit OR a non-overcommit case
is stupid, because the system will be thrashing heavily long before it
even gets to that point.

Depending on malloc() to fail by setting an appropriate datasize limit
resource is more reasonable, and malloc() does work as expected if you
do that.
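A sketch of the two pieces Matt mentions, a datasize limit plus the "simple wrapper": collect_garbage() here is a stub standing in for a real runtime's collector, and exactly what RLIMIT_DATA bounds varies between systems (on FreeBSD it governs brk-style heap growth):

```c
#include <stdlib.h>
#include <sys/resource.h>

/* Cap the data size so malloc() fails by returning NULL well before
 * the machine starts thrashing. */
static int set_data_limit(rlim_t bytes)
{
    struct rlimit rl = { .rlim_cur = bytes, .rlim_max = bytes };
    return setrlimit(RLIMIT_DATA, &rl);
}

/* Stub collector for this sketch; a real runtime supplies its own. */
static void collect_garbage(void)
{
}

/* The wrapper: collect on the first failure, then retry once. */
static void *gc_malloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL) {
        collect_garbage();
        p = malloc(n);
    }
    return p;
}
```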

:It has everything to do with overcommit. In this particular case, not
:only there _is_ something to do when the out of memory condition arise,
:but the very algorithm depends on it arising.

It has nothing to do with overcommit.  You are confusing overcommit
with hard-datasize limits, which can be set with a simple 'limit'
command.

:
:Garbage Collection: Algorithms for Automatic Memory Management, Richard
:Jones and Rafael Lins. Bullshit is what you just said.

None of which requires overcommit.  None of which would actually
work in a real-world situation with or without overcommit if you do
not hard-limit the memory resource for the program in the first place.

You are again making the mistake of assuming that not having overcommit
will magically solve 

Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], "Daniel C. Sobral" writes:
Anyway, these are two very different situations, and comparing them is
silly.

They are situations in which an application can be killed and has no way
to detect that it is about to do something wrong, and in which there *was*
a correct way to notify the application of impending doom.

Once we accept the argument "the C standard doesn't say this doesn't cause
a segmentation fault", we can apply it to everything.

-s




Re: Setting memory allocators for library functions.

2001-02-26 Thread Rik van Riel

On Sun, 25 Feb 2001, Matt Dillon wrote:

 The problem is a whole lot more complex than you think.  Dealing with
 overcommit is not simply counting mapped pages, there are all sorts
 of issues involved.  But the biggest gotcha is that putting in
 overcommit protection will not actually save your system from
 dying a terrible death.  It in fact makes it *more* likely that the
 system will die a horrible death,

Indeed, but since a lot of the non-overcommit fans will
not believe this, why not let them find out by themselves
when they write a patch for it?

And maybe, just maybe, they'll succeed in getting their
idea of non-overcommit working with a patch which doesn't
change dozens of places in the kernel and doesn't add
any measurable overhead.

(a while ago we had our yearly discussion on linux-kernel
about this, after some time somebody showed up with code
to do overcommit ... and, of course, the conclusion that
it wouldn't work since he got to understand the problem
better while writing the code ;))

regards,

Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml

Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/   http://distro.conectiva.com/





mmap() w/ MAP_STACK question

2001-02-26 Thread E.B. Dreger

Greetings,

I'm interested in using mmap() with MAP_STACK.  After writing a couple of
test programs and looking at vm_map_insert() and vm_map_stack(), it
appears that vm_map_stack() behaves as if MAP_FIXED were set.

Why is this?  I would like to allocate stack space without having to
search for available blocks.

Is there any harm in modifying vm_map_stack() to search for a free block,
a la vm_map_insert()?  I've not delved extensively into the kernel, and am
asking before I tinker in new territory. :-)


TIA,
Eddy

---

Brotsman  Dreger, Inc.
EverQuick Internet / EternalCommerce Division

E-Mail: [EMAIL PROTECTED]
Phone: (316) 794-8922

---





Re: Setting memory allocators for library functions.

2001-02-26 Thread Rik van Riel

On Tue, 27 Feb 2001, Daniel C. Sobral wrote:
 Matt Dillon wrote:
  :Matt Dillon wrote:
  :
  :..
  : the system runs out of memory, even *with* overcommit protection.
  : In fact, all sorts of side effects occur even when the system
  :...
  :
  :That's an assumption.
 
  Ha.  Right.  Go through any piece of significant code and just see how
  much goes flying out the window because the code wants to simply assume
  things work.  Then try coding conditionals all the way through to fix
  it... and don't forget you need to propogate the error condition back
  up the procedure chain too so the original caller knows why it failed.

 sarcasmPerhaps you should re-acquaint yourself with exception
 handlers,

And just where are you going to grow the cache when the
exception handler runs off the edge of the current stack
page ?

Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml

Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/   http://distro.conectiva.com/





Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], 
And maybe, just maybe, they'll succeed in getting their
idea of non-overcommit working with a patch which doesn't
change dozens of places in the kernel and doesn't add
any measurable overhead.

If it adds overhead, fine, make it a kernel option.  :)

Anyway, no, I'm not going to contribute code right now.  If I get time
to do this at all, I'll probably do it to UVM first.

My main objection was to the claim that the C standard allows random
segfaults.  It doesn't.  And yes, bad hardware is a conformance violation.  :)

-s




Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Marc W



hi!

i can never really tell if this alias is for discussions concerning
development OF the FreeBSD OS or development ON the FreeBSD OS (or
both), but I figure i'll risk the wrath of the anti-social and ask a
coupla programming questions :-)


is mkdir(3) guaranteed to be atomic?  Thus, if I have two processes with
a possible race condition, is mkdir(3) guaranteed to only work on one of
them and return EEXIST on the other?  Are there filesystem type cases
where this might not be the case (NFS being my main concern)?
thanks!

marc.
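The race described here is the classic mkdir-as-lock idiom; a minimal sketch (try_lock() and unlock() are illustrative names), with the usual caveat that NFS weakens the guarantee:

```c
#include <errno.h>
#include <sys/stat.h>
#include <unistd.h>

/* mkdir-based lock.  On a local filesystem exactly one racing caller
 * gets 0 from mkdir(2); every loser gets -1 with errno == EEXIST. */
static int try_lock(const char *dir)
{
    if (mkdir(dir, 0700) == 0)
        return 1;               /* we created it: lock acquired */
    if (errno == EEXIST)
        return 0;               /* someone else holds the lock */
    return -1;                  /* unrelated error (permissions, ...) */
}

/* Release the lock by removing the directory. */
static int unlock(const char *dir)
{
    return rmdir(dir);
}
```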








Why does a named pipe (FIFO) give me my data twice ???

2001-02-26 Thread Marc W


hello!

I've got a program that creates a named pipe, and then spawns a thread
which sits in a loop:

// error checking snipped.
//
while (1) {
    int fifo = open(fifoPath, O_RDONLY);  // this blocks
    fprintf(stderr, "somebody opened the other end!\n");
    read(fifo, buf, sizeof(buf));
    fprintf(stderr, "got the following data: %s\n", buf);
    close(fifo);
}

I then have another instance of the same program do the following:

fifo = open(fifoPath, O_WRONLY);
write(fifo, buf, strlen(buf));

Now, the problem is that the first process suddenly succeeds in the
open, reads the data, closes the fifo, and IMMEDIATELY SUCCEEDS THE
open(2) again, reading in the same data.  After that, all is as
expected.

Note that this doesn't happen ALL the time -- only about 80% of the
time.

Any idea why this would happen?

Thanks!

marc.
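One likely explanation (an educated guess from the symptoms, not verified against the kernel source): if the writer still has its end open at the instant of the re-open, open() returns immediately; the subsequent read() then hits EOF, returns 0, and the loop prints whatever stale bytes were left in buf from the previous round. Checking read()'s return value makes this visible; drain() below is an illustrative helper that reads until EOF and NUL-terminates:

```c
#include <unistd.h>

/* Read from fd until EOF, NUL-terminating the accumulated data.
 * Returns the total byte count, or -1 on error.  A return of 0 from
 * read() means the writer closed its end -- never print buf without
 * checking, or you will re-print the previous message. */
static ssize_t drain(int fd, char *buf, size_t cap)
{
    ssize_t total = 0, n;
    while ((n = read(fd, buf + total, cap - 1 - total)) > 0)
        total += n;
    if (n < 0)
        return -1;
    buf[total] = '\0';
    return total;
}
```

In the reader loop, call drain() after each successful open(), and only print when it returns a positive count.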






apache

2001-02-26 Thread Dan Phoenix




[Mon Feb 26 13:04:34 2001] [error] (54)Connection reset by
peer: getsockname
[Mon Feb 26 13:04:39 2001] [emerg] (9)Bad file
descriptor: flock: LOCK_EX: Error getting accept lock. Exiting!
[Mon Feb 26 13:04:39 2001] [alert] Child 777 returned a Fatal error... 
Apache is exiting!
httpd in free(): warning: page is already free.


anyone seen this before?
i do have some things on nfs apache accesses..


Dan

+--+ 
|   BRAVENET WEB SERVICES  |
|  [EMAIL PROTECTED]|
| make installworld|
| ln -s /var/qmail/bin/sendmail /usr/sbin/sendmail |
| ln -s /var/qmail/bin/newaliases /usr/sbin/newaliases |
+__+






Re: apache

2001-02-26 Thread Dan Phoenix




httpd in free(): warning: recursive call.
httpd in free(): warning: recursive call.
httpd in free(): warning: recursive call.
httpd in free(): warning: recursive call.
httpd in free(): warning: recursive call.
httpd in free(): warning: recursive call.
httpd in free(): warning: recursive call.

Seeing that on 2 webservers that have the highest CPU load,
yet others are configured exactly the same with really low CPU load.



On Mon, 26 Feb 2001, Dan Phoenix wrote:

 Date: Mon, 26 Feb 2001 13:07:40 -0800 (PST)
 From: Dan Phoenix [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Subject: apache
 
 
 
 
 [Mon Feb 26 13:04:34 2001] [error] (54)Connection reset by
 peer: getsockname
 [Mon Feb 26 13:04:39 2001] [emerg] (9)Bad file
 descriptor: flock: LOCK_EX: Error getting accept lock. Exiting!
 [Mon Feb 26 13:04:39 2001] [alert] Child 777 returned a Fatal error... 
 Apache is exiting!
 httpd in free(): warning: page is already free.
 
 
 anyone seen this before?
 i do have some things on nfs apache accesses..
 
 
 Dan
 
 +--+ 
 |   BRAVENET WEB SERVICES  |
 |  [EMAIL PROTECTED]|
 | make installworld|
 | ln -s /var/qmail/bin/sendmail /usr/sbin/sendmail |
 | ln -s /var/qmail/bin/newaliases /usr/sbin/newaliases |
 +__+
 
 
 





Re: [hackers] Re: Large MFS on NFS-swap?

2001-02-26 Thread David Gilbert

 "Matt" == Matt Dillon [EMAIL PROTECTED] writes:

[... my newfs bomb deleted ...]

Matt Heh heh.  Yes, newfs has some overflows inside it when you
Matt get that big.  Also, you'll probably run out of swap just
Matt newfs'ing the metadata, you need to use a larger block size,
Matt large -c value, and a large bytes/inode (-i) value.  But then,
Matt of course, you are likely to run out of swap trying to write out
Matt a large file even if you do manage to newfs it.

Matt I had a set of patches for newfs a year or two ago but never
Matt incorporated them.  We'd have to do a run-through on newfs to
Matt get it to newfs a swap-backed (i.e. 4K/sector) 1TB filesystem.

Matt Actually, this brings up a good point.  Drive storage is
Matt beginning to reach the limitations of FFS and our internal (512
Matt byte/block) block numbering scheme.  IBM is almost certain to
Matt come out with their 500GB hard drive sometime this year.  We
Matt should probably do a bit of cleanup work to make sure that we
Matt can at least handle FFS's theoretical limitations for real.

That and the availability of vinum and other raid solutions.  You can
always make multiple partitions for no good reason (other than
filesystem limitations), but we were planning to put together a 1TB
filesystem next month.  From what you're telling me, I'd need larger
block sizes to make this work?

IMHO, we might reconsider that.  With SAN-type designs, you're
probably going to find the distribution of filesizes on
multi-terrabyte filesystems that are shared by 100's of computers to
be roughly the same as the filesize distributions on today's
filesystems.

Making the run for larger block sizes puts us in the same league as
DOS.  While it will stave off the wolves, it will only work for so
long given Moore's law.

Dave.

-- 

|David Gilbert, Velocet Communications.   | Two things can only be |
|Mail:   [EMAIL PROTECTED] |  equal if and only if they |
|http://www.velocet.net/~dgilbert |   are precisely opposite.  |
=GLO




Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Drew Eckhardt

In message [EMAIL PROTECTED], [EMAIL PROTECTED] writes
:
is mkdir(3) guaranteed to be atomic?  

Yes.

Are there filesystem type cases where this might not be the case 
(NFS being my main concern )

No.





Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Nate Williams

 is mkdir(3) guaranteed to be atomic?  
 
 Yes.
 
 Are there filesystem type cases where this might not be the case 
 (NFS being my main concern )
 
 No.

Yes.  NFS doesn't guarantee atomicity, because it can't.  If the mkdir
call returns, you have no guarantee that the remote directory has been
created (caching, errors, etc...)




Nate




Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Marc W


 -
 From:  Nate Williams [EMAIL PROTECTED]

  Are there filesystem type cases where this might not be the case 
  (NFS being my main concern )
  
  No.
 
 Yes.  NFS doesn't guarantee atomicity, because it can't.  If the mkdir
 call returns, you have no guarantee that the remote directory has been
 created (caching, errors, etc...)

I can handle it if there is a case where both fail, but is there a
case where both can SUCCEED ?? 

marc.





Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Nate Williams

   Are there filesystem type cases where this might not be the case 
   (NFS being my main concern )
   
   No.
  
  Yes.  NFS doesn't guarantee atomicity, because it can't.  If the mkdir
  call returns, you have no guarantee that the remote directory has been
  created (caching, errors, etc...)
 
 I can handle it if there is a case where both fail, but is there a
 case where both can SUCCEED ?? 

What do you mean 'both succeed'?


Nate




Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Mike Smith

 In message [EMAIL PROTECTED], [EMAIL PROTECTED] writes
 :
 is mkdir(3) guaranteed to be atomic?  
 
 Yes.

Um.  mkdir(2) is atomic.  Note that mkdir(1) with the -p argument is 
*not* atomic.

 Are there filesystem type cases where this might not be the case 
 (NFS being my main concern )
 
 No.

How would it *not* be atomic?


-- 
... every activity meets with opposition, everyone who acts has his
rivals and unfortunately opponents also.  But not because people want
to be opponents, rather because the tasks and relationships force
people to take different points of view.  [Dr. Fritz Todt]
   V I C T O R Y   N O T   V E N G E A N C E






Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], Mike Smith writes:
How would it *not* be atomic?

Well, imagine a hypothetical broken system in which two simultaneous calls
to mkdir, on some hypothetical broken filesystem, can each think that it
"succeeded".  After all, at the end of the operation, the directory has
been created, so who's to say they're wrong?  ;)

-s




Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Marc W



  
  I can handle it if there is a case where both fail, but is there a
  case where both can SUCCEED ??
 
 What do you mean 'both succeed'?

My understanding is that, on non-broken filesystems, calls to
mkdir(2) either succeed by creating a new directory, or fail and return
EEXIST (note: excluding all other types of errors :-))

However, NFS seems to have issues, so the question is: could both
mkdir(2) calls actually succeed and claim to have created the same
directory (even if it already exists?), or is one ALWAYS guaranteed to
fail, as on a normal fs.

marc.






Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Nate Williams

   I can handle it if there is a case where both fail, but is there a
   case where both can SUCCEED ??
  
  What do you mean 'both succeed'?
 
 My understanding is that, on non-broken filesystems, calls to
 mkdir(2) either succeed by creating a new directory, or fail and return
 EEXIST (note: excluding all other types of errors :-))
 
 However, NFS seems to have issues, so the question is:  could both
 mkdir(2) calls actually succeed and claim to have created the same
 directory (even if it is?), or is one ALWAYS guaranteed to fail, as on
 a normal fs.

You're implying that you are making two calls to create the same
directory.  Am I correct?

The answer is 'maybe'?  Depends on the remote NFS server.  Matt or one
of the other NFS gurus may know more, but I wouldn't count on *anything*
over NFS.  If you need atomicity, you need lockd, which isn't
implemented on FreeBSD.


Nate




Re: ata-disk ioctl and atactl patch

2001-02-26 Thread Stephen Rose

Well, for me it's the noise and heat that I'm trying to minimize.  That
and we're out of power here in California.  :-)

Steve Rose


On Mon, 26 Feb 2001, Soren Schmidt wrote:

 It seems Stephen Rose wrote:
  A couple of us on the questions list have asked for a way to spin down ide
  disks when idle.  Is there any chance that this utility could lead to
  something useful there?
 
 Well, of course it could, but I'm not sure I see the usefulness of
 the spindown at all; a spinup costs several units of idleness power,
 so you have to keep it spun down for long periods to make it worth
 the effort, and you wear significantly more on the mechanics this
 way too...
 
  It seems Scott Renfro wrote:
   As I promised on -mobile earlier this week, I've cleaned up my patches
   to port the {Net,Open}BSD atactl utility, including a simplistic
   ata-disk ioctl.  They apply cleanly against this afternoon's -stable
   (including Soren's latest commit bringing -stable up to date with
   -current).  I've been running them for some time and they ''work great
   here''.
  
   Before announcing this in a broader context, I wanted to get a bit of
   feedback on the ioctl implementation.  In particular, is it safe to
   just do an ata_command inside adioctl() without any further checking?
   (e.g., can this cause bad things to happen under heavy i/o load?)
  
  No, it's not safe at all, you risk trashing an already running command...
  
  Anyhow, I have an atacontrol thingy in the works for attach/detach,
  raid control etc, etc, I'll try to merge this functionality into that
  (the ioctl's will change etc, but the functionality is nice)...
  
  -Søren
  
  
 
 
 -Søren
 


To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message



[hackers] Re: Setting memory allocators for library functions.

2001-02-26 Thread David Gilbert

 "Daniel" == Daniel C Sobral [EMAIL PROTECTED] writes:

Daniel Dag-Erling Smorgrav wrote:
Daniel  None of these solutions are portable, however;
Daniel Well, no, but the sole available definition of "portable" says
Daniel that it is "portable" to assume that all the memory malloc can
Daniel return is really available.

 Show me a modern OS (excluding real-time and/or embedded OSes) that
 makes this guarantee.

Daniel Solaris and AIX (on AIX this is optional on a global or
Daniel per-application level).

IIRC, Digital-UNIX or OSF-1 ... or whatever it's called now.  I seem
to remember the first Alphas that arrived to a company I worked for
had this set globally in the OS by default.  Due to the bloat of the
OS and Motif and other such things, they required simply amazing
amounts of swap just to run.

Dave.

-- 

|David Gilbert, Velocet Communications.   | Two things can only be |
|Mail:   [EMAIL PROTECTED] |  equal if and only if they |
|http://www.velocet.net/~dgilbert |   are precisely opposite.  |




Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Mike Smith

 In message [EMAIL PROTECTED], Mike Smith writes:
 How would it *not* be atomic?
 
 Well, imagine a hypothetical broken system in which two simultaneous calls
 to mkdir, on some hypothetical broken filesystem, can each think that it
 "succeeded".  After all, at the end of the operation, the directory has
 been created, so who's to say they're wrong?  ;)

Is this somehow related to memory overcommit?

-- 
... every activity meets with opposition, everyone who acts has his
rivals and unfortunately opponents also.  But not because people want
to be opponents, rather because the tasks and relationships force
people to take different points of view.  [Dr. Fritz Todt]
   V I C T O R Y   N O T   V E N G E A N C E






Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Lyndon Nerenberg

 "Peter" == Peter Seebach [EMAIL PROTECTED] writes:

Peter Well, imagine a hypothetical broken system in which two
Peter simultaneous calls to mkdir, on some hypothetical broken
Peter filesystem, can each think that it "succeeded".  After all,
Peter at the end of the operation, the directory has been
Peter created, so who's to say they're wrong?  ;)

"They" are :-) What if the two processes issuing mkdir() have a
different effective [ug]id or umask? I.e. if I get success back I'm
going to assume I own the directory, which has a 1/n chance of being
wrong for n processes with unique uids racing a non-atomic mkdir()
call over (say) NFS.

--lyndon




Re: Setting memory allocators for library functions.

2001-02-26 Thread Rik van Riel

On Mon, 26 Feb 2001, Peter Seebach wrote:
 In message [EMAIL PROTECTED],
 And maybe, just maybe, they'll succeed in getting their
 idea of non-overcommit working with a patch which doesn't
 change dozens of places in the kernel and doesn't add
 any measurable overhead.

 If it adds overhead, fine, make it a kernel option.  :)

 Anyway, no, I'm not going to contribute code right now.  If I get time
 to do this at all, I'll probably do it to UVM first.

 My main objection was to the claim that the C standard allows
 random segfaults.  It doesn't.  And yes, bad hardware is a
 conformance violation.  :)

I don't think a failed kernel-level allocation after overcommit
should generate a segfault.

IMHO it should send a bus error (or a sigkill if the process
doesn't exit after the SIGBUS).

Rationale:
SIGSEGV for _user_ mistakes (process accesses wrong stuff)
SIGBUS for _system_ errors  (ECC error, kernel messes up, ...)

cheers,

Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml

Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/   http://distro.conectiva.com/





Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], 
Rik van Riel writes:
I don't think a failed kernel-level allocation after overcommit
should generate a segfault.

IMHO it should send a bus error (or a sigkill if the process
doesn't exit after the SIGBUS).

Same difference, so far as the language is concerned.

Rationale:
SIGSEGV for _user_ mistakes (process accesses wrong stuff)
SIGBUS for _system_ errors  (ECC error, kernel messes up, ...)

As long as we grant that it's the kernel *messing up*, I won't complain;
no one said an implementation could be perfect, and known bugs go with the
territory.  I only object to attempts to portray it as a legitimate and
correct implementation of the C spec.

-s




Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], 
Rik van Riel writes:
Rationale:
SIGSEGV for _user_ mistakes (process accesses wrong stuff)
SIGBUS for _system_ errors  (ECC error, kernel messes up, ...)

Actually, this is not canonically the distinction made.  On a Unix PC,
{
int *a, c[2];
char *b;
a = c;
b = a;
++b;
a = b;
*a = 0;
}
would get SIGBUS, because it was a bus error.  The error is not a segmentation
fault; the memory written to is all legitimately available to the process.  It
is a bus error, because the data access is not possible on the bus.  :)

I think "the memory you thought you had actually doesn't exist anywhere" is
more like a segmentation fault than a bus error.

-s




Re: Is mkdir guaranteed to be 'atomic' ??

2001-02-26 Thread Matt Dillon


(owner-freebsd-hackers removed from list)

:You're implying that you are making two calls to create the same
:directory.  Am I correct?
:
:The answer is 'maybe'?  Depends on the remote NFS server.  Matt or one
:of the other NFS gurus may know more, but I wouldn't count on *anything*
:over NFS.  If you need atomicity, you need lockd, which isn't
:implemented on FreeBSD.
:
:Nate

There are a couple of issues here.  First, I'm fairly sure that the
NFS server side implementation of mkdir() does *not* guarantee that
the operation will be synced to permanent storage on the server side
before returning.  So if the NFS server crashes and reboots, the
just-created directory could disappear.

Second, in regards to several clients trying to mkdir() at the same
time: mkdir() does not use the same semantics as an O_EXCL file
create.  This means that *normally* only one of the two clients
will succeed in the mkdir() call.  However, under certain circumstances
(e.g. a server reboot or possibly packet loss / NFS retry) it is possible
for the directory to be created yet for *both* client's mkdir() calls
to *FAIL* (server reboot), or for *both* client's mkdir() calls to
succeed (if one or both clients had packet loss and had to retry the
request).

Under NFSv3, the only thing that truly guarantees proper operation
in regards to competing clients is an O_EXCL file open().  An O_EXCL
file open() is guaranteed to succeed on precisely one client when
multiple clients are trying to create the same file, and NFSv3 O_EXCL
semantics guarantee that NFS retries and server reboots will still
result in proper operation (the client doing the retry or the client
that succeeded in the open() call will still see a 'success' even if
the server reboots or if an NFS retry occurs).

-Matt





Re: Converting Perforce to CVS

2001-02-26 Thread John Wilson

No, there isn't.  They have a CVS to Perforce script, not Perforce to CVS
(please don't ask who in their right mind would want to go from Perforce to
CVS :)

Well, if Anton doesn't respond, I guess I'll just have to write one
myself...

John



On Sun, 25 Feb 2001 22:00:31 -0600, Michael C . Wu wrote:

  On Sun, Feb 25, 2001 at 09:17:38AM -0800, John Wilson scribbled:
  | If you still have the Perforce-CVS conversion script, I would be very
  | grateful if you could e-mail it to me.
  
  Such a script is available for download on www.perforce.com.
  
  | On Tue, 9 Jan 2001 09:13:39 +0100, Anton Berezin wrote:
  |   On Sat, Jan 06, 2001 at 03:06:20PM -0800, John Wilson wrote:
  |    I apologize in advance, as this is not strictly a FreeBSD-related
  |    question, but I know that a lot of FreeBSD'ers use CVS as well as
  |    Perforce, so here goes...
  |
  |    What is the easiest way to convert a P4 source repository to CVS,
  |    while preserving revisions, history, log messages, etc?  Both
  |    systems seem to use RCS, but is it as simple as copying the files?
  |    Are there any caveats?
  |
  |   I have one script, but it does not handle branches (the project I
  |   was converting did not have any).  I can mail it to you if you
  |   want.  The branch handling should be rather similar to binary files
  |   handling, which the script already performs.
  
  -- 
  +--+
  | [EMAIL PROTECTED] | [EMAIL PROTECTED] |
  | http://peorth.iteration.net/~keichii | Yes, BSD is a conspiracy. |
  +--+











Re: apache

2001-02-26 Thread Matt Dillon

:
:httpd in free(): warning: recursive call.
:httpd in free(): warning: recursive call.
:httpd in free(): warning: recursive call.
:httpd in free(): warning: recursive call.
:httpd in free(): warning: recursive call.
:httpd in free(): warning: recursive call.
:httpd in free(): warning: recursive call.
:
:seeing that on 2 webservers that have highest cpu
:yet others are exactly the same with really low cpu's

This occurs if a signal occurs in the middle of a free() and
the signal handler itself tries to do a free().  It is extremely
illegal to do this... apache needs to be fixed.

-Matt





Re: ata-disk ioctl and atactl patch

2001-02-26 Thread Jon Hamilton

Soren Schmidt [EMAIL PROTECTED], said on Mon Feb 26, 2001 [09:10:43 PM]:
} It seems Stephen Rose wrote:
}  A couple of us on the questions list have asked for a way to spin down ide
}  disks when idle.  Is there any chance that this utility could lead to
}  something useful there?
} 
} Well, of course it could, but I'm not sure I see the usefulness of
} the spindown at all, a spinup costs several units of idleness power,
} so you have to keep it spun down for long periods to make it worth
} the effort, and you wear significantly more on the mechanics this
} way too...

Others have posted from their experiments that they were able to maintain
a spun down state for upwards of an hour (admittedly, with some care and 
planning).  

There are also lots of odd uses where one might have long periods of 
inactivity, such as a laptop on which you log in to a "real machine" to
do some kind of work, and want to leave the display going without having to
listen to the disk (for whatever reason).  I often leave mine listening
for incoming messages for hours at a time, and appreciate being able to 
shut up the disk.  I'm using a different method to accomplish the goal, but
the desire that led me there is what's important for purposes of this 
discussion.  Are there other ways to accomplish the same thing?  Sure.  
But this desire isn't per se unreasonable on its face.  It's not a good fit
for lots of systems, but there are cases where it's desirable.  

-- 

   Jon Hamilton 
   [EMAIL PROTECTED]




Re: apache

2001-02-26 Thread Dan Phoenix



Exact same config on other webservers with really low CPU.
Memory checks out fine.  MySQL connection to the db checks out fine.
I/O is fine... running out of ideas.

tcpdump returns a lot of these

15:19:53.383597 elrond.sf.bravenet.com.shivahose 
drago.sf.bravenet.com.telnet: . ack 6212 win 17520 (DF) [tos 0x10] 
15:19:53.914743 0:30:80:28:dd:c7  1:80:c2:0:0:0 802.1d ui/C
 Unknown IPX Data: (43 bytes)
[000] 00 00 00 00 00 80 00 00  02 16 7A 67 80 00 00 00   ..zg
[010] 13 80 00 00 30 80 28 DD  C0 80 13 01 00 14 00 02  0.(. 
[020] 00 0F 00 00 00 00 00 00  00 00 00  ...
 len=43
   0080  0216 7a67 8000 
 1380  3080 28dd c080 1301 0014 0002
 000f     00

what is shivahose?

dmesg returns

icmp-response bandwidth limit 228/200 pps
icmp-response bandwidth limit 230/200 pps
pid 1146 (httpd), uid 506: exited on signal 10
pid 1207 (httpd), uid 506: exited on signal 10
icmp-response bandwidth limit 219/200 pps
icmp-response bandwidth limit 204/200 pps
icmp-response bandwidth limit 217/200 pps
icmp-response bandwidth limit 219/200 pps
icmp-response bandwidth limit 219/200 pps
icmp-response bandwidth limit 221/200 pps

I even recompiled his kernel optimized with 256 maxusers.
That did not help either.

Looking at the apache error log I see a lot of
[Mon Feb 26 15:12:50 2001] [error] (54)Connection reset by
peer: getsockname


So I guess I will ask about getsockname... I have experienced this before
when MaxClients was set too low.  That is not the case here... running out
of ideas.



On Mon, 26 Feb 2001, Matt Dillon wrote:

 Date: Mon, 26 Feb 2001 15:02:34 -0800 (PST)
 From: Matt Dillon [EMAIL PROTECTED]
 To: Dan Phoenix [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: apache
 
 :
 :httpd in free(): warning: recursive call.
 :httpd in free(): warning: recursive call.
 :httpd in free(): warning: recursive call.
 :httpd in free(): warning: recursive call.
 :httpd in free(): warning: recursive call.
 :httpd in free(): warning: recursive call.
 :httpd in free(): warning: recursive call.
 :
 :seeing that on 2 webservers that have highest cpu
 :yet others are exactly the same with really low cpu's
 
 This occurs if a signal occurs in the middle of a free() and
 the signal handler itself tries to do a free().  It is extremely
 illegal to do this... apache needs to be fixed.
 
   -Matt
 





Where can I find out rules on blocking in threads?

2001-02-26 Thread Marc W





hello!

I'm running into a problem with some threading using pthreads in an
application i'm writing for FreeBSD.

The application basically 

1. initializes some UI goo (but doesn't start any of it UP) using a
GUI framework (Qt)
2. creates a FIFO, and then spawns a thread
3. this new thread then does:

fifo = open(fifoPath, O_RDONLY);

4. after the new thread is spawned, the application is supposed to
then continue initialization, showing the main window and continuing on
happily.


Now, the problem is that when step 3 above blocks on the open(2)
call (as it should, since the other end of the pipe isn't opened yet),
the whole application is frozen, and the main thread can't continue
with GUI processing, and the app appears to die.

What is goofy is that this works just fine under Linux.  So,
FreeBSD has slightly different blocking rules or something -- but I
don't understand them.  It also hangs under Solaris 8/Intel.

So, the question is:  how can I find out what these differences are
and try to get around them.   I'm using this to limit instances of my
program to one, and need a named pipe instead of just a lock file so
that new instances can communicate any arguments they might have been
given, etc ...  


any suggestions?

thanks!

marc.






Re: apache

2001-02-26 Thread Doug White

On Mon, 26 Feb 2001, Dan Phoenix wrote:

 
 
 
 [Mon Feb 26 13:04:34 2001] [error] (54)Connection reset by
 peer: getsockname
 [Mon Feb 26 13:04:39 2001] [emerg] (9)Bad file
 descriptor: flock: LOCK_EX: Error getting accept lock. Exiting!
 [Mon Feb 26 13:04:39 2001] [alert] Child 777 returned a Fatal error... 
 Apache is exiting!
 httpd in free(): warning: page is already free.
 
 
 anyone seen this before?
 i do have some things on nfs apache accesses..

Don't put the scoreboard and lock files on NFS.  The Apache docs say this
is a No-No unless you change the locking type.

Doug White|  FreeBSD: The Power to Serve
[EMAIL PROTECTED] |  www.FreeBSD.org





Re: [hackers] Re: Setting memory allocators for library functions.

2001-02-26 Thread Peter Seebach

In message [EMAIL PROTECTED], Tony Finch writes:
fork() with big data segments that cause swap to be reserved in case
of a copy-on-write. The 2GB of swap is never actually used, but you
still have to have it.

That's a good point.  So, we should warn people that asking for memory
commitments, having huge data spaces, and forking is dangerous or stupid.

:)

-s




Re: apache

2001-02-26 Thread Dan Phoenix



I did not specify a Lock directive in httpd.conf.
By default my httpd is in /usr/local/apache;
I would assume the lock file is going there, which is an IDE drive.




 Date: Mon, 26 Feb 2001 17:57:03 -0800 (PST)
 From: Doug White [EMAIL PROTECTED]
 To: Dan Phoenix [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: apache
 
 On Mon, 26 Feb 2001, Dan Phoenix wrote:
 
  
  
  
  [Mon Feb 26 13:04:34 2001] [error] (54)Connection reset by
  peer: getsockname
  [Mon Feb 26 13:04:39 2001] [emerg] (9)Bad file
  descriptor: flock: LOCK_EX: Error getting accept lock. Exiting!
  [Mon Feb 26 13:04:39 2001] [alert] Child 777 returned a Fatal error... 
  Apache is exiting!
  httpd in free(): warning: page is already free.
  
  
  anyone seen this before?
  i do have some things on nfs apache accesses..
 
 Don't put the scoreboard and lock files on NFS.  The Apache docs say this
 is a No-No unless you change the locking type.
 
 Doug White|  FreeBSD: The Power to Serve
 [EMAIL PROTECTED] |  www.FreeBSD.org
 





Re: Why does a named pipe (FIFO) give me my data twice ???

2001-02-26 Thread Dima Dorfman

If the first program calls open(2) before the second one calls
close(2), the former will not block because there's already a writer on
the pipe.  A possible workaround would be to unlink and recreate the
fifo in program one, like so:

for (;;) {
        int fifo;

        mkfifo(fifo_path, fifo_mode);
        fifo = open(fifo_path, O_RDONLY);
        read(fifo, ...);
        printf(...);
        close(fifo);
        unlink(fifo_path);
}

As for why this actually happens, I don't know.  None of my Stevens
books explain this.

Hope this helps

Dima Dorfman
[EMAIL PROTECTED]
 
 hello!
 
 I've got a program that creates a named pipe, and then spawns a
 thread
 which sits in a loop:
 
 // error checking snipped.
 //
 while (1) {
 int fifo = open(fifoPath, O_RDONLY);  // this blocks
 fprintf(stderr, "somebody opened the other end!\n");
 read(fifo, buf, sizeof(buf));
 fprintf(stderr, "got the following data: %s\n", buf);
 close(fifo);
 }
 
 i then have another instance of the same program do the following:
 
 fifo = open(fifoPath, O_WRONLY);
 write(fifo, buf, strlen(buf));
 
 
 now, the problem is that the first process succeeds suddenly for the
 open, reads the data, closes the fifo, and IMMEDIATELY SUCCEEDS THE
 open(2) again, reading in the same data.  After that, all is as
 expected.

 Note that this doesn't happen ALL the time -- only about 80% of the
 time.
 
 Any idea why this would happen?




Re: apache

2001-02-26 Thread Lars Eggert

Dan Phoenix wrote:
 
 httpd in free(): warning: recursive call.

What FreeBSD/apache versions is this with? I've seen the same on
FreeBSD-3.4 and an older apache build from ports. Haven't (yet) seen it
under 4.2 and the latest apache from ports.
-- 
Lars Eggert [EMAIL PROTECTED] Information Sciences Institute
http://www.isi.edu/larse/    University of Southern California


Re: Setting memory allocators for library functions.

2001-02-26 Thread Arun Sharma

On 26 Feb 2001 18:56:18 +0100, Matt Dillon [EMAIL PROTECTED] wrote:
 Ha.  Right.  Go through any piece of significant code and just see how
 much goes flying out the window because the code wants to simply assume
 things work.  Then try coding conditionals all the way through to fix
 it... and don't forget you need to propagate the error condition back
 up the procedure chain too so the original caller knows why it failed.

So, it all comes down to reimplementing the UNIX kernel in a language
that supports exceptions, just like Linus suggested :) 

-Arun




Re: ThinkNIC booting FreeBSD? [WAS: Re: Silent FreeBSD]

2001-02-26 Thread Wes Peters

Chris Shenton wrote:
 
 In message [EMAIL PROTECTED] Wes Peters writes:
 
  We have several NIC's around here (the New Internet Computer, see
  http://www.thinknic.com/ for details) and will be adding a couple of these
  so we can boot FreeBSD or NetBSD on them in the next little while.  A NIC
  running FreeBSD on a silent CF disk strikes me as an ideal bedroom computer;
  you can leave it on all the time and just let the screen sleep when you're
  not using it.
 
 Been thinking seriously about buying a couple of these; hard to beat
 the $200 price point. Chat on one of the NIC lists indicates they had
 two fans and will now ship with three fans; this seems like it will
 make it rather noisy -- especially since they're diskless and should
 be quiet.

They are quiet.  I didn't notice any fans in the ones we disassembled at
work.  The Geode processor runs QUITE hot, though.  They make good hand-
warmers when it's snowing outside.

 Since I prefer FreeBSD to Linux, I'd rather run BSD than the Linux on
 the CD it runs from.  Is it possible to create an ISO of a bootable
 and runnable FreeBSD? What happens with stuff like /tmp and /var/log?

You wouldn't want it; performance running off the CD-ROM is terrible.
Get a 2.5" hard drive and stick it to the case top with double-sticky
tape.

 Failing this, I'd probably net-boot the NICs off a bigger FreeBSD
 machine, and NFS mount /home dirs and /usr/local type of software.

That would probably give you better performance than the CD-ROM.  The
NIC has a small flash disk in it as well; you could probably put the
boot loader and enough /boot filesystem on that to autoboot.

 That way when I built a tool or package it would be available to any
 of the FreeBSD boxes in the house. Seems like a great bang/buck
 ratio. Any comments on this approach?

Sounds good to me.  Maybe I should setup a netboot server at work and
have a hack at it.  We've got a couple of IDE-CompactFlash adapters
I could play with without scrogging the NIC-standard flash device.

-- 
"Where am I, and what am I doing in this handbasket?"

Wes Peters Softweyr LLC
[EMAIL PROTECTED]   http://softweyr.com/




Re: Inheriting the nodump flag

2001-02-26 Thread Dima Dorfman

 1) This method of handling recursive nodump is far superior to any actual
   inheritance of the flag as part of file system operations, as currently
no other file flags are inherited from the parent directory -- the only
property that is inherited is the group.  With ACLs, the parent's
default ACL will also play a role in the new access ACL.  In any case,
   there is no precedent for file flag inheritance.

I'm not sure if this is supposed to be a confirmation, but, just to
clear things up, the patch doesn't cause the nodump flag to be
inherited in the filesystem per se.  That is, after running dump with
this patch applied, you won't see entire trees of files marked nodump
that weren't marked before.  The flag is inherited only in terms of
dump's internal maps; perhaps "propagated" would be a better word to
describe its behavior.

 
 2) Please run the patch by freebsd-audit -- there have been a fair number
of vulnerabilities in the fts code in the past due to race conditions
of various sorts, and it's important that any modifications be
   carefully scrutinized to prevent the reintroduction of vulnerabilities.

dump doesn't use fts; I used calling fts_set as an example because in
a program that uses fts, pruning a directory and everything under it
is a matter of one library call.  In dump's case, it's not that
simple.

Nevertheless (or should that be consequently?), your point is well
taken.  I will send this to -audit in a few days barring any
objections here.

Thanks again

Dima Dorfman
[EMAIL PROTECTED]


 
 However, the general idea sounds very useful, and something that I'd find
 applicable on a daily basis :-).
 
 Robert N M Watson FreeBSD Core Team, TrustedBSD Project
 [EMAIL PROTECTED]  NAI Labs, Safeport Network Services
 
 On Mon, 26 Feb 2001, Dima Dorfman wrote:
 
  Hello -hackers
  
  Some time ago, on -arch, phk proposed that the nodump flag should be
  inherited (see 'inheriting the "nodump" flag ?' around Dec. 2000).
  This was generally considered a good idea, however, the patch to the
  kernel he proposed was thought an ugly hack.  In addition, jeroen
  pointed out that NetBSD had implemented this functionality the Right
  Way(tm), in dump(8).
  
  Attached below is a port of NetBSD's patch to FreeBSD's dump(8).
  dump's tree walker is a little weird, so the patch is a little more
  complicated than calling fts_set with FTS_SKIP.  For the technical
  details of what it does, see:
  http://lists.openresources.com/NetBSD/tech-kern/msg00453.html.
  
  I've been using this on two of my hosts for a while, and it works as
  expected.  Given the additional fact that NetBSD has had this for
  almost two years, and that the patch below looks very similar to the
  one they applied, I doubt it significantly breaks anything.
  
  Comments?
  
  Thanks in advance
  
  Dima Dorfman
  [EMAIL PROTECTED]
  
  
  [ Patch snipped to conserve bandwidth of those who have to pay for
  it; it's available at
  http://www.unixfreak.org/~dima/home/nodump.diff or the mailing
  list archives if you're interested. ]
