Matthew Dillon wrote:
:Hi,
:
:I wrote a script that generates 1 million files in one directory on the
:hammer filesystem. The first 100k are created very quickly, then it
:starts to get less predictable. It stops completely after creating 836k
:files. I can still ping the machine, but I can't ssh into it any more.
:It's a headless system, so I can't tell exactly what is going on.
:
:I'm using the attached C file like this:
:
:   cr 1000000 test
:
:Regards,
:
:   Michael
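:
:The attachment itself is not reproduced here, but a minimal sketch of
:such a test might look like the following (the function name
:create_files and the zero-padded naming scheme are assumptions, not
:the original code):
:
:```c
:/* cr.c -- hypothetical sketch of a "create N files" stress test.
: * Usage: cr <count> <basename>
: * Creates <count> empty files named <basename>0000000, <basename>0000001, ...
: * in the current directory. */
:#include <stdio.h>
:#include <stdlib.h>
:#include <fcntl.h>
:#include <unistd.h>
:
:/* Create `count` empty files; returns the number actually created. */
:long create_files(long count, const char *base)
:{
:    char name[1024];
:    long i;
:    int fd;
:
:    for (i = 0; i < count; ++i) {
:        snprintf(name, sizeof(name), "%s%07ld", base, i);
:        fd = open(name, O_CREAT | O_WRONLY, 0644);
:        if (fd < 0) {
:            perror(name);
:            break;          /* stop at the first failure */
:        }
:        close(fd);
:    }
:    return i;
:}
:
:int main(int argc, char **argv)
:{
:    long count;
:
:    if (argc != 3) {
:        fprintf(stderr, "usage: %s count basename\n", argv[0]);
:        return 1;
:    }
:    count = strtol(argv[1], NULL, 10);
:    return create_files(count, argv[2]) == count ? 0 : 1;
:}
:```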

     Oooh, nice test.  Mine got to 389000 before it deadlocked in
     the buffer cache.

     I'll be able to get this fixed today.  It looks like a simple
     issue.

Commit 61/A fixes the problem. But now, after creating around 3 million
files and doing a "cpdup /usr/pkgsrc /hammer/pkgsrc", running "make
head-src-cvsup" turned out to be extremely slow (actually it was the
"cvs update" part that was slow). Then I did a "rm -rf
/hammer/dir-with-a-million-files" and hammer finally died :) It probably
core dumped :(

I can try to reproduce it tomorrow with a display connected to it.

Another thing I noticed is that when there is a lot of file system
activity, other user processes are slowed down a lot (maybe they are
just blocked on a condition variable). At least that's how it feels.

Regards,

  Michael
