In article [EMAIL PROTECTED] you
wrote:
Thank you for not telling it to one of my servers which is running
around with about 10 concurrent connections biting its tail. I
wouldn't like to hurt its feelings. And I've got the feeling that it
will have to bear a bit more of that beating.
Rik van Riel wrote:
On Wed, 30 May 2001, Terry Lambert wrote:
The intent of the test is obviously intended to show
certain facts which we all know to be self-evident under
strange load conditions which are patently unreal.
I would suggest a better test would be to open _at least_
Albert D. Cahalan wrote:
This postmark test is useless self flagellation.
The benchmark tests what it was meant to test: performance
on huge directories.
Which is useless, since only degenerate software results
in huge directories.
I have yet to see one example of software which would
Rik van Riel wrote:
How about a real benchmark?
Good question indeed. All proposed benchmarks in this thread
have been geared heavily towards one system or the other and
are not at all industry standard benchmarks.
At www.spec.org I see SPECweb99 numbers for Solaris, AIX,
Linux,
Rik van Riel wrote:
Thank you for not telling it to one of my servers which is running
around with about 10 concurrent connections biting its tail. I
wouldn't like to hurt its feelings. And I've got the feeling that it
will have to bear a bit more of that beating.
Interesting,
Rik van Riel wrote:
On Wed, 30 May 2001, Terry Lambert wrote:
The intent of the test is obviously intended to show
certain facts which we all know to be self-evident under
strange load conditions which are patently unreal.
I would suggest a better test would be to open _at least_
In article [EMAIL PROTECTED] you
wrote:
On Wed, 30 May 2001, Terry Lambert wrote:
I would suggest a better test would be to open _at least_ 250,000
connections to a server
That would certainly qualify for the patently unreal part, but I don't
know what else you want to prove here.
Thank
On Wed, 30 May 2001, Albert D. Cahalan wrote:
I would suggest a better test would be to open _at least_
250,000 connections to a server running under both FreeBSD
and Linux. I was able to do this without breaking a sweat
on a correctly configured FreeBSD 4.3 system.
How about a real
On Thu, 31 May 2001, Noses wrote:
Thank you for not telling it to one of my servers which is running
around with about 10 concurrent connections biting its tail. I
wouldn't like to hurt its feelings. And I've got the feeling that it
will have to bear a bit more of that beating.
It seems Rik van Riel wrote:
At www.spec.org I see SPECweb99 numbers for Solaris, AIX,
Linux, Windows, Tru64, and HP-UX. FreeBSD must be hiding,
because I don't see it. BSDI, Walnut Creek, and WindRiver
all have failed to submit results.
Linux is still #1 for 1 to 4 processors. The
On Thu, 31 May 2001, Søren Schmidt wrote:
If somebody sends me the 800 US$ the software costs, or better
get me the software for free (we are a free OS right) I'll gladly
run it through a variety of machines here...
If you think this is the problem, I'll happily chip in $50;
it would be
This postmark test is useless self flagellation.
The intent of the test is obviously intended to show
certain facts which we all know to be self-evident under
strange load conditions which are patently unreal.
We already knew the limitations on putting many files
in a directory; the only useful
Dave Hayes wrote:
You can't make that assumption just yet (although it seems
reasonable). We really don't know exactly what the problem they are
trying to solve is. Network news sites running old versions of
software (as an example, I know someone who still runs CNEWS) have
very clear
Andrew Reilly wrote:
On Sat, May 26, 2001 at 07:25:16PM +1000, Andrew Reilly wrote:
One of my personal mail folders has 4400 messages in it, and
I've only been collecting that one for a few years. It's not
millions, but it's a few more than the 500 that I've seen some
discuss here as a
On Wed, 30 May 2001, Terry Lambert wrote:
The intent of the test is obviously intended to show
certain facts which we all know to be self-evident under
strange load conditions which are patently unreal.
I would suggest a better test would be to open _at least_
250,000 connections to a
This postmark test is useless self flagellation.
The benchmark tests what it was meant to test: performance
on huge directories.
The intent of the test is obviously intended to show
certain facts which we all know to be self-evident under
strange load conditions which are patently unreal.
On Sun, 27 May 2001 22:50:48 -0300 (BRST), Rik van Riel [EMAIL PROTECTED] wrote:
On Sat, 26 May 2001, Peter Wemm wrote:
Which is more expensive? Maintaining an on-disk hashed (or b+tree)
directory format for *everything* or maintaining a simple low-cost
format on disk with in-memory hashing
Andrew Reilly wrote:
It is quite conceivable that a performance tweak to the IMAP
server could involve a header cache in a relational database of
some sort, and that would certainly contain references to the
individual files, which would then be accessed randomly.
You might want to
On Sun, 27 May 2001, Doug Barton wrote:
Andrew Reilly wrote:
It is quite conceivable that a performance tweak to the IMAP
server could involve a header cache in a relational database of
some sort, and that would certainly contain references to the
individual files, which would then be
On Sat, 26 May 2001, Peter Wemm wrote:
Which is more expensive? Maintaining an on-disk hashed (or b+tree)
directory format for *everything* or maintaining a simple low-cost
format on disk with in-memory hashing for fast lookups?
I bet that for modest directory sizes the cost of disk IO
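Peter Wemm's trade-off above (a simple low-cost on-disk format plus in-memory hashing) can be sketched roughly as follows. This is an illustration of the idea only; the class and its layout are invented for the sketch and are not UFS's actual directory code:

```python
# Sketch: a linear "on-disk" directory (a plain list of entries, the way
# UFS stores them sequentially) fronted by an in-memory hash so repeated
# name lookups are O(1) instead of O(n) linear scans.

class Directory:
    def __init__(self):
        self.entries = []        # simple sequential on-disk format
        self._cache = None       # in-memory hash, built lazily

    def add(self, name, inode):
        self.entries.append((name, inode))   # cheap append on disk
        if self._cache is not None:
            self._cache[name] = inode        # keep the cache coherent

    def lookup(self, name):
        if self._cache is None:
            # One full scan builds the hash; every later lookup
            # avoids the linear search entirely.
            self._cache = dict(self.entries)
        return self._cache.get(name)

d = Directory()
for i in range(60000):
    d.add("file%d" % i, i)
print(d.lookup("file59999"))   # -> 59999
```

The on-disk format stays trivial to write and recover; only the first lookup after a cold start pays the full scan.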
On Fri, May 25, 2001 at 08:49:21PM +, Terry Lambert wrote:
There is _no_ performance problem with the existing implementation,
if you treat postgres as the existing implementation; it will do
what you want, quickly and effectively, for millions of record keys.
Does postgres make a good
On Sat, May 26, 2001 at 07:25:16PM +1000, Andrew Reilly wrote:
One of my personal mail folders has 4400 messages in it, and
I've only been collecting that one for a few years. It's not
millions, but it's a few more than the 500 that I've seen some
discuss here as a reasonable limit (why is
Andrew Reilly [EMAIL PROTECTED] writes:
Where in open(1) does it specify a limit on the number of files
permissible in a directory? The closest that it comes, that I can
see is:
Well, read(2) doesn't tell you not to do your IO one character at a
time, but that doesn't mean it's a good idea.
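The read(2) analogy is easy to demonstrate: both loops below recover the same bytes, but the one-character version issues a read per byte, which is exactly the "legal but unwise" usage being described. The scratch file here is a throwaway created just for the comparison:

```python
import os
import tempfile

# Create a small scratch file to read back two ways.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 16384)
os.close(fd)

# Legal but unwise: one read per byte (buffering=0 means each
# f.read(1) really is a separate system call).
slow = bytearray()
with open(path, "rb", buffering=0) as f:
    while True:
        c = f.read(1)
        if not c:
            break
        slow += c

# The sane version: one large read.
with open(path, "rb") as f:
    fast = f.read()

os.unlink(path)
assert bytes(slow) == fast   # same bytes, wildly different syscall counts
```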
Andrew Reilly writes:
On Fri, May 25, 2001 at 08:49:21PM +, Terry Lambert wrote:
There is _no_ performance problem with the existing implementation,
if you treat postgres as the existing implementation; it will do
what you want, quickly and effectively, for millions of record keys.
Andrew Reilly writes:
/usr/ports/distfiles on any of the mirrors probably contains
upwards of 5000 files too, and there is a strong likelihood that
these will be accessed out-of-order by ports-makefile-driven
fetch requests.
Oh!
You point out a good example!
0cicuta~(13)/bin/ls
[EMAIL PROTECTED] wrote:
Andrew Reilly writes:
/usr/ports/distfiles on any of the mirrors probably contains
upwards of 5000 files too, and there is a strong likelihood that
these will be accessed out-of-order by ports-makefile-driven
fetch requests.
Oh!
You point out a good example!
Jordan Hubbard [EMAIL PROTECTED] writes:
Erm, folks? Can anyone please tell me what this has to do with
freebsd-hackers any longer?
While the thread has diverged from its original intent, there is
something related I consider to be a more interesting topic. If it's
still not appropriate for
Greg Black writes:
Andresen,Jason R. wrote:
| On Thu, 24 May 2001, void wrote:
|
| On Wed, May 23, 2001 at 09:20:51AM -0400, Andresen,Jason R. wrote:
|
| Why is knowing the file names cheating? It is almost certain
| that the application will know the names of its own files
|
:Jordan Hubbard [EMAIL PROTECTED] writes:
: Erm, folks? Can anyone please tell me what this has to do with
: freebsd-hackers any longer?
:
:While the thread has diverged from its original intent, there is
:something related I consider to be a more interesting topic. If it's
:still not
I would have sent this to the original author if he had used a
proper email address on his post; sorry to those who don't want
to see it.
| | I have files fooX where X is a number from 0 to 6 in that
| | directory. I need to find a piece of information, so I run that
| | information
] ] 1. I don't think I've ever seen a Linux distro which has write
] ] caching enabled by default. Hell, DMA33 isn't even enabled
] ] by default ;)
] ]
] ] You are talking about controlling the IDE drive cache.
] ]
] ] The issue here is write cache in the filesystem code.
]
]
] Nothing in Unix stops you from putting millions of files in a
] directory. There are (I maintain _obviously_) good reasons to
] want to do that. The only thing that stops you is that _some_
] Unix platforms, using _some_ file systems, behave badly if you
] do that.
There are _no_ good reasons
] It's got nothing to do with the basics of software engineering or
] computer science. It's got to do with interface definitions and
] APIs.
]
] Where in open(1) does it specify a limit on the number of files
] permissible in a directory? The closest that it comes, that I can
] see is:
[ ...
One word: B+Tree. Hash tables work well if the entire hash table fits
into memory and you know (approximately) what the upper limit on records
is going to be. If you don't, then a B+Tree is the only proven way to
go. (sure, there are plenty of other schemes, some hybrid, some
Ultimately something like Reiser will win over UFS, but performance
figures aren't the whole picture. Most of the bugs have been worked out
of UFS and the recovery tools are extremely mature. Only a handful
of edge cases have been found in the last decade. Nearly all the bugs
On Thu, 24 May 2001, void wrote:
On Wed, May 23, 2001 at 09:20:51AM -0400, Andresen,Jason R. wrote:
Why is knowing the file names cheating? It is almost certain
that the application will know the names of its own files
(and won't be grepping the entire directory every time it
needs
On Wed, 23 May 2001, Shannon wrote:
On Wed, May 23, 2001 at 10:54:40PM -0300, Rik van Riel wrote:
1. I don't think I've ever seen a Linux distro which has write
caching enabled by default. Hell, DMA33 isn't even enabled
by default ;)
You are talking about controlling the IDE drive
In message [EMAIL PROTECTED] Jason Andresen writes:
: If only FreeBSD could boot from those funky M-Systems flash disks.
We boot FreeBSD off of M-Systems flash disks all the time. Don't know
what the problem is with your boxes.
Warner
To Unsubscribe: send mail to [EMAIL PROTECTED]
with
On Thu, May 24, 2001 at 12:25:59PM -0300, Rik van Riel wrote:
On Wed, 23 May 2001, Shannon wrote:
On Wed, May 23, 2001 at 10:54:40PM -0300, Rik van Riel wrote:
1. I don't think I've ever seen a Linux distro which has write
caching enabled by default. Hell, DMA33 isn't even enabled
Andresen,Jason R. wrote:
| On Thu, 24 May 2001, void wrote:
|
| On Wed, May 23, 2001 at 09:20:51AM -0400, Andresen,Jason R. wrote:
|
| Why is knowing the file names cheating? It is almost certain
| that the application will know the names of its own files
| (and won't be grepping the
Greg Black wrote:
Andresen,Jason R. wrote:
| This still doesn't make sense to me. It's not like the program is going
| to want to do a find on the directory every time it has some data it
| wants to put somewhere. I think for the majority of the cases (I'm sure
| there are exceptions) an
In a message dated 05/23/2001 5:04:36 PM Eastern Daylight Time,
[EMAIL PROTECTED] writes:
Tell them to fire 20K packets/second at the linux box and watch it
crumble.
Linux has lots of little kludges to make it appear faster on some
benchmarks,
but from a networking standpoint it
] Terry Lambert writes:
]
] I don't understand the inability to perform the trivial
] design engineering necessary to keep from needing to put
] 60,000 files in one directory.
]
] However, we can take it as a given that people who need
] to do this are incapable of doing computer science.
] 1. I don't think I've ever seen a Linux distro which has write
] caching enabled by default. Hell, DMA33 isn't even enabled
] by default ;)
]
] You are talking about controlling the IDE drive cache.
]
] The issue here is write cache in the filesystem code.
No. The issue here is the
From: Greg Black [mailto:[EMAIL PROTECTED]]
And if this imaginary program is going to do that, it's equally
easy to use a multilevel directory structure and that will make
the life of all users of the system simpler. There's no real
excuse for directories with millions (or even thousands) of
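The multilevel layout suggested above is the same trick tools like git's object store and squid's cache use: hash the name and fan out into fixed-depth subdirectories so no single directory grows huge. A minimal sketch, where the two-level, 256-way fan-out and the helper name are arbitrary choices for illustration, not anything proposed in the thread:

```python
import hashlib
import os

def fanout_path(root, name, levels=2):
    """Map a flat file name to root/ab/cd/name using the leading
    bytes of a hash, keeping every directory small."""
    h = hashlib.md5(name.encode()).hexdigest()
    parts = [h[2 * i:2 * i + 2] for i in range(levels)]
    return os.path.join(root, *parts, name)

p = fanout_path("/var/spool/news", "article-123456")
# With 2 hex characters per level there are at most 256 entries per
# directory and 65536 leaf directories, so a million files average
# roughly 15 per leaf directory.
```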
Shannon Hendrix wrote:
You are talking about controlling the IDE drive cache.
The issue here is write cache in the filesystem code.
1) IIRC they were talking about hw.ata.wc
In a subthread, yeah. I think though, the overall issue is the caching
ext2 does that ufs does not. I'm
Jason Andresen wrote:
And if this imaginary program is going to do that, it's equally
easy to use a multilevel directory structure and that will make
the life of all users of the system simpler. There's no real
excuse for directories with millions (or even thousands) of
files.
No,
On Thu, May 24, 2001 at 05:00:44PM -0400, [EMAIL PROTECTED] wrote:
Linux has lots of little kludges to make it appear faster on some
benchmarks,
but from a networking standpoint it cant handle significant network
loads.
Are you sure this is still true? The 2.4.x series
On Thu, May 24, 2001 at 04:42:02PM -0600, Charles Randall wrote:
From: Greg Black [mailto:[EMAIL PROTECTED]]
There's no real
excuse for directories with millions (or even thousands) of
files.
While I agree completely that there's no excuse for
On Thu, May 24, 2001 at 10:34:26PM +, Terry Lambert wrote:
] 1. I don't think I've ever seen a Linux distro which has write
] caching enabled by default. Hell, DMA33 isn't even enabled
] by default ;)
]
] You are talking about controlling the IDE drive cache.
]
] The issue
On Fri, May 25, 2001 at 06:17:33AM +1000, Greg Black wrote:
the life of all users of the system simpler. There's no real
excuse for directories with millions (or even thousands) of
files.
One of the things that I've always liked about Unix was that
there aren't as many arbitrary limits on
Andrew Reilly wrote:
| On Fri, May 25, 2001 at 06:17:33AM +1000, Greg Black wrote:
| the life of all users of the system simpler. There's no real
| excuse for directories with millions (or even thousands) of
| files.
|
| [...]
|
| Nothing in Unix stops you from putting millions of files in
On 25 May, Greg Black wrote:
This is just not true. For the vast majority of the systems
that have ever been called Unix, attempting to put millions of
files into a directory would be an utter disaster. No ifs or
buts. It might be nice if this were different, although I see
no good reason
Andrew Reilly wrote:
| You can moan about tree-structured vs relational databases, [...]
I can moan about whatever I please -- for instance the fact that
you can't be bothered using a mailer that conforms with basic
rules. Please figure out how to get a Message-Id header into
your mail and
Erm, folks? Can anyone please tell me what this has to do with
freebsd-hackers any longer? It's been quite a long thread
already - have a heart please and take it to -chat. :(
Thanks,
- Jordan
Shannon Hendrix wrote:
And just to get things worse... :-) the test must be made on the *same*
slice. If you configure two different slices, the one on the outer
tracks will be faster.
I cannot verify that with my drive, but my largest is 18GB so maybe
the difference is not as
The proposed filesystem is most likely Reiserfs. This is a true
journalling filesystem with a radically non-traditional layout.
It is no problem to put millions of files in a single directory.
(actually, the all-in-one approach performs better than a tree)
XFS and JFS are similarly
On Wed, May 23, 2001 at 08:17:12AM -0400, Andresen,Jason R. wrote:
On Tue, 22 May 2001, Kris Kennaway wrote:
On Tue, May 22, 2001 at 10:27:27PM +0300, Nadav Eiron wrote:
I ran tests that I think are similar to what Jason ran on identically
configured FreeBSD and Linux/ReiserFS
On Tue, 22 May 2001, Daniel C. Sobral wrote:
Jason Andresen wrote:
If only FreeBSD could boot from those funky M-Systems flash disks.
It can.
How? Nothing I found in the documentation indicated this, or gave any
sort of hint as to how I might go about doing it. The Linux driver has a
On Tue, 22 May 2001, Daniel C. Sobral wrote:
Jason Andresen wrote:
Results:
ufs+softupdates is a little slower than ext2fs+wc for low numbers of
files, but scales better. I wish I had a Reiserfs partition to
test with.
Ext2fs is a non-contender.
Note, though, that there is some
Dear All,
An interview with Reiser just appeared on http://www.slashdot.org/
Just to add a little oil to the fire. :-)
Kees Jan
You are only young once,
but you can stay immature all your life.
On Tue, 22 May 2001, Shannon Hendrix wrote:
On Tue, May 22, 2001 at 09:31:34AM -0400, Jason Andresen wrote:
We only have three Linux boxes here (and one is a PC104 with a flash
disk) and already I've had to reinstall the entire OS once when we had a
power glitch. ext2fsck managed to
On Tue, 22 May 2001, Shannon Hendrix wrote:
On Tue, May 22, 2001 at 02:49:21PM -0400, Jason Andresen wrote:
6 files took ~15 minutes to create as is. I'm going to have to wait
until tonight to run larger sets. 2.2.16 is what we have here.
I'm still waiting to see how much faster
I just finished the FreeBSD test with
vfs.vmiodirenable=1 (it was 0 before)
6 simultaneous files, 1 transactions, FreeBSD
4.0-Release+Softupdates with write caching disabled. Results are pretty
much unchanged. Do you have to enable vmiodirenable at boot time for it
to take effect?
On Tue, 22 May 2001, Terry Lambert wrote:
I don't understand the inability to perform the trivial
design engineering necessary to keep from needing to put
60,000 files in one directory.
However, we can take it as a given that people who need
to do this are incapable of doing computer
On Tue, 22 May 2001, Shannon Hendrix wrote:
On Tue, May 22, 2001 at 12:03:33PM -0400, Jason Andresen wrote:
The data:
Hardware:
Both machines have the same hardware on paper (although it is TWO
machines, YMMV).
PII-300
Intel PIIX4 ATA33 controller
IBM-DHEA-38451 8063MB
On Wed, May 23, 2001 at 06:53:37AM -0300, Daniel C. Sobral wrote:
I cannot verify that with my drive, but my largest is 18GB so maybe
the difference is not as pronounced as on some newer drives like those
(currently) monster 70GB drives.
It should be measurable.
Actually, I edited too
PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, May 23, 2001 3:15 PM
Subject: Re: technical comparison
I just finished the FreeBSD test with
vfs.vmiodirenable=1 (it was 0 before)
6 simultaneous files, 1 transactions, FreeBSD
4.0-Release+Softupdates with write caching disabled
On Wed, May 23, 2001 at 09:03:37AM -0400, Andresen,Jason R. wrote:
The scary thing is that it was the attached harddrive that lost all of the
files. The situitation is this:
[snip]
Sorry to hear that, but like I said, it isn't typical. ext2 in its
early days, and ext before that, were really
Quoting Daniel C. Sobral [EMAIL PROTECTED]:
Note, though, that there is some very recent performance improvement on
very large directories known as dirpref (what changed, actually, was
dirpref's algorithm). This is NOT present on 4.3-RELEASE, though it
_might_ have since been committed to
On Wed, 23 May 2001, Shannon Hendrix wrote:
Where I live, the power gets worse every year. I lost quite a few ext
filesystems, but only a couple of ufs and ext2 filesystems. Then I
bought a 1920VA UPS and it's no longer an issue. I just found it easier
to not lose power than to worry about
On Wed, May 23, 2001 at 08:17:12AM -0400, Andresen,Jason R. wrote:
Did you enable write caching? You didn't mention, and it's off by
default in 4.3, but I think enabled by default on Linux.
I tried to leave the FreeBSD and Linux boxes as unchanged as possible for
my tests (they are lab
On Wed, 23 May 2001, Kris Kennaway wrote:
On Wed, May 23, 2001 at 08:17:12AM -0400, Andresen,Jason R. wrote:
Did you enable write caching? You didn't mention, and it's off by
default in 4.3, but I think enabled by default on Linux.
I tried to leave the FreeBSD and Linux boxes as
On Wed, May 23, 2001 at 09:20:51AM -0400, Andresen,Jason R. wrote:
Why is knowing the file names cheating? It is almost certain
that the application will know the names of its own files
(and won't be grepping the entire directory every time it
needs to find a file).
With 60,000 files,
Terry Lambert [EMAIL PROTECTED] writes:
I don't understand the inability to perform the trivial
design engineering necessary to keep from needing to put
60,000 files in one directory.
Hear hear! ;) (Been waiting for that one)
However, we can take it as a given that people who need
to do
On Wed, 23 May 2001, Andresen,Jason R. wrote:
On Wed, 23 May 2001, Kris Kennaway wrote:
That's all well and good, but I thought the aim here was to compare
Linux and FreeBSD performance on as level playing field as possible?
You're not measuring FS performance, you're measuring FS
On Wed, May 23, 2001 at 10:54:40PM -0300, Rik van Riel wrote:
1. I don't think I've ever seen a Linux distro which has write
caching enabled by default. Hell, DMA33 isn't even enabled
by default ;)
You are talking about controlling the IDE drive cache.
The issue here is write cache in
Albert D. Cahalan wrote:
It should be immediately obvious that ext2 is NOT the filesystem
being proposed, async or not. For large directories, ext2 sucks
as bad as UFS does. This is because ext2 is a UFS clone.
The proposed filesystem is most likely Reiserfs. This is a true
journalling
[trimming CCs]
On Tue, May 22, 2001 at 09:31:34AM -0400, Jason Andresen wrote:
Er, I don't think ReiserFS is in the Linux kernel yet, although it is
the default filesystem on some distros apparently. I think Linus has
some reservations about the stability of the filesystem since it is
Albert D. Cahalan wrote:
Gordon Tetlow writes:
On Mon, 21 May 2001, Jordan Hubbard wrote:
[Charles C. Figueire]
c) A filesystem that will be fast in light of tens of thousands of
files in a single directory (maybe even hundreds of thousands)
I think we can more than hold our
Jason Andresen wrote:
Oops, I fubbed up the linux at 6 files test, I'm rerunning it now,
but it will take a while to finish.
Results:
ufs+softupdates is a little slower than ext2fs+wc for low numbers of
files, but scales better. I wish I had a Reiserfs partition to
test with.
Jason Andresen wrote:
Jason Andresen wrote:
Oops, I fubbed up the linux at 6 files test, I'm rerunning it now,
but it will take a while to finish.
Results:
ufs+softupdates is a little slower than ext2fs+wc for low numbers of
files, but scales better. I wish I had a Reiserfs
] I work in an environment consisting of 300+ systems, all FreeBSD
] and Solaris, along with lots of EMC and F5 stuff. Our engineering division
] has been working on a dynamic content server and search engine for the
] past 2.5 years. They have consistently not met up to performance and
]
Jason Andresen writes:
Albert D. Cahalan wrote:
It should be immediately obvious that ext2 is NOT the filesystem
being proposed, async or not. For large directories, ext2 sucks
as bad as UFS does. This is because ext2 is a UFS clone.
The proposed filesystem is most likely Reiserfs. This is
-Original Message-
From: Terry Lambert [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 22, 2001 10:59 AM
To: [EMAIL PROTECTED]
Subject: Re: technical comparison
] I work in an environment consisting of 300+ systems, all FreeBSD
] and Solaris, along with lots of EMC and F5 stuff
Albert D. Cahalan wrote:
Jason Andresen writes:
Er, I don't think ReiserFS is in the Linux kernel yet, although it is
the default filesystem on some distros apparently. I think Linus has
some reservations about the stability of the filesystem since it is
It is in the kernel:
I ran tests that I think are similar to what Jason ran on identically
configured FreeBSD and Linux/ReiserFS machines. ReiserFS is much, much
faster than UFS+softupdates on these tests.
Linux (2.2.14-5 + ReiserFS):
Time:
164 seconds total
97 seconds of transactions (103 per
On Tue, May 22, 2001 at 12:40:11PM -0600, Matt Simerson wrote:
When did that change? As of March which was the last time I had my grubby
little hands all over a F5 BigIP box in our lab, it was NOT running FreeBSD.
It runs a tweaked version of BSDI's kernel.
I believe it is Terry's
ReiserFS entered Linux kernels in the pre 2.4.1 series, and was 'official' with 2.4.1.
On Tue, May 22, 2001 at 10:27:27PM +0300, Nadav Eiron wrote:
I ran tests that I think are similar to what Jason ran on identically
configured FreeBSD and Linux/ReiserFS machines. ReiserFS is much, much
faster than UFS+softupdates on these tests.
Linux (2.2.14-5 + ReiserFS):
Time:
I didn't, but I believe Jason's numbers (for ext2 and ufs) also had write
caching only enabled on Linux.
On Tue, 22 May 2001, Kris Kennaway wrote:
On Tue, May 22, 2001 at 10:27:27PM +0300, Nadav Eiron wrote:
I ran tests that I think are similar to what Jason ran on identically
configured
Jason Andresen wrote:
If only FreeBSD could boot from those funky M-Systems flash disks.
It can.
--
Daniel C. Sobral(8-DCS)
[EMAIL PROTECTED]
[EMAIL PROTECTED]
[EMAIL PROTECTED]
wow regex humor... I'm a geek
Jason Andresen wrote:
Results:
ufs+softupdates is a little slower than ext2fs+wc for low numbers of
files, but scales better. I wish I had a Reiserfs partition to
test with.
Ext2fs is a non-contender.
Note, though, that there is some very recent performance improvement on
very large
Nadav Eiron wrote:
I ran tests that I think are similar to what Jason ran on identically
configured FreeBSD and Linux/ReiserFS machines. ReiserFS is much, much
faster than UFS+softupdates on these tests.
For that matter, did you have vfs.vmiodirenable enabled?
--
Daniel C. Sobral
On Tue, May 22, 2001 at 02:49:21PM -0400, Jason Andresen wrote:
6 files took ~15 minutes to create as is. I'm going to have to wait
until tonight to run larger sets. 2.2.16 is what we have here.
I'm still waiting to see how much faster ReiserFS is.
I'm willing to overnight your test
On Tue, May 22, 2001 at 09:31:34AM -0400, Jason Andresen wrote:
Er, I don't think ReiserFS is in the Linux kernel yet, although it is
the default filesystem on some distros apparently.
ReiserFS, on my system anyway, started just losing files. I'd log in and
would notice some mp3 files or
Shannon Hendrix wrote:
On Tue, May 22, 2001 at 02:49:21PM -0400, Jason Andresen wrote:
6 files took ~15 minutes to create as is. I'm going to have to wait
until tonight to run larger sets. 2.2.16 is what we have here.
I'm still waiting to see how much faster ReiserFS is.
I'm
On Tue, May 22, 2001 at 12:03:33PM -0400, Jason Andresen wrote:
Here's the results I got from postmark, which seems to be the closest
match to the original problem in the entire ports tree.
Test setup:
Two machines with the same make and model hardware, one running
FreeBSD 4.0, the other
On Tue, May 22, 2001 at 10:55:09PM -0300, Daniel C. Sobral wrote:
And just to get things worse... :-) the test must be made on the *same*
slice. If you configure two different slices, the one on the outer
tracks will be faster.
I cannot verify that with my drive, but my largest is 18GB so
Nadav Eiron wrote:
I ran tests that I think are similar to what Jason ran on identically
configured FreeBSD and Linux/ReiserFS machines. ReiserFS is much, much
faster than UFS+softupdates on these tests.
[ ... ]
Both tests were done with postmark-1.5, 6 files in
1 transactions.
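For reference, postmark's workload is simple to characterize: build a pool of small files in one directory, run a random mix of create/read/append/delete transactions against it, then delete everything. A toy sketch of that shape (not the real benchmark, and the parameters at the bottom are tiny placeholders, not the settings used in the thread):

```python
import os
import random
import tempfile

def mini_postmark(nfiles, ntransactions, seed=42):
    """Toy postmark-like workload: hammer one directory with
    small-file create/read/append/delete transactions, then clean up."""
    rng = random.Random(seed)
    root = tempfile.mkdtemp()
    files = []
    next_id = 0
    counts = {"read": 0, "append": 0, "create": 0, "delete": 0}

    def create():
        nonlocal next_id
        path = os.path.join(root, "f%06d" % next_id)
        next_id += 1
        with open(path, "wb") as f:
            f.write(b"x" * rng.randint(512, 4096))
        files.append(path)

    for _ in range(nfiles):                 # build the initial pool
        create()
    for _ in range(ntransactions):          # transaction phase
        op = rng.choice(("read", "append", "create", "delete"))
        counts[op] += 1
        if op == "read" and files:
            with open(rng.choice(files), "rb") as f:
                f.read()
        elif op == "append" and files:
            with open(rng.choice(files), "ab") as f:
                f.write(b"x" * 512)
        elif op == "create":
            create()
        elif op == "delete" and files:
            os.unlink(files.pop(rng.randrange(len(files))))
    for path in files:                      # final deletion pass
        os.unlink(path)
    os.rmdir(root)
    return counts

stats = mini_postmark(50, 200)
```

All the pressure lands on a single directory, which is why the benchmark separates filesystems so sharply on large-directory performance.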