Linux-Development-Sys Digest #955, Volume #6 Tue, 13 Jul 99 09:15:23 EDT
Contents:
Re: New OS development (Alex Wells)
Re: FS problems in 2.2.9 - is 2.2.10 better? (Tony Gale)
performance of memcpy on Linux (Maciej Golebiewski)
measuring interrupts time ("Sebastien")
Dos4GW ->Linux ("Dmitri Semenov")
Re: when will Linux support > 2GB file size??? (Donal K. Fellows)
Re: Move command source (Jan Andres)
Re: performance of memcpy on Linux ("Thomas Steffen")
Package manager as a VFS (Mark WARDLE)
gcc bug ("BennoG")
Re: MICROSOFT LINUX DISTRIBUTION (Robert Kaiser)
NCR 53C710 Fast SCSI-2 Controller ("Mike Coakley")
Re: when will Linux support > 2GB file size??? (Robert Krawitz)
Re: when will Linux support > 2GB file size??? (Robert Krawitz)
Re: when will Linux support > 2GB file size??? (David Fox)
Re: when will Linux support > 2GB file size??? (Robert Krawitz)
Re: when will Linux support > 2GB file size??? (Byron A Jeff)
----------------------------------------------------------------------------
From: Alex Wells <[EMAIL PROTECTED]>
Subject: Re: New OS development
Date: Tue, 13 Jul 1999 09:34:30 +0100
Reply-To: [EMAIL PROTECTED]
Currently I don't have the foggiest idea of all the features it will have - it's
still only on the drawing board, although I'm having a play with some tools
that might get me started.
I know for definite that it will have network support built in from the
ground up, and I might even try and implement process migration into it
(that's if I can be bothered to put in all the extra work that it entails!).
As I say, I'm not too sure what its main function is actually going
to be - the idea originally was to see how an OS works by building one from
the ground up (one of the best ways - even if it is the most difficult!).
I'll try and keep the group updated on the progress that's being made, and
new info will be posted as and when it becomes available.
Bye for now,
Alex
Of all the things I've lost, I miss my mind the most
-- Ozzy Osbourne
Veni, Vermini, Vomui - I came, I got Ratted, I Threw up.
------------------------------
From: [EMAIL PROTECTED] (Tony Gale)
Subject: Re: FS problems in 2.2.9 - is 2.2.10 better?
Date: 13 Jul 1999 08:31:16 GMT
In article <7megqm$5s8$[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Kurt Fitzner) writes:
> I'm having some problems with my filesystem caching in 2.2.9. I took
> out the bdflush/update daemon as suggested on www.linuxhq.org. My system
> doesn't seem to be flushing dirty buffers to disk within any kind of sane
> timeframe at all. After moderate use, I can keep the system almost totally
> idle for an hour or more, and still find that doing a sync favours me with
> between 5 and 30 seconds of completely solid disk activity.
>
Removing the bdflush daemon was only valid for 2.2.8. You should
definitely be running it for anything else. The code used in 2.2.8 was
discovered to eat filesystems, so don't use 2.2.8.
-tony
--
E-Mail: Tony Gale <[EMAIL PROTECTED]>
The views expressed above are entirely those of the writer
and do not represent the views, policy or understanding of
any other person or official body.
------------------------------
From: Maciej Golebiewski <[EMAIL PROTECTED]>
Subject: performance of memcpy on Linux
Date: Tue, 13 Jul 1999 10:43:11 +0200
Dear All,
Recently I have noticed some strange behaviour: calling memcpy
with longer chunks of data actually delivers worse
bandwidth.
I wrote a short test program (included at the end of this
post) that simply memcopies data between two buffers for
different lengths of data, and computes corresponding
bandwidths.
Here are the results from three machines:
1. Dual PPro 200 MHz, Linux 2.0.34 with SMP
#len t(us) bdw(MB/S)
8192 43.000000 181.686047
16384 86.000000 181.686047
32768 170.000000 183.823529
65536 341.000000 183.284457
131072 683.000000 183.016105
262144 2515.000000 99.403579 # oops
524288 9704.000000 51.525144 # oops
2. Same hardware as above, Linux 2.2.5 with SMP
#len t(us) bdw(MB/S)
8192 43.000000 181.686047
16384 89.000000 175.561798
32768 173.000000 180.635838
65536 345.000000 181.159420
131072 941.000000 132.837407
262144 2864.000000 87.290503 # oops
524288 9251.000000 54.048211 # oops
3. Single PII 300 MHz, Linux 2.0.34, no SMP
#len t(us) bdw(MB/S)
8192 26.000000 300.480769
16384 127.000000 123.031496
32768 294.000000 106.292517
65536 538.000000 116.171004
131072 1505.000000 83.056478
262144 3397.000000 73.594348 # oops
524288 12215.000000 40.933279 # oops
(Actually the program also takes measurements for even shorter lengths, but
I have omitted them since they are meaningless anyway, as everything resides
in cache).
In each case, when I increase the length of data to be
copied, the bandwidth drops.
The big question is:
- is it caused by the way the Linux kernel manages memory?
- is this caused just by cache effects?
- or maybe it's just the hardware (e.g. memory bus optimized for
short bursts instead of long transfers)?
Any comments/hints are really welcome.
Maciej
/* a proggy to test memory transfers */
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define START 1024
#define STEP 2
#define END (8*65536)
#define CACHE (512*1024)
#define ITER 100
#define MB (1024*1024)
#define S2MUS 1000000

int main (int argc, char **argv) {
    char *source, *target, *ptr;
    struct timeval timestamp[2];
    long elapsed_us;
    double avg_time;
    int i, j;

    if ((ptr = malloc (4 * CACHE))) {
        source = &ptr[0];
        target = &ptr[2*CACHE];
        printf ("#len\tt(us)\tbdw(MB/S)\n");
        for (i = START; i <= END; i *= STEP) {
            /* touch buffers */
            memset (source, i, i);
            memset (target, END-i, i);
            /* warm up */
            memcpy (target, source, i);
            memcpy (source, target, i);
            gettimeofday (&timestamp[0], NULL);
            for (j = 0; j < ITER; j++) {
                memcpy (target, source, i);
                memcpy (source, target, i);
            }
            gettimeofday (&timestamp[1], NULL);
            elapsed_us =
                (timestamp[1].tv_sec - timestamp[0].tv_sec) * S2MUS +
                (timestamp[1].tv_usec - timestamp[0].tv_usec);
            /* average time for one memcpy (the loop does two per iteration) */
            avg_time = (double)elapsed_us / (2*ITER);
            printf ("%d\t%lf\t%lf\n", i, avg_time,
                    S2MUS * (i / avg_time) / MB);
        }
    } else
        fprintf (stderr, "allocation failed\n");
    return 0;
} /* main */
------------------------------
From: "Sebastien" <[EMAIL PROTECTED]>
Subject: measuring interrupts time
Date: Tue, 13 Jul 1999 10:11:29 +0100
Hi,
Does anybody have a program to test the response time of Linux to an
interrupt?
I guess that the best test would be to use a function generator to trigger
the interrupt and to make the system respond with an output voltage. Using an
oscilloscope you would be able to measure the time shift between the two
signals. This time shift would be equal to the interrupt response time.
I have seen some measurements, but I would like to try it on my system while
adjusting a few parameters.
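For the software side, here is a rough and untested sketch (assuming a
2.2-era kernel, a parallel port at 0x378 on IRQ 7, the generator wired to
the ACK pin, and nothing else claiming that port or IRQ): a tiny module
whose handler toggles the data pins, so the scope shows the delay between
the generator edge and the response edge.

/* lptlat.c: hypothetical latency-test module (2.2-era module API assumed) */
#define MODULE
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/sched.h>        /* request_irq()/free_irq() on these kernels */
#include <asm/io.h>

#define LPT_BASE 0x378          /* assumed first parallel port */
#define LPT_IRQ  7

static int state;

static void lpt_latency_handler(int irq, void *dev_id, struct pt_regs *regs)
{
    /* Toggle the data pins as soon as the handler runs: the delay between
       the generator edge on ACK (pin 10) and this edge is the response time. */
    state = !state;
    outb(state ? 0xff : 0x00, LPT_BASE);
}

int init_module(void)
{
    outb(0x10, LPT_BASE + 2);   /* control register bit 4: enable IRQ */
    return request_irq(LPT_IRQ, lpt_latency_handler, SA_INTERRUPT,
                       "lpt-latency", NULL);
}

void cleanup_module(void)
{
    free_irq(LPT_IRQ, NULL);
    outb(0x00, LPT_BASE + 2);   /* disable interrupt generation again */
}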
Thanks. Sebastien.
------------------------------
From: "Dmitri Semenov" <[EMAIL PROTECTED]>
Subject: Dos4GW ->Linux
Date: Tue, 13 Jul 1999 07:59:35 GMT
I do not know too much about Linux. We are looking to change platform from
DOS+Dos4GW (Watcom C++) to something else.
The main requirements are:
1. The possibility of building a small enough OS kernel
2. Disabled page swapping (see the sketch below)
3. Total OS binary size: max 5MB
Our task is developing end-user navigation systems based on the x86
platform, with just a power-on/power-off button and little more (like a
game machine or a TV).
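On point 2, one mechanism worth knowing about (only a sketch of one piece
of the puzzle, assuming the application is an ordinary user-space process):
the process can pin itself in RAM with mlockall(), independently of whether
a swap partition is configured at all.

/* Minimal sketch: keep the application resident so it is never paged out. */
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");             /* needs root privileges */
        return 1;
    }
    /* ... run the navigation application here ... */
    return 0;
}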
If anyone can give me information about the possibility of using Linux,
please do. I have already found a very big Linux community and very
interesting projects like RTLinux or GGI, but I still need advice from a
Linux guru.
If you have information, please send it directly to my email.
Best regards, Dmitry Semenov
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (Donal K. Fellows)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 13 Jul 1999 09:09:26 GMT
In article <[EMAIL PROTECTED]>,
Robert Krawitz <[EMAIL PROTECTED]> wrote:
> Let's not put up straw men, shall we? Big iron types (at least the
> ones who know what they're doing) aren't interested in pitiful
> little NT toys. The right comparison here is against Solaris and
> AIX, both of which have supported big files for several years now.
> They do use swap space, but if they're going to go 2 GB deep into
> it, they're going to have a problem, which is why they're going to
> have those big machines loaded to the gills with RAM (which is
> something else Linux needs to fix). Any home user going 2 GB into
> swap space is likewise going to be hurting badly for performance.
Some very large compilations (especially with C++) can get up towards
that mark. But since they tend not to need too much of it all at
once, it isn't a problem if your main memory is only something titchy
(like 128MB.)
Donal.
--
Donal K. Fellows http://www.cs.man.ac.uk/~fellowsd/ [EMAIL PROTECTED]
-- The small advantage of not having California being part of my country would
be overweighed by having California as a heavily-armed rabid weasel on our
borders. -- David Parsons <o r c @ p e l l . p o r t l a n d . o r . u s>
------------------------------
From: [EMAIL PROTECTED] (Jan Andres)
Subject: Re: Move command source
Date: 11 Jul 1999 16:34:47 GMT
In article <7lqtsv$suh$[EMAIL PROTECTED]>, Steve B. wrote:
>Where can I find the source to basic commands like mv. I have RH 6.0 and
>loaded the source, but not finding the source to commands. The reason I
>am looking is a bet with someone whether mv does a copy and delete or just
>adjusts the inodes. In early days of Unix all moves were actually
>copy/delete, I say that is only done today when crossing partitions or
>disks. Like to find the source to see who's right.
The `mv' command is included in the GNU fileutils package. I know what
mv does, but as you want to find it out yourself, I'm not telling you.
;-)
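For anyone who would rather see the shape of the answer before digging, the
classic pattern looks roughly like the sketch below (hypothetical code, not
the actual GNU fileutils source, so the bet still needs the source reading):
try rename(2) first, and fall back to copy-and-delete only when the kernel
refuses with EXDEV, i.e. when the move crosses filesystems.

/* Hypothetical sketch of the pattern in question (not GNU fileutils code). */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int copy_then_unlink(const char *src, const char *dst)
{
    char buf[8192];
    ssize_t n;
    int in, out;

    in = open(src, O_RDONLY);
    out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (in < 0 || out < 0)
        return -1;                       /* (sketch: fd cleanup omitted) */
    while ((n = read(in, buf, sizeof buf)) > 0)
        if (write(out, buf, n) != n)
            return -1;
    close(in);
    close(out);
    return (n < 0) ? -1 : unlink(src);
}

int do_move(const char *src, const char *dst)
{
    if (rename(src, dst) == 0)
        return 0;                        /* same filesystem: just relink the inode */
    if (errno != EXDEV)
        return -1;                       /* a real error, not a cross-device move */
    return copy_then_unlink(src, dst);   /* crossing filesystems: copy and delete */
}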
Good luck (reading sources is sometimes terrible),
Jan
--
Jan Andres
[EMAIL PROTECTED]
Ham radio: DH2JAN
------------------------------
From: "Thomas Steffen" <[EMAIL PROTECTED]>
Subject: Re: performance of memcpy on Linux
Date: 13 Jul 1999 12:36:26 +0200
Maciej Golebiewski <[EMAIL PROTECTED]> writes:
> Recently I have noticed strange behaviour: caling memcpy
> with longer chunks of data actually delivers worse
> bandwidth.
Yep, and the reason is called the memory cache; it has nothing whatsoever
to do with Linux. memcpy's performance is limited by memory bandwidth,
and that bandwidth drops significantly as soon as the memory block
in question no longer fits in the cache.
so...
> 1. Dual PPro 200 MHz, Linux 2.0.34 with SMP
...
> 131072 683.000000 183.016105
> 262144 2515.000000 99.403579 # oops
256 (+ a bit) or 512 kB (?) cache exhausted.
> 3. Single PII 300 MHz, Linux 2.0.34, no SMP
..
> 65536 538.000000 116.171004
> 131072 1505.000000 83.056478
> 262144 3397.000000 73.594348 # oops
> 524288 12215.000000 40.933279 # oops
Same here, though the cache size isn't obvious; maybe the caching
algorithm is a bit cleverer than in the PPro.
> - is this caused just by cache effects?
[X] correct.
--
linux, linuctis - f, the best operating system ;-)
------------------------------
From: [EMAIL PROTECTED] (Mark WARDLE)
Subject: Package manager as a VFS
Date: 13 Jul 1999 11:02:23 GMT
Reply-To: [EMAIL PROTECTED]
Although I quite like RPM etc., it seems rather un-Unix-like not to
implement something like this as a device/virtual filesystem, when it would
be ideally suited to managing packages on a system. Imagine uninstalling
items just by rm'ing them from the package manager filesystem and
installing just by copying. I don't think there's anything out there like
that, is there?
I suppose if no-one out there is interested in coding something like this
then I might try and fit it in sometime. Any suggestions?
Dr Mark Wardle
[EMAIL PROTECTED]
------------------------------
From: "BennoG" <[EMAIL PROTECTED]>
Subject: gcc bug
Date: Tue, 13 Jul 1999 13:00:27 +0200
I think I have found a bug in GCC. When compiling the following code without
optimisation on an Intel platform, the two output lines should be
identical.
#include <stdio.h>

int main()
{
    double d1;

    d1 = 1.38;
    printf("var=%f ", d1*1800);
    printf("var=%d\n", (int)(d1*1800));
    printf("var=%f ", 1.38*1800);
    printf("var=%d\n", (int)(1.38*1800));
    return 0;
}
Output generated by this program is:
var=2484.000000 var=2483
var=2484.000000 var=2484
It has to be:
var=2484.000000 var=2484
var=2484.000000 var=2484
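A quick way to see exactly what is being truncated (a hypothetical
diagnostic, not part of the report above) is to print both products with
full precision before the (int) cast:

#include <stdio.h>

int main(void)
{
    double d1 = 1.38;

    /* show the exact values the (int) casts operate on */
    printf("runtime product:  %.17g -> %d\n", d1 * 1800, (int)(d1 * 1800));
    printf("constant product: %.17g -> %d\n", 1.38 * 1800, (int)(1.38 * 1800));
    return 0;
}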
Benno
------------------------------
From: [EMAIL PROTECTED] (Robert Kaiser)
Subject: Re: MICROSOFT LINUX DISTRIBUTION
Date: 13 Jul 1999 09:59:07 GMT
In article <[EMAIL PROTECTED]>,
Frank Sweetser <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] (Scott Lanning) writes:
>
>> Frank Sweetser ([EMAIL PROTECTED]) wrote:
>> : [EMAIL PROTECTED] (Scott Lanning) writes:
>> : > it, then their bastardized form of Linux becomes standard.
>> :
>> : wrong. dead wrong.
>>
>> Do you mean their techniques are wrong or my assessment is
>> wrong? If the latter, why? That is how they work, I think.
>
> calling something a standard does not make it truly a standard.
>
Well, that would seem to depend on who does the calling and
what you mean by "truly" a standard. It seems to me that it
has certainly worked like that when Microsoft introduced their
"standard" for operating systems (DOS/Win*), network protocols
(SMB), CDROM file formats (Joliet) ....
Of course, these "standards" aren't ISO approved, but they're
pretty much *the* de-facto IT standards today :-(.
Rob
------------------------------
From: "Mike Coakley" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.hardware
Subject: NCR 53C710 Fast SCSI-2 Controller
Date: Tue, 13 Jul 1999 07:48:34 -0400
I am having trouble installing RedHat 6.0 onto a Compaq Proliant 4000 with
the NCR 53C710 controller. The installer simply cannot
find the controller and cannot continue without it. (I know I shouldn't be
saying this...) I can install MS WinNT without any problems and the
controller is recognized and the system boots off of this controller/HD.
Does anyone out there have any ideas?
Mike
------------------------------
From: Robert Krawitz <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 13 Jul 1999 07:56:17 -0400
[EMAIL PROTECTED] (Byron A Jeff) writes:
> In article <[EMAIL PROTECTED]>,
> Robert Krawitz <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] (Bones) writes:
> -This "average user" thing is a nonsense, because there's no such thing
> -as an "average user". For that matter, the percentage of users who
> -need, say, support for a Matrox Millennium is considerably less than
> -50%.
>
> Substitute "a majority of" for "average" and you start to get the point.
> The percentage of users that require 2G+ files is much much smaller than the
> Millennium crowd.
Sorry, I was being a bit sarcastic here.
> -Yes, it's true that relatively few home or hobbyist users really need
> -big files. For that matter, I don't think most business users really
> -*need* big files per se, as there are other ways of storing even large
> -data warehouses that don't specifically require such large files, but
> -it's certainly convenient.
>
> And that's the primary argument I've seen: It would make some things
> convenient. I'd feel more pressed if alternatives were not readily
> available, but they are. The limit is on the unit size, not the total size.
Well, the very existence of a filesystem is a convenience; after all,
applications *could* use raw disk space themselves and parcel it out.
> -The IA-64 won't solve this problem, because there won't be enough
> -software for business users. When I say "business users" I'm not
> -referring to people who want an office suite for the desktop, but
> -rather the data warehouse crowd. The reason that they're even
> -considering Linux is that apps such as Oracle, DB/2, Tuxedo, and such
> -are at last becoming available. I don't want to hear anyone talking
> -about MySQL; it simply doesn't have the horsepower or even the basic
> -features (ever hear of ROLLBACK WORK?) to run this kind of stuff.
>
> Question: Do these applications have the ability to store data in raw
> partitions?
Some do and some don't. I consider using a raw partition to be a
major disadvantage, though, since it prevents using ordinary Unix
mechanisms (ownerships, permissions, backup, etc.) for managing the
files. I don't think people who want large amounts of data should
have to reinvent the filesystem.
> Not sure about the VFS. ext2 definitely does. That's why whenever this
> discussion comes up, I always ask why does ext2 need to be changed to support
> 2G+ files? Why not simply come up with another filesystem to support large
> files?
That's fine, but it's shifting the argument. The discussion here is
about whether or not *Linux on the x86* should support large files.
Arguing about whether ext2 should or if there should be a new
filesystem is different. I happen to agree that changing ext2 is the
wrong approach, because it will render existing filesystems incompatible at
a stroke, but I do think it's necessary that this be done.
> ext2 is very good at what it does the most. Adding 2G+ support will make it
> less good at what it does the most. So why change it?
Well, OK, why would it make it less good?
--
Robert Krawitz <[EMAIL PROTECTED]> http://www.tiac.net/users/rlk/
Tall Clubs International -- http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail [EMAIL PROTECTED]
"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton
------------------------------
From: Robert Krawitz <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 13 Jul 1999 07:57:32 -0400
[EMAIL PROTECTED] (Donal K. Fellows) writes:
> In article <[EMAIL PROTECTED]>,
> Robert Krawitz <[EMAIL PROTECTED]> wrote:
> > Let's not put up straw men, shall we? Big iron types (at least the
> > ones who know what they're doing) aren't interested in pitiful
> > little NT toys. The right comparison here is against Solaris and
> > AIX, both of which have supported big files for several years now.
> > They do use swap space, but if they're going to go 2 GB deep into
> > it, they're going to have a problem, which is why they're going to
> > have those big machines loaded to the gills with RAM (which is
> > something else Linux needs to fix). Any home user going 2 GB into
> > swap space is likewise going to be hurting badly for performance.
>
> Some very large compilations (especially with C++) can get up towards
> that mark. But since they tend not to need too much of it all at
> once, it isn't a problem if your main memory is only something titchy
> (like 128MB.)
Like I said (from personal experience with things like this): without
sufficient RAM this is going to hurt badly for performance.
--
Robert Krawitz <[EMAIL PROTECTED]> http://www.tiac.net/users/rlk/
Tall Clubs International -- http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail [EMAIL PROTECTED]
"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton
------------------------------
From: d s f o x @ c o g s c i . u c s d . e d u (David Fox)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 13 Jul 1999 04:59:32 -0700
[EMAIL PROTECTED] (Bones) writes:
> Unlike some other OSes I could name, Linux doesn't keep its swap-file
> stored in the same filesystem as system and user data/apps. This is
> the only file that could realistically approach 2GB, and that an
> average user could come across. But even that example may be pushing
> it.
The other day I lost the end of a tar file because it exceeded 2 gig.
Ouch!
--
David Fox http://hci.ucsd.edu/dsf xoF divaD
UCSD HCI Lab baL ICH DSCU
------------------------------
From: Robert Krawitz <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 13 Jul 1999 08:07:15 -0400
[EMAIL PROTECTED] (Byron A Jeff) writes:
> In article <[EMAIL PROTECTED]>,
> Rowan Hughes <[EMAIL PROTECTED]> wrote:
> -In article <7m8qtt$[EMAIL PROTECTED]>, Byron A Jeff wrote:
> - [snip]
> ->So BTW why exactly do you need 2GB+ files?
> -
> -They're needed more and more. In 6-12 months 50GB IDE disks
> -with media speeds of 30MB/sec will be the norm. I'm using
> -this sort of H/W already at work (GIS type stuff) and Linux
> -would get a lot more use in this field if it could do >2GB files.
>
> I see your point, to a point. I still see this as a somewhat minor
> inconvenience to one segment of the developer population at the risk
> of destabilizing everything for everyone. The limit is only at the
> file level and for good reason which is that 32 bit machines
> naturally only represent ints up to 2G (with another 2G for negative
> indices).
I don't consider this a good reason. CP/M was never limited to
256-byte files (it ran on 8-bit processors), and for that matter, it
wasn't limited to 64K files either. DOS certainly wasn't. GCC
supports 64-bit integers, generating the necessary instructions for
working with them.
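(A tiny illustration of that last point, assuming nothing more than gcc on
a 32-bit x86 box: "long long" gives 64-bit arithmetic, with the compiler
emitting the multi-instruction sequences itself.)

#include <stdio.h>

int main(void)
{
    long long offset = 3LL * 1024 * 1024 * 1024;   /* 3 GB, well past 2^31 */
    printf("offset = %lld bytes\n", offset);
    return 0;
}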
> If we move everyone to an unnatural size to simplify
> things for a few applications, we risk the danger of reducing the
> stability and performance of a filesystem that handles a great
> majority of the file/application population in its current state.
Modifying the existing filesystem layout is indeed risky, but how is
creating a new filesystem risky in this regard?
> Has anyone thought of writing a large file interface or class for
> this type of activity? It seems to me it's a relatively minor
> adjustment to map a set of files in size up to 2G over a larger
> array. Or maybe a filesystem specifically designed for handling
> large files?
There already is a de facto standard for handling large files, which
Solaris and AIX support. It's backward compatible, but not forward
compatible: the normal filesystem operations cannot operate on a file
larger than 2 GB, but there are 64-bit versions of all of the
file-related system calls (or at least open() and friends) that can.
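For concreteness, here is a rough sketch of what that interface looks like
to an application (assuming the LFS-style *64 names, which glibc exposes
under _LARGEFILE64_SOURCE; the exact spellings vary by platform, and the
file name is made up):

#define _LARGEFILE64_SOURCE
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd;
    off64_t pos;

    fd = open64("bigfile.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open64");
        return 1;
    }

    /* seek to 3 GB, past the 31-bit limit of a plain 32-bit off_t */
    pos = lseek64(fd, (off64_t)3 * 1024 * 1024 * 1024, SEEK_SET);
    printf("positioned at %lld bytes\n", (long long)pos);

    close(fd);
    return 0;
}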
> I'd just hate to see the elegance and power of the existing
> filesystem be compromised by fulfilling the needs of a few when
> other relatively simple avenues exists to solve the problem.
How would the filesystem be compromised?
> Of all the examples I've seen only databases seem to warrant such a
> significant change. And databases could in fact get better performance
> by implementing a rudimentary filesystem for the data on top of a raw
> partition.
Actually, it's more an issue with processing large flat files than
with databases, for precisely this reason.
> Consider this: If a switch to a 64 bit filesystem occurs then every
> application that does a seek on that filesystem must be recompiled.
More precisely, every application that tries to access a file greater
than 2 GB. That's true, but the change isn't difficult, and there's
already a standard way to do it.
> Every data block pointer in every inode will double in size. Every data
> block computation will require 64 bit arithmetic.
Sure, but so what?
--
Robert Krawitz <[EMAIL PROTECTED]> http://www.tiac.net/users/rlk/
Tall Clubs International -- http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail [EMAIL PROTECTED]
"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton
------------------------------
From: [EMAIL PROTECTED] (Byron A Jeff)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 13 Jul 1999 08:02:05 -0400
In article <[EMAIL PROTECTED]>,
Robert Krawitz <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] (Byron A Jeff) writes:
-
-> In article <[EMAIL PROTECTED]>,
-> Robert Krawitz <[EMAIL PROTECTED]> wrote:
-> [EMAIL PROTECTED] (Bones) writes:
-
-> -Yes, it's true that relatively few home or hobbyist users really need
-> -big files. For that matter, I don't think most business users really
-> -*need* big files per se, as there are other ways of storing even large
-> -data warehouses that don't specifically require such large files, but
-> -it's certainly convenient.
->
-> And that's the primary argument I've seen: It would make some things
-> convenient. I'd feel more pressed if alternatives were not readily
-> available, but they are. The limit is on the unit size, not the total size.
-
-Well, the very existence of a filesystem is a convenience; after all,
-applications *could* use raw disk space themselves and parcel it out.
A point that I have in fact been pounding. Such parceling at the application
level doesn't make sense for small files. But we're talking about monster
files here. Doesn't raw disk space at least merit a look?
-
-> -The IA-64 won't solve this problem, because there won't be enough
-> -software for business users. When I say "business users" I'm not
-> -referring to people who want an office suite for the desktop, but
-> -rather the data warehouse crowd. The reason that they're even
-> -considering Linux is that apps such as Oracle, DB/2, Tuxedo, and such
-> -are at last becoming available. I don't want to hear anyone talking
-> -about MySQL; it simply doesn't have the horsepower or even the basic
-> -features (ever hear of ROLLBACK WORK?) to run this kind of stuff.
->
-> Question: Do these applications have the ability to store data in raw
-> partitions?
-
-Some do and some don't. I consider using a raw partition to be a
-major disadvantage, though, since it prevents using ordinary Unix
-mechanisms (ownerships, permissions, backup, etc.) for managing the
-files. I don't think people who want large amounts of data should
-have to reinvent the filesystem.
I'm willing to concede this point. That rationale sounds right.
-
-> Not sure about the VFS. ext2 definitely does. That's why whenever this
-> discussion comes up, I always ask why does ext2 need to be changed to support
-> 2G+ files? Why not simply come up with another filesystem to support large
-> files?
-
-That's fine, but it's shifting the argument. The discussion here is
-about whether or not *Linux on the x86* should support large files.
That wasn't my perception. Every time I read this argument it comes across
as "Why doesn't ext2 support 2G+ files". All the mechanisms required for a
different filesystem to support large files are already in place.
-Arguing about whether ext2 should or if there should be a new
-filesystem is different. I happen to agree that changing ext2 is the
-wrong approach, because it will render incompatible at a stroke
-existing filesystems, but I do think it's necessary that this be done.
Well now we are in agreement. Yes we need large files. Yes it needs to be in
a different filesystem.
-
-> ext2 is very good at what it does the most. Adding 2G+ support will make it
-> less good at what it does the most. So why change it?
-
-Well, OK, why would it make it less good?
Because switching all the file pointer computations to 64 bit will slow each
and every reference to the file system down on a 32 bit machine.
It was natural to do on 64 bit architectures because they naturally support
64 bit arithmetic. But 32 bit architectures don't.
That's why I believe a different filesystem with 64 bit support and most likely
a different interface wrapper may be in order to minimize the performance hit.
All I'm saying is that adding 64 bit support shouldn't affect each and every
application on the system, only the ones that require 64 bit support.
BAJ
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************