Linux-Development-System Digest #954, Volume #6 Tue, 13 Jul 99 05:14:28 EDT
Contents:
Re: when will Linux support > 2GB file size??? (Byron A Jeff)
FS problems in 2.2.9 - is 2.2.10 better? (Kurt Fitzner)
Re: NT to Linux port questions (Marcus Sundberg)
Advice on source control for cross-border devlpmnt ("Mas Nakachi")
Re: rebuilding kernel on RedHat 5.0 (Gary Lawrence Murphy)
Re: Advice on source control for cross-border devlpmnt (Arun Sharma)
Help - Deleted /var/log/* from RedHat 5.2 system! (Nico Zigouras)
Re: anonymous memory mapping (Takeyasu Wakabayashi)
Re: Help - Deleted /var/log/* from RedHat 5.2 system! (Michael Lee Yohe)
Re: Memory Management Bug (Peter Ross)
mkfs.sysv ("Ilhoon,Shin")
Lost Cursor in console mode - how to recover (Mike Kennedy)
Lost console mode cursor - how to recover (Mike Kennedy)
Re: micro kernel (Emile van Bergen)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Byron A Jeff)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 13 Jul 1999 00:28:24 -0400
In article <[EMAIL PROTECTED]>,
Robert Krawitz <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] (Bones) writes:
-
-> >On Sun, 11 Jul 1999 01:31:04 GMT, [EMAIL PROTECTED] (Christopher B. Browne) wrote:
->
-> >I suspect that the "64 bit support on 32 bit architectures" thing will not
-> >happen quickly or soon. IA-64 should come out fairly soon, which will add
-> >up to *several* 64 bit architectures that should be suitable for those that
-> >really need big files. And that availability should diminish the urgency of
-> >"fixing" the problem on IA-32.
->
-> Plus pushing the issue now is kind of trivial, as another poster
-> pointed out, because of the small percentage of users who actually
-> need to be able to manipulate files >2GB.
-
-This "average user" thing is a nonsense, because there's no such thing
-as an "average user". For that matter, the percentage of users who
-need, say, support for a Matrox Millennium is considerably less than
-50%.
Substitute "a majority of" for "average" and you start to get the point.
The percentage of users that require 2G+ files is much much smaller than the
Millennium crowd.
-
-Yes, it's true that relatively few home or hobbyist users really need
-big files. For that matter, I don't think most business users really
-*need* big files per se, as there are other ways of storing even large
-data warehouses that don't specifically require such large files, but
-it's certainly convenient.
And that's the primary argument I've seen: It would make some things
convenient. I'd feel more pressed if alternatives were not readily
available, but they are. The limit is on the unit size, not the total size.
-
-The IA-64 won't solve this problem, because there won't be enough
-software for business users. When I say "business users" I'm not
-referring to people who want an office suite for the desktop, but
-rather the data warehouse crowd. The reason that they're even
-considering Linux is that apps such as Oracle, DB/2, Tuxedo, and such
-are at last becoming available. I don't want to hear anyone talking
-about MySQL; it simply doesn't have the horsepower or even the basic
-features (ever hear of ROLLBACK WORK?) to run this kind of stuff.
Question: Do these applications have the ability to store data in raw
partitions?
-
-Scientific users -- the folks most people think of when the word
-"supercomputer" comes up -- may not care as much because they're all
-running custom stuff anyhow, and they're used to porting stuff at the
-drop of a hat. But that dog won't hunt in the business world. These
-guys really do need to handle a lot of data (the more the better as
-far as they're concerned), and they also need the applications that go
-along with them.
-
-Furthermore, the problem Linux has with large files isn't the OS, but
-the filesystem. glibc 2.1 doesn't have any problem with large file
-offsets, but ext2 does, and I suspect that the VFS layer does also.
Not sure about the VFS. ext2 definitely does. That's why, whenever this
discussion comes up, I always ask: why does ext2 need to be changed to support
2G+ files? Why not simply come up with another filesystem to support large
files?
ext2 is very good at what it does most. Adding 2G+ support would make it
less good at that. So why change it?
-
-> Unlike some other OSes I could name, Linux doesn't keep its swap-file
-> stored in the same filesystem as system and user data/apps. This is
-> the only file that could realistically approach 2GB, and that an
-> average user could come across. But even that example may be pushing
-> it.
-
-Let's not put up straw men, shall we? Big iron types (at least the
-ones who know what they're doing) aren't interested in pitiful little
-NT toys. The right comparison here is against Solaris and AIX, both
-of which have supported big files for several years now. They do use
-swap space, but if they're going to go 2 GB deep into it, they're
-going to have a problem, which is why they're going to have those big
-machines loaded to the gills with RAM (which is something else Linux
-needs to fix). Any home user going 2 GB into swap space is likewise
-going to be hurting badly for performance.
-
-2 GB just isn't all that much data these days. We haven't reached the
-point yet where a single image is 2 GB, but we're within striking
-distance. And let's not fall into the trap Windows did with its
-16-bit garbage.
I don't think this is the case. There is already 64-bit file offset support in
the library. The kernel can certainly handle disk arrays in the terabytes.
The only missing piece of the puzzle is a filesystem that naturally supports
large files. I simply believe that ext2 should not be that filesystem.
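To illustrate: with glibc 2.1's large-file interface a program can already ask
for an offset past 2GB, and whether it gets one then depends entirely on the
filesystem underneath. A minimal sketch (assuming the LFS names off64_t,
lseek64 and O_LARGEFILE are in your headers):

#define _LARGEFILE64_SOURCE
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* O_LARGEFILE says we are prepared to deal with >2GB offsets */
    int fd = open("bigfile", O_RDWR | O_CREAT | O_LARGEFILE, 0644);
    off64_t past2g = (off64_t)3 * 1024 * 1024 * 1024;  /* 3GB */

    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* On a filesystem with real large-file support both calls succeed;
       on ext2 one of them is where the 2GB limit shows up. */
    if (lseek64(fd, past2g, SEEK_SET) == (off64_t)-1)
        perror("lseek64");
    else if (write(fd, "x", 1) != 1)
        perror("write");
    close(fd);
    return 0;
}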
BAJ
------------------------------
From: [EMAIL PROTECTED] (Kurt Fitzner)
Subject: FS problems in 2.2.9 - is 2.2.10 better?
Date: 13 Jul 1999 04:54:46 GMT
I'm having some problems with my filesystem caching in 2.2.9. I took
out the bdflush/update daemon as suggested on www.linuxhq.org. My system
doesn't seem to be flushing dirty buffers to disk within any kind of sane
timeframe at all. After moderate use, I can keep the system almost totally
idle for an hour or more, and still find that doing a sync favours me with
between 5 and 30 seconds of completely solid disk activity.
I tried to quantify this, so I created a separate ext2 partition. I wrote
a program that wrote to 50 files in various ways, closed them, exited and then
waited. After 30 minutes, I hard-reset the machine, only to find that fully 1/3
of the files I had written to were damaged.
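For reference, the test program was essentially the following (a sketch only;
the directory name and sizes are made up). An explicit fsync() before each
close() would force the data to disk regardless of bdflush/update, but the
point is that even without it the dirty buffers should not sit around for
half an hour:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    char name[32], buf[8192];
    int i, fd;

    memset(buf, 'x', sizeof(buf));
    for (i = 0; i < 50; i++) {
        sprintf(name, "/test/file%02d", i);
        fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            continue;
        write(fd, buf, sizeof(buf));
        /* fsync(fd); */    /* uncomment to flush this file explicitly */
        close(fd);
    }
    return 0;               /* now leave the machine idle and watch sync */
}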
I have read conflicting reports about the update/bdflush daemon. Some
say it is better to remove it, some say you must remove it or it
will cause problems, and others say it should stay. If anyone could
give me some answers regarding the current official opinion on this, I'd
appreciate it. Do I still need this utility? I certainly need something,
because something is certainly broken.
Have these problems been addressed in 2.2.10? There are no notes on
it anywhere that I've been able to find which describe what has been done
in 2.2.10.
Thanks in advance for the info.
- Kurt.
------------------------------
From: Marcus Sundberg <[EMAIL PROTECTED]>
Subject: Re: NT to Linux port questions
Date: Tue, 13 Jul 1999 07:09:43 +0200
Robert Krawitz wrote:
>
> Matthew Carl Schumaker <[EMAIL PROTECTED]> writes:
> > > In unix, the 'objects' are called files. A process can have a handle to
> > > an open 'file'. It can be a (tcp/udp/unix) socket, a fifo, a terminal, a
> > > file, a sound card, whatever. It is, indeed, kind of an object.
> > >
> > True, but not all of these handles are universal. In MS Windows there is a data
> > type HANDLE that is used for any kind of handle, be it a file, window,
> > socket, device, etc.
And the same thing is true for Unix. The difference is that in Windows
you cannot (AFAIK) use the same function to read data from all objects.
> Windows may be a bit harder, but it's still not too bad; just use
> multiple threads or processes, with one thread waiting for the
> appropriate X event and writing a byte to a file descriptor that your
> main thread can select on. For that matter, skip the thread nonsense
> and use processes.
You don't have to resort to such ugly things for X; use ConnectionNumber(3),
which will give you a file descriptor you can select() on for X events.
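Something like this (a sketch; other_fd stands for whatever other descriptor
the rest of the program cares about):

#include <X11/Xlib.h>
#include <sys/select.h>

/* Block until either an X event or data on other_fd is available. */
void wait_for_x_or_fd(Display *dpy, int other_fd)
{
    int xfd = ConnectionNumber(dpy);   /* the X connection's descriptor */
    int maxfd = (xfd > other_fd ? xfd : other_fd);
    fd_set rfds;

    FD_ZERO(&rfds);
    FD_SET(xfd, &rfds);
    FD_SET(other_fd, &rfds);

    select(maxfd + 1, &rfds, NULL, NULL, NULL);

    if (FD_ISSET(xfd, &rfds)) {
        /* Drain the queue with XPending()/XNextEvent() as usual. */
        XEvent ev;
        while (XPending(dpy)) {
            XNextEvent(dpy, &ev);
            /* handle ev ... */
        }
    }
    /* likewise check FD_ISSET(other_fd, &rfds) and read() from it */
}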
//Marcus
--
===============================+====================================
Marcus Sundberg | http://www.stacken.kth.se/~mackan/
Royal Institute of Technology | Phone: +46 707 295404
Stockholm, Sweden | E-Mail: [EMAIL PROTECTED]
------------------------------
From: "Mas Nakachi" <[EMAIL PROTECTED]>
Subject: Advice on source control for cross-border devlpmnt
Date: Mon, 12 Jul 1999 22:05:12 -0700
Hey all,
This isn't actually a question about Linux, so I apologize in advance if it
isn't entirely appropriate for this newsgroup, but I have a question that I
thought the Linux development community would have some insight into...
The question involves source control management for a cross-border software
project at a small software company without the resources for
mega-sophisticated source-control tools. We use ClearCase, but not the
multi-site version, and we are having trouble with our foreign development
teams accessing our development branches due to ISP problems in their
geographic locale. Basically, the slow connection makes everything from
checking stuff out to checking stuff in an insanely long and laborious
process, so they avoid merging their changes into our area on a timely basis.
Consequently, they end up checking in tons of shit at once at very, very
distant intervals, which then results in mega-painful integration: our area is
so dynamic that by the time they check stuff in, it is way, way out of sync
and invariably blows up the build.
We have proposed that they take snapshot views of defined milestone builds,
build on those snapshots, and merge roughly once a week into our dynamic
branch, after which we integrate, stabilize, and then repeat the process.
Other, more convoluted ideas have been tossed around, but nothing seems
particularly elegant.
I was wondering if anyone has had experience with such a development
situation and could offer some words of wisdom on the best way to manage
this source control, since it nearly sank us on our previous release.
I was also wondering how source control works for Linux development in
general, and whether there is anything to be learned from it for the
scenario above. After all, Linux seems to be the ultimate distributed
development project, so I'm thinking there must be something that is
applicable...
I would greatly appreciate any advice on this from you guys since you are
the experts on this stuff.
Thanx in advance
MN
------------------------------
From: Gary Lawrence Murphy <[EMAIL PROTECTED]>
Subject: Re: rebuilding kernel on RedHat 5.0
Date: 13 Jul 1999 01:35:07 -0400
Reply-To: [EMAIL PROTECTED]
You can find the RH Kernel configuration information in the Beta-Books
site at http://www.mcp.com/ but in a nutshell ...
make xconfig : pops up all those glorious options, and each has help
and the help usually recommends a default
once you finish xconfig (or just "make config" if you don't have X
installed) you then need to prep the sources:
make dep clean : Two commands here, assuming the first works. dep will
ensure files properly depend on others, and clean will
ensure there are no stray includes or assembler files
that don't know you've changed options
make zImage : I hate this one. It just makes the kernel image and leaves
it up to you to find it (in arch/...) and move it to someplace
useful. I much prefer
make zlilo : Makes the zImage kernel and installs it in root, then
runs LILO to install it --- WARNING: You need to have
your lilo.conf previously prepared to accept the kernel
image at /vmlinuz (see my chapter on the www.mcp.com site)
make modules modules_install : Again, two commands, one to make modules the
other I leave for you to guess ;)
As for the differences between the two dummies books, they are both
right, and there are other ways as well! :) The *nice* thing about Linux
is that it gives you choices. Remember that.
--
Gary Lawrence Murphy <[EMAIL PROTECTED]> TeleDynamics Communications Inc
Business Telecom Services : Internet Consulting : http://www.teledyn.com
Linux/GNU Education Group: http://www.egroups.com/group/linux-education/
"Computers are useless. They can only give you answers."(Pablo Picasso)
------------------------------
From: [EMAIL PROTECTED] (Arun Sharma)
Subject: Re: Advice on source control for cross-border devlpmnt
Reply-To: [EMAIL PROTECTED]
Date: Tue, 13 Jul 1999 06:02:05 GMT
On Mon, 12 Jul 1999 22:05:12 -0700, Mas Nakachi <[EMAIL PROTECTED]>
wrote:
> I would greatly appreciate any advice on this from you guys since you are
> the experts on this stuff.
Linux kernel development works by exchanging a bunch of patches, which you can
apply using:
patch(1) - apply a diff file to an original
But the use of CVS (http://www.cyclic.com) is very widespread among
free software development projects.
You can either use remote CVS or use a tool called cvsup for remote
synchronization.
-Arun
------------------------------
From: Nico Zigouras <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.misc,comp.os.linux.setup
Subject: Help - Deleted /var/log/* from RedHat 5.2 system!
Date: Tue, 13 Jul 1999 02:19:23 -0400
Hi folks:
I deleted /var/log from my RedHat 5.2 system, along with all the files in that
directory. Now all my web server access and error logs are gone, and they
are not being regenerated. The error_log and access_log in my /etc/httpd/logs
directory were also deleted. Any help? Thanks.
------------------------------
From: Takeyasu Wakabayashi <[EMAIL PROTECTED]>
Subject: Re: anonymous memory mapping
Date: 13 Jul 1999 15:20:14 +0900
Thank you for your answer.
Wolfram Gloger <[EMAIL PROTECTED]> writes:
>
> Anonymous shared memory is (or at least was perceived to be) difficult
> to implement in Linux. The SYSV shm is an equivalent alternative so
> the priority of extending mmap() hasn't been high enough.
>
Then, how about POSIX shared memory? I know there's an implementation
by K.A. Knizhnik, but is there any plan to support it at the system
call level?
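(For context, the SysV route mentioned above looks roughly like this -- a
sketch only, sharing an anonymous segment between a parent and a forked
child:)

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

int main(void)
{
    /* IPC_PRIVATE gives a fresh segment with no externally visible key */
    int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
    char *mem = (id < 0) ? (char *)-1 : (char *)shmat(id, NULL, 0);

    if (id < 0 || mem == (char *)-1) {
        perror("shmget/shmat");
        return 1;
    }
    if (fork() == 0) {                 /* the child writes into the segment */
        strcpy(mem, "hello from the child");
        _exit(0);
    }
    wait(NULL);
    printf("parent sees: %s\n", mem);  /* the parent reads it back */
    shmdt(mem);
    shmctl(id, IPC_RMID, NULL);        /* release the segment */
    return 0;
}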
--
Takeyasu Wakabayashi,
Faculty of Economics, Toyama University
[EMAIL PROTECTED]
------------------------------
Date: Tue, 13 Jul 1999 01:52:41 -0500
From: Michael Lee Yohe <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Help - Deleted /var/log/* from RedHat 5.2 system!
Crossposted-To: comp.os.linux.misc,comp.os.linux.setup
> I deleted /var/log from my RedHat 5.2 system, along with all the files in that
> directory. Now all my web server access and error logs are gone, and they
> are not being regenerated. The error_log and access_log in my /etc/httpd/logs
> directory were also deleted. Any help? Thanks.
You can re-install the httpd package, which will restore the relevant
directories and their permissions. Here is a dump of my /var/log
tree structure (RH6, but little has changed between the two).
Otherwise, the other files will be recreated on the next reboot.
/var/log
|-- boot.log
|-- cron
|-- dmesg
|-- htmlaccess.log
|-- httpd
| |-- access_log
| `-- error_log
|-- lastlog
|-- maillog
|-- messages
|-- netconf.log
|-- pacct
|-- samba
| |-- log.nmb
| `-- log.smb
|-- savacct
|-- secure
|-- sendmail.st
|-- spooler
|-- usracct
|-- uucp
| |-- Debug
| |-- Log
| `-- Stats
|-- wtmp
`-- xferlog
Michael Lee Yohe ([EMAIL PROTECTED])
BRADS A3 Diagnostic Kernel Lead Engineer
PEI Electronics, Inc.
------------------------------
From: Peter Ross <[EMAIL PROTECTED]>
Subject: Re: Memory Management Bug
Date: 13 Jul 1999 17:01:04 +1000
Stefan Proels <[EMAIL PROTECTED]> writes:
>Peter Samuelson wrote:
>> [Stefan Proels <[EMAIL PROTECTED]>]
>> > I have in no way argued that it's a Bad Thing to handle allocation this
>> > way. I just don't think that it's a Good Thing to enable every
>> > ordinary user to crash the system.
>>
>> You may well have hit a bug then. You should never be able to actually
>> crash the system. What version of Linux? (While I don't have resource
>I'm running a 2.2.7 kernel from a SuSE 6.1 distribution.
>> problems since I'm essentially single-user here) Linus claims that late
>> late 2.1.x releases should perform much more sanely in tight-memory
>> situations than before -- this was since maybe 2.1.125 or so. In
>> particular, Linux 2.0.x was pretty bad at this.
>>
>> Do you have memory overcommit turned on or off? Makes a difference.
>It's turned off. The program I posted allocates the memory in small
>chunks; as I understand the docs, memory overcommit will have no effect
>on this code because it only checks for huge individual allocations.
>Anyway, it's turned off.
I am not sure you understand what overcommit is.
Say you have a system with 32MB of RAM and 12MB of swap.
Say at some instant in time 40MB of memory has been malloc'd. This
leaves you with 4MB of memory available.
If you malloc 5MB of memory, the malloc will fail with overcommit turned
off. All that matters is that 5MB is bigger than 4MB, not what the sizes
of the individual requests are; they could just as easily be 5KB and 4KB.
If overcommit is turned on, then the malloc will succeed.
However, if you then attempt to use all of that 5MB of memory, there will
be problems.
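A crude way to see the difference is to malloc in a loop until it fails (a
sketch only; the 1MB chunk size is arbitrary). With overcommit off the loop
stops when the committed total reaches RAM+swap; with overcommit on, malloc
keeps succeeding and the trouble only starts when the pages are actually
touched:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t chunk = 1024 * 1024;        /* ask for 1MB at a time */
    size_t total = 0;
    char *p;

    while ((p = malloc(chunk)) != NULL) {
        memset(p, 0, chunk);           /* actually touch the pages */
        total += chunk;
    }
    printf("malloc failed after %lu MB\n", (unsigned long)(total >> 20));
    return 0;
}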
Pete.
--
+----------------------------------------------------------------------+
| Peter Ross M Sci/Eng Melbourne Uni (change - to . for email) |
| email: [EMAIL PROTECTED] WWW: http://www.cs.mu.oz.au/~petdr |
+----------------------------------------------------------------------+
------------------------------
From: "Ilhoon,Shin" <[EMAIL PROTECTED]>
Subject: mkfs.sysv
Date: Wed, 14 Jul 1999 16:59:14 +0900
Where can I get mkfs.sysv?
I need the file to mount SysV filesystems.
If you know, please answer me.
------------------------------
From: [EMAIL PROTECTED] (Mike Kennedy)
Subject: Lost Cursor in console mode - how to recover
Date: 13 Jul 1999 07:31:20 GMT
In console mode the standard blinking underline cursor was lost
after exiting dosemu (funny app). I need to get it back. How?
Rebooting works of course but surely there is another way. All
the howto/faq seem to talk about fonts, mouse cursors or X. I
need the console mode cursor.
TIA
------------------------------
From: [EMAIL PROTECTED] (Mike Kennedy)
Subject: Lost console mode cursor - how to recover
Date: 13 Jul 1999 07:33:54 GMT
I lost the console mode cursor (fast blinking underline) after
exiting dosemu (dos app). Rebooting gets it back but this is
not good.
All the howto/faq talk about mouse cursor, fonts, X cursors.
Anyone know how to get the cursor in console mode?
TIA
------------------------------
From: Emile van Bergen <[EMAIL PROTECTED]>
Subject: Re: micro kernel
Date: Tue, 13 Jul 1999 10:10:05 +0200
On Mon, 12 Jul 1999, Olivier Scalbert wrote:
>Hello,
>
>I would like to build a very little micro-kernel (pico kernel?) on a PC,
>just from an educational point of view.
>I would like to develop it on a Linux box, produce a bootable floppy and
>boot it on another PC. I have a few questions:
>How can I write a sector to a floppy?
>How can I produce non-relocatable code?
>C, C++ compilers: which one to use?
Take a look at my own little pet project: a kernel head that does the
setup of the CPU (segment descriptor tables, interrupts, page tables) and the
interrupt controller. It also has some generic functions (like memcpy,
printf and such) and can be compiled on any Linux box. The bootloader
is not LILO but GRUB, from Erik... something -- the same loader used to
boot the Hurd. You can find it on the net (I don't remember where I got it from).
You can get it from
ftp://n293.ede.telekabel.euronet.nl/pub/evb/kernelstart.tar.gz
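To the first of the questions above (writing a sector to a floppy), you can
just write the raw device; a minimal sketch, assuming the floppy is /dev/fd0
and the boot sector image is exactly 512 bytes:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char sector[512];
    int in, out;

    if (argc != 2) {
        fprintf(stderr, "usage: %s bootsector.bin\n", argv[0]);
        return 1;
    }
    in = open(argv[1], O_RDONLY);
    out = open("/dev/fd0", O_WRONLY);  /* sector 0 of the raw floppy device */
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }
    if (read(in, sector, sizeof(sector)) != sizeof(sector) ||
        write(out, sector, sizeof(sector)) != sizeof(sector)) {
        perror("read/write");
        return 1;
    }
    close(in);
    close(out);
    return 0;
}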
Success ;-)
--
M.vr.gr. / Best regards,
Emile van Bergen (e-mail address: [EMAIL PROTECTED])
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************