Linux-Development-Sys Digest #949, Volume #6 Sun, 11 Jul 99 07:13:46 EDT
Contents:
unresolved symbols in driver ("James S. Johnson")
Re: kernel (Christopher B. Browne)
Re: kernel (Alexander Viro)
Re: suse 6.0 + gdb "steps" into ALL functions (Arun Sharma)
Re: CD-ROM File Time Bug (Peter Samuelson)
Re: Memory swapper abstraction (RFD) (Peter Samuelson)
Re: Memory Management Bug (Peter Samuelson)
Re: "DbgPrint" in Linux? (Yung-Hsiang Lu)
Re: linux for single user (Tobias Anderberg)
what is i_next_alloc_goal ? ("Soohyung Lee")
Re: Memory Management Bug (Andi Kleen)
Re: when will Linux support > 2GB file size??? ([EMAIL PROTECTED])
----------------------------------------------------------------------------
From: "James S. Johnson" <[EMAIL PROTECTED]>
Subject: unresolved symbols in driver
Date: Thu, 08 Jul 1999 17:46:10 -0400
Reply-To: [EMAIL PROTECTED]
RedHat 5.1, kernel 2.0.34, i586
I have written a driver for a board that is connected to the parallel
port. The driver works correctly when installed with insmod, but I would
like to use kerneld. The driver is in /lib/modules/preferred/misc. When
I run depmod it does not recognize several routines:
# depmod -a -e
/lib/modules/preferred/misc/ad7008.o: unresolved symbol(s)
request_region
unregister_chrdev
release_region
register_chrdev
verify_area
check_region
#
Several of these are part of the port registration. Using insmod, the
port is registered correctly in /proc/ioports, so it appears that the
functions are recognized when installed with insmod. What should I do to
get depmod to resolve these symbols?
jjohnson
------------------------------
From: [EMAIL PROTECTED] (Christopher B. Browne)
Subject: Re: kernel
Reply-To: [EMAIL PROTECTED]
Date: Sun, 11 Jul 1999 03:24:58 GMT
On Sun, 11 Jul 1999 01:07:51 +0200, Astro <[EMAIL PROTECTED]> posted:
>Josef Möllers <[EMAIL PROTECTED]> wrote in message
>[EMAIL PROTECTED]
>Astro wrote:
>>
>> What is that, kernel?
>
>If you take "operating system" as being the kernel plus all the
>utilities and subsystems necessary to actually get the whole thing up
>and running, "kernel" refers to that part of the operating system that
>directly handles the hardware (interrupts, devices, memory management,
>et al.). Wrt Linux, "kernel" is that part of the entire operating
>system whose source code usually resides in "/usr/src/linux".
>The kernel is the only part of the entire operating system that must run
>in some kind of privileged mode where it can access i/o ports, memory
>management registers, etc.
The term "kernel" is fundamentally a mathematical term that is used in
reference to vector spaces.
Memory is not serving well right now, and my nearest abstract algebra
book is Not In The Same Country; I'm going to have to give two
descriptions, and suggest that one is right.
A vector space can include something like the (x,y) plane of real
numbers.
You can define such a vector space based on some set of vectors that
may be used to represent all points in that vector space.
Thus, in three dimensions (x, y, z) , one might pick the two vectors
(0, 0, 1) and (0, 1, 0), where linear combinations of them will allow
you to express any point on the plane where x=0.
An interesting operation is to try to come up with a minimal set of
orthogonal vectors (e.g. - they are, in some sense, perpendicular to
one another) that cover the desired vector space, and then normalize
them so that they have overall length 1.
With the above example, one could choose the three vectors (0, 0, 2),
(0, 1, 1), and (0, 1, 3), and they would be usable to represent the
same plane. They're not as "natural" a set of choices, as they are
redundant and have somewhat "odd" values to use to express the vector
space.
Now comes the leap to possible error; a kernel is either:
a) The set of orthogonal normal vectors (e.g. - like (0, 1, 0) and
(0, 0, 1)), or
b) The vector space represented by those normal vectors.
I *think* the "kernel" is the latter, and will assume that from here
on in. (Conclusions change a bit if that's wrong.)
Thus far, this looks pretty irrelevant. Nothing to do with operating
systems, right?
Nope. The above is *highly* relevant.
An operating system "kernel" represents a set of code to support a
defined set of operators (e.g. - like the orthogonal normal vectors)
that may then be composed together by "user mode" programs to make the
computer do all the sorts of things that the kernel supports (e.g. -
a vector space of all the combinations of operations).
The usual view of a "kernel" in the context of an operating system is
that minimal set of operators.
The *practical* upshot of this is that a well-designed operating
system will involve a carefully designed "kernel" that has a useful
set of operators to do things like:
- Managing memory
- Managing tasks
- Managing devices
Those operators are then used by application developers to create
applications.
If the kernel is defined vaguely, the reliability of the system will
be questionable, and this is something that we see in the way
Microsoft has shoved all sorts of things into the "kernel" of their
systems.
By defining those operators very carefully (preferably going to the
extent of defining how they work in enough detail as to allow
mathematical analysis of their composition) this allows programmers to
write reliable programs that depend on the operators.
--
Those who do not learn from history, loop.
[EMAIL PROTECTED] <http://www.ntlug.org/~cbbrowne/oses.html>
------------------------------
From: [EMAIL PROTECTED] (Alexander Viro)
Subject: Re: kernel
Date: 10 Jul 1999 23:50:52 -0400
In article <[EMAIL PROTECTED]>,
Christopher B. Browne <[EMAIL PROTECTED]> wrote:
>The term "kernel" is fundamentally a mathematical term that is used in
>reference to vector spaces.
>
>Memory is not serving well right now, and my nearest abstract algebra
>book is Not In The Same Country; I'm going to have to give two
>descriptions, and suggest that one is right.
[fairly twisted description of basis]
>a) The set of orthogonal normal vectors (e.g. - like (0, 1, 0) and
> (0, 0, 1)), or
>b) The vector space represented by those normal vectors.
>
>I *think* the "kernel" is the latter, and will assume that from here
>on in. (Conclusions change a bit if that's wrong.)
If you are talking about vector spaces (or groups, etc.) - you are
seriously confused. There is no such thing as kernel of vector space or
its subspaces. Now, if you have a linear mapping between two vector
spaces you can look at the vectors with zero image. They form a subspace.
Which is called the kernel of the linear mapping. The thing you tried to
describe is an orthonormal basis.
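Concretely, take the linear map T: R^3 -> R with T(x, y, z) = x. Its
kernel is the set of vectors sent to zero:

```latex
\ker T = \{\, v \in V : T(v) = 0 \,\}
       = \{\, (x, y, z) \in \mathbb{R}^3 : x = 0 \,\}
```

which is precisely the x=0 plane from the earlier example -- but it
arises as the kernel of a mapping, not as the kernel of a space.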
>Those who do not learn from history, loop.
>[EMAIL PROTECTED] <http://www.ntlug.org/~cbbrowne/oses.html>
The term came from MULTICS. Kernel as opposed to shell - the latter
is what you are interacting with, the former is the guts of system.
--
"You're one of those condescending Unix computer users!"
"Here's a nickel, kid. Get yourself a better computer" - Dilbert.
------------------------------
From: [EMAIL PROTECTED] (Arun Sharma)
Subject: Re: suse 6.0 + gdb "steps" into ALL functions
Reply-To: [EMAIL PROTECTED]
Date: Sun, 11 Jul 1999 04:46:07 GMT
On 10 Jul 1999 23:27:07 GMT, someone <[EMAIL PROTECTED]> wrote:
> Hi there,
>
> Up to SuSE version 5.3, it was OK for me to use gdb to debug my progs.
> But with version 6.0, gdb insists on "stepping" into functions I didn't write
> (printf, read, write...).
>
> How can I prevent GDB from entering those functions?
>
Use the "next" command.
>
> As I use threaded progs, GDB always stops on signal SIGUSR1 after the
> "pthread_create" call. Will the solution to the previous problem cure this
> annoying behavior too?
>
Read the pthreads FAQ. You can put a couple of lines in your $HOME/.gdbinit
to ignore those signals.
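Something like these two lines in $HOME/.gdbinit (the usual LinuxThreads
recipe -- check the pthreads FAQ for the exact signals your libc uses):

```
handle SIGUSR1 nostop noprint pass
handle SIGUSR2 nostop noprint pass
```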
-Arun
------------------------------
From: [EMAIL PROTECTED] (Peter Samuelson)
Subject: Re: CD-ROM File Time Bug
Date: 11 Jul 1999 00:36:30 -0500
Reply-To: Peter Samuelson <[EMAIL PROTECTED]>
[Charles Sullivan <[EMAIL PROTECTED]>]
> Can someone explain what's going on here?
Hmmm, I can try.
> Under MS-DOS (Win 98) I copied two files from a CD to my hard drive,
> to a floppy disk, and to a ZIP disk. One file has a create date in
> January, the other in July.
So after you copied them, are the dates *still* January and July?
I.e. did DOS preserve the CD timestamps or write new ones? I'm just
making sure you're seeing what you think you're seeing. I don't have
access to an MS-DOS machine (thank goodness) but Windows NT 4 seems to
be preserving timestamps on floppies.
> I am in the Eastern US time zone. My system clock is presently set
> to today's date (10 July 1999) and time, which is Eastern Daylight
> Time.
Meaning what? Normally the system (kernel) clock is set in UTC, and
the hardware clock is too. For dual-boot compatibility with legacy
systems, some distributions offer the option of putting the hardware
clock in local time and correcting the kernel clock via a boot script.
The time *you* see is governed by either your TZ variable or the libc
default, which is the contents of /etc/timezone. `ls' converts kernel
time using this.
> here's what I get for the time. (This is the hour only
[...]
> File MS-DOS Linux=> CDROM HDD Floppy ZIP
> ---- ------ ------------- --- ------ ---
> Jan 22 17 21 21 21
> July 12 8 12 12 12
I presumed to "correct" for what looks like proportional spacing in
your editor. (For shame!) In fixed spacing, is the above correct?
> If I reset my hardware clock to a date in January and reboot Linux,
> the 'date' command reports the date I have set, denoted 'EST'. But
> the file times in the various media reported by ls are identical to
> those shown above.
Yes. The timestamps are the same. And you didn't change your
timezone, so they will be presented the same way. You seem to be
expecting that if you're in EDT, a January file will be listed in EDT.
Not so -- it was created during non-daylight time, so is listed in EST.
Similarly, a July file is listed in EDT, since July is during daylight
savings. Thus the "reported" time will be the same year-round,
assuming the file timestamp isn't touched.
> It looks like Linux is assuming that the CD file are UTC and is
> figuring that out from the DOS date (assumed to be Eastern time). Is
> this correct?
Yes except substitute "libc timezone" for "DOS date". Linux assumes
the CD was mastered (actually that the ISO9660 filesystem was created)
in UTC. So a January file is reported 5 hours behind its actual
timestamp, a July file 4 hours behind.
In my limited testing here, it looks like mtools doesn't do any
timezone correction (hey, it's trying to be as much like MS-DOS as
possible so I guess it really shouldn't), but of course regular system
utilities (like GNU fileutils) do.
> I don't understand why the difference of one hour in the January file on
> the other media.
Not sure about that part. Rerun your test, if you will, after typing
export TZ=UTC0
into your shell. That makes `ls' tell you exactly what times the
kernel gives it.
--
Peter Samuelson
<sampo.creighton.edu!psamuels>
------------------------------
From: [EMAIL PROTECTED] (Peter Samuelson)
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Memory swapper abstraction (RFD)
Date: 11 Jul 1999 01:06:45 -0500
Reply-To: Peter Samuelson <[EMAIL PROTECTED]>
[Sean Walton <[EMAIL PROTECTED]>]
> Where do I find info? Is it being done in Linux?
Yes, but it isn't as interesting as you think. I am no cluster guru
myself, but I've seen the arguments go back and forth between some guy
who's been developing DSM for Linux and people like Larry McVoy, who
apparently has done quite a bit of distributed processing stuff in the
past. Larry maintains that DSM is inherently inefficient compared to
other techniques of parallel processing across nodes (notably
message-passing), and the only thing it has going for it is ease of
programming. He also maintains that anyone doing serious number
crunching will be writing custom applications and won't mind the
additional bother of message-passing API's like PVM. Whoever it is
that maintains DSM for Linux (sorry can't remember the guy's name)
comes off as never actually having written those kinds of applications
in the real world like Larry has, but nevertheless he doggedly insists
that DSM is the way to go. It's an old and tired series of flamewars.
You also alluded to process migration, which is a hypothetical means of
moving a running process from one machine to another. It is fraught
with gotchas, like how to maintain state of TCP/IP connections, and is
widely considered more trouble than it's worth, at least to try to
shoehorn it into Linux. Again, there are notable exceptions to the
general opinion.
What is considered much more feasible is to define a protocol similar
to what X session managers use. X session managers (or at least CDE)
have a way to tell an X app that it's logout time. The app then
checkpoints to some temporary file and passes back to the session
manager an appropriate command line for restoring state from that file.
Next time the user logs in, the session manager runs the app with the
specified command line and the user sees approximately what was on the
screen when she logged out last. A similar API could reasonably be
defined for just any old process, probably using a socket in /tmp, and
an app compiled to take advantage of it could feasibly be migrated from
one machine to another. The external migration support would be
relatively easy, in fact. The disadvantage would be that it's not
transparent, i.e. you can't migrate *any* process, but on the other
hand, if you're running a large enough operation to need process
migration you could probably modify your apps to support such an API.
--
Peter Samuelson
<sampo.creighton.edu!psamuels>
------------------------------
From: [EMAIL PROTECTED] (Peter Samuelson)
Subject: Re: Memory Management Bug
Date: 11 Jul 1999 01:16:02 -0500
Reply-To: Peter Samuelson <[EMAIL PROTECTED]>
[Stefan Proels <[EMAIL PROTECTED]>]
> I have in no way argued that it's a Bad Thing to handle allocation this
> way. I just don't think that it's a Good Thing to enable every
> ordinary user to crash the system.
You may well have hit a bug then. You should never be able to actually
crash the system. What version of Linux? (I don't have resource
problems myself, since I'm essentially single-user here.) Linus claims
that late 2.1.x releases should perform much more sanely in tight-memory
situations than before -- this was since maybe 2.1.125 or so. In
particular, Linux 2.0.x was pretty bad at this.
Do you have memory overcommit turned on or off? Makes a difference.
cat /proc/sys/vm/memory_overcommit
> setrlimit doesn't fix the problem. You still could have multiple
> processes (users) allocating like crazy.
It does help; you can limit the number of processes. But in practice,
you're right. In the paraphrased words of Albert Cahalan's wishlist
page, "It may be acceptable for a user to run 20 processes. It may be
acceptable for a process to use 20 MB of RAM. That does not mean it is
acceptable for a user to use 400 MB of RAM." This is a limit check
Linux could use -- per-user vm.
--
Peter Samuelson
<sampo.creighton.edu!psamuels>
------------------------------
From: [EMAIL PROTECTED] (Yung-Hsiang Lu)
Subject: Re: "DbgPrint" in Linux?
Date: 11 Jul 1999 06:14:12 GMT
Does anybody know how to do this (redirect printk to the serial line)?
I plan to do a performance study of a hard disk, so logging to the
hard disk doesn't seem like a good approach, because it would pollute
the data.
Thanks!
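For the kernel's own printk output, a serial console can be requested at
boot; with LILO that is a one-line change (the device and speed here are
examples -- pick the port your null-modem cable is on):

```
# /etc/lilo.conf
append="console=ttyS0,9600"
```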
:
: For programs, you can write your own syslog messages, or even just write to
: stderr, and when starting the program redirect stderr to the serial line.
:
: --
: Michael Meissner, Cygnus Solutions
: PMB 198, 174 Littleton Road #3, Westford, Massachusetts 01886
: email: [EMAIL PROTECTED] phone: 978-486-9304 fax: 978-692-4482
--
Sincerely,
Yung-Hsiang Lu
[EMAIL PROTECTED]
------------------------------
From: [EMAIL PROTECTED] (Tobias Anderberg)
Subject: Re: linux for single user
Reply-To: [EMAIL PROTECTED]
Date: Sat, 10 Jul 1999 23:00:36 +0200
>Hi Everyone,
>Does anyone know if there is a commercial version of Linux for a single-user,
>single-device system, without all the other complicated components? This
>version is meant to be installed on hand-held devices.
You are talking about an embedded Linux. Check out ELKS, Embedded Linux
Kernel S<omething>. Damn, don't have the URL here, just search for it at
your favourite search engine.
Btw, ELKS isn't commercial.
--
tobias
------------------------------
From: "Soohyung Lee" <[EMAIL PROTECTED]>
Subject: what is i_next_alloc_goal ?
Date: Fri, 9 Jul 1999 12:51:29 +0900
Hi, everyone!
There are two fields, 'i_next_alloc_goal' and 'i_next_alloc_block', in
the 'ext2_inode_info' structure.
What is the purpose of these two fields?
I guess that they are for sequential allocation,
but I don't know their exact meaning and purpose.
Could anyone kindly tell me what these are?
I really need your help.
Thanks in advance.
- Lee -
------------------------------
From: Andi Kleen <[EMAIL PROTECTED]>
Subject: Re: Memory Management Bug
Date: 11 Jul 1999 10:33:51 +0200
Stefan Proels <[EMAIL PROTECTED]> writes:
> Sean Walton wrote:
> >
> > There should be a FAQ on this: it is a common *and*avoidable* programming
> > problem. Linux uses a "lazy allocation" scheme: a program can allocate as
> > much memory as it wants, but the actual physical allocation does not occur
> > until written to. So, it's actually possible to allocate 2GB (4GB on some
> > patched systems) where only 128MB of VM is really available.
> > Is this a defect? No, it was literally designed this way. [BTW, your
> > system may not be locked up at all--it's possibly just thrashing.] There
>
> No, it was completely locked up. No disk I/O. It didn't even allow me to
> switch to another console.
2.2.* has some deadlocks in its out-of-memory handling. Fixes are currently
being tested in the 2.3 series and will be ported to 2.2 once they prove stable.
The workaround ATM is to set sensible ulimits.
-Andi
--
This is like TV. I don't like TV.
------------------------------
From: [EMAIL PROTECTED]
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 11 Jul 1999 02:47:06 -0700
In article <7m8qtt$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
thanks for the info.
>
>So BTW why exactly do you need 2GB+ files?
not too important. I just need it because I back up my data from a large
disk to a smaller disk, and I would like to tar/gzip the data on the large
disk into one .tar.gz file on the small disk without having to break
it into multiple files. (The tar.gz file comes out to about 5 GB,
since the data on the large disk is about 20 GB.)
Maybe I'll just wait until IA-64 is out next year.
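One workaround that doesn't need a new kernel or CPU: pipe the tar
stream through split so no single file ever reaches 2 GB, then cat the
pieces back together to restore. A self-contained sketch (the /tmp
paths and the 1k chunk size are demo values; in real use you would point
it at the large disk and use something like -b 1900m):

```shell
#!/bin/sh
set -e
SRC=/tmp/split-demo-src
OUT=/tmp/split-demo-out

rm -rf "$SRC" "$OUT"
mkdir -p "$SRC" "$OUT/restore"
echo "sample data" > "$SRC/sample.txt"

# Back up: stream tar+gzip into split, so each piece stays under the cap.
tar czf - -C /tmp split-demo-src | split -b 1k - "$OUT/backup.tar.gz."

# Restore: concatenate the pieces back into one stream and untar.
cat "$OUT"/backup.tar.gz.* | tar xzf - -C "$OUT/restore"

cmp "$SRC/sample.txt" "$OUT/restore/split-demo-src/sample.txt" \
  && echo "round trip OK"
```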
Mk.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************