Linux-Development-Sys Digest #927, Volume #7 Thu, 1 Jun 00 21:13:18 EDT
Contents:
Re: Winmodems (Re: Need ideas for university funded project for linux) (Mark Wilden)
Re: Problem with kernel module programming... (David Wragg)
LILO Configuration Problem ("Jay Randall")
driver needed for /dev/bpp
Re: Why compiling "-static" creates HUGE binary??? (Daniel R. Grayson)
Re: assembler problems (Robert Redelmeier)
Re: 2 GB File size limit? (Christopher Browne)
Re: Need ideas for university funded project for linux (Christopher Browne)
Re: Need ideas for university funded project for linux (Christopher Browne)
TCP/IP and Drivers - Making the connections. ("Michael")
Re: Detecting whether another process has a file open (Charles Blackburn)
Re: Why compiling "-static" creates HUGE binary??? (Graham Stoney)
----------------------------------------------------------------------------
From: Mark Wilden <[EMAIL PROTECTED]>
Crossposted-To:
comp.os.linux,comp.os.linux.development,comp.os.linux.development.apps,comp.os.linux.misc,comp.os.linux.setup,comp.os.linux.advocacy
Subject: Re: Winmodems (Re: Need ideas for university funded project for linux)
Date: Thu, 01 Jun 2000 21:16:08 +0100
hac wrote:
>
> Go into your BIOS setup, and turn off support for PnP O/S. That's the
> option that lets Windows change the assignments.
There may even be an option to do this on an IRQ-by-IRQ basis (which I
prefer, since generally PnP works well on my system).
------------------------------
From: David Wragg <[EMAIL PROTECTED]>
Subject: Re: Problem with kernel module programming...
Date: 01 Jun 2000 00:15:05 +0000
"Robichaud, Jean-Philippe [BAN:6S33:EXCH]" <[EMAIL PROTECTED]>
writes:
> For some reason, I must program a kernel module that pings another
> computer... My problem is that if I include the netinet/ip_icmp.h
> file, I get a lot of "was redefined" errors, and if I don't include it,
> I get undefined symbol errors... Any tips?
You can't use any user-land headers from kernel source files. But
since you can't call or use much of the stuff declared in those
headers, this isn't much of a loss. If there are structs and
constants that you want to use from those headers, you should make
your own headers containing those things, or add them to existing
kernel headers.
But in this case the kernel headers should already have everything you
need, because the kernel already knows how to parse ICMP ECHO REQUEST
packets and generate ECHO REPLY packets. You probably ought to start
by looking at net/ipv4/icmp.c, since this contains the code which
already handles ICMP ECHO packets.
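For instance, the kernel's own headers already provide the structs and
constants (a minimal sketch against 2.2-era headers; the helper function
is my own invention):

  #include <linux/types.h>
  #include <linux/ip.h>      /* struct iphdr */
  #include <linux/icmp.h>    /* struct icmphdr, ICMP_ECHO, ICMP_ECHOREPLY */
  #include <linux/in.h>      /* IPPROTO_ICMP */

  /* hypothetical helper: is this IP datagram an ICMP echo request? */
  static int is_echo_request(struct iphdr *iph)
  {
      struct icmphdr *icmph;

      if (iph->protocol != IPPROTO_ICMP)
          return 0;
      icmph = (struct icmphdr *)((__u8 *)iph + iph->ihl * 4);
      return icmph->type == ICMP_ECHO;
  }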
> P.S., here is the entire file I'm working on... (it is not complete,
> but it should work like this...) there is some part of the code
> doing nothing right now, but that's what I'm working on...
It seems that you are trying to move large chunks of code from the
userland ping program into the kernel. This is not a good approach;
even if you do eventually get it working inside the kernel, you will
end up rewriting almost all of it.
Without knowing why you want to do this, it is hard to give more
precise advice.
David Wragg
------------------------------
From: "Jay Randall" <[EMAIL PROTECTED]>
Subject: LILO Configuration Problem
Date: Thu, 1 Jun 2000 14:23:33 -0700
Hello,
I am rebuilding the kernel. The machine I am using dual-boots Windows NT and
Linux. As a fallback in case the newly built kernel does not work, I am
keeping a backup of the original kernel to boot from. As a test, I copied
the kernel to *.OLD and added an entry in lilo.conf pointing to the newly
created backup. I then ran lilo. The result was a warning about /dev/hdb7
not being on the first drive, followed by messages about adding Linux *,
LinuxOld and dos.
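For reference, the relevant lilo.conf entries would look roughly like
this (a sketch only; the device names and image paths are my guesses
from the description above):

  boot = /dev/hdb7           # LILO in the Linux partition, not the MBR
  image = /boot/vmlinuz      # the newly built kernel
      label = Linux
      root = /dev/hdb7
      read-only
  image = /boot/vmlinuz.OLD  # the backup copy
      label = LinuxOld
      root = /dev/hdb7
      read-only
  other = /dev/hda1          # the Windows NT partition
      label = dos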
When I boot, I choose Linux from the Windows OS Loader, which invokes LILO.
However, now that I have changed lilo.conf, LILO prints out LIL- and locks
up.
Does anybody have an idea why this happens?
Is there some problem with LILO not being on the MBR?
P.S. I subsequently went back to the old lilo.conf, ran lilo again, and I am
still getting the same results (LIL- and locks up).
Thanks for the help,
Jay.
------------------------------
From: damajah@kzahq. ()
Subject: driver needed for /dev/bpp
Date: 1 Jun 2000 21:31:15 GMT
Hi, I am involved in porting a hardware/software system from RedHat 4.2 to
6.2. The software needs to use the /dev/bpp1 device. Looking through the
kernel source I found that this seems to be featured only in the sbus (sparc
hardware) section (drivers/sbus/char/bpp.c), but it does seem to contain code
for i386 too. However, make config never asks about it.
I tried a few tricks to get it to compile with the other modules, but among
other things it complains about the lack of a bpp.h, which I couldn't find.
I thought maybe there is a third party module out there or something, but
searching the web only comes up with references to sparc hardware.
Unfortunately we don't have the source code to the 2.0.something kernel that
the app works on currently, but I can confirm that bpp.o is loaded as a
module. So it must be possible.
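(For what it's worth, a couple of sanity checks on the old box might at
least confirm where its bpp.o came from; the module path here is a
guess:)

  % /sbin/lsmod | grep bpp
  % strings /lib/modules/2.0.*/misc/bpp.o | grep -i version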
We are now using 2.2.14 I think.
Any ideas on how to create the new version of this module?
Thanks
Kurt
------------------------------
Crossposted-To: alt.os.linux,linux.dev.c-programming,linux.dev.gcc
Subject: Re: Why compiling "-static" creates HUGE binary???
From: [EMAIL PROTECTED] (Daniel R. Grayson)
Date: 01 Jun 2000 17:23:46 -0500
If you want a really small binary, here is one way:
% cd /tmp
% cat foo.c
extern void _exit( int );   /* normally declared in <unistd.h> */

int main( void )
{
    return 0 ;
}

/* with -nostdlib there is no crt0, so supply the entry point by hand */
void _start( void )
{
    _exit(main());
}
% gcc -s -static -nostdlib foo.c -lc
% ls -l a.out
-rwxrwxr-x 1 dan users 696 Jun 1 17:23 a.out
% ./a.out
% size a.out
text data bss dec hex filename
112 0 4 116 74 a.out
------------------------------
From: Robert Redelmeier <[EMAIL PROTECTED]>
Subject: Re: assembler problems
Date: Thu, 01 Jun 2000 18:32:13 -0500
Josef Moellers wrote:
>
> Robert Redelmeier wrote:
>
> > Your major problem will be with OS calls. Things like `call
> > GetCurrentProcess`, or worse yet, `invoke SetPriority`. These
> > things have to be translated to something Linux understands,
> > like `libc` calls [`call printf`] or the corresponding bare-metal
> > Linux syscalls (`int 0x80`). This is not a trivial effort.
>
> Never ever do "INT"s in an assembler program. You are much better off
> using the appropriate wrapper from libc. They don't cost that much and
> they guard you against future changes in the system call interface as
> well as make your program moderately portable to other Unix derivates.
Oh, never say never. This is almost a religious issue. While
I agree that it usually isn't a good idea to use INTs, they do
have their place. For one thing, most ASM should not do any syscalls,
through `libc` or otherwise. Those should be in the main routine
that is written in `c`. ASM should be reserved for compute routines
that need to be fast, or hardware I/O that is hard to do in `c`.
But that said, I did put INTs in my `cpuburn` package. I wanted
_very_ small code (400 bytes) to fit onto rescue/diags disks.
`libc` cruft was not welcome. I also put dual interfaces on
my INT calls (both Linux fastcall in regs, and BSD on the stack)
so my code runs without any changes on both Linux & *BSD systems.
Very handily, both use `int 0x80` as the main syscall entry!
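For illustration, the Linux-convention flavour looks like this from
`c` (an i386 sketch built with -nostdlib; this is not lifted from
`cpuburn`):

  /* exit(0) by raw syscall: i386 Linux wants the syscall number in
     %eax and the arguments in registers, starting with %ebx */
  void _start( void )
  {
      __asm__ __volatile__ (
          "movl $1, %%eax \n\t"     /* __NR_exit */
          "xorl %%ebx, %%ebx \n\t"  /* exit status 0 */
          "int $0x80"
          : : : "eax", "ebx" );
  }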
-- Robert author `cpuburn` http://users.ev1.net/~redelm
------------------------------
From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.os.linux.hardware
Subject: Re: 2 GB File size limit?
Reply-To: [EMAIL PROTECTED]
Date: Fri, 02 Jun 2000 00:16:09 GMT
Centuries ago, Nostradamus foresaw a time when Grant Gray would say:
>[EMAIL PROTECTED] wrote:
>> is there a 2 GB file size limit on Linux? A friend of mine is doing some
>> Video editing on Windows and I was just curious whether he would be having
>> the same problem under Linux.
>>
>ext2 is 32bit and does have the 2gb limitation (i think there are workarounds
>for it), ReiserFS is 64bit and therefore has no appreciable (at least not for
>the next few years) file size limit.
All rubbish.
ext2 permits (and has, for a couple of years now) file sizes up to 2TB,
but is throttled, on 32 bit platforms, to 2GB, because the file
offset type (off_t) used, by default, in virtually all applications is
limited to 32 bits on 32 bit platforms.
ReiserFS does _absolutely nothing_ to lift this limitation.
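(For the record: where kernel and filesystem support does exist, the
way past 32-bit offsets is glibc's large-file interfaces. A sketch,
assuming glibc 2.1's LFS extensions:)

  #define _LARGEFILE64_SOURCE
  #include <sys/types.h>
  #include <fcntl.h>
  #include <unistd.h>

  int main( void )
  {
      /* open64/lseek64 use the 64-bit off64_t on any platform */
      int fd = open64("big.dat", O_CREAT | O_WRONLY, 0644);
      if (fd < 0)
          return 1;
      /* seek to 3 GB, past the old barrier */
      if (lseek64(fd, (off64_t)3 << 30, SEEK_SET) < 0)
          return 1;
      write(fd, "x", 1);
      close(fd);
      return 0;
  }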
--
[EMAIL PROTECTED] - <http://www.ntlug.org/~cbbrowne/linuxkernel.html>
Warning: Dates in calendar are closer than they appear.
------------------------------
From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To:
comp.os.linux.development,comp.os.linux.development.apps,comp.os.linux.misc,comp.os.linux.advocacy
Subject: Re: Need ideas for university funded project for linux
Reply-To: [EMAIL PROTECTED]
Date: Fri, 02 Jun 2000 00:17:16 GMT
Centuries ago, Nostradamus foresaw a time when Donovan Rebbechi would say:
>On Wed, 24 May 2000 11:14:50 -0600, Praedor Tempus wrote:
>
>>And the right to fork is good because...?
>
>Because if the maintainer abandons or neglects a project, someone else
>can pick it up. However, while "the right" to fork is a good thing,
>that doesn't mean that gratuitous forking is a good thing.
>
>ESR talks about this kind of thing and uses the phrase "promiscuous
>theory, puritan practice", I think the essay is "homesteading the noosphere".
>
>Some examples of "good" forks include egcs ( which I believe has been handed
>back to the FSF. egcs-c++ is a vast improvement over g++ ) and x-emacs.
"When Umberto Eco classified the Mac OS as catholic and DOS as
protestant, he unleashed a minor religious war."
<http://lwn.net/1999/0114/a/wizards2.html>
I can't locate the web page where I saw this, but I recently saw
someone take the analogy of a code fork as a would-be "Protestant
Reformation," where you start with the One True Catholic Code Base
and fork off a Protestant version.
The analogy is remarkably good...
- As with Martin Luther, the point of the exercise is _not_ primarily
to start wars (though that may well happen), but rather to reform
the things seen as "broken" in the catholic tradition.
- As with the wars between Catholics and Protestants, _violent_
disputes tend to arise over matters on the periphery from the
reasons why the split took place.
- The fact that people _can_ get worked up over code forks means that
they are sufficiently committed to whatever software is in question
as to suggest that there is a "religious-like" fervor.
Mind you, there have been rather fewer people outright _killed_ in the
disputes of Software Protestant Reformations...
>> Because it is GOOD to fragment software and libraries so that apps
>>fail to work nicely? So that if you
>
>Like I said, the fact that the right to fork is a good thing doesn't
>mean that forking for the sake of it is a good thing.
The Protestant Reformation didn't happen because all was well and
good; it was a reaction to some _significant problems_ in the Catholic
Church.
Similarly, code forks do not occur as an inherent good; they happen as
what is considered, at least by the reformers, as a "lesser of evils."
>>the unix baby early on. It is brought up as a fear of something that
>>could possibly kill linux (for general use...but then, there are
>
>The fragmentation that exists within Linux has very little to do with
>forking. THe main problem is that there is a lack of standardisation
>on versions of the different APIs.
There are some APIs _within_ Linux (e.g. - VFS, kernel modules) where
nobody has yet been arrogant enough to claim to speak "Ex Cathedra"
and hand down "authoritative teaching."
>>I would like a nice, clear explanation of why forking should be
>>considered >good.
>
>You're confusing "the right to fork" with forking itself. The right to fork
>is one that should be exercised judiciously.
>
>>Standards
>>make coder's lives easier, make user's lives easier. SOME things should
>
>The main problem at the moment ( at least wrt Linux ) is not about forks,
>but it's about failure of the distributors to use the same versions of
>core components such as glibc, the compiler, and other shared libs.
Actually, this _isn't_ a persistent failure that they are suffering
from. If it represented a failure, then it _would_ result in forks.
They are all "suffering" from the fact that the core components
continue to be developed, and that releases of those components do not
happen in a "lock step" manner that is well-synchronized with when
distribution makers decide to "make golden" _their_ releases.
So long as the distributions have independent "source trees," which
are being independently updated from the _true_ sources (e.g. -
periodically pulling updates from what Ulrich Drepper does with
GLIBC), they will "suffer" from this problem.
In contrast, when multiple distributions are able to treat a single
maintained source tree as "their canon," as is, to a great extent,
true for the Debian-based distributions, this diminishes the
differences.
An _encouraging_ bit of news is that SuSE has plans to use SourceForge
as their "compile server," which presumably means that they'll have
substantially _all_ of their sources sitting there, and thereby means
that _that_ tree represents a "publicly available canon" not unlike
those published by Debian, FreeBSD, OpenBSD, and NetBSD.
>As far as the developers are concerned, there is not much difficulty writing
>code that will compile on any Linux distribution. Also, KDE and GNOME make
>writing for multiple distributions or even UNIX flavours pretty easy.
>Basically, you can use "GNOME" or "KDE" as your target platform. If you
>use glib data types or QT classes and the KDEsupport stuff, you usually don't
>really need to worry about the peculiarities of your target platform.
I'd go along with GLIB as being a helpful thing; it qualifies as a
library that has been around long enough to have pretty much
stabilized.
On the other hand, other components of KDE/GNOME are still of pretty
much "alpha test" quality. (Bonobo and GNOME-print come to mind...)
>However, failure to standardise on lib versions, compilers and
>package managers makes releasing binaries for multiple distributions
>a bit of a pain.
The problem is that there _is_ no One True Library Version to standardise on.
By the way, the introduction of XFree86 4.0 is going to cause much the
same problem for distribution makers over the next six months or so.
At this point in time, the 4.0 release is "only really suited to those
happy to play with code," as it lacks the config tools as well as some
device drivers that are needed for it to be considered "generally
available."
Yes, there are RPMs out there, but if you're not prepared to fiddle
with config files by hand, and to trouble-shoot, it is best to beware
of it at this point.
Give it a few months for the config tools to solidify, and aficionados
will be clamouring for Red Hat to rush to "include it now!"
My prediction is that whoever first _sells_ a distribution that
contains XFree86 4.0 is likely to rush it into place _far_ too quickly,
and that _horrible_ things will be said about them. RHAT has a
history of this, to some extent; they adopted GLIBC 2.0, an
experimental version, and got bashed pretty heavily over the problems
that arose not only with incompatibilities between GLIBC 2.0 and LIBC
5.4.x, but also with the "production" release of GLIBC, 2.1.
--
[EMAIL PROTECTED] - <http://www.hex.net/~cbbrowne/>
"never post anything you don't want to see on your resume..."
-- Martin Minow <[EMAIL PROTECTED]>
------------------------------
From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To:
comp.os.linux.development,comp.os.linux.development.apps,comp.os.linux.misc,comp.os.linux.advocacy
Subject: Re: Need ideas for university funded project for linux
Reply-To: [EMAIL PROTECTED]
Date: Fri, 02 Jun 2000 00:17:18 GMT
Centuries ago, Nostradamus foresaw a time when [EMAIL PROTECTED] would say:
>"Anthony W. Youngman" <[EMAIL PROTECTED]> writes:
>> In other words, it's not the package maintainer's fault if the
>> maintenance package has no way of correctly marking-up the dependency.
>
>I don't really understand how this can be so. If the required bit is
>set, the package is required; if it's not, it's not. Even the most
>basic usable package-management system has to have at least these two
>states.
Other possibility:
- System has a "base" package that may exist only to establish
Forcible Dependencies.
- The tools are designed to filter out references to "base," or
perhaps to be aware that If You Try To Delete Base, This Is A Very
Bad Thing, And You Should Be Told Of The Danger, Given Every
Possible Opportunity To Recant...
- "Base" then has dependancies that establish the need for some
mandatory set of packages without which the system cannot function
at all.
In the dependency system, deleting Base would then trigger deleting
all of the packages that depend on it (much like an SQL "ON DELETE
CASCADE" constraint).
When that request to nuke out one package results in _every_ package
on the system leaping onto the "list of stuff to delete," methinks
this can show the gentle user that they may be making a mistake.
No "state" required; merely dependancies...
--
[EMAIL PROTECTED] - <http://www.hex.net/~cbbrowne/>
Rules of the Evil Overlord #227. "I will never bait a trap with
genuine bait." <http://www.eviloverlord.com/>
------------------------------
From: "Michael" <[EMAIL PROTECTED]>
Subject: TCP/IP and Drivers - Making the connections.
Date: Thu, 1 Jun 2000 19:32:15 +0100
Hi:
I have to ask about drivers and TCP/IP. I am working on a project and need
to know what must be implemented for a NIC driver to communicate with the
rest of the TCP/IP stack, so that the NIC/driver is fully functional. Can
anyone help me out with the specifications/requirements?
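(For concreteness, the driver-to-stack contract looks roughly like
this; a 2.2-era sketch with made-up names:)

  #include <linux/netdevice.h>
  #include <linux/etherdevice.h>
  #include <linux/skbuff.h>
  #include <linux/string.h>

  /* transmit: the stack hands the driver a finished Ethernet frame */
  static int my_start_xmit(struct sk_buff *skb, struct device *dev)
  {
      /* ...copy skb->data (skb->len bytes) to the hardware here... */
      dev_kfree_skb(skb);    /* free it once the card owns the data */
      return 0;
  }

  /* receive: the interrupt handler pushes frames up the stack */
  static void my_rx(struct device *dev, unsigned char *buf, int len)
  {
      struct sk_buff *skb = dev_alloc_skb(len + 2);
      if (!skb)
          return;                      /* drop on memory pressure */
      skb_reserve(skb, 2);             /* long-align the IP header */
      memcpy(skb_put(skb, len), buf, len);
      skb->dev = dev;
      skb->protocol = eth_type_trans(skb, dev);  /* IP? ARP? ... */
      netif_rx(skb);                   /* hand off to the protocols */
  }

  /* at init time: set dev->hard_start_xmit, dev->open, dev->stop,
     then call ether_setup(dev) and register_netdev(dev) */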
Michael
------------------------------
From: [EMAIL PROTECTED] (Charles Blackburn)
Subject: Re: Detecting whether another process has a file open
Date: Thu, 1 Jun 2000 23:26:50 +0100
Reply-To: [EMAIL PROTECTED]
On Wed, 31 May 2000 20:15:56 GMT, Nick Craig-Wood wrote:
>What I need is a foolproof way of detecting whether another process
>has a file open on the local machine. My program needs to process
>files in a directory after they have been completely written from
>another process (or possibly from a network file system).
Well, why don't you use shared memory or even a semaphore to indicate
what resources are being used etc.?
And as far as finding all files etc. is concerned, you won't if you
don't have permission to access them; that's the whole reason behind
permissions :)
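If polling is acceptable, /proc has the information (this is roughly
what fuser does). A Linux-specific sketch; the function name is mine,
and note that the check is inherently racy:

  #include <stdio.h>
  #include <string.h>
  #include <dirent.h>
  #include <unistd.h>

  int file_is_open(const char *target)
  {
      DIR *procd, *fdd;
      struct dirent *p, *f;
      char path[512], lnk[512];
      int n, found = 0;

      if (!(procd = opendir("/proc")))
          return 0;
      while (!found && (p = readdir(procd))) {
          if (p->d_name[0] < '0' || p->d_name[0] > '9')
              continue;                  /* not a pid directory */
          sprintf(path, "/proc/%s/fd", p->d_name);
          if (!(fdd = opendir(path)))
              continue;                  /* no permission, or gone */
          while (!found && (f = readdir(fdd))) {
              sprintf(path, "/proc/%s/fd/%s", p->d_name, f->d_name);
              n = readlink(path, lnk, sizeof(lnk) - 1);
              if (n > 0) {
                  lnk[n] = '\0';
                  found = (strcmp(lnk, target) == 0);
              }
          }
          closedir(fdd);
      }
      closedir(procd);
      return found;
  }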
--
Charles Blackburn -=- Remove NOSPAM to email a reply.
Summerfield Technology Limited - SuSE Linux Reseller & Birmingham L.U.G sponsor
[EMAIL PROTECTED]
[EMAIL PROTECTED]
11:20pm up 1 day, 52 min, 2 users, load average: 0.06, 0.02, 0.00
------------------------------
From: Graham Stoney <[EMAIL PROTECTED]>
Crossposted-To: alt.os.linux,linux.dev.c-programming,linux.dev.gcc
Subject: Re: Why compiling "-static" creates HUGE binary???
Date: 2 Jun 2000 11:08:59 +1000
In article <[EMAIL PROTECTED]>,
Dave Platt <[EMAIL PROTECTED]> wrote:
>Ferexample, doing a printf() will end up forcing the loader to bring
>in most of the stdio functions... either because these are called
>directly or indirectly from within the printf() routine, or because
>they happen to be included in the same .c or .s source file as
>functions that are called from printf().
But you might not think that the original poster was calling "printf", given
that his minimal program doesn't actually do anything. This is significant
because glibc's C99-conforming printf with GNU extensions is over 100 KB
nowadays; and he's getting it whether he calls it or not.
Why? Because exit() calls free() to release functions registered with atexit(),
even if you don't actually use atexit(). free() resides in malloc.c, as does
malloc_stats(), which calls fprintf(). So a static link against glibc gets
the rather hefty GNU malloc & printf, even if the program calls neither directly.
Few non-trivial programs can avoid malloc(), but statically linked C++
programs, for instance, are paying for printf() though they may never use it.
>... Or (to accomplish a similar goal) individual functions can be tagged with
>a compiler directive (a ... gcc __attribute__) which causes the complier to
>place the functions in different control sections in the object file.
There is now a gcc command line switch to do this automatically named
-ffunction-sections and a corresponding linker switch --gc-sections, which
together conspire to reduce linker granularity from object level to function
level. This is the simplest way to remove dead functions like malloc_stats,
since you don't have to modify the glibc source at all, or convince the glibc
maintainer to restructure anything.
Neither gcc nor the glibc build uses these flags by default yet, but
you can add "-ffunction-sections -fdata-sections" to CFLAGS when you configure
glibc, then link with LDFLAGS including "-Wl,--gc-sections". Of course you
have to be prepared to rebuild glibc to get the benefit...
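As a transcript, roughly (configure options elided):

  % CFLAGS="-O2 -ffunction-sections -fdata-sections" ./configure ...
  % make && make install
  % gcc -static hello.c -o hello -Wl,--gc-sections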
Regards,
Graham
--
Graham Stoney
Principal Hardware/Software Engineer
Canon Information Systems Research Australia
Ph: +61 2 9805 2909 Fax: +61 2 9805 2929
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.development.system) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Development-System Digest
******************************