Linux-Development-Sys Digest #644, Volume #7      Thu, 2 Mar 00 10:13:14 EST

Contents:
  Re: unresolved Symbols in modules ("Paul Goossens")
  Re: unresolved Symbols in modules (Mathias Waack)
  Re: Absolute failure of Linux dead ahead? ([EMAIL PROTECTED])
  Re: Absolute failure of Linux dead ahead? (mlw)
  Re: Absolute failure of Linux dead ahead? (Christopher Browne)
  Re: Absolute failure of Linux dead ahead? ("Joseph T. Adams")
  Re: Absolute failure of Linux dead ahead? (Edward Rosten)
  Re: Absolute failure of Linux dead ahead? (Anders Larsen)
  Re: Struct size and allocate problem! need help. (Anders Larsen)
  knfsd : exporting a directory containing 2 devices ([EMAIL PROTECTED])
  Re: Binary compatibility: what kind of crack are they smoking? ("Joseph T. Adams")
  Re: Binary compatibility: what kind of crack are they smoking? (Ian Molton)

----------------------------------------------------------------------------

From: "Paul Goossens" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.misc
Subject: Re: unresolved Symbols in modules
Date: Thu, 2 Mar 2000 13:20:19 +0100

=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1

nilesh patel wrote in message
<[EMAIL PROTECTED]>...
>    I am able to compile a module, but when I try to insmod it, it
>says that printk_R**** is unresolved.
>    What is the reason for the unresolved symbols?
>
>    Nilesh


Do you use the -O6 option with the compiler? I've had a similar
problem with unresolved symbols, and using the -O6 option eliminated
it.

I don't know exactly what the -O6 option does (I tried to look it up,
but found NULL).
What I do know is this:
Insmod loads the image of the module into kernel memory, and before
running init_module() it has to fill in the addresses of external
variables and functions. This is because at compile time the module
doesn't know exactly where in memory those variables and functions
will be once it is inserted into the kernel. These addresses may
change with every little change in the kernel or in the way the
kernel is compiled.

The insmod program looks these addresses up in a file in the /proc
directory (presumably /proc/ksyms). But when you don't compile with
-O6 it can't find ALL the addresses. I know, it sounds strange to me
too, but if anyone knows more about this, please tell us!
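
For what it's worth, here is a tiny C sketch (just an illustration,
assuming a 2.0/2.2 kernel) that prints the kernel's exported printk
symbols - the list insmod matches a module against. With
CONFIG_MODVERSIONS enabled those names carry an _R<checksum> suffix,
which, as far as I can tell, is where the printk_R**** in the original
error comes from:

    /* sketch: list the kernel's exported printk symbols from
       /proc/ksyms (2.0/2.2 kernels) */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/ksyms", "r");

        if (f == NULL) {
            perror("/proc/ksyms");
            return 1;
        }
        while (fgets(line, sizeof line, f) != NULL)
            if (strstr(line, "printk") != NULL)
                fputs(line, stdout);   /* with MODVERSIONS: printk_R<checksum> */
        fclose(f);
        return 0;
    }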

Greetz,
Paul

=====BEGIN PGP SIGNATURE=====
Version: PGPfreeware 6.5.2 for non-commercial use <http://www.pgp.com>

iQA/AwUBOL5Ocl9XwgRUBUQfEQJy/ACg4KWNNIXAnLhGUNMxUUHdUUEQ2T8AnjDp
t2trEP+uv3ksuyUG+yqeLlVL
=CBFb
=====END PGP SIGNATURE=====




------------------------------

From: Mathias Waack <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.misc
Subject: Re: unresolved Symbols in modules
Date: Thu, 02 Mar 2000 13:35:34 +0100

Paul Goossens wrote:
Hi,

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1

Signing an ordinary news posting is IMO a waste of bandwidth.

> ALL the addresses. I know, it sounds strange to me too, but if anyone
> knows more about this, please tell us!

It sounds really strange. Unresolved symbols in modules come from four sources:

1. the programmer forgot to define __KERNEL__ and MODULE.
   These symbols are used by a lot of header files to pull in kernel-specific stuff.
2. the programmer forgot to give gcc the command-line switch -O2.
   gcc starts inlining at this optimization level, and inlining is needed to get rid
   of some symbols in the code that would otherwise remain unresolved.
3. the user tries to insmod a module into a different kernel than the one it was
   compiled for.
4. the programmer forgot to define some symbols which she only declared.

I wouldn't recommend using optimization switches higher than -O2. The compiler may
inline some functions which shouldn't be inlined, and the error rate of the compiler
rises exponentially (that's my humble experience).
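
To make (1) and (2) concrete, here is a minimal module sketch; the
include path and the MODVERSIONS remark in the comment are assumptions
about your setup, so adjust them to your own kernel tree:

    /* minimal module sketch for a 2.2-era kernel; compile roughly as
       gcc -O2 -D__KERNEL__ -DMODULE -I/usr/src/linux/include -c hello.c
       (the include path is an assumption - point it at your own tree).
       If your kernel was built with CONFIG_MODVERSIONS, the module must
       be built with versioned symbols as well, or insmod will complain
       about names like printk_R****. */
    #include <linux/module.h>
    #include <linux/kernel.h>

    int init_module(void)
    {
        printk("<1>hello: module loaded\n");
        return 0;
    }

    void cleanup_module(void)
    {
        printk("<1>hello: module unloaded\n");
    }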

Mathias

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: comp.os.linux.advocacy
Subject: Re: Absolute failure of Linux dead ahead?
Date: Thu, 02 Mar 2000 12:47:36 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] wrote:

[snip]

> Wait, wait!  There are other scary items forthcoming:
>
> a) Resolution of the 2038 problem.  2^31-1 seconds from Jan 1, 1970
> happens to be in 2038.  Stuff Will Break Then.
>
> This is the end-of-epoch that is the UNIX equivalent to the "Year 2000
> cliff" that everyone worried last year about.
>

I've always wondered about this one.  Is there any reason we can't
just agree that the world actually began in 2000 and modify a few
system calls?  I suppose that anyone who has a file lying around
from pre-2000 will get the wrong timestamp, but will anyone really
care in 2038? Are there any programs out there that code dates as
the number of seconds since 1970?  Is there any significance to
1970 other than reminding Thompson how old he's getting?  Inquiring
minds want to know:)

Clark




------------------------------

From: mlw <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: Absolute failure of Linux dead ahead?
Date: Thu, 02 Mar 2000 08:20:02 -0500

[EMAIL PROTECTED] wrote:
> 
> In article <[EMAIL PROTECTED]>,
>   [EMAIL PROTECTED] wrote:
> 
> [snip]
> 
> > Wait, wait!  There are other scary items forthcoming:
> >
> > a) Resolution of the 2038 problem.  2^31-1 seconds from Jan 1, 1970
> > happens to be in 2038.  Stuff Will Break Then.
> >
> > This is the end-of-epoch that is the UNIX equivalent to the "Year 2000
> > cliff" that everyone worried last year about.
> >
> 
> I've always wondered about this one.  Is there any reason we can't
> just agree that the world actually began in 2000 and modify a few
> system calls?  I suppose that anyone who has a file lying around
> from pre-2000 will get the wrong timestamp, but will anyone really
> care in 2038? Are there any programs out there that code dates as
> the number of seconds since 1970?  Is there any significance to
> 1970 other than reminding Thompson how old he's getting?  Inquiring
> minds want to know:)

I am not sure that I care about this one; it is 38 years away. In 38
years, 64-bit computers will themselves be obsolete. Right now we have
the 2GB file issue, which is far more important and is a real problem
today.

-- 
Mohawk Software
Windows 95, Windows NT, UNIX, Linux. Applications, drivers, support. 
Visit http://www.mohawksoft.com

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: Absolute failure of Linux dead ahead?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 02 Mar 2000 13:29:55 GMT

Centuries ago, Nostradamus foresaw a time when [EMAIL PROTECTED]
would say:
>In article <[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] wrote:
>[snip]
>> Wait, wait!  There are other scary items forthcoming:
>>
>> a) Resolution of the 2038 problem.  2^31-1 seconds from Jan 1, 1970
>> happens to be in 2038.  Stuff Will Break Then.
>>
>> This is the end-of-epoch that is the UNIX equivalent to the "Year 2000
>> cliff" that everyone worried last year about.
>
>I've always wondered about this one.  Is there any reason we can't
>just agree that the world actually began in 2000 and modify a few
>system calls?  I suppose that anyone who has a file lying around
>from pre-2000 will get the wrong timestamp, but will anyone really
>care in 2038? Are there any programs out there that code dates as
>the number of seconds since 1970?  Is there any significance to
>1970 other than reminding Thompson how old he's getting?  Inquiring
>minds want to know:)

Changing the start of the epoch would cause a new and different bit
of confusion, as it invalidates all present date calculations.

It might be simple enough in some ways, but the *conversion*
process, forcing everything over to the new epoch whilst using
the same data format, would Not Be Pretty.  The problem is that
you can't tell whether a particular value has already been
converted, since converted and unconverted values share the same
format.
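
A trivial sketch of that ambiguity (946684800 is the number of seconds
from 1970-01-01 to 2000-01-01 UTC; the stored value is just an example):

    /* the same stored 32-bit value decodes to two different dates
       depending on which epoch you assume it was written under */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        long raw = 951000000L;                  /* some timestamp read from disk */
        time_t as_1970_epoch = (time_t)raw;
        time_t as_2000_epoch = (time_t)raw + 946684800L;

        printf("assuming 1970 epoch: %s", ctime(&as_1970_epoch));
        printf("assuming 2000 epoch: %s", ctime(&as_2000_epoch));
        return 0;
    }
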
-- 
Rules of the Evil Overlord #57. "I will not rely entirely upon "totally
reliable" spells that can be neutralized by relatively inconspicuous
talismans." 
<http://www.eviloverlord.com/lists/overlord.html>
[EMAIL PROTECTED] - - <http://www.ntlug.org/~cbbrowne/lsf.html>

------------------------------

From: "Joseph T. Adams" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: Absolute failure of Linux dead ahead?
Date: 2 Mar 2000 13:34:03 GMT

In comp.os.linux.advocacy Christopher Browne <[EMAIL PROTECTED]> wrote:
: Wait, wait!  There are other scary items forthcoming:

: a) Resolution of the 2038 problem.  2^31-1 seconds from Jan 1, 1970
: happens to be in 2038.  Stuff Will Break Then.  

: This is the end-of-epoch that is the UNIX equivalent to the "Year 2000
: cliff" that everyone worried last year about.

I would hope that apps are not still using 32-bit hardware and 32-bit
time_t values by that time.

Indeed, I hope (and am reasonably certain) that Moore's Law will make
it possible and beneficial to start to do at least some kinds of
system work in a language much safer than C.


: b) Resolution of the 2GB file problem on 32 bit architectures.  This
: is, again, a 32-bit-ism that is starting to bite people working with
: today's Very Large Disk Drives.  The comprehensive fix to this will
: likely be synchronized with the 2038 problem, as resolutions for both
: likely involve moving from 32 bit values to 64 bit values.

: There will be some period of "inconvenience" at whatever point libraries,
: filesystems, and kernels have to be synchronized together to fix this.

I believe that will happen this year.  The necessary components are
already in place.  All that has to happen (AFAIK) is for a
distribution vendor to do the painstaking work of recompiling
everything (including glibc) and fixing stuff that isn't 64-bit clean
or otherwise breaks.  Yes, this is a pain, but it has to be done, and
the first vendor to do it successfully will gain a huge advantage over
the others.
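
For applications, the transitional interface already in glibc 2.1 looks
roughly like the sketch below; whether it works end to end also depends
on the kernel and the filesystem, so treat it as an illustration rather
than a recipe (the filename is just an example):

    /* sketch of the LFS-style 64-bit file interface on 32-bit Linux */
    #define _LARGEFILE64_SOURCE
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        int fd = open64("bigfile", O_RDWR | O_CREAT, 0644);
        off64_t where;

        if (fd < 0) {
            perror("open64");
            return 1;
        }
        /* seek past the old 2GB limit; off64_t stays 64 bits on ix86 */
        where = lseek64(fd, (off64_t)3 * 1024 * 1024 * 1024, SEEK_SET);
        printf("file offset is now %lld\n", (long long)where);
        close(fd);
        return 0;
    }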


: c) One of these days, someone may actually come up with a C++ ABI that
: would be expected to interoperate widely.  No approach for this is yet
: available.

I'm not highly convinced that binary portability is even a good idea,
much less necessary, in a world in which most of the software worth
having is available in source code form.

Nor am I convinced that C++ will gain a huge market share in the *nix
world beyond what it has now.  Standard C++ is a nice language for the
developer, if used properly, but a nightmare for compiler and library
writers.  Most of what might have been done in Standard C++, had it
been available for Linux before very recently, has instead been done
using a combination of C and higher-level languages such as Perl or
Python.

I would be extremely unlikely to write Linux apps in C++ *unless* they
were intended to be deployed in a KDE/Qt environment, in which, for a
variety of reasons, C++ does make the most sense.  In such an
environment, the necessary libraries can generally be assumed to exist
and to be at least adequately stable.  And I would release under a
GPL- or LGPL-compatible license, so source would be available.


: d) Many of the problems go away when you've got tools that automatically
: recompile software using local tools thus maximizing compatibility, and
: possibly even performance.  The BSD Ports system provides this, and
: Debian looks to be looking towards this.  (Which has the further merit
: of diminishing C++ ABI worries...)

I think these are both excellent approaches, and think that whoever
could combine the best features of both would have a real winner. 

I've always had a love/hate relationship with binary RPMs.  I love the
fact that they *usually* work, or at least yield enough clues to
make it possible to get them working.  But I hate the idea
of using binaries.  They introduce potential security risks; they may
have subtle bugs caused by slightly differing library versions; they
may very well break if I upgrade my kernel, and then I'm stuck going
out and getting the source anyway.  They certainly won't work reliably
if I change to a very different distribution such as Debian where the
system files tend to live in different places.

A cross between dpkg and *BSD ports would be awesome, and if it
existed, I would strongly encourage developers to use it, in addition
to or even instead of RPMs.


: Part of the problem Red Hat has had is that they didn't have anyone
: truly responsible for system testing.  Testing was generally "supposed
: to be done," but without any pointed responsibility, this doesn't 
: necessarily happen.   About six months ago, I'm told they hired 
: someone to be responsible for testing, which should lead to there
: being some automated tests that should be helpful.  The acquisition
: of Cygnus is pretty interesting in that Cygnus has been collecting
: test suites for compilers for rather a while now, hopefully providing
: them with some expertise that might rub off.


The more complex software becomes, the more necessary it is to
automate the testing process.  It's hard for me to understand how any
robust software could possibly get built without it.


Joe

------------------------------

From: Edward Rosten <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: Absolute failure of Linux dead ahead?
Date: Thu, 02 Mar 2000 13:46:55 +0000

Linux is fully 64-bit on systems that are fully 64-bit. So, on systems
that are designed for proper performance, it gives proper performance. I
am not a fan of the x86 and think it is faintly ridiculous that
Intel is still pursuing the 32-bit line (by jacking up the [MG]Hz),
trying to get more performance out of an inherently limited design. If/
when Merced becomes available, and when they stop the 32-bit line
for PCs altogether, then Linux on a PC will be 64-bit.

64-bit subsystems on a 32-bit machine need either two sets of system
calls or serious hacks to avoid a big drop in performance. We
really should all move over to proper computers.

Just don't ask me about my opinion when I want a 3GB file on a Pentium
;-)

-Ed

-- 
Did you know that the reason that windows steam up in cold weather
is because of all the fish in the atmosphere?
        -The Hackenthorpe Book Of Lies

------------------------------

From: Anders Larsen <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy
Subject: Re: Absolute failure of Linux dead ahead?
Date: Thu, 02 Mar 2000 15:01:01 +0100

[EMAIL PROTECTED] wrote:
> 
> In article <[EMAIL PROTECTED]>,
>   [EMAIL PROTECTED] wrote:
> 
> [snip]
> 
> > Wait, wait!  There are other scary items forthcoming:
> >
> > a) Resolution of the 2038 problem.  2^31-1 seconds from Jan 1, 1970
> > happens to be in 2038.  Stuff Will Break Then.
> >
> > This is the end-of-epoch that is the UNIX equivalent to the "Year
> > 2000 cliff" that everyone worried last year about.
> >
> 
> I've always wondered about this one.  Is there any reason we can't
> just agree that the world actually began in 2000 and modify a few
> system calls?  I suppose that anyone who has a file lying around
> from pre-2000 will get the wrong timestamp, but will anyone really
> care in 2038? Are there any programs out there that code dates as
> the number of seconds since 1970?  Is there any significance to
> 1970 other than reminding Thompson how old he's getting?  Inquiring
> minds want to know:)

In the *N*X world, coding "...dates as the number of seconds since 1970"
is the norm rather than the exception, to say the least.
And it's really quite common to store dates in that format in databases
etc., so IMHO it would be next to impossible to change the starting
point.

It might be true that in 2038 most people won't care about what happened
before 2000-01-01, but the *transition* would be tough anyway.

One could, however, perhaps consider changing time_t from signed to
unsigned - that would add another 68 years to the lifetime of the
32-bit time_t.
I did a couple of quick tests on my QNX box (Watcom 10.6 C compiler) to
check the rollover of the time_t sign bit, and - lo and behold! - it
turned out that mktime(), gmtime() and localtime() already internally
treat time_t as UNsigned!
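
For the Linux/glibc crowd, here is a sketch of the same kind of quick
test (what your libc prints is exactly the point of the experiment):

    /* feed gmtime() a value with the time_t sign bit set and see whether
       it is treated as a date past 2038 (unsigned interpretation) or as
       a date before 1970 / an error (signed interpretation) */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t t = (time_t)2147483648UL;   /* 2^31: one past the signed maximum */
        struct tm *tm = gmtime(&t);

        if (tm == NULL) {
            printf("gmtime() rejected the value\n");
            return 1;
        }
        printf("2^31 seconds is interpreted as %04d-%02d-%02d %02d:%02d:%02d UTC\n",
               tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday,
               tm->tm_hour, tm->tm_min, tm->tm_sec);
        return 0;
    }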

Well, I don't really care that much anyway - I believe 64-bit (or wider)
machines will be the norm before 2038 (and I'll be eighty by then  ;-)

-- 
Anders Larsen

------------------------------

From: Anders Larsen <[EMAIL PROTECTED]>
Crossposted-To: 
alt.os.linux,comp.os.linux.development.apps,comp.unix.sco.misc,comp.unix.sco.programmer,comp.unix.unixware.misc,tw.bbs.comp.linux
Subject: Re: Struct size and allocate problem! need help.
Date: Thu, 02 Mar 2000 15:10:36 +0100

Charles Bryant wrote:
> 
> In article <[EMAIL PROTECTED]>,
> Jaron <[EMAIL PROTECTED]> wrote:
> >Hi,
> >    I encounter a compiling program as below:
> >
> >    struct    a {
> >        unsigned char    a1;
> >        unsigned char    a2;
> >        unsigned short    a3;
> >        unsigned short    a4;
> >        unsigned long    a5;
> >    };
> >
> >the structure size must be 10
> 
> No. It can be any value greater than or equal to five.

How do you arrive at that figure?
The C language definition clearly and unambiguously states that a short
is *at least* 16 bits and a long *at least* 32 bits long.
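
Whatever the standard guarantees, the quickest way to see what a
particular compiler actually does (padding included) is simply to ask it:

    /* trivial check: print what this particular compiler makes of the
       struct; the answer includes any alignment padding it inserts */
    #include <stdio.h>

    struct a {
        unsigned char  a1;
        unsigned char  a2;
        unsigned short a3;
        unsigned short a4;
        unsigned long  a5;
    };

    int main(void)
    {
        printf("sizeof(struct a) = %lu\n", (unsigned long)sizeof(struct a));
        return 0;
    }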

-- 
Anders Larsen

------------------------------

From: [EMAIL PROTECTED]
Subject: knfsd : exporting a directory containing 2 devices
Date: Thu, 02 Mar 2000 14:17:02 GMT

Hello,
I export a directory that contains 2 devices (the
2nd device is mounted on a subdirectory), but when
I mount this directory (with an automounter on an
OpenStep 4.2 system), the 2nd device isn't mounted!
Yet I had the same config under RedHat 5.1, and it
worked.

Now I have the following configuration:
RedHat 6.1, knfsd-1.4.7-7, kernel 2.2.14
##### /etc/fstab ########
/dev/sdb1 /users defaults,quota 0 2
/dev/sdc1 /users/specific defaults 0 2
##### /etc/exports ######
/users *.mydomain(rw,insecure,root_squash)\
       secure.mydomain(rw,insecure,no_root_squash)
#########################


Please HELP !
Wilfrid Gaboriaud
email : [EMAIL PROTECTED]

Service Informatique
Institut Universitaire de Technologie
La Rochelle
FRANCE




------------------------------

From: "Joseph T. Adams" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy,comp.os.linux.setup
Subject: Re: Binary compatibility: what kind of crack are they smoking?
Date: 2 Mar 2000 14:58:56 GMT

In comp.os.linux.advocacy Gregory Neil Bastow <[EMAIL PROTECTED]> wrote:
: I think many people mistakenly think that RMS wants personal recognition
: for the success of GNU/Linux. Everything he has ever said or done
: suggests otherwise. The problem, as he sees it, (and I see it this way,
: too) is that in the success of GNU/Linux, people are forgetting that the
: Free Software Foundation (which is probably 75% responsible for the fact
: that we have such a kick-arse system) is doing things a particular way to
: prove a particular point and to allow people to use the software they need
: without compromising the principles that FSF is trying to evangelise.

Just as a personal aside, I find systems without GNU tools to be
intolerable.  The very first thing I do when confronted with an
otherwise Unix-like system that doesn't have at least bash and emacs,
or something emacs-like, is to go out and get both. 


: It is unfortunate that after finally succeeding in proving their point,
: and succeeding in their 10-year-old mission the FSF have been robbed of
: the chance to make people associate the success of the project with their
: principles and their methodologies by simple whims of nomenclature.

That irritates me a bit as well.  There are a LOT of folks who don't
understand that without RMS and GNU, there would be no Linux, and that
freedom is about much more than just price.


: Right though he may be, RMS's subsequent attempts to close the stable door
: after the horse had bolted have done nothing but detriment his/our cause.

I certainly think he could have handled it differently, but I
understand his frustration.  I credit him more than any other single
person for helping to create not just GNU/Linux, but the Internet as
we know it, and, hence, a big part of the direction of both the
economy and the culture of the rapidly expanding developed world.

History may well look at RMS as one of the most influential (and
perhaps underappreciated) individuals of the late 20th century. 
Linus, ESR, Larry Wall, and various others have been influential as well, but I
believe all of these folks have acknowledged that but for RMS and his
work, theirs never would have happened.


Joe

------------------------------

From: Ian Molton <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Crossposted-To: comp.os.linux.advocacy,comp.os.linux.setup
Subject: Re: Binary compatibility: what kind of crack are they smoking?
Date: Thu, 02 Mar 2000 15:12:15 +0000

Mario Klebsch wrote:
> 
> Gregory Neil Bastow <[EMAIL PROTECTED]> writes:
> 
> >Just make sure you say GNU/Linux when you mean more than the kernel, boys
> >and girls.
> 
> What the hell is GNU/Linux? YALD (Yet Another Linux Distribution)?

I think this /proves/ it's a troll...

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
