Linux-Development-Sys Digest #917, Volume #7     Mon, 29 May 00 23:13:19 EDT

Contents:
  Re: unresolved symbol ([EMAIL PROTECTED])
  Re: Need ideas for university funded project for linux (JEDIDIAH)
  Re: Winmodems )Re: Need ideas for university funded project for linux) (JEDIDIAH)
  Re: Very simple linux Scheduler question ("Jason Johnson")
  Re: Need ideas for university funded project for linux (David Steuber)
  Re: Technical advantages/disadvantages of Linux (Horst von Brand)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED]
Subject: Re: unresolved symbol
Date: 29 May 2000 16:41:01 -0700

In article <8gupcl$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
>
 
>>When I
>>compile, everything is fine. When I do insmod module.o, I get:
>>module.o : unresolved symbol strlen
>>
>>Can someone tell me how to get over this !!!!

>C run-time library is not linked into the kernel. The kernel
>does NOT use libc.
 
Although the kernel does not use libc, it has its own version of
strlen(). So you should NOT get the above error. But if you had
tried to use some other C library function, such as printf(), you
would see that error, since printf() is not defined in the Linux kernel.
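
For example, a module like this (a minimal, untested sketch; the file name
is hypothetical) will compile, with only a warning about printf() being
implicitly declared, but insmod will refuse to load it with
"unresolved symbol printf", because printf() lives in libc. printk() is
the kernel's replacement:

===== bad.c (hypothetical) ========
#include <linux/module.h>

int init_module(void)
{
   /* printf() is a libc symbol: the compile only warns, but insmod
      fails with "unresolved symbol printf". Use printk() instead. */
   printf("hello from a module\n");
   return 0;
}
void cleanup_module(void) { }
==================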

So, strlen() can be used in the module (you will be using the
Linux kernel strlen, not the libc one).

How did you build the module?

I just did a test and it works:

===== m1.c ========
#include <linux/module.h>
#include <linux/kernel.h>   /* printk() */
#include <linux/string.h>   /* the kernel's own strlen() */
#include <linux/sched.h>
static int i;
int init_module(void)
{
   printk("<1>Hello, in init_module\n");
   printk("length is %d\n",strlen("hello"));
   return 0;
}
void cleanup_module(void) { printk("<1>in cleanup_module\n");}

==================

This is the Makefile I used:

======================
# Kernel headers (make sure these match the kernel you will insmod into)
INCLUDEDIR= /usr/include
CFLAGS= -D__KERNEL__ -DMODULE -O -Wall -I$(INCLUDEDIR)
# Pull the kernel version string out of version.h, just for the "done" message
VER = $(shell awk -F\" '/REL/ {print $$2}' $(INCLUDEDIR)/linux/version.h)
OBJ = m1.o
# Note: the two command lines below must be indented with a TAB
all: $(OBJ)
        echo "done.. version is $(VER)"
clean:
        rm -f m1.o
=======================

The above module will work ok.

make
insmod m1.o
rmmod m1

Looking in the output of 'dmesg', I see the length printed as '5'. OK.

Nasser


------------------------------

From: [EMAIL PROTECTED] (JEDIDIAH)
Crossposted-To: 
comp.os.linux,comp.os.linux.development,comp.os.linux.development.apps,comp.os.linux.misc,comp.os.linux.setup,comp.os.linux.advocacy
Subject: Re: Need ideas for university funded project for linux
Date: Tue, 30 May 2000 01:55:49 GMT

On 24 May 2000 12:32:39 -0600, Craig Kelley <[EMAIL PROTECTED]> wrote:
>[EMAIL PROTECTED] (JEDIDIAH) writes:
>
>> >But there is no reason to expect it to be any better.  Is there a tool
>> >that tells you when the last program that uses a shared library
>> >has been removed?
>> 
>>      Sure there is. It could query the actual state of the system 
>>      rather than just keeping track of what packages had been 
>>      manipulated.
>
>A database is the only way to really solve this problem with any
>speed.  Sure, you could find(1) all executables and ldd them to see
>which libraries they use, but what a waste of time.  It won't even

        No it isn't. Such a practice would tell you what is ACTUALLY
        on the system rather than what has been made known to the Debian
        or RPM databases.

[deletia]

        Unreliable network file systems are a whole other can of worms...

-- 

    In what language does 'open' mean 'execute the evil contents of'    |||
    a document?      --Les Mikesell                                    / | \
    
                                      Need sane PPP docs? Try penguin.lvcm.com.

------------------------------

From: [EMAIL PROTECTED] (JEDIDIAH)
Crossposted-To: 
comp.os.linux,comp.os.linux.development,comp.os.linux.development.apps,comp.os.linux.misc,comp.os.linux.setup,comp.os.linux.advocacy
Subject: Re: Winmodems )Re: Need ideas for university funded project for linux)
Date: Tue, 30 May 2000 02:06:02 GMT

On 27 May 2000 10:06:53 +0400, Victor Wagner <[EMAIL PROTECTED]> wrote:
>In comp.os.linux.misc JEDIDIAH <[EMAIL PROTECTED]> wrote:
>:      Quite. My Phoebe hardware ISA modem came with very reasonable
>: defaults: no-pnp, com2. In this configuration I was able to slap it in
>: and use it immediately.
>
>Reasonable?! An internal modem is the next worst thing to a winmodem.
>A modem should be a separate box, attached to the COM port with a cable.
>The most important things in this setup are the LEDs and the independent
>power switch. I don't know how it looks in the West, but here in Russia it is
>quite possible that a modem will hang on a noisy line and only
>power-cycling will revive it. How'd you power-cycle an internal modem?
>
>The only thing that I don't like about external modems is that they require
>something other than 5V DC or 12V DC (typically 12V AC). If they needed
>some voltage which I could get from the machine's main power supply,
>I'd be able to get rid of those power supply units.
>
>:>
>:>When I installed my modem in win, I needed a bunch of driver diskettes to
>:>get it to work.
>
>No, not at all. You need a bunch of driver diskettes to make _windows_
>_think_ your modem works.
>
>Using a DOS terminal program like Telix or a self-written Tcl script, you
>could get your modem working immediately, without any drivers.
>:      One should never need to 'install' a modem. A real modem is as
>
>Why? You don't consider unscrewing the case, finding an ISA slot, and
>fiddling with IRQs "installing"? It can turn into half an hour of downtime.

        Nope. It's actually quite simple to manipulate the IRQ assignments
        in a modern BIOS. If your COM2 isn't already occupied by something
        else, then there isn't going to be any "fiddling" at all.

        This is especially true for a modem that comes preset from the factory
        to use COM2.

        As far as "plugging in a connector" constituting 'installation', that's
        just asinine. If that sort of thing bothers you (plugging something
        into a socket), then you need to move to Lancaster County.

>
>External modems are ones whose installation doesn't interrupt system
>operation. You bring in from shop, you connect it, you turn it on, and
>other users of your machine do their work in the same time.

        As 'downtime maintenance' goes, plugging in an ISA modem is 
        waaay down on the list in terms of turnaround time and end
        user deprivation. Never mind the fact that not every random
        PC is going to be something 'mission critical' where HA is
        required.

        Although, if your power grid is like your comms net, any talk
        about HA is really absurd anyway.

-- 

    In what language does 'open' mean 'execute the evil contents of'    |||
    a document?      --Les Mikesell                                    / | \
    
                                      Need sane PPP docs? Try penguin.lvcm.com.

------------------------------

From: "Jason Johnson" <[EMAIL PROTECTED]>
Subject: Re: Very simple linux Scheduler question
Date: Mon, 29 May 2000 21:40:26 -0500

Thank you very much!  I had no idea that such a document existed.  Most of
the people I talked to said there was no documentation like this on the
Linux kernel.

However, the book did not seem to answer my question.  I am speaking about
the code in the kernel that, after it has run through all the processes
calculating their "goodness" and assigning the biggest one to "c", tests "c"
to make sure that it is not zero.  If it is, it goes through the list again,
setting each counter to the value of that process's priority.  This seems
to indicate that every process will run until it has used all of its quantum
and then not run again until all the other processes have done the same
(again, I am ignoring the I/O, queues, etc.).
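
For reference, the piece of schedule() I am asking about looks roughly like
this (paraphrased from memory, with the locking stripped out, so the details
may be slightly off):

    /* every runnable process has used up its counter (c == 0), so hand
     * out fresh time-slices to *all* processes and pick again */
    if (!c) {
            struct task_struct *p;

            for_each_task(p)
                    p->counter = (p->counter >> 1) + p->priority;
    }

For a runnable process whose counter has already reached zero this works out
to p->counter = p->priority, which is the "set the counter to the priority"
I described; a process that was sleeping keeps half of its leftover counter
on top of that, so interactive processes come back with slightly bigger
slices.  That is what makes me think every CPU-bound process of equal
priority burns its whole quantum before anyone gets a refill.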

Jason

=====Original Message=====
From: Frederic S. Parain <[EMAIL PROTECTED]>
To: Jason Johnson <[EMAIL PROTECTED]>
Date: Monday, May 29, 2000 5:21 PM
Subject: Re: Very simple linux Scheduler question


>"Jason Johnson" <[EMAIL PROTECTED]> writes:
>
>> Hrm.  This must be a really stupid question, or a really hard one, based
>> on the number of answers I have received.
>[...]
>
>It isn't a stupid question but did you read the fine manual?
>This is explained in "The Linux Kernel" from David A Rusling:
>
>http://www.linuxdoc.org/LDP/tlk/kernel/processes.html
>
>Section "4.3 Scheduling"
>subsection "Process selection":
>
>Process selection
>
>The scheduler looks through the processes on the run queue looking for
>the most deserving process to run. If there are any real time
>processes (those with a real time scheduling policy) then those will
>get a higher weighting than ordinary processes. The weight for a
>normal process is its counter but for a real time process it is
>counter plus 1000. This means that if there are any runnable real time
>processes in the system then these will always be run before any
>normal runnable processes. The current process, which has consumed
>some of its time-slice (its counter has been decremented) is at a
>disadvantage if there are other processes with equal priority in the
>system; that is as it should be. If several processes have the same
>priority, the one nearest the front of the run queue is chosen. The
>current process will get put onto the back of the run queue. In a
>balanced system with many processes of the same priority, each one
>will run in turn. This is known as Round Robin scheduling. However, as
>processes wait for resources, their run order tends to get moved
>around.
>
>--
>Frederic S. PARAIN - PhD Student - Projet SOLIDOR
>IRISA-INRIA, Campus de Beaulieu, 35042 Rennes cedex, France
>Tél: +33 (0) 2 99 84 75 34, e-mail: [EMAIL PROTECTED]
>"Everything should be as simple as possible, but not simpler"



------------------------------

Crossposted-To: 
comp.os.linux,comp.os.linux.development,comp.os.linux.development.apps,comp.os.linux.misc,comp.os.linux.setup,comp.os.linux.advocacy
Subject: Re: Need ideas for university funded project for linux
From: David Steuber <[EMAIL PROTECTED]>
Date: Tue, 30 May 2000 03:00:04 GMT

Nix <$}xinix{[email protected]> writes:

' Craig Kelley <[EMAIL PROTECTED]> writes:
' 
' > For all the faults in dpkg and rpm; they are light-years ahead of
' > autoconf
' 
' They do completely different jobs. autoconf is a software portability
' enhancement tool, not a software packager!

Just a little story here.  A demonstration of the dangers of a little
knowledge and root privileges.

I decided to try and get KDE2 working on my laptop computer.  I got it 
working on my PPro200 tower, so it should be no problem, right?  Well, 
in my poking around, I managed to completely trash X.  I no longer
have a graphical logon at this time.  I'm actually screwing around in
a text console.  Keybindings from stty don't seem to apply ( the
backspace key ).  I get library conflicts when trying to go to run
level 3 ( SuSE 6.2 ).  It bumps me back to run level 2.

I was so frustrated, I actually flirted with the idea of blowing away
the system and setting up my laptop computer according to
www.diylinux.com.  What stopped me was the doubt in my mind that
XFree86 3.3.6 will support my particular Neomagic chipset ( 256A ? ).
I've been using a binary only X server for this machine its entire
Linux career.

That didn't stop me from also trying to build XFree on this machine.
I did that on the PPro as well.  However, I've been getting error
messages saying that there is no 386 instruction called 'filds'.  WTF?
I don't remember that on the PPro.  Maybe the compiler and libc are
not set up correctly.  I don't know.  I also have no clue as to what
'filds' is.  I would guess that it is an FPU instruction, but that is
as far as I can go.

Yast leaves a lot to be desired.  I would love to just tell the damn
thing to update my system with everything to restore the binary files
to the way they were.  I have not found any 'force' option and Yast
believes the system to be up to date.

There are of course system settings I do not want to lose.  I have
mail and GNUS indexes, for example.  There is also the XF86Config file.
I suppose I can back that all up.  However, it is a royal pain in the
ass to restore files over the network when some of those files are
network configuration files!  The egg is sitting on the wrong
computer!  Perhaps I need to get some of those 'floppy disks'.  It's
been so long since I used one of those I barely remember what they
look like.  Also, my laptop computer either has the DVD/CD drive
installed or the floppy drive installed, not both.

It may well be 100% user error on my part, but I feel that package
management in the name of RPMs and Yast has failed me completely.  I
don't know much about the various config files on the system.
Settings are put into rc.config and sourced by the myriad of scripts
that are fired up during system startup.  If I make a change to
rc.config, I simply run SuSEConfig to enable the changes.  In the case 
of changing my SMTP server recently, that involved changing the relay
server in rc.config, which in turn changes _one_ line in sendmail.cf.
I presume a SIGHUP then gets sent to sendmail, causing it to reread
sendmail.cf.  But that is not the only thing controlled by rc.config.

SuSE did a good job of simplifying system management if you don't
want to grab the latest and greatest releases of such software as GNU
GCC, libc, KDE2 from CVS, etc.  When you do want to do such things,
you find yourself shackled by the fact that you haven't a fucking clue 
as to how GNU/Linux works as a system.  If you look in /sbin/init.d,
you find there are a lot of scripts for bringing up the computer.
These scripts are well written, I'll grant SuSE that.  But the init.d
concept is a complete mystery to me along with so many other things.

Granted, most end users just want a system to perform dreary tasks
on.  Perhaps that is why Windows does so well.  It does dreary tasks
in spades, as well as games.  I'm not such an end user.  Sure, I have
specific tasks that I wish to perform.  I also wish to configure my
machine so that it is optimized for performing those tasks.  Extra
crap that is included by SuSE and other distributions is a waste of
disk space and logistic effort.

I'm also hampered by the fact that both my computers are completely
different.  One is an old PPro 200 tower, the other is a Solo 5150
laptop.  The Solo has PCMCIA, the PPro does not.  The Solo has a
Synaptics touch pad which is configured as a regular PS/2-style mouse
like the PPro.  The video chip set is different.  I have the PPro set
up with two NICs as a router to my DSL service.  The laptop doesn't
even have a modem.  The laptop has USB ( Intel host controller ), the
PPro does not.

The PPro has more disk space and is a great place for CVS to be set
up.  The laptop has a more ergonomic keyboard and a _much_ sharper
display for text.  The PPro can handle more colors for graphics.

I am happy with the way the PPro is set up.  I have no intention of
further modifying it if possible.  It makes a great router and a place
to offload compile jobs, even though the laptop is actually a little
faster.  ( I need a new computer ).

I spend almost all my time using the laptop as my terminal.  Losing X
on it is a major inconvenience.  I should probably have left well
enough alone, but KDE2 is shaping up to be _much_ nicer than KDE1.  I
know about the criticisms of Qt.  I agree with them all.  They just
aren't show stoppers for me.  I have neither the time nor the
intelligence to put together a real C++ toolkit ( better than GTK-- is 
claimed to be, and certainly better than Qt ).  I do consider Qt to be 
a hack.  I would prefer to use the C++ features that the language
provides.  Sure, I don't know how to trap a signal from the FPU when
you divide by zero and turn it into an exception that I can throw.
But I'm sure I could learn that little trick (if it is possible at
all).
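
I imagine the skeleton looks something like the following ( an untested
sketch: on x86 an integer divide by zero raises SIGFPE, and siglongjmp()
stands in for a real throw; whether a genuine C++ throw is even legal from
inside a signal handler is exactly the sort of detail I would have to
learn ):

/* fpe.c: catch SIGFPE and bounce back to a recovery point. */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static sigjmp_buf fpe_env;

static void on_fpe(int sig)
{
        (void)sig;
        siglongjmp(fpe_env, 1);         /* the poor man's "throw" */
}

int main(void)
{
        struct sigaction sa;
        volatile int zero = 0;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_fpe;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGFPE, &sa, NULL);

        if (sigsetjmp(fpe_env, 1)) {    /* the poor man's "catch" */
                printf("caught a divide by zero\n");
                return 0;
        }
        printf("%d\n", 1 / zero);       /* integer divide by zero -> SIGFPE */
        return 0;
}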

The bottom line is that the canned distributions are designed for the
user who doesn't really want to tinker with the system.  This is not
exactly in the spirit of Linux.  However, it is very pragmatic.  Very
few people have the necessary expertise to build a system from
scratch.  The Linux kernel is just a small part of a complete
operating system.

My ideal would be to have a means of grabbing source packages and
compiling them for the machine they are to be installed on.  That
takes care of linkage problems.  Most packages generate a single
binary that can be dumped into /usr/bin.  Larger packages will also
include man pages.  Man pages go into a well known tree.  Even /etc
config files go into a well known location.  Larger packages such as
XFree86 do create some complexity.  However, the X Window System has
standard package locations too.

Really, all that is needed is a set of shell scripts that will find
the packages you want, fetch the source, dump it into a src tree, and
build and install the package.  If you are fortunate enough to
have homogeneous systems, you can use a single machine for this and
use rsync to configure the other machines on the network.
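
Something like this trivial skeleton is what I have in mind ( sketched here
in C with system() calls, since I still need to learn the intricacies of
shell scripting; the URL, paths and package name are made-up placeholders ):

/* fetchbuild.c: toy "fetch, unpack, configure, build, install" driver.
 * Everything hard-coded here would really come from a package list. */
#include <stdio.h>
#include <stdlib.h>

static int run(const char *cmd)
{
        int rc;

        printf(">>> %s\n", cmd);
        rc = system(cmd);
        if (rc != 0)
                fprintf(stderr, "command failed (%d): %s\n", rc, cmd);
        return rc;
}

int main(void)
{
        /* 1. fetch the source into the src tree */
        if (run("wget -P /usr/local/src http://ftp.example.org/pkg-1.0.tar.gz"))
                return 1;
        /* 2. unpack it */
        if (run("cd /usr/local/src && tar xzf pkg-1.0.tar.gz"))
                return 1;
        /* 3. configure, build, install; abort on the first failure */
        if (run("cd /usr/local/src/pkg-1.0 && ./configure && make && make install"))
                return 1;
        return 0;
}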

Package dependencies are easy to deal with.  If a library is missing,
the configure file should tell you and abort.  If the configure file
was not written correctly, then the build will fail and indicate the
missing library.  If the major libraries are in a publicly accessible
database, then you should be able to locate them and install them.  A
script can automate the procedure.

Naturally, this only works when you have a high speed internet
connection.  CD distributions are not going to go away.  However, CDs
have the same problem as print media.  They are out of date by the
time they reach the consumer.

I have no solution to this problem.  For a corporate environment, this 
is why you have a system administrator in the first place.  He isn't
there to be the BOFH.  For home users, Linux is still a hobby system
regardless of all the hype.  The major distributions are making very
rapid progress towards making end-user friendly packages.  This is
necessary to the success of free software, and I applaud them for it.
However, the big problem with a practical canned system is that it
must be rigid and inflexible ( pardon the redundancy ) in order to be
supportable.  Choice leads to confusion.  This is why Microsoft gives
you no choice.  That and their bottom line.

Fortunately, there are people like me who want to become GNU/Linux
gurus who have brains that actually work.  I am confident that
GNU/Linux will move into a position of being a true competitor to
Microsoft Windows on the desktop.  The server market is just too small 
to be of real importance by comparison.  SuSE, Slack, Debian, Red Hat, 
et al, will be producing Linux systems that anyone can install onto
most hardware.  It will just work.  However, there will always be
people who want to be on the bleeding edge and be working with systems 
that just can't be pre-packaged with today's tools.

A POSIX compliant system is a complex collection of complex software.
It will never be a simple thing for anyone to just throw together
their own home brew OS environment.  However, I think the tools
available can be adapted to provide a basis that makes it simpler for
people who want a home brew OS environment to do exactly that.  All
you need is a base system that can fetch and compile the source code
for different packages.  It should also be able to recursively fetch
and compile any true dependencies.  I've been thinking about doing
this myself, but it is not an easy task.  I haven't got the confidence 
in my ability to do such a thing.  I certainly need to learn the
intricacies of shell scripting to aid in this.  I don't think Perl or
TCL is sufficient.

Regardless of the sophistication of the tools, it will still be
necessary to understand the intricacies of a complete GNU/Linux OS.
Individual packages such as sendmail that have complex config scripts
will probably need tools that can help automate the configuration
process so that people don't create open mail relays.  Current
packages hide information.  That is both good and bad.  It is good in
that people like me can get a working system together without losing
a lot of hair.  It is bad in that people like me don't know how the
system works.  We don't have to know everything.  That is asking too
much.  But we should know the fundamentals.  This is valuable
knowledge beyond the esthetic.  You can earn good money if you know
how the system works and you work for a corporation that uses that
system.

I think I've ranted long enough for one post.  End stream of
consciousness. 

-- 
David Steuber   |   Hi!  My name is David Steuber, and I am
NRA Member      |   a hoploholic.

All bits are significant.  Some bits are more significant than others.
        -- Charles Babbage Orwell

------------------------------

From: [EMAIL PROTECTED] (Horst von Brand)
Subject: Re: Technical advantages/disadvantages of Linux
Date: 30 May 2000 02:55:22 GMT
Reply-To: [EMAIL PROTECTED]

On 29 May 2000 13:07:46 -0700, p@t <p@t> wrote:
>In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
>>If you ask for technical reasons, IMHO linux is as good as any other
>>UNIX system. 

>That is not the engineering way to compare products.

>To start with, you need to come up with a list of the functionalities
>that Linux provides, and do the same for the OS's you are trying
>to compare it with.

Nope. You need to come up with a list of functionalities your application
needs, and estimate how critical each one is. This might very well include
non-technical characteristics, e.g., will there be somebody able to support
this system in 5 years' time? Then you grade all alternatives, and finally
you pick the best one.
-- 
Horst von Brand                             [EMAIL PROTECTED]
Casilla 9G, Viña del Mar, Chile                               +56 32 672616

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
