Linux-Development-Sys Digest #697, Volume #8      Mon, 7 May 01 10:13:14 EDT

Contents:
  Questions about x86 CPU memory management of Linux. ("���A�a")
  Sun Solaris IPC (vardhan)
  Re: Help: Kernel module doesn't compile after kernel upgrade. ("Stefan")
  Re: Is there a limit of the number of kernel modules? (Kasper Dupont)
  Re: How can I get the IP of eth0? (Kasper Dupont)
  Re: Obtaining the IP address of the eth0 interface? (Kasper Dupont)
  Re: How to get a number of processors (Stefaan A Eeckels)
  Re: Is linux kernel preemptive?? (Stefaan A Eeckels)
  Re: disk request ordering dependency (Kasper Dupont)
  Re: Q: device driver races? (Kasper Dupont)
  Re: Understanding module initialization (Kasper Dupont)
  Re: Questions about x86 CPU memory management of Linux. (Kasper Dupont)

----------------------------------------------------------------------------

From: "���A�a" <[EMAIL PROTECTED]>
Subject: Questions about x86 CPU memory management of Linux.
Date: Mon, 7 May 2001 17:49:12 +0800

Hi!

I have traced the Linux source code for a period of time, and I am very
interested in operating systems. The paging and segment management of
modern operating systems has confused me for a long time. I have tested
my programs on Linux to understand every process's address space, traced
the source code of kernel 2.2, and read some related books for answers,
but I am still confused.

First, I want to confirm whether every process's linear address space is
independent (i.e. every process's address space runs from 0x00000000 to
0xffffffff). My test program suggests that they are independent, because
every program's main function is at almost the same address. But
something disturbs me: if all processes are independent, two processes'
linear addresses may overlap. What mechanism distinguishes two identical
linear addresses that map to different physical addresses? Does it
depend on the value stored in CR3 identifying a different page
directory? Also, what address space do our programs (applications)
operate on: the logical address space or the linear address space?

Second, we know that Linux uses only six segment types stored in the
GDT: the "kernel code segment", "kernel data segment", "user code
segment", "user data segment", "TSS segment", and "LDT segment"
(optional). I assume that a user process occupies the user code segment,
one TSS, and one LDT in the GDT. But what about the user data segment?
What is its function?

Third, if all processes are located in the user code segment, that
segment can only address 4GB of space. Does this not collide with the
concept that every process has its own 4GB address space? Because of
this question, I suspect that all processes share one linear address
space.

Finally, why do we need separate (user/kernel) code segments and data
segments? Isn't every process's entire address space in the code
segment?

Although I have read some theoretical and practical books on operating
systems to answer these questions, I could not find a reasonable
explanation.

I hope that someone can help me. Thanks very much!

Y.H. Feng



------------------------------

From: vardhan <[EMAIL PROTECTED]>
Crossposted-To: 
comp.unix.questions,comp.unix.programmer,comp.os.linux.misc,comp.unix.solaris
Subject: Sun Solaris IPC
Date: Mon, 07 May 2001 15:38:39 +0530


Hi,

I need to find information about the various UNIX
implementations/standards(?) for IPC (message queues, pipes, shared
memory), such as POSIX, AT&T, SVR4, BSD, etc. I need this to compare
and list the advantages/disadvantages of each for a particular project,
to find the IPC mechanism best suited to the project.

The project is based on a Solaris platform (version unknown) running
SunOS (probably 5.6). I assume that this particular SunOS will be using
a specific Unix IPC standard; please correct me if I am wrong. My
question is: in this case (SunOS on Solaris), what are my options for
IPC, and where do I get the information to make the comparative study?
Specifically, where do I begin to get the information I need? I am
quite comfortable with basic UNIX IPC and system calls.

Any suggestions in the form of book titles/web links are welcome!

thanks,
Vardhan

PS: Please pardon me if the question has been framed incorrectly; I am
not yet quite comfortable with the different flavors of Unix.



--

Vardhan Walavalkar
email: [EMAIL PROTECTED]



------------------------------

From: "Stefan" <[EMAIL PROTECTED]>
Subject: Re: Help: Kernel module doesn't compile after kernel upgrade.
Date: Tue, 08 May 2001 13:39:51 +0200

In article <[EMAIL PROTECTED]>, "Peter T. Breuer"
<[EMAIL PROTECTED]> wrote:

>> What can I do to get the compilation up and running again?
> 
> Fix your include headers. You are including /usr/include/linux instead
> of /usr/src/linux/include. Replace your /usr/include/linux and
> /usr/include/asm with links to the right place, or use -I.

Hello Peter,

thank you for your help. In the past I set up links for the include
files
lrwxrwxrwx    1 root     root    /usr/include/asm ->
                   /usr/src/linux/include/asm/ 
lrwxrwxrwx    1 root     root   /usr/include/linux -> 
                  /usr/src/linux/include/linux/ 
just as you said, but it didn't work with this configuration.
With the -I flag, however, it ran perfectly. This confuses me,
because the links into /usr/src/linux/... and the -I /usr/src/linux/include
option should be equivalent.
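For reference, the -I variant amounts to a compile line along these
lines (the module name is hypothetical):

```shell
# Build the module object against the kernel tree's own headers,
# bypassing whatever happens to be installed under /usr/include.
gcc -D__KERNEL__ -DMODULE -O2 -Wall -I/usr/src/linux/include -c mymodule.c
```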

Stefan 
================
German spoken.

------------------------------

From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: Is there a limit of the number of kernel modules?
Date: Mon, 07 May 2001 11:48:08 +0000

Nick Lockyer wrote:
> 
> I would think only memory and the number of processes available.  I believe
> modules must be in real memory, they cannot be paged out, but I am not
> certain on that.  I would not think anybody is ever going to have a problem
> though!

The kernel and all kernel modules must be in
physical memory. The number of processes is
only a limit if the module needs to create a
new kernel thread, and most modules don't do
that.

> 
> Pei Zheng <[EMAIL PROTECTED]> wrote in message
> news:9crskq$290d$[EMAIL PROTECTED]...
> > Hi there,
> > I am wondering that if there is a limit of the number of loadable kernel
> > modules for current linux kernel(2.4).
> > anybody any idea?
> >
> > -Pei
> >
> >

-- 
Kasper Dupont

------------------------------

From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: How can I get the IP of eth0?
Date: Mon, 07 May 2001 12:15:31 +0000

This is a multi-part message in MIME format.

==============1F69D54B40B0D4DC6CF50F48
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Andreas Egner wrote:
> 
> Hallo,
> how can I get the IP of a network device like eth0, ppp0, ... in a C/C++
> program?
> 
> Thanks
> Andreas

-- 
Kasper Dupont

==============1F69D54B40B0D4DC6CF50F48
Content-Type: text/plain; charset=us-ascii; name="getip.c"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="getip.c"

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
  /* ask the kernel for the address of one named interface */
  struct ifreq I = {{"eth0"}};
  int fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_IP);
  if (fd < 0 || ioctl(fd, SIOCGIFADDR, &I) < 0) {
    perror("getip");
    return 1;
  }
  printf("%s\n", inet_ntoa(((struct sockaddr_in *)&I.ifr_addr)->sin_addr));
  close(fd);
  return 0;
}

==============1F69D54B40B0D4DC6CF50F48==


------------------------------

From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: Obtaining the IP address of the eth0 interface?
Date: Mon, 07 May 2001 13:02:58 +0000

This is a multi-part message in MIME format.

==============2A0CFA1B1CE091EA28765EEB
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Carlos Justiniano wrote:
> 
> I need to obtain the IP address of the eth0 interface from within my
> Linux server application.  I'm aware that I can do this from a
> connected socket, but I would like to have the IP address prior to my
> server making a connection.
> 
> I'd rather not have to depend on the hosts file or functions such as
> gethostname(), gethostbyname().  The machine that my server will be
> running on will obtain it's IP via DHCP.
> 
> IP and IFCONFIG both return the correct information of the interfaces
> that are active.  I realize that I could parse the output to obtain
> the IP info that I need... but I'm more interested in a solution that
> doesn't depend on a clever trick.  I imagine that there must be an API
> to enumerate the active interfaces?
> 
> - Carlos

This example should help you.

-- 
Kasper Dupont

==============2A0CFA1B1CE091EA28765EEB
Content-Type: text/plain; charset=us-ascii; name="getip.c"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline; filename="getip.c"

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
  /* ask the kernel for the address of one named interface */
  struct ifreq I = {{"eth0"}};
  int fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_IP);
  if (fd < 0 || ioctl(fd, SIOCGIFADDR, &I) < 0) {
    perror("getip");
    return 1;
  }
  printf("%s\n", inet_ntoa(((struct sockaddr_in *)&I.ifr_addr)->sin_addr));
  close(fd);
  return 0;
}

==============2A0CFA1B1CE091EA28765EEB==


------------------------------

From: [EMAIL PROTECTED] (Stefaan A Eeckels)
Subject: Re: How to get a number of processors
Crossposted-To: comp.os.linux.development.apps
Date: Mon, 7 May 2001 14:52:39 +0200

In article <[EMAIL PROTECTED]>,
        [EMAIL PROTECTED] (Dave Blake) writes:
> Eric P. McCoy <[EMAIL PROTECTED]> wrote:
> 
>> This strikes me as a battle of bad ideas: I hate writing a
>> text parser to deal with /proc; I don't like using nonstandard
>> pieces of code; and no program should ever need to know how
>> many processors are in a given box.  There are cases where
>> you'd want to use one or all of these bad ideas, but I, for
>> one, would need a pressing reason.
> 
> Suppose I am writing a data crunching piece of software that
> parallelizes easily, and wish to run a thread on each processor. 
> 
> I first parse /proc/stat, and then crunch away with a thread
> on each CPU. 
> 
> For a web searching program, you may wish to know the number of
> NICs and CPUs, and take the lower of the two as the number of
> threads to run. And so on. 

In a Unix system, the application should not need to know
anything about the hardware details. The recent obsession
with threads violates that basic tenet. If one wants to
squeeze the last ounce of performance from a box, don't
use an OS.

-- 
Stefaan
-- 
How's it supposed to get the respect of management if you've got just
one guy working on the project?  It's much more impressive to have a
battery of programmers slaving away. -- Jeffrey Hobbs (comp.lang.tcl)

------------------------------

From: [EMAIL PROTECTED] (Stefaan A Eeckels)
Subject: Re: Is linux kernel preemptive??
Date: Mon, 7 May 2001 14:47:40 +0200

In article <[EMAIL PROTECTED]>,
        Greg Copeland <[EMAIL PROTECTED]> writes:
> 
> Thanks.  That's been my point.  Please clarify
> why you think that interrupt handlers do not
> count as system calls.  Let's review this.  Both
> reside within the kernel.  Both are functions.  Both
> are called to service a specific level of functionality.

You've got the hierarchy wrong. Applications use
OS services through "system calls", and the OS 
uses services provided by the hardware through
drivers. Drivers consist of interrupt handlers
(if the hardware is capable of generating interrupts)
plus code to control the device.
Your confusion stems from the fact that in certain
processor architectures system calls are implemented
through a mechanism similar to hardware interrupts,
but to Unix, system calls are totally different from
interrupt handlers.

> The only distinction is that one is invoked directly by
> an asynchronous interrupt handler while the other is a
> synchronous call.  Hardly worth making the distinction
> unless you are getting into a microscopic conversation,
> which I have pointed out that I was specifically trying
> to avoid.  Now then, since I feel sure that we can agree
> on the above, adding this level of detail to a high level
> conversation, I would submit, offers no additional value.

You're correct when talking from the CPU's viewpoint, but
totally wrong when looking at it in terms of the structure
of the Linux OS. 

> Keep in mind, interrupt handlers can and do call other
> system calls.  In essence, when you say an "interrupt
> handler", you are really describing an asynchronous entry
> point to a system call.

In a Unix system, this is most certainly _not_ the case.
No application ever vectors to the same routine as an
interrupt handler --the job of the OS is to keep applications
from doing just that. Inside the drivers, you can use
kernel services, but you would never call them through the
same mechanism as an application uses to perform a system
call. And interrupt handlers have to be written to minimize
the time spent with interrupts disabled, something you don't
normally worry about when implementing a system call.

> In the original conversation, the
> fact that something was being called asynchronously was
> specifically deemed, by me, as irrelevant because the level
> of detail at that point in time could not support any meaningful
> distinction about its exact behavior worth mentioning.
> 
> The use of "system call" is a very ambiguous term to which I
> feel people often assign specific attributes that do not
> really exist.  I like to avoid all of that and think of
> system calls as just that.  A function residing within the
> "system".  In this case, I think of "system" as being "the
> kernel."  I find that for the vast majority of conversations,
> the concept holds true and greatly simplifies the thread as
> far fewer assumptions are needed by the participants.

"System call", "device driver" and "interrupt handler" have
very precise meanings in the Linux context. Not using these
meanings in c.o.l.d.s. is asking for disagreement. Your
definition of system call makes little sense, because it
ignores the scope of the "function residing within the
system". Even though printk is such a function, it is not
callable from an application program, and hence it is not
a system call, and neither are the interrupt handlers.

> Hope this helps.

When in Rome, do as the Romans. 

-- 
Stefaan
-- 
How's it supposed to get the respect of management if you've got just
one guy working on the project?  It's much more impressive to have a
battery of programmers slaving away. -- Jeffrey Hobbs (comp.lang.tcl)

------------------------------

From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: disk request ordering dependency
Date: Mon, 07 May 2001 13:09:39 +0000

Zhiyong Xu wrote:
> 
>   How does the kernel enforce disk request dependencies? For example, the
> rename operation changes the name of a file by adding a link for the new name
> and removing the old link. If the system goes down after the old directory
> block has been written (with the link removed) but before the new one is
> written, neither name for the file will exist.
>      I checked the __make_request function in ll_rw_blk.c, but I don't see
> where the code deals with consistency between a file's data blocks and its
> metadata. So what is the Linux strategy here?
>      Thanks!

It depends on which type of filesystem is being used.
If it is ext2, the filesystem is marked dirty when
it is mounted R/W and marked clean when it is no
longer mounted R/W. If it is still marked dirty on
the next boot, e2fsck is run to fix the problems.
If it is a FAT filesystem, nothing is done about it.
If it is a journaling filesystem like reiserfs or
ext3, some technique is used either to complete the
operation or to roll it back the next time the
filesystem is mounted.


-- 
Kasper Dupont

------------------------------

From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: Q: device driver races?
Date: Mon, 07 May 2001 13:17:52 +0000

Holger Eitzenberger wrote:
> Additionally there is a feature on the board called "reset cycle".
> At boot time a reset cycle is started and has to be stopped in a
> (configurable) time period; if not, the system will be rebooted by the
> LED board.

A board with this reboot function is called a watchdog timer.
Linux already has drivers for a number of watchdog timers, so
it might be a good idea to check whether one of them supports
your board. If you do need to write a new driver, I suggest
making it compatible with the existing watchdog timer
interface.

-- 
Kasper Dupont

------------------------------

From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: Understanding module initialization
Date: Mon, 07 May 2001 13:26:19 +0000

[EMAIL PROTECTED] wrote:
> 
> I am trying to understand how modules are initialized in the kernel. I can
> see in ../init/main.c where do_basic_setup() runs for things like CONFIG_PCI and other
> constants. After that I see a call to do_initcalls() where I then presume
> the various init_module() routines of all of the modules that are compiled
> into the kernel are called. I can see it calling something called
> __initcall_start which appears defined in vmlinux.lds and appears to be a
> linker control script of some sort. The problem is I don't understand what
> is going on. Perhaps someone might help me out by expressing how this works.
> (And if the answer is 'man xxx', so be it, but I did try).
> 
> By extension, the next question relates to modules loaded after the kernel
> boots. Is this the same, or is there some other mechanism at work?
> 
> Charles

Modules are by definition loaded after the kernel
boots; the init_module() function is called from
kernel/module.c on this line:

        if (mod->init && (error = mod->init()) != 0) {

The mod->init field is given to the kernel by the
module loader program, usually modprobe or insmod.

Some drivers can be compiled either into the kernel
or as a module; such drivers need to be initialized
differently in each case. When compiled into the
kernel they are initialized from init/main.c, with
code that usually looks similar to this:

#ifdef CONFIG_DRIVERNAME
        drivername_init();
#endif

-- 
Kasper Dupont

------------------------------

From: Kasper Dupont <[EMAIL PROTECTED]>
Subject: Re: Questions about x86 CPU memory management of Linux.
Date: Mon, 07 May 2001 13:44:48 +0000

���A�a wrote:
> 
> Hi!
> 
> [full question snipped -- see the first article in this digest]
> 
> Y.H. Feng

The i386 architecture implements a segment concept
with 4 different privilege levels. Linux has no real
need for the segments and uses only 2 of the levels,
which is why it gets by with such a small number of
segment selectors.

The 4 GB linear address space is split into two
sections: the first part is usually 3 GB and is used
by user space, the second part is usually 1 GB and
is used by kernel space.

The user section is different for each process; the
context switch code switches it by changing the CPU
register (CR3) that points to the page table.

The kernel section is the same for all processes
and starts with a 1:1 mapping of physical memory.
Newer kernels are able to map only part of the
physical memory and use the rest for buffers,
cache, and process memory.

The 3 GB / 1 GB split can in some kernel versions
be changed to 2 GB / 2 GB at compile time.

Some of the segment selectors used by Linux map
only user space, while others map both user space
and kernel space.

-- 
Kasper Dupont

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to the
comp.os.linux.development.system newsgroup.

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
