Linux-Development-System Digest #982, Volume #6     Thu, 22 Jul 99 06:13:44 EDT

Contents:
  Re: when will Linux support > 2GB file size??? (Jonathan A. Buzzard)
  Re: High load average, low cpu usage (leoxx)
  Re: when will Linux support > 2GB file size??? (Ulrich Weigand)
  Transparent Proxy and UDP (Peter Eisenlohr)
  [?] /lib & /sbin/init (jauming)
  ide-cd problem (Antoine Lefebvre)
  Re: [?] /lib & /sbin/init (Arun Sharma)
  Re: when will Linux support > 2GB file size??? (Christopher B. Browne)
  Re: when will Linux support > 2GB file size??? (Christopher B. Browne)
  Tracing system calls (Mitch)
  Swapping in 2.0.3[46] (Chris Frey)
  Re: Tracing system calls (M van Oosterhout)
  Read 'KERN_INFO' (Junichi SAITO)
  bit already cleared (Jun-Young Cho)
  Re: Kernel version 2.3.9+ (Mark Tranchant)
  linking problem using gcc 2.7.2.3 (Matt)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Jonathan A. Buzzard)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: Wed, 21 Jul 1999 23:52:47 +0100

In article <[EMAIL PROTECTED]>,
        [email protected] (Peter Hayes) writes:
[SNIP]
> 
> The video is usually clipped into short segments for ease of access
> during editing, but yes, an Avid AVR77 (broadcast quality) file
> consumes 1 Gb per minute (in round figures), so a 5 minute clip, which
> is about the biggest I'd like to handle, will be about 5 gigs in size.
> It is possible to record for as long as you have storage space so the
> files can get *VERY* big. The Apple Mac seems to handle these file
> sizes reliably, they're streamed off twin striped UW Scsi drives.
> Windows NT handles them also, Avid are moving to NT4 for their next
> generation of editing systems (because M$ bought a 20% holding in
> Avid???).
> 

Poor sods. My personal experience with NTFS is that if you suffer a
system crash, or are forced to do a reset while writing heavily to a
large (i.e. 32GB+) NTFS file system, the chances are about 1 in 3 that
it is stuffed on reboot. I mean really stuffed: the only way back to
a consistent state is to reformat and restore from the last backup.
At this point a DLT autochanger comes in really handy.


> So anyone wanting to do a broadcast quality video editing system based
> on Linux will need >2 Gb file access as number one priority. Either
> that or some clever invisible real-time file splitting/splicing
> algorithm. Maybe BeOS would be a better platform, except you couldn't
> hack it to optimise it for editing.
> 
> Landsat and other environmental imaging satellites, eg SPOT, can
> produce enormous amounts of data. Even the humble weather satellite
> will give you an 80 Mb file on a good pass. Up the channels and
> resolution tenfold and you can easily get a >2Gb file.
> 

Guess why we have NT workstations with 64GB of disk space.


JAB.

-- 
Jonathan A. Buzzard                 Email: [EMAIL PROTECTED]
Northumberland, United Kingdom.       Tel: +44(0)1661-832195

------------------------------

From: [EMAIL PROTECTED] (leoxx)
Crossposted-To: comp.os.linux.misc
Subject: Re: High load average, low cpu usage
Reply-To: [EMAIL PROTECTED]
Date: Thu, 22 Jul 1999 00:07:45 GMT

>I have a PII 450 MHz w/512 MB ram and a clean RedHat 6.0 w/updates .
>When I take a look at cpu usage in kpm/qps most of the cpu is idle
>(~85%), but load average reports around 2. Isn't load average a function
>of cpu usage (only, mostly)? Can it be that some process is running, but
>not showing in ps/top/kpm? I feel that the system response has been
>reduced quite a bit after upgrading to RedHat 6.0.

You can use the "top" command to find out which applications are
probably causing this.  Note also that the Linux load average counts
processes in uninterruptible sleep (state "D" in ps, usually waiting
on disk or NFS I/O) as well as runnable ones, so heavy I/O can push
the load average up while the CPU stays mostly idle.

-- 
JR

------------------------------

From: [EMAIL PROTECTED] (Ulrich Weigand)
Crossposted-To: comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Date: 22 Jul 1999 02:50:07 +0200

Warren Young <[EMAIL PROTECTED]> writes:

>I understood from the Linux kernel mailing list FAQ that this feature
>was considered too bletcherous to use.  Are they serious about putting
>it into the standard kernel, or are they just farting around with
>oddball hardware features?  

I seem to recall discussion about implementing access to the extended physical
memory via a special ram disk driver.  Apps would then be able to access
the memory by mmap()ing (sections of) those ram disks ...

>UnixWare uses this feature to get up to 8 GB of linearly-addressable
>memory, and the remaining 56 GB addressable with 36 bits is available
>through extended shared-memory APIs.

How's that 8 GB supposed to work? AFAIK, the physical address extensions
only apply to *physical* addresses; the *virtual* address space remains
32-bit (4 GB) ...

-- 
  Ulrich Weigand,
  IMMD 1, Universitaet Erlangen-Nuernberg,
  Martensstr. 3, D-91058 Erlangen, Phone: +49 9131 85-7688

------------------------------

From: Peter Eisenlohr <[EMAIL PROTECTED]>
Subject: Transparent Proxy and UDP
Date: Thu, 22 Jul 1999 04:42:43 +0200

Hi all,

I am redirecting some UDP traffic via an appropriate ipchains rule to
a local port. With recvfrom() I can receive the datagrams and identify
their source(s). But how can I determine the destination, i.e. the
IP address the packet was sent to? (ipchains will catch the
packet regardless of its destination address and hand it to my
program.)

thanx for any hints,
  Peter

------------------------------

From: jauming <[EMAIL PROTECTED]>
Subject: [?] /lib & /sbin/init
Date: Thu, 22 Jul 1999 01:35:03 GMT

We are porting Linux to mipsel-linux-idt-r3k;
the kernel is almost ready now.

The question is:
how do we prepare "/lib/*" (the shared libraries) for ELF executables,
and "/sbin/init"?
Thanks in advance! :)
You can also reply to: [EMAIL PROTECTED]
--
--
regards


Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.

------------------------------

From: antoine@tiger. (Antoine Lefebvre)
Subject: ide-cd problem
Reply-To: [EMAIL PROTECTED]
Date: Thu, 22 Jul 1999 02:15:47 GMT

Hello,
I have a problem with the cdrom driver that appears in the 2.2.x kernels.
When I mount the cdrom, the system locks up. After looking in the driver,
I found that the last function executed is

static void cdrom_start_read_continuation (ide_drive_t *drive)

and it reaches the end. After that, it is locked....

My cdrom is an LTN301, a 32x ATAPI CDROM drive,
and my computer a P166 with a SpaceWalker motherboard (i430TX chipset).

thanks a lot



-- 
Antoine Lefebvre
[EMAIL PROTECTED]
http://pages.infinit.net/linux/rocket/index.html
http://pages.infinit.net/linux/music/music.html

------------------------------

From: [EMAIL PROTECTED] (Arun Sharma)
Subject: Re: [?] /lib & /sbin/init
Reply-To: [EMAIL PROTECTED]
Date: Thu, 22 Jul 1999 03:45:11 GMT

On Thu, 22 Jul 1999 01:35:03 GMT, jauming <[EMAIL PROTECTED]> wrote:
> we are porting linux to mipsel-linux-idt-r3k
> the kernel is almost ready now.
> 
> question is:
> how to prepare "/lib/*" (shared library) for elf-executable ?
>              & "/sbin/init" ?

You need to port glibc (or another libc) to your architecture: compile
it with a cross compiler to produce libc.a.

Then recompile the sysvinit package and link it against that libc.a
to get /sbin/init.
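A rough sketch of those steps — every path, version number, and flag below is illustrative, not a tested recipe for this particular board:

```shell
# 1. Build binutils and gcc as a mipsel-linux cross toolchain (not shown).

# 2. Cross-build glibc against the ported kernel's headers:
mkdir glibc-build && cd glibc-build
../glibc-2.1/configure --host=mipsel-linux \
      --prefix=/opt/mipsel-root \
      --with-headers=/usr/src/linux-mipsel/include
make && make install

# 3. Cross-compile sysvinit with the new toolchain; linking it statically
#    sidesteps the shared-library question for the very first boot:
cd ../sysvinit/src
make CC=mipsel-linux-gcc LDFLAGS=-static
```

Once a static /sbin/init boots, the shared libraries installed under the glibc prefix can be copied into the target's /lib.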

        -Arun

------------------------------

From: [EMAIL PROTECTED] (Christopher B. Browne)
Crossposted-To:  comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Reply-To: [EMAIL PROTECTED]
Date: Thu, 22 Jul 1999 04:05:06 GMT

On Thu, 22 Jul 1999 00:00:21 +0100, Jonathan A. Buzzard <[EMAIL PROTECTED]> posted:
>In article <[EMAIL PROTECTED]>,
>       [EMAIL PROTECTED] (Christopher B. Browne) writes:
>[SNIP]
>> 
>> There are effectively three choices:
>> a) Leave it alone, letting 32 bit systems have 32 bit limitations, and
>> 64 bit systems have 64 bit limits, which is the situation we have at
>> present,
>> 
>> b) Move to 64 bit representations on all platforms, which results in a
>> performance "hit" for 32 bit systems, but which has the positive effect
>> that it is a "linear" change, or
>> 
>> c) Segment things, having the *option* of using a 64 bit API when
>> desired, and a 32/64 bit API (platform-dependent) when you "don't care."
>> 
>> My suspicion is that the best way to handle this is to go with option a)
>> for the time being, and then move to b) in about five years when 64 bit
>> processors become dominant in the marketplace.
>
>Believe you me, option a) is not a realistic option. Far too many people
>have a requirement to handle files larger than 2GB *today* on a 32bit
>system.

If you think it's unrealistic, then I presume that you can name the
"project team" that are going to make it unrealistic?

Personally, I would say that it is *entirely* realistic to prefer a).

- It is far more likely to be robust *and* efficient than the
alternatives.

- It requires no rewriting of applications, no fixing of bugs, no period
of immaturity.  We've already lived through the a.out-->ELF change, and
the GLIBC 1 --> GLIBC 2 change; these were both transitions that caused
many problems with stuff not working as expected.  The "emulate 64 bits,
sometimes" transition would doubtless be at least as painful.

- It permits those that *need* big files to choose the appropriate
hardware platform.

- It encourages the deployment of 64 bit systems in those cases where
64 bits is in fact likely to provide benefit.

- It does not introduce either inelegant "hacks," or arbitrarily
inefficient changes.

Furthermore, I think it is quite realistic to expect it to *actually
happen* this way.

The approach appears to conform with Linus' attitude towards the issue,
which matters rather a lot: if he feels things should be done in a
particular way, that is likely to carry the day.  If you think it is
"not realistic," that, frankly, doesn't weigh heavily on peoples' minds.
-- 
Eagles may soar, but weasels don't get sucked into jet engines.
[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/lsf.html>

------------------------------

From: [EMAIL PROTECTED] (Christopher B. Browne)
Crossposted-To:  comp.os.linux.advocacy
Subject: Re: when will Linux support > 2GB file size???
Reply-To: [EMAIL PROTECTED]
Date: Thu, 22 Jul 1999 04:05:02 GMT

On Wed, 21 Jul 1999 17:47:07 -0600, Warren Young <[EMAIL PROTECTED]> posted:
>"Christopher B. Browne" wrote:
>> 
>> b) Move to 64 bit representations on all platforms, which results in a
>> performance "hit" for 32 bit systems, but which has the positive effect
>> that it is a "linear" change, or
>
>This will require recompiling everything, I believe -- it breaks the
>kernel-to-user-mode ABI.  Let's not take this option if we don't have
>to.

Yes, recompiling everything turns out to be necessary.

But it's even worse than you think; any software that assumes 32 bits,
or *forces* 32 bits, for data structures has the potential to break.
Ports of software to 64 bit architectures have been finding many such bugs,
and if programs self-configure their word sizes by looking at the system
architecture, the 32-bittedness of IA-32 will cause them to get confused.

>> c) Segment things, having the *option* of using a 64 bit API when
>> desired, and a 32/64 bit API (platform-dependent) when you "don't care."
>
>This option may sound nasty, but it has enjoyed some commercial
>success.  In at least some ways, the real competition for Linux isn't
>NT, but higher-end Unix-on-Intel operating systems like UnixWare.  UW
>isn't pretty, but it at least doesn't have the 2GB memory and file size
>limitations.  

The killer question is whether it's worth hacking things up to get a
couple more bits, when waiting a year gives you increasing availability
of 64 bit systems which obviate the problem.

>It gets past the first via the P6 architecture's PAE (physical address
>extensions) to the tune of 64 GB.  I understand that Linux won't be
>supporting this, but it's interesting to note.  By this feature,
>UnixWare is able to address 8 GB linearly, plus up to 64 GB via special
>shared memory APIs, which big-database vendors like Oracle use.
>
>UnixWare gets past the file limits simply by having a better filesystem:
>Veritas' vxfs.  (1TB filesystems and 1TB files within those
>filesystems.)

I'd say that there must be some interesting memory management issues
there; if files can be 1TB in size, and memory spaces are limited to
8GB, then there is no clean one-to-one mapping available when you try
to do memory mapped files...

>If you're interested in further details on the differences, see my *ix
>pages, linked below.  I'm a Linux fan myself, but I thought it would be
>worthwhile to compare my preferred OS to the Unix I find myself actually
>_using_ more, since that's what's installed here at work.  After several
>revisions due to stirring up the UW crowd, I think it's pretty accurate
>and objective.

It looks to me like you've probably hammered out the bits that would be
*particularly* offensive to either side.

My last exposure to SCO was in 1993, and at the time, I was using it
as the development environment for some code getting deployed on VMS,
so I can't disagree with the SCO particulars...

The theme that runs across all of these issues is that of
  "Good enough is best."

That is the best answer to the Mindcraft benchmarks: Linux may not have
"won," but the benchmarks clearly established that Linux is a credible
platform for web servers.

And, for those that find SCO licensing to be expensive, the fact that
Linux isn't quite as capable in some areas doesn't establish inferiority,
but only that there *is* a niche for SCO.  Unfortunately for SCO, the
"niche" happens (at this point) to be the shrinking one of "high
performance IA-32."  

I would be *extremely* worried about SCO's prospects if it were not that
IA-64 is imminent, and is likely to allow them a significant performance
boost to make them competitive with the "big iron" UNIXes.  The fact that
they are still not a hardware vendor is a problem nonetheless...
-- 
Real Programmers use: "compress -d > a.out"
[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/lsf.html>

------------------------------

From: Mitch <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development,comp.os.linux.misc
Subject: Tracing system calls
Date: Thu, 22 Jul 1999 03:56:47 GMT

I'm currently working on a BSDi box, and have recently been introduced
to a program called 'ktrace'.  This thing is a great utility.  My
question is whether or not there is something like this for Linux?  I've
made a cursory pass through freshmeat, but didn't turn up anything.

THANKS


------------------------------

From: Chris Frey <[EMAIL PROTECTED]>
Subject: Swapping in 2.0.3[46]
Date: Thu, 22 Jul 1999 02:14:47 -0400

Hi,

By accident, I recently discovered some interesting behaviour with
swapping, after running out of virtual memory, then coming back from
the edge.

Example:
Suppose I have a daemon like junkbuster running, so that every time
I connect to it, it forks another process.  This runs fine with no
extraneous hard disk activity, for now.

Now I run some app as a normal user with no ulimits, using up all
virtual memory, so that a separate "free" command gives me a segfault
due to being out of memory.

Now I exit said app, freeing all that memory.  We're back to normal,
right?  Well, almost.

If I try to access junkbuster now, every time it forks, there is a
hard disk access (I assume a swap access).  If I'm surfing through
this proxy, the hard disk can get pretty busy.  And needlessly busy,
IMHO.

Now this problem goes away if I "swapoff -a" and "swapon -a".  But I'm
thinking this should not be necessary on a well-behaved swapping system.

My setup is 2.0.34 or 2.0.36, with ext2fs and 1 swap partition both
on a single IDE drive.

Can someone enlighten me as to why this happens?  And if possible,
could someone more familiar with the code than I, point me to where
this behavioural decision is made?  This would be greatly appreciated.

Thanks for your time,
- Chris


------------------------------

Date: Thu, 22 Jul 1999 15:52:27 +1000
From: M van Oosterhout <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development,comp.os.linux.misc
Subject: Re: Tracing system calls

Mitch wrote:
> 
> I'm currently working on a BSDi box, and have recently been introduced
> to a program called 'ktrace'.  This thing is a great utility.  My
> question is whether or not there is something like this for linux?  I've
> made a cursory pass through freshmeat, but didn't turn up anything.

BSD has its own special version?? I thought strace was portable to
most architectures.

Anyway, the program is strace.  A similar program named ltrace
traces library calls.  Both ship as standard packages with most
distributions.

HTH,
Martijn van Oosterhout
Australia

------------------------------

From: [EMAIL PROTECTED] (Junichi SAITO)
Subject: Read 'KERN_INFO'
Date: 22 Jul 1999 08:04:11 GMT

How can I read the messages labeled 'KERN_*', especially 'KERN_INFO',
in linux/arch/i386/kernel/setup.c at boot?

I tried passing the 'debug' argument to lilo when booting, and also
modified /etc/init.d/sysklogd to pass the '-c 7' option to klogd,
but nothing in particular happened.

Do I have to change something before compiling the kernel?

I have 2.2.10 and 2.3.10 installed.

tia.

------------------------------

From: Jun-Young Cho <[EMAIL PROTECTED]>
Subject: bit already cleared
Date: Thu, 22 Jul 1999 05:30:13 +0900

Hi all,
When I boot up my computer, it stops after printing a message like
"Ext2_free... bit already cleared".
I don't know what to do; please help.
Best regards.




------------------------------

From: Mark Tranchant <[EMAIL PROTECTED]>
Subject: Re: Kernel version 2.3.9+
Date: Thu, 22 Jul 1999 08:11:17 +0100
Reply-To: [EMAIL PROTECTED]

Andi Kleen wrote:
> 
> David Schwartz <[EMAIL PROTECTED]> writes:
> 
> >       For very good reasons, handholding is not available for development
> > kernels. There is no web site on 'do it yourself brain surgery for
> > beginners'. No sane person would advise beginners to attempt brain
> > surgery.
> 
> To quote Alan Cox (from net/Changes):
> 
> >>
> BTW: Don't let the magic words 'kernel programming' worry you. Its like DOS
> - you make a mistake you have to reboot. You do at least get dumps and a
> kernel logger that is reliable. There is now a loadable module allowing
> use of gdb on the kernel (no breakpoints though!). No magic involved.
> <<
> 
> It is really not brain surgery for the most parts. So please don't scare
> off newbies with such "elite" comments.
> 

Yes, but there's a difference between "near-computer-illiterate newbies
just trying out Linux for a laugh" and "Linux kernel programming
newbies". I believe AC was talking to the latter - the adventurous
former class may get as far as downloading and compiling a kernel
without really understanding what's going on.

I still firmly believe there should be an "Are you sure?" prompt when
making a development kernel. We could include an undocumented make
target to get around this prompt for the people that don't need it...

Mark.

------------------------------

From: Matt <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: linking problem using gcc 2.7.2.3
Date: Thu, 22 Jul 1999 10:02:56 +0100

Hi,

I am having some major problems compiling and linking the
javawebserver 1.1.3 on SuSE Linux 6.1 (glibc 2.0.7).

All seems to be going well, but when the libserver.so
file is being created the 'make' fails.

Does anyone know why and how I can correct it please ?

Many thanks

Matt

bash-2.02# cc -v
Reading specs from /usr/lib/gcc-lib/i486-linux/2.7.2.3/specs
gcc version 2.7.2.3


make -fd jws_inst.mk install

[snip]....

     Finished dependencies of target file
`genunix/unknown/com_sun_server_realm_unix_UNIXUserEnumeration.o'.
     Dependency
`genunix/com_sun_server_realm_unix_UNIXUserEnumeration.c' is older than
dependent
`genunix/unknown/com_sun_server_realm_unix_UNIXUserEnumeration.o'.
    No need to remake target
`genunix/unknown/com_sun_server_realm_unix_UNIXUserEnumeration.o'.
   Finished dependencies of target file `genunix/unknown/libserver.so'.
  Must remake target `genunix/unknown/libserver.so'.
cc -shared -fPIC -Igenunix -I/usr/lib/jdk1.1.7/include
-I/usr/lib/jdk1.1.7/include/genunix   -o genunix/unknown/libserver.so
genunix/unknown/server.o genunix/unknown/UNIXUser.o
genunix/unknown/UNIXUserEnumeration.o
genunix/unknown/com_sun_server_ServerProcess.o
genunix/unknown/com_sun_server_realm_unix_UNIXUser.o
genunix/unknown/com_sun_server_realm_unix_UNIXUserEnumeration.o
Putting child 0x0806f210 PID 00857 on the chain.
Live child 0x0806f210 PID 857
genunix/unknown/com_sun_server_realm_unix_UNIXUser.o: In function
`com_sun_server_realm_unix_UNIXUser_loadFromPasswd':
genunix/unknown/com_sun_server_realm_unix_UNIXUser.o(.text+0x0):
multiple definition of
`com_sun_server_realm_unix_UNIXUser_loadFromPasswd'
genunix/unknown/UNIXUser.o(.text+0x0): first defined here
genunix/unknown/com_sun_server_realm_unix_UNIXUser.o: In function
`com_sun_server_realm_unix_UNIXUser_checkCrypt':
genunix/unknown/com_sun_server_realm_unix_UNIXUser.o(.text+0xd0):
multiple definition of `com_sun_server_realm_unix_UNIXUser_checkCrypt'
genunix/unknown/UNIXUser.o(.text+0xd0): first defined here
genunix/unknown/com_sun_server_realm_unix_UNIXUserEnumeration.o: In
function `com_sun_server_realm_unix_UNIXUserEnumeration_setpwent':
genunix/unknown/com_sun_server_realm_unix_UNIXUserEnumeration.o(.text+0x0):
multiple definition of
`com_sun_server_realm_unix_UNIXUserEnumeration_setpwent'
genunix/unknown/UNIXUserEnumeration.o(.text+0x0): first defined here
genunix/unknown/com_sun_server_realm_unix_UNIXUserEnumeration.o: In
function
`com_sun_server_realm_unix_UNIXUserEnumeration_getNextUserName':
genunix/unknown/com_sun_server_realm_unix_UNIXUserEnumeration.o(.text+0x20):
multiple definition of
`com_sun_server_realm_unix_UNIXUserEnumeration_getNextUserName'
genunix/unknown/UNIXUserEnumeration.o(.text+0x20): first defined here
Got a SIGCHLD; 1 unreaped children.
Reaping losing child 0x0806f210 PID 857
make: *** [genunix/unknown/libserver.so] Error 1
Removing child 0x0806f210 PID 857 from chain.

My make file..

###################################################
# @(#)Makefile.sol     1.11 97/11/06
#
# Solaris-specific makefile for building optional native library
support.
# Use this as a starting point for porting to other UNIX platforms.
#
# Path to top of Java Web Server distribution tree
TOP = ..

# Name of the library we're making (given to System.loadLibrary)
LIBNAME = server

# Location of Java Developer's Kit (and runtime)
JAVA_HOME=/usr/lib/jdk1.1.7

# Java runtime flags
JAVAH=$(JAVA_HOME)/bin/javah
CLASSPATH=$(TOP)/classes:$(TOP)/lib/classes.zip:$(JAVA_HOME)/classes:$(JAVA_HOME)/lib/classes.zip

# OS and machine type
OS =    genunix
ARCH =  unknown
OBJ =   $(OS)/$(ARCH)

# Preprocessor, compiler, linker, and related flags
CC =            cc
CFLAGS =        -shared -fPIC
CPPFLAGS =      -I$(OS) -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/$(OS)
LDFLAGS =       #-G -h $(LIBNAME)
INSTALL =       /usr/bin/install

LIBDIR =        $(TOP)/lib/$(OBJ)

LIBRARY =       $(OBJ)/lib$(LIBNAME).so
OBJS =          $(OBJ)/server.o \
                $(OBJ)/UNIXUser.o \
                $(OBJ)/UNIXUserEnumeration.o \
                $(OBJ)/com_sun_server_ServerProcess.o \
                $(OBJ)/com_sun_server_realm_unix_UNIXUser.o \
                $(OBJ)/com_sun_server_realm_unix_UNIXUserEnumeration.o

default all: $(LIBRARY)

$(LIBRARY): $(OBJS)
        $(LINK.c) -o $@ $(OBJS)

$(OBJ)/server.o: server.c $(OS)/com_sun_server_ServerProcess.h
        $(COMPILE.c) -o $@ server.c

$(OBJ)/UNIXUser.o: UNIXUser.c $(OS)/com_sun_server_realm_unix_UNIXUser.h
        $(COMPILE.c) -o $@ UNIXUser.c

$(OBJ)/UNIXUserEnumeration.o: UNIXUserEnumeration.c \
                $(OS)/com_sun_server_realm_unix_UNIXUserEnumeration.h
        $(COMPILE.c) -o $@ UNIXUserEnumeration.c


$(OBJ)/com_sun_server_ServerProcess.o: $(OS)/com_sun_server_ServerProcess.c
        $(COMPILE.c) -o $@ $(OS)/com_sun_server_ServerProcess.c

$(OS)/com_sun_server_ServerProcess.c:
        $(JAVAH) -d $(OS) -classpath $(CLASSPATH) -stubs com.sun.server.ServerProcess

$(OS)/com_sun_server_ServerProcess.h:
        $(JAVAH) -d $(OS) -classpath $(CLASSPATH) com.sun.server.ServerProcess

$(OBJ)/com_sun_server_realm_unix_UNIXUser.o: $(OS)/com_sun_server_realm_unix_UNIXUser.c
        $(COMPILE.c) -o $@ $(OS)/com_sun_server_realm_unix_UNIXUser.c
$(OS)/com_sun_server_realm_unix_UNIXUser.c:
        $(JAVAH) -d $(OS) -classpath $(CLASSPATH) \
                -stubs com.sun.server.realm.unix.UNIXUser
$(OS)/com_sun_server_realm_unix_UNIXUser.h:
        $(JAVAH) -d $(OS) -classpath $(CLASSPATH) com.sun.server.realm.unix.UNIXUser

$(OBJ)/com_sun_server_realm_unix_UNIXUserEnumeration.o: \
                $(OS)/com_sun_server_realm_unix_UNIXUserEnumeration.c
        $(COMPILE.c) -o $@ $(OS)/com_sun_server_realm_unix_UNIXUserEnumeration.c
$(OS)/com_sun_server_realm_unix_UNIXUserEnumeration.c:
        $(JAVAH) -d $(OS) -classpath $(CLASSPATH) \
                -stubs com.sun.server.realm.unix.UNIXUserEnumeration
$(OS)/com_sun_server_realm_unix_UNIXUserEnumeration.h:
        $(JAVAH) -d $(OS) -classpath $(CLASSPATH) \
                com.sun.server.realm.unix.UNIXUserEnumeration

install: $(LIBRARY)
        $(INSTALL) -d $(LIBDIR)
        $(INSTALL) -c $(LIBRARY) $(LIBDIR)

clean:
        rm -rf $(OS)

.INIT:
        -@mkdir -p $(OBJ)

.KEEP_STATE:

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
