Linux-Development-Sys Digest #708, Volume #8     Thu, 10 May 01 02:13:17 EDT

Contents:
  Re: why a separate process for each thread on Linux (Alexander Viro)
  Re: why a separate process for each thread on Linux (Juergen Heinzl)
  2.4.4 Kernel, Something Seriously Wrong.  David Hinds, can you read this please too! ("E-mu")
  Re: UDF write on MO ("Alessandro Bietresato")
  C++ Shared libraries on Linux - problem, HELP! (Billy Bob Jameson)
  Re: why a separate process for each thread on Linux (Alexander Viro)
  Re: background process survive session close? (Eric Taylor)
  Re: Linux, streams and the standard library (Steve Connet)
  Re: Linux, streams and the standard library (David Konerding)
  Re: shared DLLs written in C++, and _init(), _fini() (David Konerding)
  Re: Multiple processes using the curses library (Kaz Kylheku)
  Re: How to get a number of processors (Eric P. McCoy)
  Re: why a separate process for each thread on Linux ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Alexander Viro)
Subject: Re: why a separate process for each thread on Linux
Date: 9 May 2001 18:22:36 -0400

In article <[EMAIL PROTECTED]>,
Juergen Heinzl <[EMAIL PROTECTED]> wrote:

>Given clone(2) is Linux specific, clone(2) does not exist, you've never
>heard of clone(2), you know of no-one who'd ever heard of clone(2)
>and Montezuma's revenge shall come over you the day you dare to think
>of how nice a clone(2) system call would be.
>
>You get the idea ;)

Nice, but... clone() is equivalent to rfork(). And between Linux+*BSD+Plan9
and Solaris+other Missed'em'V abortions I'd take the former, thank you
very much.

-- 
"You're one of those condescending Unix computer users!"
"Here's a nickel, kid.  Get yourself a better computer" - Dilbert.

------------------------------

From: [EMAIL PROTECTED] (Juergen Heinzl)
Subject: Re: why a separate process for each thread on Linux
Date: Wed, 09 May 2001 22:52:52 GMT

In article <9dcfvc$[EMAIL PROTECTED]>, Alexander Viro wrote:
>In article <[EMAIL PROTECTED]>,
>Juergen Heinzl <[EMAIL PROTECTED]> wrote:
>
>>Given clone(2) is Linux specific, clone(2) does not exist, you've never
>>heard of clone(2), you know of no-one who'd ever heard of clone(2)
>>and Montezuma's revenge shall come over you the day you dare to think
>>of how nice a clone(2) system call would be.
>>
>>You get the idea ;)
>
>Nice, but... clone() is equivalent to rfork(). And between Linux+*BSD+Plan9
>and Solaris+other Missed'em'V abortions I'd take the former, thank you
>very much.
[-]
Unix98 does not know of rfork() (which does more than clone()), but
you're welcome 8)

Gee, thunderstorms tonight ... off,
Juergen

-- 
\ Real name     : Juergen Heinzl                \       no flames      /
 \ EMail Private : [EMAIL PROTECTED] \ send money instead /

------------------------------

From: "E-mu" <[EMAIL PROTECTED]>
Subject: 2.4.4 Kernel, Something Seriously Wrong.  David Hinds, can you read this please too!
Date: 09 May 2001 22:59:15 GMT

I know it's the kernel, because I configured it exactly like 2.4.2.

It compiles fine, but when I run lilo -v it hangs while writing to the boot
sector, and if I run mkinitrd it hangs too.

Another problem: my Adaptec 1480 SlimSCSI bombs out when the kernel boots
up.  I noticed they removed the Adaptec 1480 entry from the SCSI > PCMCIA section.

Is it because it is no longer a separate choice, but rather the drivers are
now part of the PCMCIA (y) choice?

Either way my SlimSCSI AHA-1480 bombs out.  The kernel won't go through a
successful boot with the SCSI CardBus card installed.

I could not catch the error messages on boot up, but there are also looping
error messages that do not stop, and then I get a stack dump at the end.


RH Linux 7.1, up to date patches
Ximian GNOME 1.4, all up to date patches
kernel 2.4.2-2, monolithic, except for the AHA-1480, which only had a choice
of (m) in the kernel config.
Dell Inspiron 7500



------------------------------

From: "Alessandro Bietresato" <[EMAIL PROTECTED]>
Subject: Re: UDF write on MO
Date: Thu, 10 May 2001 01:16:31 +0200


"Massimiliano Caovilla" <[EMAIL PROTECTED]> ha scritto nel messaggio
news:[EMAIL PROTECTED]...
> Hi
> I'm trying to use the UDF filesystem on a MO SCSI drive (/dev/sda):
> I'm currently using the module version 0.9 I downloaded from trylinux,
> but writing doesn't work well: must I debug it myself, or is there
> a better version somewhere?
>
> Ciao

Are you using 2048-bytes-per-sector media (like a 3.5"/640 MByte MO disk)?




------------------------------

From: Billy Bob Jameson <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.misc,comp.os.linux.development.apps
Subject: C++ Shared libraries on Linux - problem, HELP!
Date: Wed, 09 May 2001 23:52:37 GMT



Hi.

For some time now I have been struggling to understand what's wrong with my
way of building a shared library. I have gotten a lot of answers from people,
but still no luck.

Building a shared lib on UNIX is apparently a no-brainer. However, all
my attempts end with "Segmentation fault" immediately after launching
the test program that uses the shared library. So far I have found out
it's not because I use namespaces.

More precisely, gdb displays just:
(gdb) run
Starting program: /work/src/testbin/testbin/.libs/testbin
Program received signal SIGSEGV, Segmentation fault.
0x4000c1b6 in ?? ()
(gdb) bt
#0  0x4000c1b6 in ?? ()
#1  0x40002855 in ?? ()
#2  0x4001048f in ?? ()
#3  0x40002382 in ?? ()
#4  0x400020ae in ?? ()
(gdb)

This is all I get.

If there is anyone here willing to take my sources (two KDevelop
projects), compile them, run them and tell me where I went wrong,
he/she will have my eternal gratitude.

Additional info:
RH 7.0 system, upgraded via RPMs to glibc 2.2.10


TIA
BB



------------------------------

From: [EMAIL PROTECTED] (Alexander Viro)
Subject: Re: why a separate process for each thread on Linux
Date: 9 May 2001 20:50:43 -0400

In article <[EMAIL PROTECTED]>,
Juergen Heinzl <[EMAIL PROTECTED]> wrote:

>>Nice, but... clone() is equivalent to rfork(). And between Linux+*BSD+Plan9
>>and Solaris+other Missed'em'V abortions I'd take the former, thank you
>>very much.
>[-]
>Unix98 does not know of rfork() (which does more than clone()), but
>you're welcome 8)

<shrug> Unix98 describes Unix(tm). Any resemblance to Unix is purely
coincidental. rfork/clone is present in the Research Unix branch, it is
present in BSD and it is present in Linux. The USG branch lacks it. Instead
it introduced a new notion for no good reason, bloated the API, created
a set of complicated rules around it and declared that a standard.
Business as usual...

-- 
"You're one of those condescending Unix computer users!"
"Here's a nickel, kid.  Get yourself a better computer" - Dilbert.

------------------------------

From: Eric Taylor <[EMAIL PROTECTED]>
Subject: Re: background process survive session close?
Date: Thu, 10 May 2001 01:15:48 GMT

Thank you, I knew this was the right place to ask. This is
EXACTLY what I wanted.

thanks again
eric


Greg Copeland wrote:

> A lot of people use screen for this very reason.  In fact, you
> can do cool things like this.  First, run screen on your console.
> Second, start your application.  Third, log out.  Now, go home
> and telnet to the box that is running the application.  Now,
> run screen again, telling it to attach to the instance that you
> previously had running.  Bam...right where you left off.  This is
> very slick.
>
> greg
>
> Eric Taylor <[EMAIL PROTECTED]> writes:
>
> > I know I have done this:
> >
> > program ... &
> >
> > close the session window (or telnet session if remotely connected)
> >
> >
> > and the program _sometimes_ remains running. I can't
> > determine when or what causes this. But, I actually want this
> > behavior, without needing to modify source code.
> >
> > Any ideas what is going on and is there a legit way to run
> > a program or script  in the background and then be
> > able to quit the controlling terminal window w/o killing the
> > background job?
> >
> > thanks
> > eric
> >
> >
>
> --
> Greg Copeland, Principal Consultant
> Copeland Computer Consulting
> --------------------------------------------------
> PGP/GPG Key at http://www.keyserver.net
> DE5E 6F1D 0B51 6758 A5D7  7DFE D785 A386 BD11 4FCD
> --------------------------------------------------


------------------------------

Subject: Re: Linux, streams and the standard library
From: Steve Connet <[EMAIL PROTECTED]>
Date: Thu, 10 May 2001 01:30:50 GMT

[EMAIL PROTECTED] (David Konerding) writes:

> Not entirely true.  STL support may be nearly complete (thanks to
> SGI) but the standard library isn't.  STL is just a part of the
> standard library and ostringstream isn't part of STL (it's part of
> IOStreams). I've been playing with gcc-2.95.3 and gcc-3.0pre (from
> CVS) to test their standard C++ library. It's far from as good as
> STLport.  I get segfaults when using ostringstreams heavily--

Really? That's interesting to know. I am using Red Hat 7.0 which came
with an unstable gcc 2.96 and it won't even compile STLport 4.0. So
I'm stuck dumping my hard drive and reinstalling RH 6.2 and upgrading
to gcc-2.95 and STLport4.0.

Unless, maybe I could install gcc-2.95 and STLport4.0 on my W2K
machine and do builds there? Is that possible?

-- 
Steve Connet            Remove USENET to reply via email
[EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (David Konerding)
Subject: Re: Linux, streams and the standard library
Date: 10 May 2001 01:57:59 GMT
Reply-To: [EMAIL PROTECTED]

On Thu, 10 May 2001 01:30:50 GMT, Steve Connet <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] (David Konerding) writes:
> 
>> Not entirely true.  STL support may be nearly complete (thanks to
>> SGI) but the standard library isn't.  STL is just a part of the
>> standard library and ostringstream isn't part of STL (it's part of
>> IOStreams). I've been playing with gcc-2.95.3 and gcc-3.0pre (from
>> CVS) to test their standard C++ library. It's far from as good as
>> STLport.  I get segfaults when using ostringstreams heavily--
> 
> Really? That's interesting to know. I am using Red Hat 7.0 which came
> with an unstable gcc 2.96 and it won't even compile STLport 4.0. So
> I'm stuck dumping my hard drive and reinstalling RH 6.2 and upgrading
> to gcc-2.95 and STLport4.0.

7.0 is crap.  Install 7.1.  It's pretty good.  I only had to do one thing
to get STLport-4.1b6 (skip STLport-4.0 and get 4.1b6) to compile--
copy the <exception>, <new>, and <typeinfo> headers into a place where
STLport could find them.
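
As for the ostringstream segfaults mentioned above, the code involved is
nothing exotic.  A minimal sketch of the kind of usage that hammers the
library (a made-up example, not the actual test case):

// sketch: repeated ostringstream formatting in a tight loop
#include <sstream>
#include <string>
#include <iostream>

int main()
{
    for (int i = 0; i < 100000; ++i) {
        std::ostringstream os;                    // a fresh stream each pass
        os << "iteration " << i << " value " << (i * 0.5);
        std::string s = os.str();
        if (i % 10000 == 0)
            std::cout << s << '\n';
    }
    return 0;
}

With STLport this sort of loop runs fine; with the bundled library this is
the kind of thing that was falling over for me.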

> 
> Unless, maybe I could install gcc-2.95 and STLport4.0 on my W2K
> machine and do builds there? Is that possible?

In theory, yes, although in W2k you've got lots of options for C++
compilers with standard libraries.  Borland, Metrowerks, Visual C++ ...
the cygwin tools can probably do gcc-2.95.2 and STLport.

Dave

------------------------------

From: [EMAIL PROTECTED] (David Konerding)
Subject: Re: shared DLLs written in C++, and _init(), _fini()
Date: 10 May 2001 02:03:13 GMT
Reply-To: [EMAIL PROTECTED]

On 09 May 2001 17:51:30 -0500, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> 
> I am writing a few shared, dynamic libraries (DLLs) that will be used by
> another program using the dlopen()/dlsym() calls.
> 
> The shared DLLs are being written in C++, and what I am trying to do is
> to have some initialization code put into the _init() method that gets
> called when the DLL is loaded via dlopen(), and some cleanup code put
> into _fini() when the DLL is unloaded via dlclose().
> 
> The problem is that whenever I add the _init() and _fini() methods to
> my DLL's C++ source file that contains the symbols that will be accessed
> via dlsym(), I get the following link error:
> 
> ModuleMain.o: In function `__malloc_alloc_template<0>::deallocate(void *, unsigned int)':
> /usr/lib/gcc-lib/i386-linux/2.95.4/../../../../include/g++-3/stl_alloc.h(.text+0x0): multiple definition of `_init'
> /usr/lib/crti.o(.init+0x0): first defined here
> /usr/bin/ld: cannot find -ldlo
> collect2: ld returned 1 exit status
> make: *** [libFilter1.so] Error 1
> 
> So it seems that _init() is already defined in the libc6 runtime.
> Now, if I compile my DLL passing the "-nostdlib" flag to g++ at link
> time, I don't get this problem and the DLL's source file containing _init()
> and _fini() is linked in without any apparent problems.
> 
> But what happens then is that any static/global objects defined in my
> DLL aren't initialized! It seems that the C++ runtime's static
> object initializer function isn't called.

If everything is compiled and linked properly, your constructors should
be called automatically... you do not need to override _init or _fini.

Are you trying to link using "ld"?  That might not work properly-- when
linking C++ code you should normally use g++ as the linker instead of ld
(although ld might do the same thing-- depends on the implementation).
You do want to link with the standard libraries (i.e. don't use -nostdlib);
otherwise, very bad things may happen.
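
A minimal sketch of the usual pattern (the file and symbol names here are
made up for illustration) is to let a static object's constructor and
destructor do the setup and teardown instead of defining _init()/_fini():

// filter.cpp -- a hypothetical module meant to be loaded with dlopen()
// build sketch:  g++ -fPIC -shared -o libfilter.so filter.cpp
#include <cstdio>

namespace {
    // Constructed when the library is loaded (dlopen), destroyed when it
    // is unloaded (dlclose) -- provided the library was linked through g++
    // with the standard startup files, so no hand-written _init()/_fini().
    struct ModuleState {
        ModuleState()  { std::printf("libfilter: initialized\n"); }
        ~ModuleState() { std::printf("libfilter: cleaned up\n"); }
    };
    ModuleState state;
}

// The symbol the host program looks up with dlsym().
extern "C" int filter_run(int x)
{
    return x * 2;
}

The host program links with -ldl, does dlopen("./libfilter.so", RTLD_NOW),
dlsym(handle, "filter_run"), and eventually dlclose(handle); the constructor
and destructor above fire on load and unload respectively.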

------------------------------

From: [EMAIL PROTECTED] (Kaz Kylheku)
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Multiple processes using the curses library
Reply-To: [EMAIL PROTECTED]
Date: Thu, 10 May 2001 02:36:30 GMT

On Wed, 09 May 2001 16:14:25 +0100, Alex Brown (bee3_00)
<[EMAIL PROTECTED]> wrote:
>At the moment I am having difficulty in getting the two processes to share the
>terminal screen despite having non-overlapping subwindows.

That's probably because they have no concept of sharing. Both will write
to the terminal device, and their writes will get interleaved.

Your terminal is not a direct access array of characters, but rather a serial
output device whose cursor positioning is controlled by escape sequences. 
The device knows nothing about subwindows; they are just an illusion created
by sending the right sequence of characters.

If these sequences get mixed up from two sources, you will get a garbled 
screen.
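
To make the escape-sequence point concrete, here is a minimal sketch (plain
ANSI/VT100 cursor addressing, nothing curses-specific).  If two processes
each emit output like this at the same time, the terminal just executes the
bytes in whatever order they arrive, subwindows or not:

// sketch: what a "window" boils down to on the wire
#include <cstdio>

int main()
{
    // ESC [ row ; col H  -- cursor position (CUP), then ordinary text
    std::printf("\033[10;20HHello from process A");
    std::fflush(stdout);
    return 0;
}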

------------------------------

Crossposted-To: comp.os.linux.development.apps
Subject: Re: How to get a number of processors
From: [EMAIL PROTECTED] (Eric P. McCoy)
Date: 10 May 2001 01:28:37 -0400

John Beardmore <[EMAIL PROTECTED]> writes:

> >  While it's certainly possible for an app to figure out
> >what other apps are doing, it's bad for it to base its behavior on any
> >of that information because it has _no idea_ what the other app is
> >doing.

> Pardon ?

Wow.  That makes no sense to me, either.

My best guess is that I meant to say: while app A may be able to tell how
much of the system's resources app B is using, app A can't make any
predictions about how long those resources are going to be in use, what
patterns they're going to follow, and so on.  The kernel is still limited
in this respect, but it has a better chance of guessing, assuming some
heinously complicated logic, based on the syscalls being used.

In fact, it might not be that tough, I just haven't thought about it
enough to have a solid guess.

> >If you can guarantee that your Magic Parallelizing App is going to be
> >the only process running at any given time,

> Well, the only process requiring significant resource.

> > then you can damn well
> >guarantee you'll have an operator smart enough to know how many CPUs
> >are in the machine (and how to tell your app about it).

> So anybody writing // apps can (has to!) afford a better class of
> operator ??

If they can afford a dedicated computer, they can afford to pay
someone $10 to explain to a group of people how to configure the
program properly.  I'm talking just about large bunches of computers,
here; for small numbers (say, <10) the programmer/admin can do it
himself.

> >  If you can't
> >guarantee that, then you need to leave process management up to the
> >kernel, which can understand the situation far better than any
> >individual app.

> If you know the circumstances in which the app will be run, why not
> give it an indication of the number of CPUs ?  Why give the operator
> something else they can get wrong ?

Because they won't.  If necessary, print up a bunch of labels and
stick them on the front of the computers.  Still not a really great
solution, but at least it's no longer a platform-dependent one.
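
(For reference, the platform-dependent route being avoided here is short on
Linux; a minimal sketch, assuming a glibc that exposes the _SC_NPROCESSORS_*
constants:)

// sketch: ask glibc how many processors are configured/online
#include <unistd.h>
#include <cstdio>

int main()
{
    long configured = sysconf(_SC_NPROCESSORS_CONF);  // CPUs configured
    long online     = sysconf(_SC_NPROCESSORS_ONLN);  // CPUs currently online
    std::printf("configured: %ld, online: %ld\n", configured, online);
    return 0;
}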

> >Seems like what you're _really_ advocating is some powerful process
> >management stuff in the kernel.

> No.  That would be the good long term solution.  What I'm advocating,
> or at least endorsing is a horrible bodge to give a better behaviour
> in a limited set of circumstances.

Ah, so we agree it's a bad thing.  I'll grant you it may sometimes be
necessary under certain conditions; my only original complaint was
that it seemed like a hack.

If we agree on that, we're settled.  I hope we meet up a year or so
down the road rewriting the Linux scheduler, or something.  Although
if you see my name in the project, you should probably tell Linus to
come over and personally kick my ass, as I have no idea how to even
begin programming a scheduler.

-- 
Eric McCoy <[EMAIL PROTECTED]>
  "Knowing that a lot of people across the world with Geocities sites
absolutely despise me is about the only thing that can add a positive
spin to this situation."  - Something Awful, 1/11/2001

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: why a separate process for each thread on Linux
Date: 10 May 2001 05:03:37 GMT

On Wed, 09 May 2001 19:28:59 GMT Eric Taylor <[EMAIL PROTECTED]> wrote:

|> In article <[EMAIL PROTECTED]>,
|> Eric Taylor  <[EMAIL PROTECTED]> wrote:
|>
|> >I see there is a way to share pids? Is that still un-implemented or
|> >have things changed?
|>
|> Hell knows. What for? To emulate Solaris idiocy?
|
| I was wondering what was intended by that feature and whether
| any further progress had been made.
|
| No matter how idiotic Solaris might be, it is worthwhile to be
| able to have their programs port to linux w/o having to change
| source code.

No matter how idiotic Linux might be, it is worthwhile to be
able to have their programs port to solaris w/o having to change
source code.


| Suppose I have a program that has a  local array (on the stack) and
| I want to send a pointer to it to one or more of my threads. Will
| that work with clone? And even if you think that is stupid, suppose
| I have to port a program that does it and I don't have time  to modify it.

That would mean each thread would have to have a stack allocation
within the address space to grow in, and this would all have to
come from a subset of the one address space all threads run in.
If you are allocating arrays within the stack space (e.g. alloca()),
then your stack space needs could potentially be quite high.  The
allocation of stack space itself will have to accommodate that,
perhaps with no advance knowledge of the space needs.

Your design should identify what data you do need to share
between tasks on a pointer basis, and consider a more controlled
way to allocate it.  If you can use malloc() and get it in the
heap, then at least you won't have to set aside large stack
spaces.  If you can set up a shared memory segment or map, then
even separate processes can do this.
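
A minimal sketch of that approach (Linux-specific clone() as discussed
upthread; error handling omitted and sizes made up): the shared value lives
on the heap rather than in the parent's stack frame, and the child gets its
own malloc()ed stack.

// sketch: share a heap buffer with a clone()d thread
// build: g++ clone_sketch.cpp   (g++ defines _GNU_SOURCE, which clone() wants)
#include <sched.h>      // clone(), CLONE_* flags
#include <sys/wait.h>
#include <csignal>      // SIGCHLD
#include <cstdio>
#include <cstdlib>

static int worker(void *arg)
{
    int *shared = static_cast<int *>(arg);  // heap data, same memory in both tasks
    *shared = 42;
    return 0;
}

int main()
{
    const std::size_t stack_size = 64 * 1024;
    int  *shared = static_cast<int *>(std::malloc(sizeof(int)));
    char *stack  = static_cast<char *>(std::malloc(stack_size));

    // clone() takes the *top* of the child's stack on x86 (stacks grow down).
    int pid = clone(worker, stack + stack_size,
                    CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, shared);
    waitpid(pid, 0, 0);

    std::printf("child wrote %d\n", *shared);
    std::free(stack);
    std::free(shared);
    return 0;
}

If the tasks were fully separate processes instead, a shared mapping
(e.g. mmap() with MAP_SHARED) would play the role of the heap buffer here.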

-- 
=================================================================
| Phil Howard - KA9WGN |   Dallas   | http://linuxhomepage.com/ |
| [EMAIL PROTECTED] | Texas, USA | http://phil.ipal.org/     |
=================================================================

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to the
comp.os.linux.development.system newsgroup.

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
