Linux-Development-Sys Digest #70, Volume #7      Thu, 19 Aug 99 02:14:15 EDT

Contents:
  Re: threads (Karl Heyes)
  Re: C++ templates:  More than Turing Complete? (David Schwartz)
  Re: C++ templates:  More than Turing Complete? (Davin McCall)
  Re: Linux file-size limit? (Christopher Browne)
  Re: most efficient way to zero out a partition? (Christopher Browne)
  Re: Why so inefficient source RPM's ?? (David Fox)
  Re: most efficient way to zero out a partition? ("Charles Sullivan")
  Re: Network stack rewrite (David Schwartz)
  Re: why not C++? (Tristan Wibberley)
  Re: Network stack rewrite (Bill Pitz)
  CONFIGURE KERNEL VARIABLES (Lijun Wang)
  Re: most efficient way to zero out a partition? (Christopher B. Browne)
  Re: most efficient way to zero out a partition? (Kaz Kylheku)

----------------------------------------------------------------------------

From: Karl Heyes <[EMAIL PROTECTED]>
Subject: Re: threads
Date: Thu, 12 Aug 1999 13:25:02 +0100



Bill Burris wrote:

> How are threads implemented on Linux?  Are they part of the kernel or are
> they a user level library?
>

There are kernel threads (e.g. kmod, kflushd, etc.), but what I suspect
you're asking about is threads within user processes.  Linux has the
clone system call, which the glibc/linuxthreads package uses to implement
threads as processes that show up in the ps list.  There are also packages
available that implement a pseudo (user-level) threading structure within
a single process, which does not show up under ps, but you can ignore those.


>
> If they are implemented at the user level, what happens when a thread blocks
> on I/O?  Do the other threads in the process still run, or is the complete
> process suspended?

With either kind of package, a blocked thread should not block the other
threads, unless they are waiting for the blocked thread.
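
To make that concrete, here is a rough, untested sketch using the pthreads
interface that glibc/linuxthreads provides (compile with -lpthread; the loop
count and messages are arbitrary).  The reader thread blocks inside read()
while the main thread keeps getting scheduled:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Runs in its own thread; read() blocks in the kernel until input
   arrives on stdin, but only this thread is suspended. */
static void *reader(void *arg)
{
        char buf[64];
        (void) arg;                              /* unused */
        ssize_t n = read(0, buf, sizeof buf);
        printf("reader woke up with %ld bytes\n", (long) n);
        return 0;
}

int main()
{
        pthread_t tid;
        pthread_create(&tid, 0, reader, 0);      /* a clone()d process under ps */

        for (int i = 0; i < 5; i++) {            /* keeps running while reader blocks */
                printf("main thread still running\n");
                sleep(1);
        }
        pthread_join(tid, 0);
        return 0;
}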

karl.


------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.networking
Subject: Re: C++ templates:  More than Turing Complete?
Date: Wed, 18 Aug 1999 18:01:19 -0700


> In fact, given any script which terminates (in any scripting language)
> it should be at least *possible* to produce an equivalent assembly
> program (or equivalent C program or whatever) of finite length and
> complexity, should it not?

        This question only seems interesting because you are not being precise
in your use of the word 'equivalent'.

        DS

------------------------------

From: [EMAIL PROTECTED] (Davin McCall)
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.networking
Subject: Re: C++ templates:  More than Turing Complete?
Date: Thu, 19 Aug 1999 02:18:03 GMT

Functionally equivalent is certainly possible - even if your assembly
is just a script interpreter with the script built in.  But I would go
one step further and say that it ought to be possible to write an
algorithmically identical (or algorithmically very similar) program.
Situations that the compiler couldn't handle might have to be handled
explicitly, but I don't think that would be too big a problem.

Incidentally, if a C++ compiler couldn't compile the example given to
demonstrate "why not all C++ programs can be compiled to finite
assembly code" (although some that can't could be run as a script), it
is a problem with the compiler design rather than any logical
impossibility.

eg

==== begin ====

template <class T> void func(T x, int i)
{
        if( i > 0 ) func(&x, i - 1);
}

int main()
{
        int i;
        func(i, 10);
}

==== end ====

This ought to be compilable, as it is algorithmically identical to the
following C program:

==== begin ====

void func2(void *x, int i)
{
        if( i > 0 ) func2((void *)&x, i - 1);
}

void func1(int x, int i)
{
        if( i > 0 ) func2((void *)&x, i - 1);
}

int main()
{
        int i;
        func1(i, 10);
}

==== end ====


The key is that func2 sufficiently handles all the cases where 'x' is a
pointer.  Since it never dereferences the pointer, an int * and an
int ****** can be treated the same way.

Davin.


On Wed, 18 Aug 1999 18:01:19 -0700, David Schwartz
<[EMAIL PROTECTED]> wrote:

>
>> In fact, given any script which terminates (in any scripting language)
>> it should be at least *possible* to produce an equivalent assembly
>> program (or equivalent C program or whatever) of finite length and
>> complexity, should it not?
>
>       This question only seems interesting because you are not being precise
>in your use of the word 'equivalent'.
>
>       DS

__________________________________________________________
       *** davmac - sharkin'!! [EMAIL PROTECTED] ***
my programming page: http://yoyo.cc.monash.edu.au/~davmac/

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.os.linux.hardware,comp.os.linux.misc
Subject: Re: Linux file-size limit?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 19 Aug 1999 01:45:07 GMT

On Wed, 18 Aug 1999 10:36:40 -0400, Ted Pavlic <[EMAIL PROTECTED]> wrote:
>I really should read the rest of the thread because I'm sure someone has
>already explained this.
>
>On a 32-bit file system, the biggest file you can have is 2^31 bytes.
>(2147483648 bytes) That's the largest number that the file system can
>address. This limitation isn't specific to ext2. It's the same with any
>other 32-bit file system. (NTFS, for example) There's no getting around
>it... until you have a 64-bit file system. :)
>
>I apologize if someone already has mentioned this and now I'm just wasting
>time.

This would be relevant if we were talking about filesystems that
didn't support >32 bit file sizes.

ext2, in particular, can support file sizes up to 1T, since file data is
addressed through indirect blocks.

The problem is that:
a) You can't usefully NFS mount that file, because NFS only allows
file sizes up to 2^31.

b) You can't read all of it using the standard C file manipulation
functions on 32-bit architectures, because the stdio offsets (long and
off_t are 32 bits there) only allow addressing the first 2^31 bytes of
the file.
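
A quick way to see where the limit comes from (a rough, untested sketch;
"bigfile" is just a stand-in name for a >2GB file): fseek() and ftell()
traffic in long, so no offset beyond LONG_MAX = 2^31 - 1 can even be
expressed:

#include <stdio.h>
#include <limits.h>

int main()
{
        FILE *f = fopen("bigfile", "rb");       /* stand-in for a huge file */
        if (!f)
                return 1;

        /* fseek takes a long offset, so 2^31 - 1 is as far as it can go */
        if (fseek(f, LONG_MAX, SEEK_SET) != 0)
                perror("fseek");

        printf("ftell() reports %ld (a long, so never more than %ld)\n",
               ftell(f), (long) LONG_MAX);
        fclose(f);
        return 0;
}
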
-- 
There's a new language called C+++.  The only problem is every time
you try to compile your modem disconnects.
[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/linuxkernel.html>

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Subject: Re: most efficient way to zero out a partition?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 19 Aug 1999 01:45:13 GMT

On Wed, 18 Aug 1999 20:43:22 GMT, Juergen Heinzl
<[EMAIL PROTECTED]> wrote: 
>In article <[EMAIL PROTECTED]>, Ronald Cole wrote:
>>I feel the need to zero out my /dev/sda10.  It's a bit over 1-gig
>>and so:
>>
>>      dd if=/dev/zero of=/dev/sda10
>>
>>takes a *long* time.  How would I figure out the optimal block size
>>to get the job done in the shortest amount of time without resorting
>>to trial-and-error?
>
>I'd do a low level format if that is good enough and your controller
>supports it, else I'd use dd. Given the time this posting required
>it would be ready already 8)

Alternatively, just pick a value arbitrarily.

I'd go with 
   dd if=/dev/zero of=/dev/sda10 bs=64k
which will chew up 65536 bytes at a shot.  

It's small enough that it's not liable to have any ill effects on
memory consumption.  It's large enough that there shouldn't be nearly
as much overhead as with the 512 bytes that I suspect it defaults to.
-- 
Hit the philistines three times over the head with the Elisp reference manual.
-- Michael A. Petonic
[EMAIL PROTECTED] <http://www.ntlug.org/~cbbrowne/lsf.html>


------------------------------

From: d s f o x @ c o g s c i . u c s d . e d u (David Fox)
Crossposted-To: linux.redhat.misc,linux.redhat.rpm
Subject: Re: Why so inefficient source RPM's ??
Date: 18 Aug 1999 20:29:17 -0700

Johan Kullstam <[EMAIL PROTECTED]> writes:

> > 
> > Complain to the maintainer of the app. 
> > RPM is fully capable of these.
> 
> it's not.
> 
> when you build from a spec, rpm makes binary rpms and source rpms.
> it does not make a package containing spec file and patches *without*
> the main source tarball.
> 
> > It is up to whomever maintains the rpms (not always the same as whomever
> > maintains the app) to distribute these.
> 
> i want a no-source rpm.  i want the source-less rpms to be found on
> ftp sites.  the application maintainer cannot change this situation
> one way or another.  (unless you are talking about the maintainer of
> the rpm application only.)
> 
> i really like sources who are kind enough to include an
> rpm spec file within the tarball.  this, an application maintainer,
> can do.

The RPM system is quite capable of creating no-source RPMS, but it is
extra work for the application maintainer.  What you really want is a
way to remotely extract and retrieve the spec file from a regular RPM,
and then to extract and retrieve individual sources and patches.  You
could then build a tool to update a local source RPM from a remote
source RPM.
-- 
David Fox           http://hci.ucsd.edu/dsf             xoF divaD
UCSD HCI Lab                                         baL ICH DSCU

------------------------------

From: "Charles Sullivan" <[EMAIL PROTECTED]>
Subject: Re: most efficient way to zero out a partition?
Date: Wed, 18 Aug 1999 22:48:08 -0400


Eric Hegstrom wrote in message <[EMAIL PROTECTED]>...
>Speaking of wasting bandwith ....
>
>Ronald Cole wrote:
>
>> If you don't know the answer, then please don't waste everybody's time
>> and bandwidth.
>
>
>Ok, so now I'm guilty too. 


I'd be interested in finding out how long it actually does
take, even with the default blocksize - 1 min, 10 min, an hour?



------------------------------

From: David Schwartz <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.networking
Subject: Re: Network stack rewrite
Date: Wed, 18 Aug 1999 19:47:40 -0700


        Maybe it's just me, but this seems an awful lot like, "I want to
replace the engine in my car with a larger one -- how do you get the
hood open?"

        DS

Daniel wrote:
> 
> I wish to add some new functions and remove some of the unneeded stuff from
> the Linux com stack.  The problem is that I have no idea where to start.  I
> have entered many IRC chats on the subject and get told "Go to the source!"
> 
> Well, the source is a bit too OTT for me at the moment and I want to gently
> accustom myself to it.
> 
> The question is:
> 
> Where can I get information about the structure of the Linux comstack?  Some
> simple stuff at first and perhaps a few whitepapers or development docs.  I
> will contact the writer of the stack as a last
> move.
> 
> Dan
> 
> Answers here or to [EMAIL PROTECTED]
> 
> Thanks...
> 
> Last time I posted this I was told that the development was undocumented.
> I believe that anything as successful as Linux must have some recording
> process behind it to let all the developers work together.

------------------------------

From: Tristan Wibberley <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.misc
Subject: Re: why not C++?
Date: Wed, 18 Aug 1999 01:54:26 +0100
Reply-To: [EMAIL PROTECTED]

Kaz Kylheku wrote:
> 
> On Mon, 16 Aug 1999 15:39:56 +0100, Tristan Wibberley <[EMAIL PROTECTED]>
> wrote:
> >While I'm not opposing your conclusion that C++ in kernel is bad, I
> >would like to take issue with two of the reasons you present below.
> >
> >Bjorn Reese wrote:
> >>
> >> To add salt to injury, you cannot use exceptions in global static objects
> >> because their constructors are executed before main(), and therefore
> >> cannot catch the exception.
> >
> >You can use exceptions in global static objects. You reserve the space,
> >and construct the objects in main using placement new. This is what you
> >do in C, and what you should do in C++.
> 
> Right, but that is awfully inconvenient. What is the point of having global
> constructors then if you are going to call the constructors from main?


Exactly.  Global constructors are rarely useful; there is not much point
to them (IMHO) except making C++ easier to start learning (and it means
more bugs early on, but you don't have to learn a great deal of new ways
of doing things to get started).
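
For what it's worth, a rough, untested sketch of the reserve-the-space,
placement-new-in-main approach from above (Widget, acquire() and release()
are made up for illustration, and a real version would also take care
over the alignment of the raw storage):

#include <new>
#include <cstdio>

class Widget {
public:
        Widget() {
                if (!acquire())
                        throw "resource acquisition failed";
        }
        ~Widget() { release(); }
private:
        bool acquire() { return true; }    /* stand-in for real set-up work */
        void release() { }
};

/* Raw storage reserved at file scope; nothing is constructed yet. */
static char widget_space[sizeof(Widget)];
static Widget *the_widget = 0;

int main()
{
        try {
                /* Construct the "global" here, where the exception can be caught. */
                the_widget = new (widget_space) Widget;
        } catch (...) {
                std::printf("caught construction failure in main\n");
                return 1;
        }

        /* ... use *the_widget ... */

        the_widget->~Widget();     /* placement new implies explicit destruction */
        return 0;
}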


> If you are going to go to all this trouble, you might as well just not use a
> constructor for doing the initialization. Have a default constructor that
> doesn't do anything heavy such as acquisition of resources. And then have an
> initialize() method which either returns false or throws or whatever. Then you
> don't have to mess around with crap like placement new.


You would normally not construct an object until you have something to
initialise it with; C++ is unforgiving performance-wise if you do.


> I find that in practice it's easier just to dynamically allocate the global
> object. That way you have control about the timing of its construction,
> and can catch exceptions and all that jazz.


Yes, my statement above can be used as part of an argument for not using
global static objects.


> >> I've also found it more difficult to debug C++ than C (especially the
> >> code of other people,) because (operator) overloading can make otherwise
> >> innocent looking code do much more than you had expected.
> >
> >This is the same as using c_integer_shift( myint, myshift ). It
> >shouldn't do anything more than shift the integer by the specified
> >amount, but it might.
> 
> Funny you should say that, given that the shift operators are used for
> performing input and output in C++.


Yes, this is dumb, extremely dumb - it has made lots of people misuse
them because they see it in the standard. I never liked that at all.
Fortunately, they are just for simple hacked together code and you don't
have to demean yourself by using them :)


> >because people who are that dumb are not usually capable of writing a
> >kernel ;)
> 
> Who wrote MacOS or Windows 98 then? ;)

Sorry, let me correct myself "people who are that dumb are not usually
capable of writing a *good* kernel" ;)

-- 
Tristan Wibberley

------------------------------

From: Bill Pitz <[EMAIL PROTECTED]>
Subject: Re: Network stack rewrite
Crossposted-To: comp.os.linux.networking
Date: Thu, 19 Aug 1999 03:16:50 GMT

In comp.os.linux.networking David Schwartz <[EMAIL PROTECTED]> wrote:

>       Maybe it's just me, but this seems an awful lot like, "I want to
> replace the engine in my car with a larger one -- how do you get the
> hood open?"

Sort of.  But in reality, if you want to re-write the network stack, you
should already have enough knowledge of C, Linux, TCP, and Networking that
the only questions you should have to ask in the Linux newsgroups would be
individual little things.

You've got a big job ahead of you if you actually think you're going to be
able to re-write the entire network stack and keep most existing functionality.

-Bill
-- 
Bill Pitz                                             bill at svn.net
Silicon Valley North, Inc.                                www.svn.net
Internet and World Wide Web Services                   (707) 781-9999

------------------------------

From: Lijun Wang <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.admin,comp.os.linux.help,comp.os.linux.misc
Subject: CONFIGURE KERNEL VARIABLES
Date: Thu, 19 Aug 1999 00:03:33 -0400

Can anyone tell me how to configure the Linux
kernel variables? When I tried to install Oracle
for Linux, it required me to set up such kernel
variables as: SHMMAX, SHMMIN, SHMSEG...

I am new to Linux, so please help me out.

Thanks in advance!

Lijun


------------------------------

From: [EMAIL PROTECTED] (Christopher B. Browne)
Subject: Re: most efficient way to zero out a partition?
Reply-To: [EMAIL PROTECTED]
Date: Thu, 19 Aug 1999 04:54:19 GMT

On 18 Aug 1999 20:01:52 -0400, Doug DeJulio <[EMAIL PROTECTED]> posted:
>In article <[EMAIL PROTECTED]>,
>Ronald Cole  <[EMAIL PROTECTED]> wrote:
>>I feel the need to zero out my /dev/sda10.  It's a bit over 1-gig
>>and so:
>>
>>      dd if=/dev/zero of=/dev/sda10
>>
>>takes a *long* time.  How would I figure out the optimal block size
>>to get the job done in the shortest amount of time without resorting
>>to trial-and-error?
>
>I find that using a block size that's too big doesn't cause any
>trouble.  For example, when I write disk images to floppies I always
>set the block size to the full length of the data.  So, use a
>relatively huge block size, like 10 megabytes, and things should be
>quick.

This reaches the point of diminishing returns quite quickly.

I'm not 100% certain of what the "default" block size is, but 512
bytes comes to mind.

If that's the number then a 100MB partition will involve, roughly
speaking, 100M/512 system calls, which is about 200000.  In effect,
that's 200000 context switches.

Increasing the block size by a factor of 10, to 5120 bytes, will cut
the number of system calls/context switches by 90%.

Increasing the block size by another factor of 10, to 50K, cuts the
remaining system calls by another 90%, but relative to the original
total that is only a further reduction of about 9%.

The next increase, from 50K to 500K, shaves off only about another 1%
of the original number of calls, and the move to 5MB only about
another 0.1%.

In other words, the first few increases are highly worthwhile, and the
later ones are not.
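
If you want to see the effect for yourself, here is a rough, untested
sketch of the write loop that dd effectively performs (the usage line,
device name and default block size are made up); the number of write()
calls is simply the amount of data divided by the block size:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        /* usage: zero <device-or-file> [blocksize], e.g.  zero /dev/sda10 65536 */
        size_t bs = (argc > 2) ? (size_t) atol(argv[2]) : 64 * 1024;
        long calls = 0;

        if (argc < 2) {
                fprintf(stderr, "usage: zero <device-or-file> [blocksize]\n");
                return 1;
        }
        int fd = open(argv[1], O_WRONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        char *buf = (char *) malloc(bs);
        memset(buf, 0, bs);

        /* one system call per block; stops on a short write or an error */
        while (write(fd, buf, bs) == (ssize_t) bs)
                calls++;

        fprintf(stderr, "%ld full write() calls of %lu bytes each\n",
                calls, (unsigned long) bs);
        free(buf);
        close(fd);
        return 0;
}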

Personally, I'd assume 64K to be "good enough" in a whole lot of
cases:
- It's *vastly* better than 512 bytes, or even 1K, and yet,
- It doesn't consume a whole lot of memory.
-- 
--Kill Running Inferiors--
[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/lsf.html>

------------------------------

From: [EMAIL PROTECTED] (Kaz Kylheku)
Subject: Re: most efficient way to zero out a partition?
Date: Thu, 19 Aug 1999 05:14:03 GMT

On 18 Aug 1999 20:01:52 -0400, Doug DeJulio <[EMAIL PROTECTED]> wrote:
>In article <[EMAIL PROTECTED]>,
>Ronald Cole  <[EMAIL PROTECTED]> wrote:
>>I feel the need to zero out my /dev/sda10.  It's a bit over 1-gig
>>and so:
>>
>>      dd if=/dev/zero of=/dev/sda10
>>
>>takes a *long* time.  How would I figure out the optimal block size
>>to get the job done in the shortest amount of time without resorting
>>to trial-and-error?
>
>I find that using a block size that's too big doesn't cause any
>trouble.  For example, when I write disk images to floppies I always
>set the block size to the full length of the data.  So, use a
>relatively huge block size, like 10 megabytes, and things should be
>quick.

When I write floppies, I don't bother with dd, because cp does the job just
fine. Unlike dd, the cp and cat programs do not open their output file with the
O_SYNC flag (in the case of cat, it is the shell that opens the file and passes
the descriptor as the standard output to cat, of course).  The absence of
O_SYNC means that your kernel buffers the data as it sees fit and flushes it at
its own convenience. This is handy when writing floppies because the program
just dumps the job to memory and finishes before the write is committed; you
don't have to mess around with job control.
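
To illustrate the difference (a rough, untested sketch; the device name and
the "sync" argument convention are made up): with O_SYNC each write() blocks
until the data has actually been written out, without it the call returns as
soon as the kernel has buffered the data:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        int flags = O_WRONLY;

        /* pass "sync" as the second argument to get O_SYNC behaviour */
        if (argc > 2 && argv[2][0] == 's')
                flags |= O_SYNC;

        int fd = open(argc > 1 ? argv[1] : "/dev/fd0", flags);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        char buf[512] = { 0 };
        ssize_t n = write(fd, buf, sizeof buf);  /* returns at once unless O_SYNC */
        printf("write() returned %ld\n", (long) n);

        close(fd);
        return 0;
}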

It's a misconception that dd is needed for manipulating block devices.  You
need dd for *character* devices that require the read() and write() system
calls to not exceed a particular size. It's also handy for creating files of a
specified size, and for extracting from, or writing to, arbitrary regions of
files, and performing a few oddball conversions like EBCDIC <-> ASCII.

But for straight copying, you don't need it. Even the above job of blanking
/dev/sda10 could just as well be done with

        cat /dev/zero > /dev/sda10

or

        cp /dev/zero /dev/sda10

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
