Linux-Development-System Digest #881, Volume #6     Fri, 25 Jun 99 03:14:41 EDT

Contents:
  Re: TAO: the ultimate OS (Terry Murphy)
  Re: using C++ for linux device drivers (Mario Klebsch)
  Re: Why we are still holding on to X Windows ("Stefan Monnier " 
<[EMAIL PROTECTED]>)
  Re: vesafb for S3 868? ("JAGAT P BRAHMA")
  Re: TAO: the ultimate OS (Terry Murphy)
  Re: libc and a released product. (Andreas Jaeger)
  Re: Configuration as Database (was Re: TAO: the ultimate OS) (Christopher B. Browne)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Terry Murphy)
Crossposted-To: alt.os.linux,comp.os.linux.advocacy,comp.os.misc,comp.unix.advocacy
Subject: Re: TAO: the ultimate OS
Date: 25 Jun 1999 03:44:41 GMT

In article <[EMAIL PROTECTED]>,
Frank Sweetser  <[EMAIL PROTECTED]> wrote:

> - who says that the design and code->debug->code process can't happen
>   somewhat in parallel?

I'm glad you brought this up. Research from IBM demonstrates that the
earlier defects are found in a product, the easier they are to fix.
They published a table showing the cost of fixing a defect as a function
of where the defect was introduced and where it was found. A defect
introduced in requirements but not found until coding was complete was
50 to 200 times more costly to fix than one caught at requirements time.

When you work on two such distant phases of the product in parallel,
you will inevitably introduce defects into the system at coding time
that should have been found at design/requirements time (i.e. if you
bothered to do it), and those defects will cost you somewhere in the
neighborhood of the figure quoted above.

I have actually attempted this approach for one of my projects, and
found that a great deal of the code I wrote was wasted. I have since
started over and have a much better idea of where I'm going (but no
code yet ;-)

> - who says that once you code something up, you can't decide that it
>   sucked and throw all/some away, and start a new design with more
>   experience behind it?

For what it's worth, the formal software process does include a 
prototype. But the real point of a design document is to really 
give it a good beating, not just read it passively. These kinds of
defects are supposed to be found earlier.

-- Terry

------------------------------

From: Mario Klebsch <[EMAIL PROTECTED]>
Subject: Re: using C++ for linux device drivers
Date: Thu, 24 Jun 1999 20:16:06 +0200

Justin Vallon <[EMAIL PROTECTED]> writes:

>Mario Klebsch <Mario [EMAIL PROTECTED]> writes:

>> Justin Vallon <[EMAIL PROTECTED]> writes:
>> >These are not C++-ish.  There are completely reasonable
>> >implementations of ::operator new and ::operator delete in terms of
>> >the C memory allocator.  Why do you require anything more complicated?
>> >If malloc doesn't do a good job, then malloc should be re-written.
>> 
>> Hey, we are in comp.os.linux.development.system!!!
>>                                          ^^^^^^
>> 
>> And we are talking about kernel code. AFAIK, there is no malloc in
>> the kernel. So you have to do something yourself.

>From what I understand, it would be to use kmalloc.

Yes, and how do you tell the compiler to use kmalloc instead of malloc?
Having ::operator new and ::operator delete seems like the right tool
to me.

73, Mario
--
Mario Klebsch           [EMAIL PROTECTED]

------------------------------

From: "Stefan Monnier <[EMAIL PROTECTED]>" 
<[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.x
Subject: Re: Why we are still holding on to X Windows
Date: 25 Jun 1999 00:22:00 -0400

>> Care to mention those `design flaws' ?
>> To me `design flaw' and `to be worked out' are mostly mutually exclusive.
> Why? It is sufficiently modular for the missing bits to be added. These
> are still (IMHO) design flaws.

That's not my understanding of `design flaw'.
I usually call that an `incomplete implementation'.  If the design is
flexible enough to add the missing bits easily, then it is anything *except*
a design flaw.

>>> which I cannot live without, such as a single-click to open
>>> files/directories,
>> You're talking major design flaw here !
> True enough, at least according to most of the fifty or so users I've
> asked to trial KDE and GNOME. The near-universal opinion has been "We love
> KDE, GNOME is cool but inconsistent and generally harder to use than KDE".

This is not a GNOME design flaw, this is a shortcoming of the file-manager.
It's like saying that a car is fundamentally flawed because the steering
wheel is too fat.


        Stefan

------------------------------

From: "JAGAT P BRAHMA" <[EMAIL PROTECTED]>
Subject: Re: vesafb for S3 868?
Date: Fri, 25 Jun 1999 00:44:19 -0700

I am a newbie. Can somebody tell me how to apply that patch to kernel
2.2.9? I have already downloaded the kernel and unpacked it (unzipped
and untarred).

Thank you in advance

JPbrahma

Olav Woelfelschneider <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Modemch <[EMAIL PROTECTED]> wrote:
> M> There's a patch for the vesa-fb code at http://www.colonel-panic.com. It
> M> works with S3 chipsets (worked with my Stealth 64).
>
> Thanks. I'll try this right now.
>
> --
> Olav "Mac" Wölfelschneider                         [EMAIL PROTECTED]
> PGP fingerprint = 06 5F 66 B3  2A AD 7D 2D  B7 19 67 3C  95 A7 9D AF
> AIRPORT: A place where people hurry up and wait.
>



------------------------------

From: [EMAIL PROTECTED] (Terry Murphy)
Crossposted-To: alt.os.linux,comp.os.linux.advocacy,comp.os.misc,comp.unix.advocacy
Subject: Re: TAO: the ultimate OS
Date: 25 Jun 1999 05:20:20 GMT

In article <[EMAIL PROTECTED]>,
void <[EMAIL PROTECTED]> wrote:
>On 24 Jun 1999 03:10:22 GMT, Terry Murphy <[EMAIL PROTECTED]> wrote:
>>
>>Perhaps, but I very rarely see design documents or requirements
>>documents for anything that comes out of the free software community.
>
>Then you're not looking in the right places.  What about soft updates?
>What about vinum, the FreeBSD volume manager?  What about Coda?  

Coda is totally different -- it's a university research project. It
does not at all follow the form of ESR's software development model.
I am not familiar with the others, but they do not sound like large 
products.

>I'm beginning to suspect that you're just stating your prejudices, without
>having done any research whatsoever.

I did acknowledge GNOME as fulfilling my wants, a couple of times. The
Hurd does as well. As for doing research, where am I supposed to look?
Maybe I'm not being resourceful enough, but if I don't see anything on
the website, or in the source directory for the program, I tend to give
up.

>Linux and linux-oriented communities are far from the be-all and end-all
>of open source computing.

Then who is? Linux is far and away the most visible.

>Criminy, you're using Microsoft as a paradigm of software engineering?
>That's truly perverse.

I said "reasonably close". As I said in my other follow-up, most people
think their applications are top notch, as opposed to products where you
just whip out vi and start hacking. Microsoft's development system amazes
me in how fast they are able to turn out high quality, well tested
products.

-- Terry

------------------------------

From: Andreas Jaeger <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps
Subject: Re: libc and a released product.
Date: 25 Jun 1999 07:57:35 +0200

>>>>> Christopher Browne writes:

Christopher> On 23 Jun 1999 13:23:27 -0700, david parsons <[EMAIL PROTECTED]>
Christopher> wrote:
>> What Linux frantically needs is (a) a regression test suite that
>> can be run on new libraries to validate them and (b) to have the
>> major distributions refuse to ship new libraries that don't
>> pass the test suite.

Christopher> Agreed.

Christopher> There may already be something "internal" at Cygnus, whether as part
Christopher> of EGCS, GLIBC, or otherwise; if there were a public test suite, that
Christopher> would be, while not the most sexy thing in the world, a terribly
Christopher> valuable thing to have.

Christopher> The person *vastly* more likely than anyone else to know about this
Christopher> would be Ulrich Drepper.

glibc has a test suite.  Just run `make check'.  When errors get
reported, I try to make a test program out of them and add that test
program to glibc so that we don't make the same error again :-).

The PROJECTS file of glibc mentions:
[ 2] Test compliance with standards.  If you have access to recent
     standards (IEEE, ISO, ANSI, X/Open, ...) and/or test suites you
     could do some checks as the goal is to be compliant with all
     standards if they do not contradict each other.


[ 3] The (IMHO) most important task is to write a more complete
     test suite.  We cannot get too many people working on this.  It is
     not difficult to write a test: find a definition of the function
     (which I can normally provide, if necessary) and start writing
     tests for compliance.  Besides this, take a look at the sources
     and write tests which in total cover as many paths of execution as
     possible.

Feel free to help here.

Christopher> <agenda id="hidden"> 
Christopher> What the world needs is a nearly-automated system where one may submit
Christopher> snippets of C code (possibly not much more than code inside a main())
Christopher> along with expected output, that should probably amount to
Christopher> "pass/fail".

Christopher> The snippets, along with output, would then be compiled (as needed),
Christopher> run, and output compared to expectations.

Christopher> You want to report a bug?  Provide a test case that is expected to
Christopher> output "pass," and which currently outputs "fail," and stick it in the
Christopher> database.
Christopher> </agenda>

Christopher> There is such a scheme for EGCS, and I think there's one for GNUStep.
Christopher> TOG has worked on something vaguely like this for a draft LSB
Christopher> proposal, not that it is necessarily coded up to be run automagically
Christopher> just yet.

Christopher> There almost certainly should be one for GLIBC.

As I've mentioned above, we've got a test suite, but it's not based on
dejagnu and it's not uniform.  It doesn't output pass/fail; the exit
value tells whether a test passes.

Christopher> This is one of those unsexy tasks that it would probably take some
Christopher> financial sponsorship to get people to work on.

Feel free to sponsor somebody ;-).

Andreas
-- 
 Andreas Jaeger   [EMAIL PROTECTED]    [EMAIL PROTECTED]
  for pgp-key finger [EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (Christopher B. Browne)
Crossposted-To:  alt.os.linux,comp.os.linux.advocacy,comp.os.misc,comp.unix.advocacy
Subject: Re: Configuration as Database (was Re: TAO: the ultimate OS)
Reply-To: [EMAIL PROTECTED]
Date: Fri, 25 Jun 1999 04:59:48 GMT

On 25 Jun 1999 03:30:40 GMT, Terry Murphy <[EMAIL PROTECTED]>
posted: 
>In article <[EMAIL PROTECTED]>,
>Christopher B. Browne <[EMAIL PROTECTED]> wrote:
>
>>"No reason"?  I suspect you don't understand the way database systems *or*
>>filesystems are implemented.
>
>I thoroughly understand, at least, how filesystems are implemented.
>About a year and a half ago, I personally coded the whole block
>pointer scheme for a filesystem (i.e. direct blocks, single indirect
>blocks, and double indirect blocks).
>
>>Databases are a big hive of pointers.  If built atop of a filesystem, which
>>is another big hive of pointers, this gives you one big interconnected nest
>>of pointers.  If one block of the FS goes out, your database is corrupt, and
>>the corruption may extend arbitrarily distantly from the deepest level.  
>>
>>In other words, a DB atop a filesystem is not only vulnerable to corruption
>>of the file (e.g. - if a buggy program at user level crashes/writes bad
>>data), it is also further vulnerable to corruption that comes from
>>filesystem bugs, which are somewhat indistinguishable from corruption
>>resulting from hardware failures.
>>
>>In contrast, text files are linear arrays of information atop a single
>>FS-level "hive of pointers." If a block blows out, this removes a chunk of
>>that linear array, which is obviously still a bad thing, but not as vastly
>>corruptive, since there aren't the multifariously-indirected pointers.
>
>I -thoroughly- understand this, and this is exactly what I was talking
>about in my last message when I referred to "metadata". I am somewhat
>taken aback that you wrote all of the above, actually.
>
>You can implement a registry system with no more metadata than a
>text file, if you use fixed-size records and don't have additional
>indexes, and yet still have all of the advantages of a registry
>system.  There can be exactly no more metadata than in a text
>file, and thus it is no more subject to corruption. If you implement a
>full-scale database, yes, there is an additional chance of corruption.
>But you do not need that to gain the advantages of a registry system.

Sadly, that means that you gain very little over the text file.

--> Fixed size blocks means wasted storage compared to text

--> No indexes means no benefit from "better than O(n) expected access time"

Entertainingly enough, that model (aside from the indexing issue) takes us
back towards mainframe processing models where "everything is a block."
Robustness is certainly enhanced, buffering gets easier, and having clearly
separate index structures that aren't forcibly expected to be as robust
(since they can just get regenerated as needed) can bring back performance.

>>And vi includes sorts of functionality that regedit doesn't.  A macro
>>programming system.  Regular expression parser.
>
>I know. I'm not sure what your point is. 
>
>If your point is that VI is more functional than REGEDIT in the event
>of a system crash because it provides macro programming, well, I
>disagree.
>
>If your point is that VI is bloated and people should use EDLIN
>(or ED if using Unix) for this sort of task, well, the typical
>registry hater's argument is: "I can just use VI to fix the system."

No, the point was that comparing the sizes of vi versus regedit is pretty
silly, as they do substantially different things.  Trying to pull them back
to comparability probably requires taking bits of "bloat" out of both.

(Aside: The RISKS digest recently reported an analysis of inaccessible
code and objects in REGEDIT which found that most of the binary
material isn't necessary, and furthermore located things like unused icons
and menus that got defined multiple times by some MS IDE tool that then just
left them in.  If they wrote a "code shaker" like Common Lisp vendors
do, it is entirely possible that most of the bloat in a whole lot of MS
utilities could go away.  Of course, then Samsung, Seagate, and Western
Digital would send in hit men to kill the manager responsible for cutting
down on their sales :-).)

>>It would be rather more fair to compare regedit, weighing in at 71K, with
>>something like t.exe, which weighs in at 1K.
>
>I do not know what t.exe is, but assuming it is a small text editor,
>there is no reason you could not implement a very small registry
>editor, since there are calls to do all of the real work.
>
>At the minimum all you need is code to change a value, see the values,
>and get the keys. These are INHERENTLY simpler operations, and could
>be implemented with less code than it would take to implement a general
>purpose text editor which can perform the operations as efficiently.

The *real* point is that when text is used, one can muster the force of any
of UNIX's text manipulation tools, whereas a binary database leaves you
limited to whatever tools have been specifically written to manipulate the
binary database.

>>You have to compare apples with apples.
>>
>>To compare with Oracle, you have to have a version of Oracle that:
>>
>>a) Removes the lock manager LCK,
>>b) Removes the transaction manager CKPT,
>>c) Removes the archive logging facility (e.g. - LGWR),
>>d) Allows processes to modify database files directly, without locking,
>>rather than having to proxy through such things as DBWR,
>>e) Implements atop files atop DOS FAT, rather than using raw partitions.
>>
>>Oracle doesn't just have *one* thing that works to provide a robust system;
>>it has a whole host of work processes working together.
>
>My problem with this thinking is that you are only referring to the
>Windows implementation of the registry. I am merely advocating
>database configuration systems in general, and there's no reason
>that one couldn't handle this stuff.
>
>>And it is worth noting that they don't use a database to store the
>>system configuration for the database; that is stored in, Surprise,
>>Surprise... 
>>    Text files.
>
>This is a good point, but I suspect it is just to fit in with other
>Unix tools as far as configuration tools. Does Oracle running on VMS or
>Windows also use text files, or just on Unix?

I have not installed it on Windows, but it seems reasonable to expect this.
After all, OCP documentation doesn't indicate UNIX-specificness, and I've
seen the same behaviour on Novell.

It would be pretty dumb to have a separate data parsing system to debug just
to support VMS, which, if it was using a DB, would probably use RDB :-), and
another to support NT, so while I can't confirm behaviour on those
platforms, I'd be real surprised to hear that Oracle used other than text...

>>That is a separate question.
>>
>>I would promote the use of some common configuration libraries, such as
>>libPropList.  It was created particularly for use with WindowMaker, and is
>>compatible with the property list schemes used in NeXTstep, OPENSTEP, and
>>GNUstep.  It sets up a structure that may be read, manipulated, written, and
>>synchronized with a file, and by providing an opaque data structure in
>>between, removes the need for an application to be aware of the form in
>>which the data is represented.  
>>
>>Its implementation uses text files, which nobody (other than those that
>>apparently worship O(1) or O(log(n)) operations that result in building
>>fragile multilayered disk-based pointer hives) seems to complain about.
>>
>>Note that by using facilities like libPropList, this means that:
>>the programmer does *not* need to create:
>>a) Yet Another File Format,
>>b) Yet Another Parser,
>>and debug, and so forth.
>
>I am not familiar with libPropList (one of my favorite features of
>WindowMaker is that it just saves my configuration without me
>having to mess with text files, so I never much bothered to see what
>was going on under the hood), but it sounds like a fine product (from
>what you describe). My problem, of course, with this sort of thing, is
>that it is non-standard, so only a few tools end up using it.

It's interoperable with three separate implementations on three different
platforms, which makes this a particularly peculiar instance of being
"non-standard."

>I hate to sound cynical/close-minded, but the only way to really
>impress me with this sort of tool is for every file/tool on Unix to
>use it. The fact that the registry is used by everything in
>Windows is the whole essence of its advantage.

I don't see a mandate for everyone to reimplement all the programs they've
written just to do that; that approach represents the sort of "fascism" that
the UNIX community (and others) rather frown on.

If a few more programs adopt libPropList this year, that seems to me to be
quite adequate.  (That seems likely, by the way.)

>>Have you profiled it to establish that parsing is forcibly the cause of
>>slowness?
>>
>>Or are you merely claiming this as a bald fact, devoid of any real evidence
>>to back up such claims?
>>
>>The sources are available.  Your mission, if you wish to claim evidence that
>>text parsing is the cause of its slowness, is to compile Linuxconf with
>>suitable profiling options, collect statistics, and tell us how much of the
>>wait times related to text parsing.
>
>OK, that's fair. I'm basing my claim on the fact that the speed is much
>worse than for Windows NT when doing the equivalent operations (e.g. 
>changing the DNS server, or setting up PPP). Anyways, I will not mention
>this issue again until I take you up on your challenge.
>
>>Oh, good.  Citing sendmail.cf as an example of "why you should use a
>>database instead."
>
>OK, I will admit that this was not a good thing to say. 
>
>To be more fair, I will say that a GUI read/write editor of /etc/fstab
>would be MUCH more complex than an editor for the equivalent
>information stored in a binary file. True, it would not be too complex,
>but it has many more worries that the editor of the binary file
>doesn't have: dealing with user errors and preserving exactly what the
>user inputted are two of them.

/etc/fstab is a rather more appropriate example than config for Sendmail.

The significant *loss* that comes from moving to a binary format is
threefold: 
a) I can no longer use RCS to provide a transactional history, with the
ability to compare, and roll back arbitrarily far at will.

That is one of the benefits that having a host of text-oriented tools buys
you.  And this sort of thing can be done pretty transparently.

b) You lose the ability to have manual/"visual" groupings of entries.

c) You lose the ability to *simply* drop in comments that easily apply to
either an entry or a group of entries.

My classic alternative to /etc/hosts is to do the following:

# mkdir /etc/hosts
# echo 192.168.1.1 > /etc/hosts/dantzig.brownes.org
# echo 192.168.1.2 > /etc/hosts/wolfe.brownes.org
# echo 192.168.1.3 > /etc/hosts/knuth.brownes.org
# cd /etc/hosts
# ln dantzig.brownes.org dantzig
# ln wolfe.brownes.org wolfe
# ln knuth.brownes.org knuth
# ln -s dantzig cache
# ln -s wolfe coda

Interesting properties:

- Looking up the IP address of "dantzig" simplifies, from parsing
/etc/hosts, to simply trying to open /etc/hosts/dantzig, and returning the
IP address in that file, if it exists.  (Note the mixture of hard and soft
links.)

- It probably consumes a little more disk space, which rarely matters.

- It is arguably a little harder to back up than the traditional approach,
although not dramatically so.  Tar is still your friend...

- It's more transactional.  You can fiddle with just one entry, with less
risk of accidentally blowing away additional entries.

- The ordering becomes less controllable.  I can't keep dantzig next to
wolfe; they're just keys in a table (e.g. - filenames in a directory).

- Comments have no good place to go.

It's still text, but more DB-like.  Functionality is gained; functionality
is lost.  I doubt we'll be changing over to this scheme tomorrow.

-- 
Those who do not understand Unix are condemned to reinvent it, poorly.  
-- Henry Spencer          <http://www.hex.net/~cbbrowne/lsf.html>
[EMAIL PROTECTED] - "What have you contributed to free software today?..."

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
