Linux-Development-Sys Digest #194, Volume #6     Thu, 31 Dec 98 01:14:25 EST

Contents:
  Re: Linux Registry Stone Bitch to Administer (Robin Becker)
  Re: HELP ME!!! (fiReStaRteR)
  Re: Registry for Linux - Why? (Christopher Browne)
  Re: Linux Registry Stone Bitch to Administer ("Christian Gross")
  Re: Linux Registry Stone Bitch to Administer ("Christian Gross")
  Re: Linux Registry Stone Bitch to Administer ("Christian Gross")
  Re: Linux Registry Stone Bitch to Administer ("Christian Gross")
  Re: Registry for Linux - Why? (Christopher Browne)
  Re: Ethernet/Token Ring and Cabletron Switches (Kazin)
  Re: [Q] SCSI Tape Setup (Michael Peterson)
  resetting kernel IPC parms (ag)
  Re: Registry - Already easily doable, in part. (Christopher Browne)
  Re: "One thing that looks definite is....Mac OS X Server in January..." (Christopher 
Browne)

----------------------------------------------------------------------------

From: Robin Becker <[EMAIL PROTECTED]>
Subject: Re: Linux Registry Stone Bitch to Administer
Date: Wed, 30 Dec 1998 19:04:30 +0000

In article <[EMAIL PROTECTED]>, Kevin Huber
<[EMAIL PROTECTED]> writes
>"Jens" == Jens Kristian S�gaard <[EMAIL PROTECTED]> writes:
>Jens> Well, use Ghost to make an image of the original harddrive. Shouldn't
>Jens> be that hard ;-)
>
>For testing purposes, you can have three partitions, a minimal boot
>one, a clean one, and a test one.  You always boot to the test
>partition until you want a fresh start.  When that happens you boot to
>the minimal partition and then use xcopy (maybe in a command file) to
>copy the entire NT installation over from the clean partition to the
>test partition.  This is what we do at work anyway.  The same idea
>could be used for testing Linux systems, although Unix installs
>usually don't hose the system like Windows installs.
>
>-Kevin
I can hardly see this happening with the zero admin costs that M$ keeps
claiming; at my last job, despite SMS and the best efforts of hundreds of
super lusers, no two machines were ever the same.  Stuff would fall over
because of language DLLs and so on.  We needed 500MB compressed for NT plus
all the nonsense.
-- 
Robin Becker

------------------------------

From: [EMAIL PROTECTED] (fiReStaRteR)
Subject: Re: HELP ME!!!
Date: 31 Dec 98 18:23:10 GMT
Reply-To: [EMAIL PROTECTED]

On Wed, 23 Dec 1998 11:50:50 +0900, midoplan <[EMAIL PROTECTED]> wrote:
>>                Notes for linux release 0.01
>>
>>
>>                0. Contents of this directory
>>
>> linux-0.01.tar.Z   - sources to the kernel
>> bash.Z             - compressed bash binary if you want to test it
>> update.Z           - compressed update binary
>> RELNOTES-0.01      - this file
>
>
>>This is a free minix-like kernel for i386(+) based AT-machines.
>>Full source is included, and this source has been used to
>>produce a running kernel on two different machines.
>>Currently there are no kernel binaries for public viewing,
>>as they have to be recompiled for different machines.
>>You need to compile it with gcc (I use 1.40, don't know
>>if 1.37.1 will handle all __asm__-directives),
>>after having changed the relevant configuration file(s).
>
>i want to rebuild linux 0.0.1 environment
>i can get a 'linux 0.0.1' kernel source, but bash, update,
>gcc compiler & ld and c-runtime library based kernel 0.0.1
>system calls sources used building linux 0.0.1
>
>how to get those? ( or linux 0.2 build environment )

The first Linux kernel was built in a Minix environment.
Try the Minix-related newsgroups to track those pieces down.

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Subject: Re: Registry for Linux - Why?
Date: 31 Dec 1998 02:55:15 GMT
Reply-To: [EMAIL PROTECTED]

On Wed, 30 Dec 1998 10:48:10 GMT, Konstantinos Agouros
<[EMAIL PROTECTED]> wrote: 
>Another thing that's not bad (we realized this on BSDI for a commercial product
>but it could be as well working for Linux) is much easier backup for crash-
>recovery. Consider the following: You install a server from cdrom. After that
>you have to supply it with IP-Addresses and configs for the services that run
>on it. If the initial installation asks for a backup-floppy or tape (but a flop-
>py is enough) that contains one registry and the server has software to generate
>everything out of that it's done and the machine comes up like before.

People have been known to install Red Hat Linux, and build an RPM
package that represents "local site configuration" to handle the local
"stuff" such as network configuration. 

Establishing this sort of information would be fairly readily
accomplished via a centralized registry, which is an attractive feature
thereof. 

In the absence of a registry, two somewhat RPM-oriented techniques could
be used to try to gather up "local config changes":

a) One could use "find" to locate all files modified after the time at
which the system was initially installed.  Those files are likely the
ones that were customized. 

b) Reinstall *all* of the RPM packages that are installed on the system.
Those files that have been "user-customized" will be detected by RPM,
and will be saved off with names somewhat like unto /etc/hosts.rpmsave

Again, a find command can then be used to locate all of the customized
files.

The list of files could be used assortedly (as sketched below) to:
 - cpio the files off into a backup archive, or
 - build an RPM "spec" file so that their configuration could be
   set up as a "standard" configuration.

>Another aspect is consistency-checking. That of course means you need to have 
>some means of entering data (like DNS-entries) that first checks for consistency
>against the other data entered. But there's also cross-references. For example
>(unless you use uucp) sendmail needs a running Nameservice if you want to be a
>mailserver. Of course one can start looking through all config-files 
>indiviudally but having one base of data makes stuff like this much easier...

Agreed.

Linuxconf tries to do some of this; it has a ways to go in terms of
getting more functional and mature; it further needs to get a little
more "transactional" in recording what it does in a way that is
supportive of the "administrator who wants to learn what needs to be
tuned up."

>The point with a registry for Unix-Systems is, that it must be editable by
>a texteditor by someone who knows what he or she is doing.

110% agreed.

-- 
"It's not about 'Where do you want to go today?'"; "It's more like,
'Where am I allowed to go today?'" 
[EMAIL PROTECTED] <http://www.ntlug.org/~cbbrowne/lsf.html>

------------------------------

From: "Christian Gross" <[EMAIL PROTECTED]>
Subject: Re: Linux Registry Stone Bitch to Administer
Date: Thu, 31 Dec 1998 03:49:18 +0100


Kevin Huber wrote in message ...
>For testing purposes, you can have three partitions, a minimal boot
>one, a clean one, and a test one.  You always boot to the test
>partition until you want a fresh start.  When that happens you boot to
>the minimal partition and then use xcopy (maybe in a command file) to
>copy the entire NT installation over from the clean partition to the
>test partition.  This is what we do at work anyway.  The same idea
>could be used for testing Linux systems, although Unix installs
>usually don't hose the system like Windows installs.
>
That is a good idea!!!  I think I will pursue that one.  BTW spoken like a
true Windows user!!!

Christian



------------------------------

From: "Christian Gross" <[EMAIL PROTECTED]>
Subject: Re: Linux Registry Stone Bitch to Administer
Date: Thu, 31 Dec 1998 03:43:50 +0100

Jens Kristian Søgaard wrote in message ...
>Why not just export the whole registry to a textfile (using
>REGEDIT.EXE) before betatesting, and then import the whole damn thing
>again after the testing's finished?


Nice idea, but the problem is that it brings back the problems you had
before.  What you want is the ability to pick out certain parts.  For
example, I want to remove a DLL or set of DLLs.  The registration code for
the COM DLLs needs to be removed.  Then the software settings for the
application need to be removed.  And if you use MS software, then it is
strewn across ten million places in the registry.  Try doing this for two
or three apps and it becomes a painful experience.  Regclean is OK to a
certain degree, but you need nested relationships and dependencies, which
get tricky to follow.  Compare that with the old .ini files: blast the
beast, and when a "could not find file" error comes up, resolve it.  For
Windows programmers the registry is programmatically a black box, and
therefore few of us do it correctly.  Registry code is butt ugly.  This
makes things even more complicated.

Easy answer, FORMAT C:

But thanks for the hope!!!

Christian



------------------------------

From: "Christian Gross" <[EMAIL PROTECTED]>
Subject: Re: Linux Registry Stone Bitch to Administer
Date: Thu, 31 Dec 1998 03:46:14 +0100

Jens Kristian Søgaard wrote in message <[EMAIL PROTECTED]>...
>
>Well, use Ghost to make an image of the original harddrive. Shouldn't
>be that hard ;-)
>
The problem is that you get the old problems back again.  You want to be
able to say, no, I do not want that.  I would compare the registry to the
following:

/etc is where most stuff is kept.  Now imagine saving network information in
eight different files.  Add on top of that per-user dependencies...  (Ahhhh,
run to the hills!!!)

Christian



------------------------------

From: "Christian Gross" <[EMAIL PROTECTED]>
Subject: Re: Linux Registry Stone Bitch to Administer
Date: Thu, 31 Dec 1998 03:54:33 +0100


Taso Hatzi wrote in message <[EMAIL PROTECTED]>...

>I agree entirely. Installing software on WinXX/MsVMS systems scatters
>shit all
>over the place making it impossible to clean up. I want software to
>install
>itself in ONE place so that I can get rid of it when I'm done. If it
>wants to
>install libraries, it should ask me for permission to put them anywhere
>other than in it's own install directory.
>


YEAH!!!!! One place not ten million places.

Or how about the message on the NT setup

"Would you like to repair you NT installation"

Yes

Once the repair has "cleaned the machine", try cleaning up your "Program
Files" directory!!!!  You cannot, because all of the files that F***** up
the system in the first place are locked on boot-up.

Remember FORMAT C: is your friend!! <BG>

Christian





------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Subject: Re: Registry for Linux - Why?
Date: 31 Dec 1998 02:54:59 GMT
Reply-To: [EMAIL PROTECTED]

On Wed, 30 Dec 1998 15:57:50 GMT, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote: 
>In article <[EMAIL PROTECTED]>,
>  Thornton Prime <[EMAIL PROTECTED]> wrote:
>>
>> I agree with you that a registry is a bad idea, but here are some
>> arguments for a global registry:
>
>I think if it's done right it could be an incredible asset to Linux. Something
>that can take alot of the confusion out of configuring a UNIX system. If it's
>all in one place, you already know where to look when something's wrong. (of
>course having the data in a single file a la windows is IMHO a VERY BAD
>idea.) No one has to use it if he/she does not want to.

The Right Way to do this is not to take an authoritarian approach, which is
the Windows Way, but rather to provide tools that empower application
developers to "do configuration" in a better fashion.

It would thus be appropriate to provide a Sample Implementation that
clearly defines a combination of:

- API (so that applications may conveniently get at config data without
having to worry about Physical Representation),

- Specification of Physical Representation, so that Alternative
Implementations may be created and validated.

  (Suppose, for instance, that the SI is a C version, and I'm using
  [Perl|Python|Scheme|ML|...] and don't want to link in the SI code...)

- Tools to help with common tasks such as backing up the config data,
comparing to old versions, checking when something was last fiddled
with, validating that the configuration is correct (e.g. - conforms to
the requirements of the Physical Representation), ... 

The point is to *MAKE IT EASY FOR OTHERS TO USE IT,* and to convince
them that it is a better idea to use this than it is to "roll their
own."  

Note that it is easy enough under UNIX to create Yet Another Config
System from scratch that convincing people to use yours will take some
doing.
-- 
"How should I know if it works?  That's what beta testers are for.  I
only coded it."  (Attributed to Linus Torvalds, somewhere in a posting)
[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/lsf.html>

------------------------------

From: Kazin <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.networking,comp.os.linux.misc
Subject: Re: Ethernet/Token Ring and Cabletron Switches
Date: Wed, 30 Dec 1998 22:34:40 -0500

Volker Dormeyer wrote:
> 
> Hi!
> 
> I have a serious problem in our switched Ethernet/Token Ring
> environment.
> I can't ping (IP) from a Linux Box (Kernel 2.0.36) in the Token Ring
> segment
> to i. e a Windows NT Workstation in the Ethernet segment.
> 
> Only when I reduce the MTU-size on the Linux Box to Ethernetsize (1500
> bytes)
> it works together with the NT-Workstations and some IBM AIX machines.
> 
<snip>
> 
> Some time ago I observed the same behaviour with a XyLan OmniSwitch.

        I ran into the same problems, but on SynOptics token-ring hardware
and Bay ethernet hardware.  We ended up turning the MTU down to 1496 on all
the token-ring machines.  I don't know if the problem was ever *really*
resolved; I left the company.  But I doubt it's your hardware, I bet it's
NT... :)
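
For what it's worth, the change on the Linux side is a one-liner (assuming
the token-ring interface shows up as tr0; adjust to taste), plus whatever
your interface setup script needs so that it is repeated at boot:

ifconfig tr0 mtu 1496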

 
=======================================================================
  Mike Stella                             Software / Systems Engineer
  http://www.sector13.org/kazin            Thirteen Technologies, LLC
=======================================================================

------------------------------

From: Michael Peterson <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.setup
Subject: Re: [Q] SCSI Tape Setup
Date: Wed, 30 Dec 1998 16:01:58 -0800

Rod Smith wrote:

> You've got SCSI tape, but do you have a driver for your SCSI host adapter?
> Do you have any other SCSI devices, and if so, do they work?
>
> --
> Rod Smith
> [EMAIL PROTECTED]
> http://www.users.fast.net/~rodsmith
> NOTE: Remove the digit and following word from my address to mail me

I'm using an AHA2940U/W adapter.  My boot disk (sda1, sda2, and sda3) and my jaz drive
are fully functional.    The driver compiled into the kernel (2.0.36) is the AIC7xxx
driver selected from "make xconfig".  Both the hard disk (LUN 0) and the jaz drive (LUN
4)  are on the internal bus.  The SCSI tape device is on the external bus (LUN 5).
Finally, I installed 2.1.132 and rebuilt the kernel after selecting the AIC7xxx driver.
When I booted this kernel, the tape device was seen at /dev/st0 (as we would expect).

I believe this is a bug in the driver, the kernel, or both, unless someone
can demonstrate a working tape device using this adapter and kernel
combination.
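
One quick sanity check before pinning it on the st driver is to see whether
the 2.0.36 mid-level SCSI code noticed the drive at all (only a suggestion,
not a diagnosis):

cat /proc/scsi/scsi                 # every device the mid layer attached
dmesg | grep -i -e scsi -e tape     # probe messages from the adapter and st driver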

Cheers,

Michael


------------------------------

From: ag <[EMAIL PROTECTED]>
Subject: resetting kernel IPC parms
Date: Wed, 30 Dec 1998 22:27:29 -0600
Reply-To: [EMAIL PROTECTED]

Hi All,

I'm trying to install the Oracle demo database.  As per instructions,
I'm mucking around in shmparam.h (kernel sources).  I've run across the
following:


=============================================

#define _SHM_ID_BITS    7
#define SHM_ID_MASK     ((1<<_SHM_ID_BITS)-1)

#define SHM_IDX_SHIFT   (_SHM_ID_BITS)
#define _SHM_IDX_BITS   15
#define SHM_IDX_MASK    ((1<<_SHM_IDX_BITS)-1)

/*
 * _SHM_ID_BITS + _SHM_IDX_BITS must be <= 24 on the i386 and
 * SHMMAX <= (PAGE_SIZE << _SHM_IDX_BITS).
 */

=============================================
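
By my back-of-the-envelope arithmetic (assuming the usual i386 PAGE_SIZE of
4096 bytes), the stock values allow:

# SHMMAX ceiling with _SHM_IDX_BITS = 15:
echo $(( (4096 << 15) / 1048576 ))   # 128 (MB)
# _SHM_ID_BITS + _SHM_IDX_BITS is 7 + 15 = 22, so two bits of headroom remain
# under the 24-bit ceiling; at 7 + 17 the corresponding SHMMAX ceiling is:
echo $(( (4096 << 17) / 1048576 ))   # 512 (MB)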


I need to go above the 24 bit limit spoken about in the final comment. 
If I compile at 486 or higher, just how high can I go?  Any thoughts or
pointers to the correct FM would be really appreciated.

Andrew


------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.os.linux.development.apps
Subject: Re: Registry - Already easily doable, in part.
Date: 31 Dec 1998 02:54:47 GMT
Reply-To: [EMAIL PROTECTED]

On Wed, 30 Dec 1998 14:21:46 +0000, Tristan Wibberley
<[EMAIL PROTECTED]> wrote: 
>Christopher Browne wrote:
>> On Tue, 29 Dec 1998 04:49:06 +0000, Tristan Wibberley
>> <[EMAIL PROTECTED]> wrote:
>> >you get the idea, reverse lookups via hosts needs attention too
>> >(note the invalid hostname character ':' for clarity).
>> 
>> I *think* I get it; as an alternative, would it seem more
>> appropriate to do something like the following?
>> 
>> # touch :192.168.1.2
>> # ln :192.168.1.2 chris.brownes.org
>> # ln chris.brownes.org chris
>> # ln chris.brownes.org mail
>
>How about:
>
># touch  .192.168.1.2
># ln -s  .192.168.1.2 chris.brownes.org
># ln -sf chris.brownes.org .192.168.1.2
># ln chris.brownes.org chris
># ln chris.brownes.org mail

The use of symbolic as well as hard links leaves me feeling a little
uneasy; I wasn't sure about having ".192.168.1.2" or ":192.168.1.2" and
I think I now can articulate why. 

I have to ask the question: 

  Why do you want to have a file to represent the IP address rather
  than having the address *in* a file? 

Other comments indicate that you seem to want to be able to trace from
the IP to host names; I assert that this is not something that people
actually want to do. 

When you use /etc/hosts, what you're doing is to try to determine the IP
address given a host name.  That is nicely represented by having
filenames that either contain or link to a file containing the IP
address. 

It would obviously be a "neat idea" to reverse the lookups in the
context of running a utility to do IP configuration (e.g. - in
Linuxconf); I don't have a big problem in that case with requiring that
the utility build a tree containing the data in /etc/hosts and reverse
the lookup itself.  

Since an IP address may map to multiple names, it's pretty much a given
that any program doing that lookup has to have considerable
sophistication to cope with that. 

If we throw in on top of that the use of varying kinds of links
(symbolic versus hard) to represent in some way how "close/firm/hard" a
connection is, that forces even more sophistication into the process. 

I'm inclined to head back to the original:

# echo 192.168.1.2 > chris.brownes.org
# ln chris.brownes.org chris    
# ln -s chris.brownes.org mail  

Note that "chris" is a hard link, implying that it is a "strong" tie to
chris.brownes.org, whilst the "mail" link is a soft link, as I would be
more likely to move mail to another host.  

Supposing I nuke chris.brownes.org and chris, this leaves mail as a
dangling link that neither goes away nor keeps a reference to
192.168.1.2, which seems to also be the Right Thing To Do. 

Thus, gethostbyname(name) would, when resolving via /etc/hosts, check to
see if it could open the file printf("/etc/hosts/%s", name), and if so,
return the IP address found therein. 
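
(In shell terms the forward lookup under this scheme would be nothing more
than the following, where $name is whatever was passed to gethostbyname():)

cat "/etc/hosts/$name"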

If there is to be a database to do the reverse lookup,
gethostbyaddr(addr), I'd be inclined to have it handled by doing
something like the following bit of pseudo-Perl... 

use DB_File;

# (A real program would need a name other than Perl's builtin gethostbyaddr.)
sub gethostbyaddr {
  my ($addr) = @_;
  my %REV;
  tie %REV, 'DB_File', '/etc/reversehosts.db';
  # Rebuild the index whenever the hosts directory is newer than the DB.
  if ((stat('/etc/hosts'))[9] > ((stat('/etc/reversehosts.db'))[9] || 0)) {
    %REV = ();                        # Nuke all entries; we're rebuilding...
    opendir(HOSTS, '/etc/hosts') or die "cannot read /etc/hosts/: $!";
    foreach my $file (grep { -f "/etc/hosts/$_" } readdir(HOSTS)) {
      chomp(my $ip = `cat /etc/hosts/$file`);
      $REV{$ip} = defined($REV{$ip}) ? "$REV{$ip}:$file" : $file;
    }
    closedir(HOSTS);
  }
  my @names = defined($REV{$addr}) ? split(/:/, $REV{$addr}) : ();
  untie %REV;
  return @names;
}

It would be nice if deleting and inserting entries in /etc/hosts would
incrementally change the reverse lookup; that is, I think, something
that would realistically require forcing updates to /etc/hosts to go
through a specific API (or making processes do additional work) which
strikes me as annoying/extra complexity.  The above "on-demand automagic
index rebuild" is certainly less intrusive. 

Comparing to the present /etc/hosts "file" method, this allows updates
to be done in a "transactional/incremental" fashion, being
immediately/directly addressable via gethostbyname() from the moment the
update takes place.  This approach would be faster than the present
situation where one must read through the /etc/hosts file each time. 

Reverse lookups via gethostbyaddr() may require rebuilding the reverse
lookup database; this is no worse than the present situation where (for
/etc/hosts-based resolution) one must read through the whole /etc/hosts
each time.  If there has been an update to /etc/hosts/, the DB has to be
rebuilt (which shouldn't be worse than reading the /etc/hosts file); if
there has been no recent update to /etc/hosts/, it can go straight to a
hash table, which would give more rapid access than is available now.

Seems to me to be (for the most part) "win-win."

Downsides: hosts no longer are ordered in /etc/hosts/ in a
user-specified order, and there may arguably not be as obvious a place
to provide comments. 

-- 
"Windows 98 Roast Specialty Blend coffee beans - just like ordinary
gourmet coffee except that processing is rushed to leave in the insect
larvae.  Also sold under the ``Chock Full o' Bugs'' brand name..."
[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/lsf.html>

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.sys.mac.advocacy,comp.os.linux.advocacy,comp.sys.next.advocacy
Subject: Re: "One thing that looks definite is....Mac OS X Server in January..."
Date: 31 Dec 1998 02:54:50 GMT
Reply-To: [EMAIL PROTECTED]

On Wed, 30 Dec 1998 04:21:44 GMT, Nelson Gerhardt <[EMAIL PROTECTED]>
wrote: 
>On 16 Dec 1998 05:25:17 GMT, [EMAIL PROTECTED] (Sal Denaro) wrote:
>>On Tue, 15 Dec 1998 13:55:06 -0600, Michael Peck 
>>                                            <[EMAIL PROTECTED]> wrote:
>>>[EMAIL PROTECTED] wrote:
>>>>         Either the source needs to be free or the specification needs to be.
>>>I will affirm this. The Linux community will not accept "black box"
>>>solutions.
>>
>><<Boggle.>>
>>
>>One would wonder why you were advocating that Apple port YB to Linux if you 
>>believed this. Especially since you clarified your position to stating that 
>>Apple doesn't have to give it away.
>
>1999 is going to be an interesting year indeed. As Linux's momentum
>builds ever more powerful, we're likely going to see some pretty
>interesting turn of events.

No doubt there will be many interesting "turns," both for good and for
ill, from various perspectives. 

>Among the most interesting will be what happens to the Linux desktop.
>OSX server _might_ become the defacto Linux desktop amongst less
>command-line driven, or it might not. 

That seems to me to be exceedingly unlikely. 

In order for some form of "OS-X on Linux" to be *commercially
important,* this would require that there be a reasonably substantial
quantity of commercial software (word processors, spreadsheets,
calendaring, web browsers, ...) available *for sale,* and not late in
1999.

That would in turn require that there be a robust *and highly adopted*
development environment for developers "even less late in 1999."

That would in turn require that Apple release development tools "even
further less late in 1999," and I'd argue that if those tools aren't
available in at least an embryonic "alpha" form *RIGHT NOW* that this
project plan is not terribly viable.

The flip side would be for there to be lots of "non-commercial"
deployment of applications on a YB-for-Linux or OS-X-for-Linux.  Much
the same constraints apply for much the same reasons.

If you suggested that it could become relevant in year 2000, then I'd be
more agreeable, but see below... 

>KDE, GNOME and GNUStep might still be the main efforts by the end of
>the year....but I think not.

For completeness' sake, I would add various further "frameworks" to the
list of possibilities, and suggest that there is
only room for there to be a small number of "main efforts" that can
dominate peoples' attention. 

I would order "levels of interest" of these sorts of "frameworks" thus:

---> Tier 1: High visibility/viability as many developers are actively
working on and committed to these efforts... 
  - KDE
  - GNOME

---> Tier 2A: Less visible, likely viable, "interesting enough" to offer
the possibility of jumping to Tier 1 given unusual support from someone:
  - Java 2.0 (Given the Java cheerleading by Sun and others)
  - GNUStep  (Given the possibility of being a route to getting NeXT
              and Rhapsody)
  - WINE     (Win32 emulation and application porting is potentially
              interesting, note that Corel's support may become
              pertinent...) 
  - Things built atop Mozilla components, perhaps?

---> Tier 2B: Less visible, certainly viable within a niche, but not
likely to jump to tier 1 due to being either "too mature to be
interesting" or "too hard to use to be widely used"
  - TCL/Tk
  - CDE
  - Less common X libraries such as FLTK, InterViews, Lesstif
  - Willows TWIN

---> Tier 3: Interesting, but too experimental or too restrictively
licensed to be likely to be of wide interest unless something *real*
unusual happens... 
  - Berlin/GGI
  - Squeak
  - IBM's OpenDOC (free release)
  - Anyone remember MGR?

>A window of opportunity exists, it's just a matter of whom is going to
>exploit it. There is a chance that Apple might be able to keep its
>desktop closed sourced and plop it on Linux...or it might do a SUN an
>have some sorta "Open, but we mak money when you make money" kinda
>deal.

A non-existent YB port to Linux qualifies at this point as a "less than
Tier 3" entrant on the list above. 

It would jump to either Tier 3 or 2A on the above list given some
significant deployment of Apple resources to it.

To get to "Tier 1" requires having an "application framework" that
people can very widely commit to.  Jumping into "Tier 1" would, I would
argue, require that Apple do some remarkably cooperative things
(licensing-wise) that would likely simultaneously move GNUStep into
"Tier 1" and be highly unusual for Apple given the sorts of things they
have done with their technology resources in the past. 

-- 
There are no "civil aviation for dummies" books out there and most of
you would probably be scared and spend a lot of your time looking up
if there was one. :-)                     Jordan Hubbard in c.u.b.f.m
[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/lsf.html>

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.development.system) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Development-System Digest
******************************
