Linux-Misc Digest #484, Volume #21               Sat, 21 Aug 99 01:13:07 EDT

Contents:
  Re: Linux file-size limit? (Christopher Browne)
  Re: *nix vs. MS security (Christopher Browne)
  Re: Any free SQL server available? (Christopher Browne)
  Re: Linux file-size limit? (Leslie Mikesell)
  Re: [Q] Parallel port access program permission (Victor Wagner)
  Re: max. array size GNU C compiler... ("Andrew P. Mullhaupt")
  Re: Where can I find Xfree86 3.3.4 to download? help, please (Lew Pitcher)
  Re: Linux on a 286 (Wine Development)
  SV: *nix vs. MS security ("Efraim Mostrom")
  Re: Linux file-size limit? (Christopher Browne)
  Re: data dump in unix (Christopher Browne)
  Requesting comments on new strategy. (Mario Miyojim)
  Re: Installing Netscape 4.61 (Full Name - Optional)
  Re: *nix vs. MS security (Lew Pitcher)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.os.linux.hardware
Subject: Re: Linux file-size limit?
Reply-To: [EMAIL PROTECTED]
Date: Sat, 21 Aug 1999 03:50:45 GMT

On 20 Aug 1999 11:23:29 -0500, Leslie Mikesell <[EMAIL PROTECTED]> wrote:
>In article <Ua2v3.15065$[EMAIL PROTECTED]>,
>Christopher Browne <[EMAIL PROTECTED]> wrote:
>>On Tue, 17 Aug 1999 16:54:55 -0600, John Thompson
>><[EMAIL PROTECTED]> wrote: 
>>>>         1. Is there a limit to the file size hardcoded in the
>>>>kernel? 
>>>
>>>AFAIK, this is an intrinsic limitation of the filesystem.
>>
>>Apparently what you *think* you know isn't correct.  The ext2
>>filesystem supports files of up to 1TB in size.
>
>But you can't do that on a pentium.
>
>>The standard file access API on 32 bit architectures is what can't
>>handle more than 2GB.
>
>The file access API on *Linux* 32 bit architectures is what can't
>handle more than 2GB.  The *bsd's have done it for years - there
>was a small amount of pain in the transition but it was mostly
>transparent to user programs.

Hmm.  
  Newsgroups: comp.os.linux.hardware, comp.os.linux.misc.
  Subject: Linux file-size limit?

Discussion concerning ext2.

Not unreasonable to consider that the context was that of Linux,
rather than *BSD.

>>The *true* problem is that the data structure used to hold the pointer
>>that indicates how far into an input stream you are is only 32 bits.
>
>Some unix versions used to choke when any stream went over 2GB, including
>the serial ports or network connections.  I thought that was fixed
>years ago and never applied to Linux. 
>
>>TAR doesn't get you around this problem...
>
>Amanda uses a holding disk for backups on their way to tape and
>gets around the size limit by writing the intermediate disk
>copy in chunks that are appended back together on the way to
>tape.
-- 
:FATAL ERROR -- ERROR IN USER
[EMAIL PROTECTED] <http://www.ntlug.org/~cbbrowne/lsf.html>

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.os.linux.security
Subject: Re: *nix vs. MS security
Reply-To: [EMAIL PROTECTED]
Date: Sat, 21 Aug 1999 03:50:12 GMT

On 19 Aug 1999 11:44:19 -0700, Kevin Esme Cowles
<[EMAIL PROTECTED]> wrote: 
>In article <zkJu3.12219$[EMAIL PROTECTED]>,
>Christopher Browne <[EMAIL PROTECTED]> wrote:
>
>[ ... ]
>
>>b) In terms of security, I suggest you think again.  
>>
>>If you want to talk about formal security certifications, there are
>>UNIX systems rated as high as B1 by the NSA/NIST.  NT is only rated
>>C2, and that is only true for version 3.51, with *networking turned
>>off.*
>>
>><http://www.radium.ncsc.mil/tpep/epl/epl-by-class.html>
>
>Just a nit.
>
>I personally admin a system with a B2 rated OS (DG/UX), and the page 
>that you listed even refers to a B3 system (yes I know it's stupid, but
>lower letters are more secure, and higher numbers are more secure).

Pardon me; yes, you're quite right.

B3 is indeed "more secure" than B1, and I did know this.  When I said
B1, I really did mean to indicate B3.  Trusted Xenix is listed by TPEP
as a B3 system.

The fact that numbers increase to indicate greater security whilst
letters decrease to indicate the same thing is a Fairly Confusing
Thing.

>And while I'm a huge fan of open source, I'll point out that these
>systems are not open source.  Their code has been reviewed, very
>thoroughly, for any possible security vulnerabilities, as part of the
>certification process.  But they definitely do not allow the general
>public to see the source code.  So to a certain extent, they do rely
>on security through obscurity to prevent any vulnerabilities that do
>exist from being easily found.

I'm not sure that obscurity is a security goal as much as it is a
"preserving the proprietary value of expensive systems" goal.

Justification could be mustered to the effect that:
 "The source code has been thoroughly reviewed by security auditors
  and (probably) by automated analysis tools.  Once audited, it is
  preferable to keep the sources secret so as to prevent attackers
  from looking for vulnerabilities."

I'd bet more money on the *true* and *useful* causality being the
(likely unstated) reason that:
 "The source code has been thoroughly reviewed, at great co$t to the
  vendor.  We don't give out sources to other OS products that we
  sell; we're certainly not going to give out this code that we
  inve$ted an extra few $million in."

-- 
:FATAL ERROR -- YOU ARE OUT OF VECTOR SPACE
[EMAIL PROTECTED] <http://www.ntlug.org/~cbbrowne/lsf.html>

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Subject: Re: Any free SQL server available?
Reply-To: [EMAIL PROTECTED]
Date: Sat, 21 Aug 1999 03:50:31 GMT

On Thu, 19 Aug 1999 15:04:28 GMT, Dustin Puryear <[EMAIL PROTECTED]> wrote:
>On 18 Aug 1999 09:17:15 -0400, [EMAIL PROTECTED] wrote:
>>"WME" <[EMAIL PROTECTED]> writes:
>>> > Not to mention the fact that PostgreSQL is more full-featured, supporting a
>>> > whole bunch of goodies that MySQL doesn't (like transactions).  It is also
>>Last time I looked at both PostgreSQL and MySQL, it was the opposite.
>>See http://www.tcx.se/crash-me.html
>>It's a big chart and will take some time to load, but it compares all the
>>databases, not just the free ones.
>
>What's the deal with accessing PostgreSQL from Access and other
>Windows products? Any ODBC support?

PostgreSQL supports the SQL-CLI standard for remote access.

Most ODBC implementations appear to be emulations of the SQL-CLI
standard to one degree or another; they may work...
-- 
Rules of the Evil Overlord #58. "I will make sure I have a clear
understanding of who is responsible for what in my organization. For
example, if my general screws up I will not draw my weapon, point it
at him, say "And here is the price for failure," then suddenly turn
and kill some random underling." 
<http://www.eviloverlord.com/lists/overlord.html>
[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/rdbms.html>

------------------------------

From: [EMAIL PROTECTED] (Leslie Mikesell)
Crossposted-To: comp.os.linux.hardware
Subject: Re: Linux file-size limit?
Date: 20 Aug 1999 11:23:29 -0500

In article <Ua2v3.15065$[EMAIL PROTECTED]>,
Christopher Browne <[EMAIL PROTECTED]> wrote:
>On Tue, 17 Aug 1999 16:54:55 -0600, John Thompson
><[EMAIL PROTECTED]> wrote: 
>>>         1. Is there a limit to the file size hardcoded in the
>>>kernel? 
>>
>>AFAIK, this is an intrinsic limitation of the filesystem.
>
>Apparently what you *think* you know isn't correct.  The ext2
>filesystem supports files of up to 1TB in size.

But you can't do that on a Pentium.

>The standard file access API on 32 bit architectures is what can't
>handle more than 2GB.

The file access API on *Linux* 32 bit architectures is what can't
handle more than 2GB.  The *bsd's have done it for years - there
was a small amount of pain in the transition but it was mostly
transparent to user programs.

>>Rather than backing up to another partition, you can backup
>>to a device that does not use a filesystem; eg, a tape
>>drive.  Tar can handle multi-gigabyte archives on a tape
>>drive without size limitation problems beyond the physical
>>limitations of the media used.
>
>Have you tried this so as to verify the veracity of this claim?

I'm pretty sure I have written tapes longer than 2GB without
errors, although I am not sure I have ever had to read one
back all the way.

>The *true* problem is that the data structure used to hold the pointer
>that indicates how far into an input stream you are is only 32 bits.

Some unix versions used to choke when any stream went over 2GB, including
the serial ports or network connections.  I thought that was fixed
years ago and never applied to Linux. 

>TAR doesn't get you around this problem...

Amanda uses a holding disk for backups on their way to tape and
gets around the size limit by writing the intermediate disk
copy in chunks that are appended back together on the way to
tape.

  Les Mikesell
   [EMAIL PROTECTED]  

------------------------------

From: [EMAIL PROTECTED] (Victor Wagner)
Crossposted-To: 
comp.os.linux,comp.os.linux.development,comp.os.linux.development.apps,comp.os.linux.development.system,comp.os.linux.hardware,comp.os.linux.questions
Subject: Re: [Q] Parallel port access program permission
Date: 20 Aug 1999 20:14:35 +0400

In comp.os.linux.misc YANAGIHARA <[EMAIL PROTECTED]> wrote:
:>
:>  Read a section about "Changing process persona" in info libc
:>
:>  Make your program setuid root and make call to seteuid(getuid())
:>  just after calling ioperm. 
:>  Then make your program owned by root and chmod u+s it.
: Thanks. I tried the chmod command. Other users can now use my
: program successfully.

:>  Of course, having yet another suid-root prog is not good, but 
:>  it would do the job.

: I understand this approach is dirtier than writing a kernel
: driver module. But I think the chmod operation is better than
: hardcoding getuid(), because root can administer permissions without
: rebuilding the program. What do you think about this point?

I think you should use both:

1. chmod u+s allows your program to start with root privileges and gain
access to the ports.
2. Calling seteuid(getuid()) as soon as this access is gained makes it
give up root privileges and continue running under the persona of the
invoking user (so it can create files owned by that user and, more
importantly, cannot read or write files inaccessible to that user),
while still retaining the right to access the port.
-- 
========================================================
Victor Wagner @ home       =         [EMAIL PROTECTED] 
I don't answer questions by private E-Mail from this address.

------------------------------

From: "Andrew P. Mullhaupt" <[EMAIL PROTECTED]>
Crossposted-To: 
comp.lang.c,comp.os.linux.setup,comp.os.linux.alpha,gnu.gcc.help,comp.lang.fortran
Subject: Re: max. array size GNU C compiler...
Date: Fri, 20 Aug 1999 22:19:51 -0400


EKK <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]...
>
> > Another question is:  Do you really need all those values in memory at
> > the same time?  Can you store the values in a file and only haul in
> > small pieces that you are working on? {Works with matrices, and tables}.
> >
>
> this would be a good thing, however, this is a finite element analysis
> code which would need a massive overhaul to achieve what you describe
> above, which would induce considerable loss of performance.

It very well might not, if you use memory mapped files as the backing store
for the variables. One of the simpler ways to do this would be to replace
the memory allocation used by Fortran; F77 systems which have
integer/Cray/Digital style pointers provide a way to do this. Then you can
use the VM interface to access the array as if it were all in memory, but
with the operating system kernel aware that only a small part of it needs to
be in the resident set. On several flavors of Unix, this can provide a
substantial performance benefit, as opposed to a loss. Theoretically, this
would be an even bigger improvement on the Windows platforms, but you have
to get through Microsoft's layer of goo on top of the VM interface, as well
as certain deficiencies in the W95 implementation.

This modification could, depending on how your code is written, be
accomplished in a very small number of lines of code changed.

In the case of F9x and allocatable arrays, there doesn't seem to be a way to
get around this problem.

Later,
Andrew Mullhaupt



------------------------------

From: Lew Pitcher <[EMAIL PROTECTED]>
Subject: Re: Where can I find Xfree86 3.3.4 to download? help, please
Date: Sat, 21 Aug 1999 00:01:52 -0400

Try http://www.xfree86.org/
-- 
Lew Pitcher

Master Codewright and JOAT-in-training

------------------------------

From: Wine Development <[EMAIL PROTECTED]>
Subject: Re: Linux on a 286
Date: Fri, 20 Aug 1999 20:45:12 +0100

Noah Roberts (jik-) wrote:
> 
> I have heard this is possible, but I am at a loss as to where to start
> looking.  I checked www.linux.org a bit.  I know this is not possible
> with the normal setup, but I have heard it is *possible* and I want to
> know how.  A friend brought over a 286 laptop and wants Linux on it.

Try looking at the HURD project; they were trying to build a kernel that
would run on an 8086 (don't ask me why). ISTR that one of the distributions
was trying to use this kernel.


-- 
Keith Matthews                  Spam trap - my real account at this 
                                                        node is keith_m

Frequentous Consultants  - Linux Services, 
                Oracle development & database administration

------------------------------

From: "Efraim Mostrom" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux,comp.os.linux.questions,comp.os.linux.security
Subject: SV: *nix vs. MS security
Date: Sat, 21 Aug 1999 01:36:21 +0200


Oystein Viggen <[EMAIL PROTECTED]> wrote in the
newsgroup message:[EMAIL PROTECTED]
> I actually worked in a bank this summer, and they use OpenVMS on their
> servers....  :)
>
Well... about uptimes: the servers that have the shortest uptimes
are the NT servers. The Netware servers are among the best; their
uptime is great!

/Efraim



------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Crossposted-To: comp.os.linux.hardware
Subject: Re: Linux file-size limit?
Reply-To: [EMAIL PROTECTED]
Date: Sat, 21 Aug 1999 03:50:09 GMT

On Fri, 20 Aug 1999 07:56:58 -0600, John Thompson
<[EMAIL PROTECTED]> wrote: 
>Christopher Browne wrote:
>> On Tue, 17 Aug 1999 16:54:55 -0600, John Thompson
>> <[EMAIL PROTECTED]> wrote:
>> >>         1. Is there a limit to the file size hardcoded in the
>> >>kernel?
>> >
>> >AFAIK, this is an intrinsic limitation of the filesystem.
>> 
>> Apparently what you *think* you know isn't correct.  The ext2
>> filesystem supports files of up to 1TB in size.
>
>> The standard file access API on 32 bit architectures is what can't
>> handle more than 2GB.
>
>OK.  I stand corrected.  This is a hardware limitation of
>the Intel-type hardware then, yes?

No, this is a limitation of the file access API as implemented on 32
bit architectures.

It is also a limitation of NFS version 2.
-- 
The light at the end of the tunnel may be an oncoming dragon.
[EMAIL PROTECTED] <http://www.ntlug.org/~cbbrowne/linuxkernel.html>

------------------------------

From: [EMAIL PROTECTED] (Christopher Browne)
Subject: Re: data dump in unix
Reply-To: [EMAIL PROTECTED]
Date: Sat, 21 Aug 1999 03:49:59 GMT

On Fri, 20 Aug 1999 19:33:51 GMT, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote: 
>I was trying to data dump in unix directly from a tape which contained
>attributes in EBCDIC, Packed Format and Zoned Format. The command I
>used was
>  $dd if=<inputfile> conv=ascii of=<outputfile>
>
>when I saw the output the EBCDIC attributes  were converted into ascii
>but the others gave out garbled characters. Has anyone dealt with this
>issue before. Please let me know.

EBCDIC in, ASCII out.

Those packed-decimal bytes were treated as if they were EBCDIC
characters, and were translated to the closest equivalent ASCII characters.

The point here is that dd is not aware of the difference between Just
Plain Text and "formatted" text.
-- 
"Unfortunately, because the wicked sorcerers of Silikonn' Vahlli hated
freedom, they devised clever signs and wonders to achieve the mighty
Captive User Interface, also known as the Prison for Idiot Minds."
-- Michael Peck <[EMAIL PROTECTED]>
[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/lsf.html>

------------------------------

From: Mario Miyojim <[EMAIL PROTECTED]>
Subject: Requesting comments on new strategy.
Date: Fri, 20 Aug 1999 16:52:00 -0600


                Open Source Application Development Strategy

Over the years, since the introduction of computers into human
society, the costs of developing hardware and software have 
changed dramatically.

Hardware: mainframes were expensive items and only major enterprises
could afford them. Currently, home businesses can afford more than
one computer.

Software: was very limited in the beginning, then became a highly
specialized product, costing so much to develop that only copies on
specific computers were sold and high maintenance fees were charged.
With the creation of microcomputers, software prices decreased
significantly. Prices changed significantly again with the introduction 
of the open source movement: GNU tools, Linux, free BSD, Apache, the 
Internet as a whole, etc. But this change will only be complete
when we find a sustainable way to develop better applications that
will run on the low-cost platforms. We live in a real world, full
of greedy corporations, so we need to find a way to develop high 
quality software helpful to human society, while making a living in 
the process.

I have been rereading Richard Stallman's 'GNU manifesto' to better 
understand its implications. His endeavors have been toward a society 
that would be able to enjoy software at low cost, not necessarily free.
The GNU manifesto has generated a successful body of software, but I do 
not know whether anything will maintain its momentum and appeal forever.
Stallman's original problem, in summary, was caused by greedy corporations 
hoarding source code and charging licenses to use the binary at high prices, 
acquiring wealth and power at the expense of the rest of the world.

What I am going to propose is a way to address that original problem, 
which will produce profound changes in the way people view industry and 
corporations in the current world. It will eventually produce changes 
in all industries, but it can be more easily started from a subset: 
the people-related software development industry.

Let us start by discussing a concrete case: the Opera browser, developed 
by the Norwegian company OperaSoft. This software has received positive 
opinions from everybody, but it has two problems, from the Linux 
users' viewpoint: its license costs $35.00, and it does not run under 
Linux. The reason is that the company wishes to keep 
the source code proprietary, so it is having difficulty generating 
the Linux version under a traditional contract, due to cost. I have no 
idea how many licenses Opera has sold, but I don't think that they sold 
many millions to Windows users, because Internet Explorer and Netscape 
are available at zero price. I paid
$35.00 for the Windows version, which I do not use, as an incentive
for their Linux version. Let us imagine that they sold half a million
licenses; then they received $17.5 million in license fees, with which
they are paying their personnel and the contractors for porting to
other platforms. 

Imagine that the OperaSoft company had taken a radically different path, 
and that they had opened the source to the Internet, invited the entire
world to participate in adding features and removing bugs from their
initial good design, and that they allowed a copy to be downloaded at
one dollar initially and at twenty cents per upgrade. Assuming that
there are about 100 million computer users in the world, I imagine that
half of them would risk one dollar to have a better browser, without
much thinking. In this case, Opera would receive $50 million. To
encourage the worldwide open source team to continue contributing to
this and future projects, Opera could put aside 30% of this revenue
to reward the contributors. Let us say that 1000 persons contributed
equally to the project; then each person could earn $15 million / 1000 =
$15,000 for having participated in the project so far. Let us say
that all 50 million users buy the first upgrade, then they will pay
$0.20 x 50 million = $10 million. A portion of this will be distributed
to the contributors in the development, too.

Essentially, two parameters stand out in this hypothetical story: 
much lower unit price to make it attractive to average persons,
and no need to hire regular employees; only one or a few persons having
the initial idea would be required to arbitrate what changes will be
inserted in the next version to be made public, and manage the money.
An immediate consequence, if this story became reality, is that OperaSoft
would have created 1000 jobs without painful admission interviews and with
no up-front costs, the company would have earned a lot more than in the
traditional model, and the product would have come out for practical use
faster, on all platforms. Would it beat Internet Explorer and Netscape in
quality and time to market? I bet it would. I do not know the reasons for
the Mozilla project taking so long to generate a product, but I think one
of them is a lack of financial incentive.

In my view, the main difficulty in implementing this idea is the following:

One US dollar, or twenty cents, is much closer to the cost of a bank check
or credit card transaction than to a regular commercial transaction.
I do not know much about credit card transaction costs; let us assume
that it is ten US cents. Banks and financial houses still
tolerate occasional low-value transactions because the average value
is much higher. In the case of the above hypothetical sales strategy,
the average value will be less than one dollar, against which banks and 
credit card companies may raise a barrier. Another thing that annoys
me is that the arbitration regarding how much of the revenue would be
distributed among the contributors would still be with a corporation,
which would permit unfair handling of the money.

I have been thinking about possible solutions to this dilemma. The more
I think, the more I become convinced that we need to create a new trusted
entity, which I will call the OSB (Open Source Bank).

The basic functions of the OSB are mostly similar to a regular commercial
bank: maintain accounts, transfer values among accounts, accept deposits,
honor withdrawals, interact with other banks and clearinghouses, issue
account reports, etc.

The OSB will be radically new in the following aspects:

1) Some of its accounts are not corporate accounts, but project accounts,
   that is, they handle money related to open source projects, regardless 
   of the persons who lead them or contribute to them.

2) It will gladly handle small values per transaction, because those will
   be the basis of its operation. Maybe it will issue credit cards with
   low limits, especially for e-commerce purchases.

3) It will interact closely with open source project leaders, regarding
   distribution of open source earnings to project contributors, who will
   have personal accounts in the same bank.

4) The project accounts will be open to all concerned, i.e., the leader,
   the contributors, perhaps even to the potential buyers of products.
   This will severely reduce the possibility of fraud. Everybody will
   know at any time how many sales were made so far, how much has been
   distributed to contributors, to the leader, to the bank, taxes, etc.

With the concept of an open source project account, projects may survive
through many generations of leaders and contributors. Projects would have
longer lives than small enterprises, especially because they will become
stronger as they attract more contributors, who will introduce features
to grant them longer lives; bad projects will not even open an account.

I have considered creating a new currency, which would be called the OSC
(open source credit, or currency) to dissociate values from the US dollar, 
but I do not see a real necessity for this concept. I thought it might 
facilitate the valuation of products and contributions in countries 
outside the US, but I am not well versed in international economics. If 
anyone knows better, please comment.


Possible consequences of implementing this idea

1) Good ideas become products quickly with highest quality without 
   traditional corporate intervention.

2) Motivate and sustain a large pool of contributors in the whole world, 
   some of them fully dedicated to open source projects, as a way of
   life.

3) Persons with an Open Source Contributor credit card will be recognized
   all over the world.

4) A new important role will emerge: that of Open Source Banker, due 
   to the importance of the OSB to maintain the profile and stability 
   of open source projects.

5) Projects that cannot get traditional funding but have social value 
   will be able to jump start.

6) People with computer knowledge will find on the Internet a site 
   announcing open source application projects classified by area of 
   interest:
   health-oriented, mission-critical, industrial, research, networking,
   entertainment, etc. In this site, there will be no discrimination 
   regarding computer language, operating system, dbms, visa status, 
   location, education level, availability, anything. All participants will
   be telecommuting, so they can be on several projects simultaneously,
   depending on their interest and availability, and no questions will be
   asked when you join or quit a project. You can reuse a programming
   technique you invented as many times as necessary, and projects will pay
   for it if the contribution is accepted.

7) Large projects requiring improvement through in-depth research will be
   possible, because graduate students could earn degrees with research work
   for this purpose. The choice to contribute will rest with the researcher
   and not with his or her advisor.

8) Major contributors may evolve to become project leaders, and project
   leaders may become contributors to other projects.

9) Experienced project leaders may become coaches to project leader
   candidates. 

Every day I think about this direction, it grows stronger, and I am aware
that many points need to be addressed before it becomes executable.

I request comments from the community at large.



------------------------------

From: [EMAIL PROTECTED] (Full Name - Optional)
Crossposted-To: linux.redhat.misc
Subject: Re: Installing Netscape 4.61
Date: Fri, 20 Aug 1999 23:15:12 GMT

On Fri, 20 Aug 1999 16:45:07 GMT, [EMAIL PROTECTED] wrote:
[snip]
>Also, if my company's clients send me MS Office  files, I better
>have MS Office myself, or I'll be out of business before long
>(food for thought).
[snip]

Star Office http://www.stardivision.com handles all MS Office file
formats (as well as Wordperfect and some others) and is available on
all major OS platforms for under $50. Compare that to $500 for MS
Office (with $450 to spare).

:)

------------------------------

From: Lew Pitcher <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux,comp.os.linux.questions,comp.os.linux.security
Subject: Re: *nix vs. MS security
Date: Sat, 21 Aug 1999 00:04:24 -0400

Casey Schaufler wrote:
A while ago, Microsoft defined for us the meaning of "heterogeneous network".
To them, it meant networking MS Windows 3.1, MS Windows/95 and MS Windows/98
and MS Windows NT 4.0 together.

I wonder what they'd call MS Windows (anything) networked to an IBM OS/390
system?

 
> Christopher Browne wrote:
> 
> > If you want to talk about formal security certifications, there are
> > UNIX systems rated as high as B1 by the NSA/NIST.  NT is only rated
> > C2, and that is only true for version 3.51, with *networking turned
> > off.*
> 
> Actually, there's a B2 UNIX (Trusted Xenix), but I don't think
> TIS is selling it any more. NT's C2 evaluation-in-progress will
> include homogenous networking (or so I'm told) as SGI's B1 did
> in 1995. Be careful casting the networking stone as only two of
> the UNIX evaluations (SGI and Cray) include networking.
> 
> --
> 
> Casey Schaufler                         voice: (650) 933-1634
> [EMAIL PROTECTED]                           fax:   (650) 933-0170

-- 
Lew Pitcher

Master Codewright and JOAT-in-training

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.misc) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Misc Digest
******************************
