Linux-Misc Digest #253, Volume #19                Mon, 1 Mar 99 22:13:09 EST

Contents:
  Re: Multi-OS Booting with SCSI Drives (William Burrow)
  Re: Microkernels are an abstraction inversion (Francois-Rene Rideau)
  Re: Can NT with NTFS coexist with RedHat Linux ("Jon Wiest")
  C++ cross compiler (Monte Westlund)
  S.u.S.E. 5.3 and Matrox Millenium G200 AGP ("Chan, Siu-Kei")
  Re: Pentium III Boycott and survey info (Mircea)
  Re: Pentium III Boycott and survey info (Anthony D. Tribelli)
  StarDivision StarOffice Comments? ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (William Burrow)
Crossposted-To: comp.periphs.scsi,comp.windows.misc,alt.sys.pc-clone.dell
Subject: Re: Multi-OS Booting with SCSI Drives
Date: 2 Mar 1999 01:19:34 GMT
Reply-To: [EMAIL PROTECTED]

On Mon, 01 Mar 1999 14:06:57 GMT,
[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>We have a requirement to fit-up two training rooms with (possibly) new
>equipment, and I wanted to sound out any potential problems before we got to
>the purchasing stage.  The two biggest requirements we have are:
>
>* Quick switching between classes (so you can run a class during the day, and
>one during the evening).  Switching must be "stateful" -- that is, no loss of
>data or configuration between class days.

You could use LILO or any other boot manager.  They don't care what the
hardware is, as long as the BIOS or something like it can read the disk.
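
For instance, a lilo.conf along these lines would do it (just a sketch, and
the device names are guesses -- adjust them to your actual drive layout):

    boot=/dev/sda          # install LILO in the MBR of the first SCSI disk
    prompt                 # show a prompt and wait for a choice
    timeout=100            # boot the default after 10 seconds

    image=/boot/vmlinuz    # the Linux class disk
        label=linux
        root=/dev/sda1
        read-only

    other=/dev/sdb1        # chain-load NT from the second disk
        label=nt
        table=/dev/sdb

Run /sbin/lilo after editing, and you pick "linux" or "nt" at the boot prompt.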

>* Quick rebuild of boot disks -- many of our classes are on the OS itself, so
>the boot disks get messed up pretty quickly (new accounts added, things
>changed, etc.) We need a way to quickly (an hour or so for the full classroom)
>rebuild the boot drives from a master.

You say you are on a network.  Why not back up the drives to a big disk on
the server as images?  (I assume all the user machines are identical and each
just has two drives: one with Linux on it, another with NT.)

Reboot the machines after the day is over, and dd the images from the
server onto the disks.  A simple initrd or nfsroot Linux boot would work,
perhaps using a network boot if necessary (i.e. if the disks are all totally 
scrambled).
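
Something along these lines, say (untested, and assuming the server exports
/images over NFS and each machine's disk is /dev/sda):

    # take the master image once, after setting a machine up the way you want
    mount -t nfs server:/images /mnt
    dd if=/dev/sda bs=1024k | gzip > /mnt/linux-master.img.gz

    # restore it after class, from the initrd/nfsroot boot
    mount -t nfs server:/images /mnt
    gzip -dc /mnt/linux-master.img.gz | dd of=/dev/sda bs=1024k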

See the Remote-Boot mini-HOWTO and the two documents in
/usr/src/linux/Documentation:  initrd.txt and nfsroot.txt for info on
these.

>* Multiple SCSI drives -- my personal preferred choice.  Have a smallish
>(2-4G) drive for "day" and "night" classes (one each), and a largish (8-12G)
>drive for "master" copies of each drive.  Start of the week, you reboot all
>systems on drive 3, follow the menu, rebuild the drives for that week's
>classes.  Then in the morning, boot drive 1, in the evening, drive 2.

Why waste money duplicating identical disks over and over?  The only things
that change are a few system configuration files, which can be stored
separately (e.g. /etc on the Linux systems).
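
Roughly, something like this on each machine (just a sketch; /mnt/server is
whatever NFS-mounted directory you use):

    # save a class's configuration at the end of the day
    tar czf /mnt/server/day-class-etc.tar.gz /etc

    # put it back before that class meets again
    tar xzf /mnt/server/day-class-etc.tar.gz -C /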

>My questions are these: * We're a mostly Dell shop, and will likely be
>getting Dimension-class machines.  Any problems with SCSI-only configs on
>those machines?  Booting from different drives?  BIOS conflicts/problems with
>Adaptec 2940 or 1542 cards?

Can't answer.  However, if you wish to boot from different drives, you
may have problems.  LILO provides chain loading to enable this, though
I've never tested the technique.

>* We'd be using mostly Windows NT and Linux (RedHat or SUSE) operating
>systems.  Any big issues there?  Does Windows care if it gets booted from SCSI
>ID 2 while there are drives at 0 and 1 that aren't being used? (I'm told this
>can be an issue with IDE).  Any Linux issues on Dimensions we should be aware
>of?

I believe most SCSI controllers only bother to check whether there are
bootable drives at ID 0 and 1.  Could be wrong.  

>* For pricing reasons, the Celeron chip might be our processor du jour. Any
>problems with that and Linux, SCSI, Windows?  One of our classes will be
>Checkpoint Firewall, does anyone know of problems with that running on
>Celeron?

Is the Celeron non-compliant with the x86 instruction set?  Why should
there be a problem?

>* Finally, because of the firewall (and some other) classes, we'll have dual
>NICs in all machines.  Any big issues with that?

Check the Multiple-Ethernet mini-HOWTO for issues with Linux.  It doesn't
seem to be a big issue.  Can't say about NT.
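
One thing the mini-HOWTO does mention: older kernels only probe for a single
ethernet card by default, so you may need a boot argument to get the second
card detected.  Something like this in lilo.conf (from memory -- check the
HOWTO for the exact parameters):

    append="ether=0,0,eth1"    # autoprobe IRQ and I/O address for eth1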

>I appreciate any and all suggestions/feedback you can give me.  Feel free to
>respond privately, to [EMAIL PROTECTED], if you don't feel the public at
>large will benefit from your response....

Sometimes I make mistakes, and like to see them corrected so I don't 
make them again.  


Readers beware, this message is cross-posted everywhere.  
Followups restricted.

-- 
William Burrow  --  New Brunswick, Canada             o
Copyright 1999 William Burrow                     ~  /\
                                                ~  ()>()

------------------------------

From: Francois-Rene Rideau <[EMAIL PROTECTED]>
Crossposted-To: gnu.misc.discuss,comp.os.linux.advocacy
Subject: Re: Microkernels are an abstraction inversion
Date: 01 Mar 1999 22:46:33 +0100

Emile van Bergen <[EMAIL PROTECTED]> writes:

>> Of *course* when you have to face stupid code written by stupid people
>> in stupid languages, you have to use stupid low-level barriers.
> Exactly my point.
No, not at all your point.
The valid conclusion is that stupid barriers are "needed" in SOME cases,
just in the way WINE, DOSEMU, or EM86 are "needed", to run legacy code.
There might even be other cases (testing experimental unsafe code)
where they may be useful. But building a whole system around that concept,
and mandating the multiplication of barriers everywhere (= the µK design)
is stupid and EVIL.

> Still. Even in free software, there may be bugs, as long as some things
> are done in that 'portable assembler' you hate.
New things are not to be done in assembler, portable or not,
except for a tiny bit that can be secured or trusted or proven.
Necula and Lee have shown that proving the correctness of small optimized
assembly routines with respect to well-typing is not a problem.

>> By following your argument to its extreme point
>> (which is the one and only test for an argument),
> No. Not every statement is limitless in its scope or domain and
> will become absurd if used outside the intended domain.
Not at all. If you consider the MEANING of a statement,
it is limited in its scope and stays true (but becomes irrelevant) outside of it.
The SYNTAX can be limitlessly reinterpreted, but is completely irrelevant,
all the more so in discussions where people aren't using their mother tongue.

> The _ideal_ system is _balanced_.
That's utter nonsense! Balanced between what and what?
If balanced between everything and everything else,
what about balancing balance against imbalance?

To refocus on the original argument:
a computer system is something meant to accomplish high-level human tasks;
at the human (developer) interface is a language
(set of constructs and primitives) to express problems;
underneath is an implementation of that language.
A good system design provides as high-level an interface as it can efficiently
implement, and uses all the dirty low-level tricks it needs under the hood.
An evil system design (the µK) provides a low-level interface to the developer,
and uses an inefficient, naive implementation of high-level constructs below.


>> People are free to design clean and stateless things
>> even without forced low-level barriers.
> Of course they are. The only thing is that they don't.
That's the typical fascist argument: "letting people choose is bad,
because they won't choose well, so we must FORBID".
The libertarian philosophy is: "letting people choose is a necessity,
so we must ENABLE better choices by providing high-level tools."

> you have a low-level operating system with low-level barriers there's
> _nothing_ that keeps one from using a high-level language to develop
> a nice application in a single process space. I don't want anyone's
> application written in any language in the address space of the
> 'kernel'.
You've got it all reversed! A low-level system is what gives
the least consistency between applications,
hence the most bugs, inconsistencies, overhead, and stupid manual work.
It PREVENTS application writers from building applications
that can trust each other. Applications have to communicate
on the basis of the least common denominator, and the developer
is meant to enforce *by hand* all their invariants, which are not even
formally specified. This leads to the shoddy philosophy
"yes, it's intrinsically unsafe, but it's very difficult to get it right,
and normal use shouldn't reveal the bug anyway",
and to all the security bugs and general unreliability of UNIX system tools.

A high-level system, on the other hand, ALLOWS applications to define
high-level invariants that are enforced automatically *by the system*.
The programmer is relieved of these tasks, but the very few who want to
can still explicitly manage things, with proper escape mechanisms
(just like there is inline assembly in C for the few times when you need it);
if the low-level programmer is trusted, or can prove his code correct,
that's fine; otherwise, the system may implicitly wrap a low-level sandbox
around his code.

[About using a high-level language]
> No. You just pay the price of re-educating all more or less competent
> programmers to write JOCAML.
The cost of reeducation is zero.
A good programmer becomes productive in any new language in a few days,
and fully productive in a few weeks.
Bad programmers are counter-productive in any language and shouldn't be used
without re-education, anyway.

> Enforcing this is fa...... no, I won't use that word again.
I don't see that basing a system upon a high-level language
would enforce a choice of language any MORE than basing it upon
a low-level language does! Otherwise, UNIX is fascist for enforcing the use of C!
Whatever the system, there is a bias towards a "native" language.
You don't get to choose whether or not there's a bias; there will be one anyway.
You only get to choose a better language towards which to direct the bias.
Re-read the Jargon File's AI Koan about Minsky and Sussman.

>> Certainly every single program written in LISP, ML, Perl, Modula-3, 
>> Haskell, Mercury, Prolog, or any other high-level language, is formally
>> proven to never ever do an unauthorized memory access, [unless you 
>> explicitly do unsafe operations].
> 
> Any peripheral driver would need to do this. Imagine a video driver.
> And you can count on it that even if you don't need to do an unsafe
> operation, some people _will_.
>
So WHAT? Under *any* system, low-level drivers, low-level compilers, etc,
are to be trusted, and may only be enabled by the operator.

>> [Strong typing] is already much more than stupid
>> low-level memory protection will *ever* bring to you.
> No.
Yes.

> It provides me with a tool that enables me to distrust other people's
> code to a certain extent and still run it.
So does strong typing, you fool! Repeat after me:
Strong-typing gives a STRONGER invariant
than just trapping illegal memory access.

> It's a nice tool. It's useful.
> It isn't required, however.
Yes it is. The whole idea of EUNUCHS is to force low-level barriers
around every single program, and disallow high-level invariants.

> You can still write your applications in
> those great languages to share one big address space.
If you mean "I can emulate a high-level system in a low-level one",
yes I can, clumsily, and at a high cost, working around low-level limitations.
If you mean "it gives you the same guarantees", no, because
the low-level system doesn't give me any guarantee
of non-interference and of security preservation from other subsystems,
unless my emulator is really the only running application,
at which time what we have is a very clumsy implementation
of the high-level system.

> No, I don't force _anyone_ to modularize his/her _application_ into
> different process spaces! Neither does your average uK.
Yes it does, inasmuch as it is used.
Otherwise, yours is a non-argument, since even in a high-level system
there is an escape to low-level access.

> It only forces
> you to follow a certain protocol talking to other processes that weren't
> written by you.
This is EVIL, because these are low-level protocols that cost a lot at
runtime without giving ANY guarantee.
Strong typing gives you the guarantees and costs much less than marshalling
(ZERO if using static type checking, else a simple tag check).

> A protocol is necessary, it's an agreement between two
> parties to communicate in a particular way. And I want that agreement
> (no matter how it looks) to be enforced, yes.
This is PRECISELY the point about using a high-level language.
Low-level languages, BY DEFINITION, don't allow
system-enforced high-level protocols. They require stupid manual checking.
High-level languages, BY DEFINITION, allow for system-enforced high-level
protocols. They *also* allow stupid manual checking when you need it.
A high-level language makes the developer more productive,
and the resulting code more efficient, maintainable, portable, and scalable.

> It's not an eternal truth, it's a truth _now_ that needs recognition.
No, it's the current state of technology, and it needs to be obsoleted.

>> even braindead languages can be reasonably
>> efficiently implemented in a way that is intrinsically secure,
>> without the need for stupid low-level barriers.
> I fail to see your point here.
We mostly don't need low-level barriers.
We don't need them for normal system use.

>> ANY design is better [than µK]
> Can you justify this statement that any design is intrinsically 'better'?
> Define the scope of 'better' for a start.
Better: wastes fewer human and computer resources for the same or better results.

>> Even in C, metaprogramming as demonstrated in Tom Lord's ctool
>> can help a lot at maintaining elaborate system invariants throughout C 
>> code.
> This sounds to me like enforcing barriers before run time. What's the
> difference with barriers at run time?
The differences are:
1) you can CHOOSE the relevant invariants that you need,
 instead of having invariants you don't care about forced on you;
2) the runtime cost is zero.
That's ENABLING you to easily write safer and faster code,
instead of FORBIDding you to write it.

> I still try to be polite in the general _human_ sense of the word.
If politeness is not a form of respect, it's rubbish.
If it's a form of respect, then it ought to respect the things
that are rarest and most valuable on USENET: TIME and INTELLIGENCE.
Hence, get to the point, and don't use long-winded pleasantries.

[Strong Typing, with Perl as example]
> It's not a limitation,
Glad to see you admit that.

> it's a tool to prevent you from shooting yourself in the foot.
No. It doesn't prevent it: it doesn't require you to take care of your foot;
rather, it takes care of your foot for you.
That's all the difference between ENABLING and FORBIDDING.
It makes shooting yourself in the foot inexpressible and irrelevant.

> Address space separation is a tool to prevent you from
> being shot in the foot by _someone else_.
Not any more than strong typing of someone else's programs would.

> To enforce the use of one tool
> and forbid the use of the other tool is what I call the fascist
> approach.
Glad to read you write that, for that's precisely what low-level systems do:
they enforce low-level communication,
and make any attempt at *securely* using higher-level protocols impossible.

>> You wouldn't imagine requiring of any project that has been
>> written in LISP, Perl, Erlang, etc, that it had been written in C, would
>> you?
> You wouldn't imagine requiring of any project that has been written in C
> that it had been written in Prolog or Lisp, would you?
For most of them, I sure would.
I'm sure, for instance, that Tom Christiansen's project
to rebuild UNIX utilities in Perl will lead to code that's
much more reliable, maintainable, shorter, and even faster
(if called as modules, short-circuiting low-level process barriers)
than their C equivalents.
Also, if strongly-typed languages were used more widely,
buffer overruns and most other known security risks would be just
a memori in the maindz ov ould doderez.

> Yes. A uK depends on the fact that you handle the high level stuff
> yourself using your tool of choice. I guess it won't be C.
No, the µK forces its inefficient, insecure low-level architecture upon me.
It voids the advantages of high-level tools in terms of
speed, reliability, maintainability, and portability.

[ "Far�" | VN: Уng-V� B�n | Join the TUNES project!   http://www.tunes.org/  ]
[ FR: Fran�ois-Ren� Rideau | TUNES is a Useful, Nevertheless Expedient System ]
[ Reflection&Cybernethics  | Project for  a Free Reflective  Computing System ]
I have more, not fewer, problems with free software than with proprietary
software! Because the few problems I have with proprietary software are all
showstoppers; with free software, they get solved, until the next one shows up.

------------------------------

From: "Jon Wiest" <[EMAIL PROTECTED]>
Crossposted-To: 
comp.os.linux,comp.os.linux.admin,comp.os.linux.networking,comp.os.linux.questions
Subject: Re: Can NT with NTFS coexist with RedHat Linux
Date: Mon, 1 Mar 1999 19:30:11 -0600


Michel Catudal wrote in message <[EMAIL PROTECTED]>...
>This is nonsense. When I compile the kernel I have the option to install
>NTFS support. I have RedHat 5.2 with kernel 2.2.2 and I can read my NTFS
>partition without any problem.


Nonsense?  RedHat 5.2 does not ship a 2.2 kernel by default; it ships a
2.0.x kernel.  NTFS support was added in 2.2.  Perhaps he didn't download
the latest kernel.  Who would?  It's pretty buggy.
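
For what it's worth, once you do have a 2.2 kernel built with NTFS support
(CONFIG_NTFS_FS), reading the partition is just a read-only mount, something
along these lines (the device name is only an example):

    mount -t ntfs -o ro /dev/hda1 /mnt/nt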

Jon




------------------------------

From: [EMAIL PROTECTED] (Monte Westlund)
Subject: C++ cross compiler
Date: Mon, 01 Mar 1999 20:12:47 GMT

Hello,
I have been given the task of finding a C++ compiler that will:

Run on DOS or Windows
Compile programs that will run on a Linux machine

We need to re-compile a couple of DOS exe's so they will run on a
Linux machine used as a server by our ISP.

Source code is C++

We don't have any Linux machines at our company.

I've been searching several ng's and faq's and such.

Thanks in advance,
Monte

------------------------------

From: "Chan, Siu-Kei" <[EMAIL PROTECTED]>
Crossposted-To: 
aus.computers.linux,comp.os.linux.hardware,comp.os.linux.setup,comp.os.linux.x,comp.windows.x,uk.comp.os.linux
Subject: S.u.S.E. 5.3 and Matrox Millenium G200 AGP
Date: Tue, 2 Mar 1999 10:26:02 +0800

I have just upgraded my VGA card to a Matrox Millennium G200 AGP (8MB SGRAM
version). I have S.u.S.E. 5.3 installed and would like to know what I need to
upgrade. Do I have to download XFree86 3.3.3.1 or something else? Please tell
me what I need to download and what I should do to upgrade. It would be great
if anyone could describe the process step by step.

Please reply both to the newsgroups and to me by e-mail. Thanks!!!

--
Chan, Siu-Kei
E-mail: [EMAIL PROTECTED]





------------------------------

From: Mircea <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.advocacy,comp.os.linux.hardware
Subject: Re: Pentium III Boycott and survey info
Date: Mon, 01 Mar 1999 16:31:57 -0500


Absolutely right. I remember, back in the eighties, a program in x86
assembler that I had found in some magazine and spent a whole afternoon
typing in. It switched the 286 into protected mode, printed a message on
the screen, and went back to real mode, all without any apparent reboot,
even though a CPU reset was required to switch from protected mode back to
real mode. It was just a demo of the "new" operating mode of the 286.
Maybe I can find it again in the piles of magazines I have!

MST


mlw wrote:
> Sorry, you are wrong. OS/2 1.x was a 16-bit protected mode operating
> system. There was an undocumented instruction that is not in most
> assemblers, but can be coded with db or emit. The instruction was put on
> the chip so test programs written by Intel could put the processor into
> protected mode and take it back out again.
> 
> It is this instruction that Microsoft used to enable its DOS box in OS/2
> 1.x. I'm pretty sure it is a protected instruction, so a program would
> have to be in an unprotected environment, such as Windows 9x or kernel
> space in NT. The problem with the instruction was that it clobbered some
> range of memory, I think 40H.
> 
> I have to remember where I have seen it. I bet Dr Dobbs has an old piece
> on it.
> 
> --
> Mohawk Software
> Windows 95, Windows NT, UNIX, Linux. Applications, drivers, support.
> Visit the Mohawk Software website: www.mohawksoft.com

------------------------------

Crossposted-To: comp.os.linux.advocacy,comp.os.linux.hardware
From: [EMAIL PROTECTED] (Anthony D. Tribelli)
Subject: Re: Pentium III Boycott and survey info
Date: Tue, 2 Mar 1999 02:33:09 GMT

mlw ([EMAIL PROTECTED]) wrote:
: Anthony D. Tribelli wrote:

: > I don't think there was a reset instruction, documented or otherwise. I
: > believe the keyboard microcontroller was asked to reset the main CPU, and
: > BIOS could recognize a cold or warm boot and possibly jump to a location
: > specified in RAM (to resume where things left off rather than FFFF:FFF0).
: > To expand on your brief mention of 'kernel space', a protected mode OS
: > (WinNT and Linux, maybe Win9x) can prevent user programs from doing this
: > sort of thing.
:
: Sorry, you are wrong. OS/2 1.x was a 16-bit protected mode operating
: system ...

Read up on the I/O permission bitmap. A protected mode OS can prevent a
user program from accessing particular I/O ports if it wants to (OS/2 and
Win9x may choose not to do so for compatibility reasons). 

: ... There was an undocumented instruction that is not in most
: assemblers, but can be coded with db or emit. The instruction was put on
: the chip so test programs written by Intel could put the processor into
: protected mode and take it back out again.

This sounds like someone misunderstood an improved reset method which
involved I/O ports (again preventable if the OS chooses). This was
supposedly an alternative to the much slower keyboard reset, which OS/2
did use. The faster method was not universally available.

I think yet another method involved causing multiple processor faults, and
again an end user program could not do this.

: It is this instruction that Microsoft used to enable its DOS box in OS/2
: 1.x. I'm pretty sure it is a protected instruction, so a program would
: have to be in an unprotected environment, such as Windows 9x or kernel
: space in NT. The problem with the instruction was that it clobbered some
: range of memory, I think 40H.

I'm still highly skeptical of such an instruction existing. I'd love to
see a URL; I suspect the info got 'mutated' as it passed from one person to
the next.

Tony
-- 
==================
Tony Tribelli
[EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: alt.linux
Subject: StarDivision StarOffice Comments?
Date: Tue, 02 Mar 1999 01:35:01 GMT

Hi all,

1) Is there any Linux distribution that ships the StarOffice Personal Edition
office package with it?

2) Are there any other office suites that run on Linux, and how do they
compare with StarOffice?

3) Is it possible to read/import/export MS Word, Excel and WordPerfect files
with StarOffice?

BOB

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.misc) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Misc Digest
******************************
