Linux-Advocacy Digest #565, Volume #25            Wed, 8 Mar 00 21:13:06 EST

Contents:
  Re: A little advocacy.. (Terry Porter)
  Re: BSD & Linux (Bill Woodford)
  Re: BSD & Linux (Marc Espie)
  Re: Linux Demo Day a letdown (Terry Porter)
  Re: Microsoft migrates Hotmail to W2K (Steve Mading)
  Re: Disproving the lies. (R.E.Ballard ( Rex Ballard ))

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Terry Porter)
Subject: Re: A little advocacy..
Reply-To: No-Spam
Date: 9 Mar 2000 08:51:11 +0800

On Wed, 08 Mar 2000 18:07:25 GMT,
 George Richard Russell <[EMAIL PROTECTED]> wrote:
>On 7 Mar 2000 20:53:13 GMT, Donovan Rebbechi <[EMAIL PROTECTED]> wrote:
>>>Well, I know a bit about desktop Linux, and where and why it fails.
>>
>>YMMV. It doesn't "fail" on my desktop. I can see where usability 
>>improvements could be made, but that hardly alters the fact that "Drestin"'s
>>post was pure cr*p,
Gee folks, *what* a surprise ;-)

>> and frankly, I'm surprised to see you bending it 
>>and attempting to salvage his arguments ( by restating them in a 
>>reverse-strawman manner ) to make them look half credible.
>
>If the usability problems and support gaps are big enough for him to notice 
>them, then Linux needs fixing,
Therefore let him fix his problem.

> and the more people that know it the more it'll
>annoy someone enough for them to fix it.
Only if they *need* it.
 
>
>i.e.
>
>Specialised niche software - say Taxes, Accounting, boring stuff with little 
>kudos to developers.
Free software doesn't work this way; it gets written by authors who want it
for *themselves*.

There is CBB for taxes.

>
>Drivers for boring hw - printers, scanners, PDA's not the Pilot (i.e. Psion),
>stuff thats done well enough to say eg want Linux + PDA? Use a Pilot - despite
>a Psion being more popular and arguably better here.
There's lots of stuff for the Pilot; if a Linux hacker thinks the Psion is
better and can get the info they need, then you will see it.

>
>Some consideration for Usability - it's easier to add fonts to GEM than to
>X11 - and yes, I have read the howto, and think that end users must be equated
>with cockroaches by the writers of most Unix software. 
I disagree.

>
>Novice documentation is always nice to have.
There is a lot if you are prepared to search, and there are literally
*thousands* of Free Software authors who will help, if you ask politely.


Kind Regards
Terry
--
**** To reach me, use [EMAIL PROTECTED]  ****
   My Desktop is powered by GNU/Linux, and has been   
 up 1 day 17 hours 36 minutes
** homepage http://www.odyssey.apana.org.au/~tjporter **

------------------------------

Crossposted-To: 
comp.unix.bsd.386bsd.misc,comp.unix.bsd.freebsd.misc,comp.unix.bsd.misc,comp.unix.bsd.netbsd.misc,comp.unix.bsd.openbsd.misc
Subject: Re: BSD & Linux
Reply-To: [EMAIL PROTECTED]
From: [EMAIL PROTECTED] (Bill Woodford)
Date: Thu, 09 Mar 2000 00:55:43 GMT

In article <8a6pvs$svr$[EMAIL PROTECTED]>,
Marc Espie <[EMAIL PROTECTED]> wrote:
>In article <8a6oan$2aah$[EMAIL PROTECTED]>, 5X3 <[EMAIL PROTECTED]> wrote:
>>In comp.os.linux.advocacy Noah Roberts <[EMAIL PROTECTED]> wrote:
>>
>>> And, speaking of FreeBSD, how does it fare in compatability with Linux? 
>>> Can the two live together on the same drive with Win98?  NetBSD destroyed
>>> all the partitions I had.  I can't say I am impressed with the *BSDs so
>>> far....
>>
>>You've missed the point of FreeBSD entirely... It's not a "hobby" OS or 
>>really a "workstation" OS; it's for ridiculously long uptimes and major
>>load handling.  I have no idea why anyone would want to put it on the
>>same hard drive as any other operating system, especially now that 
>>physical drives are so cheap.
>
>Oh boy, I've got news for you.
>
>There's this exciting new technology that's been out there for a few years
>now. It's called a laptop. It usually comes with one single hard drive, and
>it's pretty expensive to connect more.
>
>Of course, you can always buy one laptop per OS.
>Want to buy me a second laptop, so that I can stop having Linux, OpenBSD,
>Windows all on the same hard-drive ?

I think the point he was making is if you're _using_ FreeBSD, then having
it on a multiboot machine is rather pointless.  If you're just playing with
it, then you're just playing with it, and should be prepared for the
complexities of running multiple OSes on a single disk (or disks).  You
think FreeBSD can mess things up?  Try re-installing winblows on your
multi-OS machine, and check out what that does to your MBR.... and there's
no patch for that.

--
Bill Woodford
[EMAIL PROTECTED]

------------------------------

From: [EMAIL PROTECTED] (Marc Espie)
Crossposted-To: 
comp.unix.bsd.386bsd.misc,comp.unix.bsd.freebsd.misc,comp.unix.bsd.misc,comp.unix.bsd.netbsd.misc,comp.unix.bsd.openbsd.misc
Subject: Re: BSD & Linux
Date: 9 Mar 2000 01:02:02 GMT

In article <jECx4.7778$[EMAIL PROTECTED]>,
Bill Woodford <[EMAIL PROTECTED]> wrote:
>I think the point he was making is if you're _using_ FreeBSD, then having
>it on a multiboot machine is rather pointless.  If you're just playing with
>it, then you're just playing with it, and should be prepared for the
>complexities of running multiple OSes on a single disk (or disks).  You
>think FreeBSD can mess things up?  Try re-installing winblows on your
>multi-OS machine, and check out what that does to your MBR.... and there's
>no patch for that.

No, it's not pointless at all.

I can't speak for FreeBSD proper, but having Open, Linux, and Windows on the
same machine is fairly trivial for me.

I dare say I _use_ OpenBSD on that machine, every day. 

I also dare say I need to have Linux around, for teaching purposes (yeah, I
use my laptop in front of students), and the Windows partition is a hell of
a lot more efficient than Wine for playing games, sometimes at night.

Why should *BSD be server-only?
This is just rubbish; OpenBSD makes a really good desktop, as far as I'm
concerned.

I don't see why I should have to buy two more laptops to have Linux and
Windows around if I want to.
-- 
        Marc Espie              
|anime, sf, juggling, unicycle, acrobatics, comics...
|AmigaOS, OpenBSD, C++, perl, Icon, PostScript...
| `real programmers don't die, they just get out of beta'

------------------------------

From: [EMAIL PROTECTED] (Terry Porter)
Subject: Re: Linux Demo Day a letdown
Reply-To: No-Spam
Date: 9 Mar 2000 09:12:35 +0800

On Tue, 07 Mar 2000 18:10:19 GMT,
 George Richard Russell <[EMAIL PROTECTED]> wrote:
>On 3 Mar 2000 20:34:39 +0800, Terry Porter <[EMAIL PROTECTED]> wrote:
>[minor snippage]
>>On Fri, 03 Mar 2000 06:04:53 GMT, Gooba <[EMAIL PROTECTED]> wrote:
>>As you're a previously unseen, anonymous poster, I urge any readers to consider
>>the fact that you have no established credibility at all, "Gooba".
>
>No one on cola does, bar those in the credits files of major OS components.
Nonsense George.

>
>>> That's all. Linux is
>>>notoriously choosy about what stuff is supported.
>>Bullshit.
>
>Replace Linux with XFree86, and it's true. It's also true for certain things 
>like printers using ghostscript, parallel port devices, anything not likely
>to be found on server class hardware.
No, it's not true; this is just a gross generalisation.

George, you're a troll, mate.

>
>>> Linux is competing with Windows.
>>No its not.
>
>In certain areas, any OS on compatible hardware are competing. You can only run
>one at a time, bar VMware or IBM's virtualisation on mainframes.
Therefore ...

>
>>> It may not
>>>be agressively competing with Windows, but it is competing. A giraffe and an
>>>elephant compete for the same waterhole, how often do you see them fight?
>>Linux and Windows aren't competing for anything.
>
>Mindshare, developer time, end users attention, third party vendor support,
>driver support, etc....
Green eggs and spam ....

>
>>> Having to code new drivers for yourself?
>>Sure
>>> Reverse engineer or
>>If needed, np.
>
>Want to write one for either my printer or scanner? It's only been several years
>and it's not been written yet.
No, George, I do not; *your* printer is your problem.

Besides, you Wintrolls only use the *latest* hardware, for which Linux may
not have a driver available (yet). Get a new printer.

>
>>>apply for licenses for every new piece of hardware?
>>No, Free Software does NOT do deals with proprietary information owners.
>>If you had more than a passing acquaintance with it, you'd know that.
>
>Xfree86, obfuscated video driver code. Go look for it.
You look for it.

>Samba, whose sole purpose is to interoperate with the above owners products.
>
>>> I think not, this is why
>>>Linux needs to compete, it needs a certain base number of users/developers
>>>to remain a viable, modern OS.
>>You're totally, completely 100% *incorrect*.
>
>What, If everyone walks away, Linux will remain perpetually up to date and 
>viable? Nup.  Other people could make it so, but people still need to do so.
This statement shows scant knowledge of Linux, George.

Linux developers are NOT walking away, we flocked to Linux in droves, as it 
GAVE us the tools we always wanted. Look at the new apps that appear on places
like Freshmeat every day! 

>
>Without developers, adaptation ceases. Without users, who will become 
Linux is not short of developers, I assure you, George.

>dissatisfied enough to change something?
Linux exists, it's NOT Windows; your model does not fit.

>>Hey that's your interpretation. Windows is about ***looks***; Free software is
>>about doing, not looks.
>
>Enlightenment + imlibs code vs its appearance? It looks nice on screen.
I don't use it, sorry.

>
>Windows is about money, free software about Idealism. 
Yes, I guess that's true, at least in part.

>Things get done for different reasons, still they get done.
And the sun still shines, George... your point?
>
>George Russell
>(Registered Linux User 61117)


Kind Regards
Terry
--
**** To reach me, use [EMAIL PROTECTED]  ****
   My Desktop is powered by GNU/Linux, and has been   
 up 1 day 17 hours 36 minutes
** homepage http://www.odyssey.apana.org.au/~tjporter **

------------------------------

From: Steve Mading <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Microsoft migrates Hotmail to W2K
Date: 9 Mar 2000 01:54:21 GMT

In comp.os.linux.advocacy Chad Myers <[EMAIL PROTECTED]> wrote:

: "Wolfgang Weisselberg" <[EMAIL PROTECTED]> wrote in message
: news:[EMAIL PROTECTED]...

:> As of today, W2K is as much C2 as Linux.  Untested.

: However, since Win2K is built primarily on NT, which is C2 certified,
: it's a lot more likely Win2K will be able to be certified without
: much modification.

When NT went to version 4.0, it lost the bragging rights
to C2 certification.  Only 3.51 was certified.  A system based on
4.0, which was in turn based on 3.51, is not likely to retain the
properties that made 3.51 certifiable.

: Also, Win2K meets all the base criteria for even being CONSIDERED
: to be tested, whereas Linux does not.



-- 
-- ------------------------------------------------------------------
 Steven L. Mading  at  BioMagResBank   (BMRB). UW-Madison           
 Programmer/Analyst/(acting SysAdmin)  mailto:[EMAIL PROTECTED] 
 B1108C, Biochem Addition / 433 Babcock Dr / Madison, WI 53706-1544 

------------------------------

From: R.E.Ballard ( Rex Ballard ) <[EMAIL PROTECTED]>
Crossposted-To: comp.os.ms-windows.nt.advocacy
Subject: Re: Disproving the lies.
Date: Thu, 09 Mar 2000 01:51:36 GMT

In article <[EMAIL PROTECTED]>,
  "Drestin Black" <[EMAIL PROTECTED]> wrote:
>
> "R.E.Ballard ( Rex Ballard )" <[EMAIL PROTECTED]> wrote in message
> news:8a54li$6et$[EMAIL PROTECTED]...
> > In article <[EMAIL PROTECTED]>,
> <snip>
>
> It's nice to see you *start off* trying to
> sound like you are going to give
> a fair rebuttal...
>
>  <quote>
> >    Provided that the NT/Windows 2000 environment is
> >         operated in a data-center-like manner
> >         with policies and procedures for change management,
> >         software updates, software
> >         distribution, backup/restore, and the like; and
> > </quote>
> >
> > Again, this reaffirms my statements that when you conduct
> > "standard maintenance" (which includes reboots at LEAST once
> > per week, preferably once/night) and you deduct this as "scheduled
> > down time" (therefore not included in availability ratings), you
> > can achieve availability of around 99.97%  (more on this below).
>
> No no no no no - NO weekly reboots. I do
> not tell anyone to do this, I do
> not know of anyone except uninformed/frightened
> idiot "admins" who do this.


> It is completely unnecessary to reboot on
> any schedule UNLESS you knowingly
> (and willingly?) installed an application with
> a memory/resource leak.

Memory leaks, resource leaks, MUTEX conflicts, full disks,
and DLL conflicts.

How often you reboot a specific box is based on the requirements
of the application and the organization responsible for deployment.
In some cases, you can just go to services and stop/start a few
services, but with the delicate DLL and OCX interdependencies,
even this could create more problems than it solves.

> But, who would let such a thing continue?

With some third party software, you don't have a choice.  They
released the software in 1998, issued a service pack in 1999, and
Microsoft broke it with SP6 or Win2K.  SP2 broke Cyrix chips.
In some cases, it has even been proven that the dysfunction
resulting from service packs was deliberate.  Eventually, the
third party software gets a replacement patch to recover from the
Microsoft patch.

Remember that 99.95% is an average.  There are some servers, especially
trivial servers such as simple read-only file servers or static
web-page servers that wouldn't need a reboot and wouldn't crash for
months.  In fact, cleaning up the log files, event logs, and rebooting
the system to clear up memory leaks is about all that is required.

Conversely, if you have SQL, ERP, CRM, SCM, and cash flow
management systems all running in the same server, you
can expect life to get exciting.  You might eventually get
such a system to stay up, but you should also expect
to reboot it at least once a day.  Ideally, you put each subsystem
on a dedicated server, cluster them redundantly, and replicate
every transaction.  Eventually, you might even have 20-30 SQL servers,
10-20 ERP servers, 20-30 CRM servers, 10-20 SCM servers, and 5-10
cash-flow servers.  Of course, by then you've blown any economy of
scale, and the economies available on UNIX-based systems become very
attractive.

It will be interesting to see how Win2k $/TPM-C actually translates
in the real world.

It will be even more interesting to see what Linux $/TPM-C looks like.
(has anyone published a "legal one" yet?).  The last unofficial one
I saw was $2/TPM-C on a 30,000 TPM system based on P-II/300s and Linux.
Unfortunately, it was not approved and not properly sponsored, and the
TPM review committee demanded that all references to these results
be removed.  They apparently didn't disagree with the results, only
held that the results couldn't be published without the permission of
the entire membership.  Since the membership included Oracle, Sybase,
IBM, HP, and Sun, there weren't many votes in favour of seeing these
results published.

> > <quote>
> >    Provided that, in heterogeneous environments, third-party
> >         or custom code is used to enhance system manageability
> >         and security, as well as to improve cross-operating
> >         environment directory services.
> > <quote>
> >
> > Put very simply, if you intend to serve anything other than trivial
> > web pages and intend to integrate to other systems - NT is NOT a
> > self-contained solution.  In fact, NT does not support critical
> > standards required for reliable integration with Minicomputers,
> > Mainframes, and SuperComputers used for truly "Mission Critical"
> > environments.
>
> Oh, really? Tell us, WHICH "critical standards?"

UNIX-compatible TCP/IP, Berkeley Sockets, RPC, NIS, NFS, DCE,
CORBA, MQSeries, X11, datastreams of ASCII text, kosher XML,
IRC and IRC-II, PVM, and MPI.  Not to mention programming
languages like Perl, ANSI-standard C++, Python, interactive
shells, cron, sed, grep, awk, lex, yacc, and COBOL.

Sure, you can spend a few grand and get all these goodies, but
they aren't part of the standard package.  Windows 2000 comes
with qbasic, vbscript, and XML/ActiveX.  Even the JVM is so
dependent on ActiveX and Microsoft-only APIs that it isn't useful
as an integration tool.

> I did not know NT was intended for SuperComputers?

But there is often a need to integrate NT and SuperComputers.
You have Enterprise 10K machines managing real-time information
flow and you want to display this to NT and Windows 9x users.
You have to add third party software to even have access to a
socket interface, and the socket API is completely different on
NT than it is on UNIX.  Finally, after you have a working
single-thread application - you have to add all the thread management
overhead to manage resources among all threads.

> > > NT is reliable and definately enterprise ready.
> > > W2K even much more so.

At this moment, the jury is still out on whether Windows 2000 is
even enterprise ready.  NT is useful for some of the canned
applications provided by Microsoft.  If you can get a handle on
domain management, mail domains, and inter-domain communications
and resource management, you might even be able to keep the mail
and file servers working.  Many of the really large enterprises
have simply given up and put everything in a single domain and use
UNIX gateways to pass SMTP mail back and forth.

Most NT workstation users need Administrator privileges on their own
machine just to keep it useful.  Even user-installed applications
require access to system resources.

> >
> > <quote>
> > Table 1
> > Early Adopter Reliability Statistics on Windows 2000
> >
> > Customer #   Run Time (years)  Down Time   Availability
> >                                             Percentage
> >    1           0.45              0.10          99.94
> >    2           4.18              0.84          99.95
> >    3           0.09              0.01          99.96
> >    4           0.30              0.00         100.00
> >    5           0.65              0.09          99.96
> >    6           0.84              0.07          99.98
> >    7           0.72              0.01         100.00
> >    8           0.81              0.41          99.86
> >    9           0.39              0.13          99.91
> > Totals:        8.42              1.65          99.95
> >
> > Source: Microsoft Corporation, February 2000
> > </quote>
>
> Ahhh... and this is where the creative math comes into play...
>
> >
> > Let's see what 99.95% availability really means.  There are
> > 7 days/week, 24 hours/day, and 60 minutes/hour, or 10,080 minutes
> > in a week.  That means that you can expect an average of 5 minutes of
> > unscheduled down-time per week.  Furthermore, the highest
> > likelihood of a failure is during peak-hour load, during
> > the most critical period.  Often, secondary numbers giving
> > 90% uptime on a 12x6 basis (Mon-Sat 7am-7pm) tend
> > to indicate that most failures happen during "prime-time".
>
> I'm going to do a big snip here because, yes,
> 99.95% availability OVER A WEEK
> would result in 5 minutes.
> However, these are NOT weekly figures.

And I said this would be an average of 5 minutes a week.  Some servers
may not crash for years.  Others could be down for 8 hours on the
most important day of the year.
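The averaging arithmetic is easy to check. A quick sketch (Python; the
99.95% figure is the one from Microsoft's table quoted above):

```python
# Average downtime implied by an availability percentage, checked for
# both a weekly and a yearly accounting period.
MIN_PER_WEEK = 7 * 24 * 60           # 10,080 minutes
MIN_PER_YEAR = 365.25 * 24 * 60      # ~525,960 minutes

def downtime_minutes(availability_pct, period_minutes):
    """Expected unscheduled downtime per period, on average."""
    return (1.0 - availability_pct / 100.0) * period_minutes

print(round(downtime_minutes(99.95, MIN_PER_WEEK), 2))  # ~5 minutes/week
print(round(downtime_minutes(99.95, MIN_PER_YEAR)))     # ~263 minutes/year
```

The same 99.95% works out to about 5 minutes in an average week, or
roughly four and a half hours over a year; the average says nothing
about when those minutes land.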

> These are YEARLY figures. I'll repeat. YEARLY figures.

Correct - this means that NT is unexpectedly down for about 4 hours
and 23 minutes sometime during the year.  This could be an hour per
quarter (just as the SEC filings are due), 20 minutes/month (during
the end-of-month processing), or an hour a day from 4-5 on December
21, 22, 23, 24 - making Christmas a real treat for the IT department.

When I worked on directory assistance systems, we knew that at least
one component (processor, hard drive, communications controller,...)
would fail on Mother's Day.  We used as many as 8 redundant processors,
3 redundant databases, and 16 redundant communications controllers to
make sure that even if multiple components hit the "1 microsecond
window" that the system would continue to function while the component
was recovering.  We designed this technology into UNIX and eventually
published it, including RPC implementations.  The PVM and MPI models
can easily be structured for such redundant clusters, as can MQSeries.

> You CANNOT simply switch the meaning of a figure at your convenience.

I was merely trying to relate uptime in human terms.  AT&T, the
originator of UNIX, measures down-time in parts per million.  One
part per million means that the system is down for an average of
about 30 seconds per year.  Their minimum acceptable rate for most
systems is 5 parts per million.  I've worked on systems that delivered
3 parts per million.  Represented as a percentage, this would be
99.9997% uptime.
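Treating "parts per million" as a fraction of total time, the conversion
is a one-liner; a sketch under that assumption:

```python
# Convert downtime in parts-per-million (of total time) to an uptime
# percentage and to seconds of downtime per year.
SEC_PER_YEAR = 365.25 * 24 * 3600    # ~31.56 million seconds

def ppm_to_uptime_pct(ppm):
    """Uptime percentage implied by a downtime rate in ppm."""
    return 100.0 * (1.0 - ppm / 1_000_000)

def ppm_downtime_sec_per_year(ppm):
    """Seconds of downtime per year implied by a rate in ppm."""
    return ppm / 1_000_000 * SEC_PER_YEAR

print(ppm_to_uptime_pct(3))                 # ~99.9997
print(round(ppm_downtime_sec_per_year(5)))  # ~158 seconds/year
```

So 3 ppm corresponds to the 99.9997% figure above, and even the 5 ppm
minimum allows only a couple of minutes of downtime per year.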

> You cannot simply say, well, this is a year figure but I'll
> just change it into a weekly figure cause it sounds
> better for me.

True.  I've given a few other figures above.  Unfortunately, uptime
is a function of Mean Time Between Failures and Mean Time To Repair.
Neither of these critical numbers was given.  Perhaps because in
such a small sample, the MTBF would have varied widely, as would the
mean time to repair.  Given that it takes 3 minutes to reboot an NT
server, one could assume that the minimum MTBF would be about 4 days.
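Availability ties the two numbers together as availability = MTBF / (MTBF
+ MTTR); a quick sketch of the 3-minute-reboot estimate:

```python
# MTBF implied by an availability figure, given a fixed repair time.
# From availability = MTBF / (MTBF + MTTR):
#   MTBF = MTTR * A / (1 - A)
def implied_mtbf_minutes(availability_pct, mttr_minutes):
    a = availability_pct / 100.0
    return mttr_minutes * a / (1.0 - a)

mtbf = implied_mtbf_minutes(99.95, 3)   # ~5997 minutes
print(round(mtbf / (24 * 60), 1))       # ~4.2 days between reboots
```

With a 3-minute repair time, 99.95% availability indeed implies a failure
roughly every 4 days, which is the minimum-MTBF figure above.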

> Rex - this is where you do everyone a disservice.

> When they say 99.95 (but
> you didn't mention the 99.99 or 100% figures, did you?)
> over a year, they
> mean cumulative for the whole year.

Note also that one of the servers measured appeared to have
run without an unscheduled outage for 4 months.  I've never
had more than 30 days (after 30 days I lose my DHCP address).
I'm curious what the 100% machine was doing.

Most NT workstation users have MTBFs of 5-7 days.  In systems
where there is less than 128 meg, this failure may be a BSOD.
In larger systems, it's usually a lock-up or Trap 0E in the
MSVCRT or MSVCRT40 DLLs.  Part of the problem is that some
applications don't work well with newer versions.  Some of
the VBRUN DLLs conflict as well.

Compared to Windows 95 or Windows 98, Windows NT is exceptionally
reliable.  For anyone who has never used anything other than
a Microsoft Windows Operating System, NT 4.0 is the flagship
system.  I'm sure that once Microsoft resolves the 64,000
incompatibilities with third party software, that Microsoft-only
users will consider Windows 2000 the finest operating system
Microsoft has ever produced.

I, on the other hand, have been spoiled by years of UNIX and Linux,
where the worst that would happen is that I would have to put a ulimit
in place to keep an application with a memory leak from filling up
the swap space.
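For instance, a leak-prone process can be started under a virtual-memory
cap, so it gets allocation failures instead of exhausting swap. A minimal
sketch using the bash builtin (the limit is in kilobytes; the 512 MB
figure is just an example):

```shell
# Run a leaky program under a ~512 MB virtual-memory ceiling.
# The subshell keeps the limit from affecting the login shell.
(
  ulimit -v 524288   # kilobytes; allocations fail beyond this point
  ulimit -v          # confirm the limit now in effect
)
```

Anything started inside the parentheses inherits the cap; the rest of the
session is untouched.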

> that means that it's VERY possible for
> there to be 100.000% uptime for 11 months solid
> and then during a single
> month there was a single extended downtime

Yup, if the MTBF is 12 months, the MTTR would be about 260 minutes.
Since the MTBF wasn't given, one can only guess at what the MTTR was.

I have information from other sources - a client I worked for in
1997 and 1998, who had 2700 NT servers and 300 UNIX servers.  I
got the outage reports from both systems and did the comparisons.

The MTBF of NT 4.0 with SP3 was about 7 days.  Furthermore, the
MTBF of file servers was more like one month.  The MTBF of servers
running 3rd party applications was more like 5 days.  The Uptime of
400 NT servers running Lotus Notes was 97% with MTBF average of 5 days.
Fortunately, there were replications and other safeguards which made
the downtime annoying, but you didn't lose your mail.

> (perhaps for maintenance and
> upgrades and a major software upgrade/reconfiguration).

No - scheduled maintenance is NEVER included in uptime calculations.
The MTBF only includes unscheduled system failures.

> You simply CANNOT
> start spouting 5 min/week when that is
> NOT AT ALL what they said, claimed,
> or implied.

You are correct.  When Microsoft or Aberdeen publishes a proper
uptime report stating MTBF, MTTR, and on a substantial number of
NT servers running for a substantial period of time, I'll count
on you to make it available - I hope you will e-mail it to me.

What I find interesting is that even though Microsoft has had
NT available for over 3 years, and even though they have sites
that supposedly run hundreds of servers, they (or you) chose to
quote a report from a survey which was apparently done when
NT was less than one year old, based on individual servers.

It really isn't that hard to get uptime calculations for NT.  You
enable "send event to event log" on failure, look for 6005
messages (boot messages), and exclude those occurring during scheduled
maintenance.
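That bookkeeping can be sketched in a few lines. This assumes an exported
log of (timestamp, event ID) rows - a hypothetical export format, not any
real tool's output - with event ID 6005 marking each boot, as described
above:

```python
# Sketch: count unscheduled reboots from an exported event log.
# The (iso_timestamp, event_id) row layout is a made-up export format;
# event ID 6005 is the "Event Log service started" boot message.
from datetime import datetime

def unscheduled_boots(rows, maintenance_windows):
    """rows: (iso_timestamp, event_id) pairs; windows: (start, end) pairs."""
    count = 0
    for ts, event_id in rows:
        if int(event_id) != 6005:
            continue                      # not a boot message
        t = datetime.fromisoformat(ts)
        if not any(start <= t <= end for start, end in maintenance_windows):
            count += 1                    # boot outside any scheduled window
    return count

rows = [("2000-03-05T03:00:00", "6005"),  # boot inside the Sunday window
        ("2000-03-08T14:12:00", "6005")]  # mid-afternoon restart: a failure
windows = [(datetime(2000, 3, 5, 2), datetime(2000, 3, 5, 4))]
print(unscheduled_boots(rows, windows))   # 1
```

Pair each unscheduled boot with the timestamp of the last event before it
and you have the outage intervals needed for MTBF and MTTR.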

> <snip>
>
> Nice for you to forget/miss the two 100% uptime figures.

I simply gave the average given by Microsoft.  It's their chart.
One was 3 months with 0 failures.  This result is suspect - what
did they consider a failure?  The other was for 8 months, and was
just a rounding issue.  Microsoft calculated the 99.95 total on 8
server-years.  My calculations on 6000 server-years were more like
99.97% uptime.  My calculations for the UNIX systems (Solaris and
AIX) on 800 server-years were 99.9994% uptime.  In both cases,
downtime excludes scheduled maintenance.

> AND, let's not
> forget that these downtimes are not necessarily crashes,

Downtimes measure mean time between FAILURES and mean time to REPAIR
these failures.  This could be either that a machine has completely
crashed (Blue Screen of Death) or simply fails to provide availability
metrics (SNMP packets, won't ping, web server down).  Generally, this
specifically means that the primary function is no longer available.

> and it doesn't say crash anywhere.

> Assuming these are controlled down times (forced or
> voluntary) - can we not assume that these admins
> would not pick the middle
> of the busiest day to do these things?

These wouldn't be counted.  True, there are Linux users at the
Linux Counter site who claim to have systems that run for years
without rebooting, but even these systems have their "2 second
maintenance" scheduled by cron.  This usually includes rotating
the syslogs, clearing temp space, syncing the hard drives, and
all that other housekeeping.

Of course, installing new Linux software rarely requires a full system
reboot, but even so, this doesn't count when measuring uptime.

[remaining rant deleted :-)]

> NT: more active users than linux

In the US? or Worldwide?  As I understand it, in India, Japan, China,
South America, Australia, and Europe, Linux is more popular than
NT.  At one point, Japanese sales of Linux exceeded those of the Windows 98
upgrade.  At another point, French sales of Linux exceeded those of the
Windows 98 upgrade.



--
Rex Ballard - Open Source Advocate, Internet
I/T Architect, MIS Director
http://www.open4success.com
Linux - 60 million satisfied users worldwide
and growing at over 1%/week!


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and comp.os.linux.advocacy) via:

    Internet: [EMAIL PROTECTED]

Linux may be obtained via one of these FTP sites:
    ftp.funet.fi                                pub/Linux
    tsx-11.mit.edu                              pub/linux
    sunsite.unc.edu                             pub/Linux

End of Linux-Advocacy Digest
******************************
