Linux-Misc Digest #625, Volume #21 Wed, 1 Sep 99 02:13:07 EDT
Contents:
Linux hangs (Ananda Rao Ladi)
RE: Trouble with make dep ("Jason Wiseman")
Re: The optimization debate (was: why not C++?) (Stephen E. Halpin)
Re: This is probably a simple one :) (Darin Johnson)
Re: The Microsoft/Linux Conspiracy (John Hasler)
Re: POP3 server. ("--==[bolMyn]==--")
SENDMAIL from my Windowsbox thru Linuxbox... ("MoPar")
Concerning HP 1200C/PS DeskJet on Linux ("Dan Taylor of Grand Rapids, MI")
Re: multiple consoles on multiple monitors (jwk)
Re: POP3 server. (Anton)
Re: Add Linux to OS/2 boot manager (Hann Wei Toh)
Re: PDFs for Linux (Cameron L. Spitzer)
Re: Console Lockup
Re: help with .deb (M van Oosterhout)
Re: dual boot win 98 (rob)
Re: Help installing new kernel (Sitaram Chamarty)
Re: superdisk and linux? (Sitaram Chamarty)
File change monitor for Unix? (Lucius Chiaraviglio)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED] (Ananda Rao Ladi)
Subject: Linux hangs
Date: 1 Sep 1999 04:08:11 GMT
While running some of our programs, the Linux box seems to hang.
Based on some user activity, we allocate chunks of memory (~12000 bytes each)
in the kernel to collect some data. This memory is freed in parallel by another
process.
When we start the tests, these two activities go on fine for some time.
We have verified that at any time there are three allocations still to be
freed. Also, the freed chunks are getting re-used. We are using
kmalloc() and kfree().
When the system hangs, it doesn't even respond to the keyboard. The only option
we have is to press the reset button. We don't know how to debug this.
Any clues? You may send me mail at [EMAIL PROTECTED]
TIA
Anand
------------------------------
From: "Jason Wiseman" <[EMAIL PROTECTED]>
Subject: RE: Trouble with make dep
Date: Tue, 31 Aug 1999 22:58:21 -0500
Reply-To: <[EMAIL PROTECTED]>
Thanks! That did it.
-jwiseman
> -----Original Message-----
> From: Peter Caffin [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, August 31, 1999 3:01 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Trouble with make dep
>
>
> >
> [EMAIL PROTECTED] wrote:
> > I have a fresh install of the 2.2.5 kernel and am trying to compile the
> > kernel (for ftape). Anyhow, when I run make dep I get the following
> > error:
>
> > make[4]: Entering directory `/usr/src/linux-2.2.5/drivers/fc4'
> > Makefile:10: ../../.config: No such file or directory
>
> You've deleted your kernel configuration file. You'll have to run
> `make menuconfig` again. Then:
>
> make depend ; make clean ; make bzlilo
> make modules; make modules_install
>
> Hope that helps.
>
> --: _ _ _ _
> _oo__ |_|_ |__ _ | _ |_|_o _ peter at ptcc dot it dot net dot au |
> //`'\_ | (/_|(/_| |_(_|| | || | http://it.net.au/~pc |
> / PO Box 869, Hillarys WA 6923, AUSTRALIA |
>
>
>
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: [EMAIL PROTECTED] (Stephen E. Halpin)
Crossposted-To: comp.os.linux.development.apps,comp.os.linux.development.system
Subject: Re: The optimization debate (was: why not C++?)
Date: Wed, 01 Sep 1999 04:19:11 GMT
On Tue, 31 Aug 1999 02:18:04 GMT, [EMAIL PROTECTED] (Christopher Browne) wrote:
>On Sun, 29 Aug 1999 22:18:32 GMT, Stephen E. Halpin <[EMAIL PROTECTED]> wrote:
>>On 28 Aug 1999 18:07:27 GMT, [EMAIL PROTECTED] (Christopher Browne) wrote:
>>>On 27 Aug 1999 12:03:05 +0100, Paul Flinders
>>><[EMAIL PROTECTED]> wrote:
>>>>[EMAIL PROTECTED] (Paul D. Smith) writes:
>>>>> On a micro level, I believe the best way is to write the code the most
>>>>> straightforward way possible first, _then_ when it all works, come back
>>>>> and see where you can tweak it to be faster. Remember, slower, working
>>>>> code is always better than faster, broken code.
>>>>
>>>>That's true unless the very slowness _is_ the breakage.
>>>>
>>>>Also, with more multimedia stuff, fast but not completely accurate can be
>>>>preferable to slow but complete (i.e. would you prefer speech recognition
>>>>which ran in real time but which occasionally got it wrong to speech
>>>>recognition which took 5 minutes to recognise 10 seconds of speech but always
>>>>got it right?).
>>>
>>>Slower, working code that reads in a straightforward manner, which
>>>thereby makes it easy to make code transformations, is easier to
>>>optimize than slightly faster code where bits have been tuned, and
>>>thereby are virtually non-modifiable.
>>
>>I think you missed the point of the previous poster. He is not talking
>>about "slightly faster code"; he is talking about a 30:1 difference using
>>heuristics to produce results which are probabilistically correct. You
>>can't do it everywhere, but where you can do it, it's a win. Another
>>example of what the previous poster was getting at is lossy compression,
>>where MPEG can produce far better compression ratios for audio and video
>>than lossless compression schemes can, and it's an acceptable tradeoff
>>in many cases. On the other hand, you wouldn't store your company's
>>financial data using a lossy compression algorithm.
>
>The examples of "faster code" that have been quoted so far have
>amounted to:
>
>"reorder the loop so that it decrements rather than incrementing."
>
>That's not a fundamental algorithmic change that would change the
>order of complexity of the program.
You obviously didn't read either of our posts. Speech recognition
and lossy compression algorithms are based on heuristics, where the
results aren't always "correct." Neither of these has anything at
all to do with the direction of a particular predictable loop.
>My contention is that there is little merit to looking for those
>little reorderings; better to write straightforward code, that will be
>easier to mutate when analysis is done to find bottlenecks.
>
>It absolutely is the 30:1 improvements, or 300:1 improvements, that
>are the primary optimization goal.
>
>By trying to get the 10% improvements early in the game, that is
>liable to make it harder to go after the bigger gains that can come
>later once the full scope of the system is known.
As the previous poster noted, "that's true unless the very
slowness _is_ the breakage." When you are doing signal
processing and you get 10 instructions per sample, and you
have no hardware buffering, a 10% improvement is the difference
between working and not working. FFTs, FIR filters, and any
other routine that ends up in a library can benefit from
optimization before the full scope of a system is known. This
doesn't mean that all optimizations can occur early in a program,
nor does it mean that all optimizations can be localized to
single components.
> "Optimization hinders evolution." -- Alan Perlis
"Evolution depends on optimization." -- Steve Halpin
The nature of optimization is to make something more
effective, or more useful. This includes not only
optimizing for time or space, but for generality,
adaptation or growth. The nature of evolution is
development, and the nature of development is to make
something more effective. Optimization is therefore
a fundamental component of the process of evolution.
>Once optimized once, it's harder to optimize the code again.
If this weren't true, all processes would converge to O(1),
and all data sets would compress to a single bit. Ultimately
there is a minimum amount of work that has to be done to
perform every task, and you can't optimize a process beyond
that minimum. It's not unusual or unexpected for this bound
to be approached asymptotically. Outside a particular context,
you can't generalize this to imply when any optimization may be
made to have the best chance of achieving that minimum, and
indeed, poor system design can prevent later optimizations
from being effective without redesigning the whole system.
>>More to the point you make, there are times when slightly faster is
>>the difference between working and not working in a hard real-time
>>system, and you have to live with it. It's all part of the tradeoffs
>>that an engineer must make. If you haven't seen the code for a box
>>where 100% of code and data memory are used and it runs hard real-time
>>at >98% utilization, it's quite a sight :->
>
>Some years ago, I worked on a financial system where they "optimized
>too early." (A prototype that was a "6 month stopgap fix" stayed in
>production for 6 years...)
>
>The bank wanted us to mutate it into something new, when what was
>truly necessary was to throw out the whole system and rearchitect
>something new *from scratch.*
>
>The notable problem with it was the date format; YYMMDD was mandated,
>and when the system was managing pensions, there were ample
>opportunities for *BAD* "Y2K" errors what with there being a few
>people left with birthdates in the 1800's, and virtually everyone over
>70 years of age.
>
>There were a couple of "windowing" algorithms in use; once 1994 came
>along, 6 year bond certificates started encountering calculation
>problems, and when there is a billion dollars being managed, this is
>just a bit of a problem...
I guess the question is "did these changes make the system more
effective or more useful?" If not, the >system< was not optimized.
On the other hand, if the process of developing the solution twice
allowed the bank to keep itself on line when it otherwise would not
have been, and the cost of the two developments was less than the
cost of not having the first system online for that 6 month stopgap
period, the process was optimized. "Effective" and "useful" have
to occur in some context, and optimization must be evaluated in that
context.
>The system had been tuned up several times by the time I got to it; it
>was nigh unto impossible to modify. The system was retired in 1994;
>replaced by something new(er).
>
>There were lots of cases in this system of reports that ran for many
>hours due to less-than-intelligent selections of database indices;
>very often there were opportunities for *HUGE* performance
>improvements when we changed a few loops to pick more appropriate
>indices.
Some of these problems can be seen as far back as the design phase
of a project. If you had to scan a 512MB character array on a machine
with 16MB of memory and a 4K page size, you wouldn't scan every 4096th
character from 0, then every 4096th character from 1, etc. You would
scan through the array linearly to read the page set once instead of
4096 times. The same goes for scanning memory arrays which are far
larger than the processor cache. On the other hand, if the traversal
of the data was based on some external source, one would have to analyze
the behavior of that source before being able to determine how to store
and access the data effectively. If that analysis could only occur after
the system was built, then indeed your point is valid. Various scheduling
algorithms, from processes to disk access to networking, can fall into this
category.
>That was easy enough in cases where report structures had not been
>tuned too much; it proved more difficult when previous programmers had
>started optimizing too early.
From what you present, the problem wasn't when to optimize but
what to optimize. Indeed, one would want to use the principles
of optimization from the beginning of a project as requirements
are first defined, as some tradeoffs must be made that far back
in a program's life cycle. It really is more complicated than
"just do it at the end of the development cycle."
>--
>"Optimization hinders evolution." -- Alan Perlis
>[EMAIL PROTECTED] <http://www.hex.net/~cbbrowne/lsf.html>
-Steve
------------------------------
From: Darin Johnson <[EMAIL PROTECTED]>
Crossposted-To: alt.os.linux,comp.os.linux,comp.os.linux.help,comp.os.linux.questions
Subject: Re: This is probably a simple one :)
Date: 31 Aug 1999 20:47:28 -0700
DanH <[EMAIL PROTECTED]> writes:
> First, don't telnet to the other box. Use rsh, rlogin, or better yet,
> disable all those and get SSH.
Better to use telnet than rsh or rlogin! The latter were originally
written as stopgaps until something better came along. But it took so
long for better solutions to arrive that they became ingrained among users.
Of course, the latest Linux distributions have more secure versions,
but it's better just to avoid them altogether in favor of SSH.
Telnet at least is a standard TCP/IP protocol.
--
Darin Johnson
Gravity is a harsh mistress -- The Tick
------------------------------
From: John Hasler <[EMAIL PROTECTED]>
Subject: Re: The Microsoft/Linux Conspiracy
Date: Tue, 31 Aug 1999 19:21:20 GMT
Thaddeus L. Olczyk writes:
> This has me curious. I've heard of court cases where NT was declared
> POSIX compliant (supposedly so Microsoft could get government contracts
> which required POSIX compliance). If this were true, then how could
> Microsoft get away with the SCO agreement and NT?
NT is not POSIX compliant. NT plus a third party POSIX package sold by a
company whose name escapes me at the moment is POSIX compliant. When a
Federal agency wants NT, they buy a package consisting of NT and this POSIX
package. The regulation requires that they buy POSIX stuff, not that they
use it. The POSIX package goes straight into the trash unopened.
--
John Hasler This posting is in the public domain.
[EMAIL PROTECTED] Do with it what you will.
Dancing Horse Hill Make money from it if you can; I don't mind.
Elmwood, Wisconsin Do not send email advertisements to this address.
------------------------------
From: "--==[bolMyn]==--" <[EMAIL PROTECTED]>
Crossposted-To: linux.redhat,linux.redhat.misc,comp.os.linux
Subject: Re: POP3 server.
Date: Wed, 01 Sep 1999 04:42:07 GMT
Wasn't that what we all said all along?
Aaron Lyon wrote:
> I typed rpm -i <rpm for imap>; the RPM is in the RPMS directory on the Red
> Hat CD. That installed the imapd, ipop2d, and ipop3d binaries. After I
> did that I uncommented the pop3d line in inetd.conf and rebooted, and it
> worked fine. Enjoy. I did not uncomment ipop2d and imapd, and remote POP3
> clients can connect fine and retrieve mail. Remote POP3 is all I wanted,
> so I am letting you know. Hope I helped.
>
> Luca Satolli (KaBooM) <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]...
> > Is there a POP3 server in Red Hat 5.2/6.0?
> > Thanks a lot & best Regards
> > Luca Satolli
> >
--
Bolek,
URL: http://www.bolek.com
e-mail: [EMAIL PROTECTED]
ICQ: 4086197
Join our Seti@Home group at:
http://setiathome.ssl.berkeley.edu/cgi-bin/cgi?cmd=team_lookup&name=inquisitive+minds
------------------------------
From: "MoPar" <[EMAIL PROTECTED]>
Subject: SENDMAIL from my Windowsbox thru Linuxbox...
Date: Wed, 01 Sep 1999 04:35:32 GMT
When I try to send an email from my Windows 98 machine, it says "relaying
denied". Why is that, and what can I do to get rid of it?
Also, I am trying to make the "D" drive on my Windows machine available by
mounting it under /filez on my Linux box (for FTP purposes); this fails and
all I get is "connection to JOHN failed" (JOHN is the server/Windows box
that has the 'D' drive).
I think the command was something like "smbmount -L //john/d 'mount
/filez'" or something.
Please be descriptive in your reply because I am very much a novice with
Linux.
Thanks
John
PS
I can't do smbmount -L john; it doesn't show my PC. I can, however, see it
through my roommate's PC by typing smbmount -L Roommate, and then my
machine (john) shows up.
Does that have something to do with it?
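For reference, here is a minimal sketch of the Samba commands this seems to
be reaching for, assuming Samba 2.0-era tools; the share name d, host john,
and mount point /filez come from the post above, and exact option syntax
varies between Samba versions:
  smbclient -L john                                  # list a server's shares (smbclient, not smbmount)
  mount -t smbfs //john/d /filez -o username=john    # mount the share under /filez
Older smbmount releases use a different invocation entirely, so check
smbmount(8) on your system.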
------------------------------
From: "Dan Taylor of Grand Rapids, MI" <[EMAIL PROTECTED]>
Subject: Concerning HP 1200C/PS DeskJet on Linux
Date: Wed, 1 Sep 1999 00:37:49 -0400
Is anybody out there using an HP 1200C/PS printer on Linux? I have Red Hat
5.2 and am hoping someone can tell me that they have an HP 1200C/PS printer
running under Linux without problems.
------------------------------
From: [EMAIL PROTECTED] (jwk)
Crossposted-To: comp.os.linux.setup,comp.os.linux.x
Subject: Re: multiple consoles on multiple monitors
Date: 1 Sep 1999 04:58:15 GMT
Reply-To: [EMAIL PROTECTED]
On Tue, 31 Aug 1999 13:10:01 GMT, Stuart R. Fuller
<[EMAIL PROTECTED]> wrote:
>There's someone who's not paying attention.
>
>He doesn't want to use X!
>
>Raymonds Doetjes ([EMAIL PROTECTED]) wrote:
>: You need a commercial X server for this. XFree86 is working on it right now, to
>: be released probably in January, but if you can't wait that long,
>: spend 200 dollars and buy an X server from Metro Link. Also be aware that
>: not all VGA cards support two cards in a system, since the hardware and drivers
>: are often not made for shared interrupt use. The Matrox Millennium works, though.
>:
>: Raymond
>:
>: Tim Kohlman wrote:
>:
>: > I have two video cards in my RH6.0 box and i want to run consoles on both of
>: > them. Is there any way to do this? I don't want to use X.
>: >
>: > Please reply by e-mail: [EMAIL PROTECTED]
>:
I've read somewhere (and you may, too; try http://www.dejanews.com) that
it is possible with two Matrox cards and the vesafb console(s).
Good luck,
Jurriaan
--
While thou sleepest, the dreaded god Murphy plotteth against thee.
Awake and take care.
Henry Spencer - The Ten Commandments for C programmers (Annotated Ed.)
------------------------------
From: [EMAIL PROTECTED] (Anton)
Crossposted-To: linux.redhat,linux.redhat.misc,comp.os.linux
Subject: Re: POP3 server.
Date: Wed, 1 Sep 1999 00:54:30 -0400
Aaron Lyon <[EMAIL PROTECTED]> wrote:
> ... i uncommented the pop3d line in the inetd.conf and rebooted
For future reference: after editing inetd.conf, all you have to do to
make the changes take effect is 'killall -HUP inetd'. Rebooting isn't
necessary at all.
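As a concrete illustration, here is roughly what the relevant piece looks
like on a stock Red Hat system (the exact inetd.conf line may differ
slightly on other distributions):
  # in /etc/inetd.conf, uncomment (or add) the POP3 line:
  pop-3   stream  tcp     nowait  root    /usr/sbin/tcpd  ipop3d
  # then tell the running inetd to re-read its configuration:
  killall -HUP inetd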
Anton
--
Ever stop to think, and forget to start again?
------------------------------
From: Hann Wei Toh <[EMAIL PROTECTED]>
Subject: Re: Add Linux to OS/2 boot manager
Date: Wed, 01 Sep 1999 04:08:11 GMT
In article <[EMAIL PROTECTED]>,
Howard Brazee <[EMAIL PROTECTED]> wrote:
> I have a computer with Warp on G: & H:. I wish to install Linux on G:
> and continue to use the boot manager.
Follow the usual procedure for Linux installation, but when asked where
you want to put LILO, answer that you want it placed in the root
partition. After you have finished installing Linux, reboot the machine and
go to OS/2. In OS/2, run the Boot Manager configuration program and add
the Linux root partition there as a bootable partition. Give it a
name. During the next machine bootup, you will be able to start Linux
by choosing the Boot Manager entry with that name.
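For what it's worth, a minimal /etc/lilo.conf sketch for this setup;
/dev/hda7 and the kernel path are placeholders, so substitute your own root
partition and kernel image:
  boot=/dev/hda7        # write LILO to the root partition's boot sector, not the MBR
  root=/dev/hda7
  read-only
  image=/boot/vmlinuz
      label=linux
Run /sbin/lilo after editing the file so the boot sector is rewritten;
OS/2's Boot Manager then chain-loads that partition.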
The OS/2 Boot Manager lives in a separate partition on the hard disk, about
5 MB in size. So I think the Boot Manager will still work even after the
OS/2 partitions are removed; it's just that reconfiguring it might not be
possible then.
Hope this helps.
Hann Wei
Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
------------------------------
From: [EMAIL PROTECTED] (Cameron L. Spitzer)
Subject: Re: PDFs for Linux
Date: 1 Sep 1999 05:22:31 GMT
In article <[EMAIL PROTECTED]>, Kerry J. Cox wrote:
>Does the Adobe software available for Linux have any utility for making PDF
>files? I'd like to start making my own PDF files, and I don't use
>Windows.
Ghostscript makes PDF files. There are some (the User Guide, for example) at
http://judi.greens.org/c/h/get/lilodocs.html
but they look like hell.
groff makes PDF files that look great.
See for example
http://judi.greens.org/lilo/lilo.8.pdf
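A couple of command-line sketches of both routes, assuming Ghostscript's
ps2pdf wrapper is installed (file names here are placeholders):
  ps2pdf document.ps document.pdf    # PostScript -> PDF via Ghostscript
  groff -man -Tps lilo.8 > lilo.8.ps # man source -> PostScript with groff
  ps2pdf lilo.8.ps lilo.8.pdf        # then PostScript -> PDF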
Cameron
------------------------------
From: <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.hardware,comp.os.linux.setup
Subject: Re: Console Lockup
Date: Wed, 01 Sep 1999 05:31:08 GMT
Preliminary testing with an S3 ViRGE/VX PCI video card does not
exhibit the lockup problem. Does anyone out there know whether this
might be an AGP-specific problem or a Rage Fury problem?
I had this same card (the Rage Fury) in my old motherboard and never
had a problem with it (except running X, which didn't bother me), but
that was with the 2.0.x kernel.
Ideas, anyone?
================== Posted via CNET Linux Help ==================
http://www.searchlinux.com
------------------------------
Date: Wed, 01 Sep 1999 15:32:09 +1000
From: M van Oosterhout <[EMAIL PROTECTED]>
Subject: Re: help with .deb
Paul Kimoto wrote:
>
> In article <[EMAIL PROTECTED]>, gc wrote:
> > I have downloaded some software packages which are .deb archives.
> > I have looked around the debian.org website to figure out how
> > to open it, but the software which appears to do that is also in
> > a .deb archive. So is there a way to open these things without
> > having to install the entire debian os?
>
> A .deb file contains some identification files and some .tar.gz
> files. You can disassemble one using ar(1).
It's called dpkg. Note that cross-installing packages is the
path to madness.
Red Hat may or may not have a package for dpkg. Anyway,
I'd suggest looking into alien.
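If all you want is to look inside a .deb by hand on a non-Debian box, a
minimal sketch (the member names below are the standard ones; confirm with
`ar t package.deb` first):
  ar x package.deb               # extracts debian-binary, control.tar.gz, data.tar.gz
  tar tzf data.tar.gz            # list the files the package would install
  mkdir /tmp/pkg && tar xzf data.tar.gz -C /tmp/pkg   # unpack them somewhere harmless
  alien --to-rpm package.deb     # or convert the whole thing to an RPM with alien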
HTH,
Martijn van Oosterhout
Australia
------------------------------
From: rob <[EMAIL PROTECTED]>
Subject: Re: dual boot win 98
Date: Tue, 31 Aug 1999 23:35:55 +0000
Sort of, but you are very likely to run into problems with LILO
if you're adding Linux to a Windows system on the same drive.
Unless the Linux boot partition is near the beginning of the
disk (within the BIOS's 1024-cylinder limit), LILO cannot load it.
rob.
<snip>
> > >for very small gain to use anything but lilo.
>
> > Lilo will boot Linux, but it will not, to the best of my knowledge, give
> > you the option of booting to Linux, Windows, OR some other OS at boot time.
> >
> > MF
>
> Sure it does. It may not be as pretty as the commercial
> boot loaders but it can do the job just as well.
>
> Press [TAB] next time you're at the LILO: prompt and see
> what pops up...
>
> --
>
> -John ([EMAIL PROTECTED])
------------------------------
From: [EMAIL PROTECTED] (Sitaram Chamarty)
Subject: Re: Help installing new kernel
Date: 31 Aug 1999 04:58:02 -0700
On Mon, 30 Aug 1999 02:20:02 -0700, Jason Bond <[EMAIL PROTECTED]> wrote:
>new kernel, it seems to boot up to when xdm tries to
>kick in. It goes to the text linux login and then attempts
>to switch to the vga screen and then just sort of blinks
>and the login screen never shows. I know this isn't all
Other than the frame buffer stuff (which isn't available in 2.0.x
anyway), I can't recall anything in the kernel that would screw up
X.
Still, you can get better diagnostics if you do this: (1) boot in
"single" mode: at the LILO prompt, type in
<your kernel name> single
This will get you a nice root login. (2) Now edit /etc/inittab
and change the 5 in the line
id:5:initdefault:
to a 3. (3) Now reboot; it will go to a plain text terminal.
Log in, then either run
startx
and watch the messages carefully, or run
startx 2>errors
and then (after X crashes or fails or whatever it does) peruse
the errors file.
If the error messages don't say anything obvious, please ask here
again.
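To recap the above as a quick sketch (the image name "linux" is a
placeholder for whatever your LILO entry is called):
  LILO: linux single            # step 1: boot to a root shell in single-user mode
  # step 2: in /etc/inittab, change  id:5:initdefault:  to  id:3:initdefault:
  # step 3: reboot, log in on the text console, and run:
  startx 2>errors
  less errors                   # read X's error output after it fails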
------------------------------
From: [EMAIL PROTECTED] (Sitaram Chamarty)
Subject: Re: superdisk and linux?
Date: 31 Aug 1999 04:58:08 -0700
On 28 Aug 1999 21:13:52 GMT, siemel naran <[EMAIL PROTECTED]> wrote:
>How do I use Imation Superdisk with linux?
Red Hat's default kernel recognises it: on my laptop it comes up as
/dev/hdb. Most DOS-formatted floppies work fine; the 1722 KB
formatted ones don't (on mine, anyway!).
If your kernel doesn't seem to recognise it, try re-compiling the
kernel to enable CONFIG_BLK_DEV_IDEFLOPPY, either as "y" or "m".
I can't recall offhand whether "m" is even allowed for this (ISTR that
2.0.x insists on making this a "y", while 2.2.x allows you to make
it an "m").
Of course, you know that by "y" I mean built into the kernel,
while "m" means building it as a kernel module.
------------------------------
From: [EMAIL PROTECTED] (Lucius Chiaraviglio)
Crossposted-To: comp.unix.bsd.misc
Subject: File change monitor for Unix?
Reply-To: [EMAIL PROTECTED]
Date: Wed, 01 Sep 1999 05:06:55 GMT
Does anyone know of a utility for any Linux and/or *BSD (or
for that matter, any other Unix) which will log all file changes
(creations, modifications, deletions, and moves/renames) on a system?
It should, of course, exclude its own log file(s), system swap files
and memory image files, optionally other user-specified files which
are modified all the time, and optionally temporary files which are
created and then deleted during the monitoring session. Preferably,
it should also be able to give you a diff-style output of the modified
files (before/after comparison, selectable for text only or text and
binary). Automatically making a backup of deleted, altered, or
replaced files (optionally limited to just the version present before
monitoring started, in case files are rewritten multiple times during
a session) would be a bonus.
The purposes of this utility would be multifold, similar to
but more broadly applicable than the use of "uninstaller" programs for
Windows:
1. If you are installing an application, utility, or system
upgrade and don't trust the installation program, at least you
can find out what it did (and know exactly which files to
restore when it fries them). You will also know which files
to scan for viruses after the installation program extracts
them from some archive format that your anti-virus program
cannot read inside (unless the installation program does
something unfriendly like executing code archived in such a
format before giving you a chance to do your scan, in which
case you have to scan EVERYTHING). For this, you will usually
only need the beginning and end states of altered files
(including new files) relative to the monitoring session.
2. If you are trying to debug an installation program, the
use is as for #1 above, except that this could require logging
of intermediate states of files, including files which are
created and deleted during the monitoring session.
3. If you are trying to figure out where something stores its
configuration information (and just what that information is
for a certain configuration) and the documentation doesn't
tell you (or doesn't exist), this would be an excellent way to
find out. For this, you will usually only need the beginning
and end states of altered files (including new files) relative
to the monitoring session.
4. This could help you track an infection of a virus or worm (or
the action of a Trojan Horse program) not detected by your
anti-virus program, although this should not be regarded as
bulletproof for this purpose (malware may bypass normal
channels for doing things and/or tamper with the logging
mechanism). For this, you would need continuous monitoring
with logging of intermediate states (including files which are
created and deleted or modified and then restored to their
original state and possibly their original date/time stamp
during the monitoring session). You would also need the
ability to read the log file(s) without interrupting the
monitoring session.
5. This could help you track a system breakin, although it should
not be regarded as bulletproof for this purpose (whoever broke
in may bypass normal channels for doing things and/or tamper
with the logging mechanism, and will be more likely to react
to it than automated malware would be). For this, you would
definitely need continuous monitoring with logging of
intermediate states (including files which are created and
deleted or modified and then restored to their original state
and possibly their original date/time stamp during the
monitoring session). You would also need the ability to read
the log file(s) without interrupting the monitoring session.
This utility used for this purpose would also need to have
very flexible installation options so that someone breaking
into the system could not be sure of where it was installed
or where its configuration and log files were without taking
substantial time (and possibly causing noticeable disk
activity) to look for them.
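For the beginning-and-end-state uses (#1 and #3 above), a crude
approximation is possible with standard tools; this sketch (snapshot paths
chosen arbitrarily) catches content changes, creations, and deletions, but
not intermediate states or before/after diffs of file contents:
  find / -xdev -type f -print0 | xargs -0 md5sum > /root/before.md5
  # ... run the installer / reconfigure the program ...
  find / -xdev -type f -print0 | xargs -0 md5sum > /root/after.md5
  diff /root/before.md5 /root/after.md5   # changed, new, and vanished files show up here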
Lucius Chiaraviglio | [EMAIL PROTECTED]
--
To reply to this message, remove the first three letters from my user
name. If you are seeing this in an e-mail message, it is because I am
posting it and e-mailing it at the same time -- normal e-mail messages
from me do not have this feature.
--
Note: My news server has a very short expiration time (around 10 days
for most groups), so I will likely miss your reply unless you send it
by e-mail in addition to posting it.
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and comp.os.linux.misc) via:
Internet: [EMAIL PROTECTED]
Linux may be obtained via one of these FTP sites:
ftp.funet.fi pub/Linux
tsx-11.mit.edu pub/linux
sunsite.unc.edu pub/Linux
End of Linux-Misc Digest
******************************