Re: cdda2wav or cdparanoia ?

2002-03-21 Thread William T Wilson
On Thu, 21 Mar 2002, Gerard Robin wrote:

 cdda2wav and cdparanoia both run fine to extract the tracks of a
 CD-audio what is the advantage to use one rather than another ? TIA
 for an advice.

I can't think of a reason why you *wouldn't* want to use cdparanoia.  I
have even been able to extract good (not perfect but good) audio from CD's
that my regular CD-player wouldn't play.

The last time I used cdda2wav, which was admittedly a few years ago, it
worked great with certain models of CD-ROM drive that behaved well for
purposes of audio extraction, but for many other models (including mine)
it was not very accurate.  Cdparanoia works well on almost anything.

Cdparanoia suggests using the SCSI-over-IDE functionality in the kernel to
improve its performance, but I find that it works quite well with the
plain IDE driver even in the 2.2 kernel.



Re: Proprietry Software - The Pain!

2002-02-05 Thread William T Wilson
On Tue, 5 Feb 2002, Paul Sargent wrote:

 This might not be classed as a debian problem per-se, but I'm
 wondering if anybody here has any suggestions to get me out of this
 hole, a hole which is probably more political than technical. Maybe
 somebody else has been in a similar situation.

It looks to me like a library incompatibility issue.  Differences in the
way the libc handles malloc can cause weird problems for programs that
have memory allocation bugs.  Since glibc changes the behavior of malloc
on a quite regular basis, and redhat likes to patch their stuff anyway,
it's quite possible that the bug-tolerance is different between the two.

The simplest thing IMO is to just keep them on separate boxes :}

A worse possibility is a kernel difference, although I think this is
unlikely given the nature of the error.  The possible good news here is that it's
likely that you can find a kernel that will run both applications,
although it might not be supported by one of the developers.  Look into
the possibility that the kernel you are using suffers from one of the
miscellaneous Athlon bugs (or rather, Athlon motherboard bugs).  I doubt
this is the problem, but it wouldn't hurt to check.
  
Installing a hard drive containing their supported system and booting that
would allow you to rule out a hardware problem.

Another option is to just put the entire RedHat system into a directory
tree on your Debian system, use chroot to access it, and see if that
works.  If it does, it rules out a kernel problem and it rules out any
problems with running system services.  On the other hand, it will create
new problems pertaining to the Jekyll-and-Hyde nature of a system that has
two /etc/passwd files, etc. etc.

You might consider having Vendor B send you a completely statically linked
binary that doesn't use dynamically linked anything, including (or rather
especially) libc.  This is marginally less efficient in terms of memory
use, but if it runs on your system, you know the problem is a library
incompatibility.  (the memory wastage is pretty small; unless you run a
lot of simultaneous copies of the application, you probably don't have to
worry about it - unless it's a really big X application, and maybe not
even then given the size of your system).

Or, you might copy the libraries off of the supported system into
/usr/local/lib/redhat, or some such.  Use LD_LIBRARY_PATH to cause
Application B to use this set of libraries, instead of the regular ones.

It's not necessarily just libc that could be at fault.  Any system library
might be responsible - the X libraries tend to be a nuisance as well.

 and the other hide it? My guess is that it's to do with the memory
 setup (see below) changing the address space that the app is living
 in, but that's just a guess.

There should be no problems resulting from address space.  The app should
see the address space the same regardless of the situation with the
physical memory (if my understanding of the large-memory patches is
correct).

 Has anybody else ever found themselves in a similar situation? Any
 suggestions for how to get out?

It is a continual and ongoing nuisance in HP-UX :}



Re: Large file sizes (2+Gb)

2002-01-15 Thread William T Wilson
On Tue, 15 Jan 2002, Cassandra Ludwig wrote:

 Now I have tried dumping via NFS (using windows NFS systems
 *shudder*), ftp, and even samba, but all of these drop dead at the 2gb
 limit.  Samba refuses to even try sending the file.

I am not entirely sure which system is running which OS.  If they are both
running Linux, you can do it; if one has Windows, you may hit snags.

Upgrading your kernel to 2.4 and using ReiserFS will give you the 64-bit
filesystem, but your libc may not be compiled to support large files.  
You may have to upgrade or recompile libc (depending on what version you
have), and any statically linked applications that you would want to have
support for the large files.

Is your fileserver on Windows?  If that is the case you will have a
problem since Windows filesystems suffer the 2GB limit, as far as I know
(maybe recent NTFS does not, I know FATxx all do).  In the case of FTP and
Samba, both of these have a 2GB limit as well.

The simplest thing might be to send the files across in 2GB chunks and
re-assemble them at the destination.  NFS is not a very good protocol for
file transfer so I would not suggest using that.  100GB of data would take
ages to send across.  Even 2GB could take a while.  But V3 NFS does
support large files.  V2 NFS does not.  If you have to try to do it with
Windows NT NFS support, I would not expect good results (it isn't even
very good at ordinary tasks).

You might be able to get the file across using HTTP, I do not believe
there is a size limit for that.  Other options, if you get desperate,
might include ZModem over telnet, IRC DCC, or even 20 lines of C code to
open a socket and just send the raw data without any regard to file size.
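For what it's worth, here is roughly what those 20 lines look like - a
minimal sketch only, with made-up usage and next to no error handling;
something on the other end still has to accept the connection and write
the bytes to disk:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* usage: sendraw <file> <dest-ip> <port>   (name and usage made up) */
int main(int argc, char **argv)
{
    int fd, s;
    struct sockaddr_in dst;
    char buf[65536];
    ssize_t n;

    if (argc != 4) {
        fprintf(stderr, "usage: %s file ip port\n", argv[0]);
        return 1;
    }

    fd = open(argv[1], O_RDONLY);   /* compile with -D_FILE_OFFSET_BITS=64
                                       so this opens files over 2GB */
    if (fd < 0) { perror("open"); return 1; }

    s = socket(AF_INET, SOCK_STREAM, 0);
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(atoi(argv[3]));
    dst.sin_addr.s_addr = inet_addr(argv[2]);
    if (connect(s, (struct sockaddr *) &dst, sizeof dst) < 0) {
        perror("connect");
        return 1;
    }

    /* Just shovel bytes; nothing here cares how big the file is. */
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {           /* write() may send a short count */
            ssize_t w = write(s, buf + off, n - off);
            if (w < 0) { perror("write"); return 1; }
            off += w;
        }
    }
    close(s);
    close(fd);
    return 0;
}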

The problem though, is with the file transfer protocol, not with Linux. :}



Re: Large file sizes (2+Gb)

2002-01-15 Thread William T Wilson
On Tue, 15 Jan 2002, Cassandra Lynette Ludwig wrote:

 Did you read the initial message?  For those who have not read it, or
 are unable to read english here is it detailed (I hate having to do
 this for men all the time *sigh*)

Yes, I read it carefully.  Nowhere in the message do you say which machine
is running which OS.  You imply that one of them is running Windows, and
one of them is running Linux, but you don't even say that clearly.

If I were as sexist as you I would say that women cannot write clearly
because they expect men to read their minds.



Re: Large file sizes (2+Gb)

2002-01-15 Thread William T Wilson
On Tue, 15 Jan 2002, Cassandra Lynette Ludwig wrote:

 Obviously you didn't read the message, I had said I had upgraded to
 2.4.17 to get reiserfs as ext2 cannot handle such large files.

I was only going to flame you once but this message deserves two.

 I cannot split the video data into multiple files - this is not text
 it is compressed DVI data.  If you do not know the file format, do not
 suggest options like this.

If you do not know how files work, do not be insulting.  Any file can be
split and reassembled regardless of what is in it.
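To be concrete, a minimal sketch of content-agnostic splitting (the chunk
size and output names are arbitrary, and byte-at-a-time I/O is used only
to keep it short).  Concatenating the pieces back together in order
reproduces the original byte for byte, whatever the format:

#include <stdio.h>
#include <stdlib.h>

/* Split stdin into numbered 1GB pieces: part.000, part.001, ...
   The content is never interpreted, so the format is irrelevant. */
int main(void)
{
    const long CHUNK = 1024L * 1024L * 1024L;   /* 1GB per piece */
    int part = 0, c;
    long written = 0;
    char name[32];
    FILE *out = NULL;

    while ((c = getchar()) != EOF) {
        if (out == NULL || written == CHUNK) {
            if (out) fclose(out);
            sprintf(name, "part.%03d", part++);
            out = fopen(name, "wb");
            if (!out) { perror("fopen"); return 1; }
            written = 0;
        }
        putc(c, out);
        written++;
    }
    if (out) fclose(out);
    return 0;
}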

 Don't annoy me anymore with messages as un thought out as this.  It
 feels like the sort of responses I used to get from the Redhat
 technical support team.

Until you adjust your attitude no one will be able to help you, no matter
how much they may want to, and you won't be worthy of the help.

When volunteers help you for free, you owe them respect and a willingness
to work with them, not insult them.  You owe this even if you were paying
for support, but it is doubly important when something is community
supported.



Re: OT: Language War (Re: C Manual)

2002-01-06 Thread William T Wilson
On Sat, 5 Jan 2002, Eric G. Miller wrote:

 is one of the reasons pointers to char are so common.  However, there
 is a little trick that's guaranteed to always work:
 
  struct foo {
   size_t length;
   char str[1];
  };
 
  ...
 
 struct foo * str_to_foo(char *a)
 {
   size_t len = strlen (a);
   struct foo *bar = malloc (sizeof(struct foo) + len);
   if (bar) {
  bar->length = len;
  memcpy (bar->str, a, len);  /* bar->str now not NUL terminated */
   }
   return bar;
 } 

It doesn't look particularly guaranteed to me.  You're really allocating
*three* pieces of memory here - one for the struct foo, one for the
1-character array str, and one for the rest of the string.  Your example
assumes that str will be located in memory immediately after foo - which
it probably will be, but it might not.  It could be anywhere; the language
makes no guarantee.  The compiler might even choose to put the 1-char
array before foo, so you can't use it without overwriting your struct.
Structs have fields.  Use them.
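For contrast, a minimal sketch of what I mean - a struct with a real
pointer field and a second allocation (same names as the quoted code):

#include <stdlib.h>
#include <string.h>

struct foo {
    size_t length;
    char  *str;                 /* points at separately allocated storage */
};

struct foo *str_to_foo(const char *a)
{
    size_t len = strlen(a);
    struct foo *bar = malloc(sizeof *bar);
    if (!bar)
        return NULL;
    bar->str = malloc(len + 1);          /* room for the NUL as well */
    if (!bar->str) {
        free(bar);
        return NULL;
    }
    bar->length = len;
    memcpy(bar->str, a, len + 1);
    return bar;
}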

 The benefit is being able to use one malloc vs. two (for any similarly

This is a benefit why? :}



Re: OT: Language War (Re: C Manual)

2002-01-02 Thread William T Wilson
On Wed, 2 Jan 2002, Richard Cobbe wrote:

 I'll agree that the two are related; in fact, I'd go so far as to say
 that if a language supports dynamic memory allocation and type-safety,
 it *has* to have some sort of automatic storage management system.

I don't think that necessarily follows; a manual mechanism for freeing
resources would then just set the reference to a NULL value.

But I think that most strongly typed languages do have automatic resource
management of some sort, because it is so useful :}



Re: OT: Language War (Re: C Manual)

2002-01-01 Thread William T Wilson
On Tue, 1 Jan 2002, Richard Cobbe wrote:

  | Casting you can't really get away from nor do you really need to.  In fact
  | the more strongly typed the language is, the more casting you have to do.
  
  This statement is incorrect.
 
 Agreed.

I suppose I will agree as well; I was not meaning to include dynamically
typed languages in the original statement, I just didn't say that :}
Really it was not a very good statement to make, although in the original
context it wasn't so bad :}

 However, I think that the flexibility of a type system is more
 important than its `strength' for removing the need for casts.

I will go along with that as well.  In ML for instance (and other
languages as well) there is parametric polymorphism, which gives you a lot
of the flexibility of dynamic typing while still retaining much of the
error checking of static typing.  This is different from the
polymorphism found in C++ in which you can have virtual functions (which
still require the programmer to provide all the different implementations)
and inheritance (which only permits polymorphism within a very limited set
of types).  Although I do not know Haskell my understanding is that this
is how it works as well.

For instance you could have:
fun times x y = x * y;

You could then apply this function to either reals, ints, or one of each
and then it would return the appropriate type.  The compiler will trace
the execution through the function, deducing which are legal types from
the operators and functions used within the function.  In this way you do
not need to write a separate function for each combination of types your
functions might want to operate on, even though ML is a statically typed
language.

But because the checking is all done at compile-time you do not have much
risk of runtime errors due to type problems.



Re: OT: Language War (Re: C Manual)

2001-12-31 Thread William T Wilson
On Mon, 31 Dec 2001, Erik Steffl wrote:

   consider perl which doesn't have strong types but it's quite
 impossible to make it segfault and C++ on the other side which is

That is true, but it doesn't mean that type safety can't prevent it as
well.  Consider a hypothetical language that doesn't have any dynamic
resource allocation at all and has a very weak type system.  Actually a
shell scripting language is not very far from this.  It can never segfault
although it is still possible to have the same sort of bugs which cause
segfaults.

 fairly dangerous even without casting (I would even go as far as
 saying that casting makes no difference (statistally), but I'd have to
 think about it).

The presence of casting doesn't have too much of an impact on the
reliability of a particular program (if anything it improves it because it
means the programmer thought about what his data really meant) but a
language that doesn't keep track of the type of its data on the
programmer's behalf cannot detect many errors at compile-time.

It can make sure that the programmer doesn't try to access a field that
isn't present in the type of data that the pointer is supposed to
represent, but it can't make sure that the pointer actually points to that
sort of data.

The ability to check that the pointer points to the right type of data
catches both a huge number of nuisance bugs *and* detects many types of
segfaulting bugs even at compile time (and provides useful error
descriptions even if it slips through to runtime).

Programmers that haven't used a really strongly typed language may not
even realize that the compiler is able to catch these sorts of bugs.

   most of the segfaults are because of the resource allocation
 mistakes, not because of mistaken types... at last that's my
 impression.

In a lot of ways they are the same thing.  Suppose you have a pointer to
an integer, and you change it so it points to a dynamically allocated
array of pointers to integers (i.e. a 2-dimensional array) - which is
perfectly legal in C - and then you free() the pointer.  Is this a
resource allocation mistake or a type mistake?  It's really a resource
allocation mistake, but it's also a type mistake, and it's something that
a compiler in a strongly typed language would catch.
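To make that concrete, a throwaway sketch (nothing here is from a real
program):

#include <stdlib.h>

int main(void)
{
    int *p = malloc(sizeof(int));        /* a pointer to one integer */

    /* Repoint p at a dynamically allocated array of pointers to
       integers.  C accepts this because malloc() returns void *, so
       no type information survives the allocation at all. */
    p = malloc(10 * sizeof(int *));

    /* This is both a resource mistake (the first allocation leaked)
       and a type mistake (p's declared type no longer matches what it
       points at) - exactly the kind of thing a strongly typed language
       would reject at compile time. */
    free(p);
    return 0;
}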

   note that in c++ there's basically no need for casting and using
 void pointers (in general, there are special cases). That's of course,

Casting you can't really get away from nor do you really need to.  In fact
the more strongly typed the language is, the more casting you have to do.

(void) pointers on the other hand are generally not your friend :}



Re: OT: Language War (Re: C Manual)

2001-12-29 Thread William T Wilson
On Sat, 29 Dec 2001, Jeffrey W. Baker wrote:

 about C++ (and C) and I don't think I can take it any longer.  Be
 like me, use a language with imperceptible market penetration.  I

Why does market penetration matter?  It's like saying Windows is superior
because everyone uses it; but if you believed that you probably wouldn't
be here.

 really think Mr. Joyner is my polar opposite.  When I think of a
 computer, I think of an electronic device which will do such-and-such
 thing if you place value 0x37 at memory offset 0.  When Ian Joyner

That's because that's what it is.  But then no one derives any real
benefit from having 0x37 placed at offset 0.

 looks at a computer, he wants to represent his model of the universe
 inside it.  The computer and the human are fundametally different

Which is a much more valuable task!  No wonder he wants to do that.

 things.  You'll expend an aweful lot of energy trying to represent
 human concepts in a computer.  By contrast, it is very easy for a
 human to learn computer concepts.

Representing human concepts in a computer is the central purpose of
programming.

 If you ask an Eiffel programmer how to get the value of a byte at a
 given offset in the computer's memory, they'll start with an
 explanation about why the programmer shouldn't concern himself with
 computer memory; memory is in the how domain.  From there, they will

So... why *should* the programmer concern himself with individual bytes of
memory? (Assuming he is writing an ordinary application and not a hardware
driver or something similar).



Re: Games - A question

2001-11-29 Thread William T Wilson
On Thu, 29 Nov 2001, Keith O'Connell wrote:

 Assuming we are against non-free software and would not contaminate or
 machines with closed-source code, what is the panels view on games?

You're pretty well limited in that case to roguelike games, classic Unix
BSDgames, plus original Quake and earlier iD works, as well as most things
based on CrystalSpace.

 I was talking to a friend about the Alpha Centari port by Loki, it is
 for payment binary only as I understand it. Is this an anathema
 because there is no source code? Could it be that it is sensible

Not to me, maybe to others. :}

 because a game is an end in itself, unlike an editor, compiler or
 browser which are tools that it is reasonable to want to modify?

There are more people that modify games than modify compilers :}

I certainly consider free software to be superior to the ordinary kind,
but I wouldn't refuse to use a program just because it isn't free.

 If the source code is there then in a multiplier game, how can you be
 sure that your opponent has not tilted his client to enhance his game
 play?

You can't :}

You can ensure some consistency with external authentication tools such as
Punkbuster (if the game supports it), and you can make sure that all
important data is stored on a trusted system.  Generally there is no
perfect solution to this problem however :}  (Even in the case of
closed-source games you cannot ensure this).



Re: mp3 encoding

2001-11-26 Thread William T Wilson
On Mon, 26 Nov 2001, nate wrote:

 i use l3enc for encoding(very slow but good quality), i found a
 serial# for it a few years ago(i can't find a way to buy it) and i use

Although l3enc is the only legal encoder I know of that runs on Linux, I
wouldn't necessarily say it has the best quality, except at very low
bitrates.  I've heard that LAME is the best quality encoder at normal
(128-256K) bitrates.  I do not know which encoder is best for variable
bitrate.



Re: TSR

2001-09-25 Thread William T Wilson
On Tue, 25 Sep 2001 [EMAIL PROTECTED] wrote:

 1. Does this mean , that once I switch my machine that is running a
 TSR , the TSR is gone ? I guess that for all programs , a shutdown or
 poweroff , stops the process.

Correct.

 2. One of my friends said that if a TSR hits the machine , you might
 have to re-format the disk ! I dis-agree with him , because after all

No, that is quite wrong.  All you need to do is reboot; some TSRs even
allow themselves to be unloaded.

 3. Till date I believe that Unix/Linux - based machines do not support
 TSRs , am I right ? I reason that since the header file from which
 TSRs take all their functions , id the dos.h you can't have TSRs on
 Unixes.

TSR (terminate and stay resident) is a very old DOS term.  A TSR is just a
program that sits in memory, uses DOS's (advisory) memory allocation
system to keep other programs from overwriting it, and then registers
itself as the handler for some type of event (usually a timer, disk, video
or keyboard interrupt).

This was all necessary because DOS was single-tasking.  On a real OS none
of this is necessary.  Instead we have the shell's job-control functions
to switch among processes.  It's Much Much Better this way.



Re: deadlock

2001-09-25 Thread William T Wilson
On Tue, 25 Sep 2001 [EMAIL PROTECTED] wrote:

 Please clear these two doubts of mine :
 1. When does a deadlock happen on a Unix/Linux system ?
 2. What is a deadlock ?

Deadlock isn't a Unix/Linux concept, it's a programming concept.  It
happens when you have two processes or threads and two resources.  Both
processes want to control both resources.  But process A has resource X
and process B has resource Y.  Process A is blocked waiting for resource Y
and process B is blocked waiting for resource X.  As a result neither
process ever runs.  (And to make it worse they have gotten stuck holding
the resources, too, so nobody else can use them either).

Deadlock can happen on any system that has lockable resources and multiple
processes/threads that can lock them.

The generally accepted way to avoid this is to make sure that all the
threads in your program acquire resources in the same order.
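To illustrate, a minimal sketch with POSIX threads (the mutex and function
names are made up):

#include <pthread.h>

/* Two shared resources, each protected by its own lock. */
static pthread_mutex_t lock_x = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_y = PTHREAD_MUTEX_INITIALIZER;

/* Deadlock-prone: one thread runs this... */
void *process_a(void *arg)
{
    pthread_mutex_lock(&lock_x);
    pthread_mutex_lock(&lock_y);   /* blocks if B already holds Y */
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_y);
    pthread_mutex_unlock(&lock_x);
    return arg;
}

/* ...while another takes the locks in the opposite order. */
void *process_b_bad(void *arg)
{
    pthread_mutex_lock(&lock_y);
    pthread_mutex_lock(&lock_x);   /* blocks if A already holds X: deadlock */
    pthread_mutex_unlock(&lock_x);
    pthread_mutex_unlock(&lock_y);
    return arg;
}

/* Fixed: everyone agrees to take X before Y, so the cycle can't form. */
void *process_b_good(void *arg)
{
    pthread_mutex_lock(&lock_x);
    pthread_mutex_lock(&lock_y);
    pthread_mutex_unlock(&lock_y);
    pthread_mutex_unlock(&lock_x);
    return arg;
}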



Re: how to start a riot

2001-08-19 Thread William T Wilson
On Sun, 19 Aug 2001, Lambrecht Joris wrote:

 Also, imho the personality thing is a grand idea on it's own. When
 will Linux get such a feature. Imagine plugging in an OS personality

It's been in the kernel since at least 2.0, maybe earlier.  This is how
Linux and FreeBSD can run each other's binaries.

 on your linux kernel and running say MacOS X apps or even (urgh)
 Windows 9x/NT apps ... unless i'm completely off center here ... enjoy

The problem with Windows apps is that they require a lot of functionality
in the OS that isn't part of the Linux kernel.  For this see WINE.  As for
MacOS X, that's a little more interesting, because MacOS X is Unixy enough
that this might be usable in case of Linux running on Apple hardware.  
(For x86 it would be no use unless Apple makes a MacOS X for that
platform).  However, MacOS X is similar to Windows in that there is a lot
there that isn't anything like what the Linux kernel provides.  You'd need
something similar to WINE to handle all the graphics-related stuff and so
on.



Reading a mac disk

2001-08-12 Thread William T Wilson
I want to take a Macintosh IDE hard drive (System 8.6), connect it to my
x86 Linux system and read the data off of it.  (In a pinch, I could use a
Windows system too, but that looks harder).

Do I have a prayer? :}

I've used mtools to read Mac floppies, but as far as I know these are no
use for reading hard drive data.



Re: what is a framebuffer?

2001-08-09 Thread William T Wilson
On Thu, 9 Aug 2001, Faheem Mitha wrote:

 This is possibly offtopic, but can someone explain to me what a
 framebuffer is, and why one should care about it? I have seen it

It's the area of RAM on the video card that holds the actual image being
displayed.

The framebuffer device is the interface to this video RAM through the
kernel.  Essentially it allows you to draw graphics on the screen.  
Although I haven't been keeping up with development, at one point the idea
was that there would be a generic X-server that would use the kernel
framebuffer access, so you wouldn't have to have a bunch of different X
servers for all the different types of video cards.

But this isn't very efficient for accelerated functions and it won't do
for OpenGL at all.  So I don't really know where they are going with it
now :}



Re: FW: Careful. This is for information only.

2001-08-09 Thread William T Wilson
On Thu, 9 Aug 2001, Sebastiaan wrote:

 Is M$ really thinking about this? That would really be the end of the
 internet.

I doubt it.  There's nothing Microsoft can offer that would make
end-users *want* to use their proprietary protocol.  On the other
hand, there are a lot of people that would really, really not want this to
happen - like all those web and database servers running Unix.  Plus,
Cisco controls as much of the router market as Microsoft controls the
desktop market.  They aren't likely to just decide to start taking orders
about how to do networking from Microsoft just because Microsoft wants
them to.



Re: FW: Careful. This is for information only.

2001-08-09 Thread William T Wilson
This is quickly departing from the realms of topic :}

On Thu, 9 Aug 2001, John Griffiths wrote:

 Cisco are in BIG financial trouble, MS have LOTS of money, don't bank
 on Cisco stopping them. MS could buy Cisco pretty soon (of course that

Cisco is not in as much trouble as all that.  It's not as if they are
looking for someone to come bail them out; they simply aren't making any
money this year.  But then nobody is making any money this year.

Cisco's market cap is 131 *billion* dollars.  MS hasn't got that much
money.  On the other hand, Cisco has a huge and obvious incentive not to
allow MS to gain control of a market that right now they dominate.

It's completely plausible that MS would *like* everyone to stop using
TCP/IP and start using some proprietary MS thing instead, but I see no
reason why either networking companies or end users would help them.



Re: FW: Careful. This is for information only.

2001-08-07 Thread William T Wilson
On Mon, 6 Aug 2001, Nathan E Norman wrote:

 I have to agree with John ... using a security hole in someone else's
 server for good or evil is probably not a good idea legally.  I'd
 advise against it.

In states with Good Samaritan laws you are likely to be shielded from
liability as long as any action you take is clearly intended as help.

Considering the fact that tens of thousands of malicious security attacks
per year go unprosecuted, I doubt that anything non-malicious would be a
big risk.  Unless you have deep pockets.

That said, it's traditional to send the admin a message using the root
account when a hole is found, but it isn't at all necessary.  Just send
the relevant excerpt from your log that shows they are attacking you to
several good guesses at the relevant account ([EMAIL PROTECTED],
[EMAIL PROTECTED], etc.) and leave it at that.



RE: FW: Careful. This is for information only.

2001-08-07 Thread William T Wilson
On Tue, 7 Aug 2001, Ian Perry wrote:

 Oh damn... looking at the logs looks like here comes another one...
 GET /robots.txt HTTP/1.0... repeat.

That's usually from a search engine.  robots.txt is an (advisory) control
method so that search engines don't try to index, for example, dynamically
generated or password protected content.

Unless there's a new worm out that uses that somehow (ugh).



Re: icq through masqueraded firewall /socks4

2001-06-12 Thread William T Wilson
On Tue, 12 Jun 2001, Paul Haesler wrote:

 Ah forget it.  It seems to work with outsiders - it's just transfers 
 between clients on the LAN that doesn't work.

I don't think the problem is with the firewall, but with ICQ.  ICQ 99 and
earlier used a different protocol from ICQ 2000.  When clients send files
to each other they establish a direct TCP/IP connection.  Normally they do
this with messages also, but it is not required; messages can also be sent
through the server.

When a user on your contact list logs in, the server sends you that
client's IP address.  In the case of ICQ99 and earlier, both the IP
address that the server sees, and the IP address that the client thinks it
has (i.e. the local address on the local network), are sent.  This is how
ICQ clients locate each other to establish these connections.

Although I am not as familiar with the ICQ 2000 protocol as with the
earlier ones, my first instinct is that the local address is not being
sent - either to the client, to the server or both.  The clients are
trying to send to the externally-visible address, the proxy server, and it
does not know what to do with the file request.

See if you can send files between lan clients when one or both of them is
ICQ 99b or earlier.  If so then my hypothesis is probably correct and it
is a limitation in ICQ2000.  (you might also look for strange looking TCP
connections from the sending client to the proxy - probably on a port
corresponding to an open port on the host trying to receive the file).



Re: Sys Admin

2001-04-06 Thread William T Wilson
On Tue, 3 Apr 2001, Noah L. Meyerhans wrote:

 If you want to do it as a career (you are a masochist, and not because
 of UNIX) you can look for junior sysadmin type job listings.  

Heh.  I agree.  *Most* UNIX sysadmin jobs resemble management more than
they resemble playing with your home Linux box, unfortunately.  It's about
half technical, half people-work.  Basically it's your job to say "no" -
and mediate among programmers, end users, hardware vendors, software
vendors, managers... Unfortunately, because of the way Unix systems are
used today, maintaining the system is the easy part. :}



Re: home network

2001-03-28 Thread William T Wilson
On Tue, 27 Mar 2001, Daniel Freedman wrote:

 If you don't want to get down-and-dirty with configuring IP-masg with
 two-NIC's on one box to serve as internet gateway, you can buy a combo

If you're going to have your Linux system online anyway, you may as well
let it do the masquerading.  It's not nearly as hard as it looks :} Most
of the people I know that have the dedicated routers aren't happy with
them, citing dropped TCP connections, problems with games, and general
nuisance.  I don't think you'll find anyone to argue that they're *better*
than a Linux system.  And they are lots more expensive than a plain hub
and an extra NIC.

The only reason I would recommend one is if your Linux system dual-boots
and you don't want to have the rest of the computers offline while you're
in your other OS.  I can't think of any other real advantage.



Re: Q: Any Linux on 2MB Ram?

2001-03-28 Thread William T Wilson
On Wed, 28 Mar 2001, Jonathan Gift wrote:

 I don't think so. I have an old, very old, laptop floating around with
 2MB ram on it. Anyone know of a Linux distro that will run on it?
 Maybe one of the embedded one's?

You can make Linux boot in 2MB.  However, 1.2 was the last kernel that
would do so, IIRC.  See if you can scare up an ancient Slackware
distribution from 1994 or so :} This is really ancient history Linux-wise;
a fun project just to prove you can do it, but doubtful you'll be able to
do anything useful with it.



Re: Linux Virus

2001-03-28 Thread William T Wilson
On Thu, 29 Mar 2001, Mark Devin wrote:

 Surely this virus cannot overwrite executables that require root
 permission? Or can it?

Like every so-called Linux virus, it requires the user to behave stupidly
- it's really a trojan horse.  It has the same permission rules as any
other program, so it can't change root-owned files, unless they are
world-writable or you are running as root.

The thing that's special about it is that it can infect both Windows and
Linux executables - which is really quite impressive.  Otherwise it's
nothing special.




Re: Pentium 4

2001-03-27 Thread William T Wilson
On Tue, 27 Mar 2001, Alexander Isacson wrote:

 Will I have to recompile all major components in order to get decent
 speed with the p4?

Recompiling won't help; there's no P4-optimizing GCC.  The P4 is actually
respectable at the sort of bit shoveling that characterizes average
applications - the thing it really sucks at is floating-point.

 Whats the best thing to do? Buy an AMD-CPU instead?

It depends on how good a deal you get on the P4.  The Athlon is, for
general purpose applications, faster than a P4, even at slower clock
speeds.  However, for anything that uses SSE, the P4 will be faster.  
Mostly, you will only use SSE for media-related activities - MPEG, MP3 and
other media encoding and decoding.  The P4 also has a higher memory
bandwidth for applications that are demanding on that - mostly media
again, but also Quake 3 (for other games it is basically a wash).  For
practically anything else the Athlon will be faster and require no special
treatment.



Re: deleting specific files (a litle note about de nada )

2001-03-27 Thread William T Wilson
On Tue, 27 Mar 2001, Miguel S. Filipe wrote:

  I need to delete a bunch of files, all of them of the form 
  *.doc, scattered into several subdirectories inside a given
  directory. What should I do?
  
  snip 
  
  Several options:
- Create a script.  This *is* my preferred method.
  
   $ find . -type f -name \*.doc | sed -e '/.*/s//rm &/' > rmscript
   # Edit the script to make sure it's got The Right Stuff
   $ vi rmscript
   # run it
   $ chmod +x rmscript; ./rmscript

I missed the original question, but I'd like to point out that this is a
great deal of unnecessary fuss.

You can do this all with one invocation of find, skipping the script, the
sed, and all that entirely.  Here we go:

find . -name '*doc' -exec rm {} \; -print

will remove all files ending in 'doc' from the current directory and
subdirectories.  The arguments following -exec are executed once for every
file matching the criteria, and {} is replaced with the filename.  The
\; is required at the end of the -exec clause to indicate that you are
done with it.  -print tells find to show what files were removed.

If you need to preserve a particular file:

find . -name '*doc' -not -name 'thesis.doc' -not -name 'budget.doc' \
  -exec rm {} \;

This will cause find to remove every file except thesis.doc and
budget.doc.

You can use any shell command with find - it doesn't have to be rm:

find . -name '*doc' -exec chmod 666 {} \; -print

will make every file ending with 'doc' world-writable and tell you what it
did.

You can even use multiple instances of exec:

find . -name '*doc' -not -name 'thesis.doc' -exec chmod 666 {} \; \
  -exec touch {} \; -print

which would make every file ending in 'doc' world-writable and update its
timestamp, except thesis.doc.  (No, I have no idea why you would want to
do this :} )

Anyway, I think find is about the most powerful command in all of
Unix.  Find is your friend :}



Re: home network

2001-03-27 Thread William T Wilson
On Tue, 27 Mar 2001, D-Man wrote:

 I am planning on building an ethernet netowrk at home.  What do I need
 to do it (other than NICs and cable, of course)?  What is the

NICs and cable :}

 difference between a hub and switch?  Any recommended brands/models?

A switch routes each packet only to the port that has the system which is
that packet's destination.  A hub routes all traffic to all ports.  
Essentially this means that the capacity of a switch is determined by the
bandwidth of that switch's internal electronics; the capacity of a hub is
limited to the maximum speed of the network.  Switches also allow you to
mix 10baseT and 100baseT packets on a network.

Generally, you do not need switches for a home network, and one hub is
pretty much like another, unless you are going for wireless ethernet;
that's a whole other ball of wax.

 Do I really need a hub/switch or can I use an old box with a lot of
 NICs instead?  Is there anywhere I can RTFM all of this?

You can, but you need one NIC for every attached system.  You also have to
use crossover cables, instead of the usual cables.  If you make your own
cables, crossovers are no harder than standard, but if you are buying them
premade, they can be a little tricky to find.

Generally, I consider three systems (including the router) the limit for
using a PC as a switch.  Don't forget you also need a NIC to connect to
your cable/DSL connection (or a modem if you are using that, which is even
more taxing to the host), and 3 NIC's is plenty for an older PC.  
Besides, any more than that and you are not saving any money, but you're
making your network harder to set up and deal with.



Re: home network

2001-03-27 Thread William T Wilson
On Tue, 27 Mar 2001, Jason Majors wrote:

 You could run a box with lots of ip masquerading to emulate a hub, but
 that's like swatting flies with a hammer. Just get a hub. It's
 cheaper, uses less power, and allows your boxes to see each other more
 easily.

Actually in such a case you would want to use either ordinary bridging or
routing among the various home systems.  You don't need to masquerade the
local systems to each other.

One thing I forgot to mention clearly in my other post is that you *do*
need a box to proxy the traffic of the others on the Internet, regardless
of whether you use a hub or a box-of-NIC's.

If you have an analog modem, you haven't really got a choice; the system
that has the modem in it does the proxying because you can't share
that.  But if you have DSL or Cable that looks like ethernet to your
computer, you have two ways to do it: Either plug the DSL/Cable modem into
the hub, or plug it into a computer and then use another NIC in that
system to communicate with the internal network.

Do it the second way.  Otherwise, all your internal network traffic goes
over your Internet connection.  Not only is this bad for your privacy, it
ties up the connection, which is likely to irritate your ISP.  Especially
if you have cable, doubly so if your cable company is one that tries to
prevent you from sharing the connection among multiple systems.



Re: FW: softlink/hardlink

2001-03-16 Thread William T Wilson
On Fri, 16 Mar 2001, Holp, John Mr. wrote:

  ls -li vmlinuz  while at/   (root)
  I get the following
  
  12  lrwxrwxrwx  1  root  root  19  Jan 18  08:05
  vmlinuz -> boot/vmlinuz-2.2.17
  
  To me this means that vmlinuz is a soft link pointing to
  boot/vmlinuz-2.2.17

That is correct.  If the file shows that sort of link information, it is a
soft link.  Hard links do not have such information; they directly
reference the original file.  So you would see the actual file information
there, not the link information.

You should think of a hard link as another name for the original file.  A
soft link is a file which contains the file name of whatever it points to.

  But when I do a ls -li /boot/vmlinuz-2.2.17 I get the following
  
  12  -rwxrwxrwx  1  root  root  1042807  Jan 18  08:05
  /boot/vmlinuz-2.2.17
  
  Here is my confusion, I thought only hard links used the same inode
  number?  Note that both are using inode 12.

It is possible for files on different filesystems to use the same inode
number.  So if /boot is on another filesystem from / then it looks like
they have the same inode number by coincidence.  If /boot and / are on
different filesystems this proves it is a soft link as hard links cannot
point across filesystems (and they cannot point to directories).
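If you want to see the same thing from a program instead of from ls, here
is a small sketch using the paths from your example:

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat link_sb, target_sb;

    /* lstat() reports on the link itself; stat() follows it. */
    if (lstat("/vmlinuz", &link_sb) != 0 ||
        stat("/boot/vmlinuz-2.2.17", &target_sb) != 0) {
        perror("stat");
        return 1;
    }

    if (S_ISLNK(link_sb.st_mode))
        printf("/vmlinuz is a soft link\n");

    /* Inode numbers are only unique within one filesystem (st_dev),
       so equal st_ino values on different devices mean nothing. */
    printf("link: dev %ld inode %ld; target: dev %ld inode %ld\n",
           (long) link_sb.st_dev,   (long) link_sb.st_ino,
           (long) target_sb.st_dev, (long) target_sb.st_ino);
    return 0;
}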



Re: Reboot only w/ mouse.

2001-03-15 Thread William T Wilson
On Thu, 15 Mar 2001, Mathieu, Barry wrote:

 My keyboard is not responding (poof - dead), even the LED indicator
 for caps lock doesn't illuminate.  I have X running, w/ ICEWM.  There

If your keyboard has fallen out of the socket then you can plug it back
in.  It should work fine.  (Well, if you plug it in nicely.  If you touch
some wrong pins together the system will reboot :} )

If your keyboard is broken then rebooting will not help you any.  And I
can't think of anything else that would cause your keyboard to stop
responding.

I can't think of any way to reboot the system using just the mouse.  Just
exit X, wait a few seconds and push the power button (if you really,
really want to reboot).  You will be fine. :}



Re: Reboot only w/ mouse.

2001-03-15 Thread William T Wilson
On Thu, 15 Mar 2001, Peter Jay Salzman wrote:

 have you really gotten focus back when replugging in a ps2 keyboard?

Yes, I have done it several times.  I believe that if you run a PS/2
keyboard through a (physical) switch for connecting multiple systems to
one keyboard/mouse/monitor, this is actually what is happening.  (And
I use a switch like that myself).



Re: system slowdown when copying audio CDs

2001-03-13 Thread William T Wilson
On Tue, 13 Mar 2001, Romain Lerallut wrote:

 Funny that the behavior of the CD drive is so different in the audio
 mode than in the data mode.

It isn't really.  Data CD's contain data headers that help the drive
position itself in arbitrary locations - similar to sector headers on
floppy and hard disks.  Audio CD's contain only vague information.  They
aren't designed to seek so precisely.  Furthermore audio CD sectors are a
different size from data CD sectors.

When you try to seek somewhere on an audio CD, you can rig the CDROM to do
it, but you will not have the required information to aim precisely.  
And, you can read only a few sectors at a time.  With a data CD you can
tell the CDROM drive which blocks you want and it will just go get
them.  With an audio CD, you give it coordinates on the disk, it will go
somewhere in that area and return a few blocks.  Then you have some small
amount of time to get the next read request in before the drive loses its
place and you have to start the whole seek process over again.

All in all it is very difficult to treat audio CD's as data and this is
why CD rippers are so hard to write (and so slow).



Re: kill: cannot kill some processes

2001-03-03 Thread William T Wilson
On Fri, 2 Mar 2001, Ron Peterson wrote:

  away.  They don't consume any CPU time, or any other resources other than
  the slot in the process table and the less than 1K of memory required to
...
 Not entirely true.  Init can inherit enough zombie processes that it
 hits its process limit (1024, if I remember correctly).  Can you

Well, like I said, they do still take up the slot in the process table.

Zombies that *have* been inherited by init go away - it's those that are
still waiting for their parent process to check their status that pile up.  
Init itself doesn't have any limit on the number of zombies it can clean
up; otherwise it would be a problem on any system with a long uptime.

 'shutdown'?  Nope.  Not unless you can free up a slot.  And if
 something's going haywire and spawning zombies quickly, this can be a
 problem.

Linux reserves some process slots for root, so unless your haywire program
is running as root you are at least partly shielded from this.
Control-alt-delete should still be able to reboot the system in such a
case (or you can log in as root on the console), if it comes to that.

 Not a common occurance, though...

This is true :}



Re: kill: cannot kill some processes

2001-02-23 Thread William T Wilson
On Thu, 22 Feb 2001, brian moore wrote:

  does the process list Z under STAT ? if it is the process has gone
  zombied and i don't think there is much you can do. sometimes zombie'd
  processes die on their own eventually many times they will not die until
  you reboot ..
 
 Not quite true... zombies don't ever die: they're already dead.

While the description of zombie processes is accurate, I think another
likely situation is that the process is in uninterruptible sleep, i.e.
the 'D' state.  This happens when a process is blocked in a system call -
it will be 'D' until the kernel function returns.  Kernel bugs, hardware
problems, and dead NFS mounts can cause these kernel functions to take
a long time or forever.

In such a case, you really are stuck; unless the resource the process is
waiting for frees up, it's going to hang around until a reboot.

One thing about zombie processes: don't worry about trying to make them go
away.  They don't consume any CPU time, or any other resources other than
the slot in the process table and the less than 1K of memory required to
hold their state information.  They are not worth worrying about.
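For anyone curious, a tiny sketch of where zombies come from and how they
get reaped:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* The child exits immediately... */
        exit(0);
    }

    /* ...and stays a zombie (state Z in ps) until the parent collects
       its exit status.  Run "ps l" during this sleep to see it. */
    sleep(30);

    /* waitpid() reaps the zombie; if the parent dies first, init
       inherits the child and reaps it instead. */
    waitpid(pid, NULL, 0);
    return 0;
}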



Re: Antwort: Rebooting is foolish ....

2001-02-16 Thread William T Wilson
On Fri, 16 Feb 2001 [EMAIL PROTECTED] wrote:

 As far as I know you only might want to reboot if you change the
 hostname and want it active. If you change the partitiontable it might
 be usefull.

You don't need to reboot to change the hostname, either.  The command is
'hostname'.

You need to reboot to change the partition table of a disk with mounted
filesystems, and you need to reboot to recompile the kernel, and you need
to reboot for hardware upgrades.  That's about it, really...

 The Linuxcommunity is proud of their uptimes, so we never reboot...

That is the real reason :}



Re: backing up a complete linux system

2001-02-16 Thread William T Wilson
On 16 Feb 2001, Krzys Majewski wrote:

  tar -cvzf hda2.tar.gz /
  ( I don't need to add this ... --exclude hda2.tar.gz do I ??) 
 
 Yeah you probably do. You might want to exclude other stuff to, like
 /proc, /mnt, /tmp, and possibly /dev.  Normally device files are
 created with /dev/MAKEDEV.

With recent GNU tar you don't need to exclude the tarfile explicitly.
With other versions of tar you sometimes do.

The simplest thing to do is simply to make individual backups of each
filesystem (the l option keeps tar on one filesystem).  This way you can
take backups frequently of filesystems that change a lot (/home), less
frequently for those that don't change much (/usr), and never for others
(/tmp, /proc).

  anyway... the part where I'm stuck... 
  I have no idea how to restore this tarred file on to a totally new
  drive

You just untar it.  Make sure that the filesystems are large enough to
contain the data, and untar it.  You can of course build a whole
filesystem tree under /mnt (or wherever).

 I'd probably first leave the two drives where they are, reboot to the
 LILO prompt, pass a kernel parameter like root=/dev/hdc2, and see if
 the new filesystem does the right thing (ie boots). I've never

It should.  However, there are a few caveats.  Some systems can't boot off
of hdc or hdd and require the kernel to be on hda or hdb.  Other systems
can boot hdc/hdd but can't do LBA on them, meaning the kernel has to be on
the first 500MB of the disk.  This refers only to the location of the
kernel, the root filesystem can be anywhere.

Note that you'll have to change your /etc/fstab to reflect the new layout.



Re: Rebooting is foolish ....

2001-02-16 Thread William T Wilson
On Fri, 16 Feb 2001, William Leese wrote:

 ..okay, so we have maxtor, seagate, conner (same company as seagate
 maybe, but they still sell HDs under their name) and i think i've
 heard something about samsung.. so, which HD manufacturer makes
 reliable HDs, anyone? IBM maybe?

IBM drives are quite good.  I've had good luck with Western Digital and,
surprisingly, Fujitsu.  (There seem to be two kinds of Fujitsu drives,
those that die immediately and those that last forever).

There's also Quantum; are they owned by someone else now?



Re: Which computer to buy ?

2001-02-15 Thread William T Wilson
On Thu, 15 Feb 2001 [EMAIL PROTECTED] wrote:

   IMHO, you should buy Intel, since AMD chips don't do floating
 point operations adequately (these are important in graphics), unless

That isn't really the case any more.  Not since the K6, really.  The
Pentium 4 has extremely bad floating point; the Athlon is still faster
than the Pentium 3 in that department.



Re: Cat-ting binary files to the console

2001-02-08 Thread William T Wilson
On Thu, 8 Feb 2001, Benjamin Pharr wrote:

 Every once in a while I slip up at cat a binary file to the console.
 (Or just forget to give mkisofs the -o flag.)  This causes the console
 to use WEIRD characters, just plain gibberish.  Is there any way to
 get rid of this without rebooting?  Thanks!

Type 'reset' at the console (you have to be logged in).
'stty sane' should work also, but 'reset' is more thorough.



Re: Firewall

2001-01-31 Thread William T Wilson
On Tue, 30 Jan 2001 [EMAIL PROTECTED] wrote:

 I have some questions about building a firewall.  I currently have a
 cable modem connection which of course gives me a static IP address.  
 If I was to build a firewall using a old 486 could I still assign my
 Debian box the static IP address as it is needed for my server which I

No.  The firewall system has to have the real IP address.  What you need
to do is use IP Masquerading on the firewall system.  You can then use
ipmasqadm to redirect whatever ports you need to the system(s) behind the
firewall.



Re: those problems where the easiest thing to do seems to be to reboot...

2001-01-19 Thread William T Wilson
On Fri, 19 Jan 2001, CND OConnor wrote:

 1) you 'cat' a file you shouldn't in the console mode. before you know
 it everything on the commandline becomes an unreadable mess of ascii
 characters you didn't know you had.

'reset' will cure this.  That's the command 'reset', not reset the system
:}

 2) X crashes (hasn't for a while actually) and dumps you in a grey
 screen with a mouse pointer and nothing else. (That would be running

Control-alt-backspace will boot you out of X and back to the console.  If
you have xdm it will restart; otherwise you go back to whatever console you
started X from.



Re: Console Blanking

2001-01-17 Thread William T Wilson
On Wed, 17 Jan 2001, will trillich wrote:

 how do you establish screen-blanking preferences for text consoles,
 when there's NO x installed at all?

setterm is what you need.  setterm -blank (and setterm -powersave) allow
you to control this.



Re: 'S' permissions

2001-01-15 Thread William T Wilson
On Mon, 15 Jan 2001, Rob VanFleet wrote:

 I know what s is, when designated in the permissions of a file, but what
 does a capitol 'S' stand for? ie:
 
 drw-r-Sr--

It means the s bit is set, but the x bit is *not* set.

Not used very much...



Re: Permissions 101

2001-01-15 Thread William T Wilson
On Mon, 15 Jan 2001, Bob Bernstein wrote:

 $ ./sutest
 does this work?
 /var/log/user.log: Permission denied
 
 Can someone explain what's going on here? Is starting a shell the problem?

The setuid bit doesn't work on shell scripts.  You will have to compile a
C program or use perl.  Perl scripts work with the setuid bit because
perl has a special setuid executable to run them with.

Essentially, having shell scripts honor the setuid bit would allow a
malicious user to trick the system into running a false interpreter with
root permissions.  This won't do, so setuid shell scripts are prohibited.
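For what it's worth, a minimal sketch of the compiled-C route, using the
log file from your test (a real setuid program should be far more careful
than this):

#include <stdio.h>

int main(void)
{
    /* Installed setuid root (chown root, chmod 4755), this runs with
       root's effective uid even when started by an ordinary user. */
    FILE *f = fopen("/var/log/user.log", "a");
    if (!f) {
        perror("/var/log/user.log");
        return 1;
    }
    fprintf(f, "does this work?\n");
    fclose(f);
    return 0;
}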



Re: Windows Hyperterm alternative for Linux

2001-01-12 Thread William T Wilson
On Fri, 12 Jan 2001, Frank Rocco wrote:

 Can someone point me to a program that does the same thing as the
 Windows HyperTerm program?

You probably want Minicom.  It resembles the old DOS program Telix; there
should be a package for it already.



Re: sockets

2001-01-10 Thread William T Wilson
On Thu, 11 Jan 2001, Marc-Adrian Napoli wrote:

 i am a non-root user on a debian 2.2 system and i cant write a c program to
 open sockets.
 
 i am a non-root user on a solaris system and i am able to write c programs
 that open sockets.
 
 is there a switch/setting somewhere on a debian system to change this? or is
 root the only user allowed to create sockets on a debian system?

Anyone can create sockets, although only root can bind to ports below
1024 (this should not be different on Solaris).  The problem is probably
with the program.  Can you send an example of the code that causes the
problem, and maybe someone can find it :}
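In the meantime, here is a minimal sketch of the kind of code that should
work for any user (nothing Debian-specific about it; the port number is
arbitrary):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int s;
    struct sockaddr_in addr;

    s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) {
        perror("socket");        /* creating a socket needs no privileges */
        return 1;
    }

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000); /* only ports below 1024 are reserved
                                    for root */

    if (bind(s, (struct sockaddr *) &addr, sizeof addr) < 0) {
        perror("bind");
        close(s);
        return 1;
    }

    printf("socket created and bound as an ordinary user\n");
    close(s);
    return 0;
}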



Re: Undeletable file

2000-12-20 Thread William T Wilson
On Wed, 20 Dec 2000 [EMAIL PROTECTED] wrote:

 of.  I tried rm, chmod, chown on this file as root: all returned 
 permission denied.

It's possible that the file has got the immutable flag set, somehow.  Try
chattr -i filename

   br-xwx1 282708308 114, 114 Dec  9  2023 991203.c
   ^
 
 What does the b indicate, and how do I get rid of this file?

b indicates a block device file, which shouldn't affect the ability to
delete it.

What happened to this drive?  Are you sure the hardware is in good working 
order?



Re: Number of processors

2000-12-05 Thread William T Wilson
On Tue, 5 Dec 2000, Christopher W. Aiken wrote:

 Nope.  We have to use some C or C++ system/function call.  Our
 programmers don't want to depend on the /proc file system being
 available.

Any reasonable Linux system will have the /proc file system.  There is no
system call to do it in C.  If there were, it would certainly not be
portable, which is the only real disadvantage of using /proc.  If they
don't want to read /proc/cpuinfo perhaps they would rather read /dev/kmem? :}

A complete list of system calls is in /usr/include/asm/unistd.h and/or in 
/usr/man/man2.  If there should happen to be a way, it'll be in one of
those two places (but I don't think there is).
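For completeness, a sketch of the /proc/cpuinfo approach from C; the only
assumption is the file's format (one "processor" line per CPU on x86):

#include <stdio.h>
#include <string.h>

/* Count CPUs by counting "processor" lines in /proc/cpuinfo.  Other
   architectures may format the file differently, which is the
   portability catch the programmers are worried about. */
int count_cpus(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    char line[256];
    int n = 0;

    if (!f)
        return -1;
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "processor", 9) == 0)
            n++;
    fclose(f);
    return n;
}

int main(void)
{
    printf("%d\n", count_cpus());
    return 0;
}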



Re: slaying the inodosaur

2000-12-04 Thread William T Wilson
On Mon, 4 Dec 2000, Justin B Rye wrote:

 Is a niced rm -rf as safe as I'm going to get, or is it worth
 messing about with while sleep 1 do stopafter...?

If your hardware is in good working order you can easily just do rm -rf.
Probably no need even to nice it; nice only affects CPU allocation, not
I/O allocation.  So if the disk is in a heavily loaded production system
you are going to want to do it during off-peak hours.



Bunches of virtual consoles

2000-11-11 Thread William T Wilson
So suppose I wanted to have more than 12 virtual consoles on my system,
but I only have 12 F-keys to select them with.

I know the kernel supports up to 255... is there any way to use more than
12?



Re: Bunches of virtual consoles

2000-11-11 Thread William T Wilson
On Sun, 12 Nov 2000, C. Falconer wrote:

 Yup - allocate 13-24 and you can use right-alt + F1-12

Rock on.  I will never run out of logins again!  :}



Re: GPL and software I have written

2000-11-01 Thread William T Wilson
On Wed, 1 Nov 2000, Brooks R. Robinson wrote:

   I have a dilemma, and I expect this to end in a flame war, but
 here goes...

Hmmm.  Ok, you're ugly, and your mother dresses you funny.

   I have issues with my employer that cause me to not want to
 merely hand over my work.  I have never released/published any

Yes.  Merely not owning the company is issue enough for that :}

 I would like confirmation. Since I am not modifying any existing
 software, I am creating new software, I can charge for the new
 software.  This could be a license fee or something.

The copyright owner can do whatever he wants with his own code.  This
includes licensing it to some people under one license and licensing it to
others under the GPL (MySQL works this way, as does Quake).

You can always charge for GPL software.  Well, you cannot charge for the
software itself, but you can charge for the act of distributing it.  Such
a charge is not necessarily limited to the actual distribution costs.  So
you can charge $500 for a CD of your software whether it is GPL or
not.  But you cannot restrict anyone you distribute software to from
further redistributing it, for any or no cost.  In the case where the
distribution itself is a valued service, people will often pay for that -
such as the case with Linux distributions.

 a computer). Mini-SQL has it's own license (NON GPL) that they would
 have to purchase separately (I developed this as a student, so I am
 not require to pay money for a license, but they would as a commercial
 site/use).

You could always convert your system to use Postgres, mSQL, or MySQL (now
that MySQL has transactions it is suitable even for ecommerce use).

   In essence, I am providing them C code, which they can compile
 and execute. Am I in the ballpark or have I gone off the deep end?

You're fine :}



Re: where to put shell scripts?

2000-10-31 Thread William T Wilson
On Tue, 31 Oct 2000, Bud Rogers wrote:

 I think it could be argued that those changes are not necessarily good
 from the standpoint of system security.

In the modern world, sbin really does mean system binaries.  The
division between things you need to fix a crashed system and things for
ordinary use is made by whether something is in /(s)bin or in
/usr/(s)bin.

It does not have really much to do with security.

Ordinarily sbin is not in an average user's path because there is not much
an average user can do with the stuff in there.  I don't know of any
modern Unix system in which the programs in sbin are actually not
accessible.

It doesn't really matter because you have to assume that any user has
access to the source code for basically every tool (especially on Linux!),
and you can't really count on restricting access to an executable as any
form of security.

I actually think that Linux tends to have the best-organized tools.  Those
that have to do with system administration are in /sbin, and those that
are intended for use by normal users are in /bin.  Those that are always
needed are in /sbin or /bin and those that are needed only for normal use
are under /usr.

I also question the historical accuracy of 'sbin' as static binary -
Unix has always had /sbin, but it hasn't always had dynamic linking.




Re: /usr/bin before /usr/local/bin?

2000-10-31 Thread William T Wilson
On 31 Oct 2000, Hubert Chan wrote:

 My sudoers file is basically just
   hubert ALL=(ALL) ALL

This can be extremely convenient.  But it also makes the security of the
whole system equal to the security of your user account.

If you are worried about security, and you have a situation like this, you
have to take as much care with your personal account as you would with
root.  So you must never type passwords unencrypted over the network,
leave yourself logged in, etc. unless you are sure that the situation is
secure.



Re: Security of sudo [was: Re: /usr/bin before /usr/local/bin?]

2000-10-31 Thread William T Wilson
On Wed, 1 Nov 2000, Damon Muller wrote:

 Without actually knowing your password, which sudo requires, having
 your account *isn't* equivalent to having root.

It's certainly possible to build a rootkit style setup which would be
suitable for converting a privileged account into root.

What if I write aliases for 'ls' and other common file utilities to
conceal my existence, and install a trojan 'passwd' or 'sudo' program (or
something along those lines) which (in addition to passing all your
arguments to the real program) also logs and secretly reports your
keystrokes?

Counting on someone with access to your account not to eventually get hold
of your password is almost like counting on a chroot() jail to contain
someone with root access.  It's a nuisance and can slow down an attacker
(or stop an inept one) but really doesn't provide much additional
security against a quality attacker.



Re: X config problem causes me to have reinstall entire OS--!!newbie warning!!

2000-10-30 Thread William T Wilson
On Mon, 30 Oct 2000, Jim Merante wrote:

 Is there a keystroke combination that will prompt the
 boot commands and allow me to skip the load X windows
 command?

When lilo comes up you can hit tab.  Then type "linux single" and you
will get a root prompt.



Re: Should I overdrive my monitor a little bit?

2000-10-26 Thread William T Wilson
On Thu, 26 Oct 2000, Mark Phillips wrote:

 but my monitor only has a horizontal frequency range of 30-60.  I am
 thinking that perhaps changing it to 30-63 won't hurt, but the XFree86
 Video Timings HOWTO warns about overdriving, so I am hesitant.

You can exceed the bandwidth rating of your monitor, but never ever exceed
the horizontal frequency.

Fortunately, having read the Video Timings HOWTO, you are now qualified to
find a modeline in that resolution that your monitor *does* support :}



Re: Copy hard-drive

2000-10-26 Thread William T Wilson
On Thu, 26 Oct 2000, Matheson wrote:

 I want to make an _exact_ copy of my hard-drive to my friend's
 hard-drive, including partitions, boot-able, etc.  Is their a way to
 do this with LInux?  I know that I used to use a program with Windoze
 that could

If the disks are the same size exactly, you can use dd:

dd if=/dev/hdb of=/dev/hdc

which will copy /dev/hdb to /dev/hdc.  It can take a while (and won't
give you status updates), be patient.

If the disks are not the same size exactly, you are better off to set up
the partition table yourself with fdisk and copy each partition
individually.

dd if=/dev/hdb1 of=/dev/hdc1

etc etc.

To use dd to copy partitions, the filesystem on the source partition has
to be unmounted or mounted read-only.  (To use dd on a whole disk then
*every* filesystem on the disk has to be unmounted or read-only).

Note that you will in this case have to make sure that any given
destination partition is at least as big as the source partition.  It will
probably be impossible, if the disks are different geometry, to make the
source and destination *exactly* the same size so just be sure the
destination is bigger.

Depending on how much of your disk is used and unused, it might be faster
to use tar.  If you have lots of unused space (say more than half the
space on the partition is free), you can potentially do better with tar.  
In this case, of course, both partitions have to be mounted.  Then do:

cd /mnt/hdb1
tar cf - . | (cd /mnt/hdc1 && tar xpvf - )

(assuming of course the partitions are mounted on /mnt/hdb1 and /mnt/hdc1
respectively, use the appropriate mount points for your system).  This has
the advantage of potential speed (though it will be slower on a full disk); you
get status reports, and the filesystem that gets written will come out 100%
defragmented.  But if anything is writing to a file when it comes time to
get it copied then it will not be copied correctly.  So if you are trying
to copy your whole installation from one disk to another, at least do it
in single user mode. :}



Re: Frustrated Windows user making switch

2000-10-23 Thread William T Wilson
On Sun, 22 Oct 2000, Chad Scott wrote:

 My first problem is that my mouse doesn't work. It's a Logitech serial
 mouse, and I've tried the Logitech, Microsoft and Auto options in
 XF86Setup, but none work. My second problem is that XF86Setup tells me
 I need to have the SVGA server installed. I don't know how to do that.

Use the serial mouse driver.  Both the Logitech and Microsoft drivers are
'specialty' and not for use with ordinary mice even if they are made by
Logitech or Microsoft.  If your mouse is really a PS/2 mouse (99% of
serial mice function as PS/2 mice also simply by connecting them to the
PS/2 port) then use that driver.
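
For reference, the mouse bit of XF86Config is the Pointer section; for the
PS/2 case it would look something like this (just a sketch, adjust the
device to taste):

Section "Pointer"
    Protocol    "PS/2"
    Device      "/dev/psaux"
EndSection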

You'll also want to make sure that you set up the proper mouse link in the
/dev directory, as follows:

If your mouse is on COM1, you want to do 'ln -sf /dev/ttyS0 /dev/mouse'
and if it is on COM2, you should do that same command except using ttyS1
instead of ttyS0.

I have never used XF86Setup.  I think it's dumb and am still using the
XF86Config file I've had since 1995 anyway.  You might also try the
program 'xf86config', which is text-based - if it's even still around; I
used it a couple of times on RedHat.

If you can get X to come up at all and display actual graphics,
congratulations, you have completed one of the hard parts.  If not,
then:

You can check which X servers you have installed by doing:
ls /usr/X11R6/bin/XF86*

Hopefully one of these will match the video chipset you have.  If you
don't know what video chipset you have, someone can probably tell you if
you know the manufacturer and model.  Another option is SuperProbe, which
is good at figuring this stuff out.

 Understand that referring me to XF86Config means nothing to me. I
 don't even know how to open the file. I'm sure this type of help

You will probably want to use 'pico' as your first editor.  'joe' is
another good choice.  Both of them are fairly non-intimidating, generally
do what you expect and importantly have easily accessible online help.  
Pico is in non-free, but you are likely to have it anyway.



Re: Swap space signature

2000-10-21 Thread William T Wilson
On Sat, 21 Oct 2000, Ken M. Mevand wrote:

 anyone knows what the message Unable to find swap space
 signature means during boot?  my swap partition is 40Mb on hdd2.

It's actually generated by the 'swapon' command.  A swap partition has to
be type 82 and it has to be prepared with 'mkswap' before you can use it.  
The installer will do this for you, but if you change your partitions
around later you have to run mkswap yourself.  Think of mkswap as mke2fs
for swap partitions. :}

(Actually, if it's a partition type other than 82 it will still *work* but
you should still set it to the right type :} )
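
In short, something like this (using the partition from your message -
double-check the device name before running mkswap on anything):

mkswap /dev/hdd2
swapon /dev/hdd2

plus setting the partition's type to 82 in fdisk.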



RE: linux + its size

2000-10-20 Thread William T Wilson
On Fri, 20 Oct 2000, Fuad wrote:

 I like to know if it will fit on a 200mb hard disk and if the
 installation supports a SCSI hard disk

Yes to both.  But 200mb is not enough for a full install.  You have to
pare it down a great deal; if possible, find someone experienced to help
with it.



Re: Advice to newbie, please

2000-10-19 Thread William T Wilson
On Thu, 19 Oct 2000, Rudi Borth wrote:

 Q1: Would this make sense for a single user who is not a programmer?

You do not have to be a programmer to use Linux.

 My system has been made Y2K compliant with HOLMFIX, shows the date
 correctly, and includes: CPU 80486, 25 MHz, RAM 8 MB, SuperVGA

It will be a little difficult to make Linux run efficiently with 8MB of
RAM and 160MB of disk space.  You can run fine with 8MB of RAM if you
don't mind X being slow (X-windows will be slow if you have less than 16MB
of RAM) but 160MB will constrain what you can do - especially since
you'll need to allocate 20MB or so of it to swap space.

In 1995 I stuffed Slackware installs into 100MB, but I don't know if this
would do very well with a modern Linux system.  Most Linux distributions
fall into either the category of extremely small minimal systems, or full
featured systems.  A fully featured system can be pared down to fit into
100MB, but only by an advanced user, really.

It's hard to say how I would think you should proceed.  If you can find
one on CD, an old Linux distribution from 1994 or 1995 - the era your
computer dates from - would probably be the easiest to get running.  Such
a system would be filled with security holes.  But it would work, and fit
in your disk space requirements.  Unfortunately some modern software will
probably not run on such an old system.

A better option, if you can spend a little money, is to buy an old 500MB
or so disk drive.  This will be enough to hold a minimalist, but still
modern, system.  If you are willing to live without X, you can do fine
with your current system.  But this will make you unable to use Netscape
or Opera.  (You can still use Lynx, of course).

 CD-Rom readers

What is this?  Do you mean software to access the CD-ROM?  That is
built into Linux.

 XTreeGold v3.0

XTree is a disk utility, right?  Midnight Commander is probably the
closest equivalent.  However if you gain a little experience with the
command shell you will find that it is quite powerful especially compared
to DOS, and you don't really need such utilities.

 Word processor SemWare Editor Junior

I am not familiar with this product.  However Linux has emacs, which is a
text editor more powerful than most older word processors, and joe, which
is a clone of Wordstar.  There is also StarOffice, which closely resembles
modern MS Office, but which will exceed the capabilities of your system.

 Basic, C, C++, Forth

The default Linux compiler GCC can easily handle C and C++ code.  There
are BASIC interpreters, but I am not familiar with them.  I've seen Forth
utilities also but have absolutely no knowledge of them.

 Chess, FreeCell

I don't know if there is a FreeCell equivalent or not.  There are plenty
of other simple addictive games if there is not :} There are definitely
chess programs available, but the only one I know of, XBoard, requires X
Windows.

 Internet software: Trumpet WinSock v3.0 Rev C, Opera v3.62,
 Eudora Light v1.5.4, WinTel v4.3.5, Telix for DOS v3.22,
 AtomClock, Integrity Master v4.21a

The equivalent of WinSock is built in.  Opera is available for Linux.  
Eudora is not, but Pine and Mutt are more capable.  I don't know what
WinTel is but if it is a simple Internet tool, an equivalent is probably
available for Linux.  There is no Telix, but Minicom is almost an
identical clone of it.  I don't know what AtomClock or Integrity Master
are, but there are lots of clock utilities available - but you'll find
that Linux does not lose time like DOS does, so you probably don't have to
worry about it so much.

 HTMLed, RoPS, Acroread, MSWordViewer, WinJPG29

HTML editors are kind of scarce for Linux, actually.  Emacs has an HTML
mode but it is by no means WYSIWYG.  I don't really do HTML so I can't
help with that.  I do not know what RoPS is.  Acroread is available, along
with Ghostview which can view PostScript files.  Both of these require X
Windows.  MS Word Viewer is not available.  I know StarOffice can read
Word files but it will be too much for your system.  There might be a
converter to change Word documents into text files so you could view them
easily.  There are a variety of image programs available - XV can handle
most image viewing and conversion requirements, if you need an editor the
Gimp is available which is a photoshop clone.  Your system can
handle XV but probably can't run Gimp.

The good news is that every program I have mentioned is available at no
cost and most of them come with source code.  It's the Linux way :}



Re: Suggestions for buying a modem

2000-10-12 Thread William T Wilson
On Thu, 12 Oct 2000, Matthew Dalton wrote:

  It turns out that I have a lucent winmodem which will not work on Linux ( I
  have done around 2 weeks of research on it !!). So I have to buy a new

A very few winmodems will more or less work nowadays.  Of course they suck
just as much under Linux as they do under Windows.  But I have no idea
which these are.  Try www.linmodems.org.

If you really do have to buy a new modem, any modem which supports DOS or
OS/2 will work, as will any external serial modem.  But external modems
usually cost a lot more.



Re: 60 gig drive

2000-10-11 Thread William T Wilson
On Wed, 11 Oct 2000, Debian Ghost wrote:

 I was thinking about getting a 60 gig hard drive and was wondering
 what linux constraints were on having a drive that big. Would I be

As we've been discussing lately large hard drives can cause a ruckus with
older disk utilities.  In some cases your BIOS can create a nuisance but
since the BIOS isn't really needed for Linux this is only a nuisance for a
few things (like where your kernel is located on the disk).

 able to have one large 60 gig partition or would I have to break it up
 into several smaller partitions?

Ext2fs has a partition limit of 2 terabytes, which ought to be enough for
at least another 2 or 3 years. <g>



Re: 60 gig drive

2000-10-11 Thread William T Wilson
On Wed, 11 Oct 2000, Mike wrote:

 The drive (from dmesg):
 hda: WDC WD153AA-00BAA0, 14679MB w/2048kB Cache, CHS=1871/255/63, (U)DMA
 
 So what's up with my box?  *Am* I just getting really lucky that this
 works?  Am I likely to get bit in the ass by this some day?  Or is the
 new lilo really that capable?

Just make sure your kernel is located somewhere on the first 8GB of the
disk.  If your disk is split into multiple partitions (which at that size
it ought to be) just make sure your root partition is entirely within the
first 8GB.  If not then you may get bitten sooner or later. :}

As far as I know, LILO still uses the BIOS to access the disk.  The kernel
does not.  So if your BIOS cannot see the whole disk, you can still use it
with Linux but LILO will not work if the kernel moves out of the 'visible'
area.



Re: How to partition a 10GB disk

2000-10-09 Thread William T Wilson
On Mon, 9 Oct 2000 [EMAIL PROTECTED] wrote:

 I have a 20 Gb HD. The BIOS has detected it since I installed it from
 the first time.

Once the kernel is booted the BIOS doesn't make any difference.  BIOS only
matters if you are trying to boot from the big disk AND your kernel is not
in the first 8GB of the disk.



Re: Can shell-script be setuid ?

2000-10-06 Thread William T Wilson
On Fri, 6 Oct 2000, Brad wrote:

 Also, doesn't perl use a special suid binary to run these scripts,
 because as far as the kernel is concerned it just hands it to
 /usr/bin/perl non-suid. Perl detects that the script is suid, and does
 the security handling and restarts suid with that binary.

Just so, just so.



Re: how can I add disk space?

2000-10-06 Thread William T Wilson
On Fri, 6 Oct 2000, Adam Scriven wrote:

 RAID stands for Redundant Array of Inexpensive Drives (Disks?)
 LVM stands for Logical Volume Management (IIRC).

I've always heard 'Disks' but I suppose, sooner or later someone will come
up with a drive which isn't a disk that is still suitable for RAID, and
then it will mean 'Drives'. :}

 RAID combines multiple partitions (not necessarily full drives) into
 one single drive.  Depending on the type of RAID that you use, you can
 have some sort of redundancy (RAID 5), or just combining your drives
 (RAID 0 is Striping, which is what does this, IIRC), or you can get
 pure backup with mirroring (RAID 1, again IIRC).

You're quite right.  However, we should distinguish between RAID
and the Linux multiple-disk driver.  RAID defines a method for spreading
data over multiple disks; the multiple-disk driver actually implements
this, along with some other things like linear concatenation (in which
data is stored on multiple partitions in order without any special
handling at all - this is the best solution for multiple partitions on
the same disk, or very often for two IDE drives on the same controller).

RAID doesn't protect against data corruption from system crash or power
failure.  For that you need to use a journaling filesystem such as
ReiserFS, ext3fs, or IBM-JFS.  In fact, in some cases it can actually make
this damage worse.  RAID only prevents damage from disk failure.

Note that the use of the multiple-disk driver to apply to partitions is
something of a Linux thing.  RAID is not required to work on partitions by
its nature.  In fact, the only time you really want to use RAID on a disk
partition is if you have got other data in other partitions on that disk
that you can't or don't want to relocate (for example, maybe it is your
root disk).  You normally do not want to make two partitions on the same
disk part of the same MD virtual device, except using linear
concatenation, as it will be extremely slow.
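
For what it's worth, a two-disk RAID-1 set with the old raidtools looks
roughly like this in /etc/raidtab (device names made up; the Software-RAID
HOWTO has the real procedure):

raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/hdc1
    raid-disk             0
    device                /dev/hde1
    raid-disk             1

then 'mkraid /dev/md0' and mke2fs on /dev/md0 as if it were an ordinary
partition.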

LVM is similar to the multiple-disk driver, and duplicates some of the
functionality, but really operates at a higher level.  Both systems allow
disks to be used as logical devices which are more flexible and capable
than the plain physical disks.  But instead of providing RAID features,
LVM allows you to dynamically allocate and resize logical volumes.  Disk
devices are put into volume groups and then logical volumes are allocated
from the volume groups.

It's usually much easier to add space to a logical volume than to shrink
one, so space is generally left unallocated until used.
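
To make that concrete, the usual LVM workflow is roughly as follows (the
device and volume names are made up, and the exact tools may differ a bit
between LVM versions):

pvcreate /dev/hdb1 /dev/hdc1       # turn partitions into physical volumes
vgcreate vg0 /dev/hdb1 /dev/hdc1   # collect them into a volume group
lvcreate -L 2G -n home vg0         # allocate a 2GB logical volume from it
mke2fs /dev/vg0/home               # then use it like any block device
lvextend -L +1G /dev/vg0/home      # later, grow it (and resize the fs too)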

LVM doesn't provide any of RAID's redundancy, but you can add MD virtual
devices to an LVM volume group.  There is currently some debate about
whether this is actually safe to do yet, although it should be safe in the
released 2.4 kernel.  For best efficiency the physical disks should all be
part of the MD virtual device and then that MD virtual device should be
the only thing in the volume group.



Re: how can I add disk space?

2000-10-05 Thread William T Wilson
On Thu, 5 Oct 2000 [EMAIL PROTECTED] wrote:

 There must be a way to use both HDs' disk space, isn't there one?

There are a few options.

First, you can mount one disk in the directory tree underneath the other.  
This will allow you to have the data written into that subdirectory stored
on one drive, and the data written elsewhere stored on the other.  This is
probably the easiest all-around option but depending on your data you
might not be able to arrange it so easily.
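
A rough sketch of that, assuming the new drive shows up as /dev/hdb and the
directory you want to move onto it is /home:

mke2fs /dev/hdb1
mount /dev/hdb1 /mnt
cp -a /home/. /mnt/
umount /mnt
mount /dev/hdb1 /home      # and add a matching line to /etc/fstab

(Do the copy with the system quiet, or from single user mode, so nothing is
writing to /home while you move it.)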

You could also export the new drive separately from the old one, so that
users would be able to select between the old share and the new one.

If you really need to have both drives combined into a single partition
you will have to use the MD (multiple disks) driver.  To do this you need
to add MD support to your kernel and read the Multi-Disk HOWTO and maybe
the Software-RAID HOWTO or the LVM HOWTO (LVM is more flexible, but
requires you to do patches or use the 2.4 kernel which is a major topic in
itself).

You should be able to find a package for the MD-tools you'll need to
combine the volumes; I don't, unfortunately, know what it's called.



Re: Can shell-script be setuid ?

2000-10-02 Thread William T Wilson
On Mon, 2 Oct 2000, Alex V. Toropov wrote:

 Can I make a shell script setuid ?

No.  Linux doesn't support this since it is insecure.  It works with perl
scripts only, because Perl does some extra checks and explicit handling to
make it work.

To get a setuid shell script you have to write a C wrapper program for it
or let the user execute it with sudo.
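
The sudo route is the easier of the two; a hypothetical /etc/sudoers entry
granting just the one script might look like:

alice   ALL=(root) NOPASSWD: /usr/local/sbin/rotate-logs.sh

(user and path invented for the example).  Just make sure the script itself
is owned by root and not writable by the users allowed to run it.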



Re: Off Topic : c++ function for rounding

2000-10-02 Thread William T Wilson
On Mon, 2 Oct 2000, William Jensen wrote:

 aren't what I'm after.  I'm looking for a function that will take 1.4
 and make it 1, but 1.5 or higher is 2.  Know what I mean?  Any
 built-in c++ function to do that?

Add 0.5 to the number before you call floor.



Re: firewall (fwd)

2000-10-01 Thread William T Wilson
On Sun, 1 Oct 2000, Mike Leone wrote:

 @home, the largest cable ISP in the US, *routinely* scans their
 customers, aggressively checking that no one is breaking their service
 agreement by running a server OF ANY KIND.

This isn't necessarily the case.  It certainly appears to vary by
region.  They don't do it here (Denver, Colorado).  Perhaps this is
because DSL is so easily available :}



Re: Good Book for setting up T-1?

2000-09-30 Thread William T Wilson
On Sat, 30 Sep 2000 [EMAIL PROTECTED] wrote:

 I have been using deb linux for some 5 years now and am quite happy
 with it.  It has been a webserver for me for only 1 of those years and
 that is on a DSL.  As it turns out, some of the people I've done some
 contract work with wish to install a t1 line and run debian as the OS
 on all the systems.

The hard part isn't setting up the Linux systems, it's setting up the
T1.  Once you have a plan in place for making that work, then worry about
the hosts on the network.

You already set up a webserver, so you know how to configure systems on
the network.  T1 gives you some advantages- you don't have to worry about
DHCP or anything, you just set your IP address and leave it.  If you have
a whole class C network your DNS gets a lot simpler, otherwise you need
some assistance from your upstream provider.  You can still handle
everything in 'yourdomain.com' fine but reverse DNS will not work.  But
the DNS HOWTO I believe has the trick for the reverse.  Mail is easy to do
too, you just have everyone deliver mail to the mailserver.  To handle the
case of '[EMAIL PROTECTED]' you can either use an A record for
'yourdomain.com' pointing to the mail server, or you can use an MX record.  
Never use a CNAME for anything having to do with mail.  Make sure your
mail server knows it has to handle mail for 'host.yourdomain.com' as well
as 'yourdomain.com'.  Of course you will have to set up POP3 or IMAP, but
these are not harder than installing any other program.
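
For example, the mail-related part of the zone might look something like
this (the names and address are placeholders):

yourdomain.com.         IN  MX  10  mail.yourdomain.com.
mail.yourdomain.com.    IN  A       192.0.2.5

and, as above, never point an MX at a CNAME.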

 Anyway, I have not done this before... maybe someone could point me in
 the direction of a list of hardware needed.. CSU/DSU, routers, etc...

You'll need a router and a CSU/DSU.  :} The Cisco 2500 series is the
'canonical' single T1 router.  But you can do this with Linux, too, if you
want.  Total cost is about the same, the Cisco has better routing but a
Linux system is more expandable and makes a much better firewall (get a
cheap Pentium to do the routing, the expensive V.35 serial hardware will
make up the difference in cost between the cheap PC and the more expensive
Cisco).  Bat Electronics CSU/DSU's are cheap (I paid like $400 for mine)
and easy to set up; they have an RJ-45 on one side for the T1 and a V.35
serial port on the other to go to the router.  The Bat CSU/DSU has a love
it/hate it reputation - they have essentially no features and a high
defect rate, but Bat will replace any defective ones and they are super
cheap, and they do perform all the required functions for a simple setup.

All my information dates from approximately 1997.  At the time there were
many T1 cards with integrated CSU/DSU's in development, but I didn't
consider any of them quite ready for prime time yet.  You might be able to
save more money by finding one of them.



Re: laptop hot swapping

2000-09-29 Thread William T Wilson
On Fri, 29 Sep 2000, David Smock wrote:

 stuff works great.  My only problem is hot swapping drives - is there
 any way to get linux to recognize the fact that ive changed block
 devices?

You mean you swap between floppy and CDROM drives?

If you compile these drivers as modules you should get the desired effect
by unloading the CDROM module before you take it out, and then loading the
floppy module after it is connected.  You may have to boot with the CD-ROM
in, in order for it to ever show up.  And you will probably have to compile
the kernel so the CD-ROM uses SCSI emulation, too.  This is also subject
to being able to compile the floppy driver as a module.  (Having never
seen the need to do this, I don't know whether it is possible or not).

If you do all this, I think it would probably work :}
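
The module shuffle itself would look something like this (the exact module
names are a guess and depend on how your kernel is configured):

rmmod sr_mod ide-scsi      # before pulling the CD-ROM out of the bay
modprobe floppy            # after the floppy drive is in place

and the reverse when you swap back.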



Re: ongoing sound problems

2000-09-29 Thread William T Wilson
On Fri, 29 Sep 2000, Christopher Fonnesbeck wrote:

 tried modules as well), and the sound card does get picked up and
 configured, albeit not correctly.

I cannot see what is wrong with the configuration you show here.  It seems
that it is finding the card and configuring it.  Are the addresses
wrong?  If they are correct, the card should work fine.

But I've never used MPU-401.  So I don't know if that works or not given
this output.  But the wave-audio should be ok.




Re: Getting CPU load (from /proc/?)

2000-09-27 Thread William T Wilson
On Tue, 26 Sep 2000, Krzys Majewski wrote:

 This is just a guess, but maybe you can't. Maybe the cpu is either
 100% busy or 0% busy, depending on whether or not linux is running a
 program. After a minute, you can say, OK, a program was running 20% of

It's a good guess.  In technical terms the load average is simply the
average number of processes in the runnable state over a period of time.  
You could take an instantaneous measurement of the number of processes in
the runnable state, but it wouldn't be a meaningful number.  (In fact,
this information is already produced by ps - running processes are marked
with 'R').  It's entirely possible that a lightly loaded system could have
many running processes at any particular moment, or that a heavily loaded
system could have few.
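
If you just want the numbers, the kernel already exports them; for
instance:

cat /proc/loadavg          # the three load averages, plus runnable/total
ps ax | awk '$3 ~ /R/'     # a snapshot of the processes currently runnable

but, as above, the instantaneous snapshot doesn't tell you much by itself.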

 the time. Maybe a second is not long enough to say anything
 interesting. -chris

It depends on what information you are really trying to get. :}



Re: OT what happens to mail when...

2000-09-27 Thread William T Wilson
On Wed, 27 Sep 2000, will trillich wrote:

 obvious one would be http/web stuff). seems very silly to offer that,
 and then shut down your server just to play
 'how-many-ways-can-regedit-fsck-my-shell*'?

But not everyone can afford to have a second computer to play games on :}

That said, there's something to be said for buying a 4-year-old (approx.)
used Pentium system and turning it into a dedicated Linux system and using
your modern system exclusively for Windows.  That's what I have (well... I
bought the aforementioned Pentium when it was new, also).

Other than having occasional IP Masquerading issues you will have no
trouble.  But using portfw in the 2.2 kernel I haven't yet run into a
masquerading problem I cannot solve...



Re: I'm afraid I've been cracked.

2000-09-27 Thread William T Wilson
On Wed, 27 Sep 2000, Alvin Oga wrote:

 egrep -i 'failed|failure|refused|not allowed|illegal port|blocked|denied|passwd' \
   /var/log/messages*

There is not much to gain by this.  If the information is found in your
logfile, they didn't get in :}

 check the binaries tooo...
   top, ps, ls, last, w, who, netstat, passwd, login, etc...

Absolutely do this.  I've seen rootkits these days that modify the startup
scripts too.



Re: superformat?

2000-09-15 Thread William T Wilson
On 14 Sep 2000, John Hasler wrote:

  floppy. Just seems a little wierd seeing DOS as the default on a Linux
  manpage...
 
 FAT16 is a pretty good format for floppies (that's what it was designed
 for).  Ext2 isn't.

ext2 does fine with floppies; it has a little more overhead than DOS, but
it also has proper filenames.  The problem is that hardly anybody uses
ext2 as a floppy format, everyone uses DOS, so that is the default :} Most
Linux systems have networking, whereas many DOS systems do not; if a
network is available, why use a floppy?
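
If you want to try it, it's only a couple of commands (superformat is in
the fdutils package; plain fdformat works too):

superformat /dev/fd0 hd
mke2fs /dev/fd0
mount -t ext2 /dev/fd0 /floppy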



Re: SMP and potato

2000-09-15 Thread William T Wilson
On Fri, 15 Sep 2000, Leonardo Dias wrote:

  was improving.  Does anyone know what the state of it is?  Is potato's SMP
  better than slink?  I would think it is a function of the kernel, not the
  distro, but I could be wrong.
 
 You are wrong. SMP is totally written in the kernel. But your doubt has
 values, because SMP has been implementated only in kernel v2.2, which
 was distributed only in newest distros.

SMP is implemented in 2.0.x kernels.  Some kernels are better at it than
others, and sometimes you have to edit the kernel Makefile to get SMP
support (this is all explained in the SMP HOWTO).
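
If I remember right, for 2.0.x that amounts to uncommenting the line

SMP = 1

near the top of /usr/src/linux/Makefile (it ships commented out) and then
rebuilding the kernel as usual.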

The trouble with SMP in 2.0.x is not that it is unstable, but that it
isn't very efficient.  If your processes are CPU-intensive, and/or you
have only 2 CPU's, you will be fine, but processes that do lots of
system calls or I/O tend to spend a lot of time waiting for kernel
locks.  But this is not disastrous, just suboptimal.  It is much better in
2.2.



Re: WP5.1 under DOSEMU uses 100% CPU all the time

2000-09-06 Thread William T Wilson
On Thu, 7 Sep 2000, I. Tura wrote:

 Everything seemed fine but it happened an event I feared. If I use WP the
 CPU starts going near to 100% and this bugs me a lot, specially because I

Under DOS there is no ability to idle waiting for an event.  Instead the
system must continuously poll the keyboard, which takes 100% CPU; it wastes
laptop power and it also kills performance of other applications running
in Linux.

I don't remember what the setting is, but somewhere there is a setting for
how aggressive DOSEMU is in detecting idle DOS applications.  Try turning
that up and see if you can't make DOSEMU force the issue a little bit :}



Re: Required Hardware?

2000-09-06 Thread William T Wilson
On Wed, 6 Sep 2000, Jeffrey H. Young wrote:

 hardware requirements says the system should have 12MB RAM.  Any
 chance I can get my system running with only 8MB?

It'll run, but you may have to go through some gyrations.  Don't expect X
to be useful, and the installer itself may give you problems, possibly
forcing you to install on another system and then transfer the HD image over.

Linux itself will run fine in 8MB, giving you a shell and decent
performance from most text based programs.



Re: Is monitor flicker a function of video card or monitor?

2000-08-31 Thread William T Wilson
On Thu, 31 Aug 2000, Krzys Majewski wrote:

 Can I take advantage of a new video card to reduce the flicker I 
 see in X Windows, or is this strictly a function of the monitor? 

Both.  In your case, it's probably the monitor.  Your video card is kind
of low-powered too.

The thing that determines flicker is the refresh rate.  It cannot exceed
the VertRefresh range in XF86Config, which in turn is limited by the
monitor itself.  Unfortunately, a monitor is also limited by the maximum
dot-clock it can support (this is also called bandwidth) and the
horizontal sync rate.  If the dot-clock is higher than the monitor can
support, the image becomes blurry and it becomes impossible to distinguish
individual dots.  If the horizontal sync is too high, the monitor can't
display a stable picture (and it's no good for the monitor, either).

My monitor (an NEC XV17) is 5 years old and has, even for the time, a
pretty low H-sync of 65.5 KHz, and lowish bandwidth of 85MHz which I run
at 100MHz.  This is good enough for me to get 1152x864 at about 70 Hz
refresh rate (or 1280x1024 at about 62 Hz - most unpleasant flicker and
the wrong aspect ratio too).

As resolution increases, the horizontal frequency and dot-clock needed for
a given refresh rate increase too.  This is why refresh rates are lower at
higher resolutions.  Horizontal sync rates tend to place the limit on
maximum resolution.

The dot-clock works out to the total pixels per scanline times the total
lines per frame times the refresh rate; equivalently, the H-sync rate is the
dot-clock divided by the line length, and the refresh rate is the H-sync
rate divided by the number of lines.  The refresh rate cannot exceed the
monitor's vertical frequency limit.
So you might find that your monitor has plenty of vertical frequency
available to display whatever refresh rate you want, but cannot use it at
high resolutions because you will exceed your bandwidth.  You can push the
limits with your rated bandwidth (in an emergency, you can sometimes
double it) to buy refresh rate, but your picture quality will decrease,
and it will increase the wear on the monitor.  You should try to avoid
exceeding your vertical sync rate and never ever exceed the horizontal
sync rate.

Besides not being able to exceed the bandwidth of the monitor, the
dot-clock also cannot exceed the capabilities of your video card.  This is
usually determined by
the video card's RAMDAC.  My video card (A #9 771 with an S3-968 chip) has
a pretty good 170MHz maximum dot clock; I think the average mach64 is
stumbling around at 100MHz or so.  Of course you don't lose anything if
this is higher than your monitor bandwidth.  XFree86 should tell you when
it starts up, anyway.

 I copied the modelines from my old machine to my new machine and they 
 work fine, but I'm wondering if I can do better. 

Typically modelines are monitor-limited.  The only case where they would
not be is if your video card is limited by the dot clock.

 Any pointers to rtfm on the meaning of things like hsync and vsync
 and dot clock also appreciated.

Look for the XFree86-Video-Timings or Modelines HOWTO.  If you can't find
it, I'll send it to you.

VendorName  "Magnavox"  # I typed this in
ModelName   "CM2015D1"  # I typed this in

See if you can't dig the specs up somewhere.

HorizSync   31.5 - 53
VertRefresh 50-90
Modeline "1024x768c"  65.0  1024 1036 1180 1304  768 771 777 802 -hsync -vsync
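
Running that modeline through the arithmetic above: 65.0 MHz / 1304 total
pixels per line is about 49.8 KHz of horizontal sync, and 49.8 KHz / 802
total lines is about 62 Hz of refresh.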

You can almost certainly do better than that - if your monitor is 17" or
larger.  If it's only 14" or 15" you might have fairly accurate readings
there.

Any 17" or better monitor will have a better HorizSync and bandwidth than
that and, by extension, better refresh rates.  That is only 65MHz of
bandwidth, not very much.



Re: limiting access

2000-08-20 Thread William T Wilson
On Sun, 20 Aug 2000, Robert Waldner wrote:

 I have a bunch of luser-accounts on one of my boxes, what I want is to 
 restrict them to their home-dir, with only very special exceptions.

You probably want to use rsh, the restricted shell (as opposed to rsh the
remote shell).
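
A quick sketch of the restricted-shell approach (bash behaves restricted
when invoked under the name rbash, so the symlink trick below should work;
see the RESTRICTED SHELL section of the bash man page):

ln -s /bin/bash /bin/rbash
echo /bin/rbash >> /etc/shells
chsh -s /bin/rbash luser

Then set PATH in the user's profile to a directory containing only the
commands you want them to have.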

 Any hints? iirc there is a way to set the root-dir to some other than /
 , but what's the command/utility for that?

chroot.  But chroot removes the entire portion of the filesystem above
where you chrooted to, so your chroot environment has to have its own
/usr/bin, its own libraries, its own /etc files... it has to be a fully
functioning system in its own right.  The most common use for chroot is
for anonymous FTP, which is probably the very minimum chroot environment
that works.

Chroot doesn't guarantee security, as setuid programs within the chroot
environment can still give root access, and users can still communicate
with non-chroot processes normally.  And it is not trivial, but usually
pretty easy, for someone getting root access within a chroot jail to get
out of it again.  The restricted shell can allow you to control precisely
what a user does, which can provide a different sort of security.



Re: Hardware Modems

2000-08-17 Thread William T Wilson
On Thu, 17 Aug 2000 [EMAIL PROTECTED] wrote:

 windows, reaching the capacity of 115 Kbps (at least, windows say that).
 Take a look at their DOS readme section:

Windows is lying, most likely :}  It frequently reports the connection
speed between the computer and the modem, rather than between the two
modems.

If the modem can run under DOS, it can almost certainly work under Linux.
The problem of setting a modem to use a particular COM port is different
from the WinModem problem in which the modem does not really use a COM
port at all.

There are a couple of things to try.  First of which is to boot DOS, run
the modem config program, and then load Linux with loadlin.  This will
leave the modem configured, as opposed to a real reboot, which is likely
to erase the modem config.

The second of which is to try to run the modem config program from within
DOSemu.  But for this you need to know all the port addresses used by the
modem so you can give DOSemu direct access to them.

 I think this CONFIGP is a TSR that makes the connection between the
 (hardware!!!) modem and the COM port.

Even if it is a TSR, it may not need to be one.  You can use the DOS mem
command - 'mem /c /p', or is it 'mem /c/p'? - to enumerate all the TSR's
running in a DOS session.  If it is not in the list produced by mem, it is
not a TSR.  My bet is that it is *not* a TSR - it just programs the modem
registers to use a particular setup and then exits, since it has nothing
else to do.  It would be extremely difficult for a DOS TSR to manage a
modem in the same way that Windows manages a Winmodem.

This sort of 'program the registers' program was very common a few years
ago, especially with sound cards, but with modems too.  I have no idea why
any modern modem would use such a thing instead of being a Plug & Play modem,
but there you have it :}

If it *is* a plug & play modem, you need to configure it with the PnP
tools - I bet pnpdump can see it.

 Isn't great? I spent a lot of money ($150) buying a hardware modem that
 cannot work under linux.

That is a lot of money for a modem!  Why didn't you get a cheaper one?

In general it is a bad idea to pay any attention to what tech support
tells you.



Re: Has Corel been violating the GPL for approx 6 months?

2000-08-11 Thread William T Wilson
On Fri, 11 Aug 2000, Spinfire Magenta wrote:

 No one has ever tested the GPL in court, and its difficult to figure
 what the possible outcome of an actual court case would be.

Nobody has been willing to risk it.  So apparently most corporate lawyers
feel that the likelihood of it being enforceable outweighs any potential
benefits of trying to violate it.

The FSF has settled with several companies, typically the settlement
involves the violating party correcting the violation and the FSF agreeing
not to make a big deal about it.



Re: what's the point of mp3's?

2000-08-09 Thread William T Wilson
On Wed, 9 Aug 2000, Ethan Pierce wrote:

 Subject: OT: what's the point of mp3's?
 
  -rw-r--r--   1 krzys    krzys     118700 Jul 31 17:28 hip1302mp3.mp3
  -rw-rw-r--   1 krzys    krzys    1308716 Aug  9 10:05 hip1302mp3.wav
  -rw-rw-r--   1 krzys    krzys     117718 Aug  9 10:06 hip1302mp3.wav.gz

You might try this again with the wave *before* it was converted to an
mp3.  The mp3 format simplifies the file a little bit for its own
compression; this is lossy compression so the simplification sticks around
even after the file is converted back to a wav.

I don't know for sure, but I think it's likely that the un-mp3'd wav file
will not compress nearly so well.



Re: Fixing the stuffed up terminal

2000-08-06 Thread William T Wilson
On Mon, 7 Aug 2000, Triggs; Ian wrote:

 I have noticed that if I accidently 'more' or 'cat' or whatever a
 binary file and the terminal displays unreadable characters, the best
 thing to do is to 'more' the file again, keep pressing space until the

A better way is to use the 'reset' command.  'stty sane' is another good
thing to try.



Re: cos() in math.h ?

2000-08-03 Thread William T Wilson
On Thu, 3 Aug 2000, Christophe TROESTLER wrote:

 Thanks to all for answering my very simple question.  Now, how was I
 supposed to know I had to link against `m'?  I mean, given a header
 file, is the file I have to link against specified in the doc?  Is
 there any info on that subject you can refer me to?

Unfortunately there's no one-to-one correspondence between header files
and library files.  Although every library intended for development will
have a header (otherwise you could not compile programs designed to use
it), not every header has its own library - a given library can have
multiple header files, and some header files aren't associated with any
particular library.

Anyway, the short answer is that the definitions of functions don't
necessarily say what library you have to link to actually get them.  You
pretty much just have to know, pick it up from looking at other
examples... or go rooting around through the libraries looking for it...

You can get a list of functions in a library by using 'nm -C file'.  Those
of class 'T' are function calls you can use.  (The -C makes it work with
C++ functions as well as C functions).
'nm -o -C *.a | grep funcname' is a good way of finding out which
library a given function is in, provided you know which directory the
library is in (/usr/lib is a good starting place :} ).  Don't forget that
when linking you don't specify the 'lib' or the '.a' - for example to link
with libm.a you just do -lm, not -llibm.a.

Symbols of class 'U' are not defined in this library, but rather are used
by this library but located somewhere else.  So don't be distracted by
those.

A library is really a collection of .o object files (see ar for more
details).  So nm (and other tools) will often tell you what object file
within the library is being referred to.  Normally, this only matters if
you are making your own libraries.



Re: cos() in math.h ?

2000-08-03 Thread William T Wilson
On Thu, 3 Aug 2000, Christophe TROESTLER wrote:

 simply need to include `math.h'.  However, when I compile, I got the
 error:
 
   /tmp/cc9WOsLC.o(.text+0x16): undefined reference to `cos'
   collect2: ld returned 1 exit status

This is actually a linker error - undefined references happen when the
linker (which might be called by the compiler) tries to assemble the
object files into an executable, but can't find all the function calls
that the program wants to make.  cos() is in the math library, libm.a.  So
you need to add -lm to the command line.
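
For example (the file names are just placeholders):

gcc -Wall -o myprog myprog.c -lm

As a rule, put -lm after the source or object files that use it.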

Including math.h will allow the compiler to compile the object code
(otherwise you would get warnings or errors about the function declaration
for cos()) but the actual code that does the computation is in libm.



Re: holly crap!

2000-08-03 Thread William T Wilson
On Thu, 3 Aug 2000, Andrei Ivanov wrote:

  I was user not root (little sigh), but I lost a lot of data.. Is there
  ANYWAY to recover all the lost files in /home/me ???
 
 I guess you were in home when you did that.
 Well, nope. Unless you made backups, whatever you deleted is now gone.

It is possible to recover the data with ext2ed... provided that you have a
pretty good idea of what it is, have enough spare space lying around to
make a complete copy of the partition, nothing has written over it, and
you know how the filesystem works.  If it's text data, you might be able
to fish it out by grepping the device file for parts of it.
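
For the grep approach, something like this (the partition name is only an
example, and the filesystem should be unmounted or mounted read-only
first):

grep -a -b 'a phrase you remember from the file' /dev/hda6

With a reasonably recent GNU grep, -a treats the device as text and -b
prints the byte offset of each match, which gives you a place to start
digging with dd or ext2ed.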

The 'gitview' program is invaluable for doing this sort of thing since you can
see text and binary data together.



Re: [Q] virus susceptibility data

2000-07-18 Thread William T Wilson
On 18 Jul 2000, Olaf Meeuwissen wrote:

 I'm looking for any kind of info on vulnerability to viruses on Debian
 and/or Linux.  Pointers to anti-virus programs are also very welcome.

There are no anti-virus programs because there are no viruses.  There are
a variety of security holes that crop up from time to time, but Windows is
far worse.

 If I can't convince some people here at work, I'm about to be told to
 disconnect from the net or use (heaven forbid!) Windows for any kind
 of internet activity beyond our firewall.  And that seems to include

This shows a remarkable lack of cluefulness on the part of your network
staff.  I wish you luck, but they appear to be so stupid that you will
probably not have much success.



Re: IPC

2000-06-20 Thread William T Wilson
On Tue, 20 Jun 2000, Parrish M Myers wrote:

 Does anyone know what standard Linux/Debian conforms to in regards
 to IPC?  I recently picked up W. Richard Stevens' book: Unix Network
 Programming Volume 2 [Interprocess Communications].  None of the
 programs included with the book will compile on either Debian or
 Redhat.

You can use UNIX domain sockets and the System V IPC (shmget, shmctl,
etc.)

It's likely that the problem is some sort of header or linking problem,
these things differ from one UNIX platform to another.  What errors are
you getting when you try to compile the sample programs?

  From what I understand Linux doesn't even conform to Posix1.

Linux is more POSIX compliant than anything except the standards book. :}



Re: Hard links

2000-05-24 Thread William T Wilson
On Wed, 24 May 2000, Sven Burgener wrote:

  108545 drwxr-xr-x  21 root root 1024 Feb 19 17:34 usr
 
 and now I issue:
 
 hp90:/root # find / -inum 108545
 /usr
 
 All I got is /usr! How can that be explained? I must be missing

Well, the inode for /usr is 108545, so when you search for that inode you
get /usr.  What's wrong with this?

Directories have their own inodes too, which are totally unrelated to the
inodes of any files that might be in that directory.  Therefore, if you
search for the inode of a directory, you should get only that directory.
Any object in a filesystem has an inode, including named pipes, symbolic
links, device files...

Also keep in mind that inodes are not unique on the system, as a file can
have the same inode number as an unrelated file on different filesystems.
Because of this when you use -inum you should probably use -xdev as well.
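
In other words, something like:

find / -xdev -inum 108545

which keeps the search on the filesystem that / lives on.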



Re: Hard links

2000-05-24 Thread William T Wilson
On Thu, 25 May 2000, Sven Burgener wrote:

 
   108545 drwxr-xr-x  21 root root 1024 Feb 19 17:34 usr
 
 ... I assumed that the hard links theory of files applies to directories
 in the very same way. That would mean that - if it were possible - there
 are 21 [hard] links to /usr somewhere on the system. That's what puzzled

There are.  The directory entry for /usr in / is one, as is the '.' entry
in /usr and the '..' entries in all of the subdirectories of /usr.  Find
ignores all of these except the first.

Other than these exceptions, no hard links to directories are allowed.

 Hmm. I am not completely new to *nix, but named pipes I know nothing of.

It's basically a pipe that lives in the filesystem.  The man page for
mknod touches on them briefly.  A process can open it for writing and
another can open it for reading.  All the data that the first process
writes is readable by the second as if it were a pipe, and none of the
data is ever actually saved in the filesystem.  This allows programs to
communicate with each other via pipes without necessarily knowing about
the process on the other end of the pipe. For example, you can use this to
have a program output its logfile into a pipe, then parse it with perl,
sed, or something and write the finished output into some other file, and
the original program writing the logfile does not need to have any special
support to do it.
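
A minimal sketch (the program and file names here are invented):

mkfifo /tmp/applog                              # or: mknod /tmp/applog p
grep -v DEBUG < /tmp/applog > /var/log/app.log &
someprogram --logfile /tmp/applog

The program writes to /tmp/applog as if it were an ordinary file, grep reads
from the other end, and only the filtered output ever hits the disk.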



Re: (ot) What is load average?

2000-05-22 Thread William T Wilson
On Mon, 22 May 2000, Jonathan Lupa wrote:

 I know that the first three are 5, 10, and 15 minute averages, but I'm
 not sure what load really is.

It is the average number of processes in the 'R' (running/runnable) state
(or blocked on I/O).  Very simple really.  Unfortunately interpreting
these numbers is something of a black art.  If your load average is
regularly over 1, you are losing performance and would benefit from more
power.  If your load average is under 1, you still have performance to
spare.

 I'm curious what those numbers represent and what reasonable values
 are for them... Are the CPU, memory, or IO related..? A pointer to a
 doc would be cool if available...

They're not memory related directly, and not memory related at all if
you're not swapping.  If you are swapping, that reduces the overall
performance of the system which can in turn make the load average go up.

The load average is most directly related to CPU.  Two CPU-intensive
processes running will result in a load average of 2, etc.  But I/O
intensive processes spend so much of their time blocked on I/O (which also
counts toward the load) that they can drive up the load average as well.
In addition, if more than one process is blocked on I/O
then the load average will go up very quickly, as both processes count
toward the load even if only one can access the disk at a time.

Load average is further complicated by multiprocessor boxes, and boxes
with multiple disk controllers, because (for example) two processes can
each access data on separate disk controllers while two other processes do
number crunching on two separate CPUs, resulting in a load average of 4,
even though the system is not overloaded.  In general, if I/O is not a
problem, your load average should be equal to or less than the number of
CPU's.



RE: Installing without rebooting (running the installation proggie from within Linux)

2000-04-28 Thread William T Wilson
On Thu, 27 Apr 2000, Steven Satelle wrote:

 My understanding is (more from windoze than linux) that installing on one
 hrd drv and using it in a diff system is a bad idea, lots of different

It is often a good idea.  Sometimes, it is the most efficient way to get a
system installed- say if the remote system doesn't have a fast connection
and you don't have a CD.  Unlike Windows, Linux works pretty well
misconfigured.  :} That said, I managed to do the same trick with Win95,
though it did not inspire confidence :}

I don't think the installer will run without a reboot, but you can
probably format the drive and install all the packages onto it yourself.

You could even just make a copy of your own system onto his drive, but
then he would have all your user accounts and things, which you would
have to remove (unless you don't care).

The biggest problem you will have will be with LILO.  Best way to set that
up is with complex LILO magic, or to boot off a floppy on the new system
and then install LILO from there.



Re: Trying to run one process as root, how?

2000-04-14 Thread William T Wilson
On Thu, 13 Apr 2000, Jim Breton wrote:

 On Thu, Apr 13, 2000 at 06:17:00PM -0400, William T Wilson wrote:
   since I believe if you use +root you would be allowing the root user
   on any other system to connect to your X server as well.
  
  Actually, you will be allowing any user on system 'root' to connect.
 
 Not according to the xhost man page:

I think you have misinterpreted the man page.  You can only add users if
you are authenticating via kerberos or NIS.  In that case, you would have
to specify 'xhost +nis:root@' to get the desired behavior.  And it won't
work (i.e. grant anybody any access) unless you have Secure RPC.  If you
just specify a single word, xhost will assume you mean a network system
and in fact it will give an error if you just type 'xhost +root' and there
is no system called 'root' on your network.

(Yes, the man page is magically obscure on this point :} )



