Re: Security
Evan Moore <[EMAIL PROTECTED]> wrote: >If a person has a box connected to a network, but there are no daemons >such as telnetd, ftpd etc etc is it still possible for that box to be >hacked into? There is no way to have perfect security on any system connected to a network. It's just about impossible. With something like Linux, which is changing all the time, I would hazard a guess and say that it IS impossible. The best you can do is minimize your risks, which is what turning everything off, as you suggest, does. The only perfect computer security is to isolate your machine (no wires anywhere from your machine to the outside world), lock it in a vault, and have armed guards around it 24/7. Even then... So, the short answer to your question is, yes, it is possible that someone could crack your system, even with all the daemons turned off. Is it likely? Depends on what you have on that system, how badly somebody wants to get at it, and how many people know about it. I doubt any cracker's going to spend hours breaking into the average user's system. Gary
Debian mirror of i386 only.
For some time I've had a partial mirror of Debian. Partial being the slink distribution and only the binary-i386 directories. It's worked well and greatly eased upgrading the machines I have Debian installed on. Recently I decided to start mirroring potato instead of slink. I thought it would be easy but I thought wrong. Apparently something has changed and I can no longer "flatten" symbolic links with the mirror utility. After some investigation this seems to be a "feature" of a lot of new ftp daemons, in particular proftpd, which a lot of the Debian mirror sites seem to favor. Further investigation showed that even my mirror of slink was not being updated properly. So this has been going on for some time; I just never looked to see if my mirror was being updated. It wasn't. I thought I could get around the problem by going ahead and mirroring the binary-i386 and binary-all portions of potato. Unfortunately, there's a lot in potato that merely links to slink and I definitely don't have the space to mirror potato and slink. The question is, has anyone gotten around this problem? It's quite easy to show. Simply log in via an ftp client to one of the mirrors and do:

cd /debian/dists/potato/main/binary-i386/admin
ls -lRatL

The "L" flag is supposed to "flatten" symbolic links. Unfortunately, under proftpd it doesn't. Also, I don't seem to be the only one having this problem. One of the mirror sites listed on the Debian mirrors page only mirrors the i386 distribution and, looking at their site, I noticed that none of the files that are actually symlinks in potato are there. You can verify this for yourself by looking at: ftp://csociety-ftp.ecn.purdue.edu/pub/debian/potato/main/binary-i386/admin You'll note that several files are missing from the admin section. In particular debconf_0.2.xx.deb. On the major mirrors this file is actually a symbolic link to somewhere else. Any ideas appreciated. Thanks, Gary Hennigan
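Since the servers won't flatten the links for you, one workaround is to audit the mirror against the Packages index and fetch whatever is missing by hand. A rough sketch only; the function name and paths are hypothetical, not part of any mirror tool:

```shell
# List package files named in a Packages index that are absent from the
# local mirror tree.  "root" is the mirror's top-level debian/ directory
# and "index" an uncompressed Packages file (hypothetical paths).
check_mirror() {
    root=$1; index=$2
    awk '/^Filename:/ { print $2 }' "$index" | while read -r f; do
        [ -f "$root/$f" ] || echo "missing: $f"
    done
}
```

Anything it reports missing could then be fetched directly by name, sidestepping the symlink problem.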
Re: good GPLed backup program
"T.V.Gnanasekaran" <[EMAIL PROTECTED]> writes: > Looking for good reliable backup program under linux. > I know BRU is good but it is priced. I want a free product. > -gnana I've been happy with afbackup. It's a bit of a steep learning curve, but after I finally got it set up to my taste I haven't had to look at it again, it just works. I haven't yet done a restore, but from what I read in the docs it doesn't seem to complex. Gary
Re: Debian-* & procmail recipe
Pollywog <[EMAIL PROTECTED]> writes: > On 20-Aug-99 Kris wrote: > > Are you fed up of having six or seven useless lines of text and carriage > > returns at the end of each mail from debian-user and the other lists? > > Well, fear no more! For a limited period only, you too can have those > > lines stripped off -- completely free! > > > > Add the following to your .procmailrc: > > I have tried to get procmail to start from my exim .forward file, but it > does not work. I instead went to sortmail, which starts from my Exim > .forward file, but it does not do fancy stuff like what you mentioned in > your post. What error message are you getting? I use procmail from exim and haven't had any trouble. My ~/.forward contains the single line: "|/usr/local/bin/procmail" Gary
Re: [Summary] UPS anyone?
John Pearson <[EMAIL PROTECTED]> writes: > On Tue, Aug 17, 1999 at 11:34:35AM -0600, Gary L. Hennigan wrote > > Peter S Galbraith <[EMAIL PROTECTED]> writes: > > [snip] > > > Unresolved questions: > > > > > > - What do we get for smart mode? I presume more info about the > > >state of the UPS and the line condition gets to the user > > >software. But can the Linux software display it? > > > > If you get the APC Back-UPS pro and use apcupsd in smart mode it > > can. The main advantage is that it can get an estimate of how long > > your system can run on the battery from the UPS. In dumb mode most of > > the UPS software immediately shuts down a system when a power outage > > is detected. In smart mode, with the right software, the system will > > stay up until the battery gets low. > > > > One thing you *don't* seem to get with vendor-supplied stuff > is support for multiple workstations on one UPS, as you get with > upsd. Is that changing? Just FYI, apcupsd is not a vendor-supplied product and it handles multiple Linux computers attached to it quite nicely. I have two machines hooked up to it and while it hasn't had to operate yet, I tested it and it shut both machines down just fine. Gary
Re: [Summary] UPS anyone?
Peter S Galbraith <[EMAIL PROTECTED]> writes: [snip] > Unresolved questions: > > - What do we get for smart mode? I presume more info about the >state of the UPS and the line condition gets to the user >software. But can the Linux software display it? If you get the APC Back-UPS pro and use apcupsd in smart mode it can. The main advantage is that it can get an estimate of how long your system can run on the battery from the UPS. In dumb mode most of the UPS software immediately shuts down a system when a power outage is detected. In smart mode, with the right software, the system will stay up until the battery gets low. Here's the output of "apcaccess status" on my system (apcaccess is part of the apcupsd package):

APC      : Aug 17 11:21:25
CABLE    : APC Cable 940-0095A
UPSMODEL : BACK-UPS PRO 650
UPSMODE  : Net Master
SHARE    : NetworkUPS
UPSNAME  :
ULINE    : 118.0 Volts
MLINE    : 118.0 Volts
NLINE    : 118.0 Volts
FLINE    : 60.0 Hz
VOUTP    : 118.0 Volts
LOUTP    : 042.9 Load Capacity
BOUTP    : 13.8 Volts
BCHAR    : 100.0 Batt. Charge
TIME     : 18.0 Minutes
SENSE    : HIGH
WAKEUP   : 060 Cycles
SLEEP    : 020 Cycles
LOTRANS  : 002.0 Volts
HITRANS  : 002.0 Volts
CHARGE   : 003.0 Percent
BFAIL    : 0x08 Status Flag
ALARM    : Always
LASTEVNT : SELF TEST
LOWBATT  : 02 Minutes

So, my system can run for an estimated 18 minutes if the power fails. I have it set so that a shutdown will be performed when either BCHAR drops below 10% or TIME drops below 10 minutes (this is something you can configure yourself). The other quantities that are neat to know about, but not critical, are the maximum, minimum and current line voltages (MLINE, NLINE and ULINE, respectively), and the load capacity (LOUTP). Again, it's interesting to see these values, but not really a necessity to save your system when the power goes out. Gary
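For anyone scripting against this themselves, the BCHAR and TIME fields are easy to pull out with awk. This is a sketch only; apcupsd has its own configurable shutdown logic, and the 10%/10-minute thresholds here are just the example values from the post:

```shell
# Read "apcaccess status" output on stdin; exit 0 (shutdown advised) if
# battery charge is below 10% or estimated runtime below 10 minutes.
should_shutdown() {
    awk -F': *' '
        $1 ~ /^BCHAR/ { charge = $2 + 0 }
        $1 ~ /^TIME/  { mins   = $2 + 0 }
        END { exit !(charge < 10 || mins < 10) }
    '
}
# e.g.  apcaccess status | should_shutdown && shutdown -h now
```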
Re: bash & smaller fonts
"Steven Klass" <[EMAIL PROTECTED]> writes: > While we're on the topic. How can I customize the font display for bash? > Can I make it narrower and smaller? This, to my knowledge, isn't a BASH issue, but a terminal issue. If you're using X windows you can use any of the available xfonts for your xterm. Just pick one you like, using, for example, the xfontsel application and start your xterm like: xterm -fn 'font name from xfontsel' or you can put it in your ~/.Xresources like: xterm*font: font name from xfontsel a specific example would be something like: xterm*font: -misc-fixed-medium-*-*-*-13-*-*-*-*-*-*-* You'll have to reread your resources if you go with the latter route, either by logging out and back in again or by using xrdb. If you're not using X then it's going to depend on the console driver and, while I believe there are ways to change it, I'm not familiar with them. Gary
Re: laptop: my own kernels don't work
Pollywog <[EMAIL PROTECTED]> writes: > On 12-Aug-99 Syrus Nemat-Nasser wrote: > > Hi Andrew, > > > > I recently installed slink on a Thinkpad 560 too. I'm using kernel 2.2.5 > > however, I think your problem could just be the bzImage. Have you tried > > using a zImage instead? Many laptops have a problem loading the bzImage > > compressed kernels--I believe that the tecra boot disk uses a zImage. > I forgot to consider that. On my other machine, I have to use a compressed > kernel, but it is not a laptop. I got into the habit of 'bzImage' and > forgot that laptops can't handle compressed kernels. The laptop does not > need to have as much compiled into the kernel as the other machine, so I > might get away with plain 'zImage'. Not all laptops. My Thinkpad 600 is quite happy with a 2.2.11pre2 bzimage. > > > > Anyway, I use my own customized kernel 2.2.5 with pcmcia-modules and > > pcmcia-cs packages compiled from the pcmcia-source package in potato. > > That's because the version of pcmcia-source in slink does not support the > > 2.2.x kernels. > > > Is there a HOWTO on how to compile my own pcmcia modules? I have the > source but I have never done this before. I will go into the source and > see if I can find some docs there. The PCMCIA HOWTO covers installation http://www.ssc.com/mirrors/LDP/HOWTO/PCMCIA-HOWTO-2.html or, if you installed doc-linux-, /usr/doc/HOWTO/PCMCIA-HOWTO.gz Gary
Re: problems with xpm
Pedro Bastos <[EMAIL PROTECTED]> writes: > hello list :) someday, somewhere, for some reason, i tried to compile > asmem. and, as you all know, asmem uses a lot the xpm library. > so, the configure part of the compile process was sucessful. > but, unfortunely, the compile process itself (make) returned an error: > > gcc -O2 -I/usr/X11R6/include -c asmem_x.c > asmem_x.c:18: X11/xpm.h: No such file or directory > make: *** [asmem_x.o] Error 1 > > and then i checked if i had lib xpm installed: > > ~# dpkg -l | grep xpm > ii xpm4g 3.4k-1 X Pixmap run-time libraries > > finally, does anybody know how to fix it ? :) You need to install the xpm4g-dev package in order to get the headers. The xpm4g package only installs what's needed for running xpm4 binaries. The xpm4g-dev package installs what you need to actually compile an XPM-dependent program. Gary
rc?.d policy?
Is there any policy on which run level does what? Just looking at rc2.d and rc3.d they appear identical. Personally I like level 2 to boot non-X and level 3 to be exactly the same, but starts an X login manager, e.g., wdm or xdm. Is there any reason I shouldn't update the rc2.d and rc3.d accordingly, as well as setting my default level in inittab to 3? Thanks, Gary
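As the post notes, rc2.d and rc3.d look identical out of the box, so the rearrangement is just a matter of moving the display-manager start link. A minimal sketch of the idea (the S99xdm link name and sequence number are illustrative; update-rc.d is the usual tool for managing these links, and the function takes a root directory so it can be tried without touching a live /etc):

```shell
# Make runlevel 2 console-only and runlevel 3 start xdm, by moving the
# xdm start link from rc2.d to rc3.d.  "root" stands in for /etc.
move_xdm_to_rl3() {
    root=$1
    rm -f "$root/rc2.d/S99xdm"
    ln -sf ../init.d/xdm "$root/rc3.d/S99xdm"
}
```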
Re: backing up a complete Debian GNU/Linux system
David Wright <[EMAIL PROTECTED]> writes: > Quoting Gary L. Hennigan ([EMAIL PROTECTED]): > > I guess I don't see the logic here. If one of the binaries on your > > backup has a Trojan that, presumably, means that before you did the > > backup you were running a system that had a Trojan. I would assume at > > that point the damage has already been done. > > Logically, that doesn't follow. The trojan may not yet have been > run. You're going in circles here with maybes and what-ifs. I still believe that the chances of any Unix system getting a Trojan are so minimal that, if you have the backup media, it's easier to just restore from a backup than to reinstall from scratch. > > Besides, assuming someone > > slipped a Trojan onto your system in the first place, restoring all > > your config files as they existed prior to the backup would allow them > > to just log in and introduce it again. > > Again, logically, that doesn't follow. The trojan may have been installed > before the config files were altered. For example, one might have decided > to tighten up security in the wake of a break-in (detected or undetected) > or simply changed the passwords. Or perhaps one decided to loosen security and it slipped in afterward? It's just not worth the hassle. I've done several restores from full backups and I've done reinstalls on a working system. Unless you're running a "high risk" system, which I'd classify as a loosely administered system sitting on the open network, it's MUCH more work to reinstall than it is to just restore from a full backup. > > The only chance I see of defeating a Trojan is detecting it and > > defeating the method used to introduce it in the first place. Also, > > the fact that such Trojans are so rare on Unix and Unix-like systems > > would make it a minor concern for me. > > > > Anyway, it's standard practice in large installations to back up > > practically everything for a level 0 backup, excluding things like > > /tmp, /dev (sometimes) and /proc. 
> > There may be a historical reason for this. A large unix installation > is likely to have gathered its software from all sorts of sources > on all sorts of disparate media, and have put a lot of administrative > sweat into compiling and installing it all. So it makes sense to > backup the *result* of all that work. That's just about the definition of "historical". I've been doing system admin for about 10 years now and I ALWAYS knew exactly what was on the systems I administered (at least on the non-user partitions). Oh, I couldn't say down to the file what was there, but I could, without any hesitation, tell you which partitions held only system files and which held files installed locally from a non-system vendor. And generally, for at least the last 5 years or so, every major Unix version has come with a package management system of some sort. Even so, I always did full system backups, including all the binaries that were probably on installation media somewhere. I've been lucky enough to be at organizations that didn't skimp on the backup media so it was never an issue, and we ALWAYS backed up everything. Doing restores of full backups doesn't involve checking lists to see what needs to be reinstalled, worrying about configuration files that have changed, patches to the OS that have come along, etc. Believe me, in general, it's easier to do a restore. By the way, in all those 10 years I've seen exactly ONE system intrusion. And it was under the circumstances I described above, a loosely administered system sitting on the open network. > OTOH every file on this system I'm typing on is sitting on one jaz > drive. The binaries and kernel-images are all in their .deb files; > the rescue/drivers disks are as disk images together with base*.tgz; > then there are all the configured /etc and /var files in zipfiles > for possible restoration, and copies of /etc and /var plus a non-root > recursive snapshot of /proc/[a-z]* for perusal. 
/home is split by > user as there are so few. I'm not saying it's a requirement to back up your entire system. I'm happy that you have a scheme that you're comfortable with. I AM saying that backing up an entire system is far from a worthless pursuit. If you have the money for the backup device/media it's a time saver. > The very idea of all one's system software in a set of homogeneous > .deb files is probably foreign to most unix administrators. Only those with "home brew" systems. In institutional settings it's also a matter of wasted time. Why fight with a whole installation procedure when you can simply do:

restore /dev/tape /

Certainly Debian, and most modern Unix systems, would be easier to install from scratch, but not as easy as a one-line command to restore from a backup. Gary
Re: backing up a complete Debian GNU/Linux system
George Bonser <[EMAIL PROTECTED]> writes: > On Tue, 3 Aug 1999, x x wrote: > > > Hi! > > Could anyone tell me what's a good hardware/software > > combination to use to make frequent FULL backups > > of a Debian system > > (operating system, "applications", and data). > > I asked recently at a fairly large Linux group meeting, > > and everyone seemed suprised by the question and there > > were no good answers, which completely floored me... > > how could anyone smart enough to use Linux not back > > up their entire system RELIGIOUSLY? > > You do not backup the application binaries because you already have a > backup ... either the CDROM you installed from OR the debian archive. I > would never trust a backup of my binaries ... what if one of them has been > replaced with a trojaned version? I guess I don't see the logic here. If one of the binaries on your backup has a Trojan that, presumably, means that before you did the backup you were running a system that had a Trojan. I would assume at that point the damage has already been done. Besides, assuming someone slipped a Trojan onto your system in the first place, restoring all your config files as they existed prior to the backup would allow them to just log in and introduce it again. The only chance I see of defeating a Trojan is detecting it and defeating the method used to introduce it in the first place. Also, the fact that such Trojans are so rare on Unix and Unix-like systems would make it a minor concern for me. Anyway, it's standard practice in large installations to back up practically everything for a level 0 backup, excluding things like /tmp, /dev (sometimes) and /proc. The only reason I wouldn't back up binaries was if I had a limited medium, in terms of space or time, for the backups. > If things are so bad that you must > completely restore, you are probably better off reinstalling. I suppose this might be true in some cases. I'd certainly prefer restoring a backup to a complete reinstall. 
> There are several good backup methods ... taper, amanda, etc. and several > commercial backup utils for Lnux too. Anyone else tried afbackup? I think it's great. Just about everything I've ever looked for in a backup utility. It's a bit of a steep learning curve but once you have it configured and running it requires minimal hand holding. Just slap the incremental command in a cron entry and you're off. > As for backup devices, if you are talking about more than a few gig, best > to go to DLT tape. A CDROM only holds a bit more than half a gig. You are > going to spend all night swapping CDROMs. Has the price equalized on DLT drives and tapes? The old wisdom said that it's better to get a DAT or 8mm drive if you were going to need a relatively large set of tapes because the DLT tapes were (are?) so expensive. Anyway, it's easy to figure out just: Tape Drive price + Number of tapes * Price of single tape = total and see which total comes out lower. I don't think you'd be sorry choosing either DLT or DAT/8mm, barring price concerns, assuming you already have SCSI. Anyway, my trusty 4 year old, refurbished, 4mm DAT drive has been going strong for a long time and it's served me perfectly as a backup device. Of course being that old it's slow as dirt, but I'm usually not in a rush for backups. Gary
Re: .xsession not being read?
[EMAIL PROTECTED] (Carl Fink) writes: > Thanks to everyone for the answers on installing KDE or GNOME on a > 2.0 box. I appreciate it. > > Installing GNOME brought something to mind I hadn't thought about, > although I noticed it months ago: my .xsession file is never read. > I had to edit the global /etc/X11/Xsession file to force X to load my > window manager. > > This is irritating. Why would .xsession not be read? Any ideas? I > have three Unix books and four Linux books here, and not one > describes the X startup process even well enough for me to figure out > which program actually reads Xsession and .xsession files. On my system (slink) the user .xsession is executed from the system Xsession. You might try taking the relevant lines out of there and executing them manually to see what's up. The only part I see in my Xsession that would cause ~/.xsession not to be started is if the line "allow-user-xsession" doesn't appear in /etc/X11/Xsession.options. Gary
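Based on the slink behaviour described, a quick check is to look for that option directly. A sketch only (the path is slink's layout as described above; other X setups may differ):

```shell
# True if the system Xsession will honour a user's ~/.xsession, i.e. the
# options file contains an "allow-user-xsession" line.  The argument
# defaults to slink's /etc/X11/Xsession.options.
xsession_enabled() {
    grep -q '^allow-user-xsession' "${1:-/etc/X11/Xsession.options}"
}
```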
Re: OFFTOPIC: need whois on a gov domain
"Jakob 'sparky' Kaivo" <[EMAIL PROTECTED]> writes: > whois [EMAIL PROTECTED] > > On Sun, 25 Jul 1999, Pollywog wrote: > > > I got spam from a host using a .gov TLD and I forgot how to do a whois on > > those. Anyone know? There's also a Web interface to whois, http://www.networksolutions.com/cgi-bin/whois/whois Gary
Re: Exim config question
Jor-el <[EMAIL PROTECTED]> writes: > Gary, > > The problem is with exim which thinks that mach1 is your entire > domain name. Here is a snippet from the "ROUTERS CONFIGURATION" section of > exim.conf : [snip] > route_list = * $domain byname [snip] That "byname" was all I needed. I'm not running a DNS server on my little net. Too much of a pain, and my PPP machine won't be running 24/7; only the secondary machine will be. In my smarthost router I had: route_list = "* mach1.my.domain bydns_a" Well, there aren't any A records for my little net and it was apparently getting locked out by that. I changed it to "byname" and everything works like a champ now. Thanks to Jor-el, Philip and Patrick for the help! Gary
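For reference, the whole smarthost router ends up looking something like this (exim 2/3-era syntax from memory, so treat it as a sketch; the host name is a placeholder and the surrounding exim.conf sections are omitted):

```
smarthost:
  driver = domainlist
  transport = remote_smtp
  route_list = "* mach1.my.domain byname"
```

The "byname" suffix makes exim resolve the host with gethostbyname(), so /etc/hosts entries are enough and no DNS A records are needed.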
Re: Exim config question
"Patrick Kirk" <[EMAIL PROTECTED]> writes: > Most everything you need shouldbe in this article... > www.linuxgazette.com/issue43/stumpel.html A nice article. Unfortunately it really doesn't cover my situation. My "secondary" host, that I can't get to send email, is also a Debian box while Jan had a simple Win9x secondary machine. I've pretty much got mach1 (the PPP machine) working the way I want. I just can't get mach2 to forward any email to mach1. Thanks, Gary
Re: Exim config question
Philip Lehman <[EMAIL PROTECTED]> writes: > >I'm in the process of getting a little home network set up. One of the > >hosts, call it mach2, won't have a connection to the internet, just > >to other hosts on my home network. The other host, call it mach1, will > >occasionally connect to the internet via dialup. I have mach1 all set > >up but can't seem to get mach2 to forward all the mail to mach1. In > >fact I can't get mach2 to deliver mail directly to mach1 at all. > > > >What I want to do is set mach2 so that it uses mach1 as a smarthost. I > >think I got that right in my exim.conf (configuration 2 from the > >debian installation) but I can't seem to make it work. > > Just two ideas: Check if relay_domains is set in mach1's exim.conf and > make sure that sender_host_reject_relay corresponds to your situation. > I think the default setting is "sender_host_reject_relay = *", which > would bounce all mail from mach2... So that helped me fix the problem on mach1, but mach2 is still not able to forward mail to mach1. I've definitely got it configured so that it knows mach1 is its smarthost, but I keep getting: [EMAIL PROTECTED] routing defer (-32): retry time not reached whenever I try to send email to [EMAIL PROTECTED] from mach2. Argh! One thing, on the smarthost router in mach2's exim.conf there's this: route_list = "* mach1.my.domain bydns_a" I'm not running a DNS server (bind) anywhere. I've only got two hosts and a maximum of three connected to my network and didn't want to bother with DNS. Could this cause a problem? Is there an alternative lookup method I could use? Something like byhostlookup? Thanks, Gary
Exim config question
I'm in the process of getting a little home network set up. One of the hosts, call it mach2, won't have a connection to the internet, just to other hosts on my home network. The other host, call it mach1, will occasionally connect to the internet via dialup. I have mach1 all set up but can't seem to get mach2 to forward all the mail to mach1. In fact I can't get mach2 to deliver mail directly to mach1 at all. What I want to do is set mach2 so that it uses mach1 as a smarthost. I think I got that right in my exim.conf (configuration 2 from the debian installation) but I can't seem to make it work. When I send email to, for example, [EMAIL PROTECTED] I get this in my exim mainlog: 1999-07-25 10:38:36 118R4n-B0-00 == [EMAIL PROTECTED] routing defer (-32): retry time not reached or 1999-07-25 10:39:47 118R4n-B0-00 == [EMAIL PROTECTED] R=smarthost defer (-1): I get the same thing if I send the email to [EMAIL PROTECTED] Also, something's not quite right with mach1. If I connect manually, e.g., telnet mach1 smtp and type rcpt to: [EMAIL PROTECTED] I get: 550 relaying to <[EMAIL PROTECTED]> prohibited by administrator but if I do rcpt to: [EMAIL PROTECTED] it works. Any ideas? Thanks, Gary
Re: PCMCIA mudules wont compile with 2.2.11
Rune Linding Raun <[EMAIL PROTECTED]> writes: > hey yo bros! > > i cant compile my pcmcia-modules with the new kernel 2.2.11? > i got: pcmcia-cs 3.0.9-3 > pcmcia-source 3.0.12-2 > debian 2.1 (and dont wanna mesh with my libc/glibc in order to go > unstable in the pcmcia-cs) > > error dump: > ... > i82365.c:2782: `PAGE_OFFSET_RAW' undeclared (first use this function) > i82365.c:2796: `isa_lock' undeclared (first use this function) > make[4]: *** [i82365.o] Error 1 > make[4]: Leaving directory `/usr/src/modules/pcmcia-cs/modules' > make[3]: *** [all] Error 2 > make[3]: Leaving directory `/usr/src/modules/pcmcia-cs' > make[2]: *** [build-modules] Error 2 > make[2]: Leaving directory `/usr/src/modules/pcmcia-cs' > make[1]: *** [kdist_image] Error 2 > make[1]: Leaving directory `/usr/src/modules/pcmcia-cs' > make: [modules_image] Error 2 (ignored) I had to get pcmcia 3.0.13 to get this to work on my slink system. The good thing is that you just need the modules, no need to install a new cardmgr or any of the other support files. The bad thing is there isn't a Debianized 3.0.13 so you have to get it right from the source, ftp://hyper.stanford.edu/pub/pcmcia/pcmcia-cs-3.0.13.tar.gz. I compiled it manually and just stuck the appropriate modules in /lib/modules/2.2.11/pcmcia and my laptop works great with 2.2.11. Unfortunately I'll have to remember to manually remove that sucker next time I upgrade because I couldn't figure out how to get make-kpkg modules to work with a non-debian piece of source. Gary
Finding orphans?
Is there any way to find orphaned files/directories? For instance I just ran an install script for a non-Debian piece of software and it put some stuff in /sbin. It had a remove option that seemed to get rid of everything but it'd be nice to have something that would tell me that /sbin/junk doesn't belong to any package. Anything like this exist? Thx, Gary
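As far as I know nothing in slink does this out of the box, but dpkg keeps a per-package file list under /var/lib/dpkg/info, which makes a crude version easy. A sketch, with the list of packaged files passed in as an argument so it can be tried safely; on a real system you would build that list from dpkg's data as shown in the comment:

```shell
# Print files under "dir" that appear in no installed package's file
# list.  "owned" must be a pre-built, sorted list of all packaged file
# names; comm -23 keeps lines present only in the find output.
find_orphans() {
    dir=$1; owned=$2
    find "$dir" -type f | sort | comm -23 - "$owned"
}
# Typical use:
#   sort /var/lib/dpkg/info/*.list > /tmp/owned
#   find_orphans /sbin /tmp/owned
```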
Netgear FA310TX and good?
I ordered a couple of the FA310TX cards the other day for a simple home network application. I ordered them because they were based on the Tulip chipset and I've heard good things, in general, about that chipset and Linux; and their price was hard to beat. Today I was browsing around Deja and noticed some posts stating that the newer FA310's were based on a clone Tulip chipset and some people were having compatibility/performance problems with them. Anyone have any experience with these cards, the newer versions in particular? Thx, Gary
Re: XF86_SVGA 3.3.3?
Brian Servis <[EMAIL PROTECTED]> writes: > *- On 14 Jul, Gary L. Hennigan wrote about "XF86_SVGA 3.3.3?" > > I knew I would regret not saving the post to this list that said where > > we could get version 3.3.3.1 of XFree86 for debian slink and that time > > has come. Anyone have that apt source location handy? I tried > > searching the mailing list archive but it's EXTREMELY slow and I can't > > seem to hit the right combination of words to get the post. > > > > deb http://netgod.net/ x/ Got it, Thanks! Gary
XF86_SVGA 3.3.3?
I knew I would regret not saving the post to this list that said where we could get version 3.3.3.1 of XFree86 for debian slink and that time has come. Anyone have that apt source location handy? I tried searching the mailing list archive but it's EXTREMELY slow and I can't seem to hit the right combination of words to get the post. Thanks, Gary
Re: Uncompress ".deb" files?
[EMAIL PROTECTED] (Marco Nuessgen) writes: > Does anybody know how i can uncompress the ".deb" files from the > installation-CDROM? I must install "sed" before I can install Linux. The *.deb files are simple "ar" archives. You can extract their contents with: ar x file.deb Do a "man ar" for more options. Gary
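If you want more than a peek inside, a .deb of this era holds three ar members: debian-binary, control.tar.gz, and data.tar.gz. A sketch of unpacking one into a scratch directory (member names assumed typical for the time; the function name and directories are made up):

```shell
# Extract a .deb's payload and control files into "dest".  The body is a
# subshell so the cd doesn't leak; "deb" should be an absolute path.
unpack_deb() (
    deb=$1; dest=$2
    mkdir -p "$dest" && cd "$dest" || exit 1
    ar x "$deb"
    tar xzf data.tar.gz       # the files the package installs
    tar xzf control.tar.gz    # control file and maintainer scripts
)
```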
Slink, 2.2.10 and printing
So I decided to take the plunge and install a 2.2.x kernel (2.2.10), under slink. Everything works fine, except for printing. I can't seem to get the printer to work. lpr seems to work, but when I do an lpq I get something like:

waiting for lp to become ready (offline ?)
Rank   Owner   Job   Files       Total Size
1st    root    22    tape-list   968 bytes

The lp kernel module is loaded fine, but nothing happens. Is this something I didn't compile properly in the kernel or do I need to upgrade something in slink to get it to work? I'm not sure where to look on this one so let me know if there's more information I can provide. Thanks, Gary
Basic networking
I'm going to be setting up a small 100Mb home network soon and I'd like some info/help. While I know plenty about Unix/Linux sysadmin, I know next to nothing about networking. At first my network will consist of two hosts, with a third host present intermittently (a laptop that floats with me). I don't foresee ever needing more than 5 hosts connected to my home LAN. I gather I need a hub or a switch for anything more than two hosts? Which one? I believe I understand the difference. A hub acts as a simple amplifier. Any signal it receives on one of its ports is amplified and sent to all its other ports. A switch, if my understanding is correct, adds some smarts to the process and only sends the signals to relevant ports. So if machine A sends a packet to machine B, that packet is only sent to the port that machine B is connected to. Is there any advantage to a switch in a small home network? Money, at this level, really isn't the issue, but I don't want to spend extra money on a switch if it's overkill for a small network and doesn't really buy me anything. Thanks, Gary
Re: TAR.GZ
Pollywog <[EMAIL PROTECTED]> writes: > On 01-Jul-99 Cuno Sonnemans wrote: > > Hi, > > > > I've downloaded GUILGNL0.GZ (WP8 language module). > > Now I want to try to extract it. > > I've tried, tar -xzvf .., and gunzip . > > In both cases I got the message: not a gzip format. > > How is this possible and what is the way to extract GUILGNL0.GZ !!! > > > > HTH > > Are you certain you really competed the download? And set binary mode before the transfer? Doesn't usually matter on Unix->Unix transfers, but it can make a difference if the remote isn't a Unix system. Also, it's possible that the file got corrupted during the download. Again, not something that happens often, but it does happen. Gary
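One quick way to tell a truncated transfer from a file that was never gzip in the first place (the file name comes from the post; gzip -t and file are standard tools):

```shell
# gzip -t verifies a compressed file's integrity without extracting it.
check_gz() {
    if gzip -t "$1" 2>/dev/null; then echo OK; else echo BAD; fi
}
# check_gz GUILGNL0.GZ   prints OK only for a complete, valid gzip file
# file GUILGNL0.GZ       shows what the file actually is (e.g. ASCII text)
```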
Re: /etc/environment
Peter Iannarelli <[EMAIL PROTECTED]> writes: > Assuming all your users are using bash, > put your global environmental setting in > /etc/profile > > Marco Maggesi wrote: > > > I am looking for the appropriate place where to put some > > basic environmental definition like: > > > > PAGER=less > > > > so that they take effect to every user (unless explicitly > > overwritten) WHATEVER LOGIN SHELL they use. is that > > possible? > > > > I saw a file /etc/environment > > who reads it ? And if they're not using bash, but a CSH, or variant, then also put it in /etc/cshrc, e.g., setenv PAGER less Adding it in both /etc/profile and /etc/cshrc should set it for any user on your system. Gary
Re: ATX power on
Pere Camps <[EMAIL PROTECTED]> writes: > Does anybody how to make an ATX motherboard boot without having to > press the 'power' button everytime? That is, I want an standard AT > behaviour: if there's power in the line, then I want the machine running > without having to press anything. This is more than likely a function of the BIOS of your motherboard. Look through your BIOS and see if it supports this functionality. If it doesn't you're probably out of luck. Gary
Re: UPS anyone?
Peter S Galbraith <[EMAIL PROTECTED]> writes: > I want to put a small UPS on my work system to let it shutdown > gracefully when there's a power outage. I'm looking at: > > Best Power Patriot > Best Power Patriot Pro > > APC Back-UPS BK500M > APC Back-UPS BK650M > > Anyone have any recommendations? Does any of them work `easily' > with Debian packages (e.g. using stock cables)? > > The Debain packages I know of are genpower, bpowerd, apcd and > upsd. It'll take a while to look at them all since they mostly > congflict with eachother (for obvious reasons). Last time I looked into this, APC was very bad about supplying developers with their specs and the protocol they used for communication between the UPS and the PC. On the other hand Best supplied Unix drivers, with source code, right out of the box, and was gladly supplying any information requested by programmers. Of course this was about a year ago so things may have changed, but Best certainly seemed like the "Best" decision at the time. Gary
Re: Xemacs20 reading compressed files?
Johann Spies <[EMAIL PROTECTED]> writes: > I could not find it in the xemacs-documentation. > > Emacs20 and Xemacs20 are using the same .emacs file. Emacs can read .gz > files and Xemacs not. > > How can I get Xemacs to do it? > > I have got (require 'jka-compr) in my .emacs file. There was a bug in XEmacs 20 (at least I considered it a bug): you have to toggle auto-compression-mode off and then on before it'll read compressed files. Just do M-x auto-compression-mode twice (it should say the mode is on after the second time) and it should work. Gary
Re: Will PIII work?
[EMAIL PROTECTED] writes: | Apparently intel has discontinued 450 MHz PII processors. Will linux (the | debian flavour, of course) work with a PIII? | | I checked the linux hardware HOWTO, but no mention of a PIII. Not a problem. Been using one for a month or two now and it hasn't had a single problem. After all, the only significant difference between the PII and PIII is the added instructions on the PIII. Otherwise they're pretty much identical. Gary
Re: New Riva TNT Linux drivers
Steve Kondik <[EMAIL PROTECTED]> writes: | im running the new code now. its great, quake2 rocks, but the code is still | a little choppy. and i could only get the xserver started in 16 or 32 bpp, | and at 32 i got segfaults. at least its an effort. That's good to hear. Dave Schmenk (the developer) does mention that you only get hardware acceleration at 15 or 16bpp in the FAQ. Anything else, he states, will fall back to the software renderer. Of course segfaults don't seem like a good fallback scheme! :) But, I concur, it's a great step in the right direction and now maybe some of the CAD/FEA folks that I depend on for a lot of my graphics-oriented software will start considering Linux as a good platform to target. Thanks for the reply Steve. Gary
New Riva TNT Linux drivers
Has anyone taken the plunge and installed all the new Riva TNT OpenGL stuff? Any gotchas? I've been using the XF86_SVGA 3.3.1 server for quite some time with my TNT card and haven't had a problem. I've been waiting for EONS for a GLX implementation to allow me to display results from my SGI visualization software back to my PC and, even if it's not ready for prime time yet, NVidia seems to be heading in that direction pretty quickly. They've definitely got my business, at least for the next 6 months or so! :) Gary Opinions are my own.
Re: ps/2 model 90
Peter Allen <[EMAIL PROTECTED]> writes: | Ok, I give up, | having found an old ps2 90 lying around I decided to set it up to | do all the dogs body tasks like email etc. I have got everything | working except X. What graphics card is the default one in this | baby, as I cannot find it anywhere on the net. Also which xserver | is the one to use? | The computer is a 486 about ?50Mhz? and uses MCA bus. | Thankyou, Of course the first thing to try is just a vanilla VGA server. Does that work? If not, the only exceptions to the "every card does VGA" rule that I can think of are the old Hercules cards, or even older CGA cards, but I can't imagine an "as shipped" PS/2 using one of those. The more advanced PS/2s used 8514 cards, so you can try that; later PS/2s used XGA cards. I have no idea if X servers exist for 8514 or XGA, but both of these were backward compatible with VGA. Gary
Re: why make partitions?
Lazarus Long <[EMAIL PROTECTED]> writes: | On Tuesday, June 01, 1999 at 14:46:01 -0500, Jens B. Jorgensen wrote: | > The best reason I can ever come up with for creating separate | > partitions is to | > allocate space which can't be spared: eg. create a separate /home | > so users with | > accounts on the system can't screw up the system by filling up | > the disk or so that | > runaway log files can't fill up / and screw things up. | | IMO, the best reason to make partitions is so the KERNEL will be | guaranteed to be located below cylinder 1024 for /sbin/lilo. Otherwise, | later kernel installations will run the risk of making your system | unbootable from those kernels (or at all even.) I can think of a couple of other minor reasons: 1) fsck time. This can be annoyingly large for large partitions, and even if you never foresee your system coming down improperly, i.e., without a proper shutdown, ext2 requires a periodic fsck (I think the default is every 20 mounts). 2) Backups. Although I generally just back up my home machine all at once, e.g., tar cvf /dev/nst0 /, it is often convenient to back up a single partition at a time, e.g.: tar --create --file=/dev/nst0 --one-file-system / tar --create --file=/dev/nst0 --one-file-system /usr etc. This makes it faster to do incremental backups for a particular partition. It also allows you to more easily have some redundancy in your backup scheme. Of course neither of these is overwhelmingly compelling... Gary
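The per-partition backup idea can be tried on a throwaway directory; the directory and archive names below are illustrative stand-ins for a real tape device like /dev/nst0:

```shell
#!/bin/sh
# Sketch: archive one filesystem subtree without crossing mount points.
# Directory and archive names are illustrative.
set -e
cd "$(mktemp -d)"
mkdir -p demo/etc && echo "sample" > demo/etc/sample.conf
# --one-file-system keeps tar on the partition holding the named
# directory, which is what makes per-partition backups work.
tar --create --file=demo.tar --one-file-system demo
tar --list --file=demo.tar
```

On a real system you would point --file at the tape device and name a mount point (/, /usr, ...) instead of the demo directory.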
Re: awe64 sound problems
"Nadarajah, Dinesh" <[EMAIL PROTECTED]> writes: | I read somewhere that AWE32 driver works only for kernels later than 2.0.36. | I might be wrong (anybody???). As far as I know this isn't true. I've been using the AWE32 driver since at least 2.0.32, if not earlier. Of course something could've changed in the driver without my knowledge. Gary
Re: C function manpages
Colin Marquardt <[EMAIL PROTECTED]> writes: | * Alec Smith <[EMAIL PROTECTED]> writes: | | > On Solaris and other systems I could execute a command such as 'man getc' | > for example to look up info on the C getc() function. On Debian I haven't | > been able to do this without getting the 'no manual entry' message. Which | > package might I install to get the same (or similar) functionality as | > Solaris? | | info getc works fine for me. | | When you are coding in (X)Emacs, another nice thing is word-help.el | or func-doc.el. % locate getc.3 /usr/man/man3/fgetc.3.gz /usr/man/man3/getc.3.gz /usr/man/man3/ungetc.3.gz % dpkg -S getc.3.gz manpages-dev: /usr/man/man3/getc.3.gz manpages-dev: /usr/man/man3/fgetc.3.gz manpages-dev: /usr/man/man3/ungetc.3.gz Looks like the manpages-dev is the package he wants. Gary
Re: [Help] Memory
[EMAIL PROTECTED] (Nguyen Hai Ha) writes: | On Mon, 24 May 1999, Mr. (Ms.) Gary L. Hennigan wrote: | | > It's probably something strange going on with the BIOS function used | > by linux to detect the amount of memory in your computer. I have two | > suggestions you can try: | > | > 1) Manually edit /etc/lilo.conf and add a line like: | > | > append="mem=160M" | > | > or, if you already have an "append" line add it to the line like: | > | > append="floppy=thinkpad,mem=160M" | > | > 2) Alternatively, upgrade to kernel 2.0.36 or higher. Starting with | > 2.0.36 the memory detection uses an extended BIOS call to get the | > amount of memory and this could solve your problem. If Windows can | > properly find the amount of RAM then a Linux kernel >= 2.0.36 will | > also, since they use the same BIOS call. | > | > Gary | > | | Thanks alot for your helpfull advice. I have installed the 2.0.36 | kernel and it seems to work well with the memory but the system | itself is unstable. Sometimes, expecially when I run big programs, | the system comes down with the message like "Segmentation fault". | But when I set the memory to 32M, 64M, or 128M, the system works well. | What does this mean? Could this be a kernel's bug? Hmm. That sounds a lot like a bad memory chip. I had a very similar situation over the weekend when one of mine went bad. I'd get unexplained kernel crashes, segmentation faults, and my machine would occasionally lock up hard. After pulling my hair out I decided to set my machine's BIOS to do a full memory test, and sure enough it failed. The BIOS test isn't always successful at detecting this, but in my case it was. You can try a much more thorough memory test program available in the hwtools Debian package. Install hwtools and read /usr/doc/hwtools/README.debian, looking specifically at memtest86. You boot into it directly from a floppy. See if it shows anything.
One last suggestion would be to tell Linux to use 1MB less than you actually have using: append="mem=159M" in your /etc/lilo.conf file. I've seen reports that this often helps with memory problems like you're describing. Gary
Re: Diamond Viper 550
Armin Wegner <[EMAIL PROTECTED]> writes: | I will get a Diamond Viper 550 graphics card, soon. Which is the best | Xserver for it? Which X version do you recommend? The V550 is based on the nVidia TNT chip. You'll need to get XFree86 3.3.3, or better, in order to use it. Unfortunately Debian 2.1 doesn't include a new enough version for your purposes. You'll want to install Debian 2.1 (aka slink) and then install a newer version of xfree86 from: http://ftp.netgod.net/x You can do this somewhat automatically by using apt and adding: deb http://ftp.netgod.net/ x/ to your /etc/apt/sources.list file. Good Luck, Gary
Re: [Help] Memory
[EMAIL PROTECTED] (Nguyen Hai Ha) writes: | Hi folks, | | I've just installed the debian 2.0.34 on my machine. | Everything seems to work well excepts the memory. | The real memory consists of 2 DIMM 128M+32M. But it | seems to me that the kernel doesn't think so. | | % cat /proc/meminfo
|          total:    used:    free:  shared: buffers:  cached:
| Mem:  15171584 13418496  1753088  7012352   593920  6090752
| Swap: 119697408  5156864 114540544
| MemTotal:   14816 kB
| MemFree:     1712 kB
| MemShared:   6848 kB
| Buffers:      580 kB
| Cached:      5948 kB
| SwapTotal: 116892 kB
| SwapFree:  111856 kB
| | I think this is the problem of the kernel's configuration. | Please tell me something. Thanks in advance. It's probably something strange going on with the BIOS function used by linux to detect the amount of memory in your computer. I have two suggestions you can try: 1) Manually edit /etc/lilo.conf and add a line like: append="mem=160M" or, if you already have an "append" line, add it to the line like: append="floppy=thinkpad,mem=160M" 2) Alternatively, upgrade to kernel 2.0.36 or higher. Starting with 2.0.36 the memory detection uses an extended BIOS call to get the amount of memory and this could solve your problem. If Windows can properly find the amount of RAM then a Linux kernel >= 2.0.36 will also, since they use the same BIOS call. Gary
Re: compiling and installing lib's
[EMAIL PROTECTED] (Gary L. Hennigan) writes: | Micha Feigin <[EMAIL PROTECTED]> writes: | | How do i compile both static and non static librarys under C (.a and .so) | | and how do i install them on the system so they can be found by the linker | | or programs. | | You just put them in a directory somewhere. Generally /usr/local/lib | is a good location if you're installing them yourself. | | Usually it's best to let the developers specify the path explicitly | for non-standard libraries. They can do this with, on most compilers, | "-L/usr/local/lib -lfoo". [snip] Sorry to followup to my own post, but I just thought of another important point. If you have pre-existing executables that are linked against a shared library, you'll need to add the directory where you installed the shared libraries to /etc/ld.so.conf and then run ldconfig to rebuild the linker cache. For example, add the line /usr/local/lib if you put lib*.so files in /usr/local/lib. Or, and this to me would be the preferred solution, have the users that want to run the executable add the path to their LD_LIBRARY_PATH. So, for example, in their ~/.profile have them add: LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib. See "man ld.so" for further information. Gary
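A sketch of the two approaches; the directory is an example, and the ld.so.conf edit needs root, so it's shown only as comments:

```shell
#!/bin/sh
# Sketch: make a custom library directory (/usr/local/lib here,
# illustrative) visible to the dynamic loader.
# (1) System-wide, as root -- append to /etc/ld.so.conf, then refresh
#     the linker cache (left commented out; it needs root):
#       echo /usr/local/lib >> /etc/ld.so.conf
#       ldconfig
# (2) Per-user -- extend LD_LIBRARY_PATH, e.g. from ~/.profile.
#     The ${VAR:+...} form avoids a stray leading colon when the
#     variable starts out empty.
LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}/usr/local/lib"
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```

The system-wide route fixes it for everyone at once; the per-user route needs no root access, which is why it's suggested above.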
Re: compiling and installing lib's
Micha Feigin <[EMAIL PROTECTED]> writes: | How do i compile both static and non static librarys under C (.a and .so) | and how do i install them on the system so they can be found by the linker | or programs. You just put them in a directory somewhere. Generally /usr/local/lib is a good location if you're installing them yourself. Usually it's best to let the developers specify the path explicitly for non-standard libraries. They can do this with, on most compilers, "-L/usr/local/lib -lfoo". You could install them in a standard system directory, like "/usr/lib", which is searched automatically, but you're just asking for trouble doing that with non-standard software. Controlling whether they link to the static or nonstatic library is also accomplished via an option to the compiler. On the GNU compilers that option is "-static". See the info for gcc for further options. Gary
Re: Shell for gunzip so I don't have to remember?
Ray <[EMAIL PROTECTED]> writes: | On Tue, May 11, 1999 at 04:56:30PM -0500, Andri Bell wrote: | > | > I ask because I must be doing something wrong. When I ungzip .gz the | > system converts my .gz file to one file with no extension instead of | > unzipping the file and all of its contents. | > | > I know there are multiple files in the gzips that I look at because I can | > view all of the compressed files when I view the contents of the gzip file | > on my pc, just not with gzip on linux :( | | A gzip file only contains one file. Sometimes that one file happens to be a | tar file (ie an archive) in which case you use tar to extract it's contents | after you have unzipped it. For example: | | You start with foo.tar.gz | | gunzip foo.tar.gz Now you have a file called foo.tar which | should be much larger than foo.tar.gz was. | | tar -xvf foo.tar Now all the files are extracted from foo.tar | | | As others have said, you can do this all in one step with tar -zxvf | foo.tar.gz. | | In general, files that end in .tgz or .tar.gz are gziped tar archives while | files that just end in .gz are really just one file. Just to add to this excellent explanation by Ray, in the Unix world it is still pretty common to see files compressed with the Unix compress utility, in which case you might see file names like foo.Z and foo.tar.Z. All the things Ray mentions apply to these files as well; gzip and tar can handle them in exactly the same manner as *.gz files. Gary
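Ray's two-step recipe can be tried end to end on a throwaway archive; every file name below is illustrative:

```shell
#!/bin/sh
# Sketch: create a gzipped tar archive, then extract it in two steps.
set -e
cd "$(mktemp -d)"
mkdir docs && echo "hello" > docs/readme.txt
tar -cf foo.tar docs && gzip foo.tar    # leaves only foo.tar.gz
gunzip foo.tar.gz                       # step 1: back to foo.tar (larger)
tar -xvf foo.tar                        # step 2: extract docs/readme.txt
# One-step equivalent: tar -zxvf foo.tar.gz
# For compress-format files (.Z / .tar.Z), GNU tar offers -Z similarly.
```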
Re: finding and using applications
John Galt <[EMAIL PROTECTED]> writes: | Would it be too hard to add a "verbose" type flag that tells exactly | what dpkg is installing as it does it? gzip does this by default, so I'd | think that since dpkg basically calls gzip, there could be a | "pass-through" switch to turn on verbose reporting with not too much | hassle. | | On Tue, 4 May 1999, William R Pentney wrote: | | > On Tue, 4 May 1999, Tommy Malloy wrote: | > | > I agree with this one. Now and then I will install a package in which none | > of the binaries have the same name as the package, and there is no manpage | > available, so I have to hunt for the application's _name_. It makes one | > feel very silly, and can be quite frustrating. | > | > I think that dselect could use an additional tool to navigate through the | > contents of packages. I realize that there is a "dpkg -l" option, but | > there must be a better way. | > | > - Bill [snip] I'm a little confused about what you're (Tommy) asking here. Documentation for the applications that are in a package is a somewhat different issue than finding out what files a package installed. You can find all the files associated with a particular package using "dpkg -L <package>", e.g.:
% dpkg -L cvs
/.
/usr
/usr/share
/usr/share/doc-base
/usr/share/doc-base/cvs
/usr/share/doc-base/cvs-client
/usr/sbin
/usr/sbin/cvsconfig
/usr/doc
.
.
.
Is this what you wanted? Gary
Re: ftp of complicated directory structure (with tmp simlinks)
Richard Harran <[EMAIL PROTECTED]> writes: | I am trying to completely copy a complicated directory structure using | ftp. It has four levels of directories, all with lots of branches. I | have a root directory on the target machine, and want to create the | directory structure, and copy all the files into the right places. I | have tried filerunner, which throws its toys out at the first hurdle, | refusing to even transfer a single file. I thought lftp's 'mirror' | command sounded hopeful, but this didn't want to play either, giving: | get: Access failed: 550 : not a plain file | for every directory and subdirectory, and not mirror'ing anything. [snip] You might try ncftp. I've used it successfully to get whole directory trees with "get -R". Works like a champ for me. The Debian slink package is ncftp_3.0beta14-2.deb in the "net" directory of main. Gary
Re: BLAS library?
[EMAIL PROTECTED] (Gary L. Hennigan) writes: | "Thomas Gebhardt" <[EMAIL PROTECTED]> writes: | | > Is there still not a package available that contains the BLAS library? | | > The fact that LAPACK is there is great, but I need BLAS too. I realize | | > there's an optimized PPro version of the BLAS library, but this is for | | > light development work on an old 486 and I'm not even sure the PPro | | > optimized version would work. | | | | Doesn't Lapack provide the blas libs? | | No, I don't believe so. LAPACK uses BLAS a lot though. Sorry to reply to my own message, but there is a BLAS library in lapack-dev that will suit my needs perfectly. Gary
Re: BLAS library?
"Thomas Gebhardt" <[EMAIL PROTECTED]> writes: | > Is there still not a package available that contains the BLAS library? | > The fact that LAPACK is there is great, but I need BLAS too. I realize | > there's an optimized PPro version of the BLAS library, but this is for | > light development work on an old 486 and I'm not even sure the PPro | > optimized version would work. | | Doesn't Lapack provide the blas libs? No, I don't believe so. LAPACK uses BLAS a lot though. Gary
Re: email from cracklib cron
Phillip Deackes <[EMAIL PROTECTED]> writes: | [EMAIL PROTECTED] (Gary L. Hennigan) wrote: | > I have a system that I hadn't bothered adding "> dev/null" to the | > appropriate place in /etc/cron.daily/cracklib to avoid the email | > (slink distribution). It's on a seldom used system and I just never | > bothered. I had hoped that the simple fix would be quickly uploaded | > and I wouldn't need to bother. Well, it finally got to the point that | > it was driving me crazy so I looked in the Debain mailing list archive | > and applied the "> /dev/null". | | Sorry to be a little dense, but where in the /etc/cron.daily/cracklib | file do I put > /dev/null? | | This is the first I have heard about the problem, though I have noticed | a lot of mail on my system for root originating from the cracklib cron | entry. Here's the whole section. It's toward the end of /etc/cron.daily/cracklib:
if [ -n "${cracklib_dictpath_src}" -a -n "${cracklib_dictpath}" ]
then
    /usr/sbin/crack_mkdict ${cracklib_dictpath_src} |\
        /usr/sbin/crack_packer ${cracklib_dictpath} > /dev/null
fi
Gary
email from cracklib cron
I have a system that I hadn't bothered adding "> /dev/null" to the appropriate place in /etc/cron.daily/cracklib to avoid the email (slink distribution). It's on a seldom-used system and I just never bothered. I had hoped that the simple fix would be quickly uploaded and I wouldn't need to bother. Well, it finally got to the point that it was driving me crazy, so I looked in the Debian mailing list archive and applied the "> /dev/null". My question is why hasn't this been fixed? Or perhaps it has and somehow my system just isn't up-to-date? Or was it deemed too simple a fix to warrant a new version of cracklib-runtime? Gary
BLAS library?
Is there still not a package available that contains the BLAS library? The fact that LAPACK is there is great, but I need BLAS too. I realize there's an optimized PPro version of the BLAS library, but this is for light development work on an old 486 and I'm not even sure the PPro optimized version would work. Thanks, Gary
Re: X freezes!
[EMAIL PROTECTED] (W. Paul Mills) writes: | Stefano Stabilini <[EMAIL PROTECTED]> writes: | > I am a brand newbie to Linux. | > | > I followed Debian's slink 2.1 standard installation and used dselect | > afterwards to add X server and clients. Linux sits on my first hd in | > partitions hda3 (linux) and hda4 (swap), the first two partitions are | > Dos16-big and host win 95. I successfully configured LILO so as to | > choose starting OS at booting time (that i am quite proud of...). | > | > My problems came with the installation of the X server: i tried first to | > get it from XFree86.org, and XF86Setup'd it, but it would only start the | > VGA16 server. Then i realized that i could download and install X via | > dselect, which i did overwriting previous installation. | > | > I have a Cirrus Logic GD5465 AGP chipset with 4MB RAM and a HP D2817A | > Ultra VGA monitor, so i am using the SVGA X server, which gets started | > directly on boot. I configured it with XF86Setup again, leaving RAMDAC, | > VRAM and clockchip to be probed and i chose an 800x600 16bpp resolution | > for the monitor. | > | > The server and client started smoothly, i am able to change video mode, | > but after a few minutes' work everything freezes and i am no more able | > to even Ctrl-Alt-Bksp or Ctrl-Alt-Del. Actually the only means i found | > to get unstuck without powering-off is rebooting via telnet from another | > machine. | | Are you running xdm? If so: | | /etc/init.d/xdm stop | | followed by: | | /etc/init.d/xdm start | | should restart X without rebooting the machine. Also, you can switch to a different virtual console with the Ctrl+Alt+F# key sequence, where "#" is replaced with 1 through 6, thus avoiding going to another machine and telnetting in. | Why the lockups -- I do not know, but would suspect that something | from your previous installation of X is lingering around and causing | problems. 
[big sig deleted] Sounds like a reasonable guess; it could also be an I/O or interrupt conflict somewhere. It's a good idea to check your logs to see if anything shows up in there (files in /var/log). Gary
Re: Hit by virus !? Help, please...
H C Pumphrey <[EMAIL PROTECTED]> writes: | Yike. That is a nasty thought. I have Debian and W98 on separate physical | discs at home and W98 refuses to acknowledge the existence of the Debian | disc. Hopefully a W98 virus would trash W98 on hda and leave my Debian | setup on hdb alone, except that I would need a boot floppy to get going. | Does this sound plausible? Depends on what you mean by Win9x refusing to see the disk and what the virus is. Are you saying your Linux disk is not visible in Win/DOS fdisk? In all likelihood Win9x doesn't see it because it doesn't recognize the partition type. That doesn't mean it can't be gotten at via Win/DOS fdisk. At any rate, a virus could easily be written to wipe out the partition tables on any and all disks it has access to. The only way to get marginal assurance that this won't happen to you is by running a virus scan utility under Win9x. | I think I might just go home and make a boot floppy or two. That's only part of the story; if the virus wipes out your partition table you'll likely need a full backup to recover from it. In addition it doesn't hurt to do "fdisk -l | lpr" and keep the printout around. This is a good idea even if you're not worried about a virus. Gary
Re: Java development under Debian
Daniel Barclay <[EMAIL PROTECTED]> writes: | > From: Bruce Sass <[EMAIL PROTECTED]> | | ... | > On 9 Apr 1999, Gary L. Hennigan wrote: | > | > > ... how do I install | > > them so that I don't have to explicitly list the jar files in a | > > classpath... | > | > Put them where they should be, then add the files to the appropriate | > /var/lib/dpkg/info/java-pkg.list file. The "convention" is so that the | > package manager knows about the files, manually adding them to the .list | > accomplishes that. | | How will that help? None of those files sets the class path. I solved the problem by first unjarring all the Java packages, zipping all the class files and putting the *.zip files in /usr/local/lib/java, which I then added to my CLASSPATH environment variable. Works like a champ and I didn't have to mess with the Debian installation database or put them in a system directory managed by Debian. I don't recall the exact details but if someone else needs the info I'll look at my home system tomorrow. Gary
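A sketch of that approach: collect the class archives in one directory and build CLASSPATH from whatever is there. The directory is created fresh here for illustration; on the real system it would be something like /usr/local/lib/java, and the archive names are made up:

```shell
#!/bin/sh
# Sketch: assemble CLASSPATH from every archive in a library directory.
# Directory and file names are illustrative.
JAVA_LIB="$(mktemp -d)"                   # stand-in for /usr/local/lib/java
touch "$JAVA_LIB/classes.zip" "$JAVA_LIB/extra.jar"
CLASSPATH=""
for archive in "$JAVA_LIB"/*.zip "$JAVA_LIB"/*.jar; do
    [ -f "$archive" ] && CLASSPATH="$CLASSPATH:$archive"
done
CLASSPATH="${CLASSPATH#:}"                # strip the leading colon
export CLASSPATH
echo "$CLASSPATH"
```

The [ -f ... ] guard skips unmatched glob patterns, so an empty directory just yields an empty CLASSPATH instead of literal `*.zip` entries.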
Re: Time Command error in Slink?
Wayne Topa <[EMAIL PROTECTED]> writes: | I have found a problem when using the time command. From the man page | -o FILE, --output=FILE | -a, --append | | So the command 'time -a -o log makebzImage' should log the time used | to compile the kernel (in this example) to the file 'log'. What is | does instead is : bash: -a: command not found Then reports the time | when the compile completes. Nothing is appended to the log file. "time" is also a Bash reserved word (a shell keyword), and the shell's version doesn't take those options. To use the utility you're trying to get at, specify the entire path, e.g., on a slink system, /usr/bin/time. Gary
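The clash is easy to see from the shell itself. The /usr/bin/time invocation is shown only as a comment, since it needs the external time utility installed:

```shell
#!/bin/sh
# Sketch: "time" is parsed by bash itself, so options like -a/-o never
# reach the external utility.
bash -c 'type time'    # reports that time is a shell keyword
# Calling it by full path bypasses the shell's version, e.g.:
#   /usr/bin/time --append --output=log make bzImage
```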
Re: 2940U2W drivers in slink?
Graham Ashton <[EMAIL PROTECTED]> writes: | Just a quickie - can anybody verify that the slink boot floppies contain | drivers for the adaptec 2940U2W scsi card? I don't want to go out and | buy 6 boxes that I can't install debian on! | | I've been reading on dejanews that you need the 2.2 kernel to get a | recent driver for the U2W... It might, but you'd be better off getting a copy of the boot disks with a more current version of the aic7xxx drivers. You can pick them up at: http://www.debian.org/~adric/aic7xxx Gary
Re: ddd segfaults - help
Micha Feigin <[EMAIL PROTECTED]> writes: | I'm trying to start ddd or xxgdb in order to debug c/c++ programs | I tried versions 3.1.3-2 3.1.3-1 and the slink version. | on the potato and up version I get a message, oops you found a bug in | the program | and then segfault every time i try to start it. It starts loading | (gives the image saying that it's starting) and then hangs. | On the slink version all i get is segfault and it stops. | the log file only says that ver 3.1.3-1 was compiled for i586 and ver | 3.1.3-1 was compiled for i686, nothing else about the error or that | might be relevant. | xxgdb gives me a message: Can't allocate color map entry "Snow1" and | "Snow2" and then can't perform malloc. | | I am running a 486/DX2 with a slink kernell 2.0.36. libc6, gcc, g++, | gdb and most of the development stuff is from potato. Was there a reason that you mixed slink and potato pieces? It's really hard to find a problem when you do this. Especially with slink and potato, which are based on two different versions of glibc (2.0 and 2.1). My suggestion would be to go with just one distribution and then let us know about any problems. Most problems with DDD aren't problems with DDD, but problems with Lesstif. Until Lesstif goes stable I've been staying away from the dynamically linked versions of DDD and going with the statically linked version. The static version (Debian package ddd-smotif) is linked against a commercial Motif distribution and I've never had one problem with it. Gary
Re: ddd's segfaulting tradition
"Noah L. Meyerhans" <[EMAIL PROTECTED]> writes: | On Tue, 20 Apr 1999, E.L. Meijer (Eric) wrote: | > I wonder, does anyone use ddd in a serious way with C++? Everytime a | > new debian release arrives I give it a try, and everytime it manages to | > segfault within a few minutes. I suppose ddd should be nice for C++ if | > it worked, but I never found one real life bug with it in _my_ code | > before I hit one in ddd itself. | | I've used DDD quite a bit, and I've never had a single crash. I suspect | your problems have more to do with Lesstif than with DDD. I compiled DDD | myself and linked it against OSF/Motif (I got it for free from my former | employer, a Motif re-seller). If you really need DDD that bad (it is a | great debugger), consider purchasing a Motif license and building your own | copy. [large sig deleted] If the problem is in fact Lesstif, why bother purchasing Motif? Just download the version that's statically linked against Motif. Certainly the executable is larger, but other than increasing the amount of load time and a larger memory footprint it would solve a Lesstif problem, and it's a lot cheaper than US$100. Last time I looked there was even a Debian package for a statically linked copy. But if not you can always go directly to the source, http://www.cs.tu-bs.de/softech/ddd. If you do this I'd recommend the semi-static binary. It only links in Motif statically, and everything else dynamically. Gary
Re: Specifying a proxy server for ftp
Rick Tan <[EMAIL PROTECTED]> writes: | I would like to use a proxy server with ftp. How do I specify this? | With http, I used http_proxy=htpp://proxyserver:port/ with great | success. I tried ftp_proxy but it didn't work. Personally I use ncftp. It can deal with a lot of firewall types and, in addition, is a much better ftp interface. Gary
Re: Again: Problems with Diamond Viper V550
Sebastian Lindenmüller <[EMAIL PROTECTED]> writes: | Hi! | | XFree 3.3.3 is on the 5. CD of my debian-distribution, but I can't install | it. Could anyone please help me by writing a short manual how to install a | new program with DSelect? I'm trying it since 9 hours now. Personally I didn't bother installing a Debian version. I pulled the XF86_SVGA (version 3.3.3) directly off of: ftp://ftp.xfree86.org/pub/XFree86/current/binaries/Linux-ix86-glibc/Servers file name XSVGA.tgz. I untarred it and installed the resulting XF86_SVGA in /usr/local/bin. I then edited the file /etc/X11/Xserver so that the first line was /usr/local/bin/XF86_SVGA and I was off and running with my Creative RivaTNT without upgrading to anything in potato. Gary
Re: MICROSOFT BS FUD
"Christopher J. Morrone" <[EMAIL PROTECTED]> writes: | On 14 Apr 1999, Gary L. Hennigan wrote: [snip] | > You guys need to read your Slashdot (http://slashdot.org). I've heard | > that this particular "benchmark" was commissioned by Microsoft. Anyone | > who pays attention to a benchmark commissioned by one of the | > interested parties deserves what they get. | | Well, while I agree with that, this is already being read and believed by | managers and suits. What we need are numbers to the contrary, not "it was | commisioned by Microsoft". Again, any logical person would conclude that the test was biased given that one of the interested parties paid for the test. I'm not saying that nobody will believe it, but I think given Mindcraft's readership and their apparently close ties to Microsoft it'll have a pretty limited impact. | Of course, its not likely that anyone in the free software movement will | be able to verify the results, because they used pretty expensive | machinery. A four processor Xeon as the server, and 144 pentium test | nodes with ethernet switches. Again, read Slashdot. There's already a "questioning" of the procedures used by Mindcraft up on the Linux Weekly News site, http://lwn.net/1999/features/MindCraft1.0.phtml Gary Opinions are my own.
Re: compiling kernel
Andrew Waltman <[EMAIL PROTECTED]> writes: | > On Wed, 14 Apr 1999, Pollywog wrote: | | > Building the kernel the "standard" way is fine in Debian. I did it for | > almost 2 years before recently switching to the 'make-kpkg' method. | > Make-kpkg is really nice, and quite easy. It turns the newly compiled | > kernel into a .deb file, which you can easily install. It will make sure | > things like the system map file get put in the right place, and other | > handly little details like that. | | I have been hand-building the kernel for a while now, but decided to try | using make-kpkg today. It seemed to work well, but I have a couple of | questions: | | How are the header files handled? I made the kernel-headers deb and | installed it, but the files in /usr/include/asm, linux, etc. did not get | updated. Do you have to do something special to get them to install | correctly? Also, according to the kernel source README there should be | symlinks into the the source tree, but it looks like Debian does it | differently -- there are no symlinks. How does this work when you upgrade a | kernel? The information in the kernel source is not really correct. Linus himself has stated numerous times that making the symlinks can cause some serious problems. Namely, that the libc file doesn't correspond to the header files if you modify the links yourself. Personally, I have NEVER updated the kernel header files when I package the kernel myself. I imagine some software might require updated header files, but I haven't run into it yet. | How are modules made and installed using make-kpkg? I had to manually make | and install them. I tried "make-kpkg buildpackage", "make-kpkg | modules_config", "make-kpkg modules", but the modules where never built | until I did "make modules; make modules_install" in the | /usr/src/linux-2.2.5 directory. make-kpkg compiles the modules automatically whenever you build a kernel image. 
These modules get put into the kernel*.deb file along with the kernel and are installed when you use "dpkg --install kernel-image_.deb". The only reason I can think of for it not to do this is if you didn't specify kernel_image as the target. Invoke make-kpkg like: make-kpkg --revision whatever.1 --zimage kernel_image Gary
Re: compiling kernel
Pollywog <[EMAIL PROTECTED]> writes: | If I need to upgrade a kernel on a Debian system, can I do it in the same way | as it is done with other distributions or is there some obscure Debian way to | do this? It seems that when I do it in the customary way, I later run into | problems which appear to be related to the kernel. The best way to deal with kernel source, either a debianized kernel-source package, or kernel source straight off of ftp.kernel.org, is with the Debian kernel-package package. You do the standard make config, e.g., "make xconfig", and then invoke make-kpkg. This compiles the source, including any modules, and builds you a nice *.deb file that you can install manually with "dpkg -i". Install kernel-package 6.05 (on slink) and read the man pages and docs in /usr/doc/kernel-package. Once you use it you'll never go back. It's a great utility. Gary
Re: MICROSOFT BS FUD
Kenneth Scharf <[EMAIL PROTECTED]> writes: | There already has been feedback on the web (and this list) about this. | It does appear that a great effort was made to pull all the stops out | in configuring NT, and little care was given to setting up Linux, i.e.: | use of a kernel with known network bugs, none of Apache's optimizations | turned on... | | --- Rick Macdonald <[EMAIL PROTECTED]> wrote: | > On Wed, 14 Apr 1999, Kenneth Scharf wrote: | > | > > Well it finally happened. Microsoft has paid | > someone off to fix a | > > benchmark showing that Windows NT is actually | > better than linux. | > > | > > | > http://www.mindcraft.com/whitepapers/nts4rhlinux.html | > | > This doesn't look good. Are the results cooked or | > flawed, or the | > configuration not optimal? Or is it true? You guys need to read your Slashdot (http://slashdot.org). I've heard that this particular "benchmark" was commissioned by Microsoft. Anyone who pays attention to a benchmark commissioned by one of the interested parties deserves what they get. Gary
Re: using FAT floppies- a drawback
Pollywog <[EMAIL PROTECTED]> writes: | On 09-Apr-99 Gary L. Hennigan wrote: | > Pollywog <[EMAIL PROTECTED]> writes: | >| I was going about reformatting a floppy disk I use to backup my Exim | >| filters | >| and configs and I remembered something: DOS floppies are limited to 8.3 | >| type | >| names. I believe there are some ways around this which I cannot | >| remember, but instead I will keep using ext2 on some floppies like this | >| one. | > | > Do you need to read these floppies on a DOS system? If not don't even | > bother formatting them, just use tar on the device itself, e.g., | > | > tar cvf /dev/fd0 | | I was not certain that I could do that. Does this mean I can tar a small | directory tree and copy it to floppy using any filename I like? I don't think you can rename the files with tar. You may be thinking of a tar archive, where you do "tar cvf filename /usr/directory". To copy the files to floppy you use the device in place of "filename". | For instance, could I tar (recursively) /local/mail and copy all the files to | a floppy with the directory structure intact? I know this can be | done on tape. Others have answered this, and yes, it is quite possible. You can even span multiple floppies using the "-M" option to tar. Also, someone else mentioned that a bad sector would prevent you from reading the floppy. While formatting the floppy to check for bad sectors can certainly be done, I'd personally just recommend using the "-W" tar option to verify it. This just causes tar to reread what it just wrote to the floppy and compare it to the original. It's too time consuming to do this on most tapes, but on a floppy it doesn't take much time at all, particularly a single floppy. Gary
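To make the "-W" verify option concrete, here's a minimal sketch. It archives to an ordinary file so it can be run anywhere; to write straight to a floppy you'd substitute /dev/fd0 for the archive name. The /tmp/tardemo paths are just made-up examples:

```shell
# Create some example files to archive (names are illustrative).
mkdir -p /tmp/tardemo/mail
echo "filter rules" > /tmp/tardemo/mail/filters

# Create the archive; -W (--verify) makes tar re-read what it just
# wrote and compare it against the originals.
tar -C /tmp/tardemo -cWf /tmp/tardemo.tar mail

# Listing the archive shows the directory structure is kept intact.
tar -tf /tmp/tardemo.tar
```

The same flags apply unchanged when the archive "file" is /dev/fd0, and adding "-M" lets the archive span multiple floppies (tar prompts for the next disk).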
Re: using FAT floppies- a drawback
Pollywog <[EMAIL PROTECTED]> writes: | I was going about reformatting a floppy disk I use to backup my Exim filters | and configs and I remembered something: DOS floppies are limited to 8.3 type | names. I believe there are some ways around this which I cannot | remember, but instead I will keep using ext2 on some floppies like this one. Do you need to read these floppies on a DOS system? If not don't even bother formatting them, just use tar on the device itself, e.g., tar cvf /dev/fd0 Gary
Java development under Debian
I'm doing some Java development and I'd like to use my Debian box as one of the development platforms. Trouble is that I need the swing and infobus libraries for this project and they're not included with the Debian JDK. I can download these puppies from Sun, but how do I install them so that I don't have to explicitly list the jar files in a classpath, while not violating the convention of installing software anywhere but /usr/local or /opt? If anyone has a good solution for this I'd be interested in hearing it. Thanks, Gary
Re: xntp3 on dialup - how?
Christian Dysthe <[EMAIL PROTECTED]> writes: | Hi, | | I have installed xntp3 (running slink). However, the daemon is started from | init.d and I am not yet online. The log tells me that the network is not | available (what a surprise!;). | | How do I best utilize xntp on a dialup box? Could I put something in | ppp-up and ppp-down? If this is the case, what? As root, in /etc/ppp/ip-up.d create a file called something, e.g., timesync. In that file should be:

Start file timesync----------
#!/bin/sh
if [ -x /etc/init.d/xntp3 ]
then
    /etc/init.d/xntp3 stop > /dev/null 2>&1
fi
if [ -x /usr/sbin/netdate ]
then
    /usr/sbin/netdate your.xntp3.server > /dev/null 2>&1
fi
if [ -x /etc/init.d/xntp3 ]
then
    /etc/init.d/xntp3 restart > /dev/null 2>&1
fi
exit 0
End file timesync----------

Don't copy the lines with "-" on them, and make sure you substitute a real machine name, or IP address, for "your.xntp3.server". Now do "chmod 700 timesync" and you should be set. Of course, you can make the script nicer, and personally I never bother with the ip-down thing to stop xntp. It will run, and probably write complaints in the log, when you don't have a network connection, but I just ignore the messages. This is on a Debian "slink" system. I do something different since I use my own scripts for dialup, so take this solution as is, i.e., untested. Gary
Re: lspci -- can't find.
"Person, Roderick" <[EMAIL PROTECTED]> writes: | I have kernel 2.2.3 installed. On boot, when the sound module is loaded, I | get the error | lspci: can't find | | I forget the rest (I'm at work), but when I try to initialize the sound card | with isapnp I get the same error. What is lspci? Where do I get it? Install the pciutils package in the admin section of slink. That will get rid of the message. Gary
Re: perldap on debian linux
"Aaron M. Stromas" <[EMAIL PROTECTED]> writes: | i'd like to build perldap on debian linux. as it uses netscape c sdk i | decided that the fastest way would be to build it first. unfortunately, | the configuration failed because it didn't find Xm/Xm.h. | | is there a debian package with that (motif?) include file? what is it? Motif isn't free software. You'll have to purchase a copy from someone like Redhat. There are other vendors as well. I'd suggest a search of comp.os.linux.misc for other recommendations. Last time I looked it wasn't exactly cheap, depending on your financial status. Something on the order of 100 US$. Gary
login package source?
I'm having problems with the newgrp command. Whenever I try to change my primary group with it, as a user, I get:

% newgrp - audio
getgroups: Invalid argument

I suspect something in my environment is causing the problem, since it seems to work ok for other users, and root. I've had trouble like this before, and I've resorted to getting the source, compiling it for debug, and chasing down the trouble. Problem is I can't find the source for the login package. The package is part of "base" but there doesn't seem to be a source package for it. Anyone know where I can pick it up? Thanks, Gary
Emacs Trouble (was Re: The GNU thing)
Jonathan Guthrie <[EMAIL PROTECTED]> writes: | On 24 Mar 1999, Gary L. Hennigan wrote: | | > Jonathan Guthrie <[EMAIL PROTECTED]> writes: | | > | Okay, so emacs is a religion, I can deal with that. I use emacs the | > | editor quite a bit. However, I can't get xemacs-20 to suspend with | > | CNTRL-Z on my computer so I either have to use multiple virtual terminals | > | or do everything under X. | | > | Do you know what the problem is? | | > When you're running it in a terminal window you can't get it to | > suspend when you hit Ctrl+Z? | | Yes. | | > Is your suspend character set to Ctrl+Z? | | Yes. (I also verified this by doing the "stty -a", just to be sure.) | | The symptoms (gosh, wouldn't it have been nice for me to include the symptoms | the first time) are different from those I would expect for a different, | or missing, suspend character. | | xemacs stops taking input, but won't start the shell. I can hit ^C a | couple of times and get xemacs' attention, but the only thing it'll let me | do is abort the edit and dump core (and it won't do the core dump because | the default core size, as set by ulimit, is 0, and I never remember to | change that before I run emacs.) | | This is under Debian 2.1 as most recently released, kernel V2.2.3, and | xemacs20-nomule-20.4-13. (FWIW, it also does this if I use the mule | executable, not that it should make any difference.) | | Is it possible that the .emacs or the .xemacs-options file has something | in it that could cause this behavior? I suppose you could've somehow overridden the suspend-emacs function. Try that function manually, i.e., from within XEmacs do:

M-x suspend-emacs

and see what happens. Gary
Re: The GNU thing
Jonathan Guthrie <[EMAIL PROTECTED]> writes: [snip] | Okay, so emacs is a religion, I can deal with that. I use emacs the | editor quite a bit. However, I can't get xemacs-20 to suspend with | CNTRL-Z on my computer so I either have to use multiple virtual terminals | or do everything under X. | | Do you know what the problem is? When you're running it in a terminal window you can't get it to suspend when you hit Ctrl+Z? Is your suspend character set to Ctrl+Z? At the Unix prompt do a "stty -a". What does it say for "susp ="? It should, for example, be "susp = ^Z", and you can set it by doing:

% stty susp ^V^Z

The ^V is the bash shell's literal-quote key. It just says to treat the next key you hit literally and not to interpret it. Of course this is also a terminal setting; it shows up as the "lnext" character when you do the "stty -a". Gary
Re: Time track for X similar to gtt?
Lorina Poland <[EMAIL PROTECTED]> writes: | titrax is the package you should check out. [Time Tracker 1.98 is the | version I have.] | | [EMAIL PROTECTED] is the author; I can't find the URL for the | location that I got the source from. Thanks Lorina! I found it via ftpsearch. If anyone else is interested the URL is: http://www.alvestrand.no/domen/titrax/TimeTracker.html Not fancy, but it'll do the trick nicely. Gary
Time track for X similar to gtt?
I'm looking for a time tracker utility for X similar to the Gnome utility gtt. I don't really want to use Gnome, since I'll be using the utility on different architectures, namely a Debian box and an SGI box, and the idea of compiling Gnome for my SGI isn't exactly appealing. If you're not familiar with gtt, it allows you to track the time you spend on a particular project. It's like having a stopwatch for each project. Thanks, Gary
Re: kernel Image Size.
Justin Akehurst <[EMAIL PROTECTED]> writes: | On Thu, 11 Mar 1999, Person, Roderick wrote: | | > Ok, | > | > I finally got a kernel to compile!! Now it's telling me it's too large to | > mount!!! So I remade it, making as much stuff as possible as modules. Still | > too large. So I tried to make bzImage - STILL TOO LARGE!! | > | > What is the largest size a kernel can be and still get loaded!! My smallest | > kernel so far is 808,536 in size. My largest was over a meg. Help! All I | > wanted was to hear Window Maker! That's all | | make menuconfig; | make dep; | make clean; | make zImage; | | And if you use modules... | | make modules; | make modules_install | | Those are the steps you have to take to make a kernel. make zImage will | compress the kernel image so that it can fit in memory. Or, you can bypass all this crud with the make-kpkg Debian utility. It does all these steps for you, and it puts together a nice kernel-image*.deb file which you can install manually with "dpkg -i". Not only is it just nicer to use, it's also the way it's supposed to be done under Debian, i.e., it's the Debian Way (TM). Gary
Re: 2.2.2
Paul Nathan Puri <[EMAIL PROTECTED]> writes: | Has anyone else had problems with v2.2.2? | | I just downgraded to 2.2.0 because my ppp connections were sluggish, and | my screen actually froze on me! | | I couldn't believe it. So now I'm running 2.2.0 just to see if I keep | having these problems | | Any other stories like this? I had problems with PPP uploads under 2.2.1. They were extremely slow so I went back to booting 2.0.36 by default. I had assumed that 2.2.x needed a newer version of PPP than was available in slink, but I don't see any mention of it in the documentation. Gary
Using update-menus?
I have mathematica installed on my system and was thinking it'd be nice to make it available via the menus. I know Debian has a nice method of doing things like this, but I can't seem to make it work. I THOUGHT it was as easy as adding a menu entry in /etc/menu, so I installed a file called /etc/menu/mathematica with the contents:

?package(mathematica):\
  needs=x11\
  section=Apps/Math\
  title="Mathematica"\
  command=/usr/local/bin/mathematica

and ran update-menus. Unfortunately I must be missing something, because the entry didn't show up in my WindowMaker menu. Apps/Math only contains: Xcalc, bc, dc. I tried various things, including a reboot and an "update-menus -v", but I didn't see anything that was of use to me. Can someone shed some light on what I'm missing here? Thanks, Gary
Re: Managing /usr/local
[EMAIL PROTECTED] writes: | I was also looking for just this sort of package, but never did find one. | There are some similar tools, but none that perform the functions you're | looking for. | | * tripwire monitors systems and reports modifications to files, but | it doesn't | tell you what was changed, only that they were changed. | | * moninst comes close, but doesn't handle additions and deletions of files. | | I was thinking of writing a package to do this, but it'll take me | awhile, it's | not my top priority at the moment. Here's a couple more I found earlier, but they don't really do what I want either. They're basically wrappers around the Unix "install" command and that's how they track installations: SmartInst (http://www.iae.nl/users/grimaldo/OpenSoft/smartinst.shtml) Pack install monitor (http://www.linuxos.org/Pack.html) I really don't want an "install" wrapper. That's a neat idea but there's still a lot of software that just doesn't use "install" when you do "make install". I'd like something that keeps a database of the files/links/subdirectories in specified directories and after you install something it creates a package file with anything new it finds, updates the base database and then allows you to uninstall it. I could've sworn I read somewhere that someone had written a utility like this but perhaps I'm mistaken. Gary
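Lacking such a tool, the core idea (snapshot, install, diff) is easy to sketch in shell. This is only an illustration: the directory and file names below are made up, and the "install" step is simulated with touch rather than a real "make install":

```shell
# Stand-in for /usr/local; snapshot the tree before installing anything.
PREFIX=/tmp/fake-usr-local
mkdir -p "$PREFIX"
find "$PREFIX" | sort > /tmp/before.list

# ... a pretend "make install" drops some files ...
mkdir -p "$PREFIX/bin"
touch "$PREFIX/bin/newtool"

# Snapshot again and diff: lines only in the second list are what the
# install added, i.e., a ready-made uninstall manifest.
find "$PREFIX" | sort > /tmp/after.list
comm -13 /tmp/before.list /tmp/after.list > /tmp/newtool.files
cat /tmp/newtool.files
```

A real tool would also have to notice modified and deleted files (which is where tripwire-style checksums come in), but for the common "what did make install just dump into /usr/local" case, this before/after diff is the whole trick.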
Re: Help. Trying to upgrade to hamm 2.0.36
Tim Heuser <[EMAIL PROTECTED]> writes: | > o type make zImage | | I got to the "make zImage" command and sat around till the computer got done processing all kinds of stuff. | The last 6 lines printed to the screen were... | | as86 -0 -a -o bootsect.o bootsect.s | make [1]: as86: command not found | make [1]:*** [bootsect.o] Error 127 | make [1]: Leaving directory `/usr/src/kernel-source-2.0.36/arch/i386/boot' | make:*** [zImage] Error 2 | coyote:/usr/src/linux# Umm, you need to install the bin86 package. You may also want to take a look at make-kpkg in the kernel-package package. It automates kernel compilation a bit and gives you a nice Debian package that you can install with dpkg. The short of it is you do a make config and then simply do a:

make-kpkg --revision custkernel.1 --bzimage kernel_image

This proceeds to compile the kernel, and when it's done it'll produce a kernel-image.deb file in ".." which can be installed with:

dpkg -i kernel-image.deb

Do a "man make-kpkg" to get the details for yourself first, and read the documentation in /usr/doc/kernel-package. When I first heard about it I thought it was a waste of time; I'd been building kernels manually since 0.99. But it really is a nifty piece of work that simplifies a lot of things for you. Gary
Managing /usr/local
What's the name of the software that helps you keep track of what's getting installed in /usr/local when you install a non-Debianized piece of software yourself? Something akin to the Win98 uninstall utility. I believe you simply run it after doing a "make install", or similar, and it scans the specified directories for any files that may have been installed. It keeps a database of these so that you can easily uninstall whatever it is. I know I've seen this discussed somewhere. If not here then certainly on comp.os.linux.misc, but I can't for the life of me think of a keyword to use in searching for it. So, I apologize in advance for the repetition. Thanks, Gary
Re: Ejecting scsi tape
Chris Hoover <[EMAIL PROTECTED]> writes: | Is there a way to have my system automatically eject a tape from my scsi | tape drive? I'm wanting to add this to my backup script, so I don't | forget to change the tape and overwrite a backup. man mt. Short answer:

mt -f /dev/nst0 rewoffl

Gary
Re: Curious Question.
Lawrence Walton <[EMAIL PROTECTED]> writes: | I am not sure that's always true; try looking at addgroup in redhat and | addgroup in debian. Or the different choices of UIDs, or file placement. | Enough that I rather dislike distro hopping. | | /Blatant Debian plug/ | Also I almost always agree with Debian's file placement. | On 15 Feb 1999, Gary L. Hennigan wrote: | | > Dan Willard <[EMAIL PROTECTED]> writes: | > | Just how closely does Linux match with Unix? If I know Linux | > | and sit down | > | in front of a Unix terminal am I just going to notice a few | > | differences (i.e., | > | file locations and a couple of commands) or am I going to be | > | lost? I think | > | I already know the answer but would like confirmation. Thanks. | > | > Almost without exception Unix is Unix at the user level, especially | > basic commands and tools, e.g., ls, df, du, awk, grep, etc. Things can | > vary more at sys admin level though. For example, even among Linux | > distributions there's the variation in "init", with some distros using | > SYSV and others using BSD style init schemes. Even at this level | > though there's usually a root commonality. For example I don't think | > I've ever run across a Unix system that didn't use /etc/passwd. | > | > Even with this variation at the sys admin level, once you've learned | > one flavor of Unix it's much easier to become familiar with a | > different flavor. Of course all the things you mention are sys admin type issues, and as I stated, such things do vary between different Unix variants and even between Linux distributions. I stand by my statement that, from a user perspective, Unix is pretty much Unix. I've had experience with Linux, HP-UX, IRIX, SunOS, Solaris, DEC OSF, Paragon OSF, QNX, AIX, BSD, and probably some I've missed, and never had any problem going from one to the other as a user. As a sys admin, some of them gave me HUGE headaches, they were so different, but not as a user.
Still, learning admin issues on a new variant was much easier once I learned my first Unix system. Gary
Re: Curious Question.
Dan Willard <[EMAIL PROTECTED]> writes: | Just how closely does Linux match with Unix? If I know Linux and sit down | in front of a Unix terminal am I just going to notice a few differences (i.e., | file locations and a couple of commands) or am I going to be lost? I think | I already know the answer but would like confirmation. Thanks. Almost without exception Unix is Unix at the user level, especially basic commands and tools, e.g., ls, df, du, awk, grep, etc. Things can vary more at sys admin level though. For example, even among Linux distributions there's the variation in "init", with some distros using SYSV and others using BSD style init schemes. Even at this level though there's usually a root commonality. For example I don't think I've ever run across a Unix system that didn't use /etc/passwd. Even with this variation at the sys admin level, once you've learned one flavor of Unix it's much easier to become familiar with a different flavor. Gary
FreeWRL and Xswallow?
Has anyone successfully swallowed FreeWRL via Xswallow? I can't seem to get it to work. Netscape keeps giving me an error about blib not being found. If anyone has a functional xswallow.conf entry for FreeWRL I'd appreciate a copy of that info. Thanks, Gary
Re: Slink boot with SCSI
Clyde Wilson <[EMAIL PROTECTED]> writes: | When I try to boot to install Slink I get frozen up just before my | SCSI stuff loads. The same disk works fine on my non-SCSI machines. | I have tried all the "special" rescue disks and they do the same. | | Any idea what I am doing wrong? What type of SCSI adaptor are you using? Gary
Re: Slink, xbase-common and xdm...
Sergey V Kovalyov <[EMAIL PROTECTED]> writes: | How about just pressing Ctrl-Alt-F1 and login through the text console? | AFAIK, your problem is a known bug - a missing ";;" in /etc/X11/Xsession. | Hopefully it will get fixed soon. | | Sergey. | | On Tue, 9 Feb 1999, Hogland, Thomas E. wrote: | | > Interesting problem - I've upgraded to slink as part of kernel 2.2.1 on two | > different PC's, and I think I goofed on one. When the new X stuff went in | > and asked if I should use my current config files, the PC at work was told | > No - leave mine alone; the PC at home was told Yes - overwrite with new | > ones. PC at work is fine, but PC at home now immediately boots into xdm and | > refuses to let me log in... Won't take root, me, nothing. Tried | > booting from | > the boot disk I made - slowly boots, then starts xdm :-(...Anyone know how | > to kill this? Even Ctl-Alt-Backspace just restarts it... It was fixed in the latest update (yesterday on http.us.debian.org) so it's in the pipe for a mirror near you. Gary
Re: slashdot poll
Pollywog <[EMAIL PROTECTED]> writes: | On 09-Feb-99 Adam Di Carlo wrote: | > | > Debian seems to be taking a beating on the recent /. poll | > of distributions. Have you all voted? | | Why is that? I just ordered a copy because I have heard good things | about the distro. You won't be sorry. I've tried a few distros, and been using Linux since 0.99 days, and have never been disappointed in Debian. As I said in a previous note on the list, I personally consider Debian's showing on /. to be pretty good. It's second, behind RedHat. The only reason RedHat is doing better is because they have an advertising budget. There's nothing technically superior about it, other than, perhaps, a little easier installation procedure. Gary
Re: slashdot poll
Adam Di Carlo <[EMAIL PROTECTED]> writes: | Debian seems to be taking a beating on the recent /. poll | of distributions. Have you all voted? A beating? Second place? Seems pretty good to me. True, it trails RedHat by a significant margin but I don't think that's really surprising. Just reading comp.os.linux.misc leads you to the conclusion that RedHat is the most popular distribution. Gary
Re: X-windows not working anymore...
"Damir J. Naden" <[EMAIL PROTECTED]> writes:
| Hi Pete Harlan; unless Mutt is confused, you wrote:
| > > Question. I recently upgraded my kernel from 2.0.36 (I think) to 2.2.1.
| > > I am running slink (frozen) and now, my x-windows doesn't work. If I
| >
| > This happened to me, and .xsession-errors now says
| >
| > /etc/X11/Xsession: line 47: syntax error near unexpected token `default)'
| >
| > The bit of /etc/X11/Xsession that reads
| >
| > case $1 in
| >     failsafe)
| >       if grep -q ^allow-failsafe $optionfile; then
| >         if [ -x /usr/bin/X11/xterm ]; then
| >           exec xterm -geometry +1+1
| >         else
| >           echo "Xsession: unable to launch failsafe X session: xterm not found."
| >           exit
| >         fi
| >       fi
| >     default)
| >       ;;
| >
| > perhaps should have a ";;" on the line before the "default)" line.
| > (Haven't tried this myself yet.)
| >
| > Good luck,
| >
| > --
| > Pete Harlan
| > [EMAIL PROTECTED]
|
| I have just commented out the offending 'default)' line and it worked for
| me. Does anyone know if this is a bad thing to do? I thought it was there
| just as a comment sort of thing.

In this case it won't hurt anything, since the "default)" branch doesn't do anything. However, the correct solution is to put the ";;" in there. Gary
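For reference, the repaired logic boils down to ordinary sh case syntax: every branch, including the failsafe one, needs its own ";;" terminator before the next pattern. A minimal standalone sketch (the function name and echoed messages are made up for illustration, not from the real Xsession):

```shell
# Sketch of a correctly terminated case dispatch, in the spirit of
# the fixed /etc/X11/Xsession fragment.
choose_session() {
    case "$1" in
        failsafe)
            echo "failsafe session"
            ;;   # this ";;" is what the broken Xsession omitted
        default)
            echo "default session"
            ;;
    esac
}

choose_session default
```

Without that first ";;", sh treats "default)" as part of the failsafe branch's body, which is exactly the "syntax error near unexpected token `default)'" reported above.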
Re: Will frozen work with 2.0.36 ?
[EMAIL PROTECTED] writes: | Above asks it all ... will I have failings with that, or must I upgrade to | 2.1.xx? | | Sorry if it's been asked before. Debian frozen/slink is based on 2.0.34 but provides packages for both 2.0.35 and 2.0.36, so, yes it will work with 2.0.36. Most things in frozen/slink also work with 2.1.xx kernels (I'm running it as I type) but Debian won't be completely 2.1.xx ready until, at the earliest, the next release, currently called potato. Gary
Re: upgrade via proxy
Serge Gavrilov <[EMAIL PROTECTED]> writes: | Hello debian users! | | I don't have a direct connection to the Internet, but I can use http or ftp via proxy. | How can I upgrade my system using dselect or apt-get in this case? man sources.list. All you need to do is set:

http_proxy="http://<host>:<port>/"

where you substitute what's appropriate for <host> and <port>, and you're set. Gary
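For example, assuming a proxy at proxy.example.com on port 8080 (both placeholders; substitute your own):

```shell
# Export the variable so child processes (apt-get, dselect's http/ftp
# acquisition methods, etc.) inherit it. Host and port are placeholders.
export http_proxy="http://proxy.example.com:8080/"
echo "$http_proxy"
```

You can put the export in your shell startup file, or in /etc/profile if every user on the box should go through the proxy.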
Re: Meta and Alt keys
Rob Mahurin <[EMAIL PROTECTED]> writes: | I had occasion today to be playing on a Sun Sparcstation which ran XDM with | the chooser. Out of curiosity, I entered my machine's address, and whoa! | it worked! So I was playing around on my own system from the Sun for a | little while, complete with the Sun's Unix keyboard, and discovered that | it's really handy to have a real Meta key in addition to an Alt. For one | thing, it works from xemacs (alt does not), and it eliminates the annoying | bug that fvwm2 has where the Alt key spontaneously quits being useful as a | panning/shortcut/windowmanager key and starts getting passed to programs. | | I think it would be really handy to change my keymap (either in X or the | console, or both) so that Alt is an Alt key and the useless Win95 Start key | is a Meta key. Hmm, I thought this was the default. I seem to distinctly remember changing that around because I hated those d*mn Windows keys so much I removed them from my keyboard! I'm using slink on my main system so maybe that's the difference? Anyway, the "xkeycaps" program is your friend in X! I think it's in its own Debian package? It allows you to define the keys any way you want, save it to a file, and I think it tells you how to load it by default when you log in. | However, I haven't been able to find any accurate documentation on how to do | this. The Keyboard-and-Console-HOWTO on my system is dated 16 November 1997 | and whatever it told me to do to change the default keymap for the console | (I tried this a month or two ago, just for kicks) didn't work; I could load | the new map manually but something else was being booted. The same for X: I | have seen several references to xmodmap, but have also seen that it is | defunct and has been replaced by something which is not named. | | How would I do this? This also seems like something which would be helpful | as a system default, since so many newbies (me included) are putting Linux | on a < 3 year old Windows system.
As I said, to each his own, but I hate those keys, even in Windows. I can't count the times in a game I was trying to hit the Alt key and hit the Win key instead, only to be popped back to the desktop. As I said, I pried those suckers completely off! In addition, under Linux I redefined my Alt keys, via xkeycaps, to be my Meta key for XEmacs too. Good luck, Gary
Re: Switching between X and Text mode terminals
"Alfie Costa" <[EMAIL PROTECTED]> writes: | Question: | | Is there an easy way, such as a few keystrokes or a command, to switch | between text terminals and X, and vice-versa? | | That is, before running X, I can press 'Alt-F1' to get the first text | terminal, 'Alt-F2' to get the second, and so on. Once I'm in X, this | doesn't work. It seems as if one has to quit X to return to one of | those other text terminals. | | If that's not clear, the simplest example of what's desired might be to | boot up, and login as root. Press 'Alt-F2', and login as some other | user, and from that terminal, type 'startx'. Then to somehow return to | the 1st root window without quitting X. | | It would be useful in some cases, as some programs look better in text | mode, others don't run well in X, and some programs don't run at all | in X. | | I wouldn't have thought it was possible, till I ran, (from X), an app | that required SVGAlib. X seemed to crash, as there was an error | message, and the screen was back in text mode. Running 'ps' showed | that X was still alive though. I found I could switch between text | terminals OK, and if I did an 'Alt-right_arrow' past the last text | terminal, X came back up, seemingly no worse for the wear. Hmm, I didn't even know the Alt-F# thing worked? Anyway, Ctrl+Alt+F# will switch between consoles. By default X is running in console 7. So 1-6 are text consoles. Gary
Re: Help compiling my kernel
[EMAIL PROTECTED] writes: | > 4) According to the README in /usr/src/linux, I should link as follows: | > | > ln -s /usr/src/linux/include/asm-i386 /usr/include/asm | > ln -s /usr/src/linux/include/linux /usr/include/linux | > ln -s /usr/src/linux/include/scsi /usr/include/scsi | > | > Problem is, I have libc6 installed, and it is installed in /usr/include. | > Should I do the links as directed? Or will the libc6 stuff work? | | Yes, follow the readme. Take a look in /usr/include, there are a lot | of things there. So what? No, I don't think this is true anymore. At some point I remember reading a post from Linus stating that doing the above steps could be harmful to your system since, presumably, libc was compiled with what's already in /usr/include and now you're replacing that with what's there for a particular kernel. This could cause a conflict between the kernel and libc, which is NOT a Good Thing (TM). He said the best thing to do was ignore the above and not make those particular symbolic links. I never have, and my kernels compile up and run fine. | > 5) I seem to have three different versions of the header files. | > Do I really | > need them all? I have: | > | > libc6 -> /usr/include | > libc5-altdev-> /usr/i486-linuxlibc1/include | > linux source-> /usr/src/include | | Only if you want the system to work. Hmm, is anything essential to Debian still linked against libc5? I'm not sure, but I don't think so. The only thing I remember in hamm requiring libc5 was netscape. That's not true in slink. Of course if you try to remove libc5 it'll tell you if anything you have depends on it. | > 6) The kernel-package docs say I need to run: | > | > make-kpkg --rootcmd fakeroot | > | > Do I need to do this if I compile as root? I don't have fakeroot | > - do I need | > it? Will su work instead? | > | | Don't know about the --rootcmd as I always make new kernels as root | and I don't have fakeroot either. Hope that answers your questions. I'll second this.
Never used fakeroot and have never had a problem. Gary
Re: Where is kernel 2.2.0?
"M.C. Vernon" <[EMAIL PROTECTED]> writes: | On 2 Feb 1999, Gary L. Hennigan wrote: | | > Mike Garfias <[EMAIL PROTECTED]> writes: | > | > | M.C. Vernon spoke forth with the blessed manuscript: | > | > I saw the uploaded message days ago, and sunsite still doesn't have the | > | > kernel-source or kernel-headers packages available yet :( | | | | > It's usually a dog. I'm not sure what "sunsite" MC was talking about, | > perhaps the UK sunsite? But certainly the sunsite in the US | > (sunsite.unc.edu) has the new kernels via their mirror of | > ftp.kernel.org. Here's the URL | > | > ftp://sunsite.unc.edu/pub/Linux/kernel.org/pub/linux/kernel/v2.2/ | | Yes, but I was wondering about the kernel-source and kernel-headers | packages. How can I make these from the kernel source .tar.gz? 'Fraid I can't help you with that. I usually just download the raw kernel source and use make-kpkg to get a kernel-image*.deb. I've never had a need to build the kernel-source and kernel-headers packages. You'll have to rely on one of the maintainers of those packages to answer that question. Gary
Re: Where is kernel 2.2.0?
Mike Garfias <[EMAIL PROTECTED]> writes:
| M.C. Vernon spoke forth with the blessed manuscript:
| > I saw the uploaded message days ago, and sunsite still doesn't have the
| > kernel-source or kernel-headers packages available yet :(
| >
| > ftp.debian.org doesn't either
| >
| > Matthew
[snip]
|
| Try ftp.kernel.org

It's usually a dog. I'm not sure what "sunsite" MC was talking about,
perhaps the UK sunsite? But certainly the sunsite in the US
(sunsite.unc.edu) has the new kernels via their mirror of ftp.kernel.org.
Here's the URL:

ftp://sunsite.unc.edu/pub/Linux/kernel.org/pub/linux/kernel/v2.2/

Gary
Dselect and obsolete packages
I upgraded to slink some time ago, via apt-get, and it's been running
smoothly ever since. I hadn't even thought about using dselect again until
recently, and it upgraded a few things apt-get hadn't caught. However, it
also lists a lot of packages in "Obsolete" sections. What do I do with
these? Here's a sampling of what dselect shows:

--- Obsolete/local Required packages in section base ---
*** Req  base      slang0.99.38  0.99.38-6
--- Obsolete/local Standard packages in section devel ---
*** Std  devel     ncurses3.4-d  1.9.9g-8.10
--- Obsolete/local Standard packages in section utils ---
*** Std  utils     lsof          4.28-3
- Obsolete/local Optional packages -
--- Obsolete/local Optional packages in section libs ---
*** Opt  libs      libwraster0   0.14.1-7
*** Opt  libs      newt0.21      0.21-8
--- Obsolete/local Optional packages in section non-free/graphics ---
*** Opt  non-free  libjpeg-gif   6a-11
--- Obsolete/local Optional packages in section non-free/libs ---
*** Opt  non-free  libglide2     2.4-3
--- Obsolete/local Optional packages in section x11 ---
*** Opt  x11       xfntbig       3.3.2.3-2
- Obsolete/local Extra packages -
--- Obsolete/local Extra packages in section net ---
*** Xtr  net       opie          2.31-3
- Obsolete/local Unclassified packages -
--- Obsolete/local Unclassified packages without a section ---
*** ?    ?         kernel-image  homepc.16
*** ?    ?         kernel-image  homepc.17

The last two are my home-brewed kernels built with make-kpkg (2.0.36 and
2.2.1), so I THINK I can just ignore those, but what about the others? Can
I remove them?

Thx,
Gary
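As I understand it, "Obsolete/local" just means installed-but-absent-from-the-archive: dselect sees the package on your system but can't find it in any current Packages index. A toy sketch of that comparison, with made-up file names and package lists purely for illustration:

```shell
# Simulated list of installed packages (as you'd get from dpkg -l).
printf 'slang0.99.38\nlsof\nbash\n' > /tmp/installed.txt

# Simulated list of packages still present in the archive's Packages file.
printf 'bash\nlsof\n' > /tmp/available.txt

# Installed but no longer available anywhere: dselect's "Obsolete/local".
grep -v -x -f /tmp/available.txt /tmp/installed.txt
```

Here slang0.99.38 falls out as the "obsolete" one, matching the listing above where old shared-library packages linger after an upgrade.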
Re: netscape
"Brian Morgan" <[EMAIL PROTECTED]> writes: | The only thing I can find on netscape's ftp site is the linux20_glibc2 | version under: | ftp://ftp.netscape.com/pub/communicator/4.5/english/unix/unsupported/ | I couldn't find any libc6 version anywhere on the site. Am I looking in the | wrong place? glibc2 = libc6. It's a bit confusing. Gary
Re: kernel 2.2.0
[EMAIL PROTECTED] writes:
| On 28 Jan, Anthony Campbell wrote:
| > On a slightly different theme, will there be patches available to upgrade
| > 2.0.36 to 2.2.0, or do we have to start afresh? As this would be a 12 Meg
| > download, it would take a long time and be expensive :(
|
| I'm afraid that enough has changed that you'd have a 12 Meg patch.
| OTOH, rumor on the developer list has it that the 2.2 kernel source
| will be included in slink, so it should be available if you get a slink
| cd (if you can wait that long :-)

In slink? I thought there were significant enough changes in 2.2.x (2.2.1
just came out) that it wasn't possible to just pop a 2.2.x kernel into a
system that had been running 2.0.x and have everything work. Is that a
mistaken impression? I ran a couple of the 2.1.12x kernels, and I know I
had to make some pretty substantial mods to get my AWE 32 support working.
I also remember hearing about problems with PPP and networking if you
didn't have the latest tools, and when I was running the 2.1.12x kernels I
got a message at boot about setserial being obsolete.

So if slink is frozen, meaning nothing but bug fixes can go into the
distribution, how can putting 2.2.x into it be justified? Please don't
read this as negative or sarcastic; I'm genuinely curious about the
process.

Thx,
Gary
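On the patch question: incremental kernel patches are just unified diffs applied with patch(1), e.g. patch-2.2.1.gz against a pristine 2.2.0 tree. A toy run of the same mechanism on a made-up Makefile fragment (all file names here are invented for the demo):

```shell
cd /tmp

# Two versions of a (made-up) kernel Makefile fragment: 2.2.0 and 2.2.1.
printf 'VERSION = 2\nPATCHLEVEL = 2\nSUBLEVEL = 0\n' > Makefile.220
printf 'VERSION = 2\nPATCHLEVEL = 2\nSUBLEVEL = 1\n' > Makefile.221

# Produce a unified diff (diff exits nonzero when files differ, hence || true).
diff -u Makefile.220 Makefile.221 > sublevel.diff || true

# Apply the diff to a copy of the old version, the same way
# "zcat patch-2.2.1.gz | patch -p1" updates a 2.2.0 source tree.
cp Makefile.220 Makefile
patch Makefile < sublevel.diff

# The file now carries the new sublevel.
grep SUBLEVEL Makefile
```

The catch the poster above mentions is real, though: 2.0.36 to 2.2.0 crossed the whole 2.1.x development series, so the cumulative diff is about as big as the tree itself.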