Re: Help requested: DMA, Seagate ST340014A, Kernel 2.4

2004-08-16 Thread Richard Cobbe
Lo, on Sunday, August 15, Rthoreau did write:

 I have used ext3, and assume you are using ext2, what kernel
 parameters are you passing to the kernel? 

Yes, ext2, although I'd be surprised if that makes a difference.  As for
kernel parameters, the only one I'm using that looks at all relevant
is `lba32' (simply because it's part of Debian's default lilo.conf file,
and I've never bothered to check whether it's necessary).

 Is there a reason you have not tried a 2.6 kernel?

Well, as I understand the situation, I can't run 2.6 without upgrading
to sarge, and I can't upgrade to sarge while running 2.2.  Since 2.4
does not currently look to be an option, this causes a bit of a
catch-22.

Thanks,

Richard


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Help requested: DMA, Seagate ST340014A, Kernel 2.4

2004-08-16 Thread Richard Cobbe
Lo, on Monday, August 16, Rthoreau did write:

  Date: Yesterday 14:49:47
 
  Lo, on Sunday, August 15, Richard Cobbe did write:
 
  The computer did come with a PCI IDE controller card (the unknown mass
  storage controller line above) that has another two controllers on it.
  I haven't yet tried switching controllers to see if that helps at all; I
  may attempt that later this afternoon.
 
  Well, that didn't work.  Moving it to the other controller should make
  the hard drive visible as /dev/hde, but the kernel couldn't find the
  root device, even when I specified `root=/dev/hde2' on the Lilo command
  line.  The odd thing is, I've switched hard drives to this controller
  before, back when I was having strange trouble with the previous drive
  and trying to diagnose it.  I don't remember exactly what incantation I
  had to mumble in order to get the kernel to find the partition, though,
  so I can't repeat the process.
 
  Does anyone else have any suggestions?
 
  Richard

So I installed the latest 2.4 kernel image out of stable
(kernel-image-2.4.18-1-k7).  I'm not sure this is an improvement,
although we'll see.

 Your lspci listing should not contain any unknown devices; for example, below 
 is my listing for my MP 2460.

Still getting unknown devices in lspci; the output is the same as
up-thread.

Further, when I ran
/sbin/hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda
I got a slightly updated version of a standard error/warning message:
ide0: unexpected interrupt, status=0x58, count=1
although things *appear* to be working fine for now.  I'd still be a lot
happier if I weren't getting that warning, though.

However, and I don't know if this is related, the kernel image only
supports a single processor; stable doesn't appear to have an
SMP-enabled kernel prebuilt.  As soon as the source installs, I'm going
to rebuild it for SMP without making any other changes, just to see what
happens.

So, in short, better but not good.

Richard





Help requested: DMA, Seagate ST340014A, Kernel 2.4

2004-08-15 Thread Richard Cobbe
Greetings, all.

I'm running Debian stable, and I've been running the 2.2 kernel series
for some time now, because I've never had very good luck getting DMA to
work for my hard drive under the 2.4 series.  However, with sarge's
release appearing increasingly imminent, I need to try to get this
resolved.  Google has, unfortunately, completely let me down on this.

The hard drive is a Seagate ST340014A, and it's plugged into the main
IDE controller on my motherboard, a Tyan Tiger MPX S2466.  According to
the manufacturer, this uses the AMD-760 MPX chipset.  As far as I can
tell, I've configured my kernel correctly, but I'm still having serious
problems that occasionally result in filesystem corruption.

I've tried three different invocations of hdparm:

  1) /sbin/hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda
  2) /sbin/hdparm -X66 -d1 -u0 -m16 -c3 /dev/hda
  3) /sbin/hdparm -X33 -d1 -u0 -m16 -c3 /dev/hda
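For reference, here is the first invocation again with the flag meanings spelled out, as documented in hdparm(8); this is annotation only, not a new recommendation:

```shell
# What the flags in invocation 1 request (per hdparm(8)):
#   -X66  set the IDE transfer mode to UDMA2 (66 = 64 + mode 2)
#   -d1   enable DMA for the drive
#   -u1   unmask other interrupts while servicing disk interrupts
#   -m16  enable multiple sector I/O, 16 sectors per interrupt
#   -c3   enable 32-bit I/O support, with a special sync sequence
/sbin/hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda
```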

I have used the first invocation with the 2.2 kernels for a couple of
years now with no filesystem problems.  Under 2.4.26, however, it
doesn't work as well.  The failure modes are somewhat complicated, but
they generally involve a message printed to the console immediately
after I run hdparm (which is one of the last things in my boot process).

In one scenario, I'll just get a warning message printed to the console:

hda: dma_intr: status=0x58 { DriveReady SeekComplete DataRequest }

Things generally seem to work well subsequently, although I can end up
with transient failures later on, some of which require rebooting, and
some of which involve filesystem corruption.  Unfortunately, I haven't
had one of these in a while, so I can't describe it more fully.

In another scenario, I'll get a longer message printed to the console
during hdparm's execution:

hda: set_drive_speed_status: status=0x58 { DriveReady SeekComplete
DriveRequest }
ide0: Drive 0 didn't accept speed setting.  Oh, well.

At this point, the system hangs and is completely unresponsive to
keyboard input, and I have to hit the reset button.  In the subsequent
fsck, I'll often find some filesystem errors that are relatively minor
but still require manual intervention to fix.  (One of these left a
couple of things in /var/lost+found and clobbered /etc/motd.)

In the third scenario, I'll get this error during hdparm's execution:

EXT2-fs error (device ide0(3,2)): ext2_check_page: bad entry in
directory #452483: unaligned directory entry -- offset=0,
inode=16501003888, rec_len=59267, name_len=6

at which point the system automatically remounts / as read-only.  Since
this happens before the boot procedure removes /etc/nologin, I have to
reboot, although in this case a ctrl-alt-del is sufficient.  Again, this
requires an fsck on reboot, although I don't think this generally leads
to filesystem errors (other than non-zero sizes on FIFOs).

Out of the hdparm invocations given above, numbers 1 and 2 both lead to
these problems.  Number 3, which I'm using currently, seems to be more
stable, although I do get console messages on startup as well:

hda: dma_intr: status=0x58 { DriveReady SeekComplete DataRequest }

hda: CHECK for good STATUS

I've only been running this for a couple of days, though, so I'm not yet
sure that it'll be stable over the long term.

Since this worked under the 2.2 series, I'm pretty sure that it's a
software configuration issue rather than a hardware problem.  As I say,
I believe that I've configured things correctly, but things still aren't
working.  I've attached my kernel configuration file below; note that
I've tried this both with and without CONFIG_IDEDISK_MULTI_MODE enabled,
with no observable difference.

I would greatly appreciate any suggestions that anyone might have.

Thanks much,

Richard



config-2.4.26rc1
Description: Binary data


Re: Help requested: DMA, Seagate ST340014A, Kernel 2.4

2004-08-15 Thread Richard Cobbe
Lo, on Sunday, August 15, James Vahn did write:

 Richard wrote:
  I'm running Debian stable, and I've been running the 2.2 kernel series
  for some time now, because I've never had very good luck getting DMA to
  work for my hard drive under the 2.4 series.
 
  The hard drive is a Seagate ST340014A, and it's plugged into the main
  IDE controller on my motherboard, a Tyan Tiger MPX S2466.  According to
  the manufacturer, this uses the AMD-760 MPX chipset.
 
 If you check lspci, I'll bet you are using a VIA interface. I have the
 same problem here. It's been suggested to me that this is a kernel bug in
 the VIA driver. The same person suggested that another IDE card stands a
 good chance of working, and he recommended HPT (HighPoint).
 
  ~$ lspci
  0000:00:01.0 PCI bridge: Advanced Micro Devices [AMD] AMD-760 [IGD4-1P]
 ...
  0000:00:04.1 IDE interface: VIA Technologies, Inc.
 VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)

Didn't know about lspci; thanks.

If I'm reading the output correctly, though, I'm not using a VIA IDE
interface:

[nanny-ogg:~/mail/.spools]$ lspci
00:00.0 Host bridge: Advanced Micro Devices [AMD]: Unknown device 700c (rev 11)
00:01.0 PCI bridge: Advanced Micro Devices [AMD]: Unknown device 700d
...
00:07.1 IDE interface: Advanced Micro Devices [AMD]: Unknown device 7441 (rev 04)
00:07.3 Bridge: Advanced Micro Devices [AMD]: Unknown device 7443 (rev 03)
00:08.0 Unknown mass storage controller: Promise Technology, Inc. 20268 (rev 01)

The computer did come with a PCI IDE controller card (the unknown mass
storage controller line above) that has another two controllers on it.
I haven't yet tried switching controllers to see if that helps at all; I
may attempt that later this afternoon.

Alternatively, should the fact that lspci keeps saying `Unknown device'
worry me?

Richard





Re: Help requested: DMA, Seagate ST340014A, Kernel 2.4

2004-08-15 Thread Richard Cobbe
Lo, on Sunday, August 15, Richard Cobbe did write:

 The computer did come with a PCI IDE controller card (the unknown mass
 storage controller line above) that has another two controllers on it.
 I haven't yet tried switching controllers to see if that helps at all; I
 may attempt that later this afternoon.

Well, that didn't work.  Moving it to the other controller should make
the hard drive visible as /dev/hde, but the kernel couldn't find the
root device, even when I specified `root=/dev/hde2' on the Lilo command
line.  The odd thing is, I've switched hard drives to this controller
before, back when I was having strange trouble with the previous drive
and trying to diagnose it.  I don't remember exactly what incantation I
had to mumble in order to get the kernel to find the partition, though,
so I can't repeat the process.

Does anyone else have any suggestions?

Richard





Using sarge with 2.2 kernel?

2004-08-13 Thread Richard Cobbe
Greetings, all.

I'm currently running stable.  I understand that sarge is likely to be
released fairly soon now.  Even if it's not, external pressures (access
to svn, gcc 3.3) will likely force me to upgrade in any case.

In light of that, could someone comment on how well sarge works with a
2.2 kernel?  I gather that, since the 2.2 kernels are still available in
sarge, it should work on some level, but I'd like to hear more details.

I'd also love to upgrade to 2.4, but I have not yet been able to figure
out how to get DMA working reliably on my hard drive under the 2.4
series.  I have a couple of additional things I want to try; if those
fail, then I'll start another thread with details on that.

Thanks,

Richard





Re: How does one get ssh to not wait?

2004-06-14 Thread Richard Cobbe
Lo, on Sunday, June 13, Dan Jacobson did write:

 How does one get ssh to not wait?
 
 ssh somewhere <<!
 touch file
 sleep 333 && rm file
 !
 echo I want control to arrive at this line without waiting 333!
 
 I tried (...), disown, etc.
 Must I resort to batch(1)?
 Perhaps one could use nohup, but then I want to clean up nohup.out...

Have you tried the -f flag to ssh?  That backgrounds the ssh process
after querying you for passwords or passphrases, which seems to be what
you want.
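A sketch of what that would look like for the original example; `somewhere` is the poster's hypothetical host, and -f tells ssh to go to the background only after authentication succeeds:

```shell
# -f backgrounds ssh once authentication is done, so the local shell
# regains control immediately while the remote commands keep running
ssh -f somewhere 'touch file; sleep 333 && rm file'
echo I want control to arrive at this line without waiting 333!
```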

Richard





Re: Can I install non debian source packages?

2004-06-02 Thread Richard Cobbe
Lo, on Tuesday, June 1, Thomas Adam did write:

  --- James Sinnamon [EMAIL PROTECTED] wrote: 
 
  I am running 'testing' and am considering changing to 'unstable'.  
  
  In any case, does this site give me any clues about how to go about 
  building non-Debian applications from source archives?  
  
  ... or do I just try to run ./configure, then make etc, as normally 
  instructed within the package documentation?
 
 Why are you compiling it? Either way, you'll need to ensure that you have:
 
 build-essential
 
 installed. Then it is simply (ha!) a case of:
 
 ./configure [--options] && make && su -c 'make install'
 
 [--options] to ./configure is optional -- it depends how rich a
 feature-set you want. make install must be run as root.
  
 But unless you have a reason not to, I suggest using .deb packages
 whenever you can.

You can mitigate some of the annoyances of installing a package manually
and recover a few of the advanced features provided by dpkg (in
particular, package removal) by using stow.  Stow itself is available as
a deb; see the included docs for more details.

Brief summary: with stow, you install each source package into a
separate directory (usually under /usr/local/stow, although that's
configurable); stow then creates symlinks to make it look as though the
program is in /usr/local/bin, /usr/local/lib, /usr/local/man, etc.
Removing a package is a breeze: use stow to get rid of the symlinks,
then delete the package's directory.
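To make that concrete, here is a rough sketch of the install/remove cycle; the package name `foo-1.0` is made up, and depending on the package you may instead need to configure with --prefix=/usr/local and redirect only the install step into the stow directory:

```shell
# Build and install a hypothetical source package into its own stow dir
./configure --prefix=/usr/local/stow/foo-1.0
make
su -c 'make install'

# Create the symlinks under /usr/local (stow's target defaults to the
# parent of the stow directory)
cd /usr/local/stow && stow foo-1.0

# Removal later: delete the symlinks, then the package's directory
cd /usr/local/stow && stow -D foo-1.0
rm -rf /usr/local/stow/foo-1.0
```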

Richard





exim configuration: smarthost and address rewriting

2004-04-23 Thread Richard Cobbe
Greetings, all.

I'm having a little difficulty with exim configuration.  Debian stable,
exim 3.35-1woody2.

This is a home system, set up to deliver all outgoing mail through a
smarthost.  I ran eximconfig, selected the smarthost option, and entered
the relevant data to get my basic exim.conf file.  Since then, I've made
the following changes:

*** exim.conf.orig  Fri Apr 23 06:56:48 2004
--- exim.conf   Fri Apr 23 07:08:00 2004
***************
*** 29,35 ****
  # domain to unqualified sender addresses, specify the recipient domain here.
  # If this option is not set, the qualify_domain value is used.
  
! # qualify_recipient =
  
  # Specify your local domains as a colon-separated list here. If this option
  # is not set (i.e. not mentioned in the configuration file), the
--- 29,35 ----
  # domain to unqualified sender addresses, specify the recipient domain here.
  # If this option is not set, the qualify_domain value is used.
  
! qualify_recipient = localhost
  
  # Specify your local domains as a colon-separated list here. If this option
  # is not set (i.e. not mentioned in the configuration file), the
***************
*** 40,46 ****
  # are no local domains; not setting it at all causes the default value (the
  # setting of qualify_recipient) to be used.
  
! local_domains = localhost:comcast.net
  
  # Allow mail addressed to our hostname, or to our IP address.
  
--- 40,46 ----
  # are no local domains; not setting it at all causes the default value (the
  # setting of qualify_recipient) to be used.
  
! local_domains = localhost:home.rcc
  
  # Allow mail addressed to our hostname, or to our IP address.
  
***************
*** 413,418 ****
--- 413,420 ----
  # don't have their own domain, but could be useful for anyone.
  # It looks up the real address of all local users in a file
  
+ [EMAIL PROTECTED][EMAIL PROTECTED] h
+ 
  [EMAIL PROTECTED]${lookup{$1}lsearch{/etc/email-addresses}\
{$value}fail} frFs
  


There's another hunk enabling authenticated connections, required by my
smarthost, but I'm omitting that here to avoid broadcasting my passwords
to all and sundry.  I don't think it's relevant to my question, in any
case.

I should point out that I have (deliberately) given my machine a bogus
hostname, nanny-ogg.home.rcc, to avoid collisions.  And my user name on
my local machine is `cobbe'.

In general, all of this works correctly.  Mail that I send from cobbe's
account is correctly routed through the smarthost to its destination,
and it is labeled as coming from the address [EMAIL PROTECTED],
which is the desired result.  Additionally, mail that is sent to
root is handled according to the local alias file, *not* routed up to
[EMAIL PROTECTED] (who, I'm sure, really doesn't want my logcheck output).

There's just one minor fly in the ointment left: mail that is sent from
[EMAIL PROTECTED] (as by logcheck, for instance) is rewritten to appear as
coming from [EMAIL PROTECTED]  I'd really like to have exim configured
to leave sender addresses of [EMAIL PROTECTED] alone, but it's not clear to
me from the Exim manual how to disable the qualify_domain rewriting for
a single local address.

Could anyone point me in the right direction, please?

Thanks very much for any help,

Richard





Where to put local PPD file for CUPS?

2003-02-23 Thread Richard Cobbe
Greetings, all.

I've just switched to CUPS from lprng.  I've been quite happy with it;
the web configuration interface is particularly nice.

Only one minor question: the best .ppd for my particular printer is not,
so far as I can tell, included in any of the cups-related packages (at
least in woody).  I was able to download the .ppd from the web, put it
into /usr/share/cups/models, and add my printer; everything works fine.

I just don't much like having a local non-package file in /usr/share.
Can cups be configured to look under /usr/local (or, I suppose, possibly
/etc) for PPDs, or am I stuck with a non-package file under /usr/share?

Richard





Can't print from woody: HPDJ 670C, gs 6.53-3, magicfilter 1.2-53

2003-02-18 Thread Richard Cobbe
Greetings, all.

I'm having serious trouble printing postscript files from my Woody
system.  I've got a HPDJ 670c, and I'm running gs 6.53-3 and magicfilter
1.2-53.  (This, of course, applies to printing anything that goes
through PostScript, like PDF.)  I'm using lprng 3.8.10-1.

The actual behavior varies somewhat: at best, I'll get one or two lines
of good output, then the printer starts making these *horrible* noises,
like it's trying to move the print head way off the end of the track and
stripping the motor in the process, then spit out the page.  It'll suck
in a new page, then either spit it out blank or spit it out with two or
three lines of gibberish on it.  Repeat until I yank the power cord
(hitting the power button doesn't help; the printer gets seriously
wedged.)

Printing ASCII works fine, as does printing PostScript & PDF from
Windows.

/etc/printcap (generated by magicfilterconfig):

lp|hpdj670c-m|HP DeskJet 670C (mono):\
:lp=/dev/lp0:sd=/var/spool/lpd/hpdj670c-m:\
:sh:pw#80:pl#66:px#1440:mx#0:\
:if=/etc/magicfilter/dj670c-filter:\
:af=/var/log/lp-acct:lf=/var/log/lp-errs:

/etc/magicfilter/dj670c-filter is as installed by the magicfilter
package.

To print, I'm using the command

dvips -Phpdj670c-m specs.dvi

The relevant dvips configuration file follows:

M hpdj
D 2602
X 2602
Y 2602
o |lpr -Phpdj670c-m
O 0in,1.54cm

I've also tried gs-aladdin, and lpr 2000.05.07-4.2.  Neither helps.

This used to work, several months ago.  I don't keep very careful
records of what I change on my system, though, so I can't say exactly
what broke it.  (I was running testing, though, when I first noticed the
problem.  I downgraded to stable a while back, in an attempt to fix
this and also some truetype font issues.  This is the first time I've
tried printing since the downgrade.)

Any suggestions would be very welcome.

Thanks,

Richard






Re: Compiler error: C compiler cannot create executables

2003-01-18 Thread Richard Cobbe
Lo, on Saturday, January 18, Eric G. Miller did write:

 On Sat, Jan 18, 2003 at 12:07:24PM +0100, Achton N. Netherclift wrote:
 [snip]
  According to packages.debian.org, the file that is missing according to
  the config.logs (crt1.o) is contained in the libc6-dev package. The file is
  missing on my system, so I attempt to install it:
 
 AFAIK crt1.o would be a temporary file created from the combination of
 conftest.c and some gcc parts when compiling the C file directly to
 an executable.  Your installation is incomplete because you don't have
 libc6-dev installed.  All that stuff about gcc not being able to produce
 an executable...  GCC needs some of the things provided by libc6-dev.

About half-right:

[nanny-ogg:~]$ locate crt1.o
/usr/lib/crt1.o
/usr/lib/gcrt1.o
/usr/lib/Mcrt1.o
[nanny-ogg:~]$ dpkg -S /usr/lib/crt1.o
libc6-dev: /usr/lib/crt1.o

crt1.o is, I think, an object file containing a big chunk of the C/C++
runtime.  In particular, it contains the startup code that does a bunch
of initialization, then calls main().  After main() returns, it does a
bunch of OS-specific cleanup, like taking main's return code and putting
in the place where the OS expects to find the process's exit code.  It's
essentially part of the compiler/library infrastructure.






Re: how to set my email address on outgoing mail?

2003-01-15 Thread Richard Cobbe
Lo, on Wednesday, January 15, Adam did write:

 On my ISP shell account I can  set my email address in Emacs when using 
 RMAIL but this doesn't work on my Debian box, is this exim's doing?

I don't think so; I'm able to configure my outgoing email address
successfully using VM, XEmacs, and Debian.  Have you set
user-mail-address?  What's showing up in the From header when you send
mail from your Debian machine?

Richard






Re: How to make Alt GNU Emacs Meta?

2003-01-11 Thread Richard Cobbe
Lo, on Saturday, January 11, Bob Proulx did write:

 Adam [EMAIL PROTECTED] [2003-01-10 22:28:14 -0800]:
  In the current stable release of debian, GNU Emacs uses the Windows key 
  as Meta instead of Alt.  I am told this is not true for other linux 
  distributions or other releases of debian.  How can I fix this?  If I 
  have to I can make an .xmodmap but I would rather not.  If there is some 
  package I can upgrade to testing or unstable I am willing to do that.
 
 I have noticed that too.  But it is related to the keyboard you chose
 when you installed.  If you said you had a pc102 then the windows key
 is not available and you get normal meta operation on alt.  If you
 said pc104 then it configures the windows key as meta.  I believe this
 had to do with preserving the functionality of the alt key for other
 purposes.
 
 So presumably you could rerun 'dpkg-reconfigure' again and change the
 keyboard.  But I don't know the correct package off of the top of my
 head.  Oh well, you can just change the configuration file by hand.
 Here is mine.

dpkg-reconfigure xserver-xfree86

And be aware that if you change a section of /etc/X11/XF86Config-4 by
hand (within the debconf-managed section), it'll get overwritten the
next time you configure the xserver-xfree86 package.  I'm not entirely
sure, but this may include package upgrades.

Another alternative: instead of messing around with your XF86Config-4
file and restarting X, you can also play around with X keymaps, by way
of xmodmap(1).  The relevant bit of my .xmodmaprc file follows.

keycode 0x73 =  NoSymbol
keycode 0x40 =  Meta_L
keycode 0x71 =  Meta_R

The first line removes the key sym associated with the left windows key,
so that hitting it effectively does nothing.  The other lines rebind the
two alt keys to act like meta.  You may also want to unbind the right
windows key; its keycode is 0x74.

Warning: this is only guaranteed to work if you chose the pc104 setting
when configuring X.  It may well have bad side effects when used with a
different keyboard type.
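To actually apply those keymap lines, feed the file to xmodmap from within a running X session (the file name ~/.xmodmaprc is just a convention):

```shell
# Apply the keymap changes above without restarting X
xmodmap ~/.xmodmaprc

# To make them permanent, run the same command from ~/.xsession
# or ~/.xinitrc so it happens at every login
```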

Richard






Re: xterm menus

2003-01-07 Thread Richard Cobbe
Lo, on Tuesday, January 7, will trillich did write:

 On Mon, Jan 06, 2003 at 06:21:07PM -0500, Richard Cobbe wrote:

SNIP

  What does `hideous' mean here, specifically?  I commented these same
  lines out back when I upgraded to woody, and now my xterm menus look
  fine---same foreground and background colors as the main terminal
  window.  I'd think you'd get the same results (unless you've set these
  resources with xrdb or something).
 
 it's highly subjective, of course, but when you have the option
 of a fancy vignette from light gray to medium gray with dark
 gray text overlay, start white on stark black seems alarmingly
 hideous by comparison. (coming from the mac platform, i'm a bit
 spoiled when it comes to presentation.)

Ah.  Well, I've not tried this, but I'm reasonably certain that you can
take the lines you commented out in /etc/X11/app-defaults/XTerm-color,
and put them in your X resource file (traditionally ~/.Xresources or
~/.Xdefaults) with settings that you like better.  I'm not sure,
however, exactly what the set of correct values for these options is,
and this sort of thing is not generally well-documented.  :-(

Still, if you want a better gradient than the default, playing around
with this should work.
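Untested, but something along these lines in ~/.Xresources ought to do it; note that user resources need the XTerm class prefix, and the gray values here are arbitrary examples, not values I've verified look good:

```
! Hypothetical darker menu gradient, overriding the app-defaults
XTerm*mainMenu*backgroundPixmap: gradient:vertical?dimension=350&start=gray40&end=gray10
XTerm*mainMenu*foreground: gray90
```

Load it with `xrdb -merge ~/.Xresources' and restart the xterm to see the effect.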

Richard






Re: SOLVED: Still have no idea of the xhost replacement

2003-01-07 Thread Richard Cobbe
Lo, on Monday, January 6, nate did write:

 Abdul Latip said:
 
  IT WORKS! Thank you very much! May I know for what is
  -nolisten tcp in xserverrc?
 
 sure, glad to help. the nolisten tcp is to prevent the X server
 from listening for connections on TCP ports.

... which is a good thing for security reasons.

 nolisten tcp breaks setups that depend upon exporting the
 display e.g. export DISPLAY=remote.server:0.0

Yes.

 SSH bypasses this by tunneling the connection over the SSH connection
 and(I think) connecting to the X server via sockets instead.

Pretty much, although `sockets' is an overly broad term.  In this case,
I believe that the ssh client uses Unix-domain sockets to communicate
with the X server on the local machine.  Unix-domain sockets are like
normal TCP/IP sockets, with a couple of exceptions:

 - Unlike TCP/IP sockets, their addresses are pathnames, so these
   sockets live in the filesystem.  Try /bin/ls -l /tmp/.X11-unix to see
   an example.

 - Unix-domain sockets allow connections only to other processes on the
   same machine.  This loss of flexibility gets you a speed benefit and
   a much simpler security situation: you don't have to worry about
   connections from arbitrary hosts on the internet.

(For those who don't know what a socket is, read `connection' instead:
it's roughly the same idea.)

Richard






Re: xterm menus

2003-01-06 Thread Richard Cobbe
Lo, on Monday, January 6, will trillich did write:

snip

From /etc/X11/app-defaults/XTerm-color:

  ! The following two sections take advantage of new features in version 7
  ! of the Athena widget library.  Comment them out if you have a shallow
  ! color depth.

... or if you use a dark background with your xterms, like I do

  *mainMenu*backgroundPixmap: 
  gradient:vertical?dimension=350&start=gray90&end=gray60
  *mainMenu*foreground:   gray15
  *vtMenu*backgroundPixmap:   
  gradient:vertical?dimension=445&start=gray90&end=gray60
  *vtMenu*foreground: gray15
  *fontMenu*backgroundPixmap: 
  gradient:vertical?dimension=220&start=gray90&end=gray60
  *fontMenu*foreground:   gray15
  *tekMenu*backgroundPixmap:  
  gradient:vertical?dimension=205&start=gray90&end=gray60
  *tekMenu*foreground:gray15
 snip
 
 aha! that's exactly the same as mine, and yet gray15 appears to
 be misinterpreted as white on my menus. i commented out that
 whole section (quoted above) and they're now hideous, but i can
 read them all.

What does `hideous' mean here, specifically?  I commented these same
lines out back when I upgraded to woody, and now my xterm menus look
fine---same foreground and background colors as the main terminal
window.  I'd think you'd get the same results (unless you've set these
resources with xrdb or something).

Richard






Re: Problem with ssh

2002-06-29 Thread Richard Cobbe
Lo, on Saturday, June 29, Sam Varghese did write:

[ wrapped to 72 cols ]

 On Fri, Jun 28, 2002 at 05:33:17PM -0500, Richard Cobbe wrote:

  Not correct.  In woody, both packages support both versions.  OpenSSH
  defaults to protocol v1; for v2, supply the `-2' switch to ssh or add
  the appropriate line to ~/.ssh/config.
 
 Right, I discovered that I was using a version of ssh-non free as client
 rather than OpenSSH. 
 
 But does OpenSSH default to version 1 or version 2? I find that I can
 log in to the two servers which I need to - one uses version 1 and the
 other version 2 - and I don't need to supply the -2 switch.

I'm not entirely sure how OpenSSH decides which protocol version to
use.  It's entirely possible that there's a hand-shaking phase in the
connection process, during which the client and the server figure out
which versions of the protocol they support, and in what order.

So, it would seem that my earlier statement about it defaulting to
version 1 may not be correct.  Mea culpa.  (Most of the machines that I
deal with are running potato, which until recently was restricted to
protocol v1.  The one woody machine I run only talks to a potato
machine, so the fact that it supports protocol v2 was, until I installed
the security fixes, irrelevant.)

Richard





Re: Problem with ssh

2002-06-28 Thread Richard Cobbe
(Snipped all the cross-posts.)

Lo, on Friday, June 28, Sam Varghese did write:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On Thu, Jun 27, 2002 at 09:25:52PM +0700, [EMAIL PROTECTED] wrote:
  Dear All,
  
  I have a problem with my ssh, when i try to connect to our server using
  ssh have an error like this :
  
  ssh -l [EMAIL PROTECTED]
  2f65 7463 2f73 7368
  Disconnecting: Bad packet length 795178083.
  
  
  What's Wrong with my server or my ssh client. And how to solve them.
 
 Your ssh client is probably using protocol version 1 and the server you are
 trying to log in to is using protocol version 2.
 
 I had the same problem. On woody the ssh client which comes with Openssh uses
 protocol version 1. I installed ssh2 and used it instead.

Not correct.  In woody, both packages support both versions.  OpenSSH
defaults to protocol v1; for v2, supply the `-2' switch to ssh or add
the appropriate line to ~/.ssh/config.
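For the per-host case, the relevant ~/.ssh/config stanza would look roughly like this; the host name is made up, and `Protocol' is the OpenSSH client option in question:

```
# Hypothetical host that only speaks protocol v1
Host legacy-server
    Protocol 1

# Everywhere else, prefer v2 but fall back to v1
Host *
    Protocol 2,1
```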

Richard





Re: ssh difference v3.3 vs. 3.4 ???

2002-06-26 Thread Richard Cobbe
Lo, on Wednesday, June 26, Colin Watson did write:

 On Wed, Jun 26, 2002 at 03:39:49PM -0400, Reid Gilman wrote:
  3.4 contains bugfixes for a few problems I don't completely understand
  but I believe that there was a bug that could allow root access. 
 
 If you're running 3.3 with privilege separation enabled (as it is by
 default), most remote root exploits become remote exploits of the sshd
 user, which is considerably less serious. 

So, I'm running ssh 3.3 as packaged for woody.  I don't have
UsePrivilegeSeparation turned off in any config files, but I still see
the following:

[nanny-ogg:~]$ ps aux | grep [s]shd 
root   268  0.0  0.2  2788  716 ?S06:19   0:00 /usr/sbin/sshd

sshd is still running as root.  Is this what I should be seeing?  I
would have thought, from the descriptions of privilege separation, that
this process would be running as `sshd'.  Or is there some other
access-control mechanism going on here?

I'm also observing this on the 3 potato machines I administer as well,
though of course they're running ssh version 3.3p1-0.0potato6.

 3.4 added fixes for the real problems rather than just bandaging over
 them.

Any word on when 3.4 will be available as a .deb?

Richard





Re: ssh difference v3.3 vs. 3.4 ???

2002-06-26 Thread Richard Cobbe
Lo, on Wednesday, June 26, Colin Watson did write:

 On Wed, Jun 26, 2002 at 05:25:16PM -0500, Richard Cobbe wrote:
  Lo, on Wednesday, June 26, Colin Watson did write:
   If you're running 3.3 with privilege separation enabled (as it is by
   default), most remote root exploits become remote exploits of the sshd
   user, which is considerably less serious. 
  
  So, I'm running ssh 3.3 as packaged for woody.  I don't have
  UserPrivilegeSeparation turned off in any config files, but I still see
  the following:
  
  [nanny-ogg:~]$ ps aux | grep [s]shd 
  root       268  0.0  0.2  2788  716 ?        S    06:19   0:00 /usr/sbin/sshd
  
  sshd is still running as root.  Is this what I should be seeing?
 
 Yes, the parent process continues to run as root. If you ssh to a box
 running 3.3 and leave the connection at the password prompt, you'll see
 a process running as the sshd user until the authentication is
 completed.

Ah.  Since I use public-key authentication almost exclusively, that
would explain why I never saw the sshd user.

Thanks,

Richard





Re: What is happening to testing/unstable?

2002-06-20 Thread Richard Cobbe
Lo, on Thursday, June 20, Colin Watson did write:

 If you care about dpkg's available file being up-to-date, you need to
 run 'dselect update', which runs 'apt-get update' for you. You don't
 need to run 'apt-get update' as well.

Pardon the somewhat elementary question, but what is dpkg's available
file used for, and why would I need it to be up to date?

Richard





Re: open ports question

2002-06-06 Thread Richard Cobbe
Lo, on Wednesday, June 5, Paul Johnson did write:

 
 On Wed, Jun 05, 2002 at 02:32:00PM -0400, tvn1981 wrote:
 
  9/tcp   open  discard
 
 Not sure myself...

Standard TCP service; routes everything written to that port to the bit
bucket.  I'm not aware of any security risks here.

  13/tcp  open  daytime
  37/tcp  open  time
 
 ntp daemon, you can safely disable these in inetd.conf

No, it's not the ntp daemon; that listens on 123/tcp (see
/etc/services).

The daytime service responds to connections simply by writing the
current time, in human-readable form, to the connection and closing.  I
think time does the same, but in machine-readable format:

[nanny-ogg:~]$ telnet localhost time
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
ÀªH¤Connection closed by foreign host.
[nanny-ogg:~]$ telnet localhost daytime
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Thu Jun  6 15:46:32 2002
Connection closed by foreign host.

Far as I know, you can safely disable these (I'm not running inetd at
all on either of my two machines, and nobody's complained at me yet).
As with discard, though, I don't know if they're a security risk.
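The time reply above isn't line noise, by the way: per RFC 868 it's a 32-bit big-endian count of seconds since 1900-01-01.  A quick sketch (assumes bash arithmetic and GNU date) recovers the timestamp from those four bytes:

```shell
# The bytes telnet showed as "ÀªH¤" are 0xc0 0xaa 0x48 0xa4;
# read them as one big-endian integer and offset from 1900.
secs=$((16#c0aa48a4))                          # 3232385188
TZ=UTC date -d "1900-01-01 $secs seconds" +"%F %T"
```

That prints the same moment the daytime service showed, just shifted to UTC.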

  113/tcp open  auth
 
 identd.  Keep if you *ever* connect to IRC; most networks will drop you
 if it can't get an ident response.

Does this service have any uses besides IRC?

Richard





Re: in case you missed this from ponik

2002-06-05 Thread Richard Cobbe
Lo, on Wednesday, June 5, Jeronimo Pellegrini did write:

   To me, the best solution to this would be to customize the tagline on
   each outgoing message, so that it would read something like you are
   subscribed as [EMAIL PROTECTED], to remove send a message _from that
   address_ to [EMAIL PROTECTED] with the magic word.  That way, the
   clueless would have a fighting chance at getting off the list.
   If they are still incapable, perhaps they will include the tagline in
   their quoted reply so that others can take the appropriate action.
 
 Agreed. Absolutely a good thing! Wishlist bug against lists.debian.org?
 
   I don't know how hard or easy this would be to implement, but it sounds
   nontrivial.  I suppose there are some privacy / archival issues, such as
   the desire to scrub mailing list archives of email addresses to foil
   spambots.
 
 But since the e-mail would be set to a different string for every copy
 that is sent, it would make more sense if the software sent it to the archives
 before inserting an address.

...except for the archives at http://www.mail-archive.com/.

Richard





Question about mail alias for root

2002-06-02 Thread Richard Cobbe
Greetings, all.

eximconfig, when asking where to route mail for root and postmaster,
prints the following warning:

Note that postmaster-mail should usually be read on the system it is
directed to, rather than being forwarded elsewhere, so (at least one
of) the users you choose should not redirect their mail off this
machine.

Why is this considered a good idea?  For convenience, I'd like to
configure exim on my home firewall to route root's mail to my user
account on my normal desktop machine (behind the firewall).  Before I do
this, though, I'd like to know if there are any security issues or other
things I should be aware of.

Why does exim consider this a bad idea?

Thanks,

Richard





Upgrading bugzilla to woody?

2002-05-28 Thread Richard Cobbe
Greetings, all.

I've got a server that's currently running Debian/stable.  I've also
installed bugzilla from source in /usr/local.

When woody is released, I'd like to move to the version of bugzilla
that's in woody.  However, I'd really like to keep my existing bug
database intact during the upgrade.

Has anybody been through this process before?  Is there anything special
that I need to do to preserve the existing bugs?  Are there any other
gotchas to worry about?  Any info would be appreciated.

Thanks kindly,

Richard





Re: Acroread segfault

2002-05-25 Thread Richard Cobbe
Lo, on Thursday, May 23, Michael Jinks did write:

 On Wed, May 22, 2002 at 06:21:27PM -0500, Richard Cobbe wrote:
  
  I can't tell from the parts of your X log that you included, but what
  color depth are you running on these machines?  In my experience,
  Acroread really doesn't like 24bit color at all; it's much happier in
  either 16bit or 32bit.
 
 Oho!  24-bit it is, which BTW also plays nasty for Netscape.

Exactly my experience.  Once I switched to 16- or 32-bit color, both
apps started behaving nicely again.  (That said, netscape under 24-bit
color works fine; it just looks kinda funny.)

I gather that this is a general trend: 24-bit color modes were added
kinda late in the game, so not all applications handle them very nicely.
IMO, a well-designed and well-implemented app should work regardless of
the color depth (except perhaps for a minimum), but I've never done X
development, so what do I know?

 (Yeah they still want to use that too. :( ) So we were going to have
 to try other color depths for these machines anyway, if it doesn't
 work I'll post a link to an strace.

I can't speak for other folks on the list, but I know so little about
acroread, Netscape, or X that an strace won't do me much good.

My suspicion is that the problem stems from an interaction between
acroread and the X server.  Since these aren't system calls, I don't
think these will necessarily show up on an strace dump, or at least not
in a meaningful fashion.  (Depending on the situation, they may involve
several system calls, as for networking, but I don't think those will
help you much.)

Richard





Re: xemacs problems

2002-05-25 Thread Richard Cobbe
Lo, on Thursday, May 23, Glen Lee Edwards did write:

 I installed xemacs 21 from Woody.  I run fvwm as my window manager.  I'm
 using the same configuration files as I did with Red Hat for both.  Xemacs
 is ignoring the font information I have in my configuration files, and when
 it loads in fvwm, instead of staying on desktop 0,0, it spreads out well
 beyond desktop 0,0 and into the other 3.  Since I haven't had this trouble
 with xemacs at any other time, I thought I'd ask here in case Debian has
 done some tweaking to it.

How are you setting the fonts?  .emacs config options, or X resource
entries?  If it's not too long, could you post the settings?

I had some unusual font behavior when I recently upgraded to woody, but
it's more related to how XEmacs generates the default italic font: since
my normal font doesn't have an italic variant, XEmacs goes down two
sizes to find one that does and scales up---looks really bad.

Richard





Re: Emacs and shell variables

2002-05-22 Thread Richard Cobbe
Lo, on Wednesday, May 22, Tom Cook did write:

 On  0, Stefan Bellon [EMAIL PROTECTED] wrote:
  Felix Natter wrote:
   Stefan Bellon [EMAIL PROTECTED] writes:
  
When using Emacs to start a compilation (e.g. with C-c C-c from C++
mode) you get make -k as default. The problem I'm experiencing
is, that I need some shell variables set in the Makefile. I've set
them in my ~/.bashrc and it works fine if I start Emacs from an
xterm which has this variable set in its shell.

But when I start Emacs with a function key, then the ~/.bashrc
obviously isn't executed, the shell variable isn't set and the make
process fails.

So, how do I tell Emacs always to execute ~/.bashrc in order to get
at my shell variables?
  
   You can put your variables both in ~/.bashrc and ~/.bash_profile.
   Or you can use (setenv TEST foo).
  
  It looks like I didn't make my problem clear enough.
  
  The variables *are* already in ~/.bashrc (and they're exported there).
  But Emacs only knows about them if I start Emacs from a bash. If I
  however use a function key I have defined with fvwm, then Emacs doesn't
  start with the shell as parent and therefore hasn't the variables set.
  And I'd like Emacs to have those variables set even then.

I would think that you'd want the environment variables to be visible to
your entire X session, including FVWM and all of the programs it starts.

How are you starting X?  Are you running one of the display managers
like xdm, or do you run startx?

Your .bashrc is only read for interactive shells, not scripts or
anything else.  Does moving the environment settings into .bash_login or
.bash_profile work?  See bash's manpage for details about when it reads
each of the various configuration files.
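A sketch of that split, under a throwaway $HOME (paths and the EDITOR value are just examples): the exports live in ~/.bash_profile, which login shells read, and it sources ~/.bashrc for the interactive-only bits:

```shell
mkdir -p /tmp/bashdemo
cat > /tmp/bashdemo/.bash_profile <<'EOF'
export EDITOR=emacs                  # visible to the whole session
[ -f ~/.bashrc ] && . ~/.bashrc      # keep interactive settings too
EOF
cat > /tmp/bashdemo/.bashrc <<'EOF'
alias ll='ls -l'                     # interactive-only; no exports here
EOF
# A login shell now sees the variable no matter how it was started:
HOME=/tmp/bashdemo bash --login -c 'echo $EDITOR'
```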

 So put (setenv variable value) in your .emacs

Also a possibility, but a bit of a kluge.

HTH,

Richard





Re: Emacs and shell variables

2002-05-22 Thread Richard Cobbe
Lo, on Wednesday, May 22, Stefan Bellon did write:

 Dave Carrigan wrote:
  Stefan Bellon [EMAIL PROTECTED] writes:
 
 [snip]
 
   gdm, but without the rest of GNOME, only gdm.
 
  gdm won't evaluate your .bashrc to set the environment variables. The
  idiomatic solution is to create a ~/.environment file where you set all
  of your environment variables, then each of your other .rc files
  (.bashrc, .gnomerc, .xsession, etc.) source that file.
 
 I know very little of the login process. Does gdm evaluate the
 ~/.environment file? Or the ~/.xsession file? If neither, then the
 above solution doesn't give me any advantage. If it does evaluate one,
 then yes, this is clearly the way to go.

I *think*, although I'm not entirely certain, that a gdm login will end
up reading .xsession.  It's been a long time since I've used gdm, so I'm
a little unclear on the details.

However, I don't remember whether it ever creates a login shell for
you.  If it doesn't, you might try putting the following on the first
line of .xsession:

#!/bin/bash --login

This will force it to be a login shell, which should source the
appropriate bash dotfiles.

In my case, I log in on the console and run startx from within a login
shell, so it pretty much just works.  (If you're not too attached to
graphical login managers, you may want to try this tactic; I've always
found it a lot easier to understand and configure.)

Richard





Re: Acroread segfault

2002-05-22 Thread Richard Cobbe
Lo, on Wednesday, May 22, Michael Jinks did write:

 Hi, all.  I'm mostly looking for tips on where to seek more info, since
 at this point I'm completely puzzled and lack the background to really
 dig into this issue.
 
 I've just set up the first of a batch of new Dell Optiplex GX240's,
 running Woody.  These are going to be desktops for researchers, so
 reading PDF files is extremely important.  Unfortunately, Acroread
 segfaults when it tries to display to the screen.  It works fine when
 the display is exported elsewhere or to a VNC session; but it segfaults
 when I run the program on my own desktop and export the display to the
 new Dell.  From all of that I surmise that the problem arises from an
 interaction between Acroread and the X driver which is actually painting
 the display on these machines.

I can't tell from the parts of your X log that you included, but what
color depth are you running on these machines?  In my experience,
Acroread really doesn't like 24bit color at all; it's much happier in
either 16bit or 32bit.

Richard





Re: Serious Bug in most major Linux distros.

2002-05-21 Thread Richard Cobbe
Lo, on Tuesday, May 21, Sean 'Shaleh' Perry did write:

Where's the attribution?  Who was the OP?

  Why the sam hell is there not, by default, no questions asked, it's
  installed because it's *right*, a statically linked /sbin/sh as
  root's default shell?

 because the days of static bins are long passed.  

In most cases, yes.  However, the OP has a point.  Consider:

[vimes:~]$ ldd /bin/bash
        libncurses.so.5 => /lib/libncurses.so.5 (0x40016000)
        libdl.so.2 => /lib/libdl.so.2 (0x40055000)
        libc.so.6 => /lib/libc.so.6 (0x40059000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

On my potato box, /lib/libc.so.6 is a symlink to libc-2.1.3.so.  If I've
fscked up that symlink, bash won't load.  Certain binaries, particularly
/bin/sh and /bin/ln, need to be statically linked to allow root to
recover from problems like this.  If you don't want to statically link
/bin/ln, then make sure that /sbin/ldconfig is statically linked.

(On my system, at least, ldconfig is statically linked.  Still doesn't
help much if I can't get to it because my shell won't load.)

 if *you* want this, Debian makes it even easier.  apt-get install
 sash.  not only is it statically linked, it also includes enough stuff
 to help you save a system.
 
 Debian is very strongly against making any decision for you we do not
 have to make.  And almost all of our decisions can be overruled.

True, but I really can't see any harm in making root's shell a
statically-linked binary, myself.  After all, how many root shells do
you expect to have running at one time?

Richard





Re: Blocking 'unsubscribe'

2002-05-19 Thread Richard Cobbe
Lo, on Sunday, May 19, Michael C Alonzo did write:

 Osamu Aoki wrote this message last Sun, May 19, 2002 at 06:17:29AM -0700:
  On Sun, May 19, 2002 at 08:56:32PM +0800, Michael C Alonzo wrote:

  # blackhole for autoresponders
  :0
  * 1^0 ^From:[EMAIL PROTECTED]
  * 1^0 ^Subject: Automated reply from
  * 1^0 ^Subject: subscribe
  * 1^0 ^Subject: unsubscribe
  /dev/null
 
 what does 1^0 mean?

Procmail's weighted scoring rules.  See procmailsc(1) for details.

I'm not entirely sure why it's useful in this case, though.

Richard





Re: wrapping [was: Re: disable paragraph flows in mozilla?]

2002-05-17 Thread Richard Cobbe
Lo, on Saturday, May 18, Hans Ekbrand did write:

 On Fri, May 17, 2002 at 03:40:47PM -0700, Vineet Kumar wrote:

  The reason most people suggest 72 is that traditionally, terminals
  are 80 characters wide, and 72 leaves enough room to be quoted with
  '> ' four times.

That's one of the reasons I like VM and Gnus.  They run in (X)Emacs, and
fill-paragraph-or-region (M-q) is almost always smart enough to get the
quoting brackets right when it refills a paragraph.

 Although I actually have a terminal (can't say I use it much though),
 I sometimes wonder if email conventions should be derived from
 limitations of such ancient hardware. In some sense, it's a good
 practice to require as little as possible from the clients, but is
 80x25 a limit that anyone is facing anymore?

Yes.  My primary computer is in the shop, so I'm reduced to reading mail
on my firewall.  As it's a firewall with limited disk space and so
forth, I don't have X installed.  Thus, 80x25.

Plus, if I'm in a hurry, or over a slow network connection, I like to be
able to read my mail with /usr/bin/less.  The preponderance of
quoted-printable and base-64 and HTML, never mind long lines, makes this
difficult---IMO, for no real gain.  (Binary attachments are another
story, obviously.)

 So, a better argument for wrapping lines at 72 chars would perhaps be
 that it make the text easier to read (even if you have real screen
 estate that could handle a lot more).

True; it's long been understood in the professional typesetting
community that lines which are too long are difficult to read.  I've
even seen discussions of what `too long' means---I think it's a function
of how long the font's em-space is, but I don't remember the details off
the top of my head.

(Add this to the fact that most on-screen computer fonts, IMO, don't
have enough leading, and you've got serious legibility problems.)

Richard





Re: Line wrapping with mutt/emacs (was Re: Problems: mutt to Outlook (Express))

2002-05-16 Thread Richard Cobbe
Lo, on Thursday, May 16, Cam Ellison did write:

 * Gary Hennigan ([EMAIL PROTECTED]) wrote:
 snip
  
  And, *please*, for the love of God and country, can you wrap your
  lines at 70 characters or so?!
  
 
 I would love to, but every attempt seems to go nowhere.  Not everyone
 complains, but I see the little red markers coming back indicating that
 mutt is wrapping them for me.
 
 So, here's another problem.  I have this in .muttrc:
 
 set editor="emacs '+/^$' \"set textwidth=70\""
 
 
 (Let's not have a flamewar about emacs vs vi, please.)  The
 information I have to date is that this should work.  It obviously
 doesn't.
 
 What should I do differently?

I'm not entirely sure what the '+/^$' means---is this perhaps a Mutt
thing?

The `set textwidth=70' is almost certainly incorrect.  In Emacs, the
relevant variable is fill-column, which is 70 by default.  However, this
only applies if you manually fill each paragraph (M-q) or turn on
auto-fill-mode.  To do the latter, add the following to your ~/.emacs
file:

(add-hook 'FOO-mode-hook 'turn-on-auto-fill)

where FOO is the major mode in which you edit your mail messages.  And,
if you're using XEmacs,

(require 'filladapt)
(add-hook 'FOO-mode-hook 'turn-on-filladapt-mode)

So, for instance, I've got

(require 'filladapt)
(add-hook 'mail-mode-hook 'turn-on-auto-fill)
(add-hook 'mail-mode-hook 'turn-on-filladapt-mode)

You shouldn't need to do this, but while we're at it, to change a
variable's contents, use something like the following:

(setq fill-column 72)

HTH,

Richard





Re: Netscape Browser Masquerade

2002-05-14 Thread Richard Cobbe
Lo, on Tuesday, May 14, Arthur H. Johnson II did write:

 
 My bank's online account program only works with netscape classic, but
 recently they decided to only allow Red Hat 6.1 machines to access the
 program because they can support that version of Linux.  In other words,
 instead of supporting the browser, they support the OS they read from the
 broswer string.  Stupid, I know, but is there a way to masquerade my
 Netscape Classic browser to look like its coming from a Red Hat 6.1
 machine or better yet a Win95 machine?

If all else fails, you may want to look into using junkbuster as a
proxy.  Among other things, you can configure the User-Agent header sent
to the remote webserver.

Richard





Re: Red Hat user shopping around

2002-05-08 Thread Richard Cobbe
Lo, on Wednesday, May 8, Glen Lee Edwards did write:

 Hi,
 
 I've been a loyal Red Hat user for the last 4 or 5 years.  Their recent
 distributions will no longer install on all my computers because they now
 require more than 16 Meg RAM.  I have a few questions:

SNIP

 How well does FVWM run on Debian systems?  I currently build FVWM rpms
 for Red Hat.

Quite well; it's my standard WM.  It's available as a normal Debian
package, as well, so you don't need to build it yourself.

Richard





Re: Two users writing to the same file at the same time.

2002-05-05 Thread Richard Cobbe
Lo, on Saturday, May 4, AE Roy did write:

 I've set up my system with 15 computers and 60 users so that they have a
 directory where they all can share files, under /home/staff, I have
 them belonging to the group teacher who is the owner of /home/staff, and
 the GUID is set on /home/staff.

 And I have a problem: if two teachers decide to work on the same file at
 the same time, then all changes made by the first to exit will be lost,
 without him noticing.

Yup.  Standard race condition.

 I know CVS, but thats not an option. People I've talked to that know MS
 say that in MS under the same situation, you'd get a warning when someone
 already had that file open, does anything similar exist for linux?
 They all use OpenOffice.org to write these files.

First, why is CVS not an option?  Is it because you're working with
binary files?

Second: AFAIK, no, nothing similar to MS's behavior (``another program
already has this file open'') exists for Linux, unless you implement it
yourself.  It's a fundamental difference in the semantics of the
filesystem interface.  The Unix/Linux answer is to provide a separate
synchronization mechanism to prevent the race condition from occurring.
It's up to the application.  Most version-control systems like CVS, RCS,
et al do this.  If OpenOffice doesn't provide this functionality
already, using some sort of lockfile as another poster suggested is
the only other alternative I can think of.
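As a sketch of that lockfile approach (file name invented; flock(1) from util-linux assumed), advisory locking lets the second writer find out instead of silently losing the race -- with the caveat that both programs must actually ask for the lock:

```shell
lock=/tmp/shared-doc.lock
exec 9>"$lock"                       # keep an fd open on the lockfile
if flock -n 9; then first=ok; fi     # first writer takes the lock
# A second, independent open of the same file is now refused:
if flock -n "$lock" -c true; then second=ok; else second=busy; fi
echo "first=$first second=$second"   # the collision is detected
```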

HTH,

Richard





Re: Getting mozilla going...

2002-05-02 Thread Richard Cobbe
Lo, on Thursday, May 2, Mike Fontenot did write:

 
 Actually, the more I look at the advice to patch up
 mozilla so that it can handle java applets, the less
 I understand the advice:
 
  unzip jre.xpi -d $MOZILLA_FIVE_HOME/plugins 'jre-image-i386/*'
  ln -s jre-image-i386/plugin/i386/ns600/libjavaplugin_oji.so .
 
 I don't have unzip on my potato system...just gunzip, and
 if I do a gunzip -c jre.xpi > out_file_name, it says that

Different program; it's in either the `unzip' or the `unzip-crypt'
package; take your pick.  (See apt-cache search for more info.)

 Also, I don't understand his link: ln -s f1 f2 establishes a
 soft link from the new name f2 to the existing ordinary file f1,
 which doesn't make any sense to me with the syntax he gave.

`ln -s foo/bar/baz quux', where quux is a directory (like `.'), is
equivalent to saying `ln -s foo/bar/baz quux/baz'.  Same as with most of
the fileutils, actually.  This is documented on the ln man page,
although it's pretty terse.
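A throwaway demo of that equivalence (scratch paths invented; the link target needn't even exist):

```shell
rm -rf /tmp/lndemo && mkdir -p /tmp/lndemo/quux
cd /tmp/lndemo
ln -s foo/bar/baz quux        # quux is a directory, so the link
readlink quux/baz             # lands inside it under the name "baz"
```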

HTH,

Richard





xdvi won't display small caps from postscript fonts

2002-04-28 Thread Richard Cobbe
Evening, all.

I just noticed some odd behavior from the xdvi in woody; it doesn't seem
to display small caps from postscript fonts correctly.  The following
LaTeX file demonstrates the problem nicely:

%
\documentclass{article}

\usepackage{palatino}

\begin{document}

THIS IS ALL CAPS

\textsc{this is all caps}\qquad (but it was supposed to be small caps!)

\end{document}


If I run this, latex produces a dvi with no errors, and xdvi displays it
correctly, with one exception: the second line appears in full-size
capital letters, but spaced as though it were in small caps.  (In other
words, the letters are squished much more closely together than they
would be normally.)  I get this behavior for palatino, times, and
utopia.  If I comment out the usepackage line and use the Computer
Modern fonts, though, I get the small caps that I expect.

Also, I get the expected results with postscript fonts and dvips/gv or
other dvi viewers, like dvilx.  Looks to me like the problem is with
xdvi, rather than latex or my font installations.

LaTeX's font mechanism continues to elude me, and a Google search hasn't
really helped here.  Is this a bug, or a misconfiguration on my part?
(I haven't changed any of the tex configuration files, though I have
added some additional packages under /usr/local/share/texmf.
Temporarily removing them made no difference, though.)

Any suggestions would be welcome,

Richard

(Oh -- package versions:

[nanny-ogg:~/rec/books]$ COLUMNS=100 dpkg -l | grep ^ii\ \*tetex
ii  tetex-base   1.0.2+20011202-2   basic teTeX library files
ii  tetex-bin    1.0.7+20011202-6   teTeX binary files
ii  tetex-doc    1.0.2+20011202-2   teTeX documentation
ii  tetex-extra  1.0.2+20011202-2   extra teTeX library files
)





Re: Bug in xfree86-xserver setup?

2002-04-21 Thread Richard Cobbe
Lo, on Sunday, April 21, Andy Saxena did write:

 On Sun, Apr 21, 2002 at 01:46:31AM -0500, Dimitri Maziuk wrote:
  * Sridhar M.A. ([EMAIL PROTECTED]) spake thusly:
   Hi,
   
   Just today when apt-get upgrade installed the new version of
   xfree86-xserver in testing (4.1.0-16), I could not start X. It was
   looking for Nvidia chipsets. This appeared strange and on checking I
   found that the installation script retained almost every entry in my
   XF86Config-4 file and strangely changed the driver entry from mga to nv.
   It also removed one of the font paths. Should it not retain the old
   entries? It was funny looking at the config file: Card identification is
   Matrox G200, but the driver is nv :-)
  
  If anyone out there still thinks dexconf was a good idea,
  they ought to have their head examined.

Fuel for the fire?  Probably.  What the heck; here goes anyway.

Granted, I've only just recently upgraded from potato to woody, so I
haven't been using dexconf all that long.  And, I've been reading d-u for
a while, so I've had the opportunity to go to school, so to speak, on
the folks who met dexconf just after it was released.

However, I've really not had any major problems with it.  Based on the
instructions in the file, I moved my Fonts section outside the DEBCONF
region so I could add some additional truetypes, and everything's been
fine.  I even upgraded the XServer package yesterday; no breakage at
all.  Based on some of the things that posters here have said, I did
back up my XF86Config-4 first, just in case.  According to diff, the
upgrade changed *two* things in the file:

1) It added some comments explaining the use of dpkg-reconfigure and a
   reference to the FAQ.

2) It changed my mouse device from /dev/psaux to /dev/misc/psaux.  Since
   I'd previously upgraded from 2.2 to 2.4 and devfs, this was exactly
   the Right Thing to do.  (Course, I've got the old compatibility
   symlink, so it wasn't strictly necessary, but that's beside the
   point.)

From what I can see, dexconf is a step in the right direction (but
please read on before you start screaming about how it's screwed up your
XF86Config file).  It provides a way of setting X config options that
survives package upgrades, and it *is* optional -- you can remove your
XF86Config-4 file from dexconf's control if you so choose.  I don't
really know how it compares to other X configuration utilities, though:
before I upgraded to X 4.1, I used the same old XF86Config file I'd been
carrying around for 6 years, so I haven't had to reconfigure X from
scratch in a while.

However, while the design and intent are valid, the implementation
appears to leave some things to be desired.  First, while you can remove
XF86Config-4 from dexconf's control, it appears that this was not as
clear as it could have been early on.  Second, based on comments
elsewhere in this thread and on the list in general, there appear to be
some bugs in dexconf that get tickled during package upgrades.  While
these may mean that the program was released before it was ready for
prime time, they do not necessarily mean that the entire program was a
bad idea and should be discarded.

Intent != Implementation
Design != Implementation
(Oh, and Intent != Design as well, but that's not really my point.)

Generally, on the programs I write for a living, flaws in implementation
are best handled by submitting bug reports.  This even works well for
many design flaws, too.  IMO, complaints about the entire program being
a bad thing are only valid if its intent is wrong or if the design flaws
are irreparable.

Richard





Re: upgrade to ext3 ?

2002-04-14 Thread Richard Cobbe
Lo, on Sunday, April 14, Lars Jensen did write:

 Robin,
 
 How do I  place the module on the initrd?

On woody, at least, make sure it's listed in /etc/mkinitrd/modules.
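(The file is just one module name per line, e.g.:

```
# /etc/mkinitrd/modules
ext3
```

and mkinitrd picks it up the next time the initrd is rebuilt.)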

Richard





Re: User's perspective on upgrading to kernel 2.4?

2002-04-12 Thread Richard Cobbe
Wow!  Lots of responses.  Thanks to all for your comments.  Rather than
write about 15 separate replys, I'm going to address most of the points
in this one mail.

Greg Madden [EMAIL PROTECTED] wrote:

 Two programs that from a user perspective cause workarounds are
 xcdroast/cdrecord/mkisofs  vmware. Each one is compiled for one kernel
 version. Switching causes them to not work until recompiled or new
 modules generated (vmware) If you are using Woody the 2.4 series works
 well, smp support is better.

VMWare isn't an issue; cdrecord and mkisofs are.  I just installed them
a couple of days ago on a 2.2 kernel, so I've already rebuilt them for
2.2.  I guess I can just swap those packages in and out, but that's kind
of a hassle.  Still, once I install 2.4, I only really intend to go back
to 2.2 in an emergency.

When you say SMP support is better: are you referring to the fact that
2.4 scales to a higher number of processors, or will I actually
experience a performance boost on my dual Athlon?

dman [EMAIL PROTECTED] wrote:

 If you use the packages provided by Herbert Xu, you'll find that apm
 and the cd driver are modules now.  Add them to /etc/modules to
 restore their functionality.  The Deps will ensure your modutils is
 new enough to handle it.  The other difference is using an 'initrd' in
 the boot sequence.  I don't know how to configure lilo for it, but
 grub is trivial.

Thanks for the warning about the CD driver; that would have been a
surprise.
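For anyone following along, /etc/modules is just a list of module names to load at boot, one per line; a sketch (the exact module names depend on the kernel build, so treat these two as assumptions):

```
# /etc/modules — kernel modules to load at boot time, one per line
apm
ide-cd
```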

Does APM work for SMP systems on 2.4?  You can enable it under 2.2; it
just doesn't do anything.

I used to do initrds under lilo when my root partition was on a SCSI
disk, so I can handle that part with no problems.  (I've also been
meaning to investigate grub for a while, but that's a separate issue.)

 For your firewall, load the 'ipchains' modules (or whatever it is
 called) to continue to use ipchains.  You can't mix ipchains and
 iptables, though.

I have a separate firewall machine running potato, so I can play with
iptables until I get used to it.  (In fact, that's one of the main
reasons I have a separate firewall machine.)

Paul 'Baloo' Johnson [EMAIL PROTECTED] wrote:

 I would highly recommend ext3 at this point, just make sure your
 e2fstools are the current version (in woody?  sid?), and once you've
 recompiled, tune2fs -j /dev/(your partition goes here).  Edit fstab so
 ext2 is now ext3, and voila!  You're using a journaled, fast
 filesystem.

I think I'll keep my filesystems at ext2 for now, since I may need to
reboot under 2.2 in a pinch.  When I'm reasonably satisfied that 2.4 is
OK, I'll probably move up to ext3.
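For the record, Paul's recipe boils down to two steps; a sketch, with /dev/hda2 standing in for the real partition:

```
# Add a journal to an existing ext2 filesystem (data is preserved):
tune2fs -j /dev/hda2

# Then change that partition's /etc/fstab entry from ext2 to ext3, e.g.:
#   /dev/hda2  /  ext3  defaults,errors=remount-ro  0  1
```

Since ext3 is backward-compatible, the partition can still be mounted as ext2 from a 2.2 kernel in a pinch.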

In another message, Paul also said:

 Moderately faster (YMMV) virtual memory system (2.4.10), better
 hardware support, and compile times stretching into the weeks on a
 386.  8:o)

Well, I have a dual Athlon 1900+; do you think compile times will be a
problem?  :-)  (make -j is your friend.)

Paul Smith [EMAIL PROTECTED] wrote, in response to my
question about /dev changing:

 That is only if you enable the dev filesystem, which is still beta,
 not enabled by default, and probably not recommended.

This is probably the biggest issue; I've gotten several conflicting
recommendations about devfs/devfsd.  What are the advantages of going to
the new devfs/devfsd?  I haven't really paid much attention to the buzz
about this issue, although I gather it generated some controversy early
in the 2.4 series.  FWIW, the hardware on this machine doesn't change
all that often; USB currently isn't an issue, as I don't have any USB
devices.

So: stick with the old dev, or use the devfs with devfsd running in
compatibility mode?  

Jeffrey W. Baker [EMAIL PROTECTED] said that in his experience, devfsd
in compatibility mode didn't create the necessary devices for SCSI hard
drives.  I've got one of those, although nothing critical is on it at
this point.  What would I need to do in order to get these devices back?

Thanks again,

Richard





Re: OT: vm vs mutt (was Re: Someone tell me the secret of mutt)

2002-04-12 Thread Richard Cobbe
Lo, on Friday, April 12, Craig Duncan did write:

 I just recently dumped Netscape mail in favor of vm (in emacs).  I'm a
 _long_ time emacs user and although i've heard a lot of good things
 about mutt, when i was trying to figure out what to replace Netscape
 mail with, i decided upon vm because i'd also heard good things about
 it and i could see a lot of benefits from reading my mail from within
 emacs where i have available all the editing capabilities that i've
 spent so many years mastering.

Same reason I switched -- 6 years ago.  (See X-Mailer header.)

 So i've been using vm for a few weeks and . . . it's not bad.  The
 benefits because of the vm/emacs connection are definitely valid.  But
 i'd probably give vm itself only a B.  I'm sure there's a lot more i
 can do in terms of customizing it (i _hate_ the fact that i can't
 delete a message until it's been opened ... which when its an html
 message is _way_ too slow, because it insists on rendering the html
 before it will make that message the current message so that it can
 then be deleted).

You can disable HTML rendering by default, and farm it out to a real web
browser (w3 doesn't work very well under my setup; it gets the colors
all wrong and illegible).  Stick the following in .vm:

(add-to-list 'vm-mime-internal-content-type-exceptions "text/html")
(add-to-list 'vm-mime-external-content-types-alist
   '("text/html" "netscape -remote 'openFILE(%f)' || netscape %f"))

Replace the last string with whatever browser invocation you like.  (I
really ought to get around to replacing mine with a call to galeon.)

With the above, you'll get a `button' for HTML content or attachments;
when you middle-click on it, it'll fire up your browser.  One annoying
thing, at least with VM 6: if the click actually starts the browser as
opposed to using an existing instance, when you move off that message,
it kills netscape.  :-(

 One last thing.  With vm, yesterday, it progressively became unable to
 get mail from my ISP's pop server (unknown name or service error).  It
 failed sporadically, then finally ceased to be able to retrieve mail
 at all (error occurring every attempt).  I even called up my ISP but
 what fixed it was _restarting_ emacs.  A!

Odd.  I use fetchmail/procmail to do some fairly complicated filtering;
VM gets mail out of a local spool.  Perhaps you'll have better luck with
that.  (If you're interested, I'll send you my .vm in a private
message.)

 
 





User's perspective on upgrading to kernel 2.4?

2002-04-11 Thread Richard Cobbe
Greetings, all.

I'm happily running woody with kernel 2.2.20, and I'm thinking of
upgrading to kernel 2.4.18.  Before I do this, though, I'd like to know
more about this process -- it's been a long time since I upgraded across
minor version numbers.

First question: what are the major differences between 2.2 and 2.4 from
a *user's* perspective?  I know about the improved scheduler and
wake-one and USB support and things like that, but that's not what I'm
asking.  For now, I'm interested primarily in what differences I will
see as I use my system; I'll worry about the system-level programming
interfaces later.

Second question: is it possible to have a 2.2 kernel and a 2.4 kernel
installed on the same system and switch between the two by means of
LILO?  It's a little unclear, but I get the impression that the /dev
hierarchy, in particular, is very different between the two, which
suggests that switching back and forth is not very easy.

Any and all comments welcome,

Richard





Re: Galeon not to be in Woody?

2002-04-10 Thread Richard Cobbe
Lo, on Wednesday, April 10, The Doctor What did write:

 I saw Anthony Towns's message about Galeon not being in Woody.
 
 Is this true even for galeon 1.0?

Not sure about this.  However, it's pretty straightforward to install
Galeon from Sid on top of a Woody system; I've been doing that here for
about 2 weeks now, and it's worked out just fine.

Richard





Re: Unix(LF) files to MSDOS(CRLF) and vice versa

2002-04-09 Thread Richard Cobbe
Lo, on Tuesday, April 9, Daniel Toffetti did write:

   How can I use rpl (or any other suitable command) to transform the
   \n character between Unix and Msdos formats ?? rpl seems to be
   the right tool, but I can't figure out how to specify that strings.
 
  In the sysutils package, there are utilities to do this. Look at the
  manpages for dos2unix and unix2dos.
 
 #%[EMAIL PROTECTED](*) !!!  I've been looking for unix2dos, but failed to
 find it in dselect and apt-cache search.

On woody:

[nanny-ogg:~]$ apt-file search unix2dos
sysutils
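And if installing sysutils isn't an option for some reason, a rough stand-in for both tools with GNU sed (just a sketch, not a replacement for the real utilities):

```shell
# unix -> dos: append a carriage return to the end of each line
printf 'hello\nworld\n' | sed 's/$/\r/' > /tmp/crlf.txt

# dos -> unix: strip the trailing carriage return
printf 'hello\r\nworld\r\n' | sed 's/\r$//' > /tmp/lf.txt
```

(The \r escape is a GNU sed extension; strictly POSIX seds may need a literal carriage return instead.)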

Richard





Re: 2 distros

2002-04-08 Thread Richard Cobbe
Lo, on Monday, April 8, j y did write:

[reformatted for 70 columns]

 Hi I'm using Suse 7.3 right now and I want to put debian on my hard
 drive. Is there an easy way to do this. I'm into easy if
 possible. thanks

I'm assuming you want to install the two distributions in parallel,
right?

It's certainly possible to have multiple distributions installed on the
same machine; I did this when I transitioned from RedHat to Debian about
18 months ago, and I'm doing it now as I'm transitioning from stable to
testing.

How easy this is depends on how your disk is currently partitioned.

The basic process:

1) Create a boot floppy that lets you get into your current Suse
   install.  You may not need this, but it's a good idea to have one
   around.

2) Adjust your partitioning scheme so that you've got room for Debian
   (see the installation guide as well as
   http://home.netcom.com/~kmself/Linux/FAQs/partition.html for
   suggestions on partition sizes and disk requirements.)  Other than
   possibly having to resize them, you'll leave your existing Suse
   partitions alone.

3) Install Debian on the new partitions.  There are some issues with
   uids and so forth, but you may be able to share your /home partition
   between distributions.  (This assumes you have a separate partition
   for /home, of course.)

   During the installation, you'll have the option to mount existing
   Linux partitions; you'll probably want to mount your Suse stuff at
   this point.  Use mount points like /suse, /suse/usr, and so forth.

4) Boot into debian using the boot loader installed during step 3.  Edit
   /etc/lilo.conf (or whatever boot loader config file you use) to
   provide choices for your Suse kernels as well as the Debian ones,
   reinstall the boot loader, and reboot.
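To make step 4 concrete, here's a sketch of the relevant lilo.conf stanzas (device names, kernel paths, and labels are all hypothetical; adjust them to the real layout):

```
image=/boot/vmlinuz-2.4.18       # Debian kernel
    label=debian
    root=/dev/hda5
    read-only

image=/suse/boot/vmlinuz         # Suse kernel, on the partition mounted at /suse
    label=suse
    root=/dev/hda1
    read-only
```

Remember to rerun /sbin/lilo after every edit so the boot map gets rewritten.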

HTH,

Richard





Re: more alt key questions

2002-04-07 Thread Richard Cobbe
Lo, on Saturday, April 6, Bob Thibodeau did write:

 On Sat, Apr 06, 2002 at 04:16:35PM -0600, Richard Cobbe wrote:

  First step: see what keysym your alt keys are generating.  Run xev (from
  the xbase-clients package), make sure the new window has focus, and hit
  both alt keys.  If you want the plastic bump labeled `alt' on your
  keyboard to act as meta in emacs, then you should see something like the
  following:

SNIP

  KeyRelease event, serial 27, synthetic NO, window 0x161,
  root 0x31, subw 0x0, time 223020366, (56,121), root:(810,145),
  state 0x8, keycode 113 (keysym 0xffe8, Meta_R), same_screen YES,
   ~~~  ~
  XLookupString gives 0 characters:  

SNIP

 Events are received, 64 and 113.

Since you've already fixed this, I'm just nitpicking.  The relevant
details here are not the keycodes but the keysyms.  Your keyboard
generates, in hardware, a number corresponding to each key (= physical
piece of plastic) on the board; this number is the keycode.  The X
server receives this and maps it into a keysym.  The server sends the
keysym to the application with input focus; apps should never see raw
keycodes.  (On PC hardware, anyway; not sure about how other systems
work off the top of my head.)

For this situation, you'd want to see the keys generating keysyms Meta_L
and Meta_R.

Richard





Re: more alt key questions

2002-04-06 Thread Richard Cobbe
Lo, on Saturday, April 6, Bob Thibodeau did write:

 On Sat, Apr 06, 2002 at 08:32:59PM +0200, Carel Fellinger wrote:
  On Sat, Apr 06, 2002 at 04:09:00AM -0500, Bob Thibodeau wrote:
   I just went back through the archives to questions I thought
   would help. They did, but for a different problem ( which
   I wasn't having).
   
   I used to have a 'meta' key in Emacs. I also used to do
   Ctrl-A, Alt-d in xterms to erase a command. Now my alt
   key doesn't affect the letters at all (alt-x prints 'x').
   
   I can still Ctrl-Alt-Fx to switch consoles and
   Alt-TAB switches windows in Icewm.
  
  Could it be that your window manager thinks alt keys are for him?
 
 I tried without a window manager with the same same results.
 No menu, of course, but no effect from the alt keys.

First step: see what keysym your alt keys are generating.  Run xev (from
the xbase-clients package), make sure the new window has focus, and hit
both alt keys.  If you want the plastic bump labeled `alt' on your
keyboard to act as meta in emacs, then you should see something like the
following:

KeyPress event, serial 27, synthetic NO, window 0x161,
root 0x31, subw 0x0, time 223019973, (56,121), root:(810,145),
state 0x0, keycode 64 (keysym 0xffe7, Meta_L), same_screen YES,
XLookupString gives 0 characters:  

KeyRelease event, serial 27, synthetic NO, window 0x161,
root 0x31, subw 0x0, time 223020047, (56,121), root:(810,145),
state 0x8, keycode 64 (keysym 0xffe7, Meta_L), same_screen YES,
XLookupString gives 0 characters:  

KeyPress event, serial 27, synthetic NO, window 0x161,
root 0x31, subw 0x0, time 223020300, (56,121), root:(810,145),
state 0x0, keycode 113 (keysym 0xffe8, Meta_R), same_screen YES,
XLookupString gives 0 characters:  

KeyRelease event, serial 27, synthetic NO, window 0x161,
root 0x31, subw 0x0, time 223020366, (56,121), root:(810,145),
state 0x8, keycode 113 (keysym 0xffe8, Meta_R), same_screen YES,
XLookupString gives 0 characters:  

If you see different keysyms, then your X keyboard settings are off.
Check your X server configuration file or use xmodmap(1) or xkeycaps to
adjust this.
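If the keysyms do turn out wrong, a typical ~/.Xmodmap fix looks like this (keycodes 64 and 113 are the ones from the xev output above; verify them with xev on your own keyboard first, since they vary):

```
! ~/.Xmodmap — make the Alt keys generate Meta keysyms
keycode 64  = Meta_L
keycode 113 = Meta_R
clear mod1
add mod1 = Meta_L Meta_R
```

Load it with `xmodmap ~/.Xmodmap`, or have your X session do so at startup.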

If you don't see any events in response to you hitting the Alt keys,
then something (probably your window manager) is intercepting the keys
before it gets to the server.  Adjust your WM config.  (Based on your
description above, though, I don't think this is happening.)

HTH,

Richard





Re: Re[2]: FQDN hostname

2002-04-05 Thread Richard Cobbe
Lo, on Thursday, April 4, Alan Poulton did write:

 Thursday, April 04, 2002, 7:35:53 PM, Richard Cobbe wrote:
 
  First question: do you want your FQDN to be
  hotstuff.bc.hsia.telus.net? With exim, I ran into some problems with
  that (mail for [EMAIL PROTECTED] getting sent to my ISP and so forth).  I
  ended up giving my system a hostname that wasn't valid outside my
  network, home.rcc.
 
 HMM! Interesting point!  No, I don't want mail for [EMAIL PROTECTED] to get
 sent to my ISP.  In that case, should I change my hostname?  Or is there
 a way to prevent it from routing to my ISP?

Actually, now that I think about it, I adjusted the exim configuration
(qualify_recipient and local_domains settings) to prevent this kind of
misdelivery; check your MTA's docs.  I think I redid the domain name
just for added safety, although there may well have been additional
reasons; I don't honestly remember.

 My etc/hosts now looks like this:
 
 127.0.0.1 hotstuff.bc.hsia.telus.net localhost loopback hotstuff
 192.168.43.1 hotstuff.bc.hsia.telus.net hotstuff
 
 -
 
 I have a Dynamic IP through my ISP - should I have (or need) the second
 line?

No, probably not.  It's superfluous, as well as likely incorrect---you
may have gotten 192.168.43.1 last time you connected, but you'll
probably get a different IP next time, especially if you're using PPP.

However, since I believe gethostbyname(3) and related functions search
through the entries in /etc/hosts in order, I don't think it's doing any
harm.

Richard





Re: FQDN hostname

2002-04-04 Thread Richard Cobbe
Lo, on Thursday, April 4, Alan Poulton did write:

 I'm slowly isolating my problems with sendmail and variants. It seems
 they're asking me for my Fully Qualified Domain Name.
 
 I've given the name of my box: hotstuff
 
 My ISP is telus.net, but they give me: bc.hsia.telus.net .. so I know my
 FQDN *should* be hotstuff.bc.hsia.telus.net
 
 But where do I enter that?

This trips a lot of people up; it's not at all clear.

First question: do you want your FQDN to be hotstuff.bc.hsia.telus.net?
With exim, I ran into some problems with that (mail for [EMAIL PROTECTED]
getting sent to my ISP and so forth).  I ended up giving my system a
hostname that wasn't valid outside my network, home.rcc.

Whatever you decide you want your FQDN to be, make sure your hostname
(as reported by /bin/hostname) is in fact `hotstuff'.  Then edit
/etc/hosts in one of the two following ways:

A) Static IP.  /etc/hosts should have at least the following two lines:

127.0.0.1 localhost
192.168.1.1   hotstuff.bc.hsia.telus.net hotstuff

   where 192.168.1.1 is your statically-allocated IP.  Note that your
   FQDN must be the *first* thing on the line after the IP address.

B) Dynamically-allocated IP (whether through DHCP, BOOTP, PPP, or
   another mechanism).  /etc/hosts should contain the following line:

127.0.0.1   hotstuff.bc.hsia.telus.net localhost hotstuff

   Again, the FQDN must be the first hostname on the line.

Once you've done this, verify with `/bin/hostname --fqdn'; it should
print out hotstuff.bc.hsia.telus.net.

That should set you up.

HTH,

Richard





Re: Creating local package cache with apt-move

2002-03-31 Thread Richard Cobbe
Lo, on Sunday, March 31, Simon Hepburn did write:

 Richard Cobbe wrote:
 
  On a related note, how come /var/lib/dpkg/available keeps getting hosed?
  It's happened to me twice now.  The first time, I figured I was SOL and
  re-installed from scratch (fortunately I didn't lose much).  It happened
  again about an hour ago, but I recovered by blowing away the existing
  copy (on the grounds that I didn't have anything to lose) and rerunning
  `dselect update', which recreated it correctly.  What's going on here?
 
 Not sure. Did you originally create your local mirror with apt-move
 3.x (potato version) ? Are you now using 4.x (woody version) ? If so
 you need to read /usr/share/doc/apt-move/README. There were major
 changes between 3.x and 4.x. If you have not done apt-move fsck then
 perhaps the Packages.gz apt-move creates are corrupted and this in
 turn is affecting /var/lib/dpkg/available.

No, I upgraded to woody from a brand-new, minimal potato install, so I
only started creating the mirror with apt-move 4.1.20.

I am running apt-move fsck as I type this, though, so we'll see if that
fixes the problem.

Richard





Re: why use sendmail?

2002-03-31 Thread Richard Cobbe
Lo, on Sunday, March 31, Simon Hepburn did write:

 dman wrote:
 
  The reason is that KMail is not a proper SMTP client.  The RFCs (821,
  2821) state that if a message can't be delivered to the next server in
  charge, then it must keep the message and retry later.  It can't just
  say oh, well and give up.  KMail (along with Lookout and every other
  User Agent) doesn't do this.

Haven't read the RFCs in question: does that requirement apply to MUAs,
or just MTAs?

 In such a situation KMail keeps the undelivered message in the outbox. I 
 don't think you could call that giving up. The message is not simply lost 
 in cyberspace. How does Kmail keeping the message and trying again later 
 differ from what an mta does ?

First, not all MUAs have that kind of functionality.  In fact, the
`traditional' Unix MUAs, written in the days before POP was the dominant
mechanism for retrieving email, pretty much require that there be an MTA
running locally which can handle queuing and delivery issues.  (And, in
general, this MTA darn well better be called /lib/sendmail or
/usr/lib/sendmail!)  The mailer that I use, VM, also makes this
assumption, although it's possible to get it to talk to a remote MTA.
It's been so long since I've used that configuration that I don't recall
exactly what happens if the remote MTA is inaccessible.

Second, if you've got exim running constantly (instead of from
/etc/inetd.conf), it'll retry in the background without users having to
take any action.  I guess it's my software engineering/development
training, but I like the idea of placing the queueing functionality in
one place (the MTA), rather than replicating it out among lots of
different places (all the various MUAs that people use).  Still, I guess
this is more important on a large multi-user system.

Richard





Re: testing + some unstable packages

2002-03-31 Thread Richard Cobbe
Lo, on Sunday, March 31, Alan Su did write:

 i've got an installation of testing on my laptop, but i wanted to
 upgrade a few packages (notably galeon) to the unstable version.  i'd
 rather not just point apt to the unstable distribution, as i'm pretty
 happy with the way testing is working out for me.
 
 in order to do this, i'm manually using dpkg to install packages that
 i download from the unstable distribution.  however, for this to work,
 i need to upgrade a bunch of other packages on which these few
 packages depend.  is there a way to determine the set of packages
 which must be installed (from the unstable distribution) before i go
 about installing the packages i want?  thanks for any ideas you might
 have!

Just spent some time experimenting with this myself; here's what I did.

1) Add lines for unstable to sources.list.
2) Add the following two stanzas to /etc/apt/preferences:

Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 200

3) apt-get update
4) apt-get -t unstable install galeon
(As documented in apt-get's manpage, the -t flag temporarily
raises the relevant packages from unstable to priority 990; as
long as this is higher than the priority assigned to testing,
you'll get what you want.)

Now, galeon and all of the packages on which it depends should track
unstable, while the rest of your system should track testing.

I've installed galeon and gnucash this way; everything seems to be
working just fine.

Richard





Re: Creating local package cache with apt-move

2002-03-30 Thread Richard Cobbe
Lo, on Saturday, March 30, Simon Hepburn did write:

 Richard Cobbe wrote:
 
  I've added the local mirror to sources.list, but neither apt-get nor
  dselect appears to see these files.  What step have I missed?
 
 It's a simple typo. Change deb file:///home... to deb file://home...

No, the three slashes are in fact correct.  Turns out you have to write
the Release files by hand.  (I have to say, I do wish this part of the
process were documented better---if not necessarily in the apt-move
manpage, then at least in /usr/share/doc/apt-move.  It looks like there
are already some bug reports which address this, though.)

So, I created the following release files

./dists/woody/main/binary-i386/Release
./dists/woody/contrib/binary-i386/Release
./dists/woody/non-US/main/binary-i386/Release
./dists/woody/non-US/contrib/binary-i386/Release
./dists/woody/non-US/non-free/binary-i386/Release
./dists/woody/non-free/binary-i386/Release

based on the following model (with different Component values as
appropriate):

Archive: testing
Component: main
Origin: Debian
Label: Debian
Architecture: i386

Things are fine now; apt-get and dselect see the package files in my
local mirror.  (I found this after a little more careful
digging through the debian-user archives.)

 BTW when I reply to your message only part of it shows up in my
 mailer. Not sure if that is my problem or yours.

I doubt this is a problem; it's probably just the way KMail works.  I
included the sources.list and apt-move.conf files as MIME attachments
(text/plain); a lot of mailers tend to display that MIME type inline but
not include it in replies.  I know VM works that way, ISTR that
Netscape's mailer does also.

Thanks,

Richard





Re: Creating local package cache with apt-move

2002-03-30 Thread Richard Cobbe
Lo, on Saturday, March 30, Simon Hepburn did write:

 Richard Cobbe wrote:
 
  No, the three slashes are in fact correct.
 
 Hrmm.. just to add to the confusion. I just checked my sources.list
 and saw I was using single slash. I did man sources.list to see if I
 was going out of my mind. I'm not.

No; it's apparently variable.  Still, the apt-move manpage did say to
use a file: URI, and every time I've seen those on a Unix system to
date, they've started with three slashes.  That, however, may be a
netscape/mozilla peculiarity; I believe the kernel considers multiple
consecutive slashes to be superfluous and throws away all but one.
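For what it's worth, RFC 1738 pins this down: the general form is file://&lt;host&gt;/&lt;path&gt;, and an empty host field means the local machine, which is where the three slashes come from:

```
file://<host>/<path>            # general form (RFC 1738)
file:///home/mirrors/debian/    # empty host field = localhost
```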

   Turns out you have to write the Release files by hand.
 
 I have never had to do that. If I comment out all lines in sources.list 
 except my local mirror and do dselect|update I get to see everything in 
 there.

 I'm not sure this is the problem. I just checked your apt-move.conf
 again and noticed a couple of problems:
 
 ARCHS=alpha arm hurd-i386 i386 m68k powerpc sparc
 Do you really have all these installed ;-)

Yeah, I realized that only *after* I did a `apt-move sync' and pulled
down all sorts of random crud.  This is now i386 only.

 DIST=potato
 This should be testing. At least, that's what your sources.list points to. 
 They need to be the same.

Yeah; saw that at the same time; it's now `woody'.

 However...
 
  Things are fine now; apt-get and dselect see the package files in my
  local mirror.  (I found found this after a little bit more careful
  digging through the debian-user archives.)

...where `now' is defined to mean `after I fixed the above two settings.'
 
 If it ain't broke don't fix it :-) 

In general, I agree, but I would like to understand exactly what's going
on here.  I may try moving the Release files outside the mirror,
commenting everything out of sources.list, and trying again.

   BTW when I reply to your message only part of it shows up in my
   mailer. Not sure if that is my problem or yours.
 
  I doubt this is a problem; it's probably just the way KMail works.  I
  included the sources.list and apt-move.conf files as MIME attachments
  (text/plain); a lot of mailers tend to display that MIME type inline but
  not include it in replies.  I know VM works that way, ISTR that
  Netscape's mailer does also.
 
 I guessed they were attachments #1 and #2, what I was referring to was this:
 
 Any advice would be very welcome.
 
 Thanks,
 
 Richard

Oh.  More attachment stuff; this is just how VM handles attachments
interposed into the body of the message.  I'd attached the two config
files, then kept typing; VM basically considers everything after the
first attachment to be an attachment, possibly of type text/plain with
inline content disposition.

On a related note, how come /var/lib/dpkg/available keeps getting hosed?
It's happened to me twice now.  The first time, I figured I was SOL and
re-installed from scratch (fortunately I didn't lose much).  It happened
again about an hour ago, but I recovered by blowing away the existing
copy (on the grounds that I didn't have anything to lose) and rerunning
`dselect update', which recreated it correctly.  What's going on here?

Thanks much for your advice,

Richard





Re: Junkbusting flash ads?

2002-03-30 Thread Richard Cobbe
Lo, on Saturday, March 30, dman did write:

 If root doesn't put the plugin in the global plugin folder, and you
 use the local plugin folder instead, then you don't need root
 permissions.  Only a marginal improvement, I know.

Where is the local plugin folder?  I've tried installing into
~/.mozilla/plugins, but no success.

Richard





Re: why use sendmail?

2002-03-30 Thread Richard Cobbe
Lo, on Saturday, March 30, Sean 'Shaleh' Perry did write:

 
 On 30-Mar-2002 John Lord wrote:
  Hi folks,
  
  
  Just a simple question realy, why should I use sendmail in
  conjunction with KMail, rather than let KMail do the job?
  
  I'm sorry but I can't see the reason why, but there probably is
  one. I have sat reading the various files about setting it up, but
  have drawn a blank.  Having a bit of a confusing time atm ;-)

Sorry; missed the OP.

Let me make sure I understand the question: you're trying to choose
between two options.

1) Have your MUA (KMail) send outgoing mail directly to your ISP's mail
   relay.

2) Run an MTA like sendmail locally, have it relay outgoing mail up to
   your ISP, and configure the MUA to route outgoing mail through the
   local MTA.

Right?

 what if you want to send mail from something else?  for instance the console
 program 'reportbug' which helps you submit proper bug reports.

That's one reason for option #2, sure.

Another one: if, for whatever reason, the connection between you and
your ISP is unavailable for a while, a locally-running MTA will
automatically queue outgoing mail until delivery becomes possible
again.  No user action necessary.

Richard





Re: OT: Is .RTF an Open Standard?

2002-03-29 Thread Richard Cobbe
Lo, on Thursday, March 28, Markus Grunwald did write:

 Hello !

Please don't delete the attributions.

   Besides, do most Windows apps (Word, 
   WordPerfect) allow saving to this format?
  
  Not saving as such, but it is possible to convert a Word document to
  postscript [...description deleted...] there
  you are.
 
 This is exactly the way those 4MB postscript files are created (compared
 to the 4kb postscript files from LaTeX).
 
 I never found a windows postscript driver that wasn't bloated.

Oh, I never made any claims about the quality of the resulting
postscript.  In fact, in some situations, there's about a 3-line PCL
header that gets written to the top of the file, and you have to go edit
that out before gs can deal with the resulting file.  The whole thing's
a bit of a pain, but it's the best thing I've found.

Richard





Creating local package cache with apt-move

2002-03-29 Thread Richard Cobbe
Greetings, all.

I know this is a FAQ, but I can't find the answer anywhere.

So, I'm in the process of upgrading from stable to testing (again---the
first time, somebody crapped on /var/lib/dpkg/available, and I couldn't
repair it.  There was a large chunk missing out of the middle, so far as
I could tell).

So, I dutifully saved all the .deb files I downloaded during the first
attempt, and I used apt-move to move them off to another directory.
I've added the local mirror to sources.list, but neither apt-get nor
dselect appears to see these files.  What step have I missed?

When I do a dselect update or an apt-get update, I see messages that
appear to indicate that it is ignoring the Release file in my local
mirror.  I've rebooted into potato, so I don't have the error message
handy.  In any case, when I install packages that I do have locally,
dselect still downloads them off the 'net.

# See sources.list(5) for more information, especially
# Remember that you can only use http, ftp or file URIs
# CDROMs are managed through the apt-cdrom tool.
#deb http://http.us.debian.org/debian stable main contrib non-free
#deb http://non-us.debian.org/debian-non-US stable/non-US main contrib non-free
#deb http://security.debian.org stable/updates main contrib non-free

# Uncomment if you want the apt-get source function to work
#deb-src http://http.us.debian.org/debian stable main contrib non-free
#deb-src http://non-us.debian.org/debian-non-US stable non-US

deb file:///home/mirrors/debian/ testing main non-free contrib

deb http://mirrors.kernel.org/debian/ testing main non-free contrib
deb-src http://mirrors.kernel.org/debian/ testing main non-free contrib

deb http://download.sourceforge.net/debian/ testing main non-free contrib
deb-src http://download.sourceforge.net/debian/ testing main non-free contrib

deb http://non-us.debian.org/debian-non-US testing/non-US main non-free contrib
deb-src http://non-us.debian.org/debian-non-US testing/non-US main non-free contrib
#  Configuration file for the apt-move script.
#
#  You should modify the following configuration to suit your system.
#  See the apt-move(8) manpage for information about these settings.
#
#  The defaults for this file are simply the settings I currently use.

# Configuration for apt-move script --

# The sites in ``/etc/apt/sources.list'' that you wish to mirror.
#APTSITES="debian.midco.net non-us.debian.org"
APTSITES="mirrors.kernel.org download.sourceforge.net non-us.debian.org"

# The architectures that your site contain, separated with spaces.
ARCHS="alpha arm hurd-i386 i386 m68k powerpc sparc"

# The absolute path to your debian directory (top of your local mirror).
# This MUST appear as the first entry of your sources.list if you use
# sync or mirror.
LOCALDIR=/home/mirrors/debian

# The distribution you want to mirror (see the apt-move(8) manpage for
# details) 
DIST=potato

# The package types you want to mirror. 
# Possible values are: binary, source, and both (use only one).
PKGTYPE=binary

# The full (absolute) path to your local cache of package files. The default
# will work for the apt-get packages, unless you've reconfigured apt.
FILECACHE=/var/cache/apt/archives

# The full (absolute) path to your local cache of Packages files.  The
# default will work for the apt-get Packages, unless you've reconfigured apt.
LISTSTATE=/var/lib/apt/lists

# Do you want apt-move to delete obsolete files from your mirror? (yes/no)
DELETE=no

# Maximum percentage of files to delete during a normal run.
MAXDELETE=20

# Only move packages if its version matches exactly with the master files.
# (yes/no)
STRICTMOVE=no

# End Configuration --

Any advice would be very welcome.

Thanks,

Richard


Re: Creating local package cache with apt-move

2002-03-29 Thread Richard Cobbe
Lo, on Friday, March 29, Richard Cobbe did write:

 Greetings, all.
 
 I know this is a FAQ, but I can't find the answer anywhere.
 
 So, I'm in the process of upgrading from stable to testing (again---the
 first time, somebody crapped on /var/lib/dpkg/available, and I couldn't
 repair it.  There was a large chunk missing out of the middle, so far as
 I could tell).
 
 So, I dutifully saved all the .deb files I downloaded during the first
 attempt, and I used apt-move to move them off to another directory.
 I've added the local mirror to sources.list, but neither apt-get nor
 dselect appears to see these files.  What step have I missed?
 
 When I do a dselect update or an apt-get update, I see messages that
 appear to indicate that it is ignoring the Release file in my local
 mirror.  I've rebooted into potato, so I don't have the error message
 handy.  In any case, when I install packages that I do have locally,
 dselect still downloads them off the 'net.

SNIP

Sorry; I should have pointed out in my first post that

cd /home/mirrors/debian
find . -name Release\* -print

doesn't find anything.

Richard





Re: OT: Is .RTF an Open Standard?

2002-03-27 Thread Richard Cobbe
Lo, on Wednesday, March 27, Kent West did write:

   .PDF and postscript are great display formats, but they're not very 
 useful for actual editting. Besides, do most Windows apps (Word, 
 WordPerfect) allow saving to this format?

Not saving as such, but it is possible to convert a Word document to
postscript (and then on to PDF through ps2pdf or something similar).
Install a new printer, attached to the local machine, on port FILE:
(instead of LPT1:), and select a driver for a postscript printer.  I
usually use something like one of the HP LaserJets for this.  You may
need to tweak the driver's options for maximum PS compatibility.  Then,
open your app, print to this printer, type in the filename, and there
you are.

It is, however, a one-way translation; I don't know of any programs
that allow you to open a PS file and edit it.

This is probably much more complicated than your users will want to deal
with, but I find it a useful trick from time to time.
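For anyone who wants to automate that cleanup step, a minimal sketch (the file names are made up, and it assumes the junk header precedes the `%!' PostScript magic line):

```shell
# Sketch: strip a PCL prefix that some Windows PS drivers prepend,
# keeping everything from the "%!" PostScript header onward.
printf 'PCL-JUNK\nMORE-JUNK\n%%!PS-Adobe-3.0\nshowpage\n' > doc.prn  # stand-in for the printed file
sed -n '/%!/,$p' doc.prn > doc.ps
head -n 1 doc.ps   # -> %!PS-Adobe-3.0
```

After the header junk is gone, gs (or ps2pdf) should accept the resulting file.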

Richard





Re: ? about C++

2002-03-11 Thread Richard Cobbe
Lo, on Thursday, March 7, Dimitri Maziuk did write:

 * Craig Dickson ([EMAIL PROTECTED]) spake thusly:
  begin  Dimitri Maziuk  quotation:
  
   Does anyone know about a school that teaches people to use
   vector, auto_ptr, basic_string and references?
  
  I have no idea what they teach in school these days, but I should think
  they would have to teach references if they teach operator overloading.
 
 Not according to the code I get to play with.
 
 [ snip ]
  The standard auto_ptr is an abomination.
 [ snip ]
 
 Well, there's that... I'd argue that garbage collection is very
 hard to get 100% right anyway

In C and C++, this is basically correct, although it depends on exactly
what you mean by `right'.  In advanced languages which actually provide
type safety, I disagree.  Correct GCs have been pretty well understood
since the 1960s; mark-and-sweep goes back to McCarthy's original LISP
work, around 1960.  Most of the
research in this area since then has been concerned with performance
improvements, rather than correctness issues.

This is pretty irrelevant when you're talking about auto_ptr; it's not a
full GC.  IMO, most of auto_ptr's bad reputation comes from people
trying to use it as though it single-handedly solved all memory
allocation problems.  It doesn't, and it was most specifically not
designed for that in the first place.  It does, however, do a wonderful
job at what it *was* designed for: it prevents memory leaks in the face
of exceptions.

 and there are other ways to detect memory leaks...

Yes, although this is not really one of the main benefits of using a
GC.  For my money, one of the primary benefits of GC is that I don't
have to worry about a lot of really annoying bugs, especially dangling
pointers or heap corruptions through incorrect deletions.

 Anyhow, my point was, name 4 problem areas in C.
 
 1. No array bounds checking (Fix: use vector or equivalent)

Agreed.

 2. Pointers (Fix: use references, iterators etc.).

Pointers are not the problem; the fact that they're not safe is the
problem.  Solution: get rid of void*, and prevent all typecasts
involving pointers except those between different classes in an
inheritance hierarchy.  Oh, and check pointer arithmetic, too, but
that's basically equivalent to point #1.

References a la C++ have their own problems; they allow variable
aliasing, which causes all kinds of grief during optimization.

 3. Forgetting to free() malloc()'ed memory (Fix: use auto_ptr, 
destructors etc.)

auto_ptr and destructors are a step in the right direction, but I don't
find that they're completely general.  This may be due to the fact that
I'm used to the freedom and flexibility that comes with a GC language.

 4. Using pointers to (hopefully array) (hopefully allocated) (hopefully
null-terminated) of char as string data type (see also 1, 2,  3)
(Fix: use std::string)

Oh, absolutely.  This is basically related to the other three points.

Richard



Re: SoundBlaster Live!, /dev/audio, and bad sound quality

2002-03-05 Thread Richard Cobbe
Lo, on Monday, March 4, Dave Sherohman did write:

 On Sun, Mar 03, 2002 at 01:22:38PM -0600, Richard Cobbe wrote:
  In general, everything works fine, except that sending .au files to
  /dev/audio has really lousy sound quality.  You can hear the sound, but
  there's a loud hissing or static sound on top of it.
 
 I'd guess it's either that the .au is 8-bit and /dev/audio is
 expecting 16-bit data or the .au contains signed data and /dev/audio
 is expecting unsigned (or vice-versa).

Possible, but I'd be surprised.  The same sound file worked just fine on
my old computer.  Don't remember exactly what kind of sound hardware,
but the following are the relevant module configuration lines:

alias sound opl3sa2
#pre-install sound /sbin/insmod sound dmabuf=1
alias midi opl3
options opl3 io=0x388
options opl3sa2 mss_io=0x530 irq=5 dma=0 dma2=1 mpu_io=0x388 io=0x370
alias synth0 off

Out of curiosity, is there any way to convert between 8- and 16-bit .au,
or between signed and unsigned?  A cursory glance at the sox manpage
doesn't seem to indicate that these operations are supported.
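Even if sox can't do the conversion directly, the mapping itself is tiny; here's a sketch of the unsigned-8-bit to signed-16-bit case (the sample values are made up):

```shell
# Sketch: unsigned 8-bit samples recentred to signed 16-bit.
# Feeding data to the device in the wrong one of these formats
# is a classic source of loud hiss.
for b in 128 255 0; do
  printf '%d ' $(( (b - 128) * 256 ))   # recentre around zero, scale to 16 bits
done; echo
# prints: 0 32512 -32768
```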

In any case, using play (from sox) to play the .au file works just fine.

Richard



Re: SoundBlaster Live!, /dev/audio, and bad sound quality

2002-03-05 Thread Richard Cobbe
Lo, on Monday, March 4, dave mallery did write:

 On Sun, 3 Mar 2002, Richard Cobbe wrote:
 
  Greetings, all.
  
  New sound card (actually, new computer).  It's a SoundBlaster Live!, so
  I've compiled in support for the emu10k1 module.
  
  Up-to-date potato, kernel 2.2.20, SMP.
  
  In general, everything works fine, except that sending .au files to
  /dev/audio has really lousy sound quality.  You can hear the sound, but
  there's a loud hissing or static sound on top of it.  I had this
  behavior both with emu10k1 v0.7 included with the 2.2.20 source, and
  emu10k1 v0.18 that I just downloaded from SourceForge.  I tried playing
  with various mixer settings, but that didn't help.  The only way I could
  get rid of the hissing was by turning the volume all the way down.
 
 did you mute all the other inputs on the mixer?

Just tried it.  No luck.

As I remarked elsewhere in the thread, though, play(1) from the sox
package works quite nicely.

Richard



Re: SoundBlaster Live!, /dev/audio, and bad sound quality

2002-03-05 Thread Richard Cobbe
Lo, on Sunday, March 3, Rick Macdonald did write:

 On Sun, 3 Mar 2002, Richard Cobbe wrote:
 
  Greetings, all.
  
  New sound card (actually, new computer).  It's a SoundBlaster Live!, so
  I've compiled in support for the emu10k1 module.
  
  Up-to-date potato, kernel 2.2.20, SMP.
  
  In general, everything works fine, except that sending .au files to
  /dev/audio has really lousy sound quality.  You can hear the sound, but
  there's a loud hissing or static sound on top of it.

SNIP

 I wonder if it's meant to work. cat english.au > /dev/audio on my system
 also sounds horrible, but the same file sounds fine when played with
 xanim, esdplay or play. I actually don't know where play came from.
 bplay, from the bplay package, sounds horrible just like cat does.

You're apparently right -- play works quite nicely on the same file.

I'd be interested in learning the differences between play and catting
to /dev/audio, but I'm not going to worry about it too much.  I have a
quick way of playing .au files, and that's really all I'm after.

 I think cat'ing the file _used_ to sound OK on the old sound card (SB16)
 that I had before the SBLive.

Yeah, the same file sounded good on the sound hardware in the previous
machine.  I'm a little unclear on details, but I think it was a cheap
SB16 knockoff.  (Exact modules info elsewhere in the thread.)

Thanks for the advice,

Richard



Re: How do I turn off the Virtual Desktop?

2002-03-05 Thread Richard Cobbe
Lo, on Monday, March 4, O Polite did write:

  
  a. keep hitting alt + until it is no longer virtual
  
 
 I never got this trick to work. Is this behaviour  controlled from
 XF86Config-4? I can't see anything in there that indicates that. I'm
 using KDE. Might standard KDE key settings be hiding this behaviour?

The quoted text above is incorrect; try ctrl-alt-plus and
ctrl-alt-minus, and you have to use the plus and minus keys on the
numeric keypad.  The ones on the top row of the keyboard won't do
anything (well, they get passed on to whichever application has keyboard
focus; whatever happens then is up to the app).
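What those keys cycle through is the Modes list in the active Display subsection of XF86Config-4; a typical entry looks something like this (illustrative, not necessarily yours):

```
SubSection "Display"
    Depth     24
    Modes     "1024x768" "800x600" "640x480"
EndSubSection
```

Listing only one mode there is the usual way to disable the virtual-desktop zooming entirely.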

Richard



SoundBlaster Live!, /dev/audio, and bad sound quality

2002-03-03 Thread Richard Cobbe
Greetings, all.

New sound card (actually, new computer).  It's a SoundBlaster Live!, so
I've compiled in support for the emu10k1 module.

Up-to-date potato, kernel 2.2.20, SMP.

In general, everything works fine, except that sending .au files to
/dev/audio has really lousy sound quality.  You can hear the sound, but
there's a loud hissing or static sound on top of it.  I had this
behavior both with emu10k1 v0.7 included with the 2.2.20 source, and
emu10k1 v0.18 that I just downloaded from SourceForge.  I tried playing
with various mixer settings, but that didn't help.  The only way I could
get rid of the hissing was by turning the volume all the way down.

Other sound operations, like playing MP3s or WAVs in xmms, work fine; no
hissing or static at all.  (I only tried this with v0.18 of the driver,
though.)

I don't know that many things use /dev/audio any more, so this isn't
that big a deal, but I'd like to get this working---especially because I
typically use english.au from www.kernel.org to test the sound system
after I change things.  Nice, easy, and short.

Any suggestions would be appreciated,

Richard



Re: Help!!! undelete for ext3fs!!!

2002-03-01 Thread Richard Cobbe
Lo, on Friday, March 1, Jeff did write:

 Ulf Rompe, 2002-Mar-01 10:25 +0100:
  
  alias rm='mv --backup=numbered --target-directory=/tmp/Trashcan'
  
 
 This is nice, and I'm starting to use this from my root and user
 account on my laptop.  
 
 However, how would I delete from the Trashcan, save removing the 
 alias temporarily?

/bin/rm /tmp/Trashcan/whatever

Specifying the full path blocks alias/function expansion.
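A sketch of the mechanism (the alias body here is a harmless stand-in):

```shell
# Aliases expand only for a bare command word; a full path (or a
# leading backslash, as in \rm) is looked up directly instead.
shopt -s expand_aliases                       # scripts have alias expansion off by default
alias rm='echo intercepted:'
rm /tmp/Trashcan/whatever > alias-fired.txt   # the alias runs; nothing is deleted
/bin/rm -f alias-fired.txt                    # the real rm runs
```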

Richard



Re: Swap space

2002-02-25 Thread Richard Cobbe
Lo, on Sunday, February 24, Charles Baker did write:

 I'm about to install sid, using unoffical iso's, on a
 machine w/ 384MB of RAM. Old rule of thumb was
 2*RAM-SIZE = SWAP-SIZE . Do I really need 768MB of
 swap space?!?!?! Plus, since the install uses 2.2.20
 kernel, will it be able to handle a swap space larger
 than 128MB?

I don't remember the exact details, but the 128M limit on swap space was
relaxed quite some time ago.  2.2.20 should handle a single 768M swap
partition/file quite nicely.

You'd need to know what you're planning on using the machine for in
order to decide if 768M swapspace is really necessary.

Richard



Re: ipchains on startup

2002-02-25 Thread Richard Cobbe
Lo, on Monday, February 25, [EMAIL PROTECTED] did write:

 Just wondering if Debian had any specific place to put ipchains stuff for 
 initialising the rules on bootup.

See the ipmasq package.

Richard



Making floppy images (was Re: 3c509 setup disk?)

2002-02-25 Thread Richard Cobbe
Lo, on Monday, February 25, Arthur Buijs did write:

 Baloo,
 
 if you tell me how to make a dd image, (I'm almost new to Linux) I will
 send it to you as soon as I'm back in the office. (Approximately 4 hours
 from now.)

Assuming the disk is in your first drive (what Microsoft calls A:), then

dd if=/dev/fd0 of=floppy.img bs=1440K

Adjust the bs (block size) value as necessary.
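A round-trip sanity check of the same invocation, using a scratch file in place of /dev/fd0 so it can run anywhere:

```shell
# Sketch: same dd shape as above, on a fake "floppy".
dd if=/dev/zero of=fake-floppy.img bs=1K count=16 2>/dev/null
dd if=fake-floppy.img of=copy.img bs=1440K 2>/dev/null
cmp fake-floppy.img copy.img && echo images-match   # -> images-match
```

To write an image back out, swap if= and of= (dd if=floppy.img of=/dev/fd0).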

Richard



Re: cvs stupid user problem, please help me debug

2002-02-23 Thread Richard Cobbe
Lo, on Friday, February 22, Paul E Condon did write:

 I saved a cvs repository of personal programs in RedHat some time ago,
 before I discovered Debian. Now I'm trying install it and access the
 files in it. I used tar to save the whole repository as cvs.tgz. When
 I untar it, I discover that the permissions are wrong and the group
 assignment of the files is different from the Debian convention.  So I
 set about to make them right for Debian, but I can't get it to work.

To which Debian convention are you referring?  The numeric GIDs should
have shifted, but other than that, I'm not aware of any Debian
convention for ownership of files in a CVS repository.  In fact, due to
the way CVS repositories work, such a convention would most likely be
counterproductive.

 First question: Debian uses src as the conventional group name for
 cvs internal files.

Granted, I've customized my repository pretty heavily, but I don't see
this:

[ankh-morpork:/var/cvs]$ /bin/ls -ld /var/cvs
drwxr-xr-x  4 root root 4096 Feb 10  2001 /var/cvs
[ankh-morpork:/var/cvs]$ /bin/ls -l /var/cvs
total 8
drwxrwxr-x  3 root  cvsadmin 4096 Feb 13  2001 CVSROOT
drwxr-xr-x 10 cobbe cobbe    4096 Feb  6 21:08 cobbe

IIRC, I created the cvsadmin group, but the permissions on /var/cvs
seem to indicate that the group permissions just default to whatever's
appropriate for the user who created the repository.

Which version of CVS are you using?  The above results are from
1.10.7-7, the version in potato.

 Second question: What permissions do I need to set? I've done
 chmod g+w *
 everywhere that there is a directory, but I still get these error 
 messages when I try to checkout a sample program directory, e.g.:

While Debian and RedHat may have default groups that they use for CVS
repositories, this isn't really all that relevant.  As long as all of
the CVS users have write access to the appropriate directories,
everything should work out.

Typically, one does this by manipulating the group ownerships and
permissions of the files in the repository.  I'm assuming that you want
all-or-none access: each user can access either the entire repository or
none of the repository.  (More finely-grained access controls are
possible, but they complicate the process somewhat.)  I'm also assuming
a `local' or `ext' access method, as opposed to `pserver'.

I'll use two metavariables to make the explanation easier: $CVS_GROUP is
to the name of the group controlling CVS access, and $REPOSITORY is the
name of the directory which contains the repository files (e.g., where
you untarred your archived repository).

- addgroup $CVS_GROUP
- Use adduser to ensure that all of the appropriate users are in
  $CVS_GROUP.
- chgrp $CVS_GROUP $REPOSITORY
- chmod g+ws $REPOSITORY
## The setgid bit is crucial.
- find $REPOSITORY -print | xargs chgrp $CVS_GROUP
## Make everything owned by the CVS group.
- find $REPOSITORY -type d -print | xargs chmod g+ws
## Make all directories group-writable and setgid.

Make sure that you're running in $CVS_GROUP (you may need to log out and
log back in), and you should be good to go.

(If you don't understand what all of the above does or why you're doing
it, please ask.  That understanding will help you deal with other
repository problems later on.)
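The setgid bit added by the `chmod g+ws' step is the crucial part; here is a sketch you can try in a scratch directory to see its effect:

```shell
# A setgid directory makes files created inside it inherit the
# directory's *group* rather than the creator's primary group --
# which is what keeps a shared repository writable by $CVS_GROUP.
mkdir -p repo
chmod g+ws repo            # group-writable + setgid, as in the steps above
touch repo/newfile,v
ls -ld repo | cut -c1-10   # group execute slot shows 's', e.g. drwxrwsr-x
```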

 [EMAIL PROTECTED]:~/co$ cvs co hello
 cvs checkout: warning: cannot write to history file 
 /var/lib/cvs/CVSROOT/history: Permission denied
 cvs checkout: Updating hello
 cvs checkout: failed to create lock directory for `/var/lib/cvs/hello' 
 (/var/lib/cvs/hello/#cvs.lock): Permission denied
 cvs checkout: failed to obtain dir lock in repository 
 `/var/lib/cvs/hello'
 cvs [checkout aborted]: read lock failed - giving up
 [EMAIL PROTECTED]:~/co$

Yeah.  User pecondon doesn't have write access to the repository
directory.  The above process should fix this.

HTH,

Richard



make-kpkg with -j switch?

2002-02-23 Thread Richard Cobbe
Hello, all.

I've done a fair amount of searching on this topic, but a search key of
-j confuses a lot of the search engines, so I've not been able to find a
conclusive answer here.

Is it possible to have make-kpkg supply the -j switch to make?  I've got
a dual-processor machine, and I'd love to build the kernel with `-j 2'.

(FWIW, I'm using kernel-package 7.04.potato.3 from potato.)

Thanks,

Richard



Re: make-kpkg with -j switch?

2002-02-23 Thread Richard Cobbe
Lo, on Saturday, February 23, Sean 'Shaleh' Perry did write:

 
 On 23-Feb-2002 Richard Cobbe wrote:
  Hello, all.
  
  I've done a fair amount of searching on this topic, but a search key of
  -j confuses a lot of the search engines, so I've not been able to find a
  conclusive answer here.
  
  Is it possible to have make-kpkg supply the -j switch to make?  I've got
  a dual-processor machine, and I'd love to build the kernel with `-j 2'.
  
  (FWIW, I'm using kernel-package 7.04.potato.3 from potato.)
  
 
 man kernel-pkg.conf and look for 'CONCURRENCY_LEVEL'.  Not sure if the potato
 version has the option but it does exist under newer versions.

Yes, it's there in potato as well.

Thanks to you and Faheem for the pointer.
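For reference, the variable is read from the environment when make-kpkg runs; a sketch for a two-CPU box:

```shell
# CONCURRENCY_LEVEL is handed to make as its -j value by make-kpkg.
export CONCURRENCY_LEVEL=2
echo "make will be invoked with -j${CONCURRENCY_LEVEL}"
```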

Richard



Re: potato web browsers

2002-02-17 Thread Richard Cobbe
Lo, on Sunday, February 17, will trillich did write:

 On Sun, Feb 17, 2002 at 09:35:01AM -0600, Alex Malinovich wrote:
  Galeon should work with no problems on a Potato system. I had it running
  for a couple of weeks on my desktop before I upgraded to sid. I've run
  it on a P133 with 40 megs of RAM with no major problems. And Galeon is,
  by far, the most superior browser I've had the pleasure of EVER using.
  There are relatively recent deb's available in non-US.
 
 what's the potato-friendly way to get galeon installed? i've got

Ximian's got packages for galeon (plus lots of other Gnome stuff).

For sources.list:
deb http://red-carpet.ximian.com/debian stable main

Note that Ximian does have later versions of packages which appear in
potato.  As a result, it's possible that things may break.  I've not had
any problems, but I don't use GNOME all that heavily.

Richard



Re: /usr/bin/ld: cannot find -lndbm

2002-02-13 Thread Richard Cobbe
Lo, on Tuesday, February 12, Erik van der Meulen did write:

 On Tue, Feb 12, 2002 at 13:36:46 -0600, Jordan Evatt wrote:
 
  A quick search using apt on #debian gave me this: (/usr/include/db1/ndbm.h)
  in devel/libc6-dev
  Maybe you don't have libc6-dev installed? Check for that first. I have no
  way of knowing if you have it installed or not.
 
 Thanks for that. I do have libc6-dev installed and (consequently, I
 think) I have a /usr/include/db1/ndbm.h. But this does not seem enough.

The linker's looking for `libndbm.so', not ndbm.h.  However, on my
potato system, /usr/lib/libndbm.so is in libc6-dev along with the .h
file (as one would expect).

(All packages are as of potato; things may have moved since.)

Couple of things to check:

   1) libndbm.so should be a symlink; make sure that there's a real file
  at the end of the symlink chain.

   2) Make sure the file is really a shared library:
`readelf -h /usr/lib/libndbm.so'
  If it prints out
readelf: Error: Not an ELF file - it has the wrong magic bytes at the start
  then the file got crunched; make sure your libc6 install is still
  valid.

   3) Where is the compiler/linker looking for the libraries?
  Cut-n-paste the compiler invocation that is printed when you do a
  `make' -- the bit that reads

cc -Dbool=char -DHAS_BOOL -D_REENTRANT -DDEBIAN -I/usr/local/include -O2 spamd/spamc.c \
   -o spamd/spamc -L/usr/local/lib -lnsl -lndbm -lgdbm -ldbm -ldb -ldl -lm -lc -lposix -lcrypt

  into a terminal, stick a `-v' after the cc, run it, and post the
  output.  (There will likely be a fair amount, not all of which
  will necessarily make sense.)

That should give us a better idea of the problem.

Richard



Re: dselect and resolving

2002-02-11 Thread Richard Cobbe
Lo, on Monday, February 11, John Cichy did write:

 Hello all,
 
 It seems that dselect ignores the hosts file when updating its lists. I have a 
 debian mirror in my DMZ and have added an entry in my hosts file to use an 
 internal address to access the mirror, but it seems that dselect is ignoring 
 that entry and trying the public address instead. Does anyone know how to 
 make dselect look at the hosts file, my host.conf has the entry :
 
 order hosts,bind
 
 so I would think that it would resolve to the address in the host file.

I'm not entirely sure how this works, but there's a very good chance
that /etc/nsswitch.conf is more significant than /etc/host.conf.  What
does the `hosts' line say from nsswitch.conf?
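For comparison, the glibc-era equivalent of that `order' line lives in /etc/nsswitch.conf; a files-before-DNS setup looks like this (a typical entry, not necessarily yours):

```
hosts: files dns
```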

Richard



Re: Auctex/LaTeX/Emacs problem

2002-02-06 Thread Richard Cobbe
Lo, on Monday, February 4, Ryan Claycamp did write:

 On Mon, Feb 04, 2002 at 04:08:52PM -0600, Richard Cobbe wrote:
  Lo, on Sunday, February 3, Ryan Claycamp did write:
  
   I have lost the color markings in xemacs when I edit a LaTeX file.  I
   am running woody and I think it happened after I updated to the new
   version of auctex.  I noticed that when auctex installed, it said
   something like emacsen ignoring flavor xemacs.  How do I get the color
   back in xemacs when it is editing LaTeX files?  I really enjoyed that
   feature.
  
  IIRC, xemacs comes with its own copy of AUCTeX, which explains the
  `ignore' error message.
  
  Assuming that you've got font-locking on in other editing modes, simply
  adding (require 'font-latex) to your .emacs file should do the trick.
  
 
 Thank you.  That did the trick, at least for xemacs.  I put the
 command in both my .emacs file and .xemacs/init.el.  It didn't work
 for emacs, but I don't have any color in emacs.  That is another worry
 for another time as I mainly use xemacs for my editor.

It's been a while since I've used plain FSF emacs, but I think
font-latex will work with both.  However, it assumes that you've already
got basic font locking turned on, and the method for doing that differs
between the two editors: check the manuals for details.  (My .emacs file
is set up in such a way that I can't easily look this up and paste it
here.)

Richard



Re: Auctex/LaTeX/Emacs problem

2002-02-04 Thread Richard Cobbe
Lo, on Sunday, February 3, Ryan Claycamp did write:

 I have lost the color markings in xemacs when I edit a LaTeX file.  I
 am running woody and I think it happened after I updated to the new
 version of auctex.  I noticed that when auctex installed, it said
 something like emacsen ignoring flavor xemacs.  How do I get the color
 back in xemacs when it is editing LaTeX files?  I really enjoyed that
 feature.

IIRC, xemacs comes with its own copy of AUCTeX, which explains the
`ignore' error message.

Assuming that you've got font-locking on in other editing modes, simply
adding (require 'font-latex) to your .emacs file should do the trick.

Richard



Re: adsl and xemacs (beginner)

2002-01-27 Thread Richard Cobbe
Lo, on Saturday, January 26, Kapil Khosla did write:

SNIP

 2) I used apt-get install xemacs and got the editor but am not able
 to open it from console (xemacs &), I get the following error
 Xlib: Client is not authorized to connect to Server
 
 Initialization error: X server not responding
 : :0.0
 
 [1]+  Exit 255xemacs
 
 I can however use it from the Apps-Editor-Xemacs
 Why is this so..can you give me a web resource to understand the \
 reason,

Has to do with X security.  My guess is that the shell in your terminal
is not running as the same user you logged in as -- did you su to root?

Simplest solution: don't su within the terminal; just stay as your
normal user and everything should be OK.

(X security is a complicated system, so I didn't go into it here.  I can
post a brief explanation of the issues if you're interested.)

Richard



OT: hardware recommendations?

2002-01-27 Thread Richard Cobbe
Greetings, all.

My PII-233MHz is beginning to show its age (already?!), so I'm looking
for a replacement.  The idea of saving a few bucks by building my own
system from components has its appeal, but frankly, I'd really rather
not bother with the effort of trying to make sure that all of the pieces
work well with Linux: the time it would take is worth more than the
$100-200 I'd be able to save.

So, I'm looking at Penguin Computing's Tempest 210MP workstation.
Obviously, with Penguin, Linux compatibility isn't a problem, but they
sell RedHat, so I'd like to be relatively sure that at least one of the
three current branches of Debian work well with this stuff.  (I'm
already fairly certain that Potato doesn't have support for the video
card.)

Oh, and I do in fact run Windows 95 from time to time, mostly for games
(which require DirectX, so VMWare/Win4Lin won't work, and I haven't yet
tried WINE).

Here are the components I'm a little concerned about:

Graphics card:
baseline model is an ATI 8MB Rage Pro AGP; other alternatives
include the Nvidia GeForce 2 MX (32 MB SDRAM), GeForce 3 (64MB DDR).
I'd like to go with the ATI, because my graphics needs are pretty
modest (see below), but X support for the Rage Pro appears somewhat
dicey.  What experiences have people had with this card?  Would this
card require testing or unstable, or does it work with Potato as well?

When I say "gaming", I'm not talking about FPS or heavy-duty 3D
graphics stuff: I tend to prefer strategy games like Civilization
II/III and FRPGs like Might & Magic, where raw rendering speed
really isn't all that critical.  Plain 2D graphics are far more
important.  And I've only got a 15" monitor, so resolutions higher
than 1024x768 give me eye strain.

Sound:
The choices are Creative Labs SoundBlaster 128 PCI or Creative Labs
SoundBlaster Live! 5.1.  Again, my needs under Linux are fairly
modest: the occasional MPG, AVI, RealAudio, or QuickTime movie.

Based on web searches, the SoundBlaster Live! seems to be supported
well enough with the emu10k module, and the PCI-128 works well with
the ALSA drivers.  Again: any comments?

On a second note, my CD-RW drive has apparently just died.  (When you
stick a new blank in, it scans the disk looking for something or other.
It's no longer finding it on about two thirds of the disks, and attempts
to burn the other third fail with a buffer underflow---which I've never
had before, even writing at 4x.)  I'm looking for an internal SCSI
writer---any recommendations?  I'm told Plextor is quite good.

Thanks much for any advice,

Richard



Re: kernel warning what should I do?

2002-01-22 Thread Richard Cobbe
Lo, on Sunday, January 20, Adam Majer did write:

 On Sun, Jan 20, 2002 at 07:23:27PM -0800, David Csercsics wrote:
  I had to recompile my kernel today because my sound wasn't working right and
  a couple other things weren't working. So I recompiled it and everything
  works but I get a warning now about /boot/system.map not matching kernel
  datawhat do I do to fix that.
 
 cp kernel directory
 cp System.map /boot/System.map-kernel-version
 
 that is, if you have something like /boot/bzImage-kernel-version
 for your kernel. If you just have /boot/bzImage or other 
 with no version, then just
 
 cp System.map /boot/System.map [or /boot/system.map in your case...]

If you build your kernel The Debian Way (see
http://www.debian.org/releases/stable/i386/ch-post-install.en.html#s-kernel-baking
for a brief intro to the process), the resulting package automatically
contains, e.g., /boot/System.map-2.2.19 and installs it in the correct
place.

Richard



Re: Gtk error only when root! (??)

2002-01-22 Thread Richard Cobbe
Lo, on Tuesday, January 22, Mark Ferlatte did write:

 On Tue, Jan 22, 2002 at 01:23:16PM -0800, Camilux wrote (1.00):
  if i am not root, i can for example open gdmconfig, but it says i 
  need to be root to change things; so i su - myself up to root, but 
  when i type gdmconfig, i get this error:
  
  Gtk Error: Can't Open display
 
 snip
 
  whats wrong here?
 
 X uses security mechanisms to prevent other users on the same machine
 from using your display.  The easiest way to fix this is to put the
 following in your .bashrc (if you use bash):
 
 export XAUTHORITY=$HOME/.Xauthority
 
 then log out, log back in.
 
 This will let you run X apps via su or sudo.

Except that when you do `su -', as the OP noted, not all of your
environment settings propagate into the root shell.  Instead, after you
su, do the following:
xauth merge ~user/.Xauthority
where `user' is the name of the user from which you su'd.  (If, for
whatever reason, user's X authority file is in a different location, as
with an ssh connection, substitute the correct filename in its place.)

Richard



Re: MIME Decoding command-line tool?

2002-01-22 Thread Richard Cobbe
Lo, on Tuesday, January 22, Elizabeth Barham did write:

 Hi,
 
 I've been receiving MIME encoded messages in the mail from
 someone. These are base64 encoded and I'm having trouble extracting
 them. Is there a good command-line tool for doing this (please don't
 mention mutt)?

It seems, from your X-Mailer header, that you're using Gnus.  Doesn't it
do MIME?  Or is it the base64 encoding?

Anyway, to answer your question, metamail(1) is likely what you're
looking for.  Note that the input you supply to this command should have
the headers included, especially the Content-Type header.
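metamail(1) is the classic tool for this; if you'd rather script the decoding, Python's standard email module does the same job, and it illustrates why the headers matter.  A sketch (the message below is a made-up sample, not one of the actual messages from this thread):

```python
# Decode a base64-encoded MIME message using only the standard library.
# The Content-Transfer-Encoding header is what tells the parser how to
# decode the body -- which is why the input must include the headers.
import base64
import email

raw = (b"From: someone@example.com\n"
       b"Subject: demo\n"
       b"MIME-Version: 1.0\n"
       b'Content-Type: text/plain; charset="us-ascii"\n'
       b"Content-Transfer-Encoding: base64\n"
       b"\n" +
       base64.b64encode(b"Hello from a base64 body.\n"))

msg = email.message_from_bytes(raw)
# get_payload(decode=True) honors Content-Transfer-Encoding and
# returns the decoded bytes.
body = msg.get_payload(decode=True)
print(body.decode("ascii"))
```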

Richard



Re: list all packages that are installed

2002-01-16 Thread Richard Cobbe
Lo, on Friday, January 11, Craig Dickson did write:

 Richard Cobbe wrote:
 
  (Just out of curiosity, is the COLUMNS trick documented anywhere?  I
  couldn't find it in any of the obvious manpages: dpkg(8), bash(1),
  environ(7).)
 
 Yes, it actually is in the bash manpage:
 
Simple Commands
 
A simple command is a sequence of optional variable assignments
followed by blank-separated words and redirections, and
terminated by a control operator. The first word specifies the
command to be executed, and is passed as argument zero. The
remaining words are passed as arguments to the invoked command.

Wrong trick.  I meant the special meaning of the COLUMNS environment
variable, not the ability to set it on the command line for a single
process.

Richard



Re: OT: Language War (Re: C Manual)

2002-01-16 Thread Richard Cobbe
Lo, on Sunday, January 13, Erik Steffl did write:

   type is a propert of variable.

Not exclusively.  Two counter-examples, one in C, and one in Scheme.

C:
int x;

x = "foo";

You'll get a type error here at compile time, for obvious reasons.
Question: how can this be a type error if only variables have types?
You need to realize that "foo" has type (const) char * before you can
determine that you can't assign it to an int.

Scheme:

(define f
  (lambda (x)
(cond
  ((boolean? x) (if x 42 23))
  ((symbol? x) (string-length (symbol->string x)))
  ((char? x) (char->integer x))
  ((vector? x) (vector-length x))
  ((procedure? x) (x 42))
  ((list? x) (length x))
  ((pair? x) (car x))
  ((number? x) (- x))
  ((string? x) (string-length x))
  ((port? x) (read x))
  ((promise? x) (force x)))))

This defines a function f with one argument, x.  What's x's type?  The
function is equally well-defined for an argument of just about any value
supported by R5RS, the current spec.  (For that matter, what's the
return type of f?  Answer: it's usually a number, but it depends on x!)

In languages like Scheme, Lisp, Python, and Smalltalk, almost all
typechecking is deferred to run-time.  Therefore, it is meaningless to
describe the type of a variable; in these languages, types only apply to
values.
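A rough Python translation of f makes the same point (hypothetical, and covering only some of the clauses): the dispatch happens on the *value's* type at run time, and the parameter x has no type of its own.

```python
# Runtime type dispatch, as in the Scheme example: the value carries
# the type; the parameter x does not.
def f(x):
    if isinstance(x, bool):        # must precede any int check, since
        return 42 if x else 23     # bool is a subclass of int in Python
    if isinstance(x, str):
        return len(x)
    if isinstance(x, (list, tuple)):
        return len(x)
    if isinstance(x, (int, float)):
        return -x
    if callable(x):
        return x(42)
    raise TypeError(f"f is not defined for {type(x).__name__}")

print(f(True), f("abc"), f([1, 2, 3]), f(7), f(lambda n: n + 1))
# → 42 3 3 -7 43: the return type varies with the argument, just as
# in the Scheme version.
```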

Richard



Re: list all packages that are installed

2002-01-16 Thread Richard Cobbe
Lo, on Wednesday, January 16, dman did write:

 On Wed, Jan 16, 2002 at 10:31:47AM -0600, Richard Cobbe wrote:
 | Lo, on Friday, January 11, Craig Dickson did write:
 | 
 |  Richard Cobbe wrote:
 |  
 |   (Just out of curiosity, is the COLUMNS trick documented anywhere?  I
 |   couldn't find it in any of the obvious manpages: dpkg(8), bash(1),
 |   environ(7).)
 |  
 |  Yes, it actually is in the bash manpage:
 |  
 | Simple Commands
 |  
 | A simple command is a sequence of optional variable assignments
 | followed by blank-separated words and redirections, and
 | terminated by a control operator. The first word specifies the
 | command to be executed, and is passed as argument zero. The
 | remaining words are passed as arguments to the invoked command.
 | 
 | Wrong trick.  I meant the special meaning of the COLUMNS environment
 | variable, not the ability to set it on the command line for a single
 | process.
 
 ncurses thing, apparently.

Ah.  I would not have thought to look there.  Thanks!

Richard



Re: cvs no space left on device

2002-01-11 Thread Richard Cobbe
Lo, on Friday, January 11, Kendall Shaw did write:

 Brenda J. Butler [EMAIL PROTECTED] writes:
 
  Is this a remote cvs repository?  Maybe the remote machine
  ran out of scratch space.
 
 Yes, it's a remote repository. I'll see if that's it. 

That's almost certain to be it.  In my experience, CVS only needs the
/tmp directory on the server side.  (Pity the error messages don't
clarify this issue.)

(Incidentally, there is a dedicated CVS mailing list; send mail to
[EMAIL PROTECTED] to subscribe.  Moderate traffic: 50-60 messages
a day, back when I subscribed.)

Richard



Re: list all packages that are installed

2002-01-11 Thread Richard Cobbe
Lo, on Friday, January 11, Paul E Condon did write:

 How can I get a list of all the debian packages that are installed
 on my computer?
 
 I see lots of stuff about getting a list of all the packages that are
 available. But thats not what I'm asking.
 
 I know its there, because dselect somehow maintains it little
 symbols on each package line in its display. I would like that
 as a text file that I might pass through grep.

dpkg -l

although you may need to set the COLUMNS variable to a larger number in
order to prevent some of the fields from getting chopped.

See dpkg's manpage for more goodies.
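The COLUMNS convention isn't unique to dpkg.  As one concrete illustration of the same mechanism (illustrative only; dpkg does its own lookup in C), Python's shutil.get_terminal_size() also consults the COLUMNS environment variable before falling back to the real terminal width:

```python
# Many terminal-aware tools check the COLUMNS environment variable
# before asking the terminal for its width.  shutil.get_terminal_size()
# follows the same convention, so the effect is easy to demonstrate.
import os
import shutil

os.environ["COLUMNS"] = "200"
print(shutil.get_terminal_size().columns)   # → 200
```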

(Just out of curiosity, is the COLUMNS trick documented anywhere?  I
couldn't find it in any of the obvious manpages: dpkg(8), bash(1),
environ(7).)

Richard



Determining FQDN (was Re: Yow, Madduck!)

2002-01-11 Thread Richard Cobbe
Lo, on Thursday, January 10, dman did write:

SNIP

One minor nit to pick from an otherwise very good explanation (and I
wouldn't bother, except that I've been bitten by this before).

 This directive tells exim to use that name as the hostname in the SMTP
 greeting (HELO/EHLO) instead of that reported by the gethostbyname() C
 function (which returns the first thing after 127.0.0.1 in
 /etc/hosts).

SNIP

Not quite.  gethostbyname() returns the host record for whatever name or
address you supply as a parameter.  I think, although I'm not actually
certain, that by default, MTAs like exim use the machine's FQDN on the
HELO/EHLO line.  Finding the FQDN is a somewhat complicated process:

1) Find the machine's local hostname (e.g., in my case, ankh-morpork).
   This is stored in a kernel variable which root can set with
   hostname(1); it's initialized on boot from the contents of
   /etc/hostname.  To check this value, run either `hostname' or `uname
   -n' from the command line, or use the uname(2) system call in a C
   program.  (It'll be in the nodename field of the utsname struct.)

2) Pass this value to gethostbyname(3), which resolves it to an IP
   address, then determines the canonical hostname for that IP.  This
   resolution and lookup follows the normal mechanism for host lookups:
   DNS, /etc/hosts, NIS, whatever (see /etc/nsswitch.conf and
   /etc/resolv.conf).  This canonical hostname is the FQDN.

I believe DNS records mark one of the hostnames as canonical; I would
assume that NIS records have a similar ability.  For lines in
/etc/hosts, the first hostname on the line is considered canonical.
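The two steps above can be reproduced from Python's socket module (a sketch; exim itself does this through the C library):

```python
# Step 1: the kernel's nodename, as set from /etc/hostname at boot.
# Step 2: resolve it to the canonical name.  getfqdn() wraps the
# gethostbyaddr()/gethostbyname_ex() lookups and falls back to the
# bare hostname if resolution fails, so it is safe to run anywhere.
import socket

hostname = socket.gethostname()   # step 1: local hostname
fqdn = socket.getfqdn()           # step 2: canonical name, if resolvable
print(hostname, fqdn)
```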

So, the upshot of all this:

* If you have a dynamic IP (ppp or dhcp), then you should have the
  following line in /etc/hosts:
127.0.0.1   HOSTNAME.DOMAIN.TLD HOSTNAME localhost
  replacing HOSTNAME, DOMAIN, and TLD with the appropriate values.  This
  is what I do at work, since my IP is assigned via dhcp.

* If you have a static IP, like I do at home, then you'll want the
  following lines in /etc/hosts:
127.0.0.1   localhost
1.2.3.4 HOSTNAME.DOMAIN.TLD HOSTNAME
  where 1.2.3.4 is your IP, and HOSTNAME, DOMAIN, and TLD are as above.

In either situation, the order of the hostnames on the line *is*
significant!

Richard



Re: OT: Language War (Re: C Manual)

2002-01-07 Thread Richard Cobbe
Lo, on Monday, January 7, dman did write:

 I've just come up with a good description of what a 'type' is :
 A type is the set of all valid values.

*DING*DING*DING*DING*DING*DING*

Got it in one.  Types are sets of values.  That's all.  C, C++, and Java
provide a fairly limited language for describing these values, but types
are sets of values.  (I'm using, and I suspect dman is as well, `set' in
the mathematical sense as an unordered collection of objects.)

 Have you read The Hobbit?  Do you remember what Treebeard told Bilbo
 about his name?

(Actually it was _The Two Towers_, and it was Merry & Pippin, not Bilbo,
but it's still a good analogy.  grin)

Richard



Re: OT: Language War (Re: C Manual)

2002-01-06 Thread Richard Cobbe
Lo, on Friday, January 4, David Teague did write:

 On Thu, 3 Jan 2002, Richard Cobbe wrote:
 
  Not in the general case, no.
  
  std::string *s = new string("foo");
  std::string *s2 = s;
  
  delete s;
  
  If we assume a variant of C++ that extends delete to set its
  argument pointer to NULL, you still have the problem of s2 hanging
  around.  In the general case, it's not so obvious that you've got
  two pointers to reset.

 The allocated memory is released to the free store manager. There is
 no leak.

The original goal was avoiding bogus pointers rather than avoiding
memory leaks.  The suggestion to which I was responding was that C/C++
automatically zero out a pointer when you deallocate the memory to which
that pointer refers.

 However, you have the dangling pointer s2, which you must not apply
 the delete operator to again. This will result in at least a
 segmentation fault.

It *might* segfault---if you're lucky.
 
 For safety's sake, assign 0 to s2, so it will receive the null
 pointer value.

My point was that *finding* s2 is extremely difficult in the general
case, for the runtime system, the compiler, or the programmer.  In fact,
in certain situations, the programmer can't set s2 back to the null
pointer, as it's not in scope at the point of the deallocation (e.g., a
local variable in the function a few levels back up the call stack).

Richard



Re: OT: Language War (Re: C Manual)

2002-01-06 Thread Richard Cobbe
Lo, on Friday, January 4, David Jardine did write:

 On Thu, Jan 03, 2002 at 05:34:00PM -0600, Richard Cobbe wrote:
  
  Yes, it *is* types.  Remember the definition of type-safety:
  
  If an expression E is determined at compile time to have type T,
  then evaluating E will have one of two results:
  
  1) The value of E is a valid object of type T and not just some
 random collection of bits we're misinterpreting as a T.
  2) a well-defined runtime error occurs (e.g., Java's
 ArrayBoundsException).
  
 An ArrayBoundsException is not a question of type in Java.  An
 array of two chars is exactly the same type as an array of two
 thousand chars.  A two-dimensional array of chars, on the other
 hand, is a different type from a three-dimensional one.

Consider the Java declaration
int a[]

The issue above is not the type of a, but rather the type of a[45].
The compiler determines that a[45] has type int---try passing it as an
argument to a method that expects a string, and you'll get a compilation
error, regardless of the actual number of elements in a.

Since the compiler has determined a[45] to have type int, then
evaluating that expression better darned well produce a real-live int.
If it can't (because a only has 4 elements), then, to preserve
type-safety, Java signals an error.
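Python enforces the same invariant at run time: indexing past the end of a list cannot yield misinterpreted bits, so it raises a well-defined IndexError instead (option 2 of the definition above).  A minimal sketch:

```python
# Option 2 of the type-safety definition: the expression a[45] cannot
# produce a real element, so a well-defined runtime error is signalled
# rather than whatever happens to sit past the end of the array.
a = [10, 20, 30, 40]          # only 4 elements
try:
    x = a[45]
except IndexError as e:
    print("caught:", e)
```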

Richard



NAVER-MAILER and procmail recipes

2002-01-06 Thread Richard Cobbe
Greetings, all.

I'm tired of getting bounce messages from [EMAIL PROTECTED] every
time I post to this list, so I thought I'd add a recipe to my
.procmailrc to drop these into /dev/null automatically.  Unfortunately,
none of my attempts seem to work; the messages are delivered as normal.

I've currently got

:0
* ^From:.*naver\.com
/dev/null

but I've also tried the following conditions
* ^From:[EMAIL PROTECTED]
* ^From:.*NAVER-MAILER

and none of them seem to work.

I've been using procmail for a while now, so I generally understand what
I'm doing.  I figure I'm missing something pretty silly here; could
someone point it out to me?
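One quick sanity check is to run the condition's regex over the raw headers outside procmail, to separate "the regex is wrong" from "the recipe never fires."  A hedged Python sketch (procmail's egrep-style matching differs in details, and the header below is a hypothetical stand-in, since the real addresses are redacted above):

```python
# Procmail conditions are regexes matched against the header lines.
# Testing the pattern against a sample header in isolation confirms
# whether the pattern itself is at fault.  The sample From: header
# here is made up for illustration.
import re

headers = "From: bounce-handler@naver.com\nTo: someone@example.com\n"
pattern = r"^From:.*naver\.com"
print(bool(re.search(pattern, headers, re.MULTILINE)))   # → True
```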

A selection of the headers from a sample message appears below.  (If you
need more header info, I can attach a copy of one of the messages.)

From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: 
=?ks_c_5601-1987?B?uN7AzyDA/LzbIL3HxtAgvsu4siA8YmlnaWNlQG5hdmVyLmNvbT4=?=
Date: Mon, 7 Jan 2002 03:24:05 +0900 (KST)

In the message that I'm composing, by the way, the capital F is the
first character in the first line---any leading '>' got added by something
else.

Thanks much,

Richard



Re: OT: Language War (Re: C Manual)

2002-01-03 Thread Richard Cobbe
Lo, on Thursday, January 3, William T Wilson did write:

 On Wed, 2 Jan 2002, Richard Cobbe wrote:
 
  I'll agree that the two are related; in fact, I'd go so far as to say
  that if a language supports dynamic memory allocation and type-safety,
  it *has* to have some sort of automatic storage management system.
 
 I don't think that necessarily follows; a manual mechanism for freeing
 resources would then just set the reference to a NULL value.

Not in the general case, no.

std::string *s = new string("foo");
std::string *s2 = s;

delete s;

If we assume a variant of C++ that extends delete to set its argument
pointer to NULL, you still have the problem of s2 hanging around.  In
the general case, it's not so obvious that you've got two pointers to
reset.



Re: OT: Language War (Re: C Manual)

2002-01-03 Thread Richard Cobbe
Lo, on Thursday, January 3, Erik Steffl did write:

 what's the difference? the point is you can assign almost anything to
 anything, and yet there is no segfault - i.e. the strength of types has
 nothing (sort of) to do with segfaults... the resource allocation is
 crucial...

Type safety (plus dynamic allocation) implies advanced memory
management.  The converse is not true: you can slap Boehm's conservative
GC onto a C++ program, but you can still get segmentation faults:

char str[] = { 'b', 'a', 'd', ' ', 's', 't', 'r', 'i', 'n', 'g' };
// note the lack of a terminating '\0'!
cout << str;

No allocation issues involved.  As Ben Collins pointed out elsewhere in
this thread (bit of a tree-shaped thread, isn't it?), this won't
necessarily cause a segfault, but it can.  It's also a violation of
type-safety: cout expects a null-terminated string, and as far as the
compiler is concerned, str fits this.  However, there's no runtime check
in the output routine to verify that this is, in fact, the case.  Ooops.

Therefore, I claim that type safety is a more fundamental concept than
resource mangement.

 that's all interesting, but the point was that the perl type system is
 as weak as I can imagine yet it doesn't lead to segfaults... it's the
 resource allocation!

Type safety != `strong' types.  They're orthogonal.  (Well, sort of.  It
turns out that `strong types' isn't a very well-defined concept;
language researchers prefer to discuss compile-time type verification.
*That* definitely has nothing to do with type safety.)
 
 If you cannot make sure that MyUserType variable will only be assigned
 MyUserType values then you have (almost) NO type safety.

That's only meaningful if you can declare a variable to be MyUserType at
runtime.  This does not apply to a whole variety of languages, many of
which are still type-safe.

(I think I'm just going to decide that Perl's type system is on crack.
But what do you expect when a linguist designs a programming language?)

  In any case, memory allocation errors aren't the only cause of
  segmentation faults---how about walking off the end of an array?  Here,
  I claim that Perl *does* maintain type-safety, although in a seriously
  fscked-up way: it simply expands the array to make the reference valid.
 
   it's not the types! it doesn't care about types at all. it just makes
 sure you always have a place to store whatever you are about to store.
 what does that have with types? not much. it has to know what are you
 trying to store but it doesn't care at all what was there before, what's
 the type of variable where you are storing it. again, I find it pretty
 strange to call that kind of behaviour 'type safety'.

Yes, it *is* types.  Remember the definition of type-safety:

If an expression E is determined at compile time to have type T,
then evaluating E will have one of two results:

1) The value of E is a valid object of type T and not just some
   random collection of bits we're misinterpreting as a T.
2) a well-defined runtime error occurs (e.g., Java's
   ArrayBoundsException).
There are no other possibilities.

In this particular case, Perl chooses to use the memory allocation
system to satisfy type safety (it creates a new T, initializes it, and
returns it).  It's not the only possibility, though; Java would throw an
exception in this case.  No allocation involved.
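Python happens to offer both behaviours side by side, which makes the contrast easy to see (an analogy only, not a claim about Perl's or Java's internals):

```python
# Two ways to keep the type-safety invariant on an out-of-range
# reference: signal a well-defined error (the Java-style choice), or
# manufacture a value so the reference becomes valid (the Perl-style
# autovivification choice).
from collections import defaultdict

a = [1, 2, 3]
try:
    a[10] = 99                 # Java-style: a well-defined error
except IndexError:
    print("IndexError")

d = defaultdict(int)           # Perl-style: create the slot on demand
d[10] += 1                     # no error; a fresh 0 is created first
print(d[10])                   # → 1
```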

Richard



Re: OT: Language War (Re: C Manual)

2002-01-02 Thread Richard Cobbe
Lo, on Wednesday, January 2, Ben Collins did write:

 Just because in C it can cause a segfault doesn't mean the other
 languages are any better.

No, it doesn't.  However, IMNSHO, the fact that C and C++ have many
*more* undefined constructs that other languages does mean that the
other languages are better (in most situations).

 Show me one language that doesn't have some action that is classified
 as undefined.

I'm not aware of any, although PLT Scheme (http://www.plt-scheme.org/)
comes pretty close.  So what?  I never claimed that other languages are
completely defined; I simply said that the construct I mentioned
up-thread has undefined behavior in C and C++, and that this lack of
definition is dangerous.  (The few places where PLT Scheme's behavior is
undefined are also dangerous, but they arise much less
frequently---when's the last time you tried to invoke a continuation
from within one of dynamic-wind's guard thunks?)

 Documenting that something is undefined is called a specification.

Yes, in one sense, you're right.  But it's a pretty useless
specification.  `We specify that we do not specify what this construct
does.'  I don't see how this is supposed to be helpful.  If I had a
compiler that flagged all undefined constructs at compile-time so that I
didn't make them through careless errors, that would be one thing.
Unfortunately, that's not possible, because you can't *detect* all of
C's undefined constructs at compile-time.

 It is there so you know that you cannot rely on certain behavior. If
 you ignore that, then no language is going to help you.

Look, I'm not trying to ignore the fact that the ANSI/ISO C spec marks
certain constructs as undefined.  I'm saying that the fact that they are
undefined is not acceptable, especially when other languages
specifically define the behavior of the analogous constructs.  The fact
that the behavior of these constructions is completely specified by the
language means that I catch errors much earlier, whereas with C, I may
not catch them at all.  

The main issue: I'm tired of getting bogus output with no indication
that it's bogus, and I'm tired of spending long hours tracking down
silly pointer bugs that aren't an issue in more sophisticated languages.
The more undefined constructs in my language, the more time I have to
spend doing this kind of grunt work.  It's not that I want the computer
to do my entire job for me, but I would like it to do some of the grunt
work so I can concentrate on the interesting, challenging, and above all
*useful* parts of the program.

Richard



Re: OT: Language War (Re: C Manual)

2002-01-02 Thread Richard Cobbe
Lo, on Wednesday, January 2, Erik Steffl did write:

 Richard Cobbe wrote:
  
  Lo, on Monday, December 31, Erik Steffl did write:
  
  Perl does have strong types, but they don't really correspond to the
  types that most people are used to thinking of.  Perl's types are
  
  * scalars (no real distinction between strings, numbers, and the
undefined value)
  * lists
  * hashes
  * filehandles
  
  (I haven't really used Perl since Perl 4, so this list may not be
  complete.)
 
   actually there is real distinction between string and number, it's
 just that it's internal only (perl stores numbers and strings
 differently, it also treats them differently).

Since Perl automatically converts between strings and numbers whenever
it feels like it, there's really no important distinction between the
two from the programmer's perspective.  (In other words: since you can't
tell if a Perl scalar contains the number 33 or the string '33', there's
no practical difference between the types.)  Implementation details are
really pretty irrelevant at the level at which I'm discussing the
language.

   the point was that it's not a strong type system - by which I mean
 that you can assign pretty much any value to any l-value, no questions
 asked. You don't get segfault but you still get non-working program
 (e.g. when you mistakenly assign array to scalar, you get size of array
 in scalar).

Oh, right.  I'd forgotten about the whole scalar/array context thing.
(You can tell that I don't use Perl that often, yes?)  Since, however,
the following two statements aren't inverses
$bar = @foo;
@foo = $bar;
(the latter sets @foo's length, does it not?) I'd be tempted to explain
the list-to-scalar conversion in terms of an implicit conversion, much
like those performed by C++ in the presence of, say, single-argument
constructors.

Even if that doesn't work, though, then you can take the LISP tactic,
and just consider everything to be the same (static) type:
ThePerlDataType.  This type has several variants for scalars, lists,
references, and so forth.

   the reason you don't get segfaults is that perl takes care of memory
 allocation, e.g. if you try to assign something to the variable that's
 undefined (no storage place yet), it allocates appropriate amount of
 memory or if you try to read a value of variable that doesn't have value
 it says that undefined variable was used (doesn't give you random piece
 of memory like you can get in c).
 
   most of the segfaults are because of the resource allocation mistakes,
   not because of mistaken types... at last that's my impression.
  
  Resource allocation mistakes (at least, the kind that typically lead to
  seg faults) *are* type errors, from a certain point of view.
 
 when you stretch it far enough.

It's not a stretch at all; it simply requires thinking about types in a
way not very common among C/C++/Java programmers.

   generally there are two distinct problems: [resource allocation
   errors and type errors]

I'll agree that the two are related; in fact, I'd go so far as to say
that if a language supports dynamic memory allocation and type-safety,
it *has* to have some sort of automatic storage management system.
Otherwise, you can delete objects too early and cause problems.  My
original point, though, was that early deletions like this actually
short-circuit the type system, since they break the type-safety
invariant.

   IMO it's good to have clear distinction between resource allocation
 and type safety. example of perl - it doesn't have type safety, it lets
 you assign (almost) anything to anything but you still don't get
 problems as you describe above because it handles the memory allocation
 automatically.

Perl's a little weird, I'll grant you, but the fact that you can do this
sort of `free assignment' doesn't preclude typesafety.  Ferinstance, in
Scheme, I can do this without any problems:

(define x 3)       ;; variable definition & declaration
(set! x "three")   ;; variable assignment

Works just fine.  Of course, if I then try to add x to 4 after the set!,
I'll get a run-time error---thus type-safety.
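The same two lines behave identically in Python (a sketch of the same point):

```python
# Assignment is untyped: the values carry the types, not the variable.
# Rebinding x from a number to a string is fine; *using* the string as
# a number is what triggers the well-defined runtime error -- and that
# error is precisely the type safety.
x = 3
x = "three"        # free assignment: no questions asked
try:
    x + 4          # the runtime type check fires here
except TypeError as e:
    print("caught:", e)
```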

In any case, memory allocation errors aren't the only cause of
segmentation faults---how about walking off the end of an array?  Here,
I claim that Perl *does* maintain type-safety, although in a seriously
fscked-up way: it simply expands the array to make the reference valid.

  Performing run-time checks, as with array indexing, does not necessarily
  imply the existence of an interpreter.  If a compiler sees the
  expression a[i], it can either assume that i's value is a valid index
  for the array a (as C and C++ do), or it can insert the appropriate
  checks directly into the executable code.  I still claim this is part of
  the language's run-time system, regardless of how it's interpreted.
   ~~~

Erm, that should have read `implemented' -- sorry.

   just
