Re: Bash problem with subshell and *

2006-12-13 Thread Almut Behrens
On Wed, 13 Dec 2006 12:10:58 +0100, Andrea Ganduglia wrote:
 Hi. I'm working on Sarge. I'm parsing a text file when..
 
 cd tmp
 ls
 aaa bbb ccc ddd
 
 My script parses all files in the directory, and greps the ^Subject: line.
 
 for i in *; do
 egrep '^Subject:' $i
 done
 
 Subject: Hello Andrea
 Subject: Ciao Debiam
 Subject: {SpAm?} * Viiagrra * Ciialiis * Leevittra *
 Subject: Good OS
 
 Ok? But if I modify my script and store egrep result into VAR
 
 for i in *; do
 SUBJECT=$(egrep '^Subject:' $i)
 echo $SUBJECT
 done
 
 Subject: Hello Andrea
 Subject: Ciao Debiam
 aaa bbb ccc ddd
 Subject: Good OS
 
 In other words, the subshell expands the wildcard `*' and shows all the
 files in the directory!

You need to put double quotes around the variable when echoing it:

echo "$SUBJECT"

This will prevent pathname expansion (which normally takes place
_after_ variable expansion).  echo by itself does not disable the
normal processing of the command line, which includes pathname
expansion.

In case you'd like to understand all the nitty-gritty details, the bash
manpage will provide lots of reading material ;)  For example, you'll
find statements such as

EXPANSION
   Expansion is performed on the command line after it has been
   split into words.  There are seven kinds of expansion performed:
   brace expansion, tilde expansion, parameter and variable
   expansion, command substitution, arithmetic expansion, word
   splitting, and pathname expansion.

QUOTING
   (...)
   Enclosing characters in double quotes preserves the literal
   value of all characters within the quotes, with the exception
   of $, `, \, and, when history expansion is enabled, !.

Admittedly though, it needs careful reading, and sometimes a bit of
reading between the lines...

Cheers,
Almut





-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bash script question

2006-12-07 Thread Almut Behrens
On Thu, Dec 07, 2006 at 12:16:54PM -0600, Nate Bargmann wrote:
 
 I have a directory of files that are created daily using 
 filename-`date +%Y%m%d`.tar.gz so I have a directory with files whose
 names advance from filename-20061201.tar.gz to filename-20061202.tar.gz
 to filename-20061203.tar.gz and so on.  Based on the date in the
 filename, I would like to delete any than are X days older than today's
 date.  So, I'm not interested in the actual created/modified date, just
 the numeric string in the name.

... what, no Perl one-liner yet?? :)

So, here it is, the line noise version that should do the job:

$ perl -MTime::Local -e 'unlink grep {/-(\d{4})(\d\d)(\d\d)/; 
timelocal(0,0,0,$3,$2-1,$1) < time-864000} glob "*.tar.gz"'

This would delete all of your .tar.gz files older than 10 days (or
864000 secs), in the current directory.
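For comparison, the same filename-date cleanup can be sketched in plain shell (assuming GNU date and the naming scheme from your post; it only echoes what it would remove):

```shell
# Dry run: delete .tar.gz files whose embedded YYYYMMDD is older than 10 days
cutoff=$(date -d '10 days ago' +%Y%m%d)   # YYYYMMDD, ten days back
for f in filename-*.tar.gz; do
    num=${f##*-}                 # strip up to the last '-': 20061201.tar.gz
    num=${num%.tar.gz}           # strip the extension:      20061201
    if [ "$num" -lt "$cutoff" ]; then
        echo rm "$f"             # drop the 'echo' to really delete
    fi
done
```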

Cheers,
Almut

PS: Of course, you can add some whitespace and stuff, and make a script
out of this, e.g.

#!/usr/bin/perl

use Time::Local;

my $t_crit = time - 10*24*60*60;

unlink
grep {
/-(\d{4})(\d\d)(\d\d)/;
timelocal(0,0,0,$3,$2-1,$1) < $t_crit;
}
glob '*.tar.gz';



Or, even more verbose, almost self-documenting:

#!/usr/bin/perl -w

use Time::Local;

my $t_crit   = time - 10*24*60*60;
my $wildcard = '*.tar.gz';
my $date_pattern = qr/-(\d{4})(\d\d)(\d\d)/;

my @files = glob $wildcard;

for my $file (@files) {
my ($year, $mon, $day) = $file =~ $date_pattern;
my $file_age = timelocal(0, 0, 0, $day, $mon-1, $year);
unlink $file if $file_age < $t_crit;
}





Re: Bash script question

2006-12-07 Thread Almut Behrens
On Thu, Dec 07, 2006 at 03:41:53PM -0600, Ron Johnson wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 On 12/07/06 15:12, Almut Behrens wrote:
  On Thu, Dec 07, 2006 at 12:16:54PM -0600, Nate Bargmann wrote:
  I have a directory of files that are created daily using 
  filename-`date +%Y%m%d`.tar.gz so I have a directory with files whose
  names advance from filename-20061201.tar.gz to filename-20061202.tar.gz
  to filename-20061203.tar.gz and so on.  Based on the date in the
  filename, I would like to delete any than are X days older than today's
  date.  So, I'm not interested in the actual created/modified date, just
  the numeric string in the name.
  
  ... what, no Perl one-liner yet?? :)
  
  So, here it is, the line noise version that should do the job:
  
  $ perl -MTime::Local -e 'unlink grep {/-(\d{4})(\d\d)(\d\d)/; 
  timelocal(0,0,0,$3,$2-1,$1) < time-864000} glob "*.tar.gz"'
  
  This would delete all of your .tar.gz files older than 10 days (or
  864000 secs), in the current directory.
 
 OP specifically noted:
 I'm not interested in the actual created/modified date

yes, I know... It does in fact use the date specification from the filename
(extracted by the /-(\d{4})(\d\d)(\d\d)/ regex).





Re: Font for PC graphics characters

2006-05-08 Thread Almut Behrens
On Mon, May 08, 2006 at 02:46:31PM -0400, T wrote:
 On Mon, 08 May 2006 13:31:27 -0400, Stephen R Laniel wrote:
 
  On Mon, May 08, 2006 at 01:27:59PM -0400, T wrote:
  which font is capable of showing the PC graphics characters (ie ascii 
  art)? 
  
  Well, if you want to display ASCII, then every font known to
  man, basically, will display it. So ASCII art isn't the
  trouble. It's probably characters that are outside of the
  ASCII range that are troubling you. 
 
 Yes, exactly, those PC graphics characters/symbols.
 
  For that, you have to
  make sure that your locale is set properly, and that your
  terminal program is also using the right locale. If you need
  help in that direction, let us know.
 
 I'm using xterm, and I've set my LANG=C...
 
 I've read that 
 -misc-fixed-medium-r-semicondensed--12-110-75-75-c-60-iso10646-1
 
 have the symbols...
 
 But still I am unable to view those PC graphics characters/symbols.

The package xfonts-dosemu might be what you want (if I'm understanding
you correctly).  It contains the following fonts:

vga  -dosemu-vga-medium-r-normal--17-160-75-75-p-80-ibm-cp437   

vga11x19 -dosemu-vga-medium-r-normal--19-190-75-75-c-100-ibm-cp437  

vgacyr   -dosemu-vga-medium-r-normal--17-160-75-75-c-80-ibm-cp866   

vga10x20 -dosemu-vga-medium-r-normal--20-200-75-75-c-100-ibm-cp866  

vga-ua   -dosemu-vga-medium-r-normal--17-160-75-75-c-80-ibm-cp1125  

vga10x20-ua  -dosemu-vga-medium-r-normal--20-200-75-75-c-100-ibm-cp1125 


Cheers,
Almut





Re: Makefile parametrisation

2006-04-25 Thread Almut Behrens
On Tue, Apr 25, 2006 at 06:56:27PM +0200, Dennis Stosberg wrote:
 [EMAIL PROTECTED] wrote:
 
  I'd like to define a symbol ARCH in my Makefile to be the output
  of
uname -m
 
  The obvious thing, just starting with
 
  ARCH = `uname -m`
 
  didn't seem to work.  It defined ARCH to be `uname -m' instead of
  i686 or x86_64.  Not unreasonable, but What *is* the way to do
  this?
 
 With GNU make you can use ARCH = $(shell uname -m).

...or even ARCH := $(shell uname -m), the difference being that when
you use :=, the value will be expanded only once upon definition,
while with =, it is evaluated anew every time you use it -- resulting
in lots of unnecessary fork()s when you have many occurrences of
$(ARCH) in your makefile.[1]

Since the result of uname -m is unlikely to change while running
make, this performance optimisation can safely be made.

Cheers,
Almut


[1] sceptical minds can verify this themselves: ;)

With the following little makefile

ARCH := $(shell uname -m)
target:
# $(ARCH) $(ARCH) $(ARCH) $(ARCH)

the command

$ strace -eprocess make 3>&2 2>&1 1>&3 | grep fork | wc -l

should count only 2 forks, while when using ARCH = $(shell uname -m)
you'd get 5 ...
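If strace isn't to hand, the extra expansions can also be counted via a side effect (a sketch, assuming GNU make; the log file name is made up):

```shell
# Each expansion of a recursively-expanded ARCH reruns $(shell ...),
# so logging inside it counts the calls.
cd "$(mktemp -d)"
printf 'ARCH = $(shell uname -m >> calls.log; uname -m)\ntarget:\n\t@echo $(ARCH) $(ARCH) >/dev/null\n' > Makefile
make -s target
wc -l < calls.log        # 2 with '=' -- with ':=' it would be 1
```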





Re: libXda.so

2006-04-24 Thread Almut Behrens
On Mon, Apr 24, 2006 at 10:52:25AM -0700, Isabella Thomm wrote:
 Hi,
 
 I am using Debian unstable and since the last update to the new xorg my 
 3d acceleration does not work anymore. glxgears tells that it misses the 
 accelerated x library libXda.so.1, which was not necessary before. In 
 which package is this library or has somebody encountered the same 
 problem? A friend of mine has the same hardware, everything works fine 
 and he does not have that library.  I  have an  ati  radeon mobility 
 9000 and I am using the open source drivers.

I could be wrong, but AFAIK, libXda.so is part of the proprietary /
commercial X server by Xi Graphics (www.xig.com), and implements
their XiG-DirectAccess X extension.  I have no idea, though, why your
glxgears is trying to load that lib...  Taking a quick glance at their
demo package (http://www.xig.com/Pages/Summit/Demos/DX-GoldLinux.html),
it seems that libXda.so is being pulled in by their implementation of
libGL.so.

Maybe you could check which libGL.so you have installed, and run ldd
on the .so file to verify whether it in fact depends on libXda.so (it
shouldn't, AFAICT).  Apparently, something went wrong during the Xorg
upgrade -- like some old libs not having been replaced properly...
(BTW, have you ever had XiG Accelerated-X/Summit installed, or have
you tried to install the demo version some time in the past?)

In case of doubt, I would reinstall the entire GL/Mesa stuff (i.e. the
specific implementations of the virtual packages libgl1, libglu1, etc.
that you were using while things still worked). 

Good luck,
Almut

P.S. Not exactly sure in what way the radeon X driver is involved
in the whole game: it claims to support 3D acceleration, but what
exactly does that mean with respect to OpenGL, GLX, DRI, etc.??
Dunno, sorry.  Wiser heads than mine will have to help you here... :)





Re: LD_LIBRARY_PATH under Linux

2006-04-12 Thread Almut Behrens
On Tue, Apr 11, 2006 at 11:23:00PM -0400, T wrote:
 On Tue, 11 Apr 2006 21:17:44 -0400, Roberto C. Sanchez wrote:
 
  We used the environment var LD_LIBRARY_PATH to give preference/order of
  the libraries that we use. Does this still applied to Linux?
  
  I tried to do it under Linux but didn't success. Here is what I tried:
  
  [...]
  
  It should work.  The way to be sure which library is being loaded is to
  export the LD_LIBRARY_PATH variable the way you want it and then run `ldd
  /opt/old/usr/bin/transcode` to see where it finds the shared libraries.
 
 thanks. Roberto for the reply. I am now confirmed that it is not the
 LD_LIBRARY_PATH's problem. Maybe transcode is looking for its libs in a
 fixed location or something. 
 

 Is there any way to specify transcode command line parameter when probing
 with ldd? --

wouldn't make much sense, as the program isn't really being run...


 which lib for transcode to load depends on its parameter...

now this very much sounds like the program is using dlopen(3) to load
the library at runtime, after having determined which one to load.

In this case, if the authors have decided to pass an absolute path to
dlopen, you're essentially out of luck (you could still try to patch
the binary, or recompile -- but well...).  To further inspect what's
going on, you might want to try ltrace(1) or strace(1) -- most likely
won't help much, though, to actually _control_ which lib gets loaded...


BTW, for the sake of completeness: LD_LIBRARY_PATH will have no effect,
if the executable has been linked using the rpath option, because any
RPATHs will be searched before the ones specified in LD_LIBRARY_PATH
(don't really think this is your problem, but just in case...).

If unsure, use readelf -d <binary> | grep RPATH (or objdump -x <binary>
| grep RPATH) to check whether any RPATHs have been compiled into the
binary  (both programs belong to the binutils package).

To override RPATH settings, you should usually be able to explicitly
preload the library (or libraries) in question using LD_PRELOAD (see man ld.so
for the nitty-gritty details).  In this case, any symbols provided by
the preloaded library will serve to satisfy symbol requests directly,
thus preventing that same library from being loaded from some other
undesired location.


Good luck,
Almut





Re: Stupid shell script question about read

2006-03-02 Thread Almut Behrens
On Thu, Mar 02, 2006 at 09:19:02AM -0800, David Kirchner wrote:
 On 3/2/06, Kevin B. McCarty [EMAIL PROTECTED] wrote:
  Hi list,
 
  Could someone tell me why the following works in zsh but not in
  bash/posh/dash?
 
  benjo[3]:~% echo foo bar baz | read a b c
  benjo[4]:~% echo $a $b $c
  foo bar baz
 
  If I try the same with bash (or other sh-compatible shells), the
  variables $a $b and $c are unset.  From the bash man page:
  ...
  So read claims to read from the standard input, but it doesn't
  actually seem to happen when a pipe is involved.
 
 What's happening here is that the pipe is causing a subshell to be
 spawned, which is then parsing the command read a b c.

 
 http://linuxgazette.net/issue57/tag/1.html
 
 The example he gives, with the < <( ) (process substitution) syntax, worked in bash, but not
 in Debian or FreeBSD's /bin/sh.

In more recent bashes, the following should work as well

#!/bin/bash
read a b c <<< `echo foo bar baz`
echo $a $b $c

The <<< (here strings) are an extension of the here document syntax,
IOW, the string given after <<< is supplied as stdin to the command.

Then, there's another variant, which is about as ugly as it can get...
It should, however, work with most bourne shell compatible shells:

#!/bin/sh
eval `echo foo bar baz | (read a b c; echo "a='$a';b='$b';c='$c'" )`
echo $a $b $c

To get the variable's values from the subshell back to the main shell,
a shell code fragment is written on stdout, captured with backticks,
and then eval'ed in the main shell...  (this is the moment when I
usually switch to some other scripting language -- if not before :)

Cheers,
Almut





Re: xterm and mc shortcuts

2006-01-25 Thread Almut Behrens
On Wed, Jan 25, 2006 at 09:21:43AM +0200, Andras Lorincz wrote:
 
 so /usr/share/terminfo/x/xterm is used in both cases. Regarding the stty -a
 output they are also the same. When pressing Ctrl-V the behaviour is the
 same: if I press once then nothing and if I press twice then ^V appears. If
 I press once Ctrl-V followed by Alt-c the same character appears.

What you _should_ get (for Meta-hotkeys to work in mc), is ^[c when you
press Ctrl-v Alt-c, for example.  Note that ^[ stands for ESC, i.e. the
terminal needs to generate the two-char sequence ESC c  (prefixing
Ctrl-v is just to have the next one key or escape sequence be printed
in human readable representation...).

Not sure I'm understanding you correctly, but I suppose you're seeing
nothing but the char c, but only when you type Ctrl-v Alt-c _as root_. 
As your regular user, I'd expect you to see ^[c.  Otherwise, mc would
most likely not work in this case either. [1]

Anyway, next thing to try is to explicitly set the xterm X resource
metaSendsEscape, which is supposed to make xterm always issue the
above mentioned escape sequences (when the Meta modifier is held down)
instead of generating 8-bit characters (see xterm's manpage for details
and related info).  IOW, start xterm with the following command

# xterm -xrm 'XTerm*metaSendsEscape: true'

and see if it works then...  If it does, you're lucky [2].  If not, the
following information would help to narrow down on the problem:

* output of xmodmap
  (also, do you have any private ~/.xmodmaprc (or similar) ?) [3]

* output of xev, when hitting Alt key
  (in case Alt should generate keysym Alt_L, is there any other
  shift/modifier key which generates Meta_L?)

* xrdb -query | grep -i xterm
  (might be empty...)

* content of section InputDevice (for keyboard)
  in your XF86Config or xorg.conf

* locale setting / utf-8 enabled?

and of course

* xterm -version

* version of X (server and client libs)

* debian flavor

* anything else unusual... plus what I forgot to think of :)

Ideally, determine all of these both as root and as regular user, to be
able to tell apart a working from the malfunctioning setup  (Well, the
debian flavor most likely won't differ as root, but you get the idea..)

And, if you want to earn extra bonus points for being supportive in
solving the issue ;)  build xterm from source with debugging enabled,
i.e. use configure --enable-trace.


 
 I must mention that this behaviour is happening in mlterm too but not in
 konsole.

Different terminal, different code... can't say much more here, as I'm
not using those on a regular basis.  Seems mlterm's approach to the
Meta key issue is similar to that of xterm.  My personal favorite (in
particular for mc) is rxvt, or some derivative thereof.  Typically, it
just works, and if it once really should not, you simply tell it via a
commandline option what your Meta key is (plain and simple... less
magic that could go wrong).

Almut


[1] BTW, as a workaround, you can always spell out ESC c to emulate
Alt-c (i.e. hit ESC and c, one after the other)...  admittedly somewhat
more cumbersome to type, though -- except for those longterm vi users,
with their eleventh ESC-key finger :)

[2] what's left to be done then is to make that setting a more
permanent constituent of the effective X resources.  In case you're
not sure how to go about doing that, report back here -- or better yet,
search the list archives; this question comes up from time to time
(good keywords would probably be Xdefaults, Xresources, or some such).

[3] you could also try to play around with commands such as

xmodmap -e 'keycode 64 = Meta_L' -e 'clear mod1' -e 'add mod1 = Meta_L'

(but be prepared to restart your X session, if you've messed up things
entirely, and can't work out any longer how to restore your original
key mappings...)





Re: urgent newbie question : ssh : problem with variable inside ssh session

2006-01-25 Thread Almut Behrens
On Wed, Jan 25, 2006 at 11:49:39AM +0100, Stephane Durieux wrote:
 Hello 
  
  I have a problem in a ssh script: 
  ssh host << EOF
  for o in /directory/*
  cp -pr /directory/* /other_location
  
  the problem is that variable o isn't created, in fact. It seems that it 
 is created on the remote machine (normal) but cannot be printed on the 
 local one.
  what is the reason?
  Can someone give me in depth  explanation  
  I have tried --tt option whithout any result 

(Well, maybe somewhat late reply for an urgent question, but anyway...)

I suppose the idea is to supply some commands to be run remotely, via
here document syntax.

The key point here is that you need a shell, not a tty, to execute code
like you're trying to use (for-loop, * globbing, ...). -tt would
merely give you a (pseudo) tty.

Something like the following should work (of course, substitute more
sensible code to run...)

#!/bin/sh

ssh host /bin/sh << 'EOF'
for file in /some/remote/directory/*.jpg ; do
  echo "Found picture: $file"
done
EOF

Apart from that, I can only second what David said.

Cheers,
Almut





Re: xterm and mc shortcuts

2006-01-24 Thread Almut Behrens
On Tue, Jan 24, 2006 at 10:06:25AM +0200, Andras Lorincz wrote:
 Hi,
 
 I'm using xterm but there is a thing that annoys me: when I'm using xterm as
 a normal user there is no problem using the shortcuts with mc (Alt+C or
 Alt+S an so on), but as I do a su in the same vt and try to use the
 shortcuts with mc, there appear some carachters. So how could I make the
 shortcuts work when I am root? Thanks.

There are a number of reasons this can go wrong.  Most likely, though,
it's a different TERM setting (echo $TERM to check).  This tells mc
what escape codes to expect from the terminal (emulator) -- which is
defined in some terminal capabilities database, typically terminfo
these days (the ancient termcap would be another option).

You can check what entry is actually being read by mc, using strace:

  strace -efile -o /tmp/mc.trace mc
  (...quit mc)
  grep term /tmp/mc.trace

For example, I get something like

open("/home/ab/.terminfo/x/xterm", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
open("/usr/share/terminfo/x/xterm", O_RDONLY|O_LARGEFILE) = 3

which tells me that /usr/share/terminfo/x/xterm is being used.
Do this as your regular user and as root and compare what you get...

Next thing to compare would be stty settings (stty -a).

Unfortunately, there's more...  but maybe this helps to solve it already.

What do you get when you type Ctrl-v, immediately followed by the
shortcut in question?  (do this outside of mc, or in its subshell)

Cheers,
Almut





Re: Appletalk Starting and Failing during Sarge Boot

2006-01-24 Thread Almut Behrens
On Tue, Jan 24, 2006 at 05:21:57AM -0600, Mike McCarty wrote:
 Chinook wrote:
 
 [uninstall netatalk, possibly task-howl, howl-tools, and mdnsresponder]
 
 
 Thanks very much for the advice. I'll go to her machine and
 try to figure out how to uninstall.

or better even, let her do it herself...  You never know, maybe she'd
find out she enjoys that kinda thing?  But then again, I certainly
wouldn't want to get you to mess with the established roles in your
relationship -- it's just that sometimes, we gals like to take care
of our own stuff... :)

Cheers,
Almut





Re: Appletalk Starting and Failing during Sarge Boot

2006-01-24 Thread Almut Behrens
On Tue, Jan 24, 2006 at 01:15:11PM -0600, Mike McCarty wrote:
 Almut Behrens wrote:
 On Tue, Jan 24, 2006 at 05:21:57AM -0600, Mike McCarty wrote:
 Thanks very much for the advice. I'll go to her machine and
 try to figure out how to uninstall.
 
 or better even, let her do it herself...  You never know, maybe she'd
 find out she enjoys that kinda thing?  But then again, I certainly
 wouldn't want to get you to mess with the established roles in your
 relationship -- it's just that sometimes, we gals like to take care
 of our own stuff... :)
 
 I'm very aware of that. The way we met was she ran a BBS back
 in the days before the internet (1995). I was one of her users. So,
 yes, she has a little technical streak. So I'm not showing
 some kind of predisposition to run her computer for her.
 
 (...)

Thanks Mike, for your considerate response.  I have to admit I'm a
little surprised (positively :), and I guess I should add a word of
apology, if you feel I extrapolated a bit too far beyond your original
statement...  As it looks, everything is at its best, with respect to
the assignment of roles in your relationship :)  So..., I'm sorry!

And thanks for sharing.  It's always nice to hear there are other
technically interested girls around -- and men who don't have a problem
with this.  Me is impressed once more by a debian user ;)

Have a nice day,
Almut





Re: Sharing Linux printer with Mac

2006-01-22 Thread Almut Behrens
On Sun, Jan 22, 2006 at 02:49:17AM -0500, Chinook wrote:
 The issue is something that I wondered about (and commented on) early 
 on.  On my Mac I see the printer [EMAIL PROTECTED] but my Mac can't 
 actually print to it because my Mac (via my router) can't resolve the 
 hostname debian1. 
 
 When I first put up Debian on the P4, I noticed that where my Mac sets a 
 hostname with its local address on my Belkin router, the Linux box 
 leaves the hostname blank on my Belkin router.  My router does have the 
 Linux box represented as an address (192.168.2.48), but no hostname.
 
 When the printer was attached to my Mac I could print to it from my 
 Linux box it because the Mac hostname (pmacg5) could be resolved by the 
 router, but now that the printer is attached to my Linux box (and 
 working there) I can't print to it from my Mac because it can't resolve 
 the Linux box hostname (debian1)  :-   I had a strong feeling this 
 hostname issue would come back to bite me %-\
 
 So, whats to do?  Preferably I'd really appreciate some help in getting 
 Debian to post its hostname on the router like my Mac does.  If that 
 can't be done at the moment, then I would appreciate some help in 
 getting my Mac to resolve the Linux box hostname to an address 
 (192.168.2.48). 

If name resolution is the problem, then you might try manually adding
the address of the debian box to the Mac's local name lookup (which is
typically setup to be tried before DNS).  On Unix that would generally
be in /etc/hosts.
I haven't got the foggiest how to admin Macs, but the first hit when
googling for /etc/hosts equivalent mac turned up the following
instructions (http://forums.macnn.com/archive/index.php/t-121363.html):

  Just edit /etc/hosts using an admin user.
  This probably won't help though as your name resolution is done through 
netinfo.
  You need to add the hosts file to you netinfo db.
  If your hosts file is in the format
  xxx.xxx.xxx.xxx some_machine_name
  Then you can login as root (not admin but root)
  $ niload hosts / < your_hosts_file
  You have now added your hosts file to the netinfo db.
  you can use things like nicl to look around it.
  You can take a backup by copying /var/db/netinfo/*.nidb to another dir.

Sounds like a good place to start :)
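For reference, the /etc/hosts line itself would look something like this (using the address and hostname from your posts, and assuming the address stays static):

```
192.168.2.48    debian1
```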

If that doesn't work for some reason (or if your debian box' local
address isn't static, so you'd have to fiddle with this every time
anew), you could try to figure out how to have debian register its
hostname with the router.  Not sure about the belkin, but most routers
have some web interface and/or provide telnet access to configure such
things. If so, it shouldn't be too hard to automate those steps in some
script, that you could then run during init, or something like that...

Alternatively (and preferably), try to figure out how the Mac is going
about registering its name with the router (maybe there's some other
protocol beyond web or telnet), and then do it the same way on the
debian side.  I'd start digging through the Mac's init scripts... but I
apologize in advance, in case that should put you on the wrong track
entirely ;)  If you're lucky, it's even documented in the router's
manual, or somewhere on the belkin website.

Good luck,
Almut





Re: Building Linux 2.6.16-rc1-git4

2006-01-22 Thread Almut Behrens
On Sun, Jan 22, 2006 at 01:50:37AM +0200, Linas Zvirblis wrote:
 I cannot get 2.6.16-rc1[-git4] to build, so am seeking advice here.
 
 make vmlinux gives me this:
 
 /bin/sh: -c: line 0: syntax error near unexpected token `('
 /bin/sh: -c: line 0: `set -e; echo '  CHK include/linux/version.h'; 
 mkdir -p include/linux/;if [ `echo -n 2.6.16-rc1-git4 .file 
 null .ident GCC:(GNU)4.0.320060115(prerelease)(Debian4.0.2-7) .section 
 .note.GNU-stack,,@progbits | wc -c ` -gt 64 ]; then echo 
 '2.6.16-rc1-git4 .file null .ident 
 GCC:(GNU)4.0.320060115(prerelease)(Debian4.0.2-7) .section 
 .note.GNU-stack,,@progbits exceeds 64 characters' >&2; exit 1; fi; 
 (echo \#define UTS_RELEASE \2.6.16-rc1-git4 .file null .ident 
 GCC:(GNU)4.0.320060115(prerelease)(Debian4.0.2-7) .section 
 .note.GNU-stack,,@progbits\; echo \#define LINUX_VERSION_CODE `expr 2 
 \\* 65536 + 6 \\* 256 + 16`; echo '#define KERNEL_VERSION(a,b,c) (((a) 
 << 16) + ((b) << 8) + (c))'; ) < /usr/src/linux/Makefile > 
 include/linux/version.h.tmp; if [ -r include/linux/version.h ] && cmp -s 
 include/linux/version.h include/linux/version.h.tmp; then rm -f 
 include/linux/version.h.tmp; else echo '  UPD 
 include/linux/version.h'; mv -f include/linux/version.h.tmp 
 include/linux/version.h; fi'
 make: *** [include/linux/version.h] Error 2

Seems other people have had similar problems.  Apparently something
with /dev/null accidentally being removed and then replaced with a
regular file containing trash...  See this thread:
http://marc.theaimsgroup.com/?l=linux-kernelm=113765329515437w=2

Almut





Re: System administration question

2006-01-22 Thread Almut Behrens
On Sun, Jan 22, 2006 at 02:33:47PM +0100, Mitja Podreka wrote:
 hello
 
 As I'm very new in system administrating and not an old Debian user 
 either, I would like to ask you for some suggestions about system 
 administration.
 I have six identical networked computers running Debian. All the 
 computers are new and powerful enough to run all the necessary applications.
 Question: how to administer the computers centrally (install new 
 software, change settings,...)? What programs or technologies should I use?
 A simple explanation or link will be more than enough.

cfengine would probably be a good choice. Its learning curve might be
a little steep, though, depending on how new you are to sysadmining.
Anyway, you might want to start by taking a look at the docs linked
from http://www.cfengine.org/ - in particular the debian tutorial.

Cheers,
Almut





Re: gs-esp, gs-gpl or gs-afpl?

2006-01-22 Thread Almut Behrens
On Sun, Jan 22, 2006 at 12:38:27PM -0500, H.S. wrote:
 
 While reading another post here, I noticed gs-afpl was giving smaller
 PDF files of these gs-* options to a user.
 
 Which of the three is 'better'? Or, which of the three makes 'best' PS
 or PDF files? Any recommendations?

Unfortunately it's hard to say which flavor is best.  Depending on
what exactly you need to do, or which feature is involved, there will
be subtle differences and occasionally even bugs in certain versions. 
It's not only AFPL vs. GPL or ESP, the specific version matters just as
much. I know this isn't much help, but it's probably best to just try...

I've been using gs for many years now, and due to subtle issues,
which often vary from version to version, I've gotten into the habit
of always keeping several versions around.  If something fails to work, 
sometimes even an older version does help...
This is not to belittle ghostscript's usefulness or anything.  Quite
the contrary, it's an excellent piece of work, and I'm very grateful
it exists. It's just the complexity of the subject matter at hand...

Almut





Re: permissions - is this the best approach?

2006-01-19 Thread Almut Behrens
On Thu, Jan 19, 2006 at 12:45:55PM -0500, Chinook wrote:
 Johannes Wiedersich wrote:
 On Thu, 2006-01-19 at 01:38 -0500, Chinook wrote:
 The idea is to allow various users on the Linux box the ability to 
 create and delete their own files in /home/lanshare/public and to 
 read/copy any files therein. The Mac will create and delete files 
 therein as the user lanshare.
 
 Dexter wrote:
  In principle, it's correct. Write permission and sticky bit on the folder
  mean that everybody can create files in the directory, but only the owner
  of the file can delete it.
 
 IIRC, things may become messy, when users start to *copy* files to 
 /home/lanshare/public. Then the sticky bit is not preserved; it works 
 only for files *created* in that directory. It should be noted 
 somewhere in the info pages.
 
 I appreciate the heads-up about the copy issue.  It would certainly 
 come up and I was not aware of it.

Hope you don't mind me getting nitpicky here, but I'm not really sure
what that copy vs. create issue might be, and how it would relate to
the effect of the directory's sticky bit (i.e. everyone can create, but
only owner can delete)...

AFAIK, all that matters here is that the sticky bit is set on the
_directory_, and that the directory is writable.  The files therein
don't need any special bits set.  IOW, there's no need to preserve
anything.  (Prominent examples are /tmp, /var/tmp and /var/lock)

Also, I wouldn't know how copy could get around creating the files
in the first place (at their destination).  At the system call level
it all boils down to the same steps anyway, which essentially are

  open (with appropriate flags set)
  write
  close

independently of whether cp, tar, or any other tool is being used.

But maybe I didn't quite understand what Johannes meant.  Any example?

Maybe there's some confusion with the set-GID bit on directories...
which is often used in similar shared-file-access contexts, or even in
combination with the sticky bit.

Having said that, a few more comments on your original question :)

As I understand your requirements, everyone (including the user
'lanshare', representing the Mac side) is supposed to be able to read
all others' files, but only write/delete their own. The latter is being
taken care of by the sticky bit.  Whether files can be read by others,
though, needs to be set up by using appropriate ownerships and/or
permissions.  Here you have several options.

One would be to have all files be created world-readable.

Another would be to allow access via some common group (lanshare),
which means that files would need to be created with that group
ownership, and group-readability, of course.  As you probably know,
the default group ownership is determined by the primary group of the
creator.  Not sure how exactly you've set things up, but I suppose
you've made the 'lanshare' group just a supplementary group of the
users. In that case, the set-GID bit on directories might come in
handy, as it sets group ownership to the ownership of the directory
that the files are being created in  (not telling you anything new, I
guess...).
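A quick sketch of the set-GID-directory trick (using the current user's primary group as a stand-in for 'lanshare'):

```shell
# Mark a directory set-GID so that files created inside it inherit the
# directory's group instead of the creator's primary group.
dir=$(mktemp -d)
chgrp "$(id -gn)" "$dir"   # stand-in; in practice: chgrp lanshare "$dir"
chmod g+s "$dir"

ls -ld "$dir"         # 's' in the group-execute position marks set-GID
stat -c '%a' "$dir"   # octal mode gains a leading 2, e.g. 2700

rm -r "$dir"
```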

A third option would be to make sure that every user is a member of
all possible groups that files happen to belong to (note however, that
under certain circumstances there's a limit to the maximum number of
groups, e.g. 16 groups with NFS v3).

As to the permissions, setting the appropriate umask would be an
important prerequisite.  However, it doesn't _enforce_ that files in
fact do end up with the required permissions (it's only a mask after
all).  IOW, if someone copies a file that's not world- or group-
readable, it'll keep those insufficient permissions - they won't
automagically be corrected... (upon rethinking, maybe that's what
Johannes was referring to?)  Unfortunately, there's no such thing as
a set-permission bit analogous to the set-GID bit on directories.

So, depending on how failsafe the whole thing is supposed to become,
some special care might need to be taken.  Experience from real-world
scenarios, with real-world people (like you and me :) shows that
it's not a good idea to require them to always do something like
chmod -R g+rX ... after having copied stuff into the public folder.
This _will_ be forgotten, sooner or later...
One possible workaround would be to use some kind of wrapper script to
upload files (and ensure correct permissions), or something like that.
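Such a wrapper could be as simple as the following sketch (the function name and paths are made up for illustration, not taken from your setup):

```shell
# Hypothetical helper: copy files into the shared folder and force
# group-readable permissions, so nobody has to remember to run
# "chmod -R g+rX" by hand afterwards.
publish() {
    dest=$1; shift
    for f in "$@"; do
        cp -- "$f" "$dest/" && chmod g+rX -- "$dest/$(basename -- "$f")"
    done
}

# Demo with temporary directories standing in for /home/lanshare/public:
src=$(mktemp -d); share=$(mktemp -d)
printf 'hello\n' > "$src/note.txt"
chmod 600 "$src/note.txt"        # simulate a too-restrictive source file

publish "$share" "$src/note.txt"
stat -c '%a' "$share/note.txt"   # 640: the group can read the copy now

rm -r "$src" "$share"
```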

Anyway, good luck with your Mac-Linux connectivity.

Almut





Re: Most directories locked read-only: unlocked!

2006-01-19 Thread Almut Behrens
On Thu, Jan 19, 2006 at 10:04:17PM -0500, Ken Heard wrote:
 
   I then was able to amend /etc/fstab for the root directory mount by 
 fixing the typo.  I also removed the errors=remount-ro option. 
 Apparently the Debian installer always adds this option to the fstab 
 mount line for the root directory, and no one on the list could 
 enlighten me as to what purpose it serves by being there,

The idea simply is to prevent greater damage, in case a filesystem
should start to develop minor inconsistencies, due to hardware slowly
beginning to fail, or whatever.  In that case, you'd be glad to have
some mechanism in place that immediately stops further write accesses,
if any such problem is detected -- as mildly broken filesystems tend to
get corrupted exponentially, if you just keep writing to them...

It probably simply hasn't occurred to the developers that, by treating
all types of mount errors the same way, poor admins might get into the
somewhat unfortunate situation of locking themselves out, if they
accidentally put a wrong option into fstab -- as you have witnessed...

I think it would indeed make sense to keep such mere syntax errors from
resulting in a read-only mount.  Maybe you want to file a wishlist bug.

Almut





Re: Most directories locked read-only: how to unlock them?

2006-01-18 Thread Almut Behrens
On Wed, Jan 18, 2006 at 05:56:31PM -0500, Ken Heard wrote:
 Almut Behrens wrote:
 
 Copy /etc/fstab to /tmp/fstab and fix your drfaults typo in there.
 
  I copied /etc/fstab to /var/fstab and fixed the typo.
 
 Then patch a temporary copy of /bin/mount and libc.so to use /tmp/fstab
 instead of /etc/fstab.  To do so, create this little script, make it
 executable, and run it as root:
 
   Because I used /var instead of /tmp I modified your script as 
   follows:
 
 #!/bin/bash  #You suggested /bin/sh
 
 FSTAB=/var/fstab # var changed from tmp
 LIBC=/lib/libc.so.6
 
 perl -pe "s|/etc/fstab|$FSTAB|g" $LIBC > /var/libc.so.6   # var changed from tmp
 perl -pe "s|/etc/fstab|$FSTAB|g" /bin/mount > /var/mount  # ditto
 chmod +x /var/mount  # ditto
 export LD_PRELOAD=/var/libc.so.6 # ditto
 /var/mount -n -o remount,rw /

The modifications look okay.
(Just make sure there's nothing after the #!/bin/bash in the first
line -- though I presume you've appended that comment just in this
post here...  BTW, just FYI, /bin/sh and /bin/bash should both work, as
(on linux) /bin/sh is just a link to /bin/bash, i.e. they're the same
program.  The only difference is that if bash is called as sh it
mimics the behavior of a regular bourne shell.  This shouldn't matter
here, though, as there's nothing bash-specific in the script...)

 
   I saved this script as /var/fixfstab, made it executable and -- as 
   root and in /var -- ran ./fixfstab.  The following was returned:
 
 : bad interpreter:  No such file or directory

Typically, you'd get this error, if you create the file on Windows
and then copy it over to linux.  The problem is the different line
ending conventions (\n on Linux, and \r\n on Windows), which is not
always immediately evident -- unless you already know what to look for.
Due to this, there'd be a trailing \r at the end of the interpreter
name, i.e. the system is trying to find a program /bin/bash\r, which
of course doesn't exist...

To check, you could do a less -u /var/fixfstab; if you have the above
problem, you'd see ^M (= \r = carriage return) at the end of the lines.

To fix it, run the following command

  perl -i -pe 's/\r//g' /var/fixfstab

and then try again... (and, if it still doesn't work, report back here).

Almut





Re: Most directories locked read-only: how to unlock them?

2006-01-16 Thread Almut Behrens
On Mon, Jan 16, 2006 at 07:42:38PM -0500, Ken Heard wrote:
   Thanks to Stural Holm Hansen and Steve Kemp for answering my post. 
 Unfortunately my problem is still not solved.
 
   I first tried Steve Kemp's suggestion, because it was the simpler one,
   as it did not require use of a live CDROM.  He warned me that the
 command mount -n -o remount,rw / might not work.  It didn't.  It returned
 
   EXT3-fs: Unrecognized mount option drfaults or missing value
 mount: / not mounted already, or bad option
 
I then ran mount -n -o remount,defaults,rw / and mount -n -o 
 defaults,rw /.  The first command returned the same result as above. 
 The second returned:
 
 mount: /dev/mapper/SOL-root is already mounted or / busy
 mount: according to mtab, /dev/mapper/SOL-root is already 
 mounted on /

If all else fails, you could try the following approach:

Copy /etc/fstab to /tmp/fstab and fix your drfaults typo in there. 
Then patch a temporary copy of /bin/mount and libc.so to use /tmp/fstab
instead of /etc/fstab.  To do so, create this little script, make it
executable, and run it as root:

#!/bin/sh

FSTAB=/tmp/fstab
LIBC=/lib/libc.so.6

perl -pe "s|/etc/fstab|$FSTAB|g" $LIBC > /tmp/libc.so.6
perl -pe "s|/etc/fstab|$FSTAB|g" /bin/mount > /tmp/mount
chmod +x /tmp/mount
export LD_PRELOAD=/tmp/libc.so.6
/tmp/mount -n -o remount,rw /


(I'm assuming you can still write in /tmp -- if not, you could of
course also use some other writable location, but make sure the length
of the string you then use in place of /tmp/fstab is always exactly
10 characters.)
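A quick sanity check for that length constraint (the perl substitution rewrites a string embedded inside a binary, so a replacement of a different length would shift everything after it and corrupt the file):

```shell
# "/etc/fstab" is 10 characters, so any stand-in path must be too.
for p in /etc/fstab /tmp/fstab /var/fstab; do
    printf '%-12s %d\n' "$p" "${#p}"
done
```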

Use at your own risk!

Almut





Re: Scripting again...

2006-01-16 Thread Almut Behrens
On Tue, Jan 17, 2006 at 07:40:48AM +0100, John Smith wrote:
 Hi All,
 
   how do you change (from the command line) with (sed/awk/... 
 anything that's available in the installation environment)
 
  "text:\n someothertext" to "text: someothertext"
 
   The trick is in the newline of course.
 
   I now do with
 
   cat output.txt | tr '\n' '!' | sed -e 's/text:! someothertext/text: 
 someothertext/g' | tr '!' '\n'
 
   But I bet somebody can do better...

Personally, I'd use perl for this kind of thing:

$ perl -p0e 's/text:\n someothertext/text: someothertext/g' infile > outfile

or

$ perl -i -p0e 's/text:\n someothertext/text: someothertext/g' file

to edit in place.

The option -0 makes perl use \0 as the input record delimiter, which is
typically okay for text files.  If you really must match \0 within the
regex (e.g. to mess with binary files) you can use

$ perl -p0777e 's/foo\0bar/baz/g' infile > outfile

to have perl slurp in the whole file as one piece...

(See perl -h and perldoc perlrun for details.)

Cheers,
Almut





Re: Script challenge

2006-01-14 Thread Almut Behrens
On Sat, Jan 14, 2006 at 02:39:17PM +0200, Simo Kauppi wrote:
 On Sat, Jan 14, 2006 at 11:37:42AM +0100, John Smith wrote:
  
  #!/bin/sh
  cat <<EOF > newscriptfile.sh
 [snip]
  It's driving me nuts!!!
 
 Depending on what you want cat to the newscriptfile.sh, you need to
 escape all the $s to prevent the parameter expansions.

alternatively, you can put quotes around the first EOF, like this

#!/bin/sh
cat <<'EOF' > newscriptfile.sh
...
EOF

which keeps things somewhat more readable...
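The difference is easy to see side by side: with an unquoted delimiter the shell expands $variables inside the here-document, with a quoted one it doesn't:

```shell
# Unquoted delimiter: $HOME is expanded inside the here-document.
cat <<EOF
expanded: $HOME
EOF

# Quoted delimiter: everything is taken literally.
cat <<'EOF'
literal: $HOME
EOF
```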

Almut





Re: Bug #269499 apache-ssl CustomLog problems

2006-01-12 Thread Almut Behrens
On Thu, Jan 12, 2006 at 10:38:29AM -0500, jef e wrote:
 apache-ssl bug #269499 apache-ssl: SSL log directives don't work
 
 I'm wondering if anyone has found or is using an easy workaround for 
 this particular bug that doesn't require a recompile/source code change 
  of the package as mentioned in the bug report correspondence.
 
 It seems that the syntax given to get the ciper info, etc is broken. The 
 supplied httpd.conf syntax doesn't work.
 
 CustomLog   /var/log/apache-ssl/ssl.log "%t %{version}c %{cipher}c %{clientcert}c"
 
 Output to the log file only returns output like this:
 [12/Jan/2006:09:34:42 -0500] - - -
 [12/Jan/2006:09:34:42 -0500] + + +
 [12/Jan/2006:09:34:42 -0500] + + +
 
 
 This bug has been outstanding for over a year and was apparently kicked 
 back upstream to apache-ssl. However, their page also references the 
 broken syntax.
 
 Anyone have any ideas or experiences short of rebuilding it?

You might try using mod_ssl-supplied environment variables instead.
The following log directive should give approximately the same info:

  "%t %{SSL_PROTOCOL}x %{SSL_CIPHER}x CERT:%{SSL_CLIENT_CERT}x"

Unfortunately, the certificate gets split across several lines, which
could make parsing a little ugly, e.g.

[12/Jan/2006:18:45:38 +0100] TLSv1 RC4-MD5 -----BEGIN CERTIFICATE-----
CzAJBgNVBAgTAkJXMRIwEAYDVQQHEwlUdWViaW5nZW4xHzAdBgNVBAoTFnNjaWVu
... rest of PEM encoded certificate here ...
6ZcBaCqLrMk=
-----END CERTIFICATE-----

but maybe you don't actually want the full certificate, but rather its
DN or something... (for which there are specific variables).
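For example, a directive logging just the client DN might look like this (a sketch only; check the mod_ssl reference below for the exact variable names available in your version):

```apache
CustomLog /var/log/apache-ssl/ssl.log "%t %{SSL_PROTOCOL}x %{SSL_CIPHER}x %{SSL_CLIENT_S_DN}x"
```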

See here for details:
http://www.modssl.org/docs/2.8/ssl_reference.html#table4
http://www.modssl.org/docs/2.8/ssl_compat.html#ToC2

Cheers,
Almut





Re: Problem with kdm 3.5 not starting

2006-01-10 Thread Almut Behrens
On Tue, Jan 10, 2006 at 01:07:10PM +, David Goodenough wrote:
 I have a sacrificial machine which I keep fully up to date with  unstable.
 
 This morning KDE 3.5 arrived, so I installed it.  It seems to work just fine 
 if I start it with startx.
 
 BUT kdm will not start, and in the file /var/log/kdm.log there is an 
 error saying that on the X command, the -br option is not recognised.
 
 I have tried purging kdm and reinstalling to see if any of its config files
 needed updating, but that that did not cure the problem.
 
 The only odd thing about this machine is that it has a very old S3
 graphics chip, which is not supported by x.org.  So I use the old S3
 XFree86 xserver.  
 
 I have so far been unable to find where X is being invoked with the
 -br option, any pointers would be gratefully accepted.

I think it's configured in /etc/kde3/kdm/kdmrc - something like ServerArgs
or ServerCmd.  That's from memory, though (not running kdm/KDE here).
In case that should be wrong, try grep -r -e -br /etc/kde3 to locate
other likely candidates...

Cheers,
Almut





Re: dialup internet connection + mc = long startup

2006-01-06 Thread Almut Behrens
On Sat, Jan 07, 2006 at 03:28:01AM +0300, Roman Makurin wrote:
 Hi All!
 
 At home I've got a slow dialup internet connection, and when I connect to
 the internet mc startup time becomes very long - about 20 seconds :) When I
 disconnect everything goes fine - startup takes less than a second. I think
 it's a resolver issue but I don't know what to do. Does anyone know what I
 need to do to solve my problem?

Yes, it might be related to a name lookup problem.  Does name resolving
work properly otherwise?
IIRC, mc does call gethostbyname(3) under certain circumstances -- I
think it had to do with samba VFS in particular, but it might also get
called upon regular startup for some reason...

Does it also hang when you simply do mc -V, and when you temporarily
move away ~/.mc/ini?

You might try starting it under ltrace to figure out where it's
spending its time.  Something like

$ ltrace -f -S -r -n 2 -o /tmp/mc.ltrace  mc

The -r produces another output column with relative timings of the
individual calls.  Watch out for those that take considerable time...
Also check whether a call to gethostbyname() is actually being made.

(Note that the screen might get messed up when running mc under ltrace
or strace, and you may no longer be able to terminate it normally.  In that
case try ^C or simply kill it from another terminal...)
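A rough way to spot the slow calls afterwards is to sort by the relative-time column. Here's a demo with a tiny fake trace standing in for /tmp/mc.ltrace (real ltrace output has more columns, so adjust the sort key as needed):

```shell
# Fake trace with relative timings in the first column.
trace=$(mktemp)
cat > "$trace" <<'EOF'
  0.000021 strcpy(...)
 19.734502 gethostbyname(...)
  0.000340 fopen(...)
EOF

sort -rn "$trace" | head -1   # the gethostbyname call dominates

rm -f "$trace"
```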

Other info that might (or might not :) help is:

* the contents of
  /etc/nsswitch.conf
  /etc/resolv.conf  (are the specified nameservers working?)
  /etc/hosts

* are you using samba?

* the usual stuff, like
  version of mc (or better, full output of mc -V)
  debian flavor, kernel version

Good luck,
Almut





Re: MTA slow to start

2006-01-05 Thread Almut Behrens
On Thu, Jan 05, 2006 at 01:13:21AM -0500, Tyler Smith wrote:
 
 I'm still not clear on the purpose of exim4. I understand that the 
 system will on occasion send me a message, but in several months on 
 Sarge this has never happened. I get all my mail via a pop account - 
 does exim4 know my address? I've never configured anything that I'm 
 aware of that would do this. Or does mail get sent somewhere else? Or 
 maybe mail is sent so rarely that I just haven't had any yet...

You're right, depending on what you're using the box for, you don't
absolutely need an MTA.  I myself have a special purpose light-weight
machine setup without an MTA, and it's been running happily for more
than two years (first woody, now sarge).  Occasionally, something might
try to send you a mail (e.g. some cron job), so this will produce an
error message.  But so what?  It doesn't break anything fundamentally.
If you're not expecting the system to send you messages, why bother?
Typically, you can also configure programs to not send mail.
For most purposes, log files are all you really need to keep an eye on
how the system is doing...

You can even read/send your personal mail from such a machine, if
you're using an external account (like a freemail provider or some such).
Many MUAs have built-in POP/IMAP and/or SMTP functionality, and are
thus capable of handling this kind of mail transfer all on their own.
(though this isn't the classic unix way of doing things -- but that's
another topic...)

Cheers,
Almut





Re: Boot in console not X [Was]Re: X server problem

2006-01-05 Thread Almut Behrens
On Thu, Jan 05, 2006 at 01:35:40PM +, David Pead wrote:
 
 Now however, in my fumbling around trying to fix the Xserver I've messed up
 the keyboard mapping. I need to get back to the command line to reconfigure
 but can't use the usual alt-F1.

normally, from within X, that would be ctrl-alt-F1 (once you're in the
virtual console, both ctrl-alt-F7 and alt-F7 work to get you back...)

 How can I boot and not start the X server?
 Can I hold down a key when Linux starts? I use bootX to boot, can I pass an
 argument of somesort to go straight to the console?

I'm not familiar with the bootup process on Macs, so others will have
to chime in here.
Assuming your keyboard setup isn't messed up entirely, so you can still
open up an xterm in X, you could always switch runlevels (single user
mode - init(8)), or try to terminate the X server (ctrl-alt-Bksp), or
kill it explicitly, or kill/shutdown the display manager (xdm, kdm,
gdm), in case you're using one (to avoid automatic restarting of X).

OTOH, depending on which admin task you need to perform, you might also
be able to do it right from within X (from a root shell).

Almut





Re: cdrecord as user

2006-01-04 Thread Almut Behrens
On Wed, Jan 04, 2006 at 10:43:29AM +0100, Ernst-Magne Vindal wrote:
 Hi, I need help getting cdrecord running as a normal user.
 
 I have installed it as suid root but get access denied trying to run it as
 normal user. The user is in the group cdrom
 
 cdrom dev is: brw-rw----  1 root cdrom 11, 0 2005-02-26 07:43 /dev/scd0
 
 ls -la /usr/bin/cdrecord
 -rwsr-xr--  1 root cdrom 133 2005-01-09 17:55 /usr/bin/cdrecord

On debian, /usr/bin/cdrecord is just a wrapper script that executes the
actual binary /usr/bin/cdrecord.mmap, so make sure that's suid root. [1]
However, this should only be an issue if you've set permissions
manually... running dpkg-reconfigure cdrecord to set suid mode should
in fact have done it properly.

(Also - just in case you've added the user to the group cdrom
immediately before - make sure you've logged in again (in the shell
from which you issue the command), for the group membership change to
take effect.)
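A quick way to check what the current session actually sees:

```shell
# Groups visible to the *current* login session; a freshly added
# supplementary group only appears here after logging in again.
id -nG

# Check for one specific group (prints a note if it's absent):
id -nG | tr ' ' '\n' | grep -x cdrom || echo "cdrom not active in this session"
```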

If that doesn't help, what's the exact error message you're getting?
(simply cut-n-pasting the command together with the error message is
usually a good idea, 'cos minor details that seem irrelevant to you
might actually provide an important hint to someone else...)

Cheers,
Almut

[1] though basically useless, it's still a good idea to keep the script
itself suid root, too, as some frontends check the permissions of
the file /usr/bin/cdrecord





Re: Sarge-built binaries running on Woody systems?

2006-01-01 Thread Almut Behrens
On Sat, Dec 31, 2005 at 08:01:37PM -0600, Matt England wrote:
 I'm still looking for any guidance on this topic.
 
 In summary:
 
 Can one run Sarge-built binaries on Woody?

As I tried to explain at length in my previous post: in general no,
except with massive tweaking (either at build-time, or at run-time).

 Can one run Woody-built binaries on Sarge?

Somewhat less problematic, but still no guarantee...

 
 In the same context, how well would Debian 2.x-built binaries work on 3.x 
 and vice versa?

Same thing... for exactly the same reasons.

 
 For what it's worth, I soon have to establish some Debian 
 binary-to-test-system rev-control policies before I got into first round 
 test on my group's software.

Essentially, static linking is the only easy way that would get you
reasonably far with what you seem to have in mind.  But then again,
this cannot really be recommended as a general measure against the need
to rebuild (when things have diverged too far) -- after all, there
_are_ reasons that dynamic linking had been introduced many years ago...

Almut


PS: a few meta comments. It might help to bring about more useful answers:

* if you followed up by replying to what people have said so far,
instead of merely restating the original question.

* if you defined what exactly you mean by sarge-built, etc.
A sarge-built binary could be anything from the very binaries as they
come with the stock debian packages, to what might pop out of a highly
tweaked build process, which simply happens to be performed on a sarge
machine...  Depending on what you mean, answers would vary widely.

* if you elaborated somewhat on the motivation behind your question,
i.e. which problem are you trying to solve? any specific application
you have in mind? (if so, what type of), and so on.





Re: php4.so, undefined symbol: RAND_load_file (Apache 1.3)

2005-12-31 Thread Almut Behrens
On Fri, Dec 30, 2005 at 07:12:20PM -0500, Julien Lamarche wrote:
 
  For MapTools' FGS base with Chamelon, I used its own install script.

Did you run the install script as root, or as your normal user?
Which destination directory did you specify when asked?

I haven't really used FGS myself so far, but if I'm understanding
things correctly, the underlying installation concept is to provide a
more or less self-contained setup with its own apache and all required
libraries, including its own version of libssl, and a busload of other
private stuff.

 (...)
 
 After it still doesn't work.
 -
 picard:/home/jlam# apachectl configtest
 Syntax error on line 245 of /etc/apache/httpd.conf:
 Cannot load /usr/lib/apache/1.3/libphp4.so into
 server: /usr/lib/apache/1.3/libphp4.so: undefined symbol: RAND_load_file

Generally, the RAND_load_file symbol that libphp4.so is missing is
supposed to be provided by libcrypto, as you can verify by doing

$ nm -D /lib/libcrypto.so.0.9.7 | grep RAND_load
00087390 T RAND_load_file

(also see http://www.openssl.org/docs/crypto/RAND_load_file.html)

As you probably figured from this URL, libcrypto is part of the OpenSSL
package, and on debian, libssl is dynamically linked against libcrypto:

$ ldd /lib/libssl.so.0.9.7
libcrypto.so.0.9.7 => /lib/libcrypto.so.0.9.7 (0x40032000)
libdl.so.2 => /lib/libdl.so.2 (0x40138000)
libc.so.6 => /lib/libc.so.6 (0x4013b000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x8000)

Yet, for some reason I can't tell, the libssl shipped with FGS does _not_
depend on libcrypto.so.0.9.7

$ ldd /tmp/fgs/lib/libssl.so.0.9.7
libdl.so.2 => /lib/libdl.so.2 (0x400ca000)
libc.so.6 => /lib/libc.so.6 (0x400cd000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x8000)

Debian's libphp4.so does not explicitly load libcrypto, but instead is
relying on libssl to pull it in as a secondary dependency...
The net effect of all this would be that the symbol would simply not be
made available, in case FGS's libssl should be used.

IOW, I suppose what's happened is that the install script (or you :)
somehow messed up things, so the wrong libssl is being used now.
I can't tell the exact reason, but maybe we'll find out...

One possible issue could be that the LD_LIBRARY_PATH setting (which
the FGS installation needs to run) somehow still is in effect.  In the
setenv.sh script that comes with the FGS package, you can see that it
prepends its own lib path
...
export LD_LIBRARY_PATH=$FGS_HOME/lib:$LD_LIBRARY_PATH
...

AFAICT, the installation instructions suggest to source this setenv.sh
in .bashrc.
Have you done that, maybe?  If so, in which .bashrc - yours, or root's?
Anything else along these lines you might have done (intentionally or
inadvertently), but then have forgotten about?


Quoi qu'il en soit... salut, bonne chance, et bonne année!

Almut
(aka 'madame ldd' ;)





Re: Sarge-built binaries running on Woody systems?

2005-12-29 Thread Almut Behrens
On Thu, Dec 29, 2005 at 01:45:55PM -0600, Matt England wrote:
 Sarge-built binaries running on Woody systems:
 
 Is this feasible?
 
 I'm not talking about package management...just the raw, binary.
 
 Are dynamic-library-management tricks needed?  Does the Debian testing 
 authority (or whoever is given responsibility of anointing Debian releases 
 for distribution) make any attempt at backwards compatibility for this kind 
 of stuff?
 
 As per similar motivation for my previous redhat-on-Debian binary porting 
 conversation:  I'm hoping that one Debian build will work on many Debian 
 systems.
 
 Can I at least count on a Woody-built binary working ok on a Sarge-based 
 system?  In this context, how far back can I go to get forward 
 compatibility?  (ie, how many revs before Sarge can I go back to build on 
 and still get Sarge compatibility?)
 
 If there are reasons why the answer is depends instead of a flat yes or 
 no: I would love to know these reasons.  This is what I'm specifically 
 hunting for.

The short answer is somewhere in between no and depends...
All in all, running binaries from a significantly newer system on an
older one is something you really want to avoid, unless you have very
good reasons for doing so.  Any approach to work around the various
problems would be rather cumbersome and ugly.  This really cannot be
recommended under almost any circumstances, at least not if the goal is to
have a single binary that'll just run out of the box, on any system.
IOW, the ultra short answer is: forget about any such endeavour :)

Of course, it depends on what you're trying to achieve. If you want to
run a sarge program as it is, without any tweaking, on a woody system,
the answer is a clear no for any dynamically linked binary.  Beyond
that, there's no generic answer that would apply to every program.

For example, let's take the rather basic ls program.  If you try to
run a sarge-built binary on a woody system you get

$ /sarge/bin/ls
/sarge/bin/ls: error while loading shared libraries: libacl.so.1: cannot open 
shared object file: No such file or directory

As you can easily see, it's missing some library.  More importantly, if
you do an ldd

$ ldd /sarge/bin/ls 
/sarge/bin/ls: /lib/libc.so.6: version `GLIBC_2.3' not found (required by 
/sarge/bin/ls)
librt.so.1 => /lib/librt.so.1 (0x4001a000)
libacl.so.1 => not found
libc.so.6 => /lib/libc.so.6 (0x4002c000)
libpthread.so.0 => /lib/libpthread.so.0 (0x40149000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x4000)

you also see that, in addition to the missing lib, there's a version
problem with /lib/libc.so.6, because the libc from woody is too old.
Now, ls isn't fancy at all.  It's not too hard to imagine what you'd
get if you were to run a more complex application.

If you really need to do something like that, you essentially have the
following options:

(1) distribute/install all required newer libs on woody, and

  (a) run the program via chroot, or

  (b) fiddle with the lib paths (using LD_LIBRARY_PATH, LD_PRELOAD, etc.)

(2) recompile the program and

  (a) link everything statically, or

  (b) link dynamically against the new libraries in some special
  location - including libc.so and ld.so (!) - and ship them
  together with the application

The latter options, of course, presume that you in fact do have the
choice to recompile/relink the application -- as for example, when you
desperately want to run a new application on an outdated system (that
you can't upgrade for some reason), but otherwise don't care about
being able to run that same binary on a new system.

None of these approaches (except maybe static linking, to some degree)
achieves the goal of a single binary that'll just run everywhere.

Also, fiddling with the lib paths would soon get ugly, in particular
because the path to the dynamic linker/loader (/lib/ld-linux.so.2) is
hardcoded in every binary.  While you could still run a simple binary
directly via

$ LD_LIBRARY_PATH=/sarge/lib /sarge/lib/ld-linux.so.2 /sarge/bin/ls
or
$ /sarge/lib/ld-linux.so.2 --library-path /sarge/lib /sarge/bin/ls

things would get rather ugly if you were to try running some program
which itself is executing some other binary.  For example, assume you
wanted to check dynamic library binding, using the ldd that belongs to
the new system.  As ldd is a shell script, you might think one of the
following would work

$ LD_LIBRARY_PATH=/sarge/lib /sarge/lib/ld-linux.so.2 /sarge/bin/sh 
/sarge/usr/bin/ldd /sarge/bin/ls
/sarge/bin/ls: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required 
by /sarge/lib/libc.so.6)
/sarge/bin/ls: /lib/ld-linux.so.2: version `GLIBC_PRIVATE' not found (required 
by /sarge/lib/libpthread.so.0)
librt.so.1 => /sarge/lib/librt.so.1 (0x40014000)
libacl.so.1 => /sarge/lib/libacl.so.1 (0x40027000)
libc.so.6 => /sarge/lib/libc.so.6 (0x4002f000)
libpthread.so.0 => /sarge/lib/libpthread.so.0 (0x40162000)
 

Re: searching for a font - not a debian specific topic

2005-12-27 Thread Almut Behrens
On Tue, Dec 27, 2005 at 07:02:02PM +0100, LeVA wrote:
 
 I'm looking for a font, and I hope someone recognise it, or can show me 
 something similar. (...)

HelveticaNeue-BlackCond looks pretty similar.
Contact me off-list if you have further questions...

Almut





Re: unsubscribe

2005-12-20 Thread Almut Behrens
On Tue, Dec 20, 2005 at 07:58:19PM +0200, Andrei Popescu wrote:
 On Mon, 19 Dec 2005 22:49:19 -0700
 Richard DeVillier [EMAIL PROTECTED] wrote:
 
  unsubscribe
  
  PLEASE!
 
 Maybe a pretty please will help :)))
 
 Don't send this to the list, send it to [EMAIL PROTECTED]

Sometimes I'm wondering whether it's that very REQUEST being
capitalized that's confusing people -- and whether simply using
[EMAIL PROTECTED] would make it appear more like a regular
email address after all.

Of course, the capitalization was meant to make it stand out clearly,
so that it won't, under any circumstances, be overlooked.  But does it
really achieve that?

Due to its difference in perceptual quality, that REQUEST might also
be taken as some strange constituent that can't seriously be part of an
actual address they're supposed to use.  Kind of like some junk left
over by mechanical processing, or some placeholder REQUEST, $REQUEST
they have no idea what to fill in for.  Or I dunno what...  So, they
figure to just strip it out and send to [EMAIL PROTECTED] instead, which
they know does exist.

But maybe I'm just thinking too complicated, and it's in fact nothing
more than people not reading (or actually not seeing) the appended
message at all.  OTOH, I'm then wondering where they get the idea from
to use the subject "unsubscribe"...

Anyhow, I apologize for having added to this unsubscribe spam :)

Almut





Re: unsubscribe

2005-12-20 Thread Almut Behrens
On Tue, Dec 20, 2005 at 11:51:31PM -0500, Gene Heskett wrote:
 On Tuesday 20 December 2005 15:42, Almut Behrens wrote:
 Sometimes I'm wondering whether it's that very REQUEST being
 capitalized that's confusing people -- and whether simply using
 [EMAIL PROTECTED] would make it appear more like a regular
 email address after all.
 
 Of course, capitalization was meant to make it stand out clearly, so
 it won't, under no circumstance, be overlooked.  But does it really
 achieve that?
 
 Due to its difference in perceptual quality, that REQUEST might
  also be taken as some strange constituent that can't seriously be
  part of an actual address they're supposed to use.  Kind of like
  some junk left over by mechanical processing, or some placeholder
  REQUEST, $REQUEST they have no idea what to fill in for.  Or I
  dunno what...  So, they figure to just strip it out and send to
  [EMAIL PROTECTED] instead, which they know does exist.
 
 But maybe I'm just thinking too complicated, and it's in fact nothing
 more than people not reading (or actually not seeing) the appended
 message at all.  OTOH, I'm then wondering where they get the idea
  from to use the subject unsubscribe...
 
 
 A lot of email agents do NOT show the line beginning with --  as the 
 sig marker, or anything below it.  Having that stuff appended to the 
 end of a message does no good whatsoever for the folks using IE IIRC, 
 so that they never see the unsub message (...)

That's one theory :)   My personal hypothesis, OTOH, goes like this:
they actually do read, but have difficulties interpreting the message,
i.e. they get the part about using "unsubscribe" as the subject line,
but then somehow can't believe that [EMAIL PROTECTED]
really is the address to use.  The reason might be that weird -REQUEST
fragment in the address, as said above.

Well, we'll never know for sure, unless _they_ tell us what was going
on in their heads.  That typically won't happen, though...

Almut





Re: find/replace in place

2005-12-19 Thread Almut Behrens
On Mon, Dec 19, 2005 at 03:38:23PM -0500, Tony Heal wrote:
 I have a database name I want to replace inside of an xml file. I can do
 this in several step using sed, but I would like to do it in a single step
 using perl. This is what I have in sed and what does not work in perl.
  
 SED
 #!/bin/bash
 echo -n "Please enter the name of the new database: "
 read syelledb
 dbchange=`cat /tmp/data-sources.xml|grep database|cut -d '"' -f2|cut -d '"' 
 -f1`
 sed "s/$dbchange/$syelledb/" /tmp/data-sources.xml > /tmp/data-sources.xml.tmp
 mv /tmp/data-sources.xml /tmp/data-sources.xml.orig
 mv /tmp/data-sources.xml.tmp /tmp/data-sources.xml
 
 PERL (single line in bash script)
 #!/bin/bash
 echo -n "Please enter the name of the new database: "
 read syelledb
 
 dbchange=`cat /tmp/data-sources.xml|grep database|cut -d '"' -f2|cut -d '"' 
 -f1`
 /usr/bin/perl -pi -w -e 's/$dbchange/$syelledb/'

use double quotes:

/usr/bin/perl -pi -w -e "s/$dbchange/$syelledb/" /tmp/data-sources.xml

so the shell will interpolate the contents of the variables into the
s/// expression (with single quotes you'd replace the literal string
'$dbchange' by '$syelledb'...)

Cheers,
Almut





Re: Apache2 error

2005-12-17 Thread Almut Behrens
On Fri, Dec 16, 2005 at 03:28:47PM +0100, Philippe Dhont  (Sea-ro) wrote:
 
 [emerg] (38)Function not implemented: Couldn't create accept lock
 
 Kernel is 2.6.13.4
 Dual xeon proc, 1gb ram
 
 New installation, everything installed with apt-get
 
 I have no idea what it means

Typically, this would mean that the specified method to create mutex
locks (for multiplexing/serializing several child processes) is not
supported by the system, or is failing for whatever reason.  You could
try to select some other method, such as 'fcntl', 'flock', ... with
the AcceptMutex directive.  See
http://httpd.apache.org/docs/2.0/mod/mpm_common.html#acceptmutex

For the file-based mechanisms, it could also mean that the file simply
cannot be created due to permission problems, or somesuch...
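For example, assuming the stock Debian file layout (the path and the
value 'fcntl' here are just illustrations -- pick whichever method from
the docs your system supports):

```apache
# in /etc/apache2/apache2.conf (or httpd.conf), inside the global config:
AcceptMutex fcntl
```

Then restart apache and watch the error log to see whether the [emerg]
message goes away.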

If that doesn't help: which apache version, which debian (sarge,...)
are you running?

Also try checking the BTS -- IIRC, there had once been a related bug
(quite some time ago, though, so that should really be fixed by now).
Even if it's not exactly the same issue, this sometimes does give hints
on where to start looking...

HTH,
Almut





Re: Customize xfterm4 for a terminal-bound program in Xfce

2005-12-17 Thread Almut Behrens
On Sat, Dec 17, 2005 at 08:54:53AM -0200, Paulo Marcel Coelho Aragao wrote:
 Hi,
 
 I'm migrating from KDE to Xfce and I'm not being able to do this: to add a 
 launcher in the Panel for a program running on a terminal, with customized 
 terminal properties. For example, I'd like to launch mutt and mc within a 
 taller terminal window, without menu and scroll bars. If the program doesn't 
 run on a terminal, like xconsole, for example, I just need to add a -geometry 
 argument. I can't figure out how to do something similar to programs that run 
 on a terminal.

This somewhat depends on what options the terminal understands, but the
approach would generally be much the same, i.e. use -geometry for the
terminal that mc/mutt/... is running in.  Size is sometimes specified
in pixels, but usually in characters, e.g. for mc in rxvt I have a
command like

  rxvt -geometry 120x50+480+0 -e mc

-geometry applies to the terminal; the -e executes the application in
the terminal.  Feel free to add other options as required -- mc options
would of course go after -e mc.

For more complex start commands (where you need to set up environment
variables and stuff) I usually put everything in a small wrapper
script, which I then call from the window manager.  The last command
in such a script would typically be an exec (which avoids that an
unnecessary shell process is kept running...)

  #!/bin/sh

  # do some setup here, like specific locale changes, lib paths, colors
  # or whatever there's no commandline option for...

  exec rxvt -geometry 120x60+50+20 -e mutt


Cheers,
Almut





Re: WordPerfect 8.0 (installation)

2005-12-16 Thread Almut Behrens
On Thu, Dec 15, 2005 at 06:04:59PM -0500, Ken Heard wrote:
   So it would appear that packages xlibs and xlibs-data supercede 
   xlib6g. In fact, the properties list of xlibs_4.3.0dfsg.1-14sarge1 says 
 that it replaces xlib6g (<< 4.0). The xlib6g I was trying to install was
 3.3.5-1.0.1.  I can find nowhere a version of xlib6g >= 4.0.  In any
 event it would be redundant as I already have xlibs and xlibs-data in my
 box. It was consequently unnecessary to do what Mr. Behrens suggested.

(Ms. ..., btw :)

As to your problem: from your first post I seem to remember that the
missing xlib6g essentially was the only problem you had with installing
wp8, so I assume the other required parts would install properly(?)

IOW, now that you've figured out you might not need that package after
all... I'm wondering whether you have tried to install what's available
and isn't conflicting  -- using options like --ignore-depends, etc.; or
by extracting the packages' contents into some temporary directory
(using -X), and then manually moving the required parts to their final
location...).  Maybe, WP would work after that...  Sorry, I can't be
more specific, because I don't really have an idea what's on your CD.

 
   Next, I discovered that the version of xlib6 on the Corellinux CDROM 
 which I was trying to use was not the same as the one Mr. Wiseman was 
 using.  The one I had was 3.3.5-1.0.1; Mr. Wiseman's was 3.3.6-44.  I 
 downloaded that version from Mr. Wiseman's website.  When I tried to 
 install it I got the following response:
 
 SOL:~# dpkg -i xlib6_3.3.6-44_i386.deb
 (Reading database ... 87476 files and directories currently installed.)
 Unpacking xlib6 (from xlib6_3.3.6-44_i386.deb) ...
 dpkg: error processing xlib6_3.3.6-44_i386.deb (--install):
  corrupted filesystem tarfile - corrupted package archive: Success
 dpkg-deb: subprocess paste killed by signal (Broken pipe)
 Errors were encountered while processing:
  xlib6_3.3.6-44_i386.deb

This usually means the package got damaged somehow (while downloading,
or so), so the contents simply cannot be unpacked (not sure though, why
dpkg considers the corrupted archive a success ;)

 
 (...)
   It is very much of a disappointment that I cannot seem to be able to 
 use WP8.0  It makes the claim that Debian is about choice sound hollow.
 
   There is now another possible option: WINE.  Now that a beta version 
   of WINE is out, I may be able to install WP 12 using it.

That would probably be the best option, if you get it to work -- at
least you'd then have a recent version of WP.  My personal experiences
with WINE have always been somewhat disappointing, though... however,
my last try was more than a year ago.  The typical scenario was that
99.9% worked fine, but the remaining 0.1% were rather annoying,
usability-wise...

Well, if all else fails, and you don't feel like giving up yet, you
could try to install the whole stuff into some chroot environment.
That would avoid any conflicts with whatever else is installed, and
should basically always work (with a few restrictions, as mentioned
below).

Actually, I had done exactly this way back in 2001, and it worked fine.
The only problem I had was that I somehow didn't get printing to work
directly to the printer (don't remember exactly what the problem was),
so I simply printed to file using some builtin PS driver, and then
spooled the PS file using lpr from outside of the chroot (not a big
issue, for my taste.).  (Later, I stopped using WP altogether, because
I somehow do prefer batch formatting tools like latex, and generally
no longer have much document composition to do...)

In theory, I could send you a slightly re-packaged tarball of that
entire wp8 chroot directory, in case you're interested [1].
I just unpacked and ran it again -- still seems to work... the 'about'
box says it's version 8.0.0078.

The only problem is that it now seems to want a license number (which I
of course no longer have -- actually, I don't remember ever having had
to enter one, but my memories may have faded). IOW, it claims it will
quit working after a trial period of 90 days...  But well, maybe there
is a license number on your CD or book cover.

Another option would be that you somehow make the contents of your CD
available to me for download, and I'll try to setup a chroot install
from the very WP version that you have.  In principle, you could of
course do that yourself as well.  However, in case you don't have much
experience setting up chroot environments, I'd rather offer to simply
do it myself (instead of describing the steps, and elaborating on all
potential difficulties you might encounter -- sorry for the laziness).

Anyhow, before you say yes! (or no), please note that

* you'd have to start WP via sudo (because of the chroot -- WP itself
will be run under your regular UID).  This probably isn't a big issue,
if you simply want to use it on your private box.

* all documents will always have to be placed 

Re: file not found - hardware, fs, or driver problem?

2005-12-13 Thread Almut Behrens
On Tue, Dec 13, 2005 at 07:52:56PM +, Richard Lyons wrote:
 $ ldd qcad
 linux-gate.so.1 =>  (0x)
 ...
 libXcursor.so.1 => not found
 ...
 
 I cannot find linux-gate on packages.debian, so I don't know about that.

That linux-gate.so is a virtual DSO, so there is no corresponding
file (see [1] for a concise explanation of what this is about).
IOW, that should be fine -- though I'd be marginally worried about it
being mapped to 0x (I'm not entirely sure how that's supposed
to behave in a mixed arch environment...  Anyone else know?)

Well, we'll know if that's ok, once that libXcursor thing is resolved...
(Would be kind of a pity, though, if that turned out to be the final
show stopper :)

 
 It looks as though libXcursor is the only other problem.  So I
 downloaded the i386  package, and did, as you suggest:
 
 # dpkg -X libxcursor1_1.1.3-1_i386.deb /emul/ia32-linux
 
 There seems to be a file in the /emul/ia32-linux/usr/lib directory,
 but the output of ldd is unchanged

Have you run ldconfig(8) to update the ld.so.cache?
(/etc/ld.so.conf should already contain the dirs /emul/ia32-linux/lib,
/emul/ia32-linux/usr/lib and /emul/ia32-linux/usr/X11R6/lib)

If all else fails, you could also try to set LD_LIBRARY_PATH to point
to /emul/ia32-linux/usr/lib -- see ld.so(8).

Good luck (actually, I think you're almost there...),

Almut

[1] http://www.trilithium.com/johan/2005/08/linux-gate/

(generally recommended read, BTW (high geek factor) -- you never
know... one day, your GF might ask how system calls are being made,
and you wouldn't want to risk leaving a bad impression... ;)





Re: start-stop-daemon.... for the love of GOD! Why?

2005-12-12 Thread Almut Behrens
On Mon, Dec 12, 2005 at 03:32:57PM -0700, Mike wrote:
 
 normally you just have a set of source functions library like
 /etc/init.d/functions or some other path that you use for your init
 scripts. Debian has decided to daomonize it with this start-stop-daemon
 thing they made up. I guess I could create my own functions file and go
 through all of that but I figured when in rome...

Not exactly sure what you mean by "has decided to daomonize it".  Just
because it's called start-stop-daemon doesn't mean it's running as a
daemon itself -- if that's what you meant.  It's just a tool that takes
care of a number of common tasks around starting and stopping daemon
processes, like checking whether the daemon is already running, doing
some sanity checks before killing it, UID switching, etc.  Other
distros have decided to stick similar functionality into some library
of shell functions you have to source.  What the heck?  If you don't
like start-stop-daemon, nothing is forcing you to use it in your own
init scripts...

What we perceive as normal largely depends on what we've become used
to.  It's all relative.  If you had started with debian, and would
later be switching to some other distro, you'd probably complain
"What's all this mess of shell functions here!?  Debian has a nice and
simple binary for all that...!", wouldn't you? ;)

Just take it easy :)

Almut





Re: XKB problem with x-terminal

2005-12-11 Thread Almut Behrens
On Fri, Dec 09, 2005 at 11:22:30PM +0200, Simo Kauppi wrote:
 On Thu, Dec 08, 2005 at 09:34:26PM +0100, Almut Behrens wrote:
  On Wed, Dec 07, 2005 at 02:16:09PM +0200, Simo Kauppi wrote:
   Is there a way to compile keyboard definitions for X and save them
   somewhere, where XServer can read them, when it starts?
  
  $ xkbcomp -xkm :0 keymap.xkm
  Then, simply transfer the resulting keymap.xkm to your thinclient,
  where you can make the X server load it upon startup using the option
  -xkbmap keymap.xkm.
 
 Thanks a lot, I hadn't figured this one out. Unfortunately, if I start X
 with -xkbmap keymap.xkm, it says it doesn't like it!
 
 If I start X with -xkbdb keymap.xkm, it doesn't complain, but the
 keyboard doesn't behave properly :(

Not too surprising.  That option's value is assigned to some variable
XkbDB (in xc/programs/Xserver/xkb/xkbInit.c) which isn't used anywhere
else beyond this assignment, in the entire X sources.  Whatever that's
supposed to do, it isn't implemented yet ;)

The fact that -xkbmap doesn't work as advertised, isn't too surprising
either, 'cos I was telling rubbish  (yeah, next time, Almut reminds
herself to actually verify the stuff she's claiming...)

Well actually, this option does work, kind of, but not in a manner as
straightforward as one might expect.  However, I knew for sure I once
had it working already, so I've played around with it some more... 
Well, found lots of weird things.  But don't worry, I won't bore you
with the details.  Just a brief description of how I finally got it
working, after lots of strace'ing and poking around in the X sources.

All in all, I can't help getting the impression that Simo and I are
the only two people in this world who have ever tried to make use of
this -xkbmap option :)

My main observations:

* you can't keep the X server from wanting to call xkbcomp, even in the
presence of a perfectly valid compiled xkbmap.  At least, I dunno how.

* even though the output of the xkbcomp run isn't really used in this
case (AFAICT), it is good to have it succeed, superficially.  In case
of any failures in this step, a couple of other things are being tried
which eventually lead to a complete failure of the whole shebang.

* you are expected to specify the bare name of the keymap (without the
.xkm extension);  the X server looks for the map in certain predefined
directories (which do vary depending on which UID the server is started as)

* unless you have a dot in the keymap name, the X server deletes the
map before reading it.  No kidding.  This seems to be because the
path of the temporary output file (created during the useless run of
xkbcomp) is being set to where the real keymap is expected to be found.
And tempfile cleanup happens before the actual -xkbmap-related code
gets a chance to read it...  Luckily, there's some odd sanitizing
being applied to the tempfile name (replacing '.' by '_').  That allows
us to play tricks here:

1: 6929  execve("/usr/bin/X11/XFree86", ["/usr/X11R6/bin/X", ":1", "-xkbmap", 
"default.map"], [/* 14 vars */]) = 0
2: 6930  execve("/usr/X11R6/lib/X11/xkb/xkbcomp", 
["/usr/X11R6/lib/X11/xkb/xkbcomp", "-w", "1", "-R/usr/X11R6/lib/X11/xkb", 
"-xkm", "-em1", "The XKEYBOARD keymap compiler (x...", "-emp", " ", "-eml", 
"Errors from xkbcomp are not fata...", "keymap/default.map", 
"compiled/default_map.xkm"], [/* 14 vars */]) = 0
3: 6931  lstat64("/usr/X11R6/lib/X11/xkb/compiled/default_map.xkm", 0xba1c) 
= -1 ENOENT (No such file or directory)
4: 6931  stat64("/usr/X11R6/lib/X11/xkb/keymap/default.map", 
{st_mode=S_IFREG|0644, st_size=7804, ...}) = 0
5: 6931  open("/usr/X11R6/lib/X11/xkb/keymap/default.map", 
O_RDONLY|O_LARGEFILE) = 6
6: 6931  open("/usr/X11R6/lib/X11/xkb/compiled/default_map.xkm", 
O_WRONLY|O_CREAT|O_LARGEFILE, 0100644) = 7
7: 6929  open("/usr/X11R6/lib/X11/xkb/compiled/default_map.xkm", O_RDONLY) = 6
8: 6929  unlink("/usr/X11R6/lib/X11/xkb/compiled/default_map.xkm") = 0
9: 6929  open("/usr/X11R6/lib/X11/xkb/compiled/default.map.xkm", O_RDONLY) = 6

These are the relevant lines from the strace output when running X with
-xkbmap default.map (line numbers prepended).  As you can see, for
some calls it uses "default_map" in place of "default.map".
The open() in line 9 apparently is what finally makes things work.
The open() of the tempfile in line 7 has to succeed, too, or else weird
things happen, and the real open() in line 9 never takes place.  Line 8
shows that tempfile cleanup now is no longer deleting the real file.

So, to summarize, here's what I did:

* moved away the original /usr/X11R6/lib/X11/xkb directory (a link to
/etc/X11/xkb, normally), preventing X from accessing anything in there.

* uncommented all 'Option Xkb*' entries in XF86Config-4/xorg.conf.

* created a dummy replacement for xkbcomp. This is just a simple script
which copies the precompiled keymap to the tempfile location where X is
expecting to find it -- hereby simulating a successful compile run:

  #!/usr/bin/perl
  
  $base = "/usr/X11R6/lib/X11/xkb/";
  $dest = pop

Re: start-stop-daemon and java

2005-12-10 Thread Almut Behrens
On Fri, Dec 09, 2005 at 10:26:46PM -0800, Scott Muir wrote:
 Have a question which i think relates to s-s-d more than java but as i've
 been learning, what do i know?
 
 part of my init.d script.
 
 APPDIR=/usr/jsyncmanager
 APPUSER=jsync
 ARGS="-Xmx256M -jar jsyncmanager.jar --server"
 PIDFILE=/var/run/jsyncmanager.pid
 
 # Carry out specific functions when asked to by the system
 case $1 in
   start)
 echo Starting jsyncmanager
 start-stop-daemon -Sbm -p $PIDFILE  -c $APPUSER -d $APPDIR  -x
 /usr/bin/java -- $ARGS
 
 
 this was derived from a command line
 
 java -Xmx256M -jar jsyncmanager.jar --server 2>out2 &
 
 The problem is getting a java application to start using an init.d script,
 honouring the pidfile to keep it from running more than once (which the
 s-s-d does) but also trap stderr in a log file.
 
 if I modify $ARGS to include ...--server 2>/home/jsync/out2.txt
 
 the out2.txt file never gets created or added to.

This is a more general problem, not particularly related to java.
Actually, there's two ways to fix this, a good one and an easy one --
the latter one first:

Generally, at the risk of sounding repetitive: try eval if your shell
command doesn't behave as expected... :)   (see my recent post
http://lists.debian.org/debian-user/2005/12/msg00590.html -- not saying
you should've read it, just to make clear what I'm referring to)

A simple example:

  ARGS="foo bar >stdout 2>stderr"
  echo $ARGS

is not the same as either of those

  echo foo bar >stdout 2>stderr
or
  ARGS="foo bar"
  echo $ARGS >stdout 2>stderr

If you want it to be the same, you have to write

  ARGS="foo bar >stdout 2>stderr"
  eval echo $ARGS
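A quick way to convince yourself (safe to run; 'out.txt' is just an
arbitrary name for this demo):

```shell
cd "$(mktemp -d)"
ARGS='>out.txt'

# without eval: the > is just data; echo prints it and no file appears
echo hello $ARGS
if [ ! -e out.txt ]; then echo "no redirection happened"; fi

# with eval: the shell re-parses the expanded line, so the redirection
# is real and out.txt is created
eval echo hello $ARGS
cat out.txt
```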

From this it follows that the trivial way to fix your redirection
problem would be to either write

ARGS="-Xmx256M -jar jsyncmanager.jar --server"
...
start-stop-daemon -Sbm -p $PIDFILE  -c $APPUSER -d $APPDIR  -x 
/usr/bin/java -- $ARGS  2>/home/jsync/out2.txt

or

ARGS="-Xmx256M -jar jsyncmanager.jar --server 2>/home/jsync/out2.txt"
...
eval start-stop-daemon -Sbm -p $PIDFILE  -c $APPUSER -d $APPDIR  -x 
/usr/bin/java -- $ARGS

Although this would probably work in your specific case, the ugly
thing about it is, that redirection here does apply to the entire
start-stop-daemon command, not only to the java command being run.
If the start-stop-daemon were to write anything on stderr, it would
also end up in out2.txt. 

OTOH, as you probably know, redirection operators like > always need
a shell to be interpreted.  Yet, the start-stop-daemon itself is not
executing stuff via some subshell (it uses the system call execve(2)). 

So, as you have it now, the last argument just gets passed through to
java, as the verbatim string "2>/home/jsync/out2.txt", and java simply
doesn't know what to do with it... 
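This is easy to reproduce without start-stop-daemon, by comparing a
directly-executed command with one run through a dedicated shell (the
file names here are arbitrary):

```shell
cd "$(mktemp -d)"

# passed as a plain argv element (no shell involved), "2>err.txt" is
# just text that the program receives as an argument:
/bin/echo some output '2>err.txt'
if [ ! -e err.txt ]; then echo "no file was created"; fi

# handed to a dedicated shell, the same token becomes a real redirection:
/bin/sh -c 'echo oops >&2' 2>err.txt
cat err.txt
```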

To do it properly, you'll have to use a seperate dedicated shell, for
example like this

APPDIR=/usr/jsyncmanager
APPUSER=jsync
EXE=/usr/bin/java
ARGS="-Xmx256M -jar jsyncmanager.jar --server"
JCMD="exec $EXE $ARGS 2>/home/jsync/out2.txt"
PIDFILE=/var/run/jsyncmanager.pid

# Carry out specific functions when asked to by the system
case $1 in
  start)
echo "Starting jsyncmanager"
start-stop-daemon -Sbm -p $PIDFILE  -c $APPUSER -d $APPDIR  -x $EXE -a 
/bin/sh -- -c "$JCMD"
  ...

The remaining ugliness is that the name of the process would be
reported as /usr/bin/java, which is not most informative, e.g.

$ /etc/init.d/jsyncmanager start
Starting jsyncmanager
/usr/bin/java already running.

One way to work around this would be to create a dummy link
/usr/bin/jsyncmanager -> /usr/bin/java, and then specify this as
the executable.

Also, in case you're still having problems, make sure the java-related
environment settings are identical to what you have when you run the
command normally...

Cheers,
Almut





Re: XKB problem with x-terminal

2005-12-08 Thread Almut Behrens
(okay, took me a while to reply, but just in case you haven't figured it
out yourself in the meantime...)

On Wed, Dec 07, 2005 at 02:16:09PM +0200, Simo Kauppi wrote:
 Hi,
 
 Is there a way to compile keyboard definitions for X and save them
 somewhere, where XServer can read them, when it starts?
 
 The reason I'm asking is that I set up a thinclient, i.e. an x-terminal
 with just Xorg running on it. When X starts it gives the '(EE) Couldn't
 load XKB keymap, falling back to pre-XKB keymap' -error and the keyboard
 doesn't work properly.
 
 It seems that when X starts it wants to compile the keyboard stuff on
 the fly and feed it to the $DISPLAY.
 
 The problem is that the xkb/rules are in the xlibs, which pulls all of
 the xlibraries with it. To make things worse, the xkbcomp is in the
 xbase-clients.
 
 So, to be able to run x-terminal I would have to install all the
 xlibraries and all the xclients into my terminal. For me this doesn't
 seem very rational.

The easiest way to create a compiled keymap is probably to extract
it from a running X server (on a machine which has xbase-clients
installed, and is running the desired xkb setup):

$ xkbcomp -xkm :0 keymap.xkm

Then, simply transfer the resulting keymap.xkm to your thinclient,
where you can make the X server load it upon startup using the option
-xkbmap keymap.xkm.

Another way would be to generate a specific keymap configuration from
options like you have in Xorg.conf.  For example, I have in my section
InputDevice
...
Option  "XkbRules"  "xfree86"
Option  "XkbModel"  "pc105"
Option  "XkbLayout" "de"
...

This would translate into the following commandline:

$ setxkbmap -rules xfree86 -model pc105 -layout de -print | xkbcomp -xkm -w 3 - 
keymap.xkm

The part before the pipe generates an xkb config file, like this

xkb_keymap {
xkb_keycodes  { include "xfree86+aliases(qwertz)"   };
xkb_types     { include "complete"  };
xkb_compat    { include "complete"  };
xkb_symbols   { include "pc/pc(pc105)+pc/de"};
xkb_geometry  { include "pc(pc105)" };
};

which essentially contains parameterized include statements for
individual xkb files, according to what has been determined via the
rules.  This is then fed into xkbcomp to be compiled into keymap.xkm
(the -w 3 is just to reduce the warnings to a sensible level...).

Cheers,
Almut





Re: mod_perl installation docs

2005-12-08 Thread Almut Behrens
On Thu, Dec 08, 2005 at 11:06:39AM -0500, Mark Copper wrote:
 
 In /usr/share/doc/apache-perl/README-perl.Debian it says 
   Apache can be configured to pass requests for dynamic content to a 
   second server.  See the mod_perl documentation for more details on this.
 
 Would any of you mind sharing more specific information?  Like
 approximately where in the mod_perl documentation this might be found?

Not exactly sure which mod_perl documentation this is referring to, but
there's very good docs at http://perl.apache.org, in particular the
mod_perl guide:

http://perl.apache.org/docs/1.0/guide/index.html

(this has been written for apache-1, but most of it still holds for
apache-2, and you'll also find other docs explaining the differences.)

As to running two servers (i.e. a light-weight frontend, and a
heavy-weight (mod_perl) backend server), these two chapters from the
guide will probably be most interesting:

http://perl.apache.org/docs/1.0/guide/strategy.html
http://perl.apache.org/docs/1.0/guide/scenario.html

specifically this section:

http://perl.apache.org/docs/1.0/guide/strategy.html#One_Plain_Apache_and_One_mod_perl_enabled_Apache_Servers

All not debian-specific, but highly recommended nevertheless, for all
levels from newbie to expert.  Once you have a clear idea of what you
want, you can begin figuring out how to do it the debian way... :)

HTH,
Almut





Re: Help with syslog.conf syntax and structure?

2005-12-08 Thread Almut Behrens
On Thu, Dec 08, 2005 at 11:01:53AM +, Adam Funk wrote:
 I've been running leafnode, which generates a lot of news.info syslog
 entries -- making up over 95% of /var/log/syslog -- so I want to stop
 logging news.info at all, but without interfering with anything else.
 
 I've read various man pages but I can't figure out how to subtract
 that from /var/log/syslog.

If I were you I would try "news.!=info" (but I haven't tried actually,
so I can't tell whether it works :)   I.e., try to modify this line

*.*;auth,authpriv.none               -/var/log/syslog

into

*.*;auth,authpriv.none;news.!=info   -/var/log/syslog

If I'm reading syslog.conf(5) correctly, this should stop facility
'news' with a priority exactly equal to 'info' from being logged to
/var/log/syslog.  

Almut





Re: file not found - hardware, fs, or driver problem?

2005-12-07 Thread Almut Behrens
On Wed, Dec 07, 2005 at 12:17:47PM +, Richard Lyons wrote:
 On Tuesday,  6 December 2005 at 19:41:04 +0100, Almut Behrens wrote:
  you probably want to install ia32-libs.  This should provide
  the 32-bit compatibility libraries, and the correct link
  
  /lib/ld-linux.so.2 -> /emul/ia32-linux/lib/ld-2.3.2.so
 
 Almut, I tried your first suggestion (above) and now get:
 
 # ./qcad
 ./qcad: error while loading shared libraries: libXcursor.so.1: cannot
 open shared object file: No such file or directory

looks much better than before ;)

 
 So, partial success.  Is there more to do here, or do I go on to 
 method 2 (the chroot)?

Yes, you need to fetch all missing ia32 libs and put them into the
subtree /emul/ia32-linux/ which has been created by installing
ia32-libs.  The selection of libs that come with the package has been
carefully chosen, but, of course, it cannot provide every lib for any
application there is.  Use ldd to get an idea of what else is needed --
now that /lib/ld-linux.so.2 is being found, ldd should start to provide
useful output.

AFAICT, you'll probably at least need those (let's hope the Qt libs are
linked in statically...):

shared lib  in deb package

libXcursor.so.1 libxcursor1
libfreetype.so.6libfreetype6
libfontconfig.so.1  libfontconfig1

Lib-to-deb name mapping isn't always obvious, but as you probably
know, there's a pretty intuitive web interface at packages.debian.org. 
Among other things, it allows you to find which package contains
a certain file, and you can download packages for various architectures
and flavours of debian (you need arch i386, as you certainly figured). 
Of course, you could also use the respective commandline tools like
apt-file, if you prefer.

Just download the respective .deb file and unpack it into the ia32
library tree, like so

# dpkg -X libxcursor1_1.1.3-1_i386.deb /emul/ia32-linux

When you think you're done dumping stuff in there, run ldconfig -- the
necessary new lib paths below /emul/ia32-linux should've been added to
/etc/ld.so.conf during configuration of the package ia32-libs.

Repeat that procedure as needed, i.e. until all shared lib dependencies
are being resolved...  Well, you get the idea.
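That check-and-repeat cycle can be sketched as a tiny helper (hypothetical:
the binary name and the "not found" marker in ldd's output are the
assumptions here):

```shell
#!/bin/sh
# Print the shared libraries a binary still cannot resolve, so each one
# can be looked up on packages.debian.org and unpacked with dpkg -X.
missing_libs() {
    ldd "$1" 2>/dev/null | awk '/not found/ { print $1 }'
}
missing_libs ./qcad
```

Run it after each dpkg -X / ldconfig round; once it prints nothing, all
shared lib dependencies are resolved.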

 
 Apropos, is 64-bit going to be out on the cutting edge for a long time?
 I am beginning to think it may have been an error to follow my son's
 advice and buy that setup.  This year, I mean.

I don't think I have the expertise to say anything of value here -- so
I'll leave that to others.

My gut feeling is, though, that it's still going to take a while
until all issues are resolved (like reworking the approach to multi-
architecture installations in debian), and everything is running so
smoothly that you'd want to drop the words "cutting edge".
OTOH, considerable progress has been made already, so we should use
the occasion to say a big "Thanks!" to everyone who contributed!

IOW, I don't think it was an error to follow your son's advice...
We need beta testers ;)

Almut





Re: file not found - hardware, fs, or driver problem?

2005-12-06 Thread Almut Behrens
On Tue, Dec 06, 2005 at 08:41:46AM +, Richard Lyons wrote:
 On Tuesday,  6 December 2005 at  1:11:32 +, Andrew Perrin wrote:
  On Tue, 6 Dec 2005, Richard Lyons wrote:
  
  
  The problem is that installed software sometimes refuses to run, giving
  'file not found', in spite of the fact that the file in question will ls
  normally, permissions are ok, etc, and I can even cat the file.  This
  happens only with certain files, for example two separate attempts to
  install qcad professional.  Other software, loaded via normal Debian
  routes, runs apparently normally.
  
  Any suggestions to how to proceed?
  
  
  Can you post the output from your attempt to run the software?
 
 Here is an example:
 $ qcad-2.1.0.0-rc1-1-prof.linux.lcpp5.x86$ ls
 bin doc examples fonts library patterns qcad qm README scripts
 $ ./qcad
 bash: ./qcad: No such file or directory
 
 showing that it even happens when in the same directory. And, yes, the
 file qcad is world executable.

Maybe it's missing some vital library or somesuch [1].  Try ldd, or if
that doesn't help, strace -efile ... to find out...

Good luck,
Almut


[1] for example, during transition from libc5 to libc6, you sometimes
did get similarly weird messages because the corresponding ld.so was
no longer found (the path to ld-linux.so.1 or ld-linux.so.2 is
hard-wired in the binary).  I don't suppose this is your problem here,
though, as qcad-2.1 seems to be somewhat more recent...





Re: file not found - hardware, fs, or driver problem?

2005-12-06 Thread Almut Behrens
On Tue, Dec 06, 2005 at 03:26:36PM +, Richard Lyons wrote:
 # ls -l /lib/ld-linux*
 lrwxrwxrwx  1 root root 11 2005-11-26 09:24 /lib/ld-linux-x86-64.so.2 -> ld-2.3.5.so
 
 so I did:
 
 # ln -s /lib/ld-2.3.5.so /lib/ld-linux.so.2
 
 after which 
 
 # ./qcad
 bash: ./qcad: Accessing a corrupted shared library
 
 So what next?  Just to recap., this is etch on an amd64 system, which may 
 perhaps 
 be the reason.

Sorry, didn't read your original post, so I wasn't aware of that
important background info...

Anyway, you probably want to install ia32-libs.  This should provide
the 32-bit compatibility libraries, and the correct link

/lib/ld-linux.so.2 -> /emul/ia32-linux/lib/ld-2.3.2.so

(better first remove the one you set yourself... not sure whether that
would be done automatically)

This doesn't necessarily guarantee a life free of trouble and pain
(there might still remain problems with other dependent libs), but it's
at least worth a try. Alternatively, create a 32-bit chroot environment:
https://alioth.debian.org/docman/view.php/30192/21/debian-amd64-howto.html#id271960

If both approaches fail to work, try bugging whoever you paid for the
license to supply a true 64-bit version.  Or, if you don't need the
professional features, you could try building the community version
from source...

Cheers,
Almut





Re: bash and variable holding directories with spaces

2005-12-06 Thread Almut Behrens
On Tue, Dec 06, 2005 at 04:07:02PM -0500, Andrew Cady wrote:
 On Sun, Dec 04, 2005 at 12:11:22AM +0100, Almut Behrens wrote:
  On Sat, Dec 03, 2005 at 05:58:28PM -0500, H.S. wrote:
 
   $ DIRS='file\ 1 file\ 2'; ls -ln "$DIRS"
   ls: 'file\ 1 file\ 2': No such file or directory
  
  in this case you probably want
  
  $ DIRS='file\ 1 file\ 2';  eval ls -ln $DIRS
 
 I'm not sure quite what the requirements are here, but although this
 works it would probably be more natural either to use perl or to use an
 array.  For a shell script, this would be more appropriate, being both
 portable and straightforward:
 
 set "file 1" "file 2"
 ls -ln "$@"
 
 Of course, this clobbers the $@ array.  Better shell script style is
 to arrange your code to allow something like:
 
 ls_ln() { ls -ln "$@"; }
 ls_ln "file 1" "file 2"
 
 You could use a non-portable bash array:
 
 DIRS=("file 1" "file 2")
 ls -ln "${DIRS[@]}"
 
 Seriously though, shell scripting sucks.  Perl!  It's on every debian
 system with debconf.

I absolutely agree with you here.  I love perl myself.  OTOH, I'm
trying to resist the urge to go telling people to use it as the golden
hammer for each and every scripting problem they come up with.
I'm kinda feeling that might be perceived as inappropriate -- unless
they explicitly seem to be wanting advice on such matters, of course.
But maybe that's just me...  Anyway.

All I was trying to point out is that it's good to remember there's
eval when you find yourself wondering why some scripted shell command
containing interpolated variables doesn't behave as expected, i.e. as
if you had typed the same (expanded) string of letters on the
interactive commandline.  That's all.
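A minimal demonstration of that point, using throwaway files (the directory
and file names are invented for the example):

```shell
#!/bin/sh
# The backslashes stored in $DIRS are NOT re-parsed during variable
# expansion, so the plain command gets the wrong words; eval re-runs
# the shell's parsing over the already-expanded string.
cd "$(mktemp -d)"
touch 'file 1' 'file 2'
DIRS='file\ 1 file\ 2'
ls -ln $DIRS >/dev/null 2>&1 || echo "plain expansion fails"
eval ls -ln $DIRS >/dev/null 2>&1 && echo "eval works"
```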

Of course, as always, TIMTOWTDI, YMMV, and all that :)

Almut





Re: file not found - hardware, fs, or driver problem?

2005-12-06 Thread Almut Behrens
On Tue, Dec 06, 2005 at 10:59:50PM +, Richard Lyons wrote:
 On Tuesday,  6 December 2005 at 19:41:04 +0100, Almut Behrens wrote:
  If both approaches fail to work, try bugging whoever you paid for the
  license to supply a true 64-bit version.  Or, if you don't need the
  professional features, you could try building the community version
  from source...
 
 Why wouldn't the professional version build?  -- I assume I can get the
 source from Andrew.

I was under the impression there's a reason they require you to purchase
a license for the professional version, but I may be wrong on that.
If you manage to get the sources from him, sure they should build...

 
 Thanks for your very informative reply.

You're welcome.  Hope you'll have it all working in the end!

Almut





Re: postfix + osCommerce problem

2005-12-06 Thread Almut Behrens
On Tue, Dec 06, 2005 at 07:30:48PM +0100, Thomas Jollans wrote:
 I am creating an osCommerce-based webshop and am having problems with 
 postfix. the mail below gets returned by the mailer. (things in square 
 brackets were left out because they are unimportant and/or to protect my and 
 others' privacy. [EMAIL ADDRESS] signifies a proper and existent email 
 address) I do not understand this problem because the recipient *is* 
 specified.

Check how postfix/sendmail is being invoked by PHP.  This is configured
in php.ini, typically something like

sendmail_path = /usr/sbin/sendmail -t -i

Make sure there's -t to have postfix extract recipients from the
message headers To:, Cc: and Bcc: (otherwise recipients would need to
be passed as arguments on the commandline, which I believe PHP's mail()
function doesn't do...).

Cheers,
Almut





Re: Defining environment variables and cgi scripts

2005-12-04 Thread Almut Behrens
On Sat, Dec 03, 2005 at 10:15:55PM -0500, Tom Moore wrote:
 hi.
 I have a cgi script I want apache to execute for me.
 The problem I'm having is I don't know how to include additional 
 environment variables that the script requires.
 How do I add a variable called PAYMENTECH_HOME to the list of environment 
 variables that perl knows about when executing a script under apache?
 If I define it like:
 export PAYMENTECH_HOME=/usr/local/paymentech from the shell before I run 
 perl script.cgi everything works fine.
 If I run it under apache the system doesn't pass the PAYMENTECH_HOME 
 variable to the script.
 
 Any ideas how I can get this to work?

There are several ways to do this.  One way would be to use the
PassEnv directive in httpd.conf to tell apache to pass certain
environment variables from its own environment (i.e. that of the apache
server process) to any CGI scripts being run.  Apache doesn't blindly
pass on its environment to CGI processes (despite the fact that they're
all child processes of httpd) -- IOW, it filters variables by default.

Of course, for that to work you'd also first have to set the variable
in question from wherever apache is started.  Normally, that would be
/etc/init.d/apache, so you could put your definition right in there
(somewhere near the top):

export PAYMENTECH_HOME=/usr/local/paymentech

Or have it source some other file which holds these settings, etc.,
which would be easier to maintain in the long run.

Another way would be to use SetEnv to declare env variables directly.
For easier handling, I'd put such stuff in some separate file, and have
that be included from the main httpd.conf.

In general, this kind of functionality is provided by the apache module
mod_env, so make sure that's loaded...
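Put together, the two variants might look like this in a file included from
httpd.conf (a sketch; only the variable name is taken from the question):

```
# variant 1: forward the variable from apache's own environment
# (requires 'export PAYMENTECH_HOME=...' wherever apache is started)
PassEnv PAYMENTECH_HOME

# variant 2: set it directly in the server config
SetEnv PAYMENTECH_HOME /usr/local/paymentech
```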

Also see http://httpd.apache.org/docs/2.0/env.html and in particular
http://httpd.apache.org/docs/2.0/mod/mod_env.html -- and if you still
have unanswered questions, don't hesitate to report back here :)

Cheers,
Almut





Re: bash and variable holding directories with spaces

2005-12-03 Thread Almut Behrens
On Sat, Dec 03, 2005 at 05:58:28PM -0500, H.S. wrote:
 Michael Marsh wrote:
  If I understand the behavior you want correctly, then
  DIRS='/cygdrive/c/Documents\ and\ Settings /cygdrive/d/My\ Data'
  works for me.
  
  This also works for constructions like
  DIRS='$dir1 $dir2'
 
 Okay, but this doesn't work:
 
 $ ls -nl file*
 -rw---  1 1000 1000 0 2005-12-03 16:56 file 1
 -rw---  1 1000 1000 0 2005-12-03 16:56 file 2
 
 $ DIRS='file\ 1 file\ 2'; ls -ln $DIRS
 ls: 'file\: No such file or directory
 ls: 1: No such file or directory
 ls: file\: No such file or directory
 ls: 2': No such file or directory
 
  $ DIRS='file\ 1 file\ 2'; ls -ln "$DIRS"
 ls: 'file\ 1 file\ 2': No such file or directory

in this case you probably want

$ DIRS='file\ 1 file\ 2';  eval ls -ln $DIRS


Almut





Re: Kerberos acl permission

2005-12-01 Thread Almut Behrens
On Thu, Dec 01, 2005 at 02:00:49PM -0800, Curtis Vaughan wrote:
 Trying to set up kerberos5 on a Debian Sarge server. As a note I am going 
 by the instructions provided by a Linux Journal article, which may be 
 found at: http://www.linuxjournal.com/article/7336
 
 Regardless, setting it up has been otherwise easy. But now I'm at the 
 part where I want to add other users. At one point in the set up, 
 however, the instructions said that you need to enable the administrator 
 to have all permissions (privileges), which is done by editing a 
 kadm5.acl file. But there is no such file. Because there is no such 
 permission file, apparently, I can't add users as the administrator. So, 
 I tried creating a kadm5.acl file (under /var/lib/krb5kdc/) but that 
 didn't seem to help.

You could try /etc/krb5kdc/kadm5.acl instead -- at least that's what
is set up in kdc.conf.template (ends up as /etc/krb5kdc/kdc.conf after
postinst has run) as default:

   ...
   acl_file = /etc/krb5kdc/kadm5.acl
   ...

(not sure though, if the linuxjournal article suggested a different
directory layout..., so YMMV)
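For illustration, a minimal kadm5.acl granting an admin principal all
privileges might read as follows (realm and principal name are invented):

```
*/admin@EXAMPLE.COM    *
```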

Cheers,
Almut





Re: vesa mode for MPlayer compile problem

2005-12-01 Thread Almut Behrens
On Thu, Dec 01, 2005 at 10:36:04PM +0100, Dirk wrote:
 which package contains vbe.h ?
 
 MPlayer needs it for the vesa mode output...

Just grepped in the mplayer sources for a reference of vbe.h, but
couldn't find anything -- so, not sure what exactly you need...

Normally, for the true VESA mode (running under LRMI (Linux real-mode
interface)) there's a library vbelib that comes with the mplayer
sources.  The respective header file is osdep/vbelib.h.  Maybe that's
what you need?

Which mplayer source version do you have?  Which of your source files
is referencing vbe.h?  What errors do you get, i.e. which symbols is
the compiler complaining about? -- that might help to figure out which
vbe.h/driver/library you need... (there's most likely more than one in
this world... :)

As a first approximation, I'd just try editing the respective file to
include something like "osdep/vbelib.h" instead (make sure the relative
path is correct, so it is found...).  That would seem to make more
sense to me (maybe it's just a typo in the sources you have).

HTH,
Almut





Re: ALL my email vanished

2005-12-01 Thread Almut Behrens
On Thu, Dec 01, 2005 at 10:27:14PM -0500, Hendrik Boom wrote:
 On Thu, Dec 01, 2005 at 04:56:09PM -0700, Nate Duehr wrote:
  Clive Menzies wrote:
  On (01/12/05 18:21), Hendrik Boom wrote:
  On Thu, Dec 01, 2005 at 10:55:10AM -0500, Hendrik Boom wrote:
   Last night all my email in /var/spool/mail/hendrik vanished
  without a trace.  Some new mail has appeared since the vanishing.
   Everyone else's email is intact.
  
  I don't mean I sent a message and it disappeared.
  I mean all my email in /var/spool/mail/hendrik
  vanished.
  
  Have you looked in /var/mail? 
  
  Although they aren't linked, both /var/spool/mail and /var/mail contain
  the mail here.
  
  Also did you fire up any software that copied it into a 
  /home/user/mail or /home/user/Mail or /home/user/Maildir directory?
  
  (Like when you exit mutt -- would you like to move your messages to ?)
 
 That's the first place I looked.  That's where I found all the mail from
 before November, but not the November mail.  Thank god I accidentally
 let it move my messages to mbox by accident at the start of November,
 or I might have lost a lot more.

That other problem you mentioned in your original post makes me think
there might be a problem with the filesystem/harddisk -- in which case
those files could be lost forever, unless you have a recent backup...
Have you run a filesystem check?  Any other files missing?

Almut





Re: Help about tftpd!

2005-12-01 Thread Almut Behrens
On Thu, Dec 01, 2005 at 04:08:16PM +0800, Li Weichen wrote:
 Hi all,
 
 I have encountered a question about tftpd.  I use tftpd-hpa to set up a tftp
 server in my Debian sarge 3.1.  If I start tftpd with the
 '/etc/init.d/tftpd-hpa start' command, I get a "file not found"
 message after I input the tftpboot command at the client, but the file
 I want to transfer is definitely right there.

Are you sure your files in /var/lib/tftpboot are readable by the tftp
daemon? (in case of doubt make them world-readable).
How exactly is the client side requesting the files (any path component)?
AFAICT, the server is started with option -s /var/lib/tftpboot, which
sets up a chroot, so file requests are supposed to be relative to that...
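The chroot path rule can be simulated without a running daemon (all names
are invented; a temp dir stands in for /var/lib/tftpboot):

```shell
#!/bin/sh
# With "in.tftpd -s /var/lib/tftpboot", a client request for
# "pxelinux.0" maps to /var/lib/tftpboot/pxelinux.0 inside the chroot.
ROOT=$(mktemp -d)              # stands in for /var/lib/tftpboot
touch "$ROOT/pxelinux.0"
chmod 644 "$ROOT/pxelinux.0"   # world-readable, so the daemon can serve it
REQUEST=pxelinux.0             # relative to the chroot -- no leading path
[ -r "$ROOT/$REQUEST" ] && echo "readable: $REQUEST"
```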

Almut





Re: Help understanding this apache2 directive from apache2.conf

2005-11-30 Thread Almut Behrens
On Wed, Nov 30, 2005 at 07:35:40PM +0200, Maxim Vexler wrote:
 In [/etc/apache2/apache2.conf] file there's this line :
 
 # Include generic snippets of statements
 Include /etc/apache2/conf.d/[^.#]*
 
 
 What does the [^.#]* say ?
 I know that it's regular expression but I've yet to see this regex syntax...
 
 Is it: Match any file Not beginning with any char followed by # 0 or
 more times ?

According to http://httpd.apache.org/docs/2.0/mod/core.html#include
it's a shell wildcard, not a regex in the strict sense of the word.
In particular, this makes a difference as to how the * is interpreted. 
The pair of brackets specifies a character class, and the ^ is negating
it.  And the * simply denotes "anything", as usual.  So, it means
"any file in /etc/apache2/conf.d/ not beginning with '.' or '#'".
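A quick way to convince yourself, in a scratch directory with invented file
names (bash assumed; note that dotfiles are excluded by '*' alone anyway):

```shell
#!/bin/bash
# The negated class [^.#] matches any first character except '.' or '#'.
cd "$(mktemp -d)"
touch site.conf '#backup#' .hidden
echo [^.#]*    # prints: site.conf
```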

Cheers,
Almut





Re: screen blanking

2005-11-30 Thread Almut Behrens
On Wed, Nov 30, 2005 at 01:11:39PM -0600, Hugo Vanwoerkom wrote:
 Xset did not define the dpms states, this does:
 http://webpages.charter.net/dperr/dpms.htm
 
 Also you have to set the option in the monitor section, per googling for 
 dpms subject.
 
 However, doing all that, I got it to work once, thereafter no more. No 
 matter what I do after 10 minutes the screen blanks.

Probably not directly related to your problem... but ages ago (I think
it was in potato), I had a similarly weird issue, i.e. the console's
screensaver kept turning off my monitor while I was in X.  Particularly
annoying was that, to return to normal mode, every time I had to use
Ctrl-Alt-Fn to switch to a virtual console, type a key, and Ctrl-Alt-F7
to get back to X.  It took me quite a while to figure out that I had to
use a little tool called setvesablank to turn off this nonsense...

Almut





Re: compiling kernel module question

2005-11-26 Thread Almut Behrens
On Sat, Nov 26, 2005 at 12:51:55AM -0500, Amish Rughoonundon wrote:
 lemme see if I understand what you meant: The kernel-source files that I 
 downloaded are common to all linux distributions while the kernel-header 
 files are particular to a certain version and distribution.

...not so much the distribution, but rather the _configuration_, i.e.
the specific combination of switches that were selected while running
one of make menuconfig, make xconfig (or even make config -- for
those die-hards, who don't mind wading through hundreds of questions).
This leaves behind a customized kernel source/header tree describing
the specific kernel that will be (or has been) built from these sources.

Think of it this way: when you buy a new PC, you make decisions as to
which CPU, mobo, network- and graphics-card, etc. you want or need.
Out of all conceivable combinations, you create a personalised
configuration.  Now, if you want to add another piece of hardware (a
'module') later, it's important (or at least useful) to know what your
specific PC looks like.
For example, if you were to ask here whether your favorite new geek
gadget would work, and all you say is "I have a computer", you'd get
nothing more than one of those "you'll need to tell us which ..."
replies :)  (Of course, analogies don't ever match 100%, but this is
about the idea...)

When you build a custom kernel yourself, you'll automatically be left
with configured kernel sources, but when you use a stock kernel,
someone else has done this step for you.  So, rather than starting with
the pristine kernel sources and having to reproduce the exact settings
that were used, it's easier to just get the preconfigured header packages.

Cheers,
Almut





Re: scripting problem

2005-11-26 Thread Almut Behrens
On Sat, Nov 26, 2005 at 08:16:31PM +0100, John Smith wrote:
   does somebody know why I keep losing the first character of the
 third resulting string?
 
 ===
 
 [EMAIL PROTECTED]:/home/user/tmp cat t4.sh
 #!/bin/sh
 
 export INPUT='$1$iW95z/HB$GFcYFxMKK6x8EUPglVkux.'
 
 echo "1 ===$INPUT==="
 
 export MD5PW=$(echo -n $INPUT | hexdump -v -e '1/1 " %02d"')
 
 echo "2 ===${MD5PW}==="
 
 echo -n "3 ==="; echo -n ${MD5PW} | tr ' ' '\n' | while read char ; do awk '{printf("%c",$char)}' ; done ; echo "==="
 [EMAIL PROTECTED]:/home/user/tmp ./t4.sh
 1 ===$1$iW95z/HB$GFcYFxMKK6x8EUPglVkux.===
 2 === 36 49 36 105 87 57 53 122 47 72 66 36 71 70 99 89 70 120 77 75 75 54 120 56 69 85 80 103 108 86 107 117 120 46===
 3 ===1$iW95z/HB$GFcYFxMKK6x8EUPglVkux.===

There are two issues with this ;)

Firstly, the while read ... loop is not only superfluous, it's
exactly what's causing the strange effect you're observing.  What
happens is that read char reads one line from stdin and puts it in
$char.  Next, more or less unrelated, awk starts up, and consumes the
remaining lines on stdin all by itself, and executes the given printf
command for every line -- that's standard awk behaviour.  Then, the
loop is done, 'cos there's nothing left on stdin.  And, as the first
line had already been removed by read, awk simply never got it...
So, a first improvement would be to write

echo -n "3 ==="; echo -n ${MD5PW} | tr ' ' '\n' | awk '{printf("%c",$char)}' ; echo "==="

So far so good, but what's this $char?  As the whole awk command is in
single quotes, the shell won't interpolate anything for $char (in case
that's what was originally intended).  Strangely enough, you could just
as well write $foo, or $anything, and still get the same result. 
Apparently, awk is simply substituting $0 (the line read from stdin),
if it encounters anything it doesn't know what to do with (not sure
though, what exactly is going on here...(?))

Anyway, if you put $0 (or $1 (=the first field) -- both are identical
in this particular case) in place of $char, things should be fine

echo -n "3 ==="; echo -n ${MD5PW} | tr ' ' '\n' | awk '{printf("%c",$0)}' ; echo "==="
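A self-contained round trip of the same idea (using od(1) from coreutils
instead of hexdump, and forcing numeric context with $0+0 so it works the
same in any awk implementation):

```shell
#!/bin/sh
# Encode a string as decimal byte values, then decode it back with
# awk's %c: a numeric argument is taken as a character code.
INPUT='abc'
CODES=$(printf '%s' "$INPUT" | od -An -tu1)   # "  97  98  99"
DECODED=$(echo $CODES | tr ' ' '\n' | awk '{ printf("%c", $0 + 0) }')
echo "$DECODED"    # abc
```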

Cheers,
Almut





Re: compiling kernel module question

2005-11-25 Thread Almut Behrens
On Fri, Nov 25, 2005 at 05:36:26PM -0500, Amish Rughoonundon wrote:
 Hi,
 I have been trying to compile and insert a simple kernel module but
 without luck. This is what I did.
 Since the freshly installed debian sarge 3.1 distro did not have any
 source files under /usr/src, I did 'uname -a' to make sure of the kernel
 version that is installed:
 Linux test 2.4.27-2-386 #1 Mon May 16 16:47:51 JST 2005 i686 GNU/Linux
 
 and then I downloaded the kernel-source-2.4.27.tar.bz2, unzipped and
 untarred it. I then copied this program from  a book into example.c:
 
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/init.h>
 static char __initdata hellomessage[] = KERN_NOTICE "Hello, world!\n";
 static char __exitdata byemessage[] = KERN_NOTICE "Goodbye, cruel world.\n";
 static int __init start_hello_world(void)
 {
printk(hellomessage);
return 0;
 }
 static void __exit go_away(void)
 {
printk(byemessage);
 }
 module_init(start_hello_world);
 module_exit(go_away);
 
 I then compiled it using 
 gcc -DMODULE -D__KERNEL__ -I/usr/src/kernel-source-2.4.27/include -c example.c
 
 
 I tried inserting it into the kernel using 
 /sbin/insmod example.o 
 
 but this is the message I got back:
 
 example.o: kernel-module version mismatch
 example.o was compiled for kernel version 2.6.0
 while this kernel is version 2.4.27-2-386.

If you want to build kernel modules, you need to use the kernel headers
_as configured for your current kernel_. The generic header files which
come with the original kernel sources won't work...

For a stock debian kernel such as 2.4.27-2-386, it's probably easiest
to just install the respective packages

* kernel-headers-2.4.27-2-386  (or kernel-headers-2.4-386 for that
  matter, which depends on kernel-headers-2.4.27-2-386), and

* kernel-headers-2.4.27-2  (containing the header files common to all
  architectures, referenced via symlinks from within the -386 package).

Then set your include path to -I/usr/src/kernel-headers-2.4.27-2-386/include.

I'm not entirely sure how you got that 2.6.0 version into your module,
but I guess the following happened:  as there's no version.h in the
unconfigured kernel sources, the file /usr/include/linux/version.h
probably got pulled in instead (because it's on the standard include
path)...  However, these include files (though they're kernel headers,
too) belong to libc, and need not match the current kernel
version (in fact, I believe those in sarge are version 2.6.0 -- btw,
this is the package linux-kernel-headers).

If you're interested in what went wrong in your original attempt, you
could run just the preprocessor (-E), and grep for version.h in its output

gcc -DMODULE -D__KERNEL__ -I/usr/src/kernel-source-2.4.27/include -E example.c 
| grep version.h

I'd think you'll see something like '# 1 "/usr/include/linux/version.h" 1 3'...

Cheers,
Almut





Re: xkb Layout does not work altGr key

2005-11-20 Thread Almut Behrens
On Sun, Nov 20, 2005 at 08:33:33AM +0100, Jose Salavert wrote:
 I have made this dvorak layout, but AltGr does not work.
 I am going mad trying to understand or see what I forgot.
 Another question is what to do to use a left menu key or similar as AltGr
 too, or shift control or something; it is for fast writing of the right keys
 that need alt.
 
 It is an effort to work in French, Spanish, Italian and Catalan, with
 apostrophe, cedilla and eñe, if you have any idea.
 
 I will look into:
 http://www.nothingisreal.com/dvorak/dvorak_intl-1.0.txt
 
 which was another such effort with a complete tutorial, but now I am focused
 on the AltGr key that does not work.
 
 if (anyone can help me) thanks;
 else I become mad;

Hopefully, we'll save you from going mad -- not sure though, if the
following suggestions will suffice... ;)

Anyway, you could try to configure your AltGr key to generate the
Mode_switch symbol. On most keyboards I've seen, the key labeled
AltGr corresponds to Right Alt, i.e. its physical scancode (an
integer) is mapped to the symbolic name RALT (this is normally done
via configs in xkb/keycodes/, e.g. <RALT> = 113; ).  Accordingly,
in the xkb/symbols/* files, you'll typically find something like

key <RALT> {[ Mode_switch,  Multi_key   ]   };

for keyboard setups allowing more than two characters per key.[1]

For the Mode_switch to have any effect, the following action
definition must also have been included from somewhere

interpret Mode_switch {
useModMapMods= level1;
virtualModifier= AltGr;
action= SetGroup(group=+1);
};

This increments the symbol group by one, so for definitions like

key <AE01> {[   1,  EuroSign,   ]
[ bar,  cent]   };

the second one (i.e. [ bar, cent]) will be used.
Normally, that Mode_switch action is defined in xkb/compat/basic, so,
unless you've severely messed up your config, that should be fine.

You'll probably also want to define

modifier_map Mod3   { Mode_switch };

to give programs relying on the standard X11 modifier Mod3 some way
to figure out what's going on  (not strictly required for the symbol
groups switch to take effect, though).


Other than that, I can only point you to

http://www.charvolant.org/~doug/xkb/html/xkb.html

For me, his attempts to shed some light on the issue have more than
once proven to be a valuable resource when I felt like messing with my
keyboard. Take the time to read it from the beginning to get a thorough
understanding of the concepts and terms.  Xkb is an extremely flexible
(and complex) beast, so some additional experimentation of your own
might be required...  There's a multitude of different ways to achieve
the same effects.  As if that weren't enough, I guess you're aware that
xmodmap(1) could in principle be used to redefine your initial xkb
setup beyond recognition. So, better make sure that's not the case...

If you're still stuck, the following info would help us to help you :)

* which X server (XFree86, Xorg), which debian flavor (sarge,...)?

* what settings do you have in your XF86Config-4/xorg.conf?
  (the InputDevice section containing the Xkb* options is the
  interesting part)

* what exactly have you modified so far in your attempts to get this
  working, etc.

* what does xev(1) show when you press your AltGr key?

* how does xkbprint(1) represent your current/effective xkb setup?
  (this generates a PostScript diagram)

* are you using KDE or Gnome?
  AFAIK, they add another level of obfusca^H^H^H^H^H^H^Hconfiguration
  on top of the conventional methods of keyboard setup...
  (meaning someone else would have to help you with this)

Cheers,
Almut


[1] In your dvorak-plus config I see that you've configured

key <MDSW> {[ Mode_switch   ]   };

so - if you know which key that is on your keyboard - holding that key
should actually have the desired AltGr effect...  Have you tried
that, or, put differently, have you intentionally configured MDSW
instead of RALT?





Re: xkb Layout does not work altGr key

2005-11-20 Thread Almut Behrens
On Sun, Nov 20, 2005 at 09:13:03PM +0100, Almut Behrens wrote:
 On Sun, Nov 20, 2005 at 08:33:33AM +0100, Jose Salavert wrote:
  I have made this dvorak layout , but AltGr does not work.
 (...)
 
  key <AE01> {[   1,  EuroSign,   ]
 [ bar,  cent]   };

...another thing I just realised is that you often omitted the comma
between symbol groups.  For example, I think the above definition
(which I copied from your dvorak-plus) would have to read

key <AE01> {[   1,  EuroSign],
[ bar,  cent]   };

Almut





Re: WordPerfect 8.0

2005-11-19 Thread Almut Behrens
On Sat, Nov 19, 2005 at 07:58:00AM -0600, Nate Bargmann wrote:
  If you want Reveal Codes, you could always edit the XML in Vim...
 
 Ugggh!  I'll use FTE instead.  ;-)

Heh! another fte user -- unbelievable :)

(That's my secret love, too, but so far I got the impression I'm the
only one in this whole world using it.  Unfortunately, if you often
have to work on other people's machines, you'll hardly ever find it
installed -- well, never, actually ;(  On the positive side, though, is
that when the next vim vs. emacs thread comes up (and I have no clear
preference as to those two), I can simply lean back, relax and watch
them argue from the distance ;)

Cheers,
Almut





Re: WordPerfect 8.0

2005-11-19 Thread Almut Behrens
(having nothing to contribute to that Corel/SCO/whoever issue, I'll
try to answer the original question...;)

On Thu, Nov 17, 2005 at 06:33:25PM +, Ken Heard wrote:
 (...)
   Debian being Debian, surely there is a way to install xlib6g without 
 having to remove all 175 of those other packages.  As the name is 
 unique, and no other package besides the WP 8.0 one depends on it, 
 presumably its presence will not affect adversely any of those others.
 
   This package, xlib6g, contains many small files which it would 
   install in subfolders in the /usr/X11R6/include/X11/ folder.  Most of 
 these 
 files would be put into two subfolders which do not now exist in my box, 
 /xkb/ and /locale/.
 
   The others would be put in the /bitmaps/ folder.  Many but not all 
   of the files to be put there have the same names and sizes as files 
 already 
 there.  Since these are bitmap files, surely the files already there 
 could be safely overwritten -- unless there is a way to prevent 
 overwriting on installation of xlib6g.
 
   In short, can I install xlib6g in such a way that it does not remove 
   175 other packages I need and use?  If so, how do I do it?

To install only specific parts of a debian package, you can always
proceed as follows.  That's somewhat low-level, no doubt -- but why
not, if it helps... 

Use dpkg-deb --fsys-tarfile xlib6g.deb to dump the contents of the
package in tar format, on stdout.  That way you can use any facilities
tar provides, e.g. to only extract specific subdirectories, etc.

The contents of Debian packages are typically archived _relative_ to
the root directory, so the following commands (run as root) should
install just the two subdirectories you mentioned above:

$ cd /
$ dpkg-deb --fsys-tarfile /path/to/xlib6g.deb | tar xv \
./usr/X11R6/include/X11/xkb/ ./usr/X11R6/include/X11/locale/

(simply modify as required -- in case of doubt, use "tar tv" instead of
"tar xv" first, to check what would be unpacked)

However, note that, from a package management point of view, this is
no better than unpacking tarballs, 'cos that's essentially what you're
doing here.  IOW, use with care (!) so as not to mess up your installation...
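The select-before-extract idea can be rehearsed with plain tar alone, since
dpkg-deb --fsys-tarfile just emits such a tar stream on stdout.  All paths
below are made-up demo paths, not from xlib6g:

```shell
# Build a tiny archive standing in for a package's contents
mkdir -p demo/usr/include/keep demo/usr/include/skip
echo keepme > demo/usr/include/keep/file.h
echo skipme > demo/usr/include/skip/file.h
tar -C demo -cf demo.tar .

# Preview what a given pattern would unpack (the "tar tv"-style check)
tar -tf demo.tar ./usr/include/keep/

# Extract only that subdirectory, relative to a chosen install root
mkdir -p root
tar -C root -xf demo.tar ./usr/include/keep/
ls root/usr/include    # only "keep" is present
```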

Good luck,
Almut





Re: bad md5sum for initrd in hd-media?

2005-11-18 Thread Almut Behrens
On Fri, Nov 18, 2005 at 01:21:22PM -0700, David Emerson wrote:
 I downloaded initrd.gz and vmlinuz from:
 http://cdimage.debian.org/debian-cd/3.1_r0a/i386/other/hd-media/2.6/
 
 (I'm intending to install sarge from hard drive on a computer with a flaky 
 cd-rom)
 
 There are md5sums at
 http://cdimage.debian.org/debian-cd/3.1_r0a/i386/other/MD5SUMS
 
 My vmlinuz md5 matches up, but the initrd.gz md5sum does not match. I 
 re-downloaded with no change; it still disagrees with the archive:
 
 My md5sum:
 ca838dff73b15963544ea21537ccaa8f  initrd.gz
 
 archive md5sum:
 250db6ab320fc5edcecf3ed9d1c185a6  ./hd-media/2.6/initrd.gz

How exactly did you download the file?  What seems to have happened is
that the file got unzipped while downloading:

$ ls -l initrd*
-rw-r--r--1 ab   ab   10645504 18. Nov 22:55 initrd
-rw-r--r--1 ab   ab3205673 18. Nov 22:54 initrd.gz

$ md5sum initrd*
ca838dff73b15963544ea21537ccaa8f  initrd
250db6ab320fc5edcecf3ed9d1c185a6  initrd.gz

=> the md5sum of the uncompressed file is exactly what you have...

IOW, you've already got the right file -- you just need to gzip it again :)
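To see the effect for yourself (the demo contents below are arbitrary):
compressing changes the checksum, so a download tool that transparently
gunzips can never match the archive's md5sum for the .gz.  Note that a
locally re-gzipped file won't necessarily byte-match the archive's original
.gz either (gzip settings differ), but the content is the same:

```shell
printf 'pretend this is an initrd\n' > initrd   # arbitrary demo contents
md5sum initrd                                   # checksum of the unpacked file
gzip -n initrd                                  # recompress (-n: no name/timestamp stored)
md5sum initrd.gz                                # differs from the checksum above
```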

Almut





Re: VSFTPD problems

2005-11-16 Thread Almut Behrens
On Wed, Nov 16, 2005 at 07:55:08PM +, Adam Hardy wrote:
 Rudi Starcevic on 16/11/05 07:52, wrote:
 Hello,
 
 Just can't get vsftpd to work?
 
 apt-get install vsftpd always used to work 
 
 This is my error:
 
 *500 **OOPS*: *cap_set_proc*
 
 
 [quote]
 On Linux systems, if capability support was disabled in the kernel or
 
 built as a module and not loaded, vsftpd will fail to run.  You'll see
 this error message:
   *500 **OOPS*: *cap_set_proc*
 Build and load the appropriate kernel module to continue.
 
 [/quote]
 
 What is the 'appropriate kernel module to continue' ??
 
 I've been there and done that, but I looked up my notes and 
 unfortunately this is all I wrote down:
 
 
 Just solved a problem with kernel 2.6.11 where I had opted to have a 
 module capability not loaded at boot time (dunno why) but it came up 
 with the weird error cap_set_proc and vsf_sysutil_recv_peek
 
 I think googling the mail archives or just the whole net should turn up 
 the offending module.

The module is called "capability", i.e. "modprobe capability" (run as
root) should do the trick...

(For anyone interested, it's about providing facilities to segment
the almighty power of the superuser into a more fine-grained set of
discrete capabilities (i.e. privileges), e.g. for running daemons.
The userland side of it is handled by libcap...)





Re: redirecting the output with sudo

2005-11-16 Thread Almut Behrens
On Wed, Nov 16, 2005 at 10:29:43PM -0500, kamaraju kusumanchi wrote:
 Is there a way to do this in one single command?
 
 $wajig doc > /tmp/wajig_doc.txt
 $sudo mv /tmp/wajig_doc.txt /root/wajig_doc.txt
 
 ie I want to redirect the output of a command to a file and store this 
 in a directory owned by root. Is there a way to achieve the above with a 
 single command?
 
 I have tried
 
 $sudo wajig doc > /root/wajig_doc.txt

this should work:

  sudo bash -c 'wajig doc > /root/wajig_doc.txt'

(it passes the whole command to a subshell run as root)
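Another common idiom for the same problem is to let tee do the writing, so
that only tee (not the shell doing the redirection) needs root:
"wajig doc | sudo tee /root/wajig_doc.txt > /dev/null".  A sketch of the
mechanics, shown without sudo so it runs anywhere:

```shell
# tee opens the target file itself; with "sudo tee" that open() happens
# as root, which is why an unprivileged pipeline may write there.
echo 'hello from the pipeline' | tee /tmp/tee_demo.txt > /dev/null
cat /tmp/tee_demo.txt    # prints: hello from the pipeline
```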

Almut





Re: fortran compiler bug

2005-11-16 Thread Almut Behrens
On Thu, Nov 17, 2005 at 04:55:22PM +1300, Steven Jones wrote:
 We seem to be having issues getting a bug registered
 ... because I don't know what the bug-reporting web site
 http://gcc.gnu.org/bugzilla/enter_bug.cgi
 means by host,target and build triplets.

I'd guess they're referring to the options given to configure when
building the compiler (relevant for cross-compiling), i.e.

  build:  where the compiler has been built
  host:   where it will run
  target: what binaries it will create

The triplets themselves usually consist of

  architecture-vendor-OS

e.g. i686-pc-linux.  (Unless you're cross-compiling, all three
triplets should be the same...)

HTH,
Almut





Re: speaker bell in konsole

2005-11-14 Thread Almut Behrens
On Mon, Nov 14, 2005 at 02:50:12PM -0600, Hugo Vanwoerkom wrote:
 I use konsole with fvwm.
 
 Bell is set to system bell but no PC speaker beep is produced.
 
 Xterm does.
 
 Does Konsole enable the PC speaker as bell sound?

I'm not using konsole/kde, so this may be utter nonsense...
Anyway, I'd suspect there's some KDE-specific volume setting for the
system bell used by konsole -- have you made sure that's > 0?
You probably know better than I which config dialog would let you
adjust that setting.  In case of doubt I'd do a recursive grep in
~/.kde/ for "Volume"... and then something like "vi k*bellrc" ;)

Cheers,
Almut





Re: How to downgrade libc6 ?

2005-11-12 Thread Almut Behrens
On Fri, Nov 11, 2005 at 04:14:07PM +0100, [EMAIL PROTECTED] wrote:
 today, I aptitude installed apache2, which took a new libc6 with it.
 Now I can't link against one of my libs anymore:
 
 /usr/local/lib/libcpdbg.a(cpdebug.o)(.text+0x374): In function 
 `dbgSprintx':
 : undefined reference to `__ctype_b'
 ./lib_x86/libtabe.a(tabe_zuyin.o)(.text+0x7c): In function 
 `tabeZozyKeyToZuYinIndex':
 : undefined reference to `__ctype_tolower'
 
 I might be able to recompile it, but this could take a long time so I may 
 be forced to go back to the elder libc6 - unfortunately, I don't know 
 how...

Instead of downgrading your system's libc, you could simply fetch the
appropriate old version, and then explicitly link against that libc.

It's probably easiest to get the .deb from http://packages.debian.org,
e.g. for stable

http://packages.debian.org/stable/libdevel/libc6-dev

Then, once you've downloaded the package, extract the file you need.
For static linking that would be just libc.a:

$ dpkg-deb --fsys-tarfile libc6-dev_2.3.2.ds1-22_i386.deb | tar x -O \
./usr/lib/libc.a > libc-2.3.2.a

(Even though it's unusual to statically link libc, I'd recommend to do
so in this particular case -- it might save you from some hair-pulling...)

Then use the option -nodefaultlibs and link with the old libc-2.3.2.a.
As -nodefaultlibs removes -lc, -lgcc, -lgcc_eh from gcc's internal
linker commandline, you might want to re-add -lgcc and -lgcc_eh (but
not -lc), if you need them.

A simple example probably demonstrates it best.  Lacking any better
ideas, I'll use a slightly modified variant of that dreadfully overused
hello world program:

/*--- hello.c ---*/
#include <stdio.h>
#include <ctype.h>

void print_hello_world() {
    char *s = "HELLO WORLD!\n";
    int c;
    /* deliberately use one of the problematic symbols... */
    while ((c = *s++)) printf("%c", __ctype_tolower[c]);
}

That represents your old lib, which you don't want to rebuild.
Let's assume you had once done something like

$ gcc -c hello.c
$ ar -r libhello.a hello.o

and now still have that old libhello.a.  Naturally, you also have an
associated header file:

/*--- hello.h ---*/
void print_hello_world();

Now, you've created some new code you need to link against that lib:

/*--- hellomain.c ---*/
#include "hello.h"

int main() {
print_hello_world();
return 0;
}

But when you try to do so on a system with a new libc you get

$ gcc -o hello hellomain.c libhello.a
libhello.a(hello.o): In function `print_hello_world':
hello.o(.text+0x34): undefined reference to `__ctype_tolower'
collect2: ld returned 1 exit status

However, when you link against the old libc-2.3.2 you fetched above,
things should be fine

$ gcc -o hello_static -nodefaultlibs hellomain.c libhello.a libc-2.3.2.a -lgcc

(to document what you're doing, you can also add the switch -static,
but in this particular case the result should be no different...)

And you get, as expected

$ ./hello_static
hello world!

Also, ldd should confirm that no dynamic libs are required:

$ ldd ./hello_static
not a dynamic executable


OK, if you're feeling adventurous, or have many small programs to link
against your old lib, you can of course also link them dynamically,
which is somewhat more involved:

You need libc-2.3.2.so, ld-2.3.2.so (i.e. the dynamic loader/linker)
and libc_nonshared.a.  The latter is from libc6-dev, the other two
are in the package libc6.
Decide on where you want to keep them -- preferably in some separate
directory, e.g. /usr/local/lib/libc_old -- then extract them there

$ cd /usr/local/lib/libc-old
$ dpkg-deb --fsys-tarfile /tmp/libc6-dev_2.3.2.ds1-22_i386.deb | tar x -O \
./usr/lib/libc_nonshared.a > libc_nonshared.a
$ dpkg-deb --fsys-tarfile /tmp/libc6_2.3.2.ds1-22_i386.deb | tar x -O \
./lib/libc-2.3.2.so > libc-2.3.2.so
$ dpkg-deb --fsys-tarfile /tmp/libc6_2.3.2.ds1-22_i386.deb | tar x -O \
./lib/ld-2.3.2.so > ld-2.3.2.so

(the libc_nonshared.a is required during linking only, the other two
are needed both at compiletime and when you run the program)

$ ln -s libc-2.3.2.so libc.so.6
$ chmod +x ld-2.3.2.so
$ ls -l
-rwxr-xr-x1 ab   ab  90248 12. Nov 11:30 ld-2.3.2.so
-rw-r--r--1 ab   ab1244688 12. Nov 11:29 libc-2.3.2.so
-rw-r--r--1 ab   ab  10400 12. Nov 11:29 libc_nonshared.a
lrwxrwxrwx1 ab   ab 13 12. Nov 11:31 libc.so.6 -> libc-2.3.2.so

Now, you should be able to link the stuff:

$ LIBPATH=/usr/local/lib/libc_old
$ gcc -o hello \
-nodefaultlibs \
-Wl,-rpath,$LIBPATH \
-Wl,--dynamic-linker,$LIBPATH/ld-2.3.2.so \
hellomain.c libhello.a \
$LIBPATH/libc.so.6 \
$LIBPATH/libc_nonshared.a \
$LIBPATH/ld-2.3.2.so

$ ./hello
hello world!

and ldd should show

$ ldd ./hello
libc.so.6 => /usr/local/lib/libc_old/libc.so.6 (0x40018000)
/usr/local/lib/libc_old/ld-2.3.2.so => /usr/local/lib/libc_old/ld-2.3.2.so (0x4000)

You'll probably have to tweak things somewhat to integrate this in 

Re: Can't Start KDE from Normal User; Only from Root

2005-11-12 Thread Almut Behrens
On Sat, Nov 12, 2005 at 10:31:22AM -0500, David R. Litwin wrote:
 When I try to log in to KDE using KDM through my normal user, the screen
 goes black, then returns to KDM. This is indicative of X having crashed,
 since I told KDM to re-appear if that should happen. Loging in through a
 consle tells me that /etc/X11/X is not executable. I checked: It is a
 symbolic link to /usr/bin/X11/Xorg. I tried to copy that file directly to
 /etc/X11 and then rename it X, but it said some thing about not being able
 to read it and some thing to do with a symbolic link. So, I'm assuming there
 needs to be one and that some thing has happened so that the root can access
 it but not a normal user.
 
 Perhaps I need to change the permission on it?

probably -- though I've no idea how that got changed in the first
place...
How about leaving the X link in /etc/X11 as it is and making sure the
permissions of the actual X server /usr/X11R6/bin/Xorg are 755? :) 
(/usr/bin/X11 normally is another link to ../X11R6/bin, i.e. /usr/X11R6/bin/)
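A runnable sketch of the check-and-fix, using a throwaway file; for the real
/usr/X11R6/bin/Xorg the chmod would of course have to be run as root:

```shell
touch /tmp/Xorg.demo         # stand-in for the X server binary
chmod 644 /tmp/Xorg.demo     # simulate the broken, non-executable state
ls -l /tmp/Xorg.demo
chmod 755 /tmp/Xorg.demo     # rwxr-xr-x: normal users can execute it again
ls -l /tmp/Xorg.demo
```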

Almut





Re: Portable way to get my NICs' IPs

2005-10-27 Thread Almut Behrens
On Thu, Oct 27, 2005 at 04:54:22PM -0400, Stephen R Laniel wrote:
 On Thu, Oct 27, 2005 at 01:45:36PM -0700, John Purser wrote:
  You might consider one of the multi-platform languages like Python or
  Perl.  Both have modules that will do this I believe.
 
 Yeah, that's what I'm looking for. Does anyone know the Perl
 function for this task?

Sorry to disappoint you, but if you want to write something portable,
you've chosen the wrong task :)

At the system call level there are essentially two ways to get a list
of interfaces: (a) the SIOCGIFCONF request to the ioctl(2) call on a
socket, and (b) using the appropriate sysctl(2) request to examine the
kernel's interface list.  Both are definitely non-trivial to use [1].
So, your best bet probably _is_ to parse the output of ifconfig --
certainly much easier.
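A rough sketch of the parse-the-output approach.  The sample text below is
canned, hypothetical ifconfig-style output; the exact format differs across
platforms, which is precisely the portability headache:

```shell
sample='eth0: inet 192.168.0.7 netmask 255.255.255.0
lo: inet 127.0.0.1 netmask 255.0.0.0'
# Pick the field after "inet" -- the field number would need adjusting
# per platform, which is what wrapper modules do for you.
printf '%s\n' "$sample" | awk '/inet /{print $3}'
# prints:
# 192.168.0.7
# 127.0.0.1
```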

There's a perl module Net::Ifconfig::Wrapper, which more or less tries
to do exactly this for a number of platforms.  I'm not aware of any
direct and easy-to-use standard perl function for this (of course you
can use perl's interface to ioctl, but that would gain you nothing at
all).  Not sure about Python, Ruby, etc., though.

AFAIK, there's a C lib call getifaddrs(3) on BSD, however that doesn't
seem like a good idea if you're worried about portability...

Cheers,
Almut


[1] in case you're interested in the nitty-gritty details, see the
in-depth discussion in Stevens' networking bible [2], chapter 16/17.
Or google for SIOCGIFCONF.

[2] W. Richard Stevens: Unix network programming, vol 1, 2nd ed.
ISBN 0-13-490012-X





Re: Building Tuxmath with KDevelop

2005-10-10 Thread Almut Behrens
On Mon, Oct 10, 2005 at 05:20:23PM -0400, David Bruce wrote:
 
 I installed the source package for TuxMath using the source option for 
 apt-get, and also grabbed libsdl-dev.

AFAIK, there is no such package (libsdl-dev), but rather several
libsdl-*-dev packages, containing various components (though, maybe
I haven't looked properly, or that's just a typo on your part...). 
Anyhow, the two header files the build is unhappy about, should be in

  libsdl-mixer1.2-dev  (-> SDL_mixer.h)
  libsdl-image1.2-dev  (-> SDL_image.h)

Have you installed those?  

 I tried the Import Existing Project 
 option in KDevelop.  When I try to build the project, it fails (see below).  
 I think the problem may be that I need to tell it where to find the SDL libs.

I'm not an expert on KDevelop issues, but I think that should be OK.
The option '-I/usr/include/SDL' (on the cc commandline - see below)
should be all that's needed...

 
 cd '/home/dbruce/programming/tuxmath/tuxmath-0.0.20050316'  make -k 
 BUILDING tuxmath.o
 mkdir obj
 cc -Wall -g -I/usr/include/SDL -D_REENTRANT 
 -DDATA_PREFIX=\/usr/share/tuxmath/\ -DDEBUG -DVERSION=\2005.01.03\ 
 -DSOUND src/tuxmath.c -c -o obj/tuxmath.o
 In file included from src/tuxmath.c:21:
 src/setup.h:26:23: error: SDL_mixer.h: No such file or directory

What files do you actually have in /usr/include/SDL/ ?

Cheers,
Almut





Re: apache2 mod_perl sarge not working

2005-09-30 Thread Almut Behrens
On Fri, Sep 30, 2005 at 05:33:16PM -0700, [EMAIL PROTECTED] wrote:
 I am trying to get mod_perl working on my Debian Sarge box. I have
 installed the libapache2-mod-perl2 and its dependencies via Synaptic.
 
 My /etc/apache2/mods-enabled/perl.load file has the following:
 
 ---
 
 LoadModule perl_module /usr/lib/apache2/modules/mod_perl.so
 PerlModule Apache2
 
 <Location /home/marek/public_html/perl>
   SetHandler  perl-script
   PerlHandler ModPerl::Registry
 </Location>

You probably want <Directory /home/marek/public_html/perl>, not
<Location ...>.  The <Location> directive refers to URL-paths (i.e.
the request as sent by the browser), while <Directory> refers to
physical file-paths as mapped internally by the webserver.

See here for the details:
http://httpd.apache.org/docs/2.0/mod/core.html#location
http://httpd.apache.org/docs/2.0/mod/core.html#directory
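A minimal sketch of the <Directory> form, reusing the path and handler from
the config above.  The Options line is an assumption on my part --
ModPerl::Registry scripts typically also need ExecCGI enabled:

```apache
<Directory /home/marek/public_html/perl>
    SetHandler  perl-script
    PerlHandler ModPerl::Registry
    Options     +ExecCGI
</Directory>
```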

Have fun mod_perl'ing!

Almut





Re: annoying eog warnings

2005-09-29 Thread Almut Behrens
On Thu, Sep 29, 2005 at 11:26:32AM +0900, Miles Bader wrote:
 When I run eog to view an image, it always outputs some warnings:
 
$ LANG=C eog ../images/image73-s1280x1024-a3-Ftriang-g1.8.png
** (eog:5433): WARNING **: Failed to lock:  No locks available
** (eog:5433): WARNING **: Failed to lock:  No locks available
** (eog:5433): WARNING **: Failed to lock:  No locks available
 
 It's not just ugly either -- it seems to delay startup significantly (I
 gather it's timing out on something).
 
 eog --version says:   Gnome eog 2.10.2
 
 The above is from my work machine.  This _doesn't_ happen on my home
 system, and eog starts up much more quickly there despite my home
 machine being far slower.
 
 I wonder if it has something to do with my homedir being in NFS (though
 the image file above is on the local disk)?
 
 Anyone know what's going on and how I can fix it?

Your suspicion is probably correct.  It wouldn't be the first time that
file locking (i.e. fcntl(2)) and NFS don't play nice with each other...

Most likely, eog is trying to update Gnome's recently-used files stuff
(for which it needs to get a lock on the file before writing).
AFAIK, the file in question is ~/.recently-used -- not 100% sure though
(in case of doubt, use strace and grep for 'recent' in its output).

In such cases, it sometimes helps to move the file to some local file
system, and then create a symlink to it, i.e.

$ mkdir /tmp/yourusername
$ mv ~/.recently-used /tmp/yourusername/
$ ln -s /tmp/yourusername/.recently-used ~/.recently-used

(/tmp/ might not be the best choice, in case it gets purged on a
regular basis... but you get the idea :)

It's kinda ugly, but for me, this approach solved similar issues with
some Qt apps, so you might want to give it a try.  If it doesn't work,
you can always move the file back into your home.

Good luck,
Almut





Re: counting bandwidth usage

2005-09-29 Thread Almut Behrens
On Thu, Sep 29, 2005 at 06:14:41PM -0400, kamaraju kusumanchi wrote:
 Very trivial question. I have two machines workA, homeB. Let's say I am 
 sitting at workA and run an nxclient session to connect to homeB. Now in 
 this homeB session, I open a konsole and download 1GB file (using wget). 
 Will this be counted as network traffic of 1GB on homeB or network 
 traffic of 1GB on workA? I ask because, I pay for network usage at workA 
 but at homeB it is free.

Unless you've tunneled port 80 (or whichever port wget is using) from
home back to work, the file will be downloaded via your home network.
(I'm assuming there's a separate internet connection at work and at
home.)  By default, NX will only forward your X display, so that's the
only traffic you'll have to pay for...  IOW, nothing to worry about.

 
 Will there be any difference in the answer if I use ssh instead of nxclient?

Hardly any.  nxclient might cause somewhat less traffic than ssh with X
forwarding, because NX is highly optimized for just that...  If you can
live with just a remote terminal login (no X GUI), then ssh will cause
even less traffic, of course.

Cheers,
Almut






Re: embedding fonts

2005-09-20 Thread Almut Behrens
On Tue, Sep 20, 2005 at 01:39:48AM -0700, Aaron Maxwell wrote:
 Hi, I'm running testing.  I am having trouble with creating PDFs with 
 embedded fonts.  (I'm generating them from LyX and LaTeX sources.)  The 
 process worked before an apt-upgrade I did a week ago.
 
 I'm still investigating; I think it's an issue with ghostscript, or 
 perhaps dvips or ps2pdf.  In the meantime, has anyone else had this 
 problem?  If so please post about it - the extra data points will help.  
 Thanks.
 
 PS:  When I start the LyX editor, I get the following error messages.  
 Clues!  Based on them, can anyone suggest what may be good for me to 
 look at?  
 
 | jashenki% lyx ia-lulu-print.lyx
 | xset:  bad font path element (#70), possible causes are:
 | Directory does not exist or has wrong permissions
 | Directory missing fonts.dir
 | Incorrect font server address or syntax
 | Unable to add font path.

This error message might in fact be related to the problem of fonts
not being embedded...

Upon startup, lyx is trying to issue an xset fp+ FONTPATH command to
make sure the X server can access the TeX fonts (Type1 versions).
As xset is complaining, apparently something is wrong with that
FONTPATH, e.g. invalid path specification, doesn't exist, doesn't
contain 'fonts.dir', whatever... (IIRC, TeX's Type1 fonts should be in
/usr/share/texmf/fonts/type1/public/*/*.pfb -- not sure though).

I suspect that those are the same fonts which are supposed to be
embedded in the PS/PDF output (via ghostscript), so it's probably a
good idea to check whether they're installed properly.
BTW, are you getting any errors from ghostscript?

You could run lyx -dbg font ... to get more verbose debugging info
on font handling.  Among other things this should print something like
Adding FONTPATH to the font path.  Then commence your bug chase by
trying to figure out what's wrong with FONTPATH ;)

In case you're using the Qt frontend with Xft2 and fontconfig (to
check, look for libfontconfig.so in the output of running 'ldd' on the
lyx binary), things are somewhat different.  I believe you need to have
some other package installed (latex-xft-fonts ?), though I'm not sure
in what way that could be related to your font embedding problem.
I'm afraid any ramblings of mine won't be of much help in that case,
as I haven't fully grokked fontconfig myself, yet :)

Also, I currently don't have lyx installed, so this is all a bit
vague... but good luck anyway,

Almut





Re: OT [Re: Advice needed about submitting bugs]

2005-09-19 Thread Almut Behrens
On Mon, Sep 19, 2005 at 11:50:05PM +0200, Maurits van Rees wrote:
 On Mon, Sep 19, 2005 at 10:55:31PM +0200, Maurits van Rees wrote:
  On Mon, Sep 19, 2005 at 09:42:23PM +0200, Alberto Zeni wrote:
   Dear Sirs,
  
  There are some Madams here as well. :-)
 
 It has come to my attention (thanks Mike McCarty) that in English,
 Sir can mean Sir or Ma'am (though that seems strange to me) and
 that at any rate Madam is probably not what I really meant...  I
 apologize to the wonderful *ladies* on this list. :-)

No need to apologize, the intention is what counts... :)

Actually, the nice thing about Debian is - in addition to being
an excellent distro, of course - that the men who are generally
welcoming to us geek gals outnumber anything I've seen elsewhere.
Much appreciated!

Almut





Re: bash script

2005-09-17 Thread Almut Behrens
On Sat, Sep 17, 2005 at 02:33:03PM -0500, Rodney Richison wrote:
 Am trying to automate some stuff for future installs.
 I'd like to echo multiple lines to the end of a file. Like this
 
 echo '. /etc/bash_completion' >> ~/.bashrc
 
 But I'd like to add multiple lines at one time. Like the ones below. I 
 realize I could cat them in from a text file, but I'd like to make this 
 script non-dependant of other text files.
 
 export LS_OPTIONS='--color=auto'
 eval `dircolors`
 alias ls='ls $LS_OPTIONS -F'
 alias ll='ls $LS_OPTIONS -l'
 alias l='ls $LS_OPTIONS -lA'

If I'm understanding you correctly, you probably want to use the
so-called here document feature that virtually any shell provides.
I.e., to append the above lines to 'somefile', you could write

cat >> somefile <<'ENDOFQUOTE'
export LS_OPTIONS='--color=auto'
eval `dircolors`
alias ls='ls $LS_OPTIONS -F'
alias ll='ls $LS_OPTIONS -l'
alias l='ls $LS_OPTIONS -lA'
ENDOFQUOTE

See the section "Here Documents" in the bash manpage for details (such
as parameter expansion within the quoted text, etc.).
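One such detail, sketched runnably with throwaway paths: quoting the
delimiter decides whether $variables and `backticks` inside the document
get expanded.  With the delimiter quoted, the lines are appended literally,
which is what you want for alias definitions like the above:

```shell
out=/tmp/heredoc_demo
rm -f "$out"
# 'EOF' is quoted, so $LS_OPTIONS below is NOT expanded on the way in
cat >> "$out" <<'EOF'
alias ll='ls $LS_OPTIONS -l'
EOF
cat "$out"   # $LS_OPTIONS survives literally because EOF was quoted
```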

Cheers,
Almut





Re: adventures with window managers

2005-09-11 Thread Almut Behrens
On Sun, Sep 11, 2005 at 11:47:04PM +0100, Adam Hardy wrote:
 Adam Hardy on 11/09/05 23:00, wrote:
 
 UWM: Your X-Server doesn't support the SHAPES extension . terminating
 
 I can't find any reference to SHAPES extension in synaptic. Where does 
 it reside?
 
 Just realised you probably don't install SHAPES, rather upgrade 
 something to a version that does implement it, right?
 
 Saw in the archives an answer to a similar question that I should do 
 load extmod in my XF86Config Module section.

That's correct.  Just make sure you have this in your XF86Config:

Section "Module"
    Load    "extmod"
    ...
    ...
EndSection

 
 Can't find any extmod in my kernel config. Any advice?

It has nothing to do with the kernel config -- it's an extension to the
X protocol.  The respective module/lib implementing the SHAPE extension
(and various others) comes with the X-server

/usr/X11R6/lib/modules/extensions/libextmod.a

BTW, with xdpyinfo you can check which extensions your X-server
supports.  See the list following "number of extensions:".

Cheers,
Almut





Re: cyrus getting slower over time

2005-09-09 Thread Almut Behrens
On Thu, Sep 08, 2005 at 09:20:16PM +0200, oneman wrote:
 
 I've got a problem with cyrus getting slower over time when checking  
 mail, up to a point where clients start timing out... First cyrus  
 works flawlessly, then it starts responding slower to mailchecking  
 and eventually becomes unusable. After some hours of non use, the  
 problem disappears by itself, so it seems something simply times out  
 after a while, I just can't see what that might be.
 (...)
 I've been looking at top and tailing mail.log but can't find anything  
 else but cyrus processes starting, sitting idle for a very long time  
 and exiting
 
 I don't know where to look next to find out where the problem lies.  

You could attach strace or ltrace to one or several of the cyrus
processes, to find out what system calls they're making (ltrace would
also trace shared library calls in addition to system calls -- in case
the strace info doesn't seem fine-grained enough).

This way, you might find out, _just as an example_, that the process
is spending lots of time waiting on a socket that isn't ready for
reading/writing, or something.  In that case you'd most likely see
select(2) calls on a socket handle in the strace output. You would then
lookup what socket that handle corresponds to, et voila, with a bit of
luck you'd be one step closer to where the problem is originating...

Just do a "strace -p <PID-of-process-to-trace>", possibly adding other
useful options like -o, -f, -e (see the manpage for details).  And, if
you should need help interpreting the output you get, don't hesitate to
report back here.

Of course, this approach isn't guaranteed to get you anywhere, but in
the absence of any better ideas, it's at least worth a try... :)

Good luck,
Almut





Re: pb with Printing using CUPS v1.1.x and lpd LaserJet1100

2005-09-08 Thread Almut Behrens
On Thu, Sep 08, 2005 at 03:03:24PM +0200, Jean-Louis Crouzet wrote:
 D [08/Sep/2005:10:45:57 +0200] [Job 20] Spooler: cups
 D [08/Sep/2005:10:45:57 +0200] [Job 20] Printer: HPLaserJet1100CUPSv1.1.x
 D [08/Sep/2005:10:45:57 +0200] [Job 20] PPD file: 
 /etc/cups/ppd/HPLaserJet1100CUPSv1.1.x.ppd
 D [08/Sep/2005:10:45:57 +0200] [Job 20] Printer model: Raw queue

I don't think this should be Raw queue -- raw means that any PS code
will be sent through to the printer _as is_.  As the HP LJ 1100 cannot
natively handle Postscript, the result is exactly what you described in
your first post: many pages of PS code are being printed verbatim as
plain text, instead of being rendered (via ghostscript).

From taking a peek at the source code of foomatic-rip, it becomes
rather clear that "Printer model: Raw queue" is only being set when
there's no "*FoomaticRIPCommandLine:" directive found in the .ppd file:

  (...)
  } elsif (m!^\*FoomaticRIPCommandLine:\s*\(.*)$!) {
  # *FoomaticRIPCommandLine: code
  my $line = $1;
  (...)
  $line =~ m!^([^\]*)\!;
  $cmd .= $1;
  $dat-{'cmd'} = unhtmlify($cmd);

  (...)  
  # Was the RIP command line defined in the PPD file? If not, we assume a
  # PostScript printer and do not render/translate the input data
  if (!defined($dat-{'cmd'})) {
  $dat-{'cmd'} = cat%A%B%C%D%E%F%G%H%I%J%K%L%M%Z;
  if ($dontparse) {
  # No command line, no options, we have a raw queue, don't check
  # whether the input is PostScript and ignore the docs option,
  # simply pass the input data to the backend.
  $dontparse = 2;
  $model = Raw queue;
  }
  }

  ## Summary for debugging
  print $logh ${added_lf}Parameter Summary\n;
  print $logh -${added_lf}\n;
  print $logh Spooler: $spooler\n;
  print $logh Printer: $printer\n;
  print $logh PPD file: $ppdfile\n;
  print $logh Printer model: $model\n;


Typically, for non-PS printers, there's a line such as

*FoomaticRIPCommandLine: "gs -q -dBATCH ... ..."

in the PPD file.  So that's what I would check first.  Maybe you have
a messed up PPD file -- for whatever reason.  It claims to be reading
/etc/cups/ppd/HPLaserJet1100CUPSv1.1.x.ppd, so obviously that's the
file to look at.  Also, could it be that different PPD files are being
used under different circumstances? (which might explain why the test
page from within the wizard did work, but no printing attempts
afterwards...)

Almut





Re: apache2 and php rendering

2005-09-08 Thread Almut Behrens
On Thu, Sep 08, 2005 at 06:01:07PM -0400, Bernd Prager wrote:
 
 I'm trying to use some Flickr images on my website and wrote a php
 script that gets me the link.
 Within the html page I want to use now a tag like <img
 src="/php/getImage.php" /> for the image.
 If I call the script directly in the browser with
 "http://myserver/php/getImage.php" I get the desired image.
 But when I try to include the img tag above in the /index.html file
 above it doesn't get rendered.
 
 I'm using apache2 and libapache2-mod-php4.
 
 Has anybody done this? What do I miss?

What exactly does the script deliver to the browser?
To be used within <img src="..."/>, the script would have to return
the image data itself, i.e. some stream of MIME type image/jpeg, for
example.  Returning a link/URL to the image won't work here, as the
browser does not resolve indirections in this context.
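
A minimal sketch of what such a script's raw response has to look like
(placeholder bytes stand in for the real JPEG data; the /tmp filename is
just for illustration):

```shell
# The target of <img src=...> must answer with an image/* Content-Type
# header followed by the raw image bytes -- not a URL and not HTML.
{
    printf 'Content-Type: image/jpeg\r\n\r\n'
    printf '\377\330\377'    # real JPEG data begins with these magic bytes
} > /tmp/img-response.bin
```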

You could dynamically create the HTML with the image link properly
interpolated in between the quotes of src="...".  Or use Javascript...

Almut


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED] 
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Desperation with Hylafax and AVM Fritz ISDN Card PCI on a debian system

2005-09-07 Thread Almut Behrens
Hi Andreas,

quick side note: please keep the thread on-list, so other people who
might google this up some time in the future will have a chance to see
if/how the problem was solved.
It's kinda frustrating when you google and only find others having the
same problem, but no solutions... don't you think so? :)
I didn't see anything personal in what you sent to me, so I assume it's
ok if I take it back on-list (I masked out the phone number).


On Wed, Sep 07, 2005 at 11:01:14AM +0200, Andreas Moser wrote:
 
 If I try to send a facsimile from the commandline, it seems to work 
 fine.

Right, so we at least know this part works.  Next thing to check is
whether Hylafax-internal format conversions are performed properly.

Short background info:
c2faxsend (which is the replacement program from the capi4hylafax
package for the program faxsend that comes with Hylafax) accepts
various formats (specified with the -f option), e.g. TIFF -- which
apparently does work, as you just verified.  As default (i.e. without
option -f), it expects the HYLAFAX format. Essentially, this is TIFF
too, but some specific sub-variant (details irrelevant here).  It's
what Hylafax generates as output.  On the input side, Hylafax accepts
Postscript, PDF or TIFF.  It uses several tools (ps2fax, pdf2fax,
tiff2fax) to convert those formats to its (HYLA)FAX format.  Those
tools in turn rely on other external packages, e.g. ghostscript for
PS and PDF conversions.

See below for what you might want to check.

 I used the following command:
 
 alpha:/etc/hylafax# c2faxsend -v -C /etc/hylafax/config.faxCAPI -d 
 02244** -f TIFF /home/andreas/testfax.tif
 Try to connect to fax number 02244** in TIFF mode on controller 1.
 Dial and starting transfer of TIFF-File /home/andreas/testfax.tif with 
 normal resolution.
 Connection established.
StationID = +49 2244 **
BaudRate  = 14400
Flags = HighRes, JPEG, MR_compr, MMR_compr
 Page 1 was sended. - Last Page!
 Fax file completely transfered to CAPI.
 Connection dropped with Reason 0x3400 (No additional information).
 
 In the capi4hylafax log I got the following messages:
 Sep 07 12:31:57.70: [ 7142]: CapiFaxSend - INFO: Try to connect to fax number 
 02244** in TIFF mode on controller 1.
 Sep 07 12:31:57.70: [ 7142]: CapiFaxSend - INFO: Dial and starting transfer 
 of TIFF-File /home/andreas/testfax.tif with normal resolution.
 Sep 07 12:32:09.70: [ 7142]: CapiFaxSend - INFO: Connection established.
 Sep 07 12:32:09.70: [ 7142]: CapiFaxSend - INFO:StationID = +49 2244 
 **
 Sep 07 12:32:09.70: [ 7142]: CapiFaxSend - INFO:BaudRate  = 14400
 Sep 07 12:32:09.70: [ 7142]: CapiFaxSend - INFO:Flags = HighRes, 
 JPEG, MR_compr, MMR_compr
 Sep 07 12:32:34.33: [ 7142]: CapiFaxSend - INFO: Page 1 was sended. - Last 
 Page!
 Sep 07 12:32:34.33: [ 7142]: CapiFaxSend - INFO: Fax file completely 
 transfered to CAPI.
 Sep 07 12:32:50.32: [ 7142]: CapiFaxSend - INFO: Connection dropped with 
 Reason 0x3400 (No additional information).
 
 In the syslog I got the following messages:
 Sep  7 12:32:03 alpha kernel: capilib_new_ncci: kcapi: appl 2 ncci 0x10101 up
 Sep  7 12:32:50 alpha kernel: kcapi: appl 2 ncci 0x10101 down
 
 But if I try to send the same fax with WHFC (Windows Hylafaxclient)
 it does not work.

Just to be sure: with "the same fax" you mean the testfax.tif file
that you successfully sent via c2faxsend?  (I'm asking because of the
".ps" (-> Postscript?) part appearing in the tempfile names in the
tiff2fax conversion commandline below ("doc9.ps.9").  However, I'm not
familiar with Hylafax's tempfile naming conventions, and I currently
don't have an installation to test -- IOW, the .ps might have nothing
to do with Postscript at all...)

 
 Sep  7 12:22:30 alpha HylaFAX[7066]: Filesystem has SysV-style file creation 
 semantics.
 Sep  7 12:22:30 alpha FaxQueuer[6964]: FIFO RECV Sclient/7066:9
 Sep  7 12:22:30 alpha FaxQueuer[6964]: SUBMIT JOB 9
 Sep  7 12:22:30 alpha FaxQueuer[6964]: JOB 9 (suspended dest  pri 127 tts 
 0:00 killtime 3:00:00): CREATE
 Sep  7 12:22:30 alpha FaxQueuer[6964]: Apply CanonicalNumber rules to 
 02244**
 Sep  7 12:22:30 alpha FaxQueuer[6964]: -- match rule ^0, result now 
 +492244**
 Sep  7 12:22:30 alpha FaxQueuer[6964]: -- return result +492244**
 Sep  7 12:22:30 alpha FaxQueuer[6964]: JOB 9 (ready dest +492244** pri 
 127 tts 0:00 killtime 3:00:00): READY
 Sep  7 12:22:30 alpha FaxQueuer[6964]: FIFO SEND client/7066 msg S*
 Sep  7 12:22:30 alpha FaxQueuer[6964]: JOB 9 (ready dest +492244** pri 
 127 tts 0:00 killtime 3:00:00): PROCESS
 Sep  7 12:22:30 alpha FaxQueuer[6964]: JOB 9 (active dest +492244** pri 
 127 tts 0:00 killtime 3:00:00): ACTIVE
 Sep  7 12:22:30 alpha FaxQueuer[6964]: JOB 9 (active dest +492244** pri 
 127 tts 0:00 killtime 3:00:00): PREPARE START
 Sep  7 12:22:30 alpha FaxQueuer[7067]: JOB 9 (active dest +492244** pri 
 127 tts 0:00 killtime 3:00:00):
   

Re: Desperation with Hylafax and AVM Fritz ISDN Card PCI on a debian system

2005-09-06 Thread Almut Behrens
On Tue, Sep 06, 2005 at 04:17:17PM +0200, Andreas Moser wrote:
 Hello Almut,
 
 after changing the rights for the capi20 device, there was no
 difference in the delivery of the facsimile.
 Any other ideas?

Not really ;) -- well, ok, next thing I would try is to manually call
c2faxsend from the commandline.  This should help to narrow down on
which step is failing.  Try something like:

  c2faxsend -vL -C /path/to/config.faxCAPI -d DestNumber -f TIFF test.tif

(it's probably a good idea to use the example TIFF file fritz_pic.tif
that comes with the original package
ftp://ftp.avm.de/tools/capi4hylafax.linux/capi4hylafax-01.03.00.tar.gz
because that's guaranteed to be the correct format...)

What seems a little weird is that you don't get any messages in the
syslog about dialing out to the FAX destination number etc., in between
"CMD START /usr/local/bin/c2faxsend ..." and "CMD DONE: exit status 0".
Also, I wouldn't expect a return status of 0 (normally means OK), when
things apparently are going wrong...

Well, before we try to form any hypotheses about why that is, it might
help to know whether manually faxing out with the above command works,
in principle.  See what you get in /var/spool/hylafax/log/capi4hylafax.

Almut





Re: Desperation with Hylafax and AVM Fritz ISDN Card PCI on a debian system

2005-09-05 Thread Almut Behrens
On Mon, Sep 05, 2005 at 08:53:27PM +0200, Andreas Moser wrote:
 (...)
 Seems so that my controller is working.
 Now I had to check whether the group dialout can access on /dev/capi20 
 and the user uucp is member in this group too.
 
 alpha:/dev# ls -l capi20
 crw--- 1 uucp dialout 68, 0 2005-09-01 14:14 capi20

I have to admit I haven't read your entire problem description (so I
might be missing something essential), but to me this looks like group
dialout does _not_ have read/write permission on /dev/capi20. In other
words, you, as a member of that group, also do not have access (only
user uucp has)...  You probably want to chmod g+rw on the device.
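
The fix would be along these lines (demonstrated here on a scratch file,
since chmod'ing the real /dev/capi20 needs root):

```shell
# Reproduce the reported permissions on a scratch file, then apply the
# suggested fix; on the real system you'd run "chmod g+rw /dev/capi20" as root.
touch /tmp/capi20-demo
chmod 600 /tmp/capi20-demo        # owner-only, as in the ls -l output
chmod g+rw /tmp/capi20-demo       # give the owning group read/write access
ls -l /tmp/capi20-demo | cut -c1-10
# prints: -rw-rw----
```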

Good luck,
Almut





Re: Re: SSL problem + Apache 1.3.33

2005-08-19 Thread Almut Behrens
On Fri, Aug 19, 2005 at 11:45:36AM +0200, Belikov Anton wrote:
 
 Hi All,
 I have the same problem with Apache 2 / mod_ssl under Red Hat. Can you please 
 give me a hint. What was the reason in this case? I also can use 
 http://myserver:443 while https://myserver doesn't work and  I see in 
 access_log
 10.10.10.99 - - [19/Aug/2005:11:39:38 +0200] ?L? 501 405 - -
 10.10.10.99 - - [19/Aug/2005:11:39:38 +0200] ?L? 501 405 - -
 In error log 
 [Fri Aug 19 11:39:38 2005] [error] [client 10.10.10.99] Invalid method in 
 request ?L?
 [Fri Aug 19 11:39:38 2005] [error] [client 10.10.10.99] Invalid method in 
 request ?L?

I suppose you're referring to this old thread
http://lists.debian.org/debian-user/2005/04/msg00274.html

The original poster back then had two IP-based virtual host sections
for the same IP address, the first without a port specification, the
second with :443.

In my final reply in that thread I suggested explicitly specifying the
non-SSL port, too, i.e.:

  <VirtualHost IPaddress:80>
                        ^^^
... HTTP setup ...

  </VirtualHost>

  <VirtualHost IPaddress:443>

... HTTPS setup ...

  </VirtualHost>

the idea being that, in case the virtual host IP without a port would
match _both_ ports, it might inadvertently cause requests on 443 to be
routed to the wrong virtual host section -- in particular, as the
HTTP section appeared first (the order is important when things are
ambiguous).

Unfortunately, I can't tell for sure whether that worked[1], because
I didn't get any feedback from the original poster.  So, if _you_
eventually do solve the issue, _please_ report back on-list to let
other people benefit, who will google this up in the future.
Apparently, this problem occurs rather frequently (during the last
couple of months I got five (!) related inquiries off-list...).

HTH,
Almut

[1] I was able to reproduce the problem myself with a similar config
(without :80 on the first virtual host), when additionally _not_
specifying the Port 80 directive (i.e. only having Listen 80 and
Listen 443).





Re: European chars to ascii

2005-08-19 Thread Almut Behrens
On Fri, Aug 19, 2005 at 09:34:24AM -0400, Tong wrote:
 
 Is there any tools that can convert European characters to plain
 7bit-Ascii?
 
 E.g., ä -> a, ö -> o, etc.

I don't know if there's a better tool, but I would do something like:

$ tr 'äöüß' 'aous' <isolatin1-in >ascii-out

(simply extend the char lists as required)

This only works with a 1-char -> 1-char mapping.  If you rather want
a 1-char -> multiple-char mapping (e.g., in German, we'd typically
substitute ä -> ae, ö -> oe, etc.), you could start with a little
script like this

#!/usr/bin/perl

%mapping = (
    'ä' => 'ae',
    'ö' => 'oe',
    'ü' => 'ue',
    'ß' => 'ss',
    # ...
);

$set = join '', map sprintf("\\x%x", ord $_), keys %mapping;

while (<>) {
    s/([$set])/$mapping{$1}/ge;
    print;
}

Or, if you'd like to specify the special characters' hex codes (in case
you have problems entering them directly...), you could write instead

#!/usr/bin/perl

%mapping = (
    'e4' => 'ae',
    'f6' => 'oe',
    'fc' => 'ue',
    'df' => 'ss',
    # ...
);

$set = join '', map "\\x$_", keys %mapping;

while (<>) {
    s/([$set])/$mapping{sprintf "%x", ord $1}/ge;
    print;
}
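
The same 1-char -> multiple-char substitutions can also be done as a sed
one-liner, in case Perl feels like overkill (a sketch assuming ISO-8859-1
input and GNU sed, whose \xNN escapes are an extension; LC_ALL=C keeps
multibyte locales from choking on the raw bytes):

```shell
# e4/f6/fc/df are the ISO-8859-1 codes for ä/ö/ü/ß, as in the second
# Perl script above; the octal escapes in printf just build sample input.
printf 'Gr\374\337e\n' | LC_ALL=C sed -e 's/\xe4/ae/g; s/\xf6/oe/g; s/\xfc/ue/g; s/\xdf/ss/g'
# prints: Gruesse
```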

Cheers,
Almut


P.S. Normally, you'd use iconv for encoding conversions.  However,
iconv -f 8859_1 -t ASCII <isolatin1-file doesn't work, because ASCII
can only represent a subset of the characters present in 8859_1 -- which
makes iconv complain...





Re: UT2k4 install questions

2005-08-11 Thread Almut Behrens
On Thu, Aug 11, 2005 at 06:36:31PM +, Mike Markiw III wrote:
 
 On the DVD is a file named linux-installer.sh. I su to root and try to 
 execute it when I get the following message:
 bash: ./linux-installer.sh: /bin/sh: bad interpreter: Permission denied

Most likely, the script doesn't have the executable bit set, or the DVD is
mounted 'noexec'.  You have several options, e.g.

* call the shell explicitly, i.e. type
  /bin/sh linux-installer.sh

* copy linux-installer.sh to somewhere where you have write (and execute)
  permission and do a chmod +x linux-installer.sh.  Then, cd back into
  the directory on the DVD where the original script resides, and call
  the other script (the one you copied) with absolute path from there...
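
The first option can be demonstrated on a scratch script standing in for
linux-installer.sh (the /tmp path is just for illustration):

```shell
# A stand-in for the DVD script: no executable bit, as on a noexec mount.
mkdir -p /tmp/dvd-demo
printf '#!/bin/sh\necho "installer ran"\n' > /tmp/dvd-demo/installer.sh
chmod -x /tmp/dvd-demo/installer.sh

# Direct execution would fail with "Permission denied"; calling the
# shell explicitly works regardless of the execute bit:
/bin/sh /tmp/dvd-demo/installer.sh
# prints: installer ran
```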

Cheers,
Almut





Re: stable mod_python apache not working

2005-07-11 Thread Almut Behrens
On Mon, Jul 11, 2005 at 04:57:25PM -0500, Indraneel Majumdar wrote:
 I am unable to get the example from the mod_python tutorial working with
 apache, on debian stable. No error in the server logs. Apache just shows
 me the script contents when I point the browser at it.

To have Python code be handled by mod_python, make sure you have one of
the following directives in the respective section of your apache config:

  AddHandler python-program .py

  (to have files with extension .py be handled by mod_python)
  
or
  SetHandler python-program

  (applies to all files in the scope of the directive)

Unfortunately, this isn't clearly mentioned in the mod_python docs (at
least not in section 2.3.2, "Configuring Apache", where you would look first...).

Normally you also want some mod_python-specific directives, such as

  PythonHandler mod_python.publisher

...depending on what exactly you want to do, of course.

See the docs for what directives are available and what they do.
http://www.modpython.org/live/mod_python-2.7.8/doc-html/
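
Put together, a minimal setup might look like this (a sketch: the
directory path, and the choice of the publisher handler, are illustrative
assumptions, not taken from your config):

```apache
<Directory /var/www/python>
    AddHandler python-program .py
    PythonHandler mod_python.publisher
    PythonDebug On
</Directory>
```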

Cheers,
Almut





Re: Why has find ... -exec rm -i '{}' ';' stopped working?

2005-07-05 Thread Almut Behrens
On Tue, Jul 05, 2005 at 12:41:48PM +0100, Adam Funk wrote:
 michael wrote:
  Well on 'sarge', under bash, the
find . -name 'whatever' -exec rm -i {} ;
  works as expected for me, but the above example exhibits the same
  performance as you note (I'm no 'xargs' expert and can't see what the
  '-0r' option is meant to do)
  
  If I were you I'd check that the first form works from the command line
  and then take it from there. 
 
 Sorry if I wasn't clear.  When I said this:
 
 -- Typing find -name '*~' -exec rm -i '{}' ';' directly
 -- prints a list of rm-questions, doesn't get an answer, and so
 -- does nothing.
 
 I meant that typing find... directly at a shell prompt doesn't work.  I
 also tried it as find -name '*~' -exec rm -i {} ';' and got the same
 problem.  (For some reason I used to have to quote {}.)
 
  Are you running bash under sarge? 
 
 I'm running testing, and dpkg says I'm using bash 3.0-15 and findutils
 4.2.22-1.

It seems to be a bug (or feature?) of find.
(I can even reproduce the behaviour when moving the debian-testing find
binary to a somewhat older SuSE box -- where the command in question
does read from stdin otherwise)...

Actually, it appears to be caused by a change in the upstream sources
of find:  comparing the respective strace outputs, one can observe
that stdin in fact is being closed in the new (4.2.22) version before
launching the child process rm -- while it is not in the old.

Digging a little deeper, one finds in the sources a function which is
most likely responsible (-> close(0), followed by reopening it to
/dev/null):

  static void
  prep_child_for_exec (void)
  {
    const char inputfile[] = "/dev/null";
    /* fprintf(stderr, "attaching stdin to /dev/null\n"); */

    close(0);
    if (open(inputfile, O_RDONLY) < 0)
      {
        /* This is not entirely fatal, since
         * executing the child with a closed
         * stdin is almost as good as executing it
         * with its stdin attached to /dev/null.
         */
        error (0, errno, "%s", inputfile);
      }
  }

(It's called from within launch(), which is handling the option -exec)

This function is simply not present in the old sources (4.1.20).

Well, I guess it's worth filing a bug report, to let the original
authors figure out what it was that made them add this code -- and
whether there is a way to work around the issue.  Apparently, they
didn't think anyone would ever want to do something like you do... ;)

Cheers,
Almut


P.S.  The reason that find ... -print0 | xargs -0r rm -i exhibits the
same behaviour is a different one:  Here, find's stdin filehandle would
somehow have to be passed through to rm (via xargs, which in turn has
its stdin attached to find's stdout), in order for rm to be able to
read from the current interactive tty.  I'd think that such indirect,
bidirectional pipes simply seemed too cumbersome to implement...





Re: Printing on a shared printer from debian to Novell Server

2005-06-30 Thread Almut Behrens
On Thu, Jun 30, 2005 at 09:37:26PM +0700, Prabu Subroto wrote:
 
 I work at a workstation and my workstation is associated in a LAN. My
 internet connection is only through http proxy server. So if I want to
 go to internet, I have to set my internet browser for the ip number and
 the port number of the proxy server such as http://10.152.16.10:8080.
 
 Now, I have to get and install some softwares from the internet for my
 debian linux. I want to use apt-get (I am tired of dependancy problem)
 but connected to the internet only through proxy server. How can I do
 that?
 
 I now /etc/apt/sources.list. but there, I can only define the server of
 destination such as ftp://ftp.de.debian.org but I can not define the
 proxy server of our LAN and also its port number.

I think you need to set (in /etc/apt/apt.conf)

Acquire::http::Proxy "http://user:[EMAIL PROTECTED]:8080"

(leave out the "user:pass@" part, if you don't need to login)

The manpage apt.conf(5) also states it would honour the setting of the
general environment variable http_proxy -- which is also used by
other tools, such as wget.

Similar options exist for FTP, in case you should need those, too.
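
For reference, a corresponding /etc/apt/apt.conf sketch covering both
protocols (proxy host and port copied from your browser example; the FTP
line is the analogous assumption):

```
Acquire::http::Proxy "http://10.152.16.10:8080";
Acquire::ftp::Proxy  "ftp://10.152.16.10:8080";
```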

Almut





Re: Printing on a shared printer from debian to Novell Server

2005-06-30 Thread Almut Behrens
On Thu, Jun 30, 2005 at 10:05:01PM +0200, Almut Behrens wrote:
 On Thu, Jun 30, 2005 at 09:37:26PM +0700, Prabu Subroto wrote:
  
  I work at a workstation and my workstation is associated in a LAN. My
  internet connection is only through http proxy server. So if I want to
  go to internet, I have to set my internet browser for the ip number and
  the port number of the proxy server such as http://10.152.16.10:8080.
  
  Now, I have to get and install some softwares from the internet for my
  debian linux. I want to use apt-get (I am tired of dependancy problem)
  but connected to the internet only through proxy server. How can I do
  that?
  
  I now /etc/apt/sources.list. but there, I can only define the server of
  destination such as ftp://ftp.de.debian.org but I can not define the
  proxy server of our LAN and also its port number.
 
 I think you need to set (in /etc/apt/apt.conf)
 
 Acquire::http::Proxy "http://user:[EMAIL PROTECTED]:8080"

Minor correction... That should in fact be:

Acquire::http::Proxy "http://user:[EMAIL PROTECTED]:8080";





Re: Accessing a program started in another term

2005-06-30 Thread Almut Behrens
On Thu, Jun 30, 2005 at 04:29:50PM -0500, Colin Ingram wrote:
 Lets say, I'm at work and I start a remote session for an interactive 
 program, like octave.  Then I go home with and my remote terminal 
 session and the octave program are still running at work.  Is there a 
 way for me to take control of that terminal and interact with the 
 running octave program?  I use octave as an example but I'm looking for 
 solution (if it exists) that would be generally applicable.

You most likely want 'screen'.  Very useful tool in many respects...

Almut





Re: System exiting due to kernel....

2005-06-30 Thread Almut Behrens
On Thu, Jun 30, 2005 at 08:55:23PM +0200, Marco Calviani wrote:
   ..., but what is an OOM Killer?

OOM = out of memory.

The OOM killer is responsible for killing processes when the system is
running out of virtual memory (so the system itself will stay alive).

(In your case it hit the X server -- which might be considered
suboptimal, with respect to keeping damage to the end user to a
minimum... ;)

Cheers,
Almut





Re: apache 1.33 ?

2005-06-29 Thread Almut Behrens
On Wed, Jun 29, 2005 at 04:38:40PM +0200, Brent Clark wrote:
 ...
 <IfModule mod_alias.c>
  Alias /dspam /var/www/dspam/html/
 </IfModule>
 <VirtualHost dspam.eccotours.local>
   DocumentRoot /var/www/dspam/html
 ...
 
 Every time I type in my URL and leave out the alias, the Dspam alias still
 kicks in and, for the likes of me, I can't figure it out.

I suspect it's not the alias, but rather the DocumentRoot
/var/www/dspam/html in the virtual host section that kicks in...
Have you tried setting that to the appropriate directory?

Almut





Re: Perl upgrade risks

2005-06-29 Thread Almut Behrens
On Wed, Jun 29, 2005 at 05:45:56PM -0300, Antonio Lobato wrote:
 
   I have a Debian Woody server running a lot of services such as 
 cyrus/postfix, apache, mysql-server, jabberd2, and more... Such as a 
 good woody system, my Perl is 5.6.1, and there are tons of scripts 
 that I made and even normal system scripts that uses this perl 
 version. Now I'll install a software (bandersnatch, for jabberd2 chat 
 logging) that needs Perl 5.8.0 a least.
 
   Well, I can pinning the system to woody/oldstable and upgrade only 
 Perl to 5.8.7 (stable), but my question is: Will such Perl upgrade 
 (from 5.6.1 to 5.8.7) break some old script?
   
   This is a production server and I have to be sure that I can do it. 

I don't think there's a clear Yes or No answer. It very much depends on
exactly what features the perl code in question is using.
Having issued that warning, I should add that, according to my
experience with upgrading perl versions, it usually doesn't cause any
problems -- except if the code contains exceptionally dirty hacks.
The main problem here is that you'd typically not know...

So, I think, you essentially have two options:

(1) Read through all perldelta[1] documents to get an idea of what
has changed from version to version, and then somehow figure out whether
any of that applies to the perl code you're using.  (Definitely sounds
like a lot of work, with an unsure outcome...)

(2) Install the new perl 5.8.7 in /usr/local and leave 5.6.1 as it is.
This is probably the safest bet.  In your software that needs 5.8.0,
make sure you're calling the new version, i.e. replace #!/usr/bin/perl
with #!/usr/local/bin/perl (or wherever you've put it).
I'm not entirely sure how involved it is to install the standard debian
package in a different location (such that the binary also pulls in its
correct libs...(!)), so in case of doubt I'd simply build perl from
source.

Good luck,
Almut

[1] with each perl version there's a perldelta.pod file containing the
changes since the previous release (there's also a Changes file, but
that's probably too detailed for the purpose at hand).  You can read it
with perldoc perldelta, but as mentioned, that only contains the most
recent changes.  To get an idea of the cumulative changes from 5.6.1 to
5.8.7, you'd have to read (and merge in your head) all perldelta files
that have appeared in between...  In case you're still not scared off
by now ;)  you can find them all here:
http://perldoc.perl.org/perl.html#Miscellaneous




