Hi all,

Replying to my own email here.  I think this might have been related
to Red Hat 9 kernel stuff, as I suspected in my original email.  I found
this in the RH9 release notes:

 If an application does not work properly with NPTL, it can be run using
 the old LinuxThreads implementation by setting the following
 environment variable:

 LD_ASSUME_KERNEL=<kernel-version>

 The following versions are available:

 - 2.4.1 - Linuxthreads with floating stacks

 - 2.2.5 - Linuxthreads without floating stacks

So, I've been running lpd with LD_ASSUME_KERNEL=2.4.1.
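In case it's useful to anyone trying the same workaround, this is
roughly how I'm setting it before starting lpd (the path to lpd is
whatever your init script uses - /usr/sbin/lpd below is just an
example, not necessarily where yours lives):

```shell
# Ask glibc to use the old LinuxThreads implementation (with floating
# stacks) instead of NPTL, then start lpd in that environment.
LD_ASSUME_KERNEL=2.4.1
export LD_ASSUME_KERNEL
echo "LD_ASSUME_KERNEL=$LD_ASSUME_KERNEL"
# exec /usr/sbin/lpd    # example path - substitute your init script's lpd
```

Setting it in the init script (rather than a login shell) matters,
since it only affects processes that inherit the variable.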

I'm running LPRng-3.8.27.

However, I'm still seeing runaway lpd processes - it's always the
'server' process, and it consumes as much CPU as it can.  An 'lpc
kill' fixes the problem, but obviously this impacts the overall
reliability of the printing system - it's happened twice in the last
day or so.

I was wondering if anyone else has seen this problem at all.  Here's
an example of it happening:

[kant]toby: lpq -Pat8
Printer: [EMAIL PROTECTED] 'HP Laserjet 8150DN in 5.05 (Level 5 West Lab) AT'
 Queue: 7 printable jobs
 Server: pid 8178 active
 Status: job '[EMAIL PROTECTED]' removed at 16:02:46.028
 Rank   Owner/ID               Pr/Class Job Files                 Size Time
1      [EMAIL PROTECTED]                A   372 print.ps            224344 16:04:36
[kant]toby:

... with the 8178 process chewing up all the CPU, until an 'lpc kill'
kills it and gets the queue moving again.  Note that strace doesn't
reveal anything at all - not a single line of output.  I have enabled
debugging on this queue, so will hopefully get some information
if/when I get the next runaway.
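One thing worth noting for anyone else poking at this: a plain
'strace -p <pid>' only attaches to the single task you name, so if the
spinning is actually happening in a cloned thread or child you may see
nothing at all.  Next time I'll try following children too - something
like this (the trace file path is just an example I've picked):

```shell
# Build the strace command for the runaway server process; 8178 is the
# pid from the lpq output above.  -f follows tasks created with
# fork()/clone(), -tt adds microsecond timestamps to each syscall,
# -o writes the trace to a file instead of the terminal.
PID=8178
CMD="strace -f -tt -o /tmp/lpd-runaway.trace -p $PID"
echo "$CMD"
# run it as root while the process is spinning:  eval "$CMD"
```

That should at least show whether the process is looping on a syscall
or spinning entirely in userspace (in which case the trace stays
empty, as I saw).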

I'd appreciate any advice people can offer, or reports from anyone who
has seen similar problems.  I'd also be interested to hear whether
people are successfully running LPRng under RH9.  It's interesting
that I see runaways both before and after setting the
LD_ASSUME_KERNEL environment variable.

Thanks
Toby 

On Tue, 11 May 2004, Toby Blake wrote:

> Date: Tue, 11 May 2004 13:28:06 +0100 (BST)
> From: Toby Blake <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: lpd multithreaded? futex lockups under RH9
> 
> Hi all,
> 
> Firstly, I should point out that my knowledge of kernel type things is
> very limited, so some of this may make little sense.
> 
> Secondly, is lpd a multithreaded application?  I ask because I'm seeing
> the occasional runaway lpd process (it's always the queue server
> process), which I think might be connected with multithreading under
> Red Hat 9.  This process uses up close to 100% of CPU time and prevents
> the queue it serves from doing anything.  It will stop doing this if I
> do an 'lpc kill' of the process.
> 
> What I see when doing a strace of the process is...
> 
> [monotype]root: ps -ef|grep 24016
> lp       24016 13270 18 May09 ?        07:31:54 lpd (Server) 'bp8'
> [monotype]root: strace -p 24016
> futex(0x42133220, FUTEX_WAIT, 2, NULL <unfinished ...>
> [monotype]root:
> 
> ... which some web searching reveals to be connected with NPTL,
> futexes, multithreading, etc., under Red Hat 9, which is where the
> limit of my kernel knowledge is comfortably surpassed.
> 
> Has anyone else seen this and, more importantly, can they suggest a
> fix?
> 
> Many thanks in advance
> Toby Blake
> University of Edinburgh
> 
> 


