> From [EMAIL PROTECTED] Tue Jul 27 12:44:50 2004
> Date: Tue, 27 Jul 2004 11:24:50 -0400
> From: Mike Shaddock <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: LPRng: Question about interaction with Pharos Uniprint
>
> I am running LPRng 3.8.20 on Suns running Solaris 9.  The config is set
> up so that all print jobs are routed to a Pharos Uniprint (version 6.0)
> server running on a Windows box (not sure exactly which version of
> Windows).  When printing individual jobs, everything works fine, but
> when we put the system into production with people printing from all
> over campus, the Windows box would crash at least once a day.  I've been
> looking everywhere that I can think of, but haven't found anyone else
> having this problem.  Is anyone else out there using this combination?

> I'm including an edited version of my printcap file:
>
> lp
>       :client
>       :tc=.general,.eprint
>       :oh=xxx*.acpub.duke.edu
>
> .general
>       :force_localhost@
>       :mx=0
>       :mc=0
>
> .eprint
>       :[EMAIL PROTECTED]
>       :if=/usr/local/libexec/filters/lpf
>       :lpr_bounce=true
>
> Thanks!
>
> Mike Shaddock
> Senior Analyst, IT
> Systems and Core Services
> Office of Information Technology
> Duke University

I have had reports of various print spoolers running on Microsoft
Windows systems dying under heavy or continuous load.  One solution
was to 'rate limit' incoming jobs.

You can do this as follows:

In LPRng/src/common/lpd_jobs.c, at about line 2357, you will find:


    /* we put a timeout before each attempt */
    if( attempt > 0 ){
        n = 8;
        if( attempt < n ) n = attempt;
        n = Connect_interval_DYN * (1 << (n-1)) + Connect_grace_DYN;
        if( Max_connect_interval_DYN > 0 && n > Max_connect_interval_DYN ){
            n = Max_connect_interval_DYN;
        }
        DEBUG1("Service_worker: attempt %d, sleeping %d", attempt, n);
        if( n > 0 ){
            SETSTATUS(&job) _("attempt %d, sleeping %d before retry"), attempt+1, n );
            plp_sleep(n);
        }
    }


   Change this to:

    /* we put a timeout before each attempt, including the first */
    {
        n = 8;
        if( attempt < n ) n = attempt;
        if( n == 0 ) n = Connect_grace_DYN;
        else n = Connect_interval_DYN * (1 << (n-1)) + Connect_grace_DYN;
        if( Max_connect_interval_DYN > 0 && n > Max_connect_interval_DYN ){
            n = Max_connect_interval_DYN;
        }
        DEBUG1("Service_worker: attempt %d, sleeping %d", attempt, n);
        if( n > 0 ){
            SETSTATUS(&job) _("attempt %d, sleeping %d before retry"), attempt+1, n );
            plp_sleep(n);
        }
    }

    Now you will sleep at least Connect_grace_DYN seconds between jobs.
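
    To see what interval the patched logic produces, here is a small
    standalone sketch of the calculation.  The helper name and the
    sample values (grace 2s, interval 10s, cap 60s) are mine for
    illustration, not LPRng defaults:

    #include <stdio.h>

    /* Sketch of the retry-interval computation in the patched code
     * above.  The parameters stand in for the Connect_grace_DYN,
     * Connect_interval_DYN, and Max_connect_interval_DYN settings. */
    static int sleep_interval( int attempt, int grace, int interval, int max )
    {
        int n = 8;
        if( attempt < n ) n = attempt;
        if( n == 0 ) n = grace;         /* first attempt: grace period only */
        else n = interval * (1 << (n-1)) + grace;  /* exponential backoff */
        if( max > 0 && n > max ) n = max;          /* cap the interval */
        return n;
    }

    int main(void)
    {
        int attempt;
        for( attempt = 0; attempt <= 4; ++attempt ){
            printf( "attempt %d -> sleep %d\n", attempt,
                sleep_interval( attempt, 2, 10, 60 ) );
        }
        return 0;
    }

    With these sample values the spooler would sleep 2s before the
    first attempt, then 12s, 22s, 42s, and finally hit the 60s cap.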

    Use the 'connect_grace' value in the printcap to set the interval between jobs.
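
    For example, in a setup like the one quoted above, the grace
    period could be added to the .eprint entry.  This is only a
    sketch; adjust it to your own printcap layout:

    .eprint
          :[EMAIL PROTECTED]
          :if=/usr/local/libexec/filters/lpf
          :lpr_bounce=true
          :connect_grace=2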

    A value of 2 (2 seconds) seemed to solve somebody else's problems.

Patrick Powell                 Astart Technologies
[EMAIL PROTECTED]            6741 Convoy Court
Network and System             San Diego, CA 92111
  Consulting                   858-874-6543 FAX 858-751-2435
LPRng - Print Spooler (http://www.lprng.com)


