Re: Failing to reconnect after Oracle shutdown abort (Apache::DBI)

1999-11-02 Thread Greg Stark


Tim Bunce [EMAIL PROTECTED] writes:

 Has anyone experienced a situation where a process (httpd for example)
 can't reconnect to Oracle after a "shutdown abort"?
 
 Tim.

As far as I can tell we never get a clean reconnection after any sort of
connection problem. I don't even think it takes a shutdown abort to cause
this.

-- 
greg



[Perl Q]: Calendar function

1999-11-02 Thread Martin A. Langhoff

hi list,

[sorry for the OT] I'm about to develop a monthly calendar,
something akin to netscape's scheduler. To start, I'm looking for a
reliable function to calculate each day of any given month/year.

Or pointers to a usable formula to calculate it.

I don't really mind doing it myself, just trying not to reinvent the
calendar.

Thanks!


martin

ps: I know most of you laugh at such a Q. This must be a college
exercise, but I've never been into a programming college. Sorry.

--
   - Martin Langhoff @ S C I M  Multimedia Technology -
- http://www.scim.net  | God is real until  -
- mailto:[EMAIL PROTECTED]  | declared integer   -




Memory problems

1999-11-02 Thread Clinton Gormley

Hi

I had huge problems yesterday.  Our web site made it into the Sunday
Times and has had to serve 1/2 million requests in the last 2 days.

Had I set it up to have proxy servers and a separate mod_perl server?
No.  DOH!  So what happened to my 1Gig baby? It died. A sad and unhappy
death.

I am in the process of fixing that, but if anybody could help with this
question, it'd be appreciated.

I am running Apache 1.3.9, mod_perl 1.21, Linux 2.2.11, mysql
3.23.3-alpha.

What happened was this:  My memory usage went up and up until I got "Out
of memory" messages and MySQL bailed out.  Memory usage was high, and the
server was swapping as well.

So I thought - restart MySQL and restart Apache.  But I couldn't reclaim
memory.  It was just unavailable.  How do you reclaim memory other than
by stopping the processes or powering down?  Is this something that
might have happened because it went past the Out of Memory stage?

Thanks

Clint



Re: Memory problems

1999-11-02 Thread Stas Bekman

 I had huge problems yesterday.  Our web site made it in to the Sunday
 Times and has had to serve 1/2 million request in the last 2 days.

Oh, I thought there was a /. effect, now it's a sunday effect :)

 Had I set it up to have proxy servers and a separate mod_perl server?
 No.  DOH!  So what happened to my 1Gig baby? It died. A sad and unhappy
 death.
 
 I am in the process of fixing that, but if anybody could help with this
 question, it'd be appreciated.
 
 I am running Apache 1.3.9, mod_perl 1.21, Linux 2.2.11, mysql
 3.23.3-alpha.
 
 What happened was this:  My memory usage went up and up until I got "Out
 of memory" messages MySQL bailed out.  Memory usage was high, and the
 server was swapping as well.  

First, what you should have done in the first place is set MaxClients to
a number such that, even in the worst case of every process growing to
X size in memory, your machine wouldn't swap. That will probably return
an error to some of the users when the processes can't queue all
the requests, but it will never bring your machine down!

Other than that, you can set a limit on resources if you need to; see
Apache::SizeLimit, Apache::GTopLimit and BSD::Resource.
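As a rough illustration of the arithmetic (all numbers here are made up,
not from the original poster's setup): if each child can grow to 10 MB in
the worst case and you can spare about 400 MB of real memory for Apache,
MaxClients should be around 40. A minimal configuration sketch, assuming
the Apache::SizeLimit interface of that era:

```
# httpd.conf (hypothetical numbers): with ~400 MB spare RAM and a 10 MB
# worst-case child, cap the pool at 40 children so the box never swaps
MaxClients 40

# Apache::SizeLimit is loaded from startup.pl, where the ceiling is set
# in Perl, e.g.:  $Apache::SizeLimit::MAX_PROCESS_SIZE = 10240;  # in KB
# then run the size check after every request:
PerlFixupHandler Apache::SizeLimit
```

A child that grows past the limit is killed after finishing its request,
so a single leaking process can never drag the whole machine into swap.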

 So I thought - restart MySQL and restart Apache.  But I couldn't reclaim
 memory.  It was just unavailable.  How do you reclaim memory other than
 by stopping the processes or powering down?  Is this something that
 might have happened because it went past the Out of Memory stage?

Sure, when a machine starts heavy swapping you might wait for hours
before it stabilizes, since all it does is swap pages in and
immediately swap them out again. But even when it's finished you won't
see the swap zeroed - it will still show as used when you check it with
top (or another tool), even though it isn't actually needed as long as
you have enough free real memory. What I usually do is

swapoff /dev/hdxxx 
swapon  /dev/hdxxx

on the swap partition, if I know that there is enough real memory
available to absorb the data that will be deleted from swap.
It might take a while, and make sure that you have ENOUGH memory to
absorb it, otherwise the machine will get stuck (I wouldn't do it on a
live server, for sure :)

Just remember that what you see in the stats is not what is really being
used, since Linux and other OSes do lots of caching... 

Hope this helps...
___
Stas Bekman  mailto:[EMAIL PROTECTED]www.singlesheaven.com/stas  
Perl,CGI,Apache,Linux,Web,Java,PC at  www.singlesheaven.com/stas/TULARC
www.apache.org   www.perl.com  == www.modperl.com  ||  perl.apache.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com



OT: Re: mod-perl logo

1999-11-02 Thread Keith G. Murphy

Chris Thompson wrote:
 
 On Mon, Nov 01, 1999 at 04:42:50PM -0400, Neil Kandalgaonkar wrote:
  Joshua Chamas [EMAIL PROTECTED] wrote:
  That said, they do allow non-profits and others to use the camel, e.g. the
  Perl Mongers. It's not evil, they're just trying to protect a trademark
  which they built. AFAIK no one associated a camel w/perl before the ORA
  books.
 
 There's a slightly different reason that you are all missing. It's not
 that they WANT to defend their trademark, it's that they HAVE to defend
 their trademark, or they lose it. As I recall under US trademark law, if I
 can prove that you knew of a use of your trademark and did nothing, The
 courts can say that you werent defending it, and take it away.
 
[cut]
 
 Just goes to show, in the US if you dont vigorously defend your rights to
 any registered or implicit trademark, you can lose it.
 
Trademark law is probably designed to avoid the very kind of situation
that is going on with Unisys and GIFs.  Unfortunately, that's a
*patent*.



Re: setting cookies?

1999-11-02 Thread Andrei A. Voropaev

On Mon, Nov 01, 1999 at 05:03:58PM -0500, Robin Berjon wrote:
 I've never tried this but doesn't sending two 401s in a row for the same
 document have the auth popup appear again ?

I feel like this topic gets slightly confusing. Browser sends request,
gets 401 back, asks user for username and password if it doesn't have
one cached already. If it has one cached for this particular realm
then it attempts to send the cached values. If in response it gets 401
again then it asks user for new username and password for this realm.
As far as I know it always takes 2 requests to get protected
document. First one returns with 401 code and realm for authentication,
second request is done with appropriate user name and password.

So if for some reason you decide that some username and password is no
longer valid, then you should make sure your authentication handler says
no every time they are sent, no matter how many times that happens.
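That invariant can be sketched in plain Perl (the function and variable
names here are made up for illustration and are not part of any real
handler API): once a username is revoked, the check must refuse it
forever, however many times the browser replays the cached credentials.

```perl
use strict;

my %revoked;   # usernames whose cached credentials must never work again

# True only for a known user with the right password who has not been
# revoked; a revoked user is refused unconditionally.
sub credentials_ok {
    my ($user, $pass, $valid) = @_;
    return 0 if $revoked{$user};    # revoked: the answer is always no
    return defined $valid->{$user} && $valid->{$user} eq $pass;
}

my %valid = (alice => 's3cret');
print credentials_ok('alice', 's3cret', \%valid) ? "ok\n" : "no\n";  # prints "ok"
$revoked{alice} = 1;               # credentials declared invalid
print credentials_ok('alice', 's3cret', \%valid) ? "ok\n" : "no\n";  # prints "no"
```

In a real mod_perl handler the refusal would translate into returning
AUTH_REQUIRED again, which is what makes the browser re-prompt the user.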

Andrei

-- 



Re: [Perl Q]: Calendar function

1999-11-02 Thread Naren Dasu

Hi,
Check out the man pages for strftime and the perldoc pages for
localtime for more detailed info.


For example:

use POSIX ();
($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
$currentDate = POSIX::strftime("%Y%m%d", localtime(time));
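To build a monthly calendar from those pieces, one reliable approach is
to let localtime supply the weekday for each day of the month. A minimal
sketch using only core Time::Local and localtime (month is 1-12 here;
the helper name is made up):

```perl
use strict;
use Time::Local;

# Number of days in a month, with the full Gregorian leap-year rule.
sub days_in_month {
    my ($month, $year) = @_;   # month 1-12
    my @len = (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31);
    my $leap = ($year % 4 == 0 && $year % 100 != 0) || $year % 400 == 0;
    return ($month == 2 && $leap) ? 29 : $len[$month - 1];
}

my @wday_name = qw(Sun Mon Tue Wed Thu Fri Sat);
my ($month, $year) = (11, 1999);
for my $day (1 .. days_in_month($month, $year)) {
    # timelocal takes a 0-based month; noon avoids DST edge cases
    my $t = timelocal(0, 0, 12, $day, $month - 1, $year);
    my $wday = (localtime $t)[6];
    printf "%2d %s\n", $day, $wday_name[$wday];   # first line: " 1 Mon"
}
```

From the day-1 weekday and the month length you have everything needed
to lay out a Netscape-scheduler-style month grid.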


Hope this helps 

naren 


At 10:12 AM 11/2/99 +, Martin A. Langhoff wrote:
hi list,

[sorry for the OT] I'm about to develop a monthly calendar,
something akin to netscape's scheduler. To start, I'm looking for a
reliable function to calculate each day of any given month/year.

Or pointers to a usable formula to calculate it.

I don't really mind doing it myself, just trying not to reinvent the
calendar.

Thanks!


martin

ps: I know most of you laugh at such a Q. This must be a college
exercise, but I've never been into a programming college. Sorry.

--
   - Martin Langhoff @ S C I M  Multimedia Technology -
- http://www.scim.net  | God is real until  -
- mailto:[EMAIL PROTECTED]  | declared integer   -






Checking for valid dates

1999-11-02 Thread Tubbs, Derric L

Is there a reasonably easy method to make sure that an entered date is
valid, I.E. is not 30 Jan 99?  I am using Time::Local to convert a date
entered through an HTML form into the epoch offset (or whatever you call it)
and would like to make sure that valid dates are entered.  Thanks

Derric L. Tubbs
CITIS Administrator
Boeing - Fort Walton Beach, Florida
[EMAIL PROTECTED]
(850)302-4494



Re: Checking for valid dates

1999-11-02 Thread Jeffrey Baker

"Tubbs, Derric L" wrote:
 
 Is there a reasonably easy method to make sure that an entered date is
 valid, I.E. is not 30 Jan 99?  I am using Time::Local to convert a date
 entered through an HTML form into the epoch offset (or whatever you call it)
 and would like to make sure that valid dates are entered.  Thanks

Just because I'm curious, please explain what isn't valid about 30 Jan
99.

ObPerl: The Date::Calc module might be what you need.
-- 
Jeffrey W. Baker * [EMAIL PROTECTED]
Critical Path, Inc. * we handle the world's email * www.cp.net
415.808.8807



Re: Memory problems

1999-11-02 Thread Greg Stark


Stas Bekman [EMAIL PROTECTED] writes:

  I had huge problems yesterday.  Our web site made it in to the Sunday
  Times and has had to serve 1/2 million request in the last 2 days.
 
 Oh, I thought there was a /. effect, now it's a sunday effect :)

The original concept should be credited to Larry Niven; he called the effect
"flash crowds".

  Had I set it up to have proxy servers and a separate mod_perl server?
  No.  DOH!  So what happened to my 1Gig baby? It died. A sad and unhappy
  death.

I strongly suggest you move the images to a separate hostname altogether. The
proxy is a good idea but there are other useful effects of having a separate
server altogether that I plan to write about in a separate message sometime.
This does mean rewriting all your img tags though.

  What happened was this:  My memory usage went up and up until I got "Out
  of memory" messages MySQL bailed out.  Memory usage was high, and the
  server was swapping as well.  
 
  So I thought - restart MySQL and restart Apache.  But I couldn't reclaim
  memory.  It was just unavailable.  How do you reclaim memory other than
  by stopping the processes or powering down?  Is this something that
  might have happened because it went past the Out of Memory stage?

Have you rebooted yet? Linux has some problems recovering when it runs out of
memory really badly. I haven't tried debugging it, but our mail exchangers have
done some extremely wonky things after running out of memory, even once
everything had returned to normal. At one point non-root users couldn't fork;
they just got "Resource unavailable", but root was fine and memory usage was low.

 First, what you had to do in first place, is to set MaxClients to
 such a number, that when you take the worst case of the process growing to
 X size in memory, your machine wouldn't swap. Which will probably return
 an Error to some of the users, when processes would be able to queue all
 the requests, but it would never keep your machine down!

I claim MaxClients should only be large enough to force 100% cpu usage whether
from your database or the perl script. There's no benefit to having more
processes running if they're just context switching and splitting the same
resources finer. Better to queue the users in the listen queue.

On that note you might want to set the BackLog parameter (I forget the precise
name), it depends on whether you want users to wait indefinitely or just get
an error.

-- 
greg



Re: setting cookies?

1999-11-02 Thread Greg Stark


I think if you send a 401 in response to a request that contained auth data,
the user will typically see an "Authentication failed" box, which may look bad
compared to just getting the password dialog.

Actually I couldn't get this to work a while back, but I didn't try very hard.


"Andrei A. Voropaev" [EMAIL PROTECTED] writes:

 On Mon, Nov 01, 1999 at 05:03:58PM -0500, Robin Berjon wrote:
  I've never tried this but doesn't sending two 401s in a row for the same
  document have the auth popup appear again ?
 
 I feel like this topic gets slightly confusing. Browser sends request,
 gets 401 back, asks user for username and password if it doesn't have
 one cached already. If it has one cached for this particular realm
 then it attempts to send the cached values. If in response it gets 401
 again then it asks user for new username and password for this realm.
 As far as I know it always takes 2 requests to get protected
 document. First one returns with 401 code and realm for authentication,
 second request is done with appropriate user name and password.
 
 So if for some reason you decide that some user name and password is
 not valid any more then you should make sure that if they are sent any
 number of  times later then your authentication handler says no
 always.
 
 Andrei
 
 -- 
 

-- 
greg



Re: Checking for valid dates -- off topic!

1999-11-02 Thread Neil Kandalgaonkar

At 10:33 -0800 1999-11-02, Tubbs, Derric L wrote:
Is there a reasonably easy method to make sure that an entered date is
valid, I.E. is not 30 Jan 99?  I am using Time::Local to convert a date
entered through an HTML form into the epoch offset (or whatever you call it)
and would like to make sure that valid dates are entered.  Thanks

This isn't on topic. Could people please stick to the list topic? If you
are looking for general perl help, please try other avenues.


Anyway, as you may already know, Time::Local does not reject impossible dates.

[neil@tux neil]$ perl -MTime::Local -e 'print scalar localtime ( timelocal (0,0,0,30,1,1999) ) . "\n"'
Tue Mar  2 00:00:00 1999

As usual, if you have a problem you know other people have dealt with you
should always check the CPAN http://search.cpan.org/ first.

Date::Calc has a check_date function.
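If you'd rather stay with core modules, you can also validate by
round-tripping through timelocal/localtime: if the day or month comes
back changed, the input rolled over and was impossible. A sketch (the
function name is made up; the eval guards against Time::Local versions
that croak on out-of-range days instead of rolling over):

```perl
use strict;
use Time::Local;

# True iff (day, month, year) names a real calendar date.
sub valid_date {
    my ($day, $month, $year) = @_;   # month 1-12, 4-digit year
    my $t = eval { timelocal(0, 0, 12, $day, $month - 1, $year) };
    return 0 unless defined $t;
    my (undef, undef, undef, $d, $m, $y) = localtime $t;
    return $d == $day && $m == $month - 1 && $y + 1900 == $year;
}

print valid_date(30, 1, 1999) ? "valid\n" : "invalid\n";   # prints "valid"
print valid_date(30, 2, 1999) ? "valid\n" : "invalid\n";   # prints "invalid"
```

Either way the check is cheap enough to run on every form submission
before converting the date for storage.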



--
Neil Kandalgaonkar [EMAIL PROTECTED]
Systems Architect, Stylus Inc.   http://www.stylus.ca/





Re: setting cookies?

1999-11-02 Thread Wyman Eric Miles


Success!

The problem is, the tokencode sent by the user expires the instant its
validity is determined.  That the browser caches this and returns it over
and over is not only a nuisance, it can cause the SecurID server to
disable the token.

Problem was, the client kept coughing up an invalid cookie which was
checked, deemed invalid, and the AUTH_REQUIRED sent back.  Just made a
loop the module could never escape.  Now I (correctly) hand expired
cookies off to the SecurID portion of our show, which forces another basic
auth.

At any rate, point is, two 401s in quick succession will throw an
authorization failed message at the user, then prompt for a new
username/password.  I haven't had a user who didn't understand, in some
vague way, that his surfing had come to an end and he'd have to fish the
tokencard out one more time.

Thanks!

Wy


On 2 Nov 1999, Greg Stark wrote:

 
 I think if you send a 401 in response to a request that contained auth data
 the user will typically see a "Authentication failed" box, which may look bad
 compared to just getting the password dialog.
 
 Actually I couldn't get this to work a while back, but I didn't try very hard.
 
 
 "Andrei A. Voropaev" [EMAIL PROTECTED] writes:
 
  On Mon, Nov 01, 1999 at 05:03:58PM -0500, Robin Berjon wrote:
   I've never tried this but doesn't sending two 401s in a row for the same
   document have the auth popup appear again ?
  
  I feel like this topic gets slightly confusing. Browser sends request,
  gets 401 back, asks user for username and password if it doesn't have
  one cached already. If it has one cached for this particular realm
  then it attempts to send the cached values. If in response it gets 401
  again then it asks user for new username and password for this realm.
  As far as I know it always takes 2 requests to get protected
  document. First one returns with 401 code and realm for authentication,
  second request is done with appropriate user name and password.
  
  So if for some reason you decide that some user name and password is
  not valid any more then you should make sure that if they are sent any
  number of  times later then your authentication handler says no
  always.
  
  Andrei
  
  -- 
  
 
 -- 
 greg
 
 

Wyman Miles
Senior Systems Administrator, Rice University, Texas.
(713) 737-5827, e-mail:[EMAIL PROTECTED], pager:[EMAIL PROTECTED]



RE: Checking for valid dates -- off topic!

1999-11-02 Thread Tubbs, Derric L

Sorry about that, you're absolutely right.  I got several responses and they
solved my problem.  Thanks.  And now back to the topic ...

 --
 From: Neil Kandalgaonkar[SMTP:[EMAIL PROTECTED]]
 Sent: Tuesday, November 02, 1999 12:40 PM
 To:   Tubbs, Derric L; '[EMAIL PROTECTED]'
 Subject:  Re: Checking for valid dates -- off topic!
 
 At 10:33 -0800 1999-11-02, Tubbs, Derric L wrote:
 Is there a reasonably easy method to make sure that an entered date is
 valid, I.E. is not 30 Jan 99?  I am using Time::Local to convert a date
 entered through an HTML form into the epoch offset (or whatever you call
 it)
 and would like to make sure that valid dates are entered.  Thanks
 
 This isn't on topic. Could people please stick to the list topic? If you
 are looking for general perl help, please try other avenues.
 
 
 Anyway, as you may already know, Time::Local does not reject impossible
 dates.
 
 [neil@tux neil]$ perl -MTime::Local -e 'print scalar localtime ( timelocal
 (0,0,0,30,1,1999) ) . "\n"'
 Tue Mar  2 00:00:00 1999
 
 As usual, if you have a problem you know other people have dealt with you
 should always check the CPAN http://search.cpan.org/ first.
 
 Date::Calc has a check_date function.
 
 
 
 --
 Neil Kandalgaonkar [EMAIL PROTECTED]
 Systems Architect, Stylus Inc.   http://www.stylus.ca/
 
 
 



validating dates

1999-11-02 Thread Tubbs, Derric L

Sorry that I had put this on the modperl list.  I got several responses both
on the list and privately and they solved my problem.  Thanks

Derric L. Tubbs
CITIS Administrator
Boeing - Fort Walton Beach, Florida
[EMAIL PROTECTED]
(850)302-4494



OT: database redundancy in web env

1999-11-02 Thread Renzo Toma


Hi guys, I know it's totally off-topic and probably better suited for the
DBI list, but I believe this list is read by more experienced people.
Anyway, my apologies.

I am looking for working setups for database redundancy in a web
environment; 24/7 stuff!!

Using a farm of machines running mod_perl and slimmed-down Apaches behind a
box like the ArrowPoint content smart switch (a wicked layer 5 load
balancer) has proved successful. But how do you add redundancy in the
back end, the databases?!

The only things I can think of are:

- replication (like Oracle does, but suitable for web ?!?)
- multiplexing process to forward write-transactions to all db's

.. and they both don't really sound smashing :(

I'd love to hear from Ask, the IMDb guys (Rob, you alive?!) or of course
anyone else about how they maintain ~100% uptime. And again, sorry for the
off-topic.

Cheers,

-Renzo


ps. anyone working with BroadVision products? Like to talk..



mod_perl programming logic error

1999-11-02 Thread Michael Douglass


(Note: I'm no longer on the mailing list; so if you wish to respond
to me, please be certain to include my email address in the response.
Also, [EMAIL PROTECTED] no longer works FYI.  Thanks.)

Granted we're running an older copy of mod_perl; but when we noticed
that we were getting hammered by the spam of "rwrite returned -1"
in our log files, I took a look at the source code.  I then compared
it to the current source code; I believe I have found a programming
logic error that is still in the current version.  Below is the
code (from the current version) that I believe needs to be modified
for correctness (in the write_client function):

while(len > 0) {
    sent = 0;
    if(len < HUGE_STRING_LEN) {
        sent = rwrite(buffer, len, r);
    }
    else {
        sent = rwrite(buffer, HUGE_STRING_LEN, r);
        buffer += HUGE_STRING_LEN;
    }
    if(sent < 0) {
        rwrite_neg_trace(r);
        break;
    }
    len -= sent;
    RETVAL += sent;
}

Notice that if len < HUGE_STRING_LEN, we REQUEST to write 'len' bytes,
but we only write 'sent' bytes.  If 0 < sent < len, then we iterate
back over this while loop because len > 0 after the len -= sent command.
On the next write, however, we are rewriting from the position the
buffer had on the LAST write, because the buffer was not incremented.

Notice also that if len >= HUGE_STRING_LEN, we are requesting
to write HUGE_STRING_LEN bytes, but we only write 'sent' bytes.  Now,
if sent < HUGE_STRING_LEN, then we decrement our len by the correct
number of bytes--but we increment our buffer HUGE_STRING_LEN - sent
bytes beyond the correct position.

Below is code that corrects for these counting errors.  I've consolidated
the rwrite into one call, using the ?: operator to decide the length
to request.  Then I've incremented buffer by the amount actually
sent--no matter how much is written, this increment will be correct.

while(len > 0) {
    sent = rwrite (buffer,
                   (len < HUGE_STRING_LEN) ? len : HUGE_STRING_LEN,
                   r );
    if(sent < 0) {
        rwrite_neg_trace(r);
        break;
    }
    buffer += sent;
    len -= sent;
    RETVAL += sent;
}

I look forward to any comments you have on this; including "you've missed
the point of the function entirely" if that is the case. :)

-- 
Michael Douglass
Texas Networking, Inc.

  Any sufficiently advanced bug is indistinguishable from a feature.
-- from some indian guy



make test errors

1999-11-02 Thread Scott R. Every

trying to install mod_perl and everything appears to compile fine, but a
make test gives the following errors:
make[1]: Leaving directory `/data/test/ssl_apache/mod_perl-1.21/Util'
../apache_1.3.9/src/httpd -f `pwd`/t/conf/httpd.conf -X -d `pwd`/t 
httpd listening on port 8529
will write error_log to: t/logs/error_log
letting apache warm up...\c
Syntax error on line 3 of
/data/test/ssl_apache/mod_perl-1.21/t/conf/httpd.conf:
Invalid command '=pod', perhaps mis-spelled or defined by a module not
included
in the server configuration
done
/usr/bin/perl t/TEST 0
still waiting for server to warm upnot ok
server failed to start! at t/TEST line 95.
make: *** [run_tests] Error 9


Any ideas why?

s

--
Scott R. Every - mailto:[EMAIL PROTECTED]
EMJ Internet - http://www.emji.net
voice : 1-888-258-8959  fax : 1-919-363-4425



Apache::ASP

1999-11-02 Thread Andrew Mayo

We are trying to store references in session variables but for some reason
this does not work. The reference cannot be regained.

For example, in simple Perl I can create an array containing two anonymous
hashes, then place a reference to this array in
$d, then dereference $d to recover the array.

@c[0]={'k1','v1','k2','v2'};
@c[1]={'k3','v3','k4','v4'};
$d=\@c;
print $$d[0]{'k1'},"\n";
@e=@$d;
print $e[0]{'k1'},"\n";

Using a session variable instead of $d allows the assignment but the session
variable cannot then be dereferenced successfully. The problem also occurs
with Apache::DBI database handles, which are some kind of hash reference.
These cannot be stored in session variables, either.

I assume there is some arcane Perl reason for this, but I lack the guru
skills to divine it. Is there any way to store non-scalar variables in
session variables?



mod_perl and DBI

1999-11-02 Thread Dan Srebnick

I'm making a first attempt to run a working Perl CGI script under mod_perl.
It uses perl DBI successfully under CGI.  When invoking the script under
mod_perl, I get the following error:

[Tue Nov  2 11:49:43 1999] [error] Can't load
'/usr/lib/perl5/site_perl/5.005/i386-linux/auto/DBI/DBI.so' for module
DBI: /usr/lib/perl5/site_perl/5.005/i386-linux/auto/DBI/DBI.so: undefined
symbol: PL_dowarn at /usr/lib/perl5/5.00503/i386-linux/DynaLoader.pm line
169.

Would someone kindly point me in the right direction?

Thanks,

Dan



Re: Checking for valid dates

1999-11-02 Thread Michael J. Miller

I did this using a JavaScript function that runs before the form is
submitted, to validate the date locally (in the client, using JavaScript
Date objects) before submission.  It has the side benefit of giving the
user real-time feedback without a round trip to the server.

Brgds,

Mike.

On Tue, 2 Nov 1999 10:33:22 -0800, Tubbs, Derric L wrote:

Is there a reasonably easy method to make sure that an entered date is
valid, I.E. is not 30 Jan 99?  I am using Time::Local to convert a date
entered through an HTML form into the epoch offset (or whatever you call it)
and would like to make sure that valid dates are entered.  Thanks

Derric L. Tubbs
CITIS Administrator
Boeing - Fort Walton Beach, Florida
[EMAIL PROTECTED]
(850)302-4494





RE: Memory problems

1999-11-02 Thread clinton

Thanks Greg

 I strongly suggest you move the images to a separate hostname
 altogether. The
 proxy is a good idea but there are other useful effects of
 having a separate
 server altogether that I plan to write about in a separate
 message sometime.
 This does mean rewriting all your img tags though.

Look forward to that with interest

 Have you rebooted yet? Linux has some problems recovering
 when you run out of
 memory really badly. I haven't tried debugging but our mail
 exchangers have
 done some extremely wonky things once they ran out of memory even once
 everything had returned to normal. Once non-root users
 couldn't fork, they
 just got "Resource unavailable" but root was fine and memory
 usage was low.

I have rebooted.  Eventually what happened was that my ethernet driver
stopped working - gave some error message about trying to restart transition
zone (I think?  I wasn't actually there - I had to get others to pull the
plug to power down)

  First, what you had to do in first place, is to set MaxClients to
  such a number, that when you take the worst case of the
 process growing to
  X size in memory, your machine wouldn't swap. Which will
 probably return
  an Error to some of the users, when processes would be able
 to queue all
  the requests, but it would never keep your machine down!

Done


 I claim MaxClients should only be large enough to force 100%
 cpu usage whether
 from your database or the perl script. There's no benefit to
 having more
 processes running if they're just context switching and
 splitting the same
 resources finer. Better to queue the users in the listen queue.

I have two PIII 500's, so CPU usage is no problem.  Amazingly, it's the 1Gig
of memory which expires first.


 On that note you might want to set the BackLog parameter (I
 forget the precise
 name), it depends on whether you want users to wait
 indefinitely or just get
 an error.
Sounds like the route to go.  I'm also busy implementing the proxy server
bit.

Thanks a lot Greg

Clint



referenced symbol not found error

1999-11-02 Thread Arkadiy Goykhberg

Hello, I'm trying to compile mod_perl-1.21 as a DSO module for apache 
version 1.3.9 on Solaris 2.6, using gcc version 2.8.1 and perl version 
5.00562. Right now I'm using the out-of-the-box vanilla configuration of apache.

Everything compiles OK, but when I try to start apache, I get the 
following error:
Syntax error on line 224 of /usr/local/apache/conf/httpd.conf:
Cannot load /usr/local/apache/libexec/libperl.so into server: ld.so.1:
/usr/local/apache/bin/httpd: fatal: relocation error: file
/usr/local/apache/libexec/libperl.so: symbol perl_eval_pv: referenced 
symbol not found
/usr/local/apache/bin/apachectl start: httpd could not be started. 

Here is the output of make test:
SNIP
/usr/bin/perl t/TEST 0
Syntax error on line 1 of 
/local/packages/apache/mod_perl-1.21/t/conf/httpd.conf:
Cannot load 
/local/packages/apache/mod_perl-1.21/t/../../apache_1.3.9/src/modules
/perl/libperl.so into server: ld.so.1: ../apache_1.3.9/src/httpd: 
fatal: relocation error: file 
/local/packages/apache/mod_perl-1.21/t/../../apache_1.3.9/src/modules
/perl/libperl.so: symbol perl_eval_pv: referenced symbol not found
still waiting for server to warm upnot ok
server failed to start! at t/TEST line 95.
make: *** [run_tests] Error 146
/SNIP

# perl -V   
Summary of my perl5 (revision 5.0 version 5 subversion 62) 
configuration:
  Platform:
osname=solaris, osvers=2.6, archname=sun4-solaris
uname='sunos arkadiy 5.6 generic_105181-05 sun4u sparc 
sunw,ultra-5_10 '
config_args=''
hint=previous, useposix=true, d_sigaction=define
usethreads=undef useperlio=undef d_sfio=undef
use64bits=define usemultiplicity=undef
  Compiler:
cc='gcc', optimize='-O', gccversion=2.8.1
cppflags='-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 
-DUSE_LONG_LONG -DSOCKS -I/usr/local/include -DUSE_LONG_LONG'
ccflags ='-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 
-DUSE_LONG_LONG -DSOCKS -I/usr/local/include -DUSE_LONG_LONG'
stdchar='unsigned char', d_stdstdio=define, usevfork=false
intsize=4, longsize=4, ptrsize=4, doublesize=8
d_longlong=define, longlongsize=8, d_longdbl=define, 
longdblsize=16
alignbytes=8, usemymalloc=y, prototype=define
  Linker and Libraries:
ld='gcc', ldflags ='  -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib /usr/ccs/lib
libs=-lsocket -lnsl -ldl -lm -lc -lcrypt -lsec
libc=, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '
cccdlflags='-fPIC', lddlflags='-G -L/usr/local/lib'


Characteristics of this binary (from libperl): 
  Built under solaris
  Compiled at Oct 29 1999 13:01:49
  @INC:
/usr/local/lib/perl5/5.00562/sun4-solaris
/usr/local/lib/perl5/5.00562
/usr/local/lib/site_perl/5.00562/sun4-solaris
/usr/local/lib/site_perl
.
  
Has anybody seen this before, or better yet, found a solution for 
this problem?
  
Thanks in advance.

PS I apologize if this message was sent more than once.





Re: mod_perl 2.21 segmentation fault

1999-11-02 Thread Jeffrey Baker


 glibc 2.0.7
  ^^^

Maybe you should consider upgrading to a non-experimental libc.  Aside
from that, I have no clue.  I know dozens of people who use this same
setup, and it all works out fine.  Did you compile everything together
yourself?  What happens when you remove mod_perl or mod_ssl?

Maybe you should wait for openSSL to actually be released.

-jwb

 apache 1.3.9
 mod_perl 2.21
 mod_ssl 2.2.4
 openssl 0.9.4
 CGI.pm 2.56
 Oracle 8.0.5
 DBD::Oracle 1.03
 DBI 1.13
 XML::Parser 2.27
 gcc 2.7.2.3
 linux kernel 2.2.13
 
 --
  Radovan Semancik ([EMAIL PROTECTED])
  System Engineer, Business Global Systems a.s.
http://storm.alert.sk

-- 
Jeffrey W. Baker * [EMAIL PROTECTED]
Critical Path, Inc. * we handle the world's email * www.cp.net
415.808.8807



RE: Embperl and StarOffice problem

1999-11-02 Thread Gerald Richter


 I am currently using the StarOffice HTML editor to edit my
 Embperl programs. The problem I have is that it generates HTML
 that separates lines exceeding 80 characters with a newline. This
 'readability' feature has the effect of separating my perl
 comments into two lines, which creates syntax errors on my perl
 code because the HTML tag remover in Embperl will leave the extra
 newlines in place, making the right end of my comment lines
 appear as barewords.


You may use the [# #] block for longer comments
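For instance (a minimal illustration of the [# #] comment-block syntax;
the variable name is made up):

```
[# This entire block is an Embperl comment: the HTML editor can wrap
   these lines wherever it likes without producing Perl barewords. #]
[- $greeting = 'hello' -] [# a code block plus a short inline comment #]
```

Since everything between [# and #] is discarded before the Perl code is
compiled, inserted newlines inside the comment are harmless.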


 My comment lines are shorter than 80 chars, but the HTML
 generator is stupid and will always precede them with long useless tags.


This shouldn't be a problem, because as long as the tags are inside a perl
block they should be removed by Embperl before the perl code is compiled.

 I'm using Embperl 1.2b7. Would this be a bug in Embperl, or has
 the tag remover been kept simple for performance reasons?


If the removing of tags doesn't work as you expect, please send me an
example of what you typed, what StarOffice creates out of it and what errors
you get.

Gerald


---
Gerald Richter  ecos electronic communication services gmbh
Internet - Infodatenbanken - Apache - Perl - mod_perl - Embperl

E-Mail: [EMAIL PROTECTED] Tel:+49-6133/925151
WWW:http://www.ecos.de  Fax:+49-6133/925152
---




Re: embperl - EMBPERL_MAIL_ERRORS_TO - trigger

1999-11-02 Thread Gerald Richter



 I would like to be able to trigger an "error" so that the log section
 associated with that embperl session gets mailed to me when
 that session closes.

 This is for errors in my logic or processing problems which are not
 fatal enough to cause embperl to fail on its own.



You can trigger an "error" with "die", but I am not sure if this is what you
want...

The EMBPERL_MAIL_ERRORS_TO will mail you some information about the request
(%fdat, %udat, %ENV, @errors), but not the logfile.

Gerald



---
Gerald Richter  ecos electronic communication services gmbh
Internet - Infodatenbanken - Apache - Perl - mod_perl - Embperl

E-Mail: [EMAIL PROTECTED] Tel:+49-6133/925151
WWW:http://www.ecos.de  Fax:+49-6133/925152
---




Re: Apache::ASP

1999-11-02 Thread Adi

Andrew Mayo wrote:
 
 We are trying to store references in session variables but for some reason
 this does not work. The reference cannot be regained.
 
 For example, in simple Perl I can create an array containing two anonymous
 hashes, then place a reference to this array in
 $d, then dereference $d to recover the array.
 
 @c[0]={'k1','v1','k2','v2'};
 @c[1]={'k3','v3','k4','v4'};
 $d=\@c;
 print $$d[0]{'k1'},"\n";
 @e=@$d;
 print $e[0]{'k1'},"\n";


The Apache::ASP $Session object is tied to a SDBM_File or DB_File via MLDBM &
Data::Dumper.  Therefore you cannot simply store a reference using "\@c" and
expect it to dereference next time.  Use an anonymous array [] or anonymous
hash {}.

Try this:

@c[0]={'k1','v1','k2','v2'};
@c[1]={'k3','v3','k4','v4'};
$Session->{'array'} = [ @c ];
print $Session->{'array'}->[0]->{'k1'},"\n";
@e = @{$Session->{'array'}};
print $e[0]{'k1'},"\n";

DBI handles should be storable also, just be sure to use anonymous hash
syntax, since they are blessed hashes.
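The store-a-copy behavior Adi describes can be sketched with core Storable, which serializes and rebuilds the structure much as the MLDBM-tied $Session does (a minimal illustration, not Apache::ASP's actual code path):

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);

# Session-style storage serializes the data on store and rebuilds it
# from scratch on fetch: the values survive, the original reference
# identity does not.
my @c = ( { k1 => 'v1', k2 => 'v2' }, { k3 => 'v3', k4 => 'v4' } );
my $stored   = freeze([ @c ]);   # store an anonymous copy, as suggested
my $restored = thaw($stored);    # what a later request would see
print $restored->[0]{k1}, "\n";                          # prints "v1"
print +($restored == \@c ? "same" : "different"), "\n";  # prints "different"
```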

- Adi



Re: Apache::ASP

1999-11-02 Thread Joshua Chamas

Adi wrote:
 
 Andrew Mayo wrote:
 
  We are trying to store references in session variables but for some reason
  this does not work. The reference cannot be regained.
 
  For example, in simple Perl I can create an array containing two anonymous
  hashes, then place a reference to this array in
  $d, then dereference $d to recover the array.
 
  @c[0]={'k1','v1','k2','v2'};
  @c[1]={'k3','v3','k4','v4'};
  $d=\@c;
  print $$d[0]{'k1'},"\n";
  @e=@$d;
  print $e[0]{'k1'},"\n";
 
 The Apache::ASP $Session object is tied to a SDBM_File or DB_File via MLDBM &
 Data::Dumper.  Therefore you cannot simply store a reference using "\@c" and
 expect it to dereference next time.  Use an anonymous array [] or anonymous
 hash {}.
 
 Try this:
 
 @c[0]={'k1','v1','k2','v2'};
 @c[1]={'k3','v3','k4','v4'};
 $Session->{'array'} = [ @c ];
 print $Session->{'array'}->[0]->{'k1'},"\n";
 @e = @{$Session->{'array'}};
 print $e[0]{'k1'},"\n";
 
 DBI handles should be storable also, just be sure to use anonymous hash
 syntax, since they are blessed hashes.
 ...

Thanks for answering this Adi.  Andrew, in general, 
read up in the Apache::ASP docs & perldoc MLDBM for 
how $Session & $Application are limited when storing data.

To clarify, DBI handles are not storable in $Session, 
because they often have per-process file handles & such, which will 
will not be preserved across requests.  Use Apache::DBI to 
keep your database connections persistent per process.
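In practice that setup looks roughly like this (a hedged sketch; the startup-file name, DSN, and credentials are placeholders, not from the post):

```perl
# startup.pl, loaded by the mod_perl server at boot (illustrative name)
use Apache::DBI;   # must load *before* DBI so it can override connect()
use DBI;

# In request handlers, an ordinary connect is then cached per httpd
# child; Apache::DBI pings the cached handle before handing it back.
my $dbh = DBI->connect('dbi:Oracle:orcl', 'scott', 'tiger',
                       { RaiseError => 1 });
```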

-- Joshua
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks  free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051



mod_perl list(s) administrativa (was: OT: database redundancy in web env)

1999-11-02 Thread Ask Bjoern Hansen


On Wed, 3 Nov 1999, Renzo Toma wrote:

 I love to hear from Ask, imdb guys (Rob you alive?!) or anyone else
 offcourse how they maintain ~100% uptime. And again sorry for the
 off-topic.

Please please please take it somewhere else, maybe the DBI list as you
suggested yourself. I would be happy to answer there, or anywhere, just
not here.


Somewhat related:

This list will soon move to an ezmlm setup and to
@perl.apache.org.

Suggestions for what the new list should be called would also be
appreciated. Brian suggested [EMAIL PROTECTED], I don't like
that so if I don't get any good suggestions I will probably call it
[EMAIL PROTECTED]

I think that I at the same time will create [EMAIL PROTECTED]
(for HTML::Embperl) and [EMAIL PROTECTED] (for Apache::ASP) and
decree those topics off-topic here, if nobody comes with very good
arguments. It is not because I don't like Embperl or Apache::ASP, it's
just two large chunks of traffic that it would make sense to separate
out.

Other suggestions for splitting up the traffic would be more than
welcome. We need to get the traffic on this list down or I will never
be able to catch up again.


Reply to me and *not* to the list on this, please.


 - ask

PS. The short answer for the database question: avoid "online" database
lookups, or do mirroring (which sometimes can be a lot easier than it
might seem).

PPS. We were actually down @valueclick.com for some minutes some weeks
ago. :-)

-- 
ask bjoern hansen - http://www.netcetera.dk/~ask/
more than 60M impressions per day, http://valueclick.com



Re: mod_perl and DBI

1999-11-02 Thread Stas Bekman

 I'm making a first attempt to run a working Perl CGI run under mod_perl.
 It uses perl dbi successfully under CGI.  When invoking the script under
 mod_perl, I get the following error:
 
 [Tue Nov  2 11:49:43 1999] [error] Can't load
 '/usr/lib/perl5/site_perl/5.005/i386-linux/auto/DBI/DBI.so' for module
 DBI: /usr/lib/perl5/site_perl/5.005/i386-linux/auto/DBI/DBI.so: undefined
 symbol: PL_dowarn at /usr/lib/perl5/5.00503/i386-linux/DynaLoader.pm line
 169.
 
 Would someone kindly point me in the right direction?

http://perl.apache.org/guide/warnings.html#Can_t_load_auto_DBI_DBI_so_

 
 Thanks,
 
 Dan
 
 



___
Stas Bekman  mailto:[EMAIL PROTECTED]www.singlesheaven.com/stas  
Perl,CGI,Apache,Linux,Web,Java,PC at  www.singlesheaven.com/stas/TULARC
www.apache.org   www.perl.com  == www.modperl.com  ||  perl.apache.org
single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com



Re: Failing to reconnect after Oracle shutdown abort (Apache::D BI)

1999-11-02 Thread Greg Stark


"Young, Geoffrey S." [EMAIL PROTECTED] writes:
 
 Incidentally, I have also noticed that on my Linux installation Oracle will
 not shutdown (or shutdown abort) while any of the httpd processes have
 persistent connections.  That is, httpd must come down first for Oracle to
 shutdown cleanly.  Just thought I'd mention it...

There are different kinds of shutdowns:

shutdown - waits for every connection to disconnect
shutdown immediate - cleans up partially executed transactions
shutdown abort - leaves dirty blocks on disk which get cleaned up as needed

Shutdown immediate can take a while to execute but shutdown abort should be
fast. According to people on comp.databases.oracle.server, unless you plan to
do a backup you can just use shutdown abort. A plain shutdown is useless for
web server databases unless you shut down every client including the web
server first, and even then it doesn't always seem to work.

In our experience pmon sometimes goes wonky if you shutdown abort and then
bring the database up. Tech support told us to shutdown abort, bring up the
database, then shutdown immediate and bring it up again. The second shutdown
cleans up the dirty blocks. I'm not sure if this is just superstition or
actually helps but that's what we do now.

-- 
greg



RE: Failing to reconnect after Oracle shutdown abort (Apache::DBI)

1999-11-02 Thread Young, Geoffrey S.

I can second that...

I've had the listener time out and the httpd process never picks up a new
connection.  I have also seen where a perl script encounters an OCI error
(usually my fault ;) after the connection and the httpd child can never
connect again.

Incidentally, I have also noticed that on my Linux installation Oracle will
not shutdown (or shutdown abort) while any of the httpd processes have
persistent connections.  That is, httpd must come down first for Oracle to
shutdown cleanly.  Just thought I'd mention it...

--Geoff

 -Original Message-
 From: Greg Stark [SMTP:[EMAIL PROTECTED]]
 Sent: Tuesday, November 02, 1999 4:39 AM
 To:   Tim Bunce
 Cc:   mod-perl Mailing List; DBI Users Mailing List
 Subject:  Re: Failing to reconnect after Oracle "shutdown abort"
 (Apache::DBI)
 
 
 Tim Bunce [EMAIL PROTECTED] writes:
 
  Has anyone experienced a situation where a process (httpd for example)
  can't reconnect to Oracle after a "shutdown abort"?
  
  Tim.
 
 As far as I can tell we never get a clean reconnection after any sort of
 connection problem. I don't even think it takes a shutdown abort to cause
 this.
 
 -- 
 greg



Re: Failing to reconnect after Oracle shutdown abort (Apache::DBI)

1999-11-02 Thread Tim Bunce

On Mon, Nov 01, 1999 at 09:01:48PM +, Tim Bunce wrote:
 Has anyone experienced a situation where a process (httpd for example)
 can't reconnect to Oracle after a "shutdown abort"?

Thanks for your replies.

The problem reported to me which prompted this email has actually
proven to be a user script error. The old database handle was being
reused, rather than the new one returned from a fresh call to
DBI->connect after the shutdown and restart.

I'd be interested in seeing a DBI trace file (level 2) from anyone else
who thinks they have this problem. Please send me (directly, not to the
list) _just_ the part of the log that shows the original connect, the
point of failure (from the shutdown etc) and the failed attempt to
reconnect. Don't send the whole log.
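For reference, a level-2 trace like the one requested can be switched on like this (the log path is just an example):

```perl
use DBI;

# Trace all DBI handles at level 2 into a file; pass 0 to turn it off.
DBI->trace(2, '/tmp/dbi-trace.log');

# Or scope the trace to a single handle instead:
# $dbh->trace(2, '/tmp/dbi-trace.log');
```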

Tim.



Re: Failing to reconnect after Oracle shutdown abort (Apache::DBI)

1999-11-02 Thread Jeffrey W. Baker

Greg Stark wrote:
 
 Tim Bunce [EMAIL PROTECTED] writes:
 
  Has anyone experienced a situation where a process (httpd for example)
  can't reconnect to Oracle after a "shutdown abort"?
 
  Tim.
 
 As far as I can tell we never get a clean reconnection after any sort of
 connection problem. I don't even think it takes a shutdown abort to cause
 this.

The project I am currently working on has always managed to reconnect
after all sorts of database problems.  I don't know if Oracle was ever
killed with "shutdown abort" though.  Could you elaborate on exactly
what kind of project you are building and how the problem is
manifested?  We are doing web stuff with Apache, mod_perl, and DBI.

-jwb



Re: Failing to reconnect after Oracle shutdown abort (Apache::DBI)

1999-11-02 Thread Jeffrey Baker

Greg Stark wrote:
 
 
 Tim Bunce [EMAIL PROTECTED] writes:
 
  On Mon, Nov 01, 1999 at 09:01:48PM +, Tim Bunce wrote:
   Has anyone experienced a situation where a process (httpd for example)
   can't reconnect to Oracle after a "shutdown abort"?
 
  Thanks for your replies.
 
  The problem reported to me which prompted this email has actually
  proven to be a user script error. The old database handle was being
  reused, rather than the new one returned from a fresh call to
  DBI-connect after the shutdown and restart.
 
 With a web server you always want to reuse the database handle as long as
 possible. That's true of any long-running application though.
 
 How is the application supposed to know it has been disconnected?
 Maybe the driver should (maybe optionally) automatically reconnect?

That's what the driver handle's ping method is for.

if (!$dbh->ping) { reconnect; }
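A slightly fuller sketch of that ping-then-reconnect pattern (the helper name is made up, and the connect step is passed in as a coderef so the pattern is shown independently of any particular DSN):

```perl
use strict;
use warnings;

# Return a usable handle: reuse the cached one only while the server
# still answers ping; otherwise drop it and build a fresh connection.
# $connect would typically wrap a DBI->connect(...) call.
sub fresh_handle {
    my ($cached, $connect) = @_;
    return $cached if $cached && $cached->ping;
    $cached->disconnect if $cached;   # discard the dead handle first
    return $connect->();
}
```

In a mod_perl handler the coderef might be `sub { DBI->connect($dsn, $user, $pass) }`, so a handle that has gone stale after a database restart is replaced transparently on the next request.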

-jwb
-- 
Jeffrey W. Baker * [EMAIL PROTECTED]
Critical Path, Inc. * we handle the world's email * www.cp.net
415.808.8807