%session and local * problem

2000-09-08 Thread Joseph Yanni

I am currently using Apache::Session 1.53 and using
%session from two other PerlHandlers besides
HTML::Mason.  I am having the problem of the %session hash
not being cleared (it returns old data).  FYI, at the end
of a request, I 'untie %session'.  On logout, I
perform the 'tied(%session)->delete' command.  Also,
the hash gets a new id, but the old data within the hash
still exists.  Why?

Within each handler() subroutine, including Mason's, I
have the following assignments (%session is global within
each handler module via 'use vars qw(%session)'):

Mason handler()
local *HTML::Mason::Commands::session =
\%IT::Config::session;

Other handler()
local *Other::Handler::session =
\%IT::Config::session;

Could someone tell me what is wrong with this
implementation and how I might resolve it?


Thanks in advance,
-Joe Yanni
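For what it's worth, the usual way to avoid stale %session data in a persistent interpreter is to empty and re-tie the hash at the start of every request. A sketch under assumptions (the store class, the cookie lookup, and the helper names are invented; Apache::Session's actual setup depends on your store):

```perl
use strict;
use Apache::Session::MySQL ();

sub handler {
    my $r = shift;

    # Start from an empty hash on every request; the typeglob alias
    # (local *HTML::Mason::Commands::session = \%IT::Config::session)
    # otherwise keeps exposing whatever the previous request left
    # behind in the same persistent interpreter.
    %IT::Config::session = ();

    my $id = get_session_id_from_cookie($r);      # hypothetical helper
    tie %IT::Config::session, 'Apache::Session::MySQL', $id,
        { Handle => get_dbh() };                  # hypothetical helper

    local *HTML::Mason::Commands::session = \%IT::Config::session;

    # ... serve the request ...

    untie %IT::Config::session;
    return 0;    # OK
}
```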




Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Nicolas MONNET

On Thu, 7 Sep 2000, Andrew Dunstan wrote:

|Could someone please explain to me why everybody seems so intent on
|having a mod_perl handler fork in order to send mail? Why not just use
|the very common Net::SMTP package which just talks on an SMTP socket to
|whatever mailhost you have (localhost or other). There are other
|packages on CPAN which perhaps have more power, but still don't fork, if
|that's what you need. Every benchmark I've done (quite a few ;-) shows
|that this is far faster way of sending mail.

To answer your question: I tried some time ago with one such module
(not sure if it was this one), and as the documentation was lacking I had
no clear idea of what was going on and couldn't get it to work properly.

As far as benchmarks are concerned, I'm sending one mail after having
displayed the page, so it shouldn't matter much ...

|My understanding (correct me if I'm wrong) is that in general having a
|mod_perl handler fork is a Bad Thing (tm).

Dunno about this; never read anything against it.

|(and of course there is even less danger with funky email addresses with
|shell metacharacters that way, too)

I pay attention to that kind of issue; as a matter of fact, the command I
use is open(OUTP,'|/path/qmail-inject -H') -- so no variables reach the
shell, and I print the recipient on the pipe.

|I recall with some fondness Randal's "useless use of cat" awards - maybe
|we need to create a "useless use of fork" award :-)




Re: mod_perl DSO on NT

2000-09-08 Thread Matt Sergeant

On Fri, 8 Sep 2000, Daniel Watkins wrote:

 Hi,
 Does anyone know if a mod_perl DSO
 can be loaded into any of the more commercial flavours of Apache
 (such as the IBM HTTP Server)?
 I have done some work with mod_perl on NT, but now a
 few mindless bureaucratic IT managers are waving
 their rulebooks around. The problem is that they don't understand
 the difference between free software and freeware.
 So, I need to buy a box with an Apache in it that supports
 mod_perl.

Yes, a mod_perl DSO should work just fine on a commercial Apache. In fact,
thanks to Gurusamy Sarathy we almost have a PPM of mod_perl available
for ActivePerl, subject to sorting out a few linking issues. This should
make installing mod_perl on Win32 almost as easy as installing an RPM on
Linux.

(and I'll have AxKit PPMs available too...)

-- 
Matt/

Fastnet Software Ltd. High Performance Web Specialists
Providing mod_perl, XML, Sybase and Oracle solutions
Email for training and consultancy availability.
http://sergeant.org | AxKit: http://axkit.org




Re: mod_perl security :: possible solution

2000-09-08 Thread Stas Bekman

On Thu, 7 Sep 2000, Félix C.Courtemanche wrote:

 Hi,
 
 I have been looking around for some time about this, and here are the
 2 solutions I came up with... I would like some comments, especially on
 whether you think they would be safe / fast to use.

Uhm, did you read the proposed solutions at
http://perl.apache.org/guide/multiuser.html

 Solution #1 (apache solution)
 - Use a centralized Apache server for all HTML requests, graphics, etc.,
 with mod_php and mod_perl disabled on this server
 - Redirect a certain directory or sub domains to a personalized apache
 server (on an unprivileged port), running under the client's uid.
 - That personalized server would be compiled with mod_perl and mod_php, and
 running with the following apache directives:
   - RLimitMEM (http_core.c) :: Soft/hard limits for max memory usage per
 process
   - RLimitNPROC (http_core.c) :: Soft/hard limits for max number of
 processes per uid
 - It would also have the Apache-Watchdog-RunAway perl module installed to
 kill zombies.
 
 That solution would allow the fastest setup (as far as I am concerned), but I
 am afraid that redirecting the directory to a personalized Apache server
 could generate some problems...  I thought of redirecting using the [P] flag
 (proxy) so that the URL viewed in the browser stays the same... however, for
 each query, 2 httpd processes will have to handle it.  This may hurt
 performance for a web site using a lot of scripts.

Nah, it won't hurt performance. Almost everybody uses this
scenario. See http://perl.apache.org/guide/strategy.html

 Solution #2 (perl module solution)
 - Only use 1 Apache server for everyone
 - Use Apache::SizeLimit (included with mod_perl) (memory watchdog)
 - Use Apache::Watchdog::RunAway (same as above)
 - Use Apache::Resource for other control
 - Use Safe and Safe::Hole to restrict the use of mod_perl...
 however I may have to fight with it a bit to allow DBI and other similar
 modules to be used as well
 
 That solution appears faster to me, but a lot harder to set up and
 configure.  It may involve some programming, etc.
 
 
 What is your opinion on these... and do you have a better solution? Which one
 is the best?
 I am open to any comments and help... I plan to put together a package, or at
 least a web page, to explain to others how to do it once it is working
 perfectly for me.  I noticed that Perl security (along with shell security)
 is one of the worst security/privacy threats in almost all web hosting
 companies... and I intend to solve this. :)

I don't see any security differences between #1 and #2. These are
performance issues, where #1 wins in most cases, while #2 is OK for
specific content delivery setups. See the Strategy chapter link above.
You still run mod_perl in both setups, so that is the only thing you
have to solve.

I have an overdue article in my queue that talks
about this, based mainly on the multiuser.html chapter and the information
I collected from ISPs a month ago. (Not much though, so if you have
some information to share with the public, and want to plug the name of your
mod_perl ISP service, make sure to contact me.)

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://jazzvalley.com
http://singlesheaven.com http://perlmonth.com   perl.org   apache.org





Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Stas Bekman

On Fri, 8 Sep 2000, Nicolas MONNET wrote:

 On Thu, 7 Sep 2000, Andrew Dunstan wrote:
 
 |Could someone please explain to me why everybody seems so intent on
 |having a mod_perl handler fork in order to send mail? Why not just use
 |the very common Net::SMTP package which just talks on an SMTP socket to
 |whatever mailhost you have (localhost or other). There are other
 |packages on CPAN which perhaps have more power, but still don't fork, if
 |that's what you need. Every benchmark I've done (quite a few ;-) shows
 |that this is far faster way of sending mail.
 
 To answer your question: I tried some time ago with one such module
 (not sure if it was this one), and as the documentation was lacking I had
 no clear idea of what was going on and couldn't get it to work properly.

Net::SMTP works perfectly and doesn't lack any documentation. If a bunch
of folks use mod_perl for their guestbook sites, it's perfectly
OK to run sendmail/postfix/whatever program you like... but it just
doesn't scale for big projects.

For a simple replacement of sendmail routine with Net::SMTP grab:
http://www.stason.org/works/scripts/mail-lib.pl

Why do you see only posts that talk about forking an external
process? Because all those who use Net::SMTP don't have any problems,
and therefore they are all quiet :)
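For reference, a minimal non-forking send along these lines (a sketch: the mailhost 'localhost' and the addresses are placeholders, and real code wants error checking on each step):

```perl
use strict;
use Net::SMTP ();

# Talk SMTP directly to the local MTA instead of forking sendmail or
# qmail-inject; no shell, no subprocess, no metacharacter worries.
sub send_mail {
    my ($from, $to, $subject, $body) = @_;
    my $smtp = Net::SMTP->new('localhost')
        or die "cannot connect to mailhost: $!";
    $smtp->mail($from);
    $smtp->to($to);
    $smtp->data();
    $smtp->datasend("To: $to\nFrom: $from\nSubject: $subject\n\n$body");
    $smtp->dataend();
    $smtp->quit;
}
```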

 As far as benchmarks are concerned, I'm sending one mail after having
 displayed the page, so it shouldn't matter much ...

Yeah, and every time a 1MB process gets fired up...

 |My understanding (correct me if I'm wrong) is that in general having a
 |mod_perl handler fork is a Bad Thing (tm).
 
 Dunno about this; never read anything against it.

A lame excuse :) You know, not knowing the law doesn't waive
responsibility for your deeds...  Just to show you that there is
*something* to read against it:
http://perl.apache.org/guide/performance.html#Forking_and_Executing_Subprocess

 |(and of course there is even less danger with funky email addresses
 with |shell metacharacters that way, too)
 
 I pay attention to that kind of issue; as a matter of fact, the command I
 use is open(OUTP,'|/path/qmail-inject -H') -- so no variables reach the
 shell, and I print the recipient on the pipe.

This issue is explained very well at:
http://perl.apache.org/guide/performance.html#Executing_system_in_the_Right_
No metachars issue anymore.

P.S. I'm not responding to this email in order to flame Nicolas, but to
correct things that otherwise might be assumed to be correct by those among
us with less experience. Whatever you see posted on the list should never
be taken for granted, since we all make mistakes, even the experts among
us.

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://jazzvalley.com
http://singlesheaven.com http://perlmonth.com   perl.org   apache.org





[OT] FYI: What Pitfalls Exist When Outsourcing Code?

2000-09-08 Thread Stas Bekman

Knowing the shortage of good mod_perl people on the job
market, you may find this thread educational and learn from the mistakes and
experiences of others.
http://slashdot.org/comments.pl?sid=00%2F09%2F07%2F1613204&cid=&pid=0&startat=&threshold=4&mode=nested&commentsort=0&op=Change

P.S. As before, I ask you to discuss things in the right forum. I think the
link in my post will be of added value for many of us, but it
shouldn't trigger the thread *here* on the mod_perl list, which strives
hard to be on-topic most of the time.

_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://jazzvalley.com
http://singlesheaven.com http://perlmonth.com   perl.org   apache.org





[Mason] Problem: no-Content.

2000-09-08 Thread Guido Moonen

 Hello all,
  
  -- The Problem -- 
  
  When I try to retrieve http://www.clickly.com/index.html 
  (test site: not the real clickly.com) I get a blank return, and I 
  mean a really blank return (I tried it with telnet to port 80, and 
  the server only sends back the headers of the web page).
  
  Does anybody know what the problem is? I've tried all sorts of 
  things but nothing worked.
  
  Thanks in advance,
Guido Moonen
  
  -- The Stuff i have --
  
  * Solaris 2.6 on a Sun UltraSPARC
  * Perl v5.6.0 built for sun4-solaris
  * Server version: Apache/1.3.12 (Unix)
    Server built:   Sep  6 2000 14:51:05
  * mod_perl 1.24
  * Mason v0.88
  
  -- Handler.PL --
   SNIP 
  my (%parsers, %interp, %ah);
  foreach my $site (qw(www modified management))
  {  $parsers{$site} = new HTML::Mason::Parser(allow_globals =>
                                               [qw($dbh %session)]);

     $interp{$site} = new HTML::Mason::Interp (parser => $parsers{$site},
                                               comp_root => "/clickly/html/$site/",
                                               data_dir => "/clickly/masonhq/$site/",
                                               system_log_events => "ALL");
     $ah{$site} = new HTML::Mason::ApacheHandler(interp => $interp{$site});

     chown (scalar(getpwnam "nobody"), scalar(getgrnam "nobody"),
            $interp{$site}->files_written);
  }

  sub handler
  {   my ($r) = @_;
      my $site = $r->dir_config('site');
      return -1 if $r->content_type && $r->content_type !~ m|^text/|i;
      my $status = $ah{$site}->handle_request($r);
      return $status;
  }
   SNIP 
  
  -- httpd.conf -- 
  
   SNIP 
  # www.clickly.com (Default)
  <VirtualHost 192.168.0.210>
  ServerAdmin [EMAIL PROTECTED]
  DocumentRoot /clickly/html/www
  ServerName www.clickly.com
  PerlSetVar site 'www'

  <Directory "/clickly/html/www">
    Options Indexes FollowSymLinks MultiViews ExecCGI
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>
  Alias /mason /clickly/html/www
  <Location /mason>
    SetHandler perl-script
    PerlHandler HTML::Mason
  </Location>
  </VirtualHost>
   SNIP 
  
  ==
  Guido Moonen
  Clickly.com
  Van Diemenstraat 206
  1013 CP Amsterdam
  THE NETHERLANDS
  
  Mob: +31 6 26912345
  Tel: +31 20 6934083
  Fax: +31 20 6934866
  E-mail: [EMAIL PROTECTED]
  http://www.clickly.com
  
  
  Get Your Software Clickly!
  ==
  




Re: Auto rollback using Apache::DBI

2000-09-08 Thread Honza Pazdziora

On Thu, Sep 07, 2000 at 11:06:00AM -0700, Perrin Harkins wrote:
 On Thu, 7 Sep 2000, Nicolas MONNET wrote:
  |Well, Apache::DBI does push a cleanup handler that does a rollback if
  |auto-commit is off.  Are you saying this isn't working?
  
  I've run into a situation where it wasn't. I wanted to make sure
  it's not the desired behaviour before digging in further to see how
  it's happening.
 
 With AutoCommit off, you should definitely get a rollback on every
 request, provided you actually called DBI->connect on that request.  Turn
 on the debug flag ($Apache::DBI::DEBUG = 2) and see if the cleanup handler
 is being run or not.

The code

my $needCleanup = ($Idx =~ /AutoCommit[^\d]+0/) ? 1 : 0;
if(!$Rollback{$Idx} and $needCleanup and Apache->can('push_handlers')) {
print STDERR "$prefix push PerlCleanupHandler \n" if $Apache::DBI::DEBUG > 1;
Apache->push_handlers("PerlCleanupHandler", \&cleanup);

of Apache::DBI (around line 90) suggests that if AutoCommit isn't zero upon
_connect_, the cleanup won't even be pushed. So if you do

my $dbh = DBI->connect('dbi:Oracle:sid');
$dbh->{'AutoCommit'} = 0;

such a $dbh won't be rolled back.
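Reading that code, the practical consequence seems to be (an inference from the source quoted above, not documented behaviour) that AutoCommit must be passed in the connect attributes, so it ends up in the connection key that Apache::DBI inspects:

```perl
# Passing AutoCommit in the connect attribute hash (rather than toggling
# it on the handle afterwards) makes it part of the key Apache::DBI
# matches with /AutoCommit[^\d]+0/, so the rollback cleanup handler gets
# pushed. The DSN and credentials are placeholders.
use DBI ();
my ($user, $pass) = ('scott', 'tiger');   # placeholder credentials
my $dbh = DBI->connect('dbi:Oracle:sid', $user, $pass,
                       { AutoCommit => 0, RaiseError => 1 });
```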

-- 

 Honza Pazdziora | [EMAIL PROTECTED] | http://www.fi.muni.cz/~adelton/
 .project: Perl, DBI, Oracle, MySQL, auth. WWW servers, MTB, Spain, ...




Re: Auto rollback using Apache::DBI

2000-09-08 Thread Nicolas MONNET

On Fri, 8 Sep 2000, Honza Pazdziora wrote:

|The code
|
|my $needCleanup = ($Idx =~ /AutoCommit[^\d]+0/) ? 1 : 0;
|if(!$Rollback{$Idx} and $needCleanup and Apache->can('push_handlers')) {
|print STDERR "$prefix push PerlCleanupHandler \n" if $Apache::DBI::DEBUG > 1;
|Apache->push_handlers("PerlCleanupHandler", \&cleanup);
|
|of Apache::DBI (around line 90) suggests that if AutoCommit isn't zero upon
|_connect_, the cleanup won't even be pushed. So if you do
|
|   my $dbh = DBI->connect('dbi:Oracle:sid');
|   $dbh->{'AutoCommit'} = 0;
|
|such a $dbh won't be rolled back.

I did set AutoCommit upon connect. I can't manage to reproduce the error
condition, though. That's a problem.




Re: SELECT cacheing

2000-09-08 Thread Drew Taylor

Roger Espel Llima wrote:
 
 I've written a very small module to cache SELECT results from DBI
 requests.  The interface looks like:
 
   use SelectCache;
 
   my $db = whatever::get_a_handle();
   my $st = qq{ select this, that ... };
   my $rows = SelectCache::select($db, $st, 180);
 
 this returns an arrayref of rows (like the selectall_arrayref function),
 and caches the result in a file, which gets reused for 180 seconds
 instead of asking the db again.
 
 The names of the cache files are the md5's of the select statement,
 using the last hex digit as a subdirectory name.  There's no file
 cleanup function; you can always do that from cron with find.
 
 This is all very simple, but it's pretty useful in combination with
 mod_perl, to speed up things like showing the "latest 10 posts", on
 frequently accessed webpages.
 
 The question now is: is there any interest in releasing this?  I could
 write some minimal docs and give it a 'proper' module name, if there's
 interest.
I'm certainly interested. One question though - in the module, do you
blindly use the cache? I ask because in my case I display the
contents of a shopping cart on every page. And while only a few pages
change the cart contents, the cart listing does need to be current. How
do you handle this situation?
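A sketch of the approach being discussed, with a TTL knob so volatile data (like a cart listing) can bypass the cache; Digest::MD5 and Storable are assumed, and the cache directory is a placeholder:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);
use Storable qw(store retrieve);
use File::Path qw(mkpath);

my $CACHE_DIR = '/tmp/selectcache';   # placeholder cache root

# Cache an arrayref of rows under the MD5 of the SQL, reusing it for
# $ttl seconds. A $ttl of 0 always hits the database, which is one way
# to handle must-be-current data such as cart contents.
sub cached_select {
    my ($dbh, $sql, $ttl) = @_;
    my $key  = md5_hex($sql);
    my $dir  = "$CACHE_DIR/" . substr($key, -1);   # last hex digit as subdir
    my $file = "$dir/$key";

    if ($ttl > 0 && -e $file && time - (stat $file)[9] < $ttl) {
        return retrieve($file);
    }
    my $rows = $dbh->selectall_arrayref($sql);
    mkpath($dir);
    store($rows, "$file.$$");     # write to a private file, then
    rename "$file.$$", $file;     # rename atomically into place
    return $rows;
}
```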

-- 
Drew Taylor
Vialogix Communications, Inc.
501 N. College Street
Charlotte, NC 28202
704 370 0550
http://www.vialogix.com/



Re: SELECT cacheing

2000-09-08 Thread Peter Skipworth

I don't know about Roger, but in my situation queries are called as
follows:

my $queryhandle = Query("select blah from blah where blah");

The Query routine can take an optional timeout value (with a settable
default); a timeout of 0 means that the select should never be cached
and should always be run live against the database. I'd assume Roger
would need something similar in the module he's developing.

regards,

P



On Fri, 8 Sep 2000, Drew Taylor wrote:

 Roger Espel Llima wrote:
  
  I've written a very small module to cache SELECT results from DBI
  requests.  The interface looks like:
  
use SelectCache;
  
my $db = whatever::get_a_handle();
my $st = qq{ select this, that ... };
my $rows = SelectCache::select($db, $st, 180);
  
  this returns an arrayref of rows (like the selectall_arrayref function),
  and caches the result in a file, which gets reused for 180 seconds
  instead of asking the db again.
  
  The names of the cache files are the md5's of the select statement,
  using the last hex digit as a subdirectory name.  There's no file
  cleanup function; you can always do that from cron with find.
  
  This is all very simple, but it's pretty useful in combination with
  mod_perl, to speed up things like showing the "latest 10 posts", on
  frequently accessed webpages.
  
  The question now is: is there any interest in releasing this?  I could
  write some minimal docs and give it a 'proper' module name, if there's
  interest.
 I'm certainly interested. One question though - in the module do you
 blindly use the cache? I ask because in my instance I display the
 contents of a shopping cart on every page. And while only a few pages
 change the cart contents, the cart listing does need to be current. How
 do you handle this situation?
 
 

-- 
.-.
|   Peter SkipworthPh: 03 9897 1121   |
|  Senior Programmer  Mob: 0417 013 292   |
|  realestate.com.au   [EMAIL PROTECTED] |
`-'




Re: [OT?] Cross domain cookie/ticket access

2000-09-08 Thread Joe Pearson

I thought you could set a cookie for a different domain - you just can't
read a different domain's cookie.  So you could simply set 3 cookies when
the user authenticates.

Now I'm curious, I'll need to try that.

--
Joe Pearson
Database Management Services, Inc.
208-384-1311 ext. 11
http://www.webdms.com

-Original Message-
From: Aaron Johnson [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: Thursday, September 07, 2000 10:08 AM
Subject: [OT?] Cross domain cookie/ticket access


I am trying to implement a method of allowing access to three separate
servers on three separate domains.

The goal is to have to log in only once and then have free movement across
the three protected access domains.

A cookie can't work due to the limit of a single domain.

Has anyone out there had to handle this situation?

I have thought about several different alternatives, but they just get
uglier and uglier.

One thought was that they could go to a central server and log in.  At
the time of login they would be redirected to a special page on each of
the other two servers with any required login information.  These pages
would in turn return them to the login machine.  At the end of the login
process they would be redirected to the web site they originally wanted.

This is a rough summary of what might happen -

domain1.net - user requests a page in a protected directory.   They
don't have a cookie.
They are redirected to the cookie server.  This server asks for the user
name and pass and authenticates the user.  Once authenticated the cookie
server redirects the client to each of the other (the ones not matching
the originally requested domain) domains.  This redirect is a page that
hands the client a cookie and sets up the session information.
domain2.net gets the request and redirects the user to a page that
returns them to the cookie machine, which adds domain2.net to the
list of domains in the cookie. The process then repeats for each
domain that needs to be processed.

Am I crazy?  Did I miss something in the documentation for the current
Session/Auth/Cookie modules?

I did some hacking of the Ticket(Access|Tool|Master) Example in the
Eagle book, but the cookie limit is keeping it from working correctly.
( BTW: I already use it for a single server login and it works great. )

Any information would be appreciated.

Aaron Johnson
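The redirect hop described above might look roughly like this on the cookie server (purely a sketch: sign(), the /sso-set URL, and the token scheme are all invented for illustration):

```perl
# Sketch of the central login server's post-auth step: after verifying
# the user, bounce the browser through each partner domain so every
# domain can set a cookie of its own. sign() and the URLs are
# hypothetical; a real token needs expiry and tamper protection.
use strict;
use Apache::Constants qw(REDIRECT);

sub post_login_handler {
    my $r = shift;
    my $user  = $r->connection->user;   # already authenticated
    my $token = sign($user, time);      # hypothetical signed token

    # domain2's /sso-set page sets its own-domain cookie and redirects
    # the client back here; the cycle repeats for each remaining domain
    # before the final redirect to the originally requested URL.
    $r->header_out(Location =>
        "http://domain2.net/sso-set?token=$token&back=" . $r->uri);
    return REDIRECT;
}
```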






Fwd: ActiveState...

2000-09-08 Thread AaugustJ

Would it be possible to create a binary version of mod_perl for Win32 that 
works with Activestate Perl?

-
In a message dated 09/07/2000 9:41:30 PM Pacific Daylight Time, 
[EMAIL PROTECTED] writes:

 On Tue, 5 Sep 2000 [EMAIL PROTECTED] wrote:
 
  How is it that all the Win32 binaries of mod_perl I have found are not
  compatible with ActiveState Perl? ActiveState is the most widely used
  and supported port of perl. Is there anything I can do?
 
 the cvs version of mod_perl works with activestate. if you want a binary,
 ask [EMAIL PROTECTED] or Randy Kobes directly; i think he's put one
 together.









Broad Question re: custom response codes

2000-09-08 Thread David Veatch

Greetings all,

This is an extremely broad question, but I was wondering if any of you
know, off the top of your head, of any circumstances in which a
custom response would be ignored.  Say...

A subroutine is called in the case of an error; it logs the warning to the
log file, kicks off an email to the site admin, and then generates a
custom response page.  Very basically, it looks like this:

sub error_out {
my($self,$error) = @_;
warn "ERROR: $error\n\n";
# using Net::SMTP to send email, which it does just fine
my($r) = Apache->request();
$r->err_headers_out->{'error_title'} = $error_title;
$r->err_headers_out->{'error'} = $error;
# /Error is a custom handler defined in the apache conf file
$r->custom_response(SERVER_ERROR, "/Error");
return SERVER_ERROR;
}

It logs to the database, then sends out the email, but just... skips...
the custom error page.  What's odd is that in some cases, it works like a
champ.

THIS WORKS:

my($error) = q{Doh!};
sub method {
do something or return undef;
}

my($text) = method() or $self->error_out($error);

THIS DOESN'T WORK:

sub method {
my($error) = q{Doh!};
do something or $self->error_out($error);
}
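One guess worth checking (inferred from the snippets above, not a confirmed diagnosis): custom_response takes effect only when SERVER_ERROR is what the handler chain finally returns to Apache, so every caller has to propagate error_out's return value instead of discarding it:

```perl
sub method {
    my ($self) = @_;
    my $error = q{Doh!};
    # do_something() is a stand-in for the real work. The difference
    # from the non-working version is the `return`: error_out's
    # SERVER_ERROR status is passed back up so Apache actually sees it
    # and serves the /Error custom response.
    do_something() or return $self->error_out($error);
}
```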

I realize this is NOT enough info, but not knowing what IS enough, I'm 
hoping, more than anything, for ideas, and questions that lead me in the 
right direction.

FWIW:  I'm looking to understand what's going on, and through that find a 
fix, rather than find a quick fix and move on.

David - caught in the middle between knowing enough and knowing nowhere 
near enough, and thus missing the obvious.

David Veatch - [EMAIL PROTECTED]

"Many people would sooner die than think.
In fact, they do." - Bertrand Russell




Re: [OT?] Cross domain cookie/ticket access

2000-09-08 Thread darren chamberlain

Joe Pearson ([EMAIL PROTECTED]) said something to this effect:
 I thought you could set a cookie for a different domain - you just can't
 read a different domain's cookie.  So you could simply set 3 cookies when
 the user authenticates.

You sure can -- otherwise Navigator wouldn't have the "Only accept cookies
originating from the same server as the page being viewed" option.

Set-Cookie: foo=my%20foot%20hurts; domain=.apache.org; path=/; expires=*mumble*

(darren)

-- 
Any technology indistinguishable from magic is insufficiently advanced.



RE: [OT?] Cross domain cookie/ticket access

2000-09-08 Thread Jerrad Pierce

Cookies cannot be shared across domains (except the supercookie, due to a
bug in IE and Netscape? See http://cookiecentral.com for more info).

Cookies are bound either to a domain (domain.com) or to a FQDN (host.domain.com).

Netscape treats everything as a FQDN if you select "originating server only".
This means host1.domain.com cannot see host2.domain.com's cookies.
And in all cases (except the super-promiscuous cookie), host1.domain.com cannot
see host1.domain2.com's cookies...
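These scoping rules show up directly in the Set-Cookie header; a hand-rolled illustration (the names and values are placeholders):

```perl
use strict;
use warnings;

# Build a Set-Cookie header by hand. Scoping the cookie to .domain.com
# makes it visible to host1.domain.com and host2.domain.com alike;
# scoping it to host1.domain.com would hide it from host2.
sub domain_cookie_header {
    my (%arg) = @_;
    return sprintf "Set-Cookie: %s=%s; domain=%s; path=%s",
        $arg{name}, $arg{value}, $arg{domain}, $arg{path};
}

print domain_cookie_header(
    name   => 'ticket',
    value  => 'abc123',
    domain => '.domain.com',
    path   => '/',
), "\n";
```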

-Original Message-
From: darren chamberlain [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 08, 2000 10:24 AM
To: Joe Pearson
Cc: [EMAIL PROTECTED]
Subject: Re: [OT?] Cross domain cookie/ticket access


Joe Pearson ([EMAIL PROTECTED]) said something to this effect:
 I thought you could set a cookie for a different domain - you just can't
 read a different domain's cookie.  So you could simply set 3 cookies when
 the user authenticates.

You sure can -- otherwise Navigator wouldn't have the "Only accept cookies
originating from the same server as the page being viewed" option.

Set-Cookie: foo=my%20foot%20hurts; domain=.apache.org; path=/; expires=*mumble*

(darren)

-- 
Any technology indistinguishable from magic is insufficiently advanced.




Re: [OT?] Cross domain cookie/ticket access

2000-09-08 Thread Simon Rosenthal

At 11:37 PM 9/7/00 -0600, Joe Pearson wrote:
I thought you could set a cookie for a different domain - you just can't
read a different domain's cookie.  So you could simply set 3 cookies when
the user authenticates.

I don't think you can set a cookie for a completely different domain, based 
on my reading of RFC2109 and some empirical tests ... it would be a massive 
privacy/security hole, yes ?

- Simon


Now I'm curious, I'll need to try that.

--
Joe Pearson
Database Management Services, Inc.
208-384-1311 ext. 11
http://www.webdms.com

-Original Message-
From: Aaron Johnson [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: Thursday, September 07, 2000 10:08 AM
Subject: [OT?] Cross domain cookie/ticket access


 I am trying to implement a method of allowing access to three separate
 servers on three separate domains.
 
 The goal is to only have to login once and having free movement across
 the three protected access domains.
 
 A cookie can't work due to the limit of a single domain.
 
 Has anyone out there had to handle this situation?
 
 I have thought about several different alternatives, but they just get
 uglier and uglier.
 
 One thought was that they could go to a central server and login.  At
 the time of login they would be redirected to a special page on each of
 the other two servers with any required login information.  These pages
 would in turn return them to the login machine.  At the end of the login
 process they would be redirected to the web site they original wanted.
 
 This is a rough summary of what might happen -
 
 domain1.net - user requests a page in a protected directory.   They
 don't have a cookie.
 They are redirected to the cookie server.  This server asks for the user
 name and pass and authenticates the user.  Once authenticated the cookie
 server redirects the client to each of the other (the ones not matching
 the originally requested domain) domains.  This redirect is a page that
 hands the client a cookie and sets up the session information.
 domain2.net gets the request and redirects the user to a page that will
 return them to the cookie machine which will add the domain2.net to the
 list of domains in the cookie. And then the process will repeat for each
 domain that needs to be processed.
 
 Am I crazy?  Did I miss something in the documentation for the current
 Session/Auth/Cookie modules?
 
 I did some hacking of the Ticket(Access|Tool|Master) Example in the
 Eagle book, but the cookie limit is keeping it from working correctly.
 ( BTW: I already use it for a single server login and it works great. )
 
 Any information would be appreciated.
 
 Aaron Johnson
 
 

-
Simon Rosenthal ([EMAIL PROTECTED])  
Web Systems Architect
Northern Light Technology   222 Third Street, Cambridge MA 02142
Phone:  (617)621-5296  :   URL:  http://www.northernlight.com
"Northern Light - Just what you've been searching for"




Re: SELECT cacheing

2000-09-08 Thread Rodney Broom

Some good ideas; I think this package might come out a bit thin, though.
I've written a package that does arbitrary variable caching (like everybody
else), but it has a list of other bells and whistles: things like cache
expiration and data-refresh hooks. It's a pretty simple process.

From there, I have (but admittedly don't use yet) a little DB package that
sits as an interface between the programmer and the DB, and incorporates
this kind of caching package at the same time.

So you do:

$dbh = DB->new(...)
$sth = $dbh->prepare($q)
%results1 = $sth->fetch...

$sth = $dbh->prepare($q)
%results2 = $sth->fetch...

# Results are the same; %results2 comes from cache.

$sth = $dbh->prepare($insert)
$sth->execute

$sth = $dbh->prepare($q)
%diff_results = $sth->fetch...
# %diff_results is new data because the DB has changed.


Just some thoughts for y'all to mull over.


Rodney Broom





Re: SELECT cacheing

2000-09-08 Thread Tim Sweetman

DeWitt - this started as a reply to the mod_perl mailing list, and I had a
look at File::Cache as my reply grew. See the end of this for the
relevant bit :) - I think I've found a bug...

Drew Taylor wrote:
 
 Roger Espel Llima wrote:
 
  I've written a very small module to cache SELECT results from DBI
  requests.  The interface looks like:
 
use SelectCache;
 
my $db = whatever::get_a_handle();
my $st = qq{ select this, that ... };
my $rows = SelectCache::select($db, $st, 180);
 
  this returns an arrayref of rows (like the selectall_arrayref function),
  and caches the result in a file, which gets reused for 180 seconds
  instead of asking the db again.

"Storable" is probably a good way to store this sort of result.

  The names of the cache files are the md5's of the select statement,
  using the last hex digit as a subdirectory name.  There's no file
  cleanup function; you can always do that from cron with find.
 
  This is all very simple, but it's pretty useful in combination with
  mod_perl, to speed up things like showing the "latest 10 posts", on
  frequently accessed webpages.

  The question now is: is there any interest in releasing this?  I could
  write some minimal docs and give it a 'proper' module name, if there's
  interest.

This can be an extremely powerful approach to speeding up web
applications. We use a similar module which ended up fairly large - it
takes a method name and arguments, rather than an SQL string, meaning that
you can cache the results of operations other than SQL queries. It's also
grown several other enhancements: a mutual-exclusion-and-backoff
algorithm, so that if one process is looking for the answer, others wait for
it rather than performing the same query at the same time, and several
ways to expire results that have become outdated (specifying a lifetime,
or via timestamp files that get touched when major changes happen).

I always thought it'd make a good thing to CPANify but never got round
to it :(

The one thing I'd advise is: BE VERY CAREFUL WITH RACE CONDITIONS. You
can easily end up with something that will, in an unusual case, store
garbled data. I think you'd need to either use flock(), or write to
files then rename them, since rename is an atomic operation - and I
don't know how well that works under OSs other than UNIXes.
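Of the two safe-write options mentioned above, the flock() variant can be sketched as follows. This is an illustrative pattern, not code from any module under discussion; note the classic pitfall that opening with '>' truncates the file *before* the lock is taken, so we open read/append and truncate only once locked:

```perl
use strict;
use Fcntl qw(:flock);

# Writers take an exclusive lock, readers a shared lock, so a reader
# can never observe a half-written file.
sub write_locked {
    my ($path, $data) = @_;
    open my $fh, '+>>', $path or die "open $path: $!";  # no early truncate
    flock $fh, LOCK_EX or die "flock: $!";
    seek $fh, 0, 0;
    truncate $fh, 0;
    print {$fh} $data;
    close $fh;                  # closing releases the lock
}

sub read_locked {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    flock $fh, LOCK_SH or die "flock: $!";
    local $/;                   # slurp mode
    my $data = <$fh>;
    close $fh;
    return $data;
}
```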

 I'm certainly interested. One question though - in the module do you
 blindly use the cache? I ask because in my instance I display the
 contents of a shopping cart on every page.

I think this would be tricky to use with a cache - cart contents will
change in real time, and there's one copy per user, so you'd need a way
of expiring the cached data according to user ID. 

Some RDBMSs get large performance improvements from using placeholders
("select * from foo where userid = ?") and cacheing the statement
handles - I don't know if this applies to MySQL. With your sort of
application I'd try those measures before trying to use a complex cache
mechanism. Where up-to-date results are not critical, a cache mechanism
has great merit, IMHO.
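The placeholder-plus-cached-handle approach looks roughly like this with DBI. The DSN, table, and column names here are placeholders; prepare_cached() keeps the parsed statement handle inside the $dbh, so repeated calls skip the prepare step:

```perl
use strict;

# $dbh is a connected DBI handle (DBI assumed; see the usage note below).
sub user_rows {
    my ($dbh, $userid) = @_;
    my $sth = $dbh->prepare_cached('select * from foo where userid = ?');
    $sth->execute($userid);
    my $rows = $sth->fetchall_arrayref;
    $sth->finish;
    return $rows;
}

# Typical use (assumes a reachable database):
# my $dbh  = DBI->connect('dbi:mysql:test', 'user', 'pass', { RaiseError => 1 });
# my $rows = user_rows($dbh, 1425);
```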

Reading back along this thread,

Perrin Harkins wrote:
 - Use the DBIx namespace for the module. 

Possibly. SQL is not the only application for this sort of tool, though
it seems to be the main one.

 - If possible, use some existing cache module for the storage, like
 Apache::Session or one of the m/Cache/ modules on CPAN.

IIRC, Apache::Session *generates* its own key for each session. This
isn't going to work with a MD5-keyed-cache, where the key is generated
from the SQL.

File::Cache seems to do something rather similar, though without the MD5
bit. However, from a cursory look at the code, I think it's vulnerable
to concurrency conditions such as:
+ process (a) reads a file whilst (b) is still writing it
+ processes (a) and (b) both write to a file simultaneously, possibly
  corrupting it?! (this may be impossible, not sure)
+ process fails whilst writing a file (eg. process catches a KILL);
  subsequent reads of that file will get a fatal error

... which will pop up only Sometimes, usually on a busy site open to the
public :) This is Not Nice, assuming it's true.

Many CPAN things that do this sort of thing use tied hashes, which
(mostly, at least) won't work in a multi-process environment because
they don't handle concurrent reads & writes.

Cheers

--
Tim Sweetman
A L Digital



Re: Fwd: ActiveState...

2000-09-08 Thread Matt Sergeant

On Fri, 8 Sep 2000 [EMAIL PROTECTED] wrote:

 Would it be possible to create a binary version of mod_perl for Win32 that 
 works with Activestate Perl?

We're working on sorting out the linking issues so that building external
compiled modules such as AxKit and Embperl is trivial. Once that's done and
we've done more testing you'll be notified of a URL to get a ppm
from. Maybe we can even persuade ActiveState to host a ppm on their server
(although mod_perl competes with PerlEx, so I don't know how likely that
is).

-- 
Matt/

Fastnet Software Ltd. High Performance Web Specialists
Providing mod_perl, XML, Sybase and Oracle solutions
Email for training and consultancy availability.
http://sergeant.org | AxKit: http://axkit.org




Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread chicks

On 7 Sep 2000 Randal L. Schwartz wrote:
 This is neither necessary nor sufficient.  Please stop with this
 nonsense. An email address can have ANY CHARACTER OF THE PRINTABLE
 ASCII SEQUENCE. An email address NEVER NEEDS TO GET NEAR A SHELL, so
 ALL CHARACTERS ARE SAFE. Clear? Man, if I see ONE MORE script that
 checks for a "legal email", I'm gonna scream.  Matter of fact, I
 already did. :)

I have an immense amount of respect for you Randal, but I think you're
generalizing a bit much here.  There are a number of cases where checking
an email address' validity makes perfectly good sense.  The most obvious
is just plain human-computer interface design.  If I can give the user a
message "hey, that's not a valid email address" instead of them wondering
why they never received an email, that makes the interaction more
intuitive.  Many, many scripts end up calling sendmail to send mail. I'm
not for a moment going to applaud that method, but it does mean that shell
escapes in an email address will cause problems.

-- 
/chris

   If you're not part of the solution, you're part of the precipitate.
 - Steven Wright





RE: [OT?] Cross domain cookie/ticket access

2000-09-08 Thread Ian Mahuron


Why not do this...

Implement sessions via DBI.  All three servers will use the same table in the
same database for setting/getting session data (i.e.
'authenticated_uid' => 1425).

Pass the session id around in the path or in query string.  Make sure your 
applications include this data when linking to the other
servers.

It's too early.. so I doubt this method is without flaws.  Session hijacking comes to 
mind.  Bad juju.




Re: mod_perl DSO on NT

2000-09-08 Thread Naren Dasu

Buy a PC from Penguin Computing, it has Red Hat 6.2 and comes preloaded
with Apache/perl.. and tons of other goodies.

At 12:33 PM 9/8/00 +0700, Daniel Watkins wrote:
Hi,
   Does anyone know if a mod_perl DSO
can be loaded into any of the more commercial flavours of apache
(such as the IBM http server)?
I have done some work with mod_perl on NT but now a
few mindless bureaucratic nazi IT managers are waving
their rulebooks around. The problem being they don't understand
the difference between free software and freeware.
So, I need to buy a box with apache on it that supports
mod_perl.

Any Suggestions?

Daniel




Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Bill Moseley

At 10:31 AM 09/08/00 +0200, Stas Bekman wrote:
Net::SMTP works perfectly and doesn't lack any documentation. If there is
a bunch of folks who use mod_perl for their guestbook sites it's perfectly
Ok to run sendmail/postfix/whatever program you like... But it just
doesn't scale for the big projects. 

For a simple replacement of sendmail routine with Net::SMTP grab:
http://www.stason.org/works/scripts/mail-lib.pl

I guess I need some help understanding this, as it seems I have things
backwards.

I don't use Net::SMTP as I feel I'm at the whim of the SMTP server
(possibly on another machine) -- at least forking and asking sendmail to
queue the file, although eating memory and requiring a fork, seems like a
constant.  This is probably all hogwash, but I like constants.

So for times when I send mail just once in a while I fork  exec sendmail.
We got 2G of RAM for a reason.

But for times when I need to send mail on many requests I write to a queue
file and use cron to process the mail.

I don't know how well either of these scale.  But if scaling is important
I'd think it best not to rely on some smtp daemon.

I just looked at my old mail sending module a few days ago that uses
sendmail and would fallback to Net::SMTP if sendmail wasn't available (it
was running on Win at one point, argh!).  I just removed the Net::SMTP
part.  Are you saying that I removed the wrong code?

BTW -- I just looked and I have some other places where I use open3 to open
sendmail.  I have a note about sendmail generating an error by writing to
STDERR or STDOUT only and not returning failure.  This sound familiar to
anyone?



Bill Moseley
mailto:[EMAIL PROTECTED]



Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread brian moseley

On Fri, 8 Sep 2000, Bill Moseley wrote:

 I don't know how well either of these scale.  But if
 scaling is important I'd think it best not to rely on
 some smtp daemon.

this is a joke, right?

'i want to send lots of mail, i better not use a MAIL
SERVER'.




Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Jens-Uwe Mager

On Fri, Sep 08, 2000 at 11:17:31AM -0400, [EMAIL PROTECTED] wrote:
 I have an immense amount of respect for you Randal, but I think you're
 generalizing a bit much here.  There are a number of cases where checking
 an email address' validity makes perfectly good sense.  The most obvious
 is just plain human-computer interface design.  If I can give the user a
 message "hey, that's not a valid email address" instead of them wondering
 why they never received an email, that makes the interaction more
 intuitive.  Many, many scripts end up calling sendmail to send mail. I'm
 not for a moment going to applaud that method, but it does mean that shell
 escapes in an email address will cause problems.

If you use sendmail's -t option to let it read the mail addresses from
the message itself you do not need to pass email addresses on command
lines. This is much more secure and relies instead on sendmail's rather
largish machinery to parse email addresses.
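The -t approach described above can be sketched like this; the sendmail path is an assumption for your system, and the list-form pipe open means no address ever goes near a shell:

```perl
use strict;

# sendmail -t reads the recipients from the message headers on stdin;
# -oi keeps a lone "." line from ending the message early.
sub sendmail_t {
    my ($from, $to, $subject, $body) = @_;
    open my $mail, '|-', '/usr/lib/sendmail', '-t', '-oi'
        or die "cannot fork sendmail: $!";
    print {$mail} "From: $from\n",
                  "To: $to\n",
                  "Subject: $subject\n",
                  "\n",
                  $body;
    close $mail or die "sendmail failed: exit status $?";
}
```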

-- 
Jens-Uwe Mager

HELIOS Software GmbH
Steinriede 3
30827 Garbsen
Germany

Phone:  +49 5131 709320
FAX:+49 5131 709325
Internet:   [EMAIL PROTECTED]



Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Bill Moseley

At 10:07 AM 09/08/00 -0700, brian moseley wrote:
On Fri, 8 Sep 2000, Bill Moseley wrote:

 I don't know how well either of these scale.  But if
 scaling is important I'd think it best not to rely on
 some smtp daemon.

this is a joke, right?

'i want to send lots of mail, i better not use a MAIL
SERVER'.

'some' was the operative word.  Sorry if that was unclear.

I wouldn't want to depend on sending a lot of mail to a mail server I
didn't have control over in the middle of a request.











Bill Moseley
mailto:[EMAIL PROTECTED]



Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Perrin Harkins

On Fri, 8 Sep 2000, Stas Bekman wrote:
  As far as benchmarks are concerned, I'm sending one mail after having
  displayed the page, so it shouldn't matter much ...
 
 Yeah, and every time you get a 1M process fired up...

Nevertheless, in benchmarks we ran we found forking qmail-inject to be
quite a bit faster than Net::SMTP.  I'd say that at least from a
command-line script qmail-inject is a more scalable approach.

- Perrin




RE: init in Apache::ASP

2000-09-08 Thread Jerrad Pierce

No. But you can create subroutines and call them...
Or set up an include which defines various things to be substituted...
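For what it's worth, the subroutine approach can look something like this under Apache::ASP. This is a hedged sketch: `greeting()` is a made-up name, and $Response->Write is the standard Apache::ASP response object call:

```
<% greeting(); %>
... rest of the page ...
<%
sub greeting {
    my $name = "something";
    $Response->Write("Hello $name");
}
%>
```

The sub at the bottom is compiled before the page body runs, so the call at the top works even though the definition comes later.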

-Original Message-
From: Issam W. Alameh [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 08, 2000 1:35 PM
To: Modperl list
Subject: init in Apache::ASP


Hello All,

Can I put my asp code at the end of my html page, but let it 
execute before
showing the html???


I want to have something like


Hello <%=$name%>
<%
$name="something";
%>


Can this show

Hello something


Regards
Issam




Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Matt Sergeant

On Fri, 8 Sep 2000 [EMAIL PROTECTED] wrote:

 There is a very important reason for having to fork qmail-inject. Qmail
 by default will not allow mail relaying as a good security measure. You
 don't want your mail server to be used for spamming especially if you
 have a T3 or a T1 link. Anyone who is allowing sendmail to relay is in
 for trouble - there have been occasions when people have been sued just
 because the spamming originated from their servers.
 
 You can't use Net::SMTP because when you try to send out emails to other
 domains by connecting to the localhost Qmail will reject it because the
 recipient is not defined as a local user or a virtual domain.

You add these lines to /etc/hosts.allow and sleep peacefully:

tcp-env: 192.168. : setenv = RELAYCLIENT
tcp-env: 10. : setenv = RELAYCLIENT
tcp-env: host-ip-address : setenv = RELAYCLIENT
tcp-env: 127.0.0.1 : setenv = RELAYCLIENT

(the last one there might not be necessary).

Oh, and ensure your firewall doesn't allow incoming users pretending to
come from 10.* or 192.168.*

-- 
Matt/

Fastnet Software Ltd. High Performance Web Specialists
Providing mod_perl, XML, Sybase and Oracle solutions
Email for training and consultancy availability.
http://sergeant.org | AxKit: http://axkit.org




init in Apache::ASP

2000-09-08 Thread Issam W. Alameh

Hello All,

Can I put my asp code at the end of my html page, but let it execute before
showing the html???


I want to have something like


Hello <%=$name%>
<%
$name="something";
%>


Can this show

Hello something


Regards
Issam




Re: [OT?] Cross domain cookie/ticket access

2000-09-08 Thread Kee Hinckley

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

At 10:21 PM -0400 9/7/00, [EMAIL PROTECTED] wrote:
  
  I don't think there's any pretty way to do it.  The only thing I can
  think of off-hand is to generate the cross-server links dynamically,
   including an encrypted token in the URL which will notify that server


If you ever implement something like this, just be sure you
patent it before Amazon does ;)

Actually, I have a strong suspicion that this may be covered by the 
OpenMarket patents.  I know their authentication software worked 
cross-domain, and I know their ordering software worked with 
encrypted URL tokens.

At 10:24 AM -0400 9/8/00, darren chamberlain wrote:
Joe Pearson ([EMAIL PROTECTED]) said something to this effect:
  I thought you could set a cookie for a different domain - you just can't
  read a different domain's cookie.  So you could simply set 3 cookies when
  the user authenticates.

You sure can -- otherwise Navigator wouldn't have the "Only accept cookies
originating from the same server as the page being viewed" option.

Nope, that's for cookies being set by images that are from a 
different server than the one you are on.  But yes, you could use 
that, with a fair bit of trickery.  Primary domain sets cookie in 
database, page includes image references to secondary domains with 
encrypted token.  Fetching those images causes a lookup in the 
database which then sets the appropriate cookie.  Of course, if 
someone has set the above-mentioned netscape option, it won't work, 
and it won't work if the user doesn't hang around for those two 
(probably somewhat delayed) images.
- -- 

Kee Hinckley - Somewhere.Com, LLC - Cyberspace Architects
(Now playing: http://www.somewhere.com/playlist.cgi)

I'm not sure which upsets me more: that people are so unwilling to accept
responsibility for their own actions, or that they are so eager to regulate
everyone else's.

-BEGIN PGP SIGNATURE-
Version: PGPfreeware 6.5.2 for non-commercial use http://www.pgp.com

iQA/AwUBObkjaCZsPfdw+r2CEQIwmgCfVt0lfvamfD3TqpXs3mLcglmwr+EAoIAL
/CTdiqk1T4Ik/gHwqwQg6CMu
=bVrB
-END PGP SIGNATURE-



Re: SELECT cacheing

2000-09-08 Thread Perrin Harkins

On Fri, 8 Sep 2000, Tim Sweetman wrote:
  - Use the DBIx namespace for the module. 
 
 Possibly. SQL is not the only application for this sort of tool, though
 it seems to be the main one.

The module we're discussing is DBI-specific.  At least the interesting
part of it is.  The actual caching part is the second most re-invented
wheel on the mod_perl list, right behind templating systems.

  - If possible, use some existing cache module for the storage, like
  Apache::Session or one of the m/Cache/ modules on CPAN.
 
 IIRC, Apache::Session *generates* its own key for each session.

It only does that if you don't hand it one.

- Perrin




Re: [OT?] Cross domain cookie/ticket access

2000-09-08 Thread Aaron Johnson

Well, even if I thought it might be possible with a single cookie, the user
agents are, by RFC2109, supposed to not allow it, so even if I got something
to work there is no guarantee that it will work in the future, since it would
most likely be a security hole in the user agent.
See RFC2109 section 8.3 - Unexpected Cookie Sharing

Thanks, Simon, for the RFC number it really helped.

So based on some other responses here is my new idea -

Each machine has a login screen that is custom to its "app".  It shares the
login information via LDAP or SQL for user information across the different
domains.

The user will only see the login app if they do not have a valid session or
cookie for that app/domain.

The user gives user name and pass and is authenticated.  The user, once
authenticated, is updated in the "session" table as being online and is given a
unique number, and various information about them is stored under that key (user
name, pass, time of access, IP*, etc.).

When the user on domain1.net is given a link to domain2.net we could do one of
two things:
a) the link actually goes to a local page that then pulls the unique code for
that user and appends it to the
URL for the domain2.net site and they are sent with the unique code via post.
domain2.net then looks up the info for unique code in the shared session
database.  Along with the code as the key the session database also would hold
the user name and "clearance" of the user (possibly other fields like IP) and
the server would also check the HTTP_REFERER  to see if it is in the "valid"
list.

- or -

b) The link to domain2.net or domain3.net has the unique code appended in
advance, when the navigation is generated, and we rely only on the HTTP_REFERER.

(a) is a little more paranoid and doesn't require processing on every page to
add the code to the off-domain URLs, but requires more "work" to get the person
to the correct URL.

(b) Is certainly quick and dirty, but has more potential to expose the unique
code.

Is it hard to spoof a HTTP_REFERER?
Is it as easy as sending a modified header?

domain2.net, once it has received the URL, will create its own session/cookie
information for the user based on the information in the session database, and
subsequent requests to that domain would automatically be passed to the correct
URL without creating session information again.  ( I am basing that off
existing work with Apache::ASP and app entry points )

Any major flaws with this plan or suggested improvements?

Aaron Johnson

Simon Rosenthal wrote:

 At 11:37 PM 9/7/00 -0600, Joe Pearson wrote:
 I thought you could set a cookie for a different domain - you just can't
 read a different domain's cookie.  So you could simply set 3 cookies when
 the user authenticates.

 I don't think you can set a cookie for a completely different domain, based
 on my reading of RFC2109 and some empirical tests ... it would be a massive
 privacy/security hole, yes ?

 - Simon

 Now I'm curious, I'll need to try that.
 
 --
 Joe Pearson
 Database Management Services, Inc.
 208-384-1311 ext. 11
 http://www.webdms.com
 
 -Original Message-
 From: Aaron Johnson [EMAIL PROTECTED]
 To: [EMAIL PROTECTED] [EMAIL PROTECTED]
 Date: Thursday, September 07, 2000 10:08 AM
 Subject: [OT?] Cross domain cookie/ticket access
 
 
  I am trying to implement a method of allowing access to three separate
  servers on three separate domains.
  
  The goal is to only have to login once and having free movement across
  the three protected access domains.
  
  A cookie can't work due to the limit of a single domain.
  
  Has anyone out there had to handle this situation?
  
  I have thought about several different alternatives, but they just get
  uglier and uglier.
  
  One thought was that they could go to a central server and login.  At
  the time of login they would be redirected to a special page on each of
  the other two servers with any required login information.  These pages
  would in turn return them to the login machine.  At the end of the login
  process they would be redirected to the web site they original wanted.
  
  This is a rough summary of what might happen -
  
  domain1.net - user requests a page in a protected directory.   They
  don't have a cookie.
  They are redirected to the cookie server.  This server asks for the user
  name and pass and authenticates the user.  Once authenticated the cookie
  server redirects the client to each of the other (the ones not matching
  the originally requested domain) domains.  This redirect is a page that
  hands the client a cookie and sets up the session information.
  domain2.net gets the request and redirects the user to a page that will
  return them to the cookie machine which will add the domain2.net to the
  list of domains in the cookie. And then the process will repeat for each
  domain that needs to be processed.
  
  Am I crazy?  Did I miss something in the documentation for the current
  Session/Auth/Cookie modules?
  

Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Randal L. Schwartz

 "Bill" == Bill Moseley [EMAIL PROTECTED] writes:

Bill I wouldn't want to depend on sending a lot of mail to a mail server I
Bill didn't have control over in the middle of a request.

Unless the mail is for very local delivery, EVERY piece of mail
goes to a mail server that you don't have control over in the middle
of the request. :-)

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



Inheritance within Apache::Registry(NG)

2000-09-08 Thread Philip Molter

I want to write a handler that will basically do a bunch of
processing, create some information, and then run a script in the
handled directory passing in the information, afterwords, doing
some more work before finally returning.  So, like:

  sub handler {
    my $r = shift;

    # bunches of processing
    $r = __PACKAGE__->new( $r );
    # $r now is of our class, but inherits Apache.pm's methods

    my $rc = Apache::RegistryNG( $r, ... #other info );

    # more processing
    return $rc;
  }

However, whenever I pass $r, it invariably gets recreated as an
Apache object or, if I fiddle around with Apache::PerlRun, it gets
maybe recreated as an Apache::RegistryNG object, depending on how
I make PerlRun inherit things.  The point is, I /want/ Registry or
RegistryNG to manage the compilation and management of my script
code, but the $r I pass in I want to be an Apache object with
several extended and overridden methods of my design.

Is this feasible at all?  The sorts of things that RegistryNG and
PerlRun do to their request objects sort of destroy any sort of
inheritance that you set up.  Is it better off just to incorporate
the compilation code into a new module, rather than using what's
already there?

* Philip Molter
* DataFoundry.net
* http://www.datafoundry.net/
* [EMAIL PROTECTED]



Re: Auto rollback using Apache::DBI

2000-09-08 Thread Jeff Horn

Yes, I ran into this while I was making a version of Apache::DBI which uses
'reauthenticate' to maintain a single connection per Apache child (per
database) and simply reauthenticate on that connection.  It turned out that
I modified what $Idx was composed of and didn't understand why I was not
getting rollbacks when sessions ended without commits.

I too think that the cleanup handler should ALWAYS be pushed and that the
handler itself should check for the AutoCommit status before issuing a
rollback.  Should be easy enough to implement.
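A sketch of that fix, outside of Apache::DBI itself: always push the cleanup handler, and let the handler decide whether a rollback is needed. This is written against the mod_perl 1.x API and is an illustration, not the actual Apache::DBI patch:

```perl
use strict;

# $r is the Apache request object, $dbh a DBI handle.
sub push_rollback_cleanup {
    my ($r, $dbh) = @_;
    $r->push_handlers(PerlCleanupHandler => sub {
        # Only roll back if the script left AutoCommit off and the
        # handle is still alive; with AutoCommit on this is a no-op.
        if ($dbh->{Active} and not $dbh->{AutoCommit}) {
            eval { $dbh->rollback };
        }
        return 0;   # Apache::Constants::OK
    });
}
```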

-- Jeff
- Original Message -
From: "Honza Pazdziora" [EMAIL PROTECTED]
To: "Nicolas MONNET" [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, September 07, 2000 9:17 AM
Subject: Re: Auto rollback using Apache::DBI


 On Thu, Sep 07, 2000 at 04:03:04PM +0200, Nicolas MONNET wrote:
 
  I might get something wrong, but while in non-autocommit, if a script
dies
  before rollbacking or commiting, looks like the transaction never gets
  cancelled until I kill -HUP httpd! Quite a problem ...
 
  Is there any known way to catch this?

 Looking at the code in Apache::DBI 0.87, the handle is only rolled back
 if AutoCommit is set to zero during connect, not if you do

 $dbh->{'AutoCommit'} = 0;

 in your script.

 I wonder if the $needCleanup test is wanted at all. We could make it
 a configuration option not to push the cleanup handler, but I believe
 that the rollback is generally the wanted thing in all cases.

 --
 
  Honza Pazdziora | [EMAIL PROTECTED] | http://www.fi.muni.cz/~adelton/
  .project: Perl, DBI, Oracle, MySQL, auth. WWW servers, MTB, Spain, ...
 





Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Stas Bekman

On Fri, 8 Sep 2000, Perrin Harkins wrote:

 On Fri, 8 Sep 2000, Stas Bekman wrote:
   As far as benchmarks are concerned, I'm sending one mail after having
   displayed the page, so it shouldn't matter much ...
  
  Yeah, and every time you get a 1M process fired up...
 
 Nevertheless, in benchmarks we ran we found forking qmail-inject to be
 quite a bit faster than Net::SMTP.  I'd say that at least from a
 command-line script qmail-inject is a more scalable approach.

Quite possible, I was talking about the fat sendmail binaries :)


_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://jazzvalley.com
http://singlesheaven.com http://perlmonth.com   perl.org   apache.org





Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Stas Bekman

On Fri, 8 Sep 2000, Bill Moseley wrote:

 At 10:31 AM 09/08/00 +0200, Stas Bekman wrote:
 Net::SMTP works perfectly and doesn't lack any documentation. If there is
 a bunch of folks who use mod_perl for their guestbook sites it's perfectly
 Ok to run sendmail/postfix/whatever program you like... But it just
 doesn't scale for the big projects. 
 
 For a simple replacement of sendmail routine with Net::SMTP grab:
 http://www.stason.org/works/scripts/mail-lib.pl
 
 I guess I need some help understanding this, as it seems I have things
 backwards.
 
 I don't use Net::SMTP as I feel I'm at the whim of the SMTP server
 (possibly on another machine) -- at least forking and asking sendmail to
 queue the file, although eating memory and requiring a fork, seems like a
 constant.  This is probably all hogwash, but I like constants.

Net::SMTP is just an interface to your mail (SMTP) server, which makes
Net::SMTP a client. When you fire up sendmail or an equivalent you run a
client. It's how you configure the mail server that makes the difference.
In the case of sendmail it's both a client and a server, of course.

I'm no expert in mail servers, so I guess someone on the list may
provide the necessary details of making the server queue the mails.
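For concreteness, a minimal Net::SMTP send looks like this; the mailhost and addresses are placeholders, and mail()/to()/data()/quit() are the standard Net::SMTP calls:

```perl
use strict;
use Net::SMTP;

sub smtp_send {
    my ($mailhost, $from, $to, $msg) = @_;
    my $smtp = Net::SMTP->new($mailhost, Timeout => 30)
        or die "cannot connect to SMTP server $mailhost";
    $smtp->mail($from) or die "MAIL FROM rejected";
    $smtp->to($to)     or die "RCPT TO rejected";
    $smtp->data($msg)  or die "DATA rejected";
    $smtp->quit;
}

# smtp_send('localhost', 'me@example.com', 'you@example.com',
#           "Subject: test\n\nhello\n");
```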

 I just looked at my old mail sending module a few days ago that uses
 sendmail and would fallback to Net::SMTP if sendmail wasn't available (it
 was running on Win at one point, argh!).  I just removed the Net::SMTP
 part.  Are you saying that I removed the wrong code?

As Perrin has suggested, benchmark it an see what's faster. It's so
simple.


_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://jazzvalley.com
http://singlesheaven.com http://perlmonth.com   perl.org   apache.org






Re: [OT?] Cross domain cookie/ticket access

2000-09-08 Thread Kee Hinckley

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

At 2:23 PM -0400 9/8/00, Aaron Johnson wrote:
a) the link actually goes to a local page that then pulls the unique code for
that user and appends it to the
URL for the domain2.net site and they are sent with the unique code via post.
domain2.net then looks up the info for unique code in the shared session
database.  Along with the code as the key the session database also would hold
the user name and "clearance" of the user (possibly other fields like IP) and
the server would also check the HTTP_REFERER  to see if it is in the "valid"
list.

Note that that is neither secure nor reliable.  HTTP_REFERER can be 
trivially forged, and reloads cause it not to appear at all.  That's 
why I recommend an encrypted version of the login information in the 
URL.  You can encrypt a timestamp with it, or allow a given 
encryption key to be used only once, so as to ensure that the URL 
can't be reused by a third party.  Remember too that anything you 
pass in the URL will end up in your log files--do you trust everyone 
who can get access to those?  Are they kept secure?
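One way to build the signed, timestamped token suggested above, using the HMAC functions from Digest::SHA. The field layout and shared secret are assumptions; all three servers would need the same secret:

```perl
use strict;
use Digest::SHA qw(hmac_sha1_hex);

# Shared by every server that must verify tokens (an assumption here).
my $secret = 'shared-secret-for-all-three-domains';

sub make_token {
    my ($uid, $now) = @_;
    $now = time unless defined $now;
    my $mac = hmac_sha1_hex("$uid:$now", $secret);
    return "$uid:$now:$mac";
}

sub check_token {
    my ($token, $max_age) = @_;
    my ($uid, $then, $mac) = split /:/, $token;
    return undef unless defined $mac
        and $mac eq hmac_sha1_hex("$uid:$then", $secret);
    return undef if time - $then > $max_age;   # token too old
    return $uid;                               # verified user id
}
```

The timestamp bounds replay, and since nothing secret is in the URL itself, a leaked log line only exposes an expired token.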

Is it hard to spoof a HTTP_REFERER?

Trivial.

Is it as easy as sending a modified header?

Yes.

- -- 

Kee Hinckley - Somewhere.Com, LLC - Cyberspace Architects
(Now playing: http://www.somewhere.com/playlist.cgi)

I'm not sure which upsets me more: that people are so unwilling to accept
responsibility for their own actions, or that they are so eager to regulate
everyone else's.

-BEGIN PGP SIGNATURE-
Version: PGPfreeware 6.5.2 for non-commercial use http://www.pgp.com

iQA/AwUBOblXOiZsPfdw+r2CEQJ1qACfeRX8RNhAFIGWPzYNS4P96Je5oEsAn1ds
0XDzJ4RJdpHZhueoyjXvQzvZ
=OHRN
-END PGP SIGNATURE-



Re: SELECT cacheing

2000-09-08 Thread Roger Espel Llima

On Thu, Sep 07, 2000 at 06:22:40PM -0700, Perrin Harkins wrote:
 I'd say this is probably useful to some people, so go ahead.  A few
 suggestions: 
 - Use the DBIx namespace for the module. 

Sounds reasonable.  The question then is: what should the API be like?

The way it works right now is with an API of its own:
$arrayref_of_arrays = SelectCache::select($dbh, $st, $timeout);

It'd be nice to have an API that mimics the DBI one more closely, with
the different fetchrow_* and fetchall_* interfaces.  Then again, for a
module whose main purpose in life is to speed up SELECTs, maybe
restricting it to mimic the selectall_arrayref(), selectrow_array() and
selectcol_arrayref() would be enough.  

I really don't see much purpose in writing iterators à la
fetchrow_* for a module that gets all the rows at the same time.  This
is actually an important thing to decide, because AFAICS
fetchrow_hashref is the only method that returns hashes and therefore
needs to care about column names.  So if we decide to support only the
select* interfaces, all we have to store are arrayrefs of arrayrefs.  If
we do support fetchrow_hashref, then we need either two kinds of storage
(so that the results of the same SELECT can be cached twice, if one
script wants arrays and the other wants hashes), or a way to get arrays
from hashes or vice versa, which looks hard because one loses the order
and the other the names.

Another matter is: should this be a subclass of DBI, mimicking its API,
with an interface like:

my $dbc = DBI::SelectCache->new($db);
$dbc->expiration(180);
my $st = qq{ select ... };
my $rows = $dbc->selectall_arrayref($st);

and letting everything else (prepare, fetch*, etc) fall through to the
superclass, or should it be passing the expiration time as part of the
main function call, as I was doing before?  That would be something like:

my $st = qq{ select ... };
my $rows = DBI::SelectCache->selectall_arrayref($db, $st);

The first option fits in better with DBI, but the second is more
practical for the user, who doesn't need to create another object, and
can think of the expiration as a per-statement thing (which it is),
rather than per-connection.
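The second option could be sketched like this. The package name is the hypothetical one from the discussion, and the in-memory hash is a stand-in for whatever real storage backend (files, Storable, etc.) gets chosen:

```perl
package DBI::SelectCache;   # hypothetical name, per the discussion
use strict;

my %cache;                  # sql text => [ time stored, rows ]

sub cache_get {
    my ($st, $expire) = @_;
    my $entry = $cache{$st} or return undef;
    return time - $entry->[0] < $expire ? $entry->[1] : undef;
}

sub cache_put {
    my ($st, $rows) = @_;
    $cache{$st} = [ time, $rows ];
}

# Class method: handle, statement, per-statement expiry. Falls through
# to plain DBI when the cache entry is missing or stale.
sub selectall_arrayref {
    my ($class, $dbh, $st, $expire) = @_;
    if ($expire and $expire > 0) {
        my $rows = cache_get($st, $expire);
        return $rows if $rows;
    }
    my $rows = $dbh->selectall_arrayref($st);
    cache_put($st, $rows) if $expire and $expire > 0;
    return $rows;
}

1;

# my $rows = DBI::SelectCache->selectall_arrayref($db, $st, 180);
```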

Any suggestions?  I'm a bit lost on how to give this thing a good,
extendable interface.

 - If possible, use some existing cache module for the storage, like
 Apache::Session or one of the m/Cache/ modules on CPAN.

Others have suggested Storable.  I've used this one before, and I can
agree that it's probably a good solution.

Right now, what I'm doing is just join()ing the arrays with null
characters as separators, and using split() to get them back.  But null
chars are allowed in databases, so I agree that switching to Storable
would be a good idea.

 - Provide a safety check so that if a query brought back a few million
 rows by accident you wouldn't try to write the whole mess to disk.
 - Maybe try to support the other results interfaces in DBI?

Sounds good.  This means that this module would need a config file.

-- 
Roger Espel Llima, [EMAIL PROTECTED]
http://www.iagora.com/~espel/index.html



Re: SELECT cacheing

2000-09-08 Thread Roger Espel Llima

On Fri, Sep 08, 2000 at 09:26:23AM -0400, Drew Taylor wrote:
 I'm certainly interested. One question though - in the module do you
 blindly use the cache? I ask because in my instance I display the
 contents of a shopping cart on every page. And while only a few pages
 change the cart contents, the cart listing does need to be current. How
 do you handle this situation?

the module gets the expiration time.  if it's 0 or negative, it ignores
the cache and reads straight from the db.

-- 
Roger Espel Llima, [EMAIL PROTECTED]
http://www.iagora.com/~espel/index.html



Re: SELECT cacheing

2000-09-08 Thread Roger Espel Llima

On Fri, Sep 08, 2000 at 03:46:25PM +0100, Tim Sweetman wrote:
 This can be an extremely powerful approach to speeding up web
 applications. We use a similar module which ended up fairly large - it
 takes a method name & arguments, rather than an SQL string, meaning that
 you can cache the result of operations other than SQL queries. It's also
 grown several other enhancements: a mutual-exclusion-and-backoff
 algorithm, so if one process is looking for the answer, others wait for
 it rather than performing the same query at the same time, and several
 ways to expire results that have become outdated (specifying lifetime,
 or via timestamp files that get touched when major changes happen).

That sure sounds powerful!

 The one thing I'd advise is: BE VERY CAREFUL WITH RACE CONDITIONS. You
 can easily end up with something that will, in an unusual case, store
 garbled data. I think you'd need to either use flock(), or write to
 files then rename them, since rename is an atomic operation - and I
 don't know how well that works under OSs other than UNIXes.

I use the latter approach: write to a temp name, then rename.  I really
think this should be safe anywhere, I don't think any OS would be broken
enough to make a rename non-atomic, and let other processes read garbled
stuff when the original file was written to and closed.
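The write-then-rename approach can be sketched like this (the helper name is made up; the temp file must live in the same directory so the rename stays within one filesystem):

```perl
use File::Temp qw(tempfile);
use File::Basename qw(dirname);

# Write the cache entry to a temp file in the SAME directory, then
# rename() it into place.  rename() is atomic on one filesystem, so
# readers see either the old entry or the complete new one, never a
# half-written file.
sub cache_write {
    my ($path, $data) = @_;
    my ($fh, $tmp) = tempfile(DIR => dirname($path));
    print $fh $data;
    close $fh or die "close $tmp: $!";
    rename $tmp, $path or die "rename $tmp -> $path: $!";
}
```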

 Many CPAN things that do this sort of thing use tied hashes, which
 (mostly, at least) won't work in a multi-process environment because
 they don't handle concurrent reads & writes.

I really prefer Storable or something like it, for this application.  So
each cached value is a file, and we can use the filesystem and its
lastmod metadata, and standard tools like find or File::Find (or
whatever its name is) to clean up.
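The mtime-based freshness test is nearly a one-liner (the function name here is just for illustration):

```perl
# An entry is fresh if the file exists and its lastmod time is
# within the allowed age -- no extra metadata to store anywhere.
sub cache_fresh {
    my ($path, $max_age) = @_;
    my $mtime = (stat $path)[9];   # seconds since epoch, or undef
    return defined $mtime && (time - $mtime) <= $max_age;
}
```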

-- 
Roger Espel Llima, [EMAIL PROTECTED]
http://www.iagora.com/~espel/index.html



Re: Inheritance within Apache::Registry(NG)

2000-09-08 Thread Ken Williams

[EMAIL PROTECTED] (Philip Molter) wrote:
However, whenever I pass $r, it invariably gets recreated as an
Apache object or, if I fiddle around with Apache::PerlRun, it gets
maybe recreated as an Apache::RegistryNG object, depending on how
I make PerlRun inherit things.  The point is, I /want/ Registry or
RegistryNG to manage the compilation and management of my script
code, but the $r I pass in I want to be an Apache object with
several extended and overridden methods of my design.

Is this feasible at all?  The sorts of things that RegistryNG and
PerlRun do to their request objects sort of destroys any sort of
inheritance that you setup.  Is it better off just to incorporate
the compilation code into a new module, rather than using what's
already there?

I've got the same gripe.  I don't think RegistryNG should be a subclass
of Apache, and I don't think it should have to make so many changes to
the request object.  I've submitted patches to RegistryNG to Doug, but he
hasn't had time to look at them yet.

My changed versions are at
http://forum.swarthmore.edu/~ken/modules/Apache-Filter/lib/Apache/
if you're interested in seeing what I've done.


  ------
  Ken Williams Last Bastion of Euclidity
  [EMAIL PROTECTED]The Math Forum





Re: init in Apache::ASP

2000-09-08 Thread G.W. Haywood

Hi there,

On Fri, 8 Sep 2000, Issam W. Alameh wrote:

 I want to have something like
 
 Hello <%=$name%>
 <%
 $name="something";
 %>
 

Try to think of it as a program.  You can't use the variable's value
until you've set it.  Why does it matter where it goes in the page?
If you really want to separate it, use Response->Include or something.

73,
Ged.




Re: SELECT cacheing

2000-09-08 Thread Perrin Harkins

On Fri, 8 Sep 2000, Roger Espel Llima wrote:
  - If possible, use some existing cache module for the storage, like
  Apache::Session or one of the m/Cache/ modules on CPAN.
 
 Others have suggested Storable.  I've used this one before, and I can
 agree that it's probably a good solution.

Storable is just a way to turn a complex data structure into a single
scalar.  You still need to handle the file manipulation yourself.  Most of
the existing cache modules use Storable to serialize to a scalar and then
files or shared memory or a dbm for actual storage.

 This means that this module would need a config file.

Or some PerlSetVar directives in httpd.conf.
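For instance (the variable names below are invented for illustration):

```perl
# httpd.conf fragment:
#
#   PerlSetVar SelectCacheDir    /var/cache/select
#   PerlSetVar SelectCacheExpire 180
#
# and in the module, read them back from the request object:
my $r      = Apache->request;
my $dir    = $r->dir_config('SelectCacheDir');
my $expire = $r->dir_config('SelectCacheExpire') || 180;
```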

- Perrin




RE: open(FH,'|qmail-inject') fails

2000-09-08 Thread David Harris


Stas wrote:
 On Fri, 8 Sep 2000, Perrin Harkins wrote:
  Nevertheless, in benchmarks we ran we found forking qmail-inject to be
  quite a bit faster than Net::SMTP.  I'd say that at least from a
  command-line script qmail-inject is a more scalable approach.

 Quite possible, I was talking about the fat sendmail binaries :)

Yes, quite possible.

Using SMTP and qmail-inject both have the overhead of a fork, because the SMTP
tcpserver will fork off a copy of qmail-smtpd to handle the request.

Additionally, the SMTP tcpserver is probably doing a reverse DNS lookup and
probably an ident lookup which would probably cause another fork for identd.
(Both reverse DNS and ident lookup are enabled by default in ucspi-tcp-0.84.)
This network activity and possibly another fork will cause delay.

The overhead of forking directly off from mod_perl does not seem so bad when
you look at copy-on-write memory managers. The fork: (a) does not cause copying
of the big mod_perl process at fork thanks to copy-on-write, and (b) there will
be virtually no dirtying of pages and copying because an exec() will be
immediately done. A possible problem is qmail-inject inheriting a bunch of
filehandles from mod_perl, but they should all be marked close-on-exec.
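For reference, the pipe-open under discussion looks like this; the qmail-inject path is qmail's conventional install location and the addresses are placeholders:

```perl
# perl fork+execs qmail-inject here; thanks to copy-on-write the
# fork is cheap even from a big mod_perl process.
open my $inject, '|-', '/var/qmail/bin/qmail-inject'
    or die "can't fork qmail-inject: $!";
print $inject "From: [EMAIL PROTECTED]\n";
print $inject "To: [EMAIL PROTECTED]\n";
print $inject "Subject: test\n";
print $inject "\n";
print $inject "message body\n";
# close() returns false if qmail-inject exited non-zero
close $inject or die "qmail-inject failed: exit status $?";
```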

David Harris
President, DRH Internet Inc.
[EMAIL PROTECTED]
http://www.drh.net/






Re: init in Apache::ASP

2000-09-08 Thread Dmitry Beransky


At 03:59 PM 9/8/00, G.W. Haywood wrote:
Try to think of it as a program.  You can't use the variable's value
until you've set it.  Why does it matter where it goes in the page?

Not exactly true for Perl, is it? -- the BEGIN subroutine comes to 
mind.  What follows is just a speculation on my part, hopefully Joshua or 
someone else who knows better will correct me if I'm wrong...  As I 
understand the way ASP code is parsed, everything is converted to a perl 
script with patches of HTML replaced by print statements.  This perl script 
is then submitted to eval().  Assuming this is true, I think you should be 
able to use the BEGIN subroutine to initialize your variables just like in 
a normal perl script.

  Hello <%=$name%>
  <%
    BEGIN {
      $name="something";
    }
  %>

Of course, if you have variables that need to be initialized for every 
script you run, you should probably use Script_OnStart.
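A global.asa sketch of that, reusing the $name variable from the example above:

```perl
# In global.asa: Script_OnStart runs before every script in the
# application, so globals are initialized before any page body
# (or inline <%= %> expression) executes.
use vars qw($name);

sub Script_OnStart {
    $name = "something";
}
```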

Dmitry

PS  Perhaps I should've tested this first, before talking off my ass.

If you really want to separate it, use Response->Include or something.

On Fri, 8 Sep 2000, Issam W. Alameh wrote:

  I want to have something like
  
  Hello <%=$name%>
  <%
  $name="something";
  %>
  




Re: SELECT cacheing

2000-09-08 Thread Brian Cocks

I'm wondering how much improvement this caching is over the database caching
the parsed SQL statement and results (disk blocks)?

In Oracle, if you issue a query that is cached, it doesn't need to be parsed.
If the resulting blocks are also cached, there isn't any disk access.  If the
database is tuned, you should be able to get stuff out of cache over 90% of
the time.  I don't know what databases other than Oracle do.

What are the advantages of implementing your own cache?  Is there any reason
other than speed?

--
Brian Cocks
Senior Software Architect
Multi-Ad Services, Inc.
[EMAIL PROTECTED]





Re: SELECT cacheing

2000-09-08 Thread Tim Bishop


On Fri, 8 Sep 2000, Perrin Harkins wrote:

 On Fri, 8 Sep 2000, Roger Espel Llima wrote:
   - If possible, use some existing cache module for the storage, like
   Apache::Session or one of the m/Cache/ modules on CPAN.
  
  Others have suggested Storable.  I've used this one before, and I can
  agree that it's probably a good solution.
 
 Storable is just a way to turn a complex data structure into a single
 scalar.  You still need to handle the file manipulation yourself.  Most of
 the existing cache modules use Storable to serialize to a scalar and then
 files or shared memory or a dbm for actual storage.

I would delegate the tieing, serialization, and locking to a module
like Apache::Session  (It uses Storable internally).

Then the user can specify their own favorite backing store and locking
mechanism by subclassing Apache::Session.

I would also look to the Memoize module for
ideas:  http://search.cpan.org/search?dist=Memoize
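Memoize illustrates the shape of this kind of caching in a few lines (the function here just stands in for a slow query):

```perl
use Memoize;

my $calls = 0;
sub lookup { $calls++; return $_[0] * 2 }   # stands in for a slow query
memoize('lookup');

# Only the first call runs the body; repeats come from the cache.
lookup(21) for 1 .. 3;
```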

-Tim




Eval block error trapping bug????

2000-09-08 Thread Chuck Goehring

Hi,

Having a big problem here.

When I use an eval{} block to trap DBI errors, it doesn't seem to work as
documented under mod_perl.
When I found this problem, I created a test program that connects, prepares
and executes a bogus sql
statement.  The $lsth->execute() || die $DBI::errstr; is triggering the
default "Software Error" message
and the cgi ends.  Toggling RaiseError changes the wording of the error
message but does not affect the
error trapping.  Under mod_perl, the die() within the eval block causes the
program to really die
even with RaiseError => 1.  I thought I had this all nailed down but now I
get un-gracefull error
handling in my web app.  The command line version works correctly.  Both
test programs and the
version info are listed below.

Deploying two major systems  need some help.  Any suggestions?

Thanks
Chuck Goehring
=

Versions are as follows:
Windows NT SP5
perl 5.6.0
Apache 1.3.12
ApacheDBI-0.87
mod_perl 1.24
DBD-Oracle-1.03
Dbi-1.13
Jserv 1.1.1
The manual fix for Carp in sub ineval was put in.
The manual fix for Carp::Heavy regarding 'DB'/'DB_tmp' was put in.

== startup.pl

use strict;
use Apache ();
use Apache::Registry;
use Apache::DBI();
use CGI qw(-compile :cgi);
use Carp();

1;
__END__

=== mod_perl test program
use strict;
my $cgiOBJ;
use CGI qw/:standard :html3/;
use CGI::Carp qw(fatalsToBrowser);
use Apache::DBI();
$cgiOBJ = new CGI;

print $cgiOBJ->header(-expires=>'-1d'),
  $cgiOBJ->start_html(-TITLE=>'Testing error handling',
  -BGCOLOR=>'#FF');
test3('select dork, pork, cork from rourk'); # A BOGUS sql statement.

print $cgiOBJ-end_html;

sub test3 {  # eval execute
  my ($aSQLStatement) = @_;
  print "test3: eval executebrUsing SQL: $aSQLStatement";
  my ($dbh, $DefaultDSN, $user, $pa, $lsth);
  $DefaultDSN = 'DBI:Oracle:rciora';
  $user = 'dis';
  $pa = 'dis';

  $dbh = DBI->connect($DefaultDSN, $user, $pa, { RaiseError => 1, PrintError
=> 0, AutoCommit => 0 });   # Returns undef on failure (pg 86).
  eval {
$lsth = $dbh->prepare($aSQLStatement) or die "Prepare failed:
$DBI::errstr";
if (!(defined($lsth))) { # Prepare is supposed to return undef
on failure (pg 108).
  print 'lsth not defined at 1';  # This seems to never print for
Oracle.
}
$lsth->execute() || die $DBI::errstr; # die should trigger jump to 'if'
below.
  };
  if ($@) {   # $@ is null after an eval per camel page 161
print 'Message reported by @ = ' . $@;
  }
  if (!(defined($lsth))) { # Prepare is supposed to return undef on
failure (pg 108).
print 'sth not defined at 2';  # This seems to never print for Oracle.
  }
  $dbh->disconnect();
  return;
}

=== command line perl test program

use strict;
use DBI();

test3('select dork, pork, cork from rourk'); # A BOGUS sql statement.


sub test3 {  # eval execute
  my ($aSQLStatement) = @_;
  my ($dbh, $DefaultDSN, $user, $pa, $lsth);
  $DefaultDSN = 'DBI:Oracle:rciora';
  $user = 'dis';
  $pa = 'dis';

  $dbh = DBI->connect($DefaultDSN, $user, $pa, { RaiseError => 1, PrintError
=> 0, AutoCommit => 0 });   # Returns undef on failure (pg 86).
  eval {
$lsth = $dbh->prepare($aSQLStatement) or die "Prepare failed:
$DBI::errstr";
if (!(defined($lsth))) { # Prepare is supposed to return undef
on failure (pg 108).
  print "\nlsth not defined at 1";  # This seems to never print for
Oracle.
}
$lsth->execute() || die $DBI::errstr; # die should trigger jump to 'if'
below.
  };
  if ($@) {   # $@ is null after an eval per camel page 161
print "\nMessage reported by @ = $@";
  }
  if (!(defined($lsth))) { # Prepare is supposed to return undef on
failure (pg 108).
print "\nsth not defined at 2";  # This seems to never print for Oracle.
  }
  $dbh->disconnect();
  return;
}





Re: [OT?] Cross domain cookie/ticket access

2000-09-08 Thread joe


Kee Hinckley [EMAIL PROTECTED] writes:

 At 10:21 PM -0400 9/7/00, [EMAIL PROTECTED] wrote:
   
   I don't think there's any pretty way to do it.  The only thing I can
   think of off-hand is to generate the cross-server links dynamically,
including an encrypted token in the URL which will notify that server
 
 
 If you ever implement something like this, just be sure you
  patent it before Amazon does ;)
 
 Actually, I have a strong suspicion that this may be covered by the 
 OpenMarket patents.  I know their authentication software worked 
 cross-domain, and I know their ordering software worked with 
 encrypted URL tokens.
 

That's what I was afraid of. 
However, I searched the Open Market patents at http://patents.ibm.com; 
and I didn't see any directly relevant listings.  They apparently hold a patent
related to embedding session data in the path-info; your particular 
problem appears cookie-related.  

My recommendation for using cookies is to do what banner advertisers do.  
I would embed a dummy link (image, stylesheet, javascript) in the ticket 
authentication's confirmation page ("Congratulations, you've successfully 
logged in... redirecting to ...").  

Say you use blank images. Put one in for each domain, and put the 
authentication token in the url or query args.  When the browser 
fetches the dummy link FROM EACH DOMAIN, presumably the code you 
run for that url will return a set-cookie header 
for that domain.  It's like doing the 'round-robin' thing all at once.
The end user shouldn't notice any difference.
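Generating those beacons is straightforward; the token and domain names below are placeholders, and each domain's /auth-img handler (not shown) would answer with its own Set-Cookie header:

```perl
# Build one invisible 1x1 beacon per domain, carrying the auth token
# in the query args so each domain can set its own cookie.
my $token   = 'signed-token';
my @domains = ('www.example.com', 'shop.example.net');
my $beacons = join "\n", map {
    qq{<img src="http://$_/auth-img?t=$token" width="1" height="1" alt="">}
} @domains;
```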

Also, I'm pretty sure the netscape setting for 'accepting cookies from other
domains' only applies to this kind of usage.  Domain x should NEVER be able
to set a cookie in domain y.  Period.  Advertisers deliver banners from domains
other than the one you requested in the browser's url. Disabling this feature
within netscape prevents those images from setting cookies
(in their OWN domain, of course). This would conceivably break the
implementation above, in which case you can use dummy frames instead!

Best of luck.
-- 
Joe Schaefer
[EMAIL PROTECTED]

SunStar Systems, Inc.



Re: init in Apache::ASP

2000-09-08 Thread Joshua Chamas

"Issam W. Alameh" wrote:
 
 Hello All,
 
 Can I put my asp code at the end of my html page, but let it execute before
 showing the html???
 
 I want to have something like
 
 
  Hello <%=$name%>
  <%
  $name="something";
  %>
 
 
 Can this show
 
 Hello something
 

No, probably the place you want to put this code is Script_OnStart
like Dmitry mentioned, where you can set up globals for all of
your scripts.  You cannot hide code like Mason by putting it after
the HTML, better to put it into some kind of init() subroutine.

--Joshua

_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks  free web link monitoring   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051



Re: open(FH,'|qmail-inject') fails

2000-09-08 Thread Andrew Dunstan


Regarding cost of forking etc.:

Your mileage will undoubtedly vary, according to OS and MTA.

Last time I did work on this was about a year ago on Solaris 
2.6, with sendmail and postfix. In both cases using Net::SMTP 
was far faster. IIRC, with postfix there is no forking cost at all, 
as its daemon does not fork on connect (it uses a select() loop 
instead). Talking on the SMTP port is actually Wietse Venema's 
recommended method for fastest injection into the postfix queue.

It also has the advantage over other methods that it is totally 
MTA independent.
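The Net::SMTP version is only a few lines; the mailhost and addresses are placeholders:

```perl
use Net::SMTP;

# Talk SMTP to the local MTA directly -- no fork, MTA-independent.
my $smtp = Net::SMTP->new('localhost', Timeout => 30)
    or die "can't reach SMTP server: $!";
$smtp->mail('[EMAIL PROTECTED]');          # envelope sender
$smtp->to('[EMAIL PROTECTED]');            # envelope recipient
$smtp->data();
$smtp->datasend("To: [EMAIL PROTECTED]\n");
$smtp->datasend("Subject: test\n");
$smtp->datasend("\n");
$smtp->datasend("message body\n");
$smtp->dataend();
$smtp->quit;
```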

andrew



RE: open(FH,'|qmail-inject') fails

2000-09-08 Thread Shane Adams





Another approach is to write the email directly into the queue. I've used
this approach and it's very fast. After you write your email to the qmail
queue, you write a value of 1 to a named pipe that qmail reads off of. This
causes a qmail process (there are like 20 different ones and I forget which
is which - check the docs) to wake up and drain the queue.

If you want any more speed than that, you have to either install ram disks or seriously write your own mta. We installed ram disks =)

Shane



-Original Message-
From: Andrew Dunstan [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 08, 2000 9:52 PM
To: Modperl
Subject: Re: open(FH,'|qmail-inject') fails









Re: Eval block error trapping bug????

2000-09-08 Thread Eric L. Brine

 Under mod_perl, the die() within the eval block causes the
 program to really die.

Does your program (maybe CGI.pm or something used by CGI.pm?) set
$SIG{'__DIE__'}? IIRC, $SIG{'__DIE__'} has precedence over eval{}, something
many consider to be a bug.

If so, I'd try:
  eval {
local $SIG{'__DIE__'}; # undefine $SIG{'__DIE__'} in this block
...normal code...
  };
  if ($@) { ...normal code... }

ELB



Core dumping

2000-09-08 Thread Shane Adams





Hello -


I am experiencing a situation where apache core dumps. We are using HTML-Mason. The relevant revision numbers are:


Apache_1.3.12
mod_perl-1.24
perl-5.6.0
HTML-Mason .87
redhat 6.1 (no patches)


Our apache server is built in 2 flavors, one that uses Mason, another that doesn't but uses mod_perl.


httpd-mason -l reveals
Compiled-in modules:
 http_core.c
 mod_log_config.c
 mod_mime.c
 mod_rewrite.c
 mod_access.c
 mod_setenvif.c
 mod_perl.c


whereas httpd-soap -l (straight mod_perl, but it is used to answer SOAP requests via SOAP.pm) reveals
http_core.c
 mod_log_config.c
 mod_mime.c
 mod_rewrite.c
 mod_access.c
 mod_setenvif.c
 mod_perl.c


Running a barebones component (nothing tricky, just loads a normal html component) under Mason causes apache to core dump after a few dozen requests. A stack trace reveals:

Reading symbols from /lib/libnss_nis.so.2...done.
#0 0x8143b34 in Perl_pp_entersub ()
(gdb) where
#0 0x8143b34 in Perl_pp_entersub ()
#1 0x813aeda in Perl_runops_debug ()
#2 0x80e4b2f in perl_call_sv ()
#3 0x80e4780 in perl_call_sv ()
#4 0x8075aac in perl_call_handler ()
#5 0x8074fd8 in perl_run_stacked_handlers ()
#6 0x80726a8 in perl_handler ()
#7 0x80a0913 in ap_invoke_handler ()
#8 0x80b3f29 in ap_some_auth_required ()
#9 0x80b3f8c in ap_process_request ()
#10 0x80ab82e in ap_child_terminate ()
#11 0x80ab9bc in ap_child_terminate ()
#12 0x80abb19 in ap_child_terminate ()
#13 0x80ac146 in ap_child_terminate ()
#14 0x80ac8d3 in main ()
#15 0x400d31eb in __libc_start_main (main=0x80ac58c <main>, argc=2,
 argv=0xb944, init=0x806286c <_init>, fini=0x81a998c <_fini>,
 rtld_fini=0x4000a610 <_dl_fini>, stack_end=0xb93c)
 at ../sysdeps/generic/libc-start.c:90
(gdb)


I've tried building perl and apache and mod_perl with debugging turned on. I can't seem to get more out of the core dump than this.

My point in listing the 2 flavors of our apache configurations is that httpd-soap does *not* core whereas the mason server does.

Normally I'd just try upgrading but I've hit the latest releases of each piece it seems. 


I don't know if this will shed any details as to what the problem is.


Any help is appreciated.


Shane