Re: libapreq: could not create/open temp file

2002-06-08 Thread Ask Bjoern Hansen

On Sat, 8 Jun 2002, Stas Bekman wrote:

  Has anybody already seen this error ???
[...]
  [libapreq] could not create/open temp file

sounds like something is running out of filehandles; or a temp file
system of some kind running out of space.
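Two quick checks for those suspects (the paths here are just the usual defaults; adjust for your setup):

```shell
# Check the per-process open-file limit, and free space on the
# filesystem holding the temp directory (apreq spools to /tmp by default).
ulimit -n
df -h /tmp
```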

Try applying the patch Stas sent and see if it changes anything.
 
 - ask

-- 
ask bjoern hansen, http://ask.netcetera.dk/ !try; do();




Re: [Templates] Re: Separating Aspects (Re: separating C from V in MVC)

2002-06-08 Thread Tony Bowden

On Sat, Jun 08, 2002 at 08:51:48AM +0800, Gunther Birznieks wrote:
 I'm a huge fan of passing Date::Simple objects, which can then take a
 strftime format string:
  [% date.format('%d %b %y') %]
  [% date.format('%Y-%m-%d') %]
 
 And the latter does not require a programmer?

Of course not. It just requires someone who can read a simple chart of
strftime formats. I've never worked with a designer who hasn't been able
to understand the DESCRIPTION section of, for example,
  http://unixhelp.ed.ac.uk/CGI/man-cgi?strftime+3

It wouldn't be difficult to make up a more designer friendly version
of it if it was really needed.
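The formats in question are plain strftime strings, so anyone can try one out before putting it in a template. A quick sketch (using the epoch via gmtime for a stable example; localtime works the same way):

```perl
use POSIX qw(strftime);

my @when = gmtime(0);                      # 1 Jan 1970 00:00:00 UTC
print strftime('%d %b %y', @when), "\n";   # e.g. "01 Jan 70"
print strftime('%Y-%m-%d', @when), "\n";   # "1970-01-01"
```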

Tony





Re: [Templates] Re: Separating Aspects (Re: separating C from V in MVC)

2002-06-08 Thread Tony Bowden

On Fri, Jun 07, 2002 at 11:22:13PM -0400, Jesse Erlbaum wrote:
 I'm a huge fan of passing Date::Simple objects, which can then take a
 strftime format string:
   [% date.format('%d %b %y') %]
   [% date.format('%Y-%m-%d') %]
 Out of curiosity, at what point of flexibility do you feel it is OK for your
 designers to go back to the programmers?  In your book, where does a bit of
 flexibility cross the line?

I think that will depend both on the organisation, and the case in hand.

Most organisations I've worked with have much stricter change control on
work by programmers than designers. As such, things which are purely
presentational and can be achieved fairly trivially by a designer with a
small amount of effort (such as the date example above), compared to a
request/develop/test/review/integrate code cycle, should always be in
designers' control, where possible. 

In this case, even if you haven't got that level of separation, or of
process, I'd still say that it should go to the template (how a date
_looks_, rather than what the date _is_ is always a display issue IMO).
But there are many examples where things aren't so clear cut.

I firmly believe in laziness, so my general rule of thumb, even when
working on personal projects where I'm doing the code and the templates
myself, is to imagine that there is a team of designers, and that I'm
the only programmer. Then I imagine that this team of designers report
to a committee of PHBs who keep changing their mind on how everything
should look. Then I try to ensure that I can minimise the amount of
things that the designers end up having to come back to me for!

Maybe I'm lucky with the designers I've worked with - but in general
I've found that designers are happy to learn things like basic strftime
formatting, or the basic control structures etc of something like Template
Toolkit if it means they can get their job done on their own terms,
without having to keep coming back to programmers and asking for simple
changes that'll probably go on a list of things to get done sometime.

I think this crosses the line when it goes against the laziness
principle in the other direction. It's just as bad if a business
decision means having to change 50 templates as if it means having to
change 50 perl modules. 

TT is fairly unique in my experience of templating systems in that it
allows you to fairly simply have many levels of abstraction in the
templates too. I would always set up a macro for something like
page_title - even if at this stage that just translates into an H1 of
a certain class which can be styled with JavaScript. Then, if you want
to do something with it that can't really be done with JavaScript,
you're still only needing to change one template. You can build up quite
a library of abstractions fairly quickly.
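For instance, the page_title macro described above might look something like this in TT (a sketch; the class name is invented):

```
[% MACRO page_title(text) BLOCK %]
<h1 class="page-title">[% text %]</h1>
[% END %]

[%# later, in any page template: %]
[% page_title('Search Results') %]
```

If page titles later need something a stylesheet can't do, only the macro changes, not every page.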

Basically this is all a long winded version of saying it depends :)

I'm happy to pontificate on various scenarios if you want to throw any
out, though!

Tony




AuthenNTLM, IE, KeepAlives

2002-06-08 Thread Harnish, Joe





I am running an Apache server using AuthenNTLM for authentication. This is because we are migrating an old NT site to Linux. 

The issue that I am having is that when I have KeepAlive turned on my scripts won't get the params from the URI. But this only happens in Internet Explorer. Opera works fine. 

I have played around with KeepAliveTimeout and MaxKeepAliveRequests trying to get it to work, which puts me into a catch-22. If I put low numbers in them, the scripts work fine, but the pop-up login window keeps popping up. And if I put in higher numbers, the pop-up windows go away but the scripts only work the first time.

Thanks for any help


Joe





Re: libapreq: could not create/open temp file

2002-06-08 Thread Joe Schaefer


 Jean-Denis Girard wrote:
  
  Everything worked flawlessly, the web site was still working, but after a 
  few days, visitors started to complain that uploads didn't work. 
  mod_perl dies with the message:
  [libapreq] could not create/open temp file
  What is really funny, is that it works after rebooting the system, and 
  the error shows up later.

Where are the temp files being created, on a RAM disk or something?
pre 1.0 apreq's had a bug that caused filehandles to leak (there's
a refcount problem in the IO parts of perl's ExtUtils/typemap), which
would eventually fill up /tmp (this is the default location of your 
spooled apreq files) until apache was restarted.


  I upgraded libapreq to 1.0, which didn't solve the problem. Next step 
  will be to upgrade APache, mod_perl, etc. but I would like some help.

In 1.0, we no longer use perl's ExtUtils/typemap for this, which should
take care of the aforementioned leak.  One other possible candidate is 
that your apache server is segfaulting after the file is received, which 
prevents apache from cleaning up the temp files.  If so, you should see
a bunch of apreq files filling up your spool directory.
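A quick way to check for that second case (the /tmp location is the default mentioned above, and the file-name pattern is a guess; check where your apreq actually spools):

```shell
# Look for orphaned apreq upload spool files in the default location.
ls -l /tmp | grep -i apreq || echo "no leftover apreq spool files"
```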

-- 
Joe Schaefer
[EMAIL PROTECTED]




Exceptional MVC handling (was something about MVC soup)

2002-06-08 Thread Jeff


 From: Perrin Harkins [mailto:[EMAIL PROTECTED]] 
 Sent: 07 June 2002 18:05

 For example, if you have a form for registering as a user which has 
 multiple fields, you want to be able to tell them everything that was 
 wrong with their input (zip code invalid, phone number invalid, etc.),
 not just the first thing you encountered.

 Putting that into a model object is awkward, since coding your 
 constructor or setter methods to keep going after they've found the 
 first error feels wrong.  You can write a special validate_input() 
 method for it which takes all the input and checks it at once, returning 
 a list of errors.  You could also just punt and push this out to the 
 controller.  (Not very pure but simple to implement.)  Either way 
 you can use one of the convenient form validation packages on CPAN.

When collecting multiple values for a single model, there generally is
both individual field validation, and an overall validation check. I
work in financial applications - often, there are inter-field
dependencies where validity cannot be determined until some decision has
been made, or some other detail gathered. As a general rule, there will
_always_ be a point in time after all details have been gathered, where
the Model must ask itself, 'Am I complete and integral?'.

What I am saying, is that you can't get around it! At some point in
time, you have to ask 'Are you OK, honey?'

There are a number of obvious strategies for dealing with this.

Some folks like to pass it all to the constructor, and get it over with
during the pangs of birth. Sometimes when this happens, instantiation
will fail, and you end up hanging your error information at a class
level ala DBD/DBI. Others don't mind instantiating an invalid Model, to
hold details of what went wrong (argh!). Some smarties pass an Exception
object into the instantiation, that collects all the exceptional detail
on the way, leaving the Controller at least with an instance handle on
what went badly wrong.

An alternative approach is to instantiate a mini-me Model without much
going on, and then to iterate through assigning properties or calling
methods and reaping any exceptions along the way into a collection for
later user castigation. Such mini-me Modellers must always remember to
ask the 'Was that good for you, Honey?' question at the end, or they end
up in purgatory, and have to bring home flowers!
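A hedged sketch of that second, setter-by-setter strategy (every class, method, and field name here is invented for illustration):

```perl
package My::Model;

# Build an empty 'mini-me' Model, then assign fields one at a time,
# reaping each exception into a collection instead of dying on the first.
sub new { bless { errors => [] }, shift }

sub set {
    my ($self, $field, $value) = @_;
    eval { $self->_check($field, $value); 1 }
        or push @{ $self->{errors} }, $@;
    $self->{$field} = $value unless $@;
    return $self;
}

sub _check {
    my ($self, $field, $value) = @_;
    die "$field is required\n" unless defined $value && length $value;
}

# The final 'Am I complete and integral?' question: cross-field rules
# can only be decided once all the details have been gathered.
sub is_valid {
    my $self = shift;
    push @{ $self->{errors} }, "start_date must precede end_date\n"
        if $self->{start_date} && $self->{end_date}
        && $self->{start_date} gt $self->{end_date};
    return !@{ $self->{errors} };
}

sub errors { @{ $_[0]{errors} } }

package main;

my $m = My::Model->new;
$m->set(zip => '')->set(phone => '555-1234');
print $m->is_valid ? "ok\n" : join('', $m->errors);
```

The Controller then reads errors() once at the end, rather than catching exceptions field by field.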

Another interesting introspection is: Should I let the default View deal
with the errors, or should I have an Error View?

And the answer is... well, I guess it depends on your bent. 

For finicky field errors, it feels natural that the default View should
indicate problems where they occurred (ala stars next to the required
fields, or little digs at the users, next to the offending erroneous
zone).

For major whamoo! if ( not defined $universe ) big bangs, it may be
better to redirect to a safer plane. If this is the case, the Controller
must be able to grok the exception and redirect as appropriate.

All in all, I think I prefer a Controller something like this:

  my $Exception = My::Exception->new();
  my $params = cleanupParams( $r );
  my $Model = My::Model->new( %$params, exception => $Exception );

  my $View;
  if ( $Exception->had_Fatal() or not defined $Model ) {
    $View = My::ErrorView->new( exception => $Exception, model => $Model );
  } else {
    $View = My::View->new( exception => $Exception, model => $Model );
  }
  print $View if $View;

  print STDERR $Exception if $Exception->had_Any();


£0.04,

Regards
Jeff





RE: [OT] MVC soup (was: separating C from V in MVC)

2002-06-08 Thread Bill Moseley

At 12:13 PM 06/08/02 +0100, Jeff wrote:
The responsibility of the Controller is to take all the supplied user
input, translate it into the correct format, and pass it to the Model,
and watch what happens. The Model will decide if the instruction can be
realised, or if the system should explode.

I'd like to ask a bit more specific question about this.  Really two
questions.  One about abstracting input, and, a bit mundane, building links
from data set in the model.

I've gone full circle on handling user input.  I used to try to abstract
CGI input data into some type of request object that was then passed onto
the models.  But then the code to create the request object ended up
needing to know too much about the model.

For example, say for a database query the controller can see that there's a
query parameter and thus knows to pass the request to the code that knows
how to query the database.  That code passes back a results object which
then the controller can look at to decide if it should display the results,
a no results page and/or the query form again.

Now, what happens is that features are added to the query code.  Let's say
we get a brilliant idea that search results should be shown a page at a
time (or did Amazon patent that?).  So now we want to pass in the query,
starting result, and the page size.

What I didn't like about this is I then had to adjust the so-called
controller code that decoded the user input for my request object to
include these new features.  But really that data was of only interest to
the model.  So a change in the model forced a change in the controller.

So now I just have been passing in an object which has a param() method
(which, lately I've been using a CGI object instead of an Apache::Request)
so the model can have full access to all the user input.  It bugs me a bit
because it feels like the model now has intimate access to the user input.

And for things like cron I just emulate the CGI environment.
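A minimal sketch of that pattern (the class name is invented): anything with a param() method will do, so a cron driver can hand the model a plain stand-in instead of a real CGI or Apache::Request object.

```perl
package Plain::Request;

# Duck-typed stand-in for CGI / Apache::Request: all the model needs
# is a param() method.
sub new   { my ($class, %p) = @_; bless { %p }, $class }
sub param { my ($self, $key) = @_; $self->{$key} }

package main;

# Under cron, fake the 'user input' directly:
my $req = Plain::Request->new(query => 'mod_perl', start => 10, pagesize => 20);
print $req->param('start'), "\n";   # prints 10
```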

So my question is: Is that a reasonable approach?

My second, reasonably unrelated question is this: I often need to make
links back to a page, such as a link for page next.  I like to build
links in the view, keeping the HTML out of the model if possible.  But for
something like a page next link that might contain a bunch of parameters
it would seem best to build href in the model that knows about all those
parameters.

Anyone have a good way of dealing with this?

Thanks,

P.S. and thanks for the discussion so far.  It's been very interesting.


-- 
Bill Moseley
mailto:[EMAIL PROTECTED]



RE: [OT] MVC soup (was: separating C from V in MVC)

2002-06-08 Thread Jeff


 From: Bill Moseley [mailto:[EMAIL PROTECTED]] 
 Sent: 08 June 2002 20:48

 I've gone full circle on handling user input.  I used to try to
 abstract CGI input data into some type of request object that was then
 passed onto the models.  But then the code to create the request object
 ended up needing to know too much about the model.

 For example, say for a database query the controller can see that
 there's a query parameter and thus knows to pass the request to the
 code that knows how to query the database.  That code passes back a
 results object which then the controller can look at to decide if it
 should display the results, a no results page and/or the query form
 again.

So in pseudo code-speak, how about something like:

# Note that I am ignoring Exceptions for the sake of dealing with the
# Controller / Model interaction question.

# $param is a ref to an Apache::Table that contains all the user submitted
# parameters from the request. The main job of cleanupParams() is to do
# things like URLDecode() etc, and marshal all the user input into a simple
# structure.
my $param = cleanupParams($r);

# Instantiate Model. Pass it ALL user parameters - Model can cherry pick only
# the ones it is interested in, and ignore the others. Adding new parameters
# in the preceding View that gave rise to this request makes no difference
# to the Controller - only the Model and View needed to change.
my $Model = My::Model->new( %$param );

# And which View should we instantiate? Well, you might choose one in the
# Controller, but I only do this if there was a major Model meltdown. For
# no result searches, the usual search View should be able to handle things
# with a nice message.



 Now, what happens is that features are added to the query code.  Let's
 say we get a brilliant idea that search results should be shown a page
 at a time (or did Amazon patent that?).  So now we want to pass in the
 query, starting result, and the page size.

As shown above, the Controller doesn't really care about any new
parameters, it passes them all, including new ones through transparently
to the model.

The model I like for paginated results is straight-forward. When the
Model is instantiated, it does NOT find a query_id field in the passed
parameters, so it assumes a brand new query, and returns the first N
results. A brand new, unique query_id is issued, and becomes a property
of the Model. In the paginated View, this query_id is inserted into a
hidden field (or cookied if you prefer). A session is created using the
query_id that contains all of the parameters that the Model considers
important. The paginated View contains First, Last, Next, Prev links
that just call the same URL with an action=next, last, prev etc.

When the Model is instantiated for a subsequent page, it sees a
query_id, loads all the query details in from the session storage, and
retrieves the appropriate set of records for the this-time-round View.
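A rough sketch of that query_id scheme (the class name, in-memory session hash, and canned result set are all invented; real code would use Apache::Session or similar for the session store):

```perl
package Paged::Query;
use Digest::MD5 qw(md5_hex);

my %session;    # stand-in for real session storage (e.g. Apache::Session)

sub new {
    my ($class, %param) = @_;
    my $self = bless {}, $class;
    if (my $id = $param{query_id}) {
        # Subsequent page: load the saved query back from the session.
        $self->{query}    = $session{$id};
        $self->{query_id} = $id;
    } else {
        # Brand new query: issue a unique id and remember the details.
        $self->{query}    = $param{query};
        $self->{query_id} = md5_hex($param{query} . time . $$);
        $session{ $self->{query_id} } = $param{query};
    }
    return $self;
}

sub query_id { $_[0]{query_id} }

# Return one page of results; the View puts query_id in a hidden field
# and links back with a new start offset for First/Last/Next/Prev.
sub page {
    my ($self, $start, $size) = @_;
    my @all = $self->run_query;
    return @all[$start .. $start + $size - 1];
}

# Canned results in place of a real search.
sub run_query { map { "result $_" } 1 .. 100 }

package main;

my $q     = Paged::Query->new(query => 'boots');
my @first = $q->page(0, 10);
my $again = Paged::Query->new(query_id => $q->query_id);
my @next  = $again->page(10, 10);
print "$first[0] ... $next[0]\n";   # prints "result 1 ... result 11"
```

The Controller stays oblivious: it passes query_id through like any other parameter, and only the Model knows it means "resume a saved query".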

 What I didn't like about this is I then had to adjust the so-called
 controller code that decoded the user input for my request object to
 include these new features.  But really that data was of only interest
 to the model.  So a change in the model forced a change in the
 controller.

I think that's covered above? 

 So now I just have been passing in an object which has a param() method
 (which, lately I've been using a CGI object instead of an
 Apache::Request) so the model can have full access to all the user
 input.  It bugs me a bit because it feels like the model now has
 intimate access to the user input.

I don't like this either, but probably need a concrete example of
exactly what Request properties you find it necessary to use in your
Model. The way I see it is that the Controller is interested in the gory
details of the Request object, after all it is a Web Controller, but the
Model should only be interested in the parameters. The Controller uses
the Request object context, and sometimes basic parameters to decide
which Model to instantiate, it doesn't care about Model parameter
requirements - the Model must validate itself.


 links back to a page, such as a link for page next.  I like to build
 links in the view, keeping the HTML out of the model if possible.  But
 for something like a page next link that might contain a bunch of
 parameters it would seem best to build href in the model that knows
 about all those parameters.

As described above, I like to use a session to store Model state over
multiple executions / pagination of a collection type Model.


Regards
Jeff





Weird headers under mod_perl

2002-06-08 Thread Dodger



Hi.

I've set up my system to move gradually over to mod_perl and been
clearing hurdles for several weeks now -- things like Apache::DBI cached
connections to mysql never timing out and eventually running mysql out
of connections, strange sudden bogging-down of the server, and so on,
and I've worked my way past them.

To implement this, I set up my server to treat scripts ending in .cgi
as normal cgi scripts, and to treat scripts ending in .mp as mod_perl
CGIs.

Now, however, I've hit a really annoying weirdness. I received reports
from several users that they suddenly couldn't login. After some
frustrating grilling of them (it's almost impossible to get useful
information out of a user -- it always starts with 'Why is it broke?!?!'
and helpful things like OS, browser, etc are like pulling teeth), I
found out that they seemed almost universally to be using Netscrape or
WebTV, with a Mozilla here and there and a single Opera. No IE users
reported an error, which is why it apparently took weeks for me to know
about this (I'd tested web design against multiple browsers but had no
reason to suspect that HTTP header interpretation would work
differently).

Well, it seems that there are strange headers being passed out with
mod_perl, and mixed into them come carriage returns.

This is, of course, bad. Technically, IE is parsing the headers wrong,
because it's sweeping past the CRLFs like there's nothing wrong with
them. NS and other browsers are parsing them correctly, and as a
result, the Cookie information I'm setting up comes out in the body of
the response, not the headers.

I'm not sure what to do about this, or why it's happening.

Below, I am including the headers both from the .mp mod_perl and the
.cgi standard CGI. There is NO difference between these -- as a matter
of fact, they even share the same inode, as rather than copying the
file I simply hard linked it.

I've used C-style comments in this below. Such comments are not part of
the headers, but are included to provide a clear delimiter between the
two sets of headers and to add necessary comments. The
"2\n\n\n\n15f\n" part is particularly weird, but doesn't do anything
because of the extra CRLF after the Client-Response-Num header.

/* response headers from mod_perl -- sessionID has been altered for
   security purposes */
Client-Date: Sat, 08 Jun 2002 21:02:11 GMT
Client-Response-Num: 1

Cookie: session=d1af22bd5dd71c2585be72b86e119212; domain=.gothic-classifieds.com; path=/; expires=Sat, 08-Jun-2002 22:02:11 GMT<br>HTTP/1.1 200 OK
Date: Sat, 08 Jun 2002 21:02:11 GMT
Server: Apache/1.3.19 (Unix) mod_perl/1.25
Set-Cookie: session=d1af22bd5dd71c2585be72b86e119212; domain=.gothic-classifieds.com; path=/; expires=Sat, 08-Jun-2002 22:02:11 GMT
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html; charset=ISO-8859-1

2



15f

/* response headers from standard CGI */
Connection: close
Date: Sat, 08 Jun 2002 21:02:54 GMT
Server: Apache/1.3.19 (Unix) mod_perl/1.25
Content-Type: text/html; charset=ISO-8859-1
Client-Date: Sat, 08 Jun 2002 21:02:55 GMT
Client-Response-Num: 1
Client-Transfer-Encoding: chunked
Cookie: session=d1af22bd5dd71c2585be72b86e119212; domain=.gothic-classifieds.com; path=/; expires=Sat, 08-Jun-2002 22:02:55 GMT<br>Link: <css/gc.css>; rel="stylesheet"
Set-Cookie: session=d1af22bd5dd71c2585be72b86e119212; domain=.gothic-classifieds.com; path=/; expires=Sat, 08-Jun-2002 22:02:55 GMT
Title: GC Login Successful: Redirecting

/* end examples */


Re: persistent Mail::ImapClient and webmail

2002-06-08 Thread Medi Montaseri


I was wondering why you implemented your own vs using any of the following:
- Twig: http://twig.screwdriver.net
- Open WebMail: http://www.openwebmail.org
- WING: http://users.ox.ac.uk/~mbeattie/wing/
- IMP: http://www.horde.org/imp/

I am asking because I'm also interested in such an application (ie a
webmail app). Did you find something wrong with the above list, etc...?

I tried WING; it's PostgreSQL and Perl based, and very scalable, but I
found the installation a hell, as all complex systems would be.

Thanks
Joe Breeden wrote:

 We implemented a webmail front end with Mail::IMAPClient and
 Mail::IMAPClient::BodyStructure without persistent connections and it
 seems to work fine with several hundred connections. We just opened up
 a connection to the server, do what we want, then disconnect on each
 request. I'm sure through persistent objectification we could have
 reduced the load on the IMAP server and sped up the retrieval process,
 but what we did worked fine.

 We use qmail/maildrop/courier-imap for the mail storage; see
 http://howtos.eoutfitters.net/email for destructions on how to config
 that setup. I would share the code we used for the IMAP client, but my
 company does sell that as a service so I think they might get mad if I
 gave away our product.

 I hope this helps.
 Joe
> -Original Message-
> From: Richard Clarke [mailto:[EMAIL PROTECTED]]
> Sent: Friday, June 07, 2002 9:28 AM
> To: [EMAIL PROTECTED]
> Subject: persistent Mail::ImapClient and webmail
>
>
> List,
>     I have the task in my hands of creating a web mail application.
> Initial thoughts lead me to think I would use an external popper to
> pop mail and parse it into a database for retrieval by the modperl
> application. The only problem here is that I must provide the
> implementation of the mail storage and folder management etc.
> Something I would rather not spend my time on. So my thoughts turned
> to IMAP. Retrieve the mail from an IMAP server. IMAP itself supports
> most mail management methods such as move message, delete message,
> save draft, mark seen etc. So a few lines of perl later I had a
> PerlChildInitHandler which connected to the IMAP server and saved the
> connection object. I wanted to know if people saw any immediate
> problems with this solution and also if anyone could explain the
> following peculiarities.
>
> If I store a single imap object in $imap, e.g.
>
>     my $imap;
>     sub connect {
>         my ($self, $centro_id) = @_;
>         print STDERR $imap, "\n";
>         unless (defined $imap) {
>             print STDERR "Connecting to IMAP for $centro_id\n";
>             $imap = Mail::IMAPClient->new(
>                 Server   => 'cyrus.andrew.cmu.edu',
>                 User     => 'anonymous',
>                 Password => '[EMAIL PROTECTED]',
>             );
>         }
>         return $imap;
>     }
>
> This seems to successfully save the connection object. However if I
> attempt to store the object in a hash, e.g.
>
>     my %imap_cache;
>     sub connect {
>         my ($self, $centro_id) = @_;
>         print STDERR $imap, "\n";
>         unless (exists $imap_cache{$centro_id}) {
>             print STDERR "Connecting to IMAP for $centro_id\n";
>             $imap_cache{$centro_id} = Mail::IMAPClient->new(
>                 Server   => 'cyrus.andrew.cmu.edu',
>                 User     => 'anonymous',
>                 Password => '[EMAIL PROTECTED]',
>             );
>         }
>         return $imap_cache{$centro_id};
>     }
>
> I seem to have intermittent success in retrieving an already connected
> object. Using the first example, as far as I can tell the object
> remains available flawlessly. But storing the object in the hash
> doesn't. Am I making a mistake here?
>
> Another question sprung to mind: should I think about using
> Persistent::Base or some similar approach to store the IMAP objects?
> Or should I lean towards Randal's and others' suggestions of having a
> separate (possibly SOAP or LWP::Daemon or even apache server in single
> user mode) server specifically designed for performing IMAP requests?
>
> Finally, does anyone with experience in having to write webmail
> interfaces see any problems with using the functionality provided by
> IMAP.
>
> Richard
>
> p.s. Yes quite obviously if I have 100 children then I'll be connected
> to the IMAP server 100 times per user, hence possibly the need to have
> either a dedicated daemon connected to the IMAP server once or some
> successful way of sharing IMAP objects between children.
>

--
-
Medi Montaseri [EMAIL PROTECTED]
Unix Distributed Systems Engineer HTTP://www.CyberShell.com
CyberShell Engineering
-



Apache/mod_perl still not ready for OS X?

2002-06-08 Thread Bas A . Schulte

Hi,

I've been postponing moving my Linux Apache/mod_perl development to my 
highly appreciated iBook running Mac OS X until now, due to all the 
required tweaks. I would imagine things have been sorted out by now, so I 
downloaded apache 1.3.24 to give it a go.

The system I'm working on has a self-contained build-script which 
fetches everything from CVS, executes the right build commands etc. 
without me having to think. This has a built-in step to compile mod_perl 
statically (i.e. not DSO as i don't want that) into Apache.

Now I find out that the part that compiles Apache bails out with 
this:

env LD_RUN_PATH=/opt/ttgp/dev/applications/perl/lib/5.6.1/darwin/CORE cc 
-c -I.. -I/opt/ttgp/dev/applications/perl/lib/5.6.1/darwin/CORE 
-I../os/unix -I../include   -DDARWIN -DMOD_PERL -DUSE_PERL_SSI -pipe 
-fno-common -DHAS_TELLDIR_PROTOTYPE -fno-strict-aliasing -DUSE_HSREGEX 
-DNO_DL_NEEDED -pipe -fno-common -DHAS_TELLDIR_PROTOTYPE 
-fno-strict-aliasing `../apaci` alloc.c
alloc.c: In function `spawn_child_core':
alloc.c:2291: `STDOUT_FILENO' undeclared (first use in this function)
alloc.c:2291: (Each undeclared identifier is reported only once
alloc.c:2291: for each function it appears in.)
alloc.c:2297: `STDIN_FILENO' undeclared (first use in this function)
alloc.c:2303: `STDERR_FILENO' undeclared (first use in this function)
make[4]: *** [alloc.o] Error 1
make[3]: *** [subdirs] Error 1
make[2]: *** [build-std] Error 2
make[1]: *** [build] Error 2
make: *** [apaci_httpd] Error 2

I almost find this appalling. It can't find something as basic as 
STDOUT_FILENO (which is in /usr/include/unistd.h)...

So I went back to Google to find solutions and the first hit sends me to 
stepwise.com to a tutorial that tells me in detail what commands to type 
in. Great, but it forces me to use DSO which I don't want!

So, what is missing in the Apache configuration part that will make this 
work in a sensible way?

Regards,

Bas.

ps. I know I'm a programmer that loves to tweak but when it comes to 
something as basic as this I'm just a user that's looking for the any 
key;)




Re: persistent Mail::ImapClient and webmail

2002-06-08 Thread Ask Bjoern Hansen

On Fri, 7 Jun 2002, Richard Clarke wrote:

 p.s. Yes quite obviously if I have 100 children then I'll be connected to
 the IMAP server 100 times per user, hence possibly the need to have either
 a dedicated daemon connected to the IMAP server once or some successful way
 of sharing IMAP objects between children.

the trivial way would be to have the mod_perl processes login (once
each) as some kind of super user and then access the folders as
[username]/INBOX etc.


 - ask

-- 
ask bjoern hansen, http://ask.netcetera.dk/   !try; do();




modperl 2

2002-06-08 Thread Jaberwocky



I figured this would be the place where someone would know...

Does anyone know of any modperl 2 resources? Mailing lists, stuff like
that?

I know it's in dev but I'm having serious problems finding anything.
Thanks for any help