Re: ping...

2004-12-04 Thread Perrin Harkins
On Sat, 2004-12-04 at 14:35 -0600, [EMAIL PROTECTED] wrote:
 I subscribed to this list a few months ago and haven't seen any activity.  Is
 anyone on this list?

There is no one here.  Please hand over your camera and get back in your
car.



Re: ENGINE::MVC and p5ee

2003-04-03 Thread Perrin Harkins
Aaron Trevena wrote:
I was also thinking of putting together a diagram / chart of different
possible architectures suitable for an enterprise web application...
stuff like Tangram / Pixie / Class::DBI providing object persistence

engines like OpenFrame and ENGINE::MVC powering the core of the
application server
that kind of stuff
Good idea.  Many people get lost in the expanse of CPAN and end up not 
realizing how much value they could be getting from it.  I'm giving a 
talk at the O'Reilly Perl conference this year on O/R database tools 
like Tangram, Class::DBI, etc.  Tools for structuring applications, like 
CGI::Application, OpenFrame, etc. have not received a solid comparison yet.

- Perrin



Re: asynchronous execution, was Re: implementing a set of queue-processing servers

2002-11-26 Thread Perrin Harkins
Bas A.Schulte wrote:

none of
them seemed to be stable/fast under heavy load even though I would have 
preferred that as it would allow me to do something to handle 
data-sharing between children via the parent, which always seems to be an 
issue in Apache/mod_perl.

What are you trying to share?  In addition to Rob's suggestion of using 
a database table (usually the best for important data or clustered 
machines) there are other approaches like IPC::MM and MLDBM::Sync.

Basically, I need some way to 
coordinate the children so each child can find out what the other 
children are doing.

Either of the approaches I just mentioned would be fine for this.
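
Roughly what that might look like with MLDBM::Sync (an untested sketch;
the file name and the status values are made up for illustration):

  use Fcntl qw(:DEFAULT);
  use MLDBM::Sync;
  use MLDBM qw(DB_File Storable);    # DB_File storage, Storable serialization

  # every child ties the same file, so they all see the same data
  tie my %status, 'MLDBM::Sync', '/tmp/child_status.dbm', O_CREAT|O_RDWR, 0640
      or die "can't tie status file: $!";

  # a child records what it is doing under its own pid
  (tied %status)->Lock;
  $status{$$} = { task => 'delivering', since => time() };
  (tied %status)->UnLock;

  # any child can check on the others
  (tied %status)->ReadLock;
  my @busy = grep { $status{$_}{task} eq 'delivering' } keys %status;
  (tied %status)->UnLock;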


BTW: I've been reading up a lot on J2EE lately and it appears more and 
more that a J2EE app server could quite nicely provide for my needs 
(despite all shortcomings and issues of course).

What is it that you think you'd be getting that you don't have now?

- Perrin




Re: asynchronous execution, was Re: implementing a set of queue-processing servers

2002-11-26 Thread Perrin Harkins
Nigel Hamilton wrote:

	I need to fork a lot of processes per request ... the memory cost 
of forking an apache child is too high though.
	
	So I've written my own mini webserver in Perl

It doesn't seem like this would help much.  The thing that makes 
mod_perl processes big is Perl.  If you run the same code in both they 
should have a similar size.

- Perrin



Re: production mail server in Perl, was Re: asynchronous execution

2002-11-26 Thread Perrin Harkins
Bas A.Schulte wrote:

What I like even more is that it's built upon a generic server framework 
(Avalon/Phoenix) that is also used by totally different types of servers 
(e.g. Tomcat).

I don't think Tomcat uses Avalon.


Would be great to have a good generic server framework upon which other 
types of servers could be built.

There are several for Perl.  I think the best one will be mod_perl 2, 
which will allow you to use the Apache framework but replace protocols, 
process models, etc. as necessary.

- Perrin



Re: production mail server in Perl, was Re: asynchronous execution

2002-11-26 Thread Perrin Harkins
Stephen Adkins wrote:

Similar to my question earlier about an all-Perl HTTP server,
I have asked myself whether there is a production-quality all-Perl
mail server.  This would allow you to write code in Perl to 
process mail messages without forking a Perl interpreter per
message.

I think you'd be better off using a well-known and reliable mail server 
and simply solving the forking problem with something like 
PersistentPerl or Matt's PPerl (or a small stub that sends requests to 
mod_perl).

P.S. There is a mail server written entirely in Java, called James,
 hosted by Apache.


I don't think it's very popular.  Most people use the JavaMail API, 
which is part of J2EE.  It's just an API, and does not require that the 
server be written in Java.

- Perrin



Re: asynchronous execution, was Re: implementing a set of queue-processing servers

2002-11-26 Thread Perrin Harkins
Bas A.Schulte wrote:
 Quite odd. I read the performance thread that's on the P5EE page which
 showed that DBI (with MySQL underneath) was very fast, came in 2nd.
 Anyone care to elaborate why this is? After all, shared-memory is a
 thing in RAM, why isn't that faster?

I have an article I'm working on which explains all of this, but the 
short explanation is that they work by serializing the entire memory 
structure with Storable and stuffing it into a shared memory segment, 
and even reading it requires loading and de-serializing the whole thing. 
IPC::MM and the file-based ones are much more granular.  Also, file 
systems are very fast on modern OSes because of efficient VM systems 
that buffer files in memory.
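
To make that concrete, this is roughly what those shared-memory modules
do internally (a sketch using IPC::ShareLite for illustration; the key
is arbitrary, and this is not their actual source):

  use IPC::ShareLite;
  use Storable qw(freeze thaw);

  my $share = IPC::ShareLite->new(
      -key     => 1971,       # arbitrary shared memory key for the example
      -create  => 'yes',
      -destroy => 'no',
  ) or die $!;

  # writing means serializing the *whole* structure into the segment...
  my %data = ( sessions => { alice => 1, bob => 2 } );
  $share->store( freeze(\%data) );

  # ...and reading even one key means thawing the whole thing back
  my $copy = thaw( $share->fetch );
  print $copy->{sessions}{alice};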

 I'm not saying I want entity beans here ;) It's just that I've been
 doing perl to pay for bills and stuff the past few years and see a lot
 of people having some (possibly perceived?) need for something missing
 in perl.

It may be that they just want someone to tell them how they should do
things.  J2EE does provide that to a certain degree.

 If I read your mail, you mention some solutions/directions for some
 problems I'm dealing with, but that's just my issue (I think; it's just
 coming to me): we have a lot of raw metal but we do have to do a lot
 of welding and fitting before we can solve our business problems.
 
 That is basically the point.

I don't think it's nearly that bad.  After my eToys article got 
published, I got several e-mails from people saying something like "we 
want to do this, but our boss says we have to buy something because of 
all the INFRASTRUCTURE code we would have to write."

Infrastructure?  What infrastructure?  The only stuff we wrote that was 
really independent of our application logic were things like a logging 
class and a singleton class, which can now be had on CPAN.  We wrote our 
own cache system, but that's because it worked in a very specific way 
that the available tools didn't handle.  I think I could do that with 
CPAN stuff now too.

 To illustrate that, I'll try to give a real-world example

Thanks, it's much easier to talk about specific situations.

 To deliver these messages, I send them off to another server (using my
 own invented pseudo-RMI to call a method on that server).

I would use HTTP for that, because I'm too lazy to write the RMI code 
myself.
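
Something like this is usually all it takes (a sketch; the host, path
and form fields are made up):

  use LWP::UserAgent;

  my $ua = LWP::UserAgent->new( timeout => 30 );
  my $id = 42;    # whatever identifies the message to deliver

  # the "remote method call" is just a POST to a handler on the other box
  my $response = $ua->post(
      'http://delivery-server/deliver',
      { message_id => $id },
  );
  die "delivery request failed: " . $response->status_line
      unless $response->is_success;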

 1. The server that does the delivery has plenty of threads (er, an
 Apache/mod_perl child) so I hope I have enough of them to deliver the
 messages at the rate the backend server generates them: one child might
 take up to 5 seconds to deliver the message but there are plenty of children.

 Not good. I've seen how this works and miserably fails when a delivery
 mechanism barfs.

If they were so quick to process that you could do it that way, I would
have just handled them in the original mod_perl server with a
cleanup_handler.  Obviously they are not, so that's not an option here.

 2. Same as 1 but I never allow one delivery mechanism to use all my
 Apache/mod_perl children by adding some form of IPC (darned, need to
 solve my data sharing issues first!)

I think they are already solved if you look at the modules I suggested.

 so the children check what the
 others are currently doing: if a request comes in for a particular
 delivery mechanism, I check if we're already doing N delivery attempts
 and drop the request somewhere (database/file, whatever) if not. I have
 a daemon running that monitors that queue.

I would structure it like this:
- Original server takes request, and writes it to a database table that
holds the queue.
- A cron job checks the queue for messages, reads the status from
MLDBM::Sync to see if we have free processes, and passes the request to
mod_perl if we do.  (Note that this could also be done with something
like PersistentPerl instead.)  If there are no free processes, they are
left on the queue.  A rough sketch of that cron job is below.
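
Untested sketch of the cron job (the table, column, file and URL names
are all made up):

  #!/usr/bin/perl -w
  use strict;
  use Fcntl qw(:DEFAULT);
  use MLDBM::Sync;
  use MLDBM qw(DB_File Storable);
  use DBI;
  use LWP::UserAgent;

  my $MAX_WORKERS = 4;

  # how many workers are already busy, according to the shared status file
  tie my %status, 'MLDBM::Sync', '/tmp/worker_status.dbm', O_CREAT|O_RDWR, 0640
      or die "can't tie status file: $!";
  my $busy = grep { $status{$_} eq 'delivering' } keys %status;
  exit 0 if $busy >= $MAX_WORKERS;

  # pull some pending messages off the queue table
  my $dbh   = DBI->connect('dbi:mysql:queue', 'user', 'pass', { RaiseError => 1 });
  my $limit = $MAX_WORKERS - $busy;
  my $rows  = $dbh->selectall_arrayref(
      "SELECT id FROM message_queue WHERE status = 'pending' LIMIT $limit"
  );

  # hand each one to mod_perl over HTTP; anything that fails stays queued
  my $ua = LWP::UserAgent->new;
  for my $row (@$rows) {
      my $res = $ua->post('http://localhost/deliver', { id => $row->[0] });
      warn "delivery request for $row->[0] failed: " . $res->status_line
          unless $res->is_success;
  }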

 That daemon gets complicated quickly as it also has to throttle delivery
 attempts

My approach only puts that logic in the cron job.

 I need some form of persistent storage (with locking)

The relational database.  Or MLDBM::Sync if you prefer.

 what do
 I do when the delivery mechanism has failed for 6 hours and I have 12000
 messages in the queue *and* make sure current messages get sent in time?

I don't know, that's an application-specific choice.  Of course JMS
doesn't know either.

 3. I install qmail on the various servers, and use that to push messages
 around. This'll take me a week or so (hopefully) to get it running
 reliably in production

One of the major selling points for qmail is easier setup.  You could
use pretty much any mail server though if you have more experience with
something else.  I just like qmail because it's fast.

 Later on, I
 realise that for each message, a full-blown process is forked *per
 message*: load up perl, compile perl code, etc.

I described how to avoid this in another message: use PersistentPerl or
equivalent, or pass the requests to mod_perl.

Re: asynchronous execution, was Re: implementing a set of queue-processing servers

2002-11-20 Thread Perrin Harkins
Aaron Johnson wrote:


This model has eased my testing as well, since I can run the script
completely external to the web server, and I can run it through a debugger if
needed.



You realize that you can run mod_perl in the debugger too, right?  I use 
the profiler and debugger with mod_perl frequently.

- Perrin



Re: asynchronous execution, was Re: implementing a set of queue-processing servers

2002-11-20 Thread Perrin Harkins
Aaron Johnson wrote:


I know you _can_ , but I don't find it convenient.



For me it's pretty much the same as debugging a command-line script.  To 
debug a mod_perl handler I just do something like this:

httpd -X -Ddebug

Then I hit the URL with a browser or with GET and it pops me into the 
debugger.  I have httpd.conf set up to add the PerlFixupHandler 
+Apache::DB line when it sees the debug flag.
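
For reference, that httpd.conf arrangement looks roughly like this
(adjust to taste; Apache::DB wants to be initialized before other Perl
code gets loaded):

  # only active when the server is started with -Ddebug
  <IfDefine debug>
      <Perl>
          use Apache::DB ();
          Apache::DB->init;
      </Perl>
      <Location />
          PerlFixupHandler +Apache::DB
      </Location>
  </IfDefine>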

I still don't like to give apache long processes to manage; I feel this
can be better handled external to the server, and in my case it allows
for automation/reports on non-mod_perl machines.



I try to code it so that the business logic is not dependent on a 
certain runtime environment, and then write a small mod_perl handler to 
call it.  Then I can use the same modules in cron jobs and such.  It can 
get tricky in certain situations though, when you want to optimize 
something for a long-running environment but don't want to break it for 
one-shot scripts.
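
A stripped-down example of that split (the module and handler names are
invented):

  # My/App.pm - plain module, knows nothing about its runtime environment
  package My::App;
  use strict;

  sub process_order {
      my ($class, $order_id) = @_;
      # ... business logic goes here ...
      return "processed order $order_id";
  }
  1;

  # My/Handler.pm - thin mod_perl wrapper around the same logic
  package My::Handler;
  use strict;
  use Apache::Constants qw(OK);
  use My::App ();

  sub handler {
      my $r = shift;
      my %args = $r->args;
      $r->send_http_header('text/plain');
      $r->print( My::App->process_order($args{order_id}) );
      return OK;
  }
  1;

  # a cron job can call the same code with no Apache in sight:
  #   perl -MMy::App -le 'print My::App->process_order(42)'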

- Perrin



Re: asynchronous execution, was Re: implementing a set of queue-processing servers

2002-11-19 Thread Perrin Harkins
Stephen Adkins wrote:


So what I think you are saying for option 2 is:

   * Apache children (web server processes with mod_perl) have two
 personalities:
   - user request processors
   - back-end work processors
   * When a user submits work to the queue, the child is acting in a
 user request role and it returns the response quickly.
   * After detaching from the user, however, it checks to see if fewer
 than four children are processing the queue and if so, it logs into
 the mainframe and starts processing the queue.
   * When it finishes the request, it continues to work the queue until
 no more work is available, at which time, it quits its back-end
 processor personality and returns to wait for another HTTP request.





This just seems a bit odd (and unnecessarily complex).



It does when you put it like that, but it doesn't have to be that way. 
I would separate the input (user or queue) from the processing part. 
You'd have a module that runs in mod_perl which knows how to process 
requests.  You have a separate module which can provide a UI for placing 
requests.  Synchronous ones go straight to processing, while asynchronous 
ones get added to the queue.

You'd also have a controlling process that polls the queue and if it 
finds anything it uses LWP to send it to mod_perl for handling.  I would 
make this a tiny script triggered from cron if possible, since cron is 
robust and can handle outages and error reporting nicely.

Why not let there be web server processes and queue worker processes
and they each do their own job?  Web servers seem to me to be for
synchronous activity, where the user is waiting for the results.



When I think of queue processing, I think of a system for handling tasks 
in parallel that provides a simple API for plugging in logic, a 
well-defined control interface, logging, easy configuration... sounds 
like Apache to me.  You just need a tiny control process to trigger it 
via LWP.  Apache is already a system for handling a queue of HTTP 
requests in parallel, so you just have to make your requests look like HTTP.

You certainly could do this other ways, but you'd probably have to write 
a lot more code or else use something far less reliable than Apache.

P.S. Another limitation of the "use Apache servers for all server
processing" philosophy seems to be scheduled events or system events (those not
initiated by an HTTP request, which are user events).


Cron/at + LWP.
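
For example (the path and URL are made up; lwp-request ships with
libwww-perl):

  # crontab: kick off the nightly report at 2am by hitting a mod_perl URL
  0 2 * * *  /usr/bin/lwp-request http://localhost/cron/nightly-report >/dev/null 2>&1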

- Perrin




Re: OSCON BOF report

2002-07-31 Thread Perrin Harkins

Gunther Birznieks wrote:
 What might be nice (but perhaps too overreaching) is an area of P5EE 
 where people can submit success stories

I remember O'Reilly was doing this for a while and publishing them on 
perl.com and in brochures.  Maybe Nat knows more about that effort?

- Perrin




Re: OSCON BOF report

2002-07-30 Thread Perrin Harkins

Johan Vromans wrote:
 Leon Brocard [EMAIL PROTECTED] writes:
 
 
Did I miss any other points? Any opinions?
 
 
 http://linuxtoday.com/news_story.php3?ltsn=2001-08-13-009-20-OP

I hope you meant that as a joke.  Fotango.com, where Leon works, has a 
whole system built on web services in Perl: http://opensource.fotango.com/

Here's an amusing article that makes fun of web services:
http://www.redherring.com/insider/2002/0205/1554.html

Here's a less funny one about how companies are realizing that most of 
them have no use for web services:
http://www.infoworld.com/articles/hn/xml/02/07/25/020725hnwebstall.xml

If web services are the killer app of J2EE (or .NET), we can all breathe 
easy.  Even if you happen to have a use for them, Perl is already better 
at web services than most of the competition.

The breathless tone of that Linux Today article makes it sound like 
there is something technically difficult about web services and 
Microsoft is beating us to it.  It's really just fetching a URL and 
parsing some XML data.  No sweat.

- Perrin




Re: P5EE Sessions

2002-06-25 Thread Perrin Harkins

Gunther Birznieks wrote:
 But from a programming API perspective, I am not sure that the Cache 
 modules that exist are so different from the Apache::Session module in 
 terms of how they are coded except maybe that the handle to the cache 
 data and the Apache::Session id is generated conceivably in a different 
 way.

But that's because Apache::Session is not a session module (or an Apache 
module).  It's just a storage module.  To get a session module, you have 
to glue it to something like Apache::AuthTicket.  I seem to recall your 
Extropia stuff having something that's more like a real session module 
in it.

- Perrin




Re: A reminder of why we're here...

2002-06-09 Thread Perrin Harkins

 I was talking to a TA from Accenture recently about Perl, mod_perl and
 Java and he told me that some java application server (i forgot to ask
 him which) could maintain fewer JDBC connections than http session
 handling threads and share them between the threads as needed without
 prior knowledge of whether or not the thread needed to do DB work.

Sure, any multi-threaded Java program can do that.  It's not as useful
as it sounds, unless your program has a lot of threads that don't do any
database work.  That's not very common.  For more on this, see this
post:
http:[EMAIL PROTECTED]g

 Is there an equivalent way to do this with mod_perl?

With the current mod_perl you should use the recommended reverse proxy
architecture to keep requests for non-mod_perl content from tying up
database connections.  With mod_perl 2, those requests will not invoke a
perl interpreter and thus will not tie up a database connection.
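
The front-end proxy setup is roughly this (ports and paths are made up):

  # lightweight front-end Apache serves static files itself and proxies
  # the dynamic URLs to the mod_perl back-end listening on port 8080
  ProxyPass        /app http://localhost:8080/app
  ProxyPassReverse /app http://localhost:8080/app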

- Perrin




Re: A reminder of why we're here...

2002-06-06 Thread Perrin Harkins

Matt Sergeant wrote:
 To reduce costs and fast-track enterprise application design and
 development, the Java2 Platform, Enterprise Edition (J2EE) technology
 provides a component- based approach to the design, development, assembly,
 and deployment of enterprise applications.

I don't know who they think they're kidding with that "reduce costs" 
business.  Commercial J2EE software has the most outrageous prices.  It's 
common for companies to spend millions on it just to put up a simple web 
store.

- Perrin




Re: A reminder of why we're here...

2002-06-06 Thread Perrin Harkins

[EMAIL PROTECTED] wrote:
 With iplanet shipping free with Solaris 9, and the availability of JBoss, 
 the cost of the app Server software is removed from the equation of 
 'costs'.

If you're using the free stuff, you're in the minority.  Most companies 
use WebLogic or WebSphere, with a few using iPlanet and even fewer using 
Oracle.  The company I work for now uses ATG Dynamo, which is priced so 
high it makes the hardware sound cheap.

This is not a complaint about J2EE, but rather about managers who insist 
on spending millions of dollars rather than using free or low-cost 
alternatives like JBoss, Resin, and Orion.  This attitude seems to be 
the norm among most big companies using Java.

To make this at least slightly relevant to this list, I see Perl's value 
in these situations as being ease of use, speed of development, quality 
of support, and source code availability.  The price is just gravy.

- Perrin




Re: A reminder of why we're here...

2002-06-06 Thread Perrin Harkins

Adam Turoff wrote:
 Have you considered the alternatives?  Like developing with other
 development platforms (like CORBA ORBs), or component technologies
 (COM/COM+/DCOM)?

Hey, I'm just trying to bitch about my managers and the vendors they 
love to pay.  However, J2EE is mostly used for server-side web 
development, and that's what I was talking about.  There's no need for 
any distributed object technology there about 99% of the time.  The real 
alternative is cleanly designed Perl modules leveraging CPAN code.

- Perrin





Re: What does middle tier mean?

2002-03-02 Thread Perrin Harkins

 On Friday, March 1, 2002, at 11:59 AM, Rob Nagler wrote:
  I argue strongly against storing state in the middle tier, which adds
  complexity.  The same argument applies to stored procedures.  Databases
  are good at storing data, not executing code.

"Middle tier" is the application between the client and the database:
mod_perl, FastCGI, etc.

- Perrin




Re: stored procs? why?

2002-03-02 Thread Perrin Harkins

 2 - Why can't you just get several database servers and load the
 stored procedures into all of them?

Because they all need access to the same data, and synchronizing
read/write data between multiple database servers in real time is a
non-trivial problem.  It's much easier to have lots of application
servers hitting one database than to have lots of databases, and keeping
as much as possible out of the database lets you go much further with
that architecture.

- Perrin







Re: stored procs? why?

2002-03-02 Thread Perrin Harkins

 One might think you'd gain similar advantages by doing a prepare on your
 SQL queries prior to running them, but preparing your SQL queries prior to
 running them only really helps when you are going to run them more than once
 during the same connection, and they provide no query optimization on the
 database side, whereas stored procedures are compiled, optimized, and stored
 in the db server's memory for later use by any process.

I could be wrong, but I think that with Oracle the queries you send will
be kept in the database's query cache, just as if they were saved as
stored procedures.  Using bind variables helps limit the number of
unique queries and keep things in the cache.  It doesn't really matter
though.  This discussion is really about stored procedures that have
application logic in them, not just saved SQL queries.
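
For example, with DBI placeholders the statement text stays identical
across calls (the connection details and table are made up):

  use DBI;
  my $dbh = DBI->connect('dbi:Oracle:orcl', 'user', 'pass', { RaiseError => 1 });

  # one cached parse of this statement can be reused for every execute
  my $sth = $dbh->prepare('SELECT name FROM customers WHERE id = ?');
  for my $id (1, 2, 3) {
      $sth->execute($id);
      my ($name) = $sth->fetchrow_array;
  }

  # interpolating the value ("... WHERE id = $id") would make each query
  # text unique and defeat the shared cache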

 I'm just saying that I don't see why it has to be so black and white here.
 SPs are good; n-tier is good; in fact, it's all good, unless you overindulge
 in any of it.

I don't think SPs are good.  That's why I raised this question: to hear
why other people think they are.

 One other question:  Why is this discussion happening on this particular
 list?

There was a thread with several posts from people who seemed excited at
the prospect of writing SPs in perl.  I thought that was a strange thing
to want, so I asked why.

- Perrin




Re: bivio online transaction processing system

2002-02-25 Thread Perrin Harkins




Re: Better Definitions and Analysis

2002-02-15 Thread Perrin Harkins

 I still believe the quickest route to P5EE acceptance is if it is first and
 foremost a *documentation project* that basically provides a 1-stop place
 to go for people who intend to do Enterprise programming in Perl and want
 to know where to go when they want to solve certain problems.

I agree.  I decided a while back that the most useful thing I could do to
further mod_perl development and Perl development in general would be to
write up some CPAN guides to help people with the biggest FAQs on the
mod_perl list.  The templating article I wrote was the first part of this,
and now I'm working on a guide to sharing data between processes
(Cache::Cache, Apache::Session, MLDBM::Sync, etc.) which I hope to present
at the next Perl Conference.  This stuff will probably get folded into the
mod_perl Guide at some point, but applies pretty generally to any serious
programming effort in Perl.

- Perrin




Re: some observations, and a proposed project

2001-11-08 Thread Perrin Harkins

 my only concrete reason for preferring xml, other than that
 it feels right ;), is that you get much better error
 handling right out of the box, especially when you turn on
 validation. that's something that would have to be
 implemented as part of a perl-based config file processor.

Can't you just do something like a require() wrapped in an eval{}?

I'm not against an XML config, although I've always been happy with perl
config files in the past.  (I still want to see the layered config idea that
was discussed earlier.)  It may be that a CGI implementation would need to
cache the data with Storable and stat the XML files to see if they've
changed.
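
Something like this (the path is made up; the config file is ordinary Perl
that sets variables and returns a true value):

  my $config_file = '/etc/myapp/config.pl';
  eval { require $config_file };    # syntax errors in the config land in $@
  die "Error loading $config_file: $@" if $@;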

- Perrin