Re: tracking down why a module was loaded?
Gunther Birznieks wrote: > I unfortunately have to agree. > > And in the end, the salaries for mod_perl programmers > are pretty high right now because of it -- so will a system really cost > less to develop in mod_perl than in Java if Java programmers are becoming > less expensive than mod_perl programmers? > Mod_perl programmers are more expensive as individuals because mod_perl is more powerful and gives you access to the Apache API; mod_perlers are more savvy. One or two mod_perlers could do the work of a Java shop of ten in half the time. Still a savings. Not to mention the hardware that goes with Java by fiat! ed
Re: [OT] advice needed.
Mike, I think many developers share a similar desire not to have projects (that leverage free software) close down what are really generic programming techniques, routines, classes, protocols, etc. Further, we'd like to contribute enhancements and documentation based upon our work. I'd like to find a lawyer who has experience with, and/or wants to pursue, legal means of removing the friction that keeps us from giving back. Part of that work would of course involve contract writing/editing. I'm hiring; contact me if you are such a lawyer. It is up to you to educate your potential employers about just how much of what you do is prior open art and how free software can empower them. That means the first contract has to be amended. ;-) Be very explicit about your intentions from the get-go, and repeat yourself a few times; never assume they'll look at the code or even closely read your written self-description. Ed Michael Dearman wrote: > Where the heck does trying to do the right thing by > GPL (or similar), in attempting to return some improved > OpenSource code to the community. Or however the license > phrases it. Shouldn't these contracts address that issue > specifically, especially when the project is _based_ on > OpenSource/GPL'd code? > > Mike D.
Re: Determining when a cached item is out of date
On Thu, Jan 16, 2003 at 06:33:52PM +0100, Honza Pazdziora wrote: > On Thu, Jan 16, 2003 at 06:05:30AM -0600, Christopher L. Everett wrote: > > > > Do AxKit and PageKit pay such close attention to caching because XML > > processing is so deadly slow that one doesn't have a hope of reasonable > > response times on a fast but lightly loaded server otherwise? Or is > > it because even a fast server would quickly be on its knees under > > anything more than a light load? > > It really pays off to do any steps that will increase the throughput. > And AxKit is well suited for caching because it has clear layers and > interfaces between them. So I see AxKit doing caching not only to get > the performance, but also "just because it can". You cannot do the > caching easily with more dirty approaches. > > > With a MVC type architecture, would it make sense to have the Model > > objects maintain the XML related to the content I want to serve as > > static files so that a simple stat of the appropriate XML file tells > > me if my cached HTML document is out of date? > > Well, AxKit uses filesystem cache, doesn't it? > > It really depends on how much precision you need to achieve. If you > run a website that lists cinema programs, it's just fine that your > public will see the updated pages after five minutes, not immediately > after they were changed by the data manager. Then you can really go > with simply timing out the items in the cache. > > If you need to do something more real-time, you might prefer the push > approach of MVC (because pull involves too much processing anyway, as > you have said), and then you have a small problem with MySQL. As it > lacks trigger support, you will have to send the push invalidation > from your applications. Which might or might not be a problem, it > depends on how many of them you have. I have pages that update as often as every 15 seconds.
I just use mtime() and has_changed() properly in my custom Provider.pm's, or rely on the File provider's checking of the stat of the XML files. Mostly users are getting cached files. For XSPs that are no_cache(1), the code that generates the information that gets sent through the taglib does its own caching, just as if it were a plain mod_perl handler; they use IPC::MM and Cache::Cache (usually the file cache). I've fooled w/ having the cache use different databases but finally decided it didn't make much of a difference, since the OS and disk can be tuned effectively. The standard rules apply: put the cache on its own disk spindle, i.e. not on the same physical disk as your SQL database, etc. Makes a big difference ... you can see w/ vmstat, systat etc. The only trouble is cleaning up the ever-growing stale cache. So I use this simple script from my /etc/daily.local file, or a guy could use cron. It's similar to what OpenBSD uses for cleaning /tmp and /var/tmp in its /etc/daily script. Ed.

# cat /etc/clean_www.conf
CLEAN_WWW_DIRS="/u4/www/cache /var/www/temp"

# cat /usr/local/sbin/clean_www
#!/bin/sh -
# $Id: clean_www.sh,v 1.2 2003/01/03 00:18:27 entropic Exp $

: ${CLEAN_WWW_CONF:=/etc/clean_www.conf}

clean_dir() {
        dir=$1
        echo "Removing scratch and junk files from '$dir':"
        if [ -d $dir -a ! -L $dir ]; then
                cd $dir && {
                        find . ! -name . -atime +1 -execdir rm -f -- {} \;
                        find . ! -name . -type d -mtime +1 -execdir rmdir -- {} \; \
                            >/dev/null 2>&1;
                }
        fi
}

if [ -f $CLEAN_WWW_CONF ]; then
        . $CLEAN_WWW_CONF
fi

if [ "X${CLEAN_WWW_DIRS}" != X"" ]; then
        echo ""
        for cfg_dir in $CLEAN_WWW_DIRS; do
                clean_dir "${cfg_dir}"
        done
fi
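For the archives, the Cache::Cache half of the above might look roughly like this. This is a minimal sketch, not the original code: the namespace, expiry, cache root and the expensive_lookup() helper are illustrative assumptions.

```perl
# Sketch: per-handler caching with Cache::FileCache (from Cache::Cache).
# namespace, cache_root, the expiry and expensive_lookup() are hypothetical.
use strict;
use warnings;
use Cache::FileCache;

my $cache = Cache::FileCache->new({
    namespace          => 'taglib_data',
    cache_root         => '/u4/www/cache',
    default_expires_in => 15,    # seconds; pages here update as often as every 15s
});

sub fetch_report {
    my ($key) = @_;
    my $data = $cache->get($key);
    unless (defined $data) {
        $data = expensive_lookup($key);   # hypothetical: query the db, build the XML
        $cache->set($key, $data);
    }
    return $data;
}
```

A no_cache(1) XSP taglib would call something like fetch_report() instead of hitting the database on every request.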
Re: [error] Can't locate CGI.pm in @INC
On Wed, May 28, 2003 at 09:11:06PM -0700, Brown, Jeffrey wrote: > Here are the results from the log file: > > [Wed May 28 20:50:21 2003] [error] No such file or directory at > /htdocs/perl/first.pl line 6 during global destruction. openbsd's httpd is chrooted. Ed.
Re: [error] Can't locate CGI.pm in @INC
On Thu, May 29, 2003 at 04:12:51PM +1000, Stas Bekman wrote: > Brown, Jeffrey wrote: > >Problem solved! > > > >You all are a fantastic resource to newbies! > > > >Jeff > > > >-Original Message- > >From: Ed [mailto:[EMAIL PROTECTED] > >Sent: Wednesday, May 28, 2003 9:28 PM > >To: Brown, Jeffrey; [EMAIL PROTECTED] > > > >On Wed, May 28, 2003 at 09:11:06PM -0700, Brown, Jeffrey wrote: > > > >>Here are the results from the log file: > >> > >>[Wed May 28 20:50:21 2003] [error] No such file or directory at > >>/htdocs/perl/first.pl line 6 during global destruction. > > > > > >openbsd's httpd is chrooted. > > Again, can someone please post a patch/addition for the troubleshooting.pod > doc explaining the problem and the solution in details. I've seen this kind > of questions more than once here. > > Should go into OpenBSD cat at: > http://perl.apache.org/docs/1.0/guide/troubleshooting.html#OS_Specific_Notes > Get the pod by clicking on the [src] button. For the list archive:

- rtfm
  "-u" disables chroot. httpd(8)
  http://www.openbsd.org/faq/faq10.html#httpdchroot

- set up chroot basics
  The doc for setting up an anoncvs mirror could be adapted for mod_perl.
  http://www.openbsd.org/anoncvs.shar
  Of course much of it doesn't apply, but the part about ld.so, etc. is helpful.

- list archives
  dreamwvr figured out how to actually get things to work and posted notes to the list (so see the archives).

- 3.3-current (soon to be 3.4)
  One last bit, added after 3.3 was released: revision 1.7 of apachectl,
  http://www.openbsd.org/cgi-bin/cvsweb/src/usr.sbin/httpd/src/support/apachectl
  picks up httpd_flags from /etc/rc.conf, so you can just add "-DSSL -u" to httpd_flags.

- ports
  The OpenBSD ports system is not by default configured to install perl modules or packages in the chroot environment. You would have to set PREFIX or LOCALBASE; see bsd.port.mk(5) and ports(7). (PHP ports are set up for chroot installs.)

- google
  A HOWTO on running mod_perl chrooted would be nice; maybe someone's already written it?

I hope this helps some. Ed.
Re: proxy front to modperl back with 1.3.24
FYI, There is a patch this morning from the mod_proxy maintainer. http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=101810478231242&w=2 Ed On Fri, Apr 05, 2002 at 02:33:35PM -0800, ___cliff rayman___ wrote: > i had trouble using a proxy front end to both > a mod_perl and mod_php back end servers. > > this works fine for me at 1.3.23, so I reverted > back to it. i copied the httpd.conf files > from the 1.3.24 to my downgraded 1.3.23 > and everything worked correctly on the first > try. > > i was getting garbage characters before the first > or doctype tag, and a 0 character at > the end. also, there was a delay before the > connection would close. i tried turning keep > alives off and on in the back end server, > but i did not note a change in behavior > i also tried some different buffer directives, > including the new ProxyIOBufferSize. > > these garbage characters and delays were > not present serving static content from the > front end server, or when directly requesting > content directly from either of the back end > servers. > > i know they've made some mods to the > proxy module, including support for HTTP/1.1, > but i did not have time to research the exact > cause of the problem. > > just a word of warning before someone > spends hours in frustration, or perhaps > someone can give me a tip if they've > solved this problem. > > -- > ___cliff [EMAIL PROTECTED]http://www.genwax.com/ > >
Re: [RFC] Dynamic image generator handler
On Fri, May 10, 2002 at 10:46:11AM -0700, Michael A Nachbaur wrote: > On Fri, 10 May 2002 08:32:55 +0200 > Robert <[EMAIL PROTECTED]> wrote: > > > Take a look at Apache::ImageMagick > > In my benchmarks I ran, ImageMagick was way slower than GD. I wrote a > little test, rendering a little text image of 120x30. With ImageMagick, > I was getting 0.3 rps, and under GD with similar circumstances I was > getting 1.5rps. I'm sure I could've optimized the ImageMagick one a bit > further, but that quick test settled it for me. > > I looked at Apache::ImageMagick last night however, and although it > seems pretty useful, it doesn't really address what I want to do with > my module. I'm using Imlib2 w/ the C interface (http://freshmeat.net/projects/imlib2perl/). I needed antialiased lines, alphas, etc. I modified my app to use a 'DBI'-like interface for potentially any media driver. The different 'media drivers' (gd, imlib2, *pdf/*tex, etc.) all have different ideas of how to draw a line, circle, polygon or text, add colors, etc. Now all I have to do to use a different library is Media->new(Driver => 'imlib2'), or Media->new(Driver => 'gd'), Media->new(Driver => 'svg'), Media->new(Driver => 'pdflib'), etc. There are many libraries out there: gd, imlib, imlib2, libart, povray, gdk, flash, PDF::API2, pdflib, tex, latex, svg, Imager, ImageMagick, ... There are many good reasons to be able to 'just drop in' a driver ... just look at why the unified DBI interface was developed for RDBMSs. Ed
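A factory along those lines is only a few lines of Perl. This is a sketch, not Ed's actual code: the Media::Driver::* classes are made-up stand-ins for wrappers around GD, Imlib2, SVG and friends, all exposing the same drawing methods.

```perl
#!/usr/bin/perl
# DBI-style dispatch: Media->new(Driver => ...) returns a driver object.
# The Media::Driver::* classes here are hypothetical stand-ins.
use strict;
use warnings;

package Media;

sub new {
    my ($class, %args) = @_;
    my $name = delete $args{Driver} or die "no Driver given";
    my $pkg  = 'Media::Driver::' . ucfirst lc $name;
    # A real factory would require() the driver's .pm file here.
    die "unknown driver '$name'" unless $pkg->can('new');
    return $pkg->new(%args);
}

# Trivial in-memory driver, just to show the shared interface.
package Media::Driver::Gd;

sub new  { my ($class, %args) = @_; bless { %args, ops => [] }, $class }
sub line { my ($self, %p) = @_; push @{ $self->{ops} }, [ line => \%p ]; $self }

package main;

my $m = Media->new(Driver => 'gd', width => 120, height => 30);
$m->line(x1 => 0, y1 => 0, x2 => 119, y2 => 29);
print ref($m), "\n";                  # prints "Media::Driver::Gd"
print scalar @{ $m->{ops} }, "\n";    # prints "1"
```

Swapping backends then touches one constructor argument, exactly as with DBI's `DBI->connect("dbi:mysql:...")` vs `"dbi:Pg:..."`.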
Re: [RFC] Dynamic image generator handler
> leverage that, although I'm not certain how difficult it would be to > interface with CSS files. As far as I know, there are Perl CSS parsers, > but I have yet to use them. The configuration for a preset config > template would be layered, so the earlier the definition, the lower the > layer is. The real important part here is the "name" attribute of any > element, as this identifies where input can be indicated. The above > preset could be used by invoking the following URI. I used CSS.pm for a bit, but it was too fat w/ Parse::RecDescent. To unify my app and the browser, I use AxKit to 'generate' the CSS from an XML file:

.back {
    color: black;
    font-family: geneva, arial;
    font-size: 7px;
    background-color: white;
}

Creating complicated CSS files is difficult, but my drawing app can load its info from the URI, an XML file, an RDBMS, Config::General, ini files or whatever, and use different output methods such as AxKit's providers to parse the 'color config' and render the *.css file to the browser. This way the document, style, skin and images are all unified. Graphics::ColorNames works wonderfully to help handle all the different color needs. > > http://localhost/genImage/preset=thumbnail-image;src=/images/ducks.jpg > > As you can see, the preset is invoked by passing its name as an > attribute, and any element that has a name attribute, its value can be > provided on the URI. If an element has both a value and a name > attribute, the value in the config file can be used as a default. > > *) Caching Schemes > > A caching scheme similar to AxKit could be used. The current module > takes all the input arguments, sorts them (including all values that are > not provided, for completeness), and takes its MD5 checksum. That > becomes the image's filename on the system. It is placed in a temporary > directory, and any further requests to that same URI, the file is pulled > from the filesystem without regenerating the image.
Further, the code > has been blatantly ripped off from AxKit, which separates the directory > into two sub-levels, to prevent performance problems of having too many > files in one directory. > > Note: To prevent the filesystem from filling up due to DoS attacks, it > may be prudent to have a cron job periodically cull files that have the > oldest access time. Cache::Cache is appropriate here ... > > *) Image Manipulation Modules > > My current code uses GD for text writing, and I'm quite happy with it. > It is extremely fast, and creates nice text output when compiled with a > TTF font engine. Looking forward however, it may not be as desirable if > things like drop shadows are to be done. GD can work with multiple > images, can resize them, etc, but the advanced features are still > unknown. > > *) File Expiration Headers and Browser Caching > > With my current code, it seems that browsers are reluctant to cache > these dynamically generated images. I have passed Expires: headers to > tell the browser to cache the file for a long period of time (2+ > weeks), but I have been unsuccessful. I know the caching headers are > complex, and need more than one simple header, but fixing this has > moved to the back-burner of my project. However, if more complicated > processing is to be done, and with more images, it will be crucial to > make browsers cache these images. I create a digest w/ MD5 or SHA1 for the image/pdf and use it as the filename and the Cache::Cache key. The cache is easily invalidated if the source image file fails a -e test. I also use cron to delete stale image files. The generated, now-static image is redirected-to or referenced in the html. I found that it is important to complete the processing of the images before the referencing html document gets served, rather than having the html document initiate dynamic embedded links to create the image. Letting apache serve images as static image-files has proved rock solid for me ...
(note the keep-alives). There is nothing worse than pulling a page and having to wait for each of the images to show up. Browsers, proxies and users are all real pains to deal w/ when the URI has a query string. Digests are ugly, but they play much better w/ everybody. 014d1c89fc3da6e15e0069000dfa381e44239af71021057594.png Ed
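The digest-for-a-filename trick reads like this in code. A sketch only: the parameter canonicalization and the .png suffix are assumptions about the general approach, not Ed's exact scheme.

```perl
#!/usr/bin/perl
# Map a set of request parameters to a stable cache filename: sort the
# keys so that equivalent query strings hash to the same digest.
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

sub image_cache_name {
    my (%params) = @_;
    my $canonical = join ';', map { "$_=$params{$_}" } sort keys %params;
    return md5_hex($canonical) . '.png';
}

my $name1 = image_cache_name(src => '/images/ducks.jpg', preset => 'thumbnail-image');
my $name2 = image_cache_name(preset => 'thumbnail-image', src => '/images/ducks.jpg');
print "$name1\n";                               # 32 hex chars plus ".png"
print $name1 eq $name2 ? "same\n" : "different\n";   # prints "same"
```

The resulting name is ugly, but it contains no query string, so browsers and proxies cache the static file happily.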
Re: [Templates] Re: Separating Aspects (Re: separating C from V in MVC)
On Fri, Jun 07, 2002 at 09:14:25AM +0100, Tony Bowden wrote: > On Thu, Jun 06, 2002 at 05:08:56PM -0400, Sam Tregar wrote: > > > Suppose you have a model object for a concert which includes a date. On > > > one page, the designers want to display the date in a verbose way with > > > the month spelled out, but on another they want it abbreviated and fixed > > > length so that dates line up nicely. Would you put that formatting in > > > the controller? > > In the script: > > > >$template->param(long_date => $long_date, > > short_date => $short_date); > > In the template: > > > >The long date: > >The short date: > > Can I vote for "yick" on this? > > A designer should never have to come to a programmer just to change the > formatting of a date. > > I'm a huge fan of passing Date::Simple objects, which can then take a > strftime format string: > > [% date.format("%d %b %y") %] > [% date.format("%Y-%m-%d") %] > > Tony > xmlns:date="http://exslt.org/dates-and-times" wins for me. date:date-time(), date:date(), date:time(), date:month-name(), etc. XSLT solutions win for me because they are supported (or seem to be) by many major languages and applications. XSLT stylesheets can be processed, reused and shared with my C, Perl, Java, JavaScript, Ruby, Mozilla, IE ... KDE apps, GNOME apps ... etc. Imagine having your templates and data supported and interoperable ... Aren't we trying to rid the world of proprietary (only-works-here) things? Ed (an AxKit lover)
Re: E-commerce payment systems for apache/mod_perl
On Tue, Jul 02, 2002 at 10:43:14PM -0500, David Dyer-Bennet wrote: > Any obvious choices for a relatively small-scale e-commerce payment > processing system for a server running apache / mod_perl?

http://interchange.redhat.com/
  - it's mature
  - we wrote our own, but I'd use it instead if I had to start over

http://www.ipaymentinc.com/
  - reseller for authorize.net

http://authorize.net/
  - big transaction provider
  - supported CPAN module (simple/trivial)

http://www.dhl.com/
  - we get really cheap rates for DHL's next-day shipping service worldwide
    (1-2 days continental US @ $6; 3-4 days door-to-door to Pakistan from Indianapolis @ $21)
    ... much, much cheaper than even the cheapest UPS residential ground
  - UPS has well-developed XML APIs; DHL doesn't

Ed
Re: Apache Hello World Benchmarks Updated
Hi, (as far as I can tell after a quick peek at the code and some debugging) it looks like there is a bug w/ AxKit::run_axkit_engine() and/or Apache::AxKit::Cache::_get_stats(). run_axkit_engine() wants to create a .gzip cachefile even when AxGzipOutput is off. When AxGzipOutput is off, the .gzip file is never made and _get_stats() returns w/ !$self->{file_exists}, effectively disabling delivery of cached copies. With AxGzipOutput enabled, both files are created and the appropriate cached copies are delivered as expected. I haven't decided on a best fix for myself other than just enabling AxGzipOutput. So, I reran hello/bench.pl w/ AxGzipOutput On and sped AxKit up quite a bit. Attached are some diffs and a couple of 1-sec bench.pl runs. Would be interesting to see how AxKit compares now? Thanks, Ed On Mon, Oct 14, 2002 at 12:26:06AM -0700, Josh Chamas wrote: > Hey, > > The Apache Hello World benchmarks are updated at > > http://chamas.com/bench/ > > The changes that affect performance numbers include: > > Set MaxRequestsPerChild to 1000 globally for more realistic run. > > Set MaxRequestsPerChild to 100 for applications that seem to leak > memory which include Embperl 2.0, HTML::Mason, and Template Toolkit. > This is a more typical setting in a mod_perl type application that > leaks memory, so should be fairly representative benchmark setting. > > Note that the latter change seemed to have the most benefit for Embperl 2.0, > with some benefit for Template Toolkit & less ( but some ) for HTML::Mason > on the memory usage numbers.
> > Regards,
> > Josh
>
> Josh Chamas, Founder       phone:925-552-0128
> Chamas Enterprises Inc.    http://www.chamas.com
> NodeWorks Link Checking    http://www.nodeworks.com

--- hello/bench.pl      Sun Oct 13 04:07:35 2002
+++ hello-gz/bench.pl   Tue Oct 15 00:15:48 2002
@@ -106,7 +106,7 @@
 # FIND AB
 my $httpd_dir = $HTTPD_DIR;
-$AB = "$httpd_dir/ab";
+$AB = '/usr/sbin/ab'; #"$httpd_dir/ab";
 unless(-x $AB) {
     print "ab benchmark utility not found at $AB, using 'ab' in PATH\n";
     $AB = 'ab';

--- hello/bench.pl      Sun Oct 13 04:07:35 2002
+++ hello-gz/bench-gz.pl        Tue Oct 15 00:16:32 2002
@@ -106,7 +106,7 @@
 # FIND AB
 my $httpd_dir = $HTTPD_DIR;
-$AB = "$httpd_dir/ab";
+$AB = '/usr/sbin/ab'; #"$httpd_dir/ab";
 unless(-x $AB) {
     print "ab benchmark utility not found at $AB, using 'ab' in PATH\n";
     $AB = 'ab';
@@ -583,6 +583,7 @@
     AxAddStyleMap application/x-xpathscript Apache::AxKit::Language::XPathScript
     AxAddProcessor text/xsl hello.xsl
     AxCacheDir $TMP/axkit
+    AxGzipOutput On
 }],
 'AxKit XSLT Big' => ['hxsltbig.xml', qq{
@@ -593,6 +594,7 @@
     AxAddStyleMap application/x-xpathscript Apache::AxKit::Language::XPathScript
     AxAddProcessor text/xsl hxsltbig.xsl
     AxCacheDir $TMP/axkit
+    AxGzipOutput On
 }],
 'AxKit XSP Hello' => ['hello.xsp', qq{
@@ -601,6 +603,7 @@
     AxAddStyleMap application/x-xsp Apache::AxKit::Language::XSP
     AxAddProcessor application/x-xsp NULL
     AxCacheDir $TMP/axkit
+    AxGzipOutput On
 }],
 'AxKit XSP 2000' => ['h2000.xsp', qq{
@@ -609,6 +612,7 @@
     AxAddStyleMap application/x-xsp Apache::AxKit::Language::XSP
     AxAddProcessor application/x-xsp NULL
     AxCacheDir $TMP/axkit
+    AxGzipOutput On
 }],
 # new Embperl 2.x series

[2002-10-15 00:16:53] Found apache web server at /usr/local/sbin/httpd_perl
[2002-10-15 00:16:53] running 1 groups of benchmarks for 1 seconds
[2002-10-15 00:16:56] testing AxKit v1.6 XSP 2000 at http://localhost:5000/h2000.xsp?title=Hello%20World%202000&integer=2000
[2002-10-15 00:17:11] testing AxKit v1.6 XSP Hello at http://localhost:5000/hello.xsp
[2002-10-15 00:17:25] testing AxKit v1.6 XSLT Hello at http://localhost:5000/hxslt.xml
[2002-10-15 00:17:40] testing AxKit v1.6 XSLT Big at http://localhost:5000/hxsltbig.xml

Test Name              Test File   Hits/sec  # of Hits  Time(sec)  secs/Hit  Bytes/Hit
---------------------  ----------  --------  ---------  ---------  --------  ---------
AxKit v1.6 XSP 2000    h2000.xsp       14.8         20       1.35  0.067600      28680
AxKit v1.6 XSP Hello   hello.xsp      245.5        261       1.06  0.004073        353
AxKit v1.6 XSLT Hello  hxslt.xml      157.6        169       1.07  0.006343        331
AxKit v1.6 XSLT Big    hxsltbig.x      37.3         38       1.02
Re: code evaluation in regexp failing intermittantly
On Wed, Oct 23, 2002 at 02:24:48PM -0500, Rodney Hampton wrote: > > Can any of you gurus please help! > A wise guru would help by directing you to: http://perl.apache.org/docs/tutorials/tmpl/comparison/comparison.html
Re: Random broken images when generating dynamic images
On Wed, Oct 23, 2002 at 05:55:05PM -0500, Dave Rolsky wrote: > So here's the situation. > > I have some code that generates images dynamically. It works, mostly. > > Sometimes the image will show up as a broken image in the browser. If I > reload the page once or twice, the image comes up fine. > > On a page with 5 different dynamic images (all generated by the same chunk > of code, it's a set of graphs), I'll often see 1 or 2 as a broken image, > but the rest work. Sometimes all 5 are ok. > > I tried out a scheme of writing them to disk with dynamically generated > files, but since I still need to do some auth checking, they end up being > served dynamically and I have the same problem. > > To make it even weirder, I just took a look at one of the image files that > showed up as broken, and it's fine (I can't view it directly in my > browser). I've seen the problem before. My solution was to save the dynamic images on disk and serve them just like plain ol' static files from the front-end server. This way everything is served over the same "Keep-Alive" connection, and Apache does all the HTTP/1.1 headers/chunked-encoding for me. Your MaxKeepAliveRequests would then be the culprit on your end, but that's not likely unless it's set really low. I'm not sure how the browser determines the equivalent limit; tcpdump showed that Opera created a second keep-alive request after 10 images for me (it could be limiting on bytes rather than requests ... don't know). You can still serve dynamically, handle the custom auth w/ the backend, and maintain the client's keep-alive. The current mod_proxy will maintain the client's keep-alive even though your backend has keepalive off. Be sure all the required HTTP/1.1 components/headers are sent to maintain a keep-alive. I'm interested in what you finally work out. thanks, Ed
Re: repost: [mp1.0] recurring segfaults on mod_perl-1.27/apache-1.3.26
Daniel, Could be bad hardware. Search google for Signal 11. Probably your memory (usual cause I've seen). good luck. Ed On Tue, Oct 08, 2002 at 09:46:16AM -0700, [EMAIL PROTECTED] wrote: > Sorry for the repost, but no responses so far, and I need some help with > this one. > > I've managed to get a couple of backtraces on a segfault problem we've > been having for months now. The segfaults occur pretty rarely on the > whole, but once a client triggers one on a particular page, they do not > stop. The length and content of the request are key in making the > segfaults happen. Modifying the cookie or adding characters to the > request line causes the segfaults to stop. > > example (word wrapped): > > > This request will produce a segfault (backtrace in attached gdb1.txt) > and about 1/3 of the expected page : > > > nc 192.168.1.20 84 > GET /perl/section/entcmpt/ HTTP/1.1 > User-Agent: Mozilla/5.0 (compatible; Konqueror/3; Linux 2.4.18-5) > Pragma: no-cache > Cache-control: no-cache > Accept: text/*, image/jpeg, image/png, image/*, */* > Accept-Encoding: x-gzip, gzip, identity > Accept-Charset: iso-8859-1, utf-8;q=0.5, *;q=0.5 > Accept-Language: en > Host: 192.168.1.20:84 > Cookie: > >mxstsn=1033666066:19573.19579.19572.19574.19577.19580.19576.19558.19560.19559.19557.19567.19566.19568.19544.19553.19545.19551.19554.19546.19548.19547.19532.19535.19533.19538.19534:0; > > > Apache=192.168.2.1.124921033666065714 > > > Adding a bunch of zeroes to the URI (which does not change the code > functionality) causes the page to work correctly: > > > nc 192.168.1.20 84 > GET > /perl/section/entcmpt/? 
> > HTTP/1.1 > User-Agent: Mozilla/5.0 (compatible; Konqueror/3; Linux 2.4.18-5) > Pragma: no-cache > Cache-control: no-cache > Accept: text/*, image/jpeg, image/png, image/*, */* > Accept-Encoding: x-gzip, gzip, identity > Accept-Charset: iso-8859-1, utf-8;q=0.5, *;q=0.5 > Accept-Language: en > Host: 192.168.1.20:84 > Cookie: > >mxstsn=1033666066:19573.19579.19572.19574.19577.19580.19576.19558.19560.19559.19557.19567.19566.19568.19544.19553.19545.19551.19554.19546.19548.19547.19532.19535.19533.19538.19534:0; > > > Apache=192.168.2.1.124921033666065714 > > > > > Some info: > /usr/apache-perl/bin/httpd -l > Compiled-in modules: >http_core.c >mod_env.c >mod_log_config.c >mod_mime.c >mod_negotiation.c >mod_status.c >mod_include.c >mod_autoindex.c >mod_dir.c >mod_cgi.c >mod_asis.c >mod_imap.c >mod_actions.c >mod_userdir.c >mod_alias.c >mod_access.c >mod_auth.c >mod_so.c >mod_setenvif.c >mod_php4.c >mod_perl.c > > > > Please forgive any obvious missing info (i'm not a c programmer). The > first backtrace shows the segfault happening in mod_perl_sent_header(), > and the second shows it happening in the ap_make_array() which was from > Apache::Cookie. I don't have one handy now, but I've also seen it happen > in ap_soft_timeout() after an XS_Apache_print (r->server was out of bounds). > > I've added a third backtrace where r->content_encoding contains the > above 'mxstsn' cookie name. > > > > > Any help would be greatly appreciated. > > -- > -- > Daniel Bohling > NewsFactor Network > > [root@proxy dumps]# gdb /usr/apache-perl/bin/httpd core.12510 > GNU gdb Red Hat Linux (5.2-2) > Copyright 2002 Free Software Foundation, Inc. > GDB is free software, covered by the GNU General Public License, and you are > welcome to change it and/or distribute copies of it under certain conditions. > Type "show copying" to see the conditions. > There is absolutely no warranty for GDB. Type "show warranty" for details. > This GDB was configured as "i386-redhat-linux"... 
> Core was generated by `/usr/apache-perl/bin/httpd'. > Program terminated with signal 11, Segmentation fault. > Reading symbols from /lib/libpam.so.0...done. > Loaded symbols for /lib/libpam.so.0 > Reading symbols from /usr/lib/libmysqlclient.so.10...done. > Loaded symbols for /usr/lib/libmysqlclient.so.10 > Reading symbols from /lib/libcrypt.so.1...done. > Loaded symbols for /lib/libcrypt.so.1 > Reading symbols from /lib/libresolv.so.2...done. > Loaded symbols for /lib/libresolv.so.2 > Reading symbols from /lib/i686/libm.so.6...done. > Loaded symbols for /lib/i686/libm.so.6 > Reading symbols from /lib/libdl.so.2...done. > Loaded symbols for /lib/libdl.so.2 > Reading symbols from /lib/libnsl.so.1...done. > Loaded symbols for /li
Re: repost: [mp1.0] recurring segfaults on mod_perl-1.27/apache-1.3.26
On Fri, Oct 18, 2002 at 03:54:22PM -0400, Perrin Harkins wrote: > Ed wrote: > >Could be bad hardware. Search google for Signal 11. > > That's actually pretty rare. Segfaults are usually just a result of > memory-handling bugs in C programs. I saw the problem when someone had their memory speed set too low in the BIOS on an ASUS A7V motherboard. Apps such as bzip2 croaked and memory-intensive compiles failed in random places. Very similar to the first answer here: http://www.bitwizard.nl/sig11/ When we were trying to debug, the failures were a giant mystery. We spent days inside gdb and such trying to figure out what the heck was up. It turns out the BIOS reset to incorrect settings after a power failure, and it took a week or so till the random sig 11's showed up. It was at a remote colocation too ... (checking the BIOS was last on our list). We ended up replacing the box at the colo ... this melted/sig11 box is still able to run NetBSD with the BIOS under-clocked (up 183 days), but they don't use it for anything important. Anyway, I have a story about a bad NIC cable too but will save it. /me paranoid about mysterious sig 11's ... Ed
Re: More Segfaultage - FreeBSD, building apache, ssl, mod_perl from ports
On Tue, Nov 12, 2002 at 04:29:19PM +, Rafiq Ismail (ADMIN) wrote: > I'm a bit irritated by FreeBSD ports at the moment and need someone to > shine some light. I need to build Apache from ports on a BSD box - it has > to be from ports - but i don't want to include mod_perl in as a dso. > Thus, I'd like to go to ports and 'Make' with a bunch of options which > will compile mod_perl straight into my apache1.3-ssl package. Having run > make on www/apache1.3-ssl and www/mod_perl, all I get is segfaults. I > simply want to run one make to build it in one go. > > How??? > > I'm sure that the BSD users amongst you have all done it 101 times. > > Help please? Attached is a port I use for OpenBSD. (It needs cleaning, but works for me.) There are a bunch of "customizations", but some key points of the Makefile are: DISTFILES=, PATCH_LIST_SUP=, FAKE_FLAGS=, and the post-patch: target. Ed. www-mod_perl.tar.gz Description: application/tar-gz
Re: Image Magick Alternatives?
On Mon, Feb 18, 2002 at 09:26:57PM -, Jonathan M. Hollin wrote: > The WYPUG migration from Win2K to Linux is progressing very nicely. > However, despite my best efforts, I can't get Perl Magick to work > (Image::Magick compiled successfully and without problems). All I use > Perl Magick for is generating thumbnails (which seems like a waste > anyway). So, is there an alternative - a module that will take an image > (gif/jpeg) and generate a thumbnail from it? I have searched CPAN but > haven't noticed anything suitable. If not, is there anyone who would be > willing to help me install Perl Magick properly? Imager can do what you want: many formats, antialiasing, FreeType support, etc. Ed
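A minimal thumbnail sketch with Imager; the file names are placeholders, and `type => 'min'` makes `scale()` fit the image inside the given box while preserving the aspect ratio:

```perl
use Imager;

# hypothetical input/output paths
my $img = Imager->new;
$img->read(file => 'photo.jpg')
    or die "read failed: ", $img->errstr;

# fit within 120x120, keeping the aspect ratio
my $thumb = $img->scale(xpixels => 120, ypixels => 120, type => 'min');

$thumb->write(file => 'thumb.jpg')
    or die "write failed: ", $thumb->errstr;
```

The output format is inferred from the file extension, so writing a `.png` or `.gif` thumbnail works the same way.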
pod and EmbPerl
Does anyone know whether it is possible to pod-ify an EmbPerl document? When embedding pod directives in my EmbPerl pages and then running pod2html on them, the pod2html interpreter returns a blank page. thanks, Ed
RE: DBD::Oracle && Apache::DBI
Ian-- I very occasionally get these errors while using DBI and DBD::Oracle under mod_perl. I find that it generally happens when a random, perfectly good SQL statement causes the Oracle process to dump the connection and write the reason to alert.log. Try doing the following: from your Oracle home, run: > find . -name 'alert*' -print Go to that directory, read the alert files, and look through any corresponding trace files. The trace files contain the SQL that actually caused the trace dump. I find that I can usually rewrite the SQL statement in such a way that it no longer dumps core. Again, this happens _very_ rarely. Hope this helps, Ed -Original Message- From: Ian Kallen [mailto:[EMAIL PROTECTED]] Sent: Monday, May 22, 2000 9:37 PM To: [EMAIL PROTECTED] Subject: DBD::Oracle && Apache::DBI I've done everything I can think of to shore up any DB connection flakiness but I'm still plagued by errors such as these: DBD::Oracle::db selectcol_arrayref failed: ORA-12571: TNS:packet writer failure ...this is only a problem under mod_perl, outside of the mod_perl/Apache::DBI environment everything seems fine. Once the db connection is in this state, it's useless until the server gets a restart. My connect strings look good and agree, I put Stas' ping method in the DBD::Oracle::db package, set a low timeout, called Oracle (they don't want to hear about it). Everything is the latest versions of mod_perl/Apache/DBI/DBD::Oracle connecting to an Oracle 8.1.5 db on Solaris. Is Apache::DBI not up to it? (it looks simple enough) Maybe there's a better persistent connection method I should be looking at? -- Salon Internet http://www.salon.com/ Manager, Software and Systems "Livin' La Vida Unix!" Ian Kallen <[EMAIL PROTECTED]> / AIM: iankallen / Fax: (415) 354-3326
apache.org down
"Hughes, Ralph" wrote: > COOL! > I couldn't wait... > I built and installed mod_perl 1.24 and it fixed the problem! Now if I can > just get the CGI module > to recognize my domainname .. :-) > > -Original Message- > From: Hughes, Ralph > Sent: Friday, June 02, 2000 2:02 PM > To: Geoffrey Young; 'Michael Todd Glazier'; ModPerl > Subject: RE: Segmetation Fault problem > > I'm not too good on back traces myself. ` > I'm using a dynamic build of mod_perl, so I may try building the 1.24 > version next week sometime. > I hadn't thought of changing the PerFreshStart parameter, it might make a > difference... > > -Original Message- > From: Geoffrey Young [mailto:[EMAIL PROTECTED]] > Sent: Friday, June 02, 2000 1:11 PM > To: Hughes, Ralph; 'Michael Todd Glazier'; ModPerl > Subject: RE: Segmetation Fault problem > > hmmm, did you try upgrading your installation then? > you are using a static mod_perl? > PerlFreshRestart Off? > > I'm no good at reading backtraces, but posting that is probably the next > step (see SUPPORT doc section on core dumps in the distribution) > > sorry I can't be of more help... > > --Geoff
was apache.org down
Level 3 is broken. They know and are working on it. hmmm Ed
Re: was apache.org down
Replying to myself. It is back up, obviously. sorry for the noise Ed Phillips wrote: > Level 3 is broken. > > They know and are working on it. hmmm > > Ed
Re: [benchmark] DBI/preload (was Re: [RFC] improving memory mappingthru code exercising)
Yes, very cool Stas! Perrin Harkins wrote: > On Sat, 3 Jun 2000, Stas Bekman wrote: > > > correction for the 3rd version (had the wrong startup), but it's almost > > the same. > > > > Version Size Shared Diff Test type > > > > 1 3469312 2609152 860160 install_driver > > 2 3481600 2605056 876544 install_driver & connect_on_init > > 3 3469312 2588672 880640 preload driver > > 4 3477504 2482176 995328 nothing added > > 5 3481600 2469888 1011712 connect_on_init > > Cool, thanks for running the test! I will put this information to good > use...
Re: [OT now] Re: Template techniques
I'm just using XML on the backend for content management and as a way to standardize what I receive from partners and content folks, then storing parsed content in a database from which I output text, HTML, and/or XML. XML::Parser suits quite fine for the above. So, Perl has plenty of XML support, imo. I've taken a look at what Matt is up to and I'm intrigued, but don't have a need for it as yet. Joshua, what is the itch that you are scratching, if you care to opine? Ed Drew Taylor wrote: > Joshua Chamas wrote: > > > > Perrin Harkins wrote: > > > > > > On Fri, 9 Jun 2000, Drew Taylor wrote: > > > > I really like the fact that templates can be compiled to perl code & > > > > cached. Any others besides Mason & EmbPerl (and TT in the near future)? > > > > > > Sure: Apache::ePerl, Apache::ASP, Text::Template, and about a million > > > unreleased modules that people wrote for their own use. (I think writing > > > a module that does this should be a rite of passage in Perl hacking.) > > > > > > > For my second rite of passage, I'm hacking XML::XSLT > > integration into Apache::ASP for realtime XSLT document > > rendering with a sophisticated caching engine utilizing > > Tie::Cache. Moving forward, the XML buzzword seems to be > > just about a necessity. > > > > Take it as a sign of respect Matt :) > Cool! The thing that perl is missing the most right now is XML support. > The more (and the sooner :-) packages support XML easily and natively, > the better. I'm still an XML newbie, so all this recent perl XML > development is very exciting for me! > > -- > Drew Taylor > Vialogix Communications, Inc. > 501 N. College Street > Charlotte, NC 28202 > 704 370 0550 > http://www.vialogix.com/
[OT] Contract Language for free software
Hi All, This is very OT, but is related to mod_perl. I have a contract, as yet unsigned, with a Web Company to possibly rewrite their horrible perl 4 era CGI apps as nice clean mostly OO mod_perl apps. It's a simple job, rather generic. I usually license my code with the Artistic License, so this is not a Licensing question, exactly. In the contract they have sent me, there is a paragraph that signs the rights to all inventions over to them. From my perspective, what little invention there might be in these apps is drawn directly from freely available documentation and advice, and the apps will of course rely heavily on CPAN modules, as well as things I've written before. So, what I'm looking for is advice or a paragraph that establishes that the greater part of invention is already public, that these programs use free software, and that inventions non-specific to their business, if any, should be returned to the community, etc. I'm not a lawyer. :-) In the past, I've just ignored such paragraphs and I've lowered the boom in a banner at the top of my code. But I'd like to put something in any contract that I sign, and these people have no idea how to deal with free software. They want me to sign an NDA now that I've seen their hideous spaghetti, which is of course full of copyright notices. I told them that if anyone stole this code, they would need their head checked! If someone has crafted any pro free software contracts that cover invention and they would like to share, I'd be grateful. Cheers, Ed
Re: [OT] Contract Language for free software
Please excuse the horrible formatting, The version below should be more readable. Ed Phillips wrote: > Hi All, > > This is very OT, but is related to mod_perl. I have a contract as yet > unsigned > with a Web Company to possibly rewrite their horrible perl 4 era CGI apps > as nice clean mostly OO mod_perl apps. It's a simple job, rather generic. > > I usually license my code with the Artistic License, so this is not a > Licensing > question, exactly. In the contract, they have sent me, they have a > paragraph > that signs the rights to all inventions over to them. > > From my perspective, what little invention there might be in these apps, > is drawn directly from freely available documentation and advice, and > the apps will of course rely heavily on CPAN modules, as well as > things I've written before. So, what I'm looking > for is advice or a paragraph that establishes that the greater part of > invention > is already public, that these programs use free software, and that > non-specific > to their business inventions, if any should be returned to the community, > etc. > > I'm not a lawyer. :-) In the past, I've just ignored such paragraphs and > I've > lowered the boom in a banner at the top of my code. But, I'd like to put > something > in any contract that I sign and these people have no idea how to deal with > free software. > They want me to sign an NDA now that I've seen their hideous spaghetti, > which is > of course full of copyright notices. I told them that if anyone stole > this code, they would > need their head checked! > > If someone has crafted any pro free software contracts that cover > invention and > they would like to share, I'd be grateful. > > Cheers, > > Ed
Re: [OT] [JOB] mod_perl and Apache developers wanted
It is interesting and somewhat ironic that the Engineering department at eToys is part of the open source community and culture while their management's behavior was so disastrously misguided and so misunderstanding of net culture and precedent. They shot themselves in the foot pretty badly. Would eToys have paid for the legal expenses of the Etoy group if they weren't clued in by their Engineering department? Have they learned a hard lesson? Perrin is an exemplary figure, and I commiserate with him, but some basic precedents of net culture need to be respected for the network to function and the culture to flourish. If we had not protested the attempted eToys domain grab, and I was one who protested, they may have never recanted and Etoy might still be fighting at absurd personal cost. Cheers, Ed Paul Singh wrote: > Regardless of what eToys' intentions were, the way I see it, this was a case > in which a billion dollar corporation (well, at least it was back then) > filed suit against a handful of artists who had the etoy.com domain way > before eToys came along. eToys had no legitimate stake to the domain... and > I don't associate legitimacy with the law... they seldom coincide. So if > this isn't a case of the bigger guy bullying the little guy, what is it? > Granted, I have a distant association with the eToy crew so my opinions will > be biased... however, even with staying to the facts and ignoring eToys' > motivations, their actions alone reek of unfairness (at best). > > Of course, this says little of what type of work environment eToys is and > the people that work there... but it does comment on the corporation and the > people running it. > > But as you said, this is definitely off-topic, and I will cease further > comment... take care.
> > - jps > > > -Original Message- > > From: Perrin Harkins [mailto:[EMAIL PROTECTED]] > > Sent: Friday, June 16, 2000 4:48 PM > > To: Paul Singh > > Cc: ModPerl Mailing List > > Subject: RE: [OT] [JOB] mod_perl and Apache developers wanted > > > > > > On Thu, 15 Jun 2000, Paul Singh wrote: > > > While that may be true (as with many publications), I hope you're not > > > denying the facts of this case > > > > The basic facts are correct: eToys received complaints from parents about > > the content their children found on the etoy.com site and, after failing > > to reach an agreement with the site's operators, filed a lawsuit involving > > trademarks which led to etoy being ordered to shut down their site by a > > judge. > > > > Slashdot's coverage ignored or underreported some aspects of the situation > > (the motivation behind the lawsuit, epxloitation of the name confusion on > > the part of etoy), and reported some conjecture and pure flights of fancy > > as fact (evil intentions, scheming lawyers). You have no idea how painful > > it is to read things like that from a source that you trust and consider > > part of your community. I guess I should have known better though: > > Slashdot is an op/ed site. If you want the news, you still have to read > > the New York Times (who had much more accurate coverage of the events). > > > > Anyway, I don't claim that eToys was right to take legal action, just that > > the reports about an evil empire were greatly exaggerated and that eToys > > is a good place to work, full of good people. Anyone who doesn't believe > > me at this point probably never will, so I'm going to stop spamming the > > list about this subject and go back to spamming about mod_perl. > > > > - Perrin > >
RE: setting LD_LIBRARY_PATH via PerlSetEnv does not work
I ran into this exact same problem this weekend using: -GNU ld 2.9.1 -DBD::Oracle 1.06 -DBI 1.14 -RH Linux 6.0 -Oracle 8i Here's another, cleaner (I think) solution to your problem: after running perl Makefile.PL, modify the resulting Makefile as follows: 1. search for the line LD_RUN_PATH= 2. replace it with LD_RUN_PATH=(my_oracle_home)/lib (my_oracle_home) is, of course, the home path to your Oracle installation. In particular, the file libclntsh.so.8.0 should exist in that directory. (If you use cpan, the build directory for DBD::Oracle should be in ~/.cpan/build/DBD-Oracle-1.06/ if you're logged in as root.) Then, just type make install, and all should go well. FYI, setting LD_RUN_PATH has the effect of hard-coding the path to (my_oracle_home)/lib in the resulting Oracle.so file generated by DBD::Oracle, so that at run-time it doesn't have to go searching through LD_LIBRARY_PATH or the default directories used by ld. The reason I think this is cleaner is because this way, the Oracle directory is not hardcoded globally into everyone's link paths, which is what ldconfig does. For more information, check out the GNU man page on ld: http://www.gnu.org/manual/ld-2.9.1/html_mono/ld.html or an essay on LD_LIBRARY_PATH: http://www.visi.com/~barr/ldpath.html cheers, Ed -Original Message- From: Stas Bekman [mailto:[EMAIL PROTECTED]] Sent: Monday, August 21, 2000 6:51 AM To: Richard Chen Cc: Yann Ramin; [EMAIL PROTECTED] Subject: Re: setting LD_LIBRARY_PATH via PerlSetEnv does not work On Mon, 21 Aug 2000, Richard Chen wrote: > It worked like a charm! If PerlSetEnv could not do it, I think > this should be documented in the guide. I could not find any mention > about ldconfig in the modperl guide. May be I missed it somehow. done. thanks for the tip!
> > The procedure on linux is very simple: > # echo $ORACLE_HOME/lib >> /etc/ld.so.conf > # ldconfig > > Thanks > > Richard > > On Sun, Aug 20, 2000 at 08:11:50PM -0700, Yann Ramin wrote: > > As far as FreeBSD goes, LD_LIBRARY_PATH is not searched for setuid > > programs (aka, Apache). This isn't a problem for CGIs since they don't > > do a setuid (and are forked off), but Apache does, and mod_perl is in > > Apache. I think thats right anyway :) > > > > You could solve this globaly by running ldconfig (I assume Linux has it, > > FreeBSD does). You'd be looking for: > > > > ldconfig -m > > > > Hope that helps. > > > > Yann > > > > Richard Chen wrote: > > > > > > This is a redhat linux 6.2 box with perl 5.005_03, Apache 1.3.12, > > > mod_perl 1.24, DBD::Oracle 1.06, DBI 1.14 and oracle 8.1.6. > > > For some odd reason, in order to use DBI, I have to set > > > LD_LIBRARY_PATH first. I don't think I needed to do this when I > > > used oracle 7. This is fine on the command line because > > > I can set it in the shell environment. For cgi scripts, > > > the problem is also solved by using apache SetEnv directive. However, > > > this trick does not work under modperl. I had tried PerlSetEnv > > > to no avail. The message is the same as if the LD_LIBRARY_PATH is not set: > > > > > > install_driver(Oracle) failed: Can't load > > > '/usr/lib/perl5/site_perl/5.005/i386-linux/auto/DBD/Oracle/Oracle.so' for module DBD::Oracle: > > > libclntsh.so.8.0: cannot open shared object file: No such file or directory at > > > /usr/lib/perl5/5.00503/i386-linux/DynaLoader.pm line 169. 
at (eval 27) line 3 Perhaps a required shared > > > library or dll isn't installed where expected at /usr/local/apache/perl/tmp.pl line 11 > > > > > > Here is the section defining LD_LIBRARY_PATH under Apache::Registry: > > > > > > PerlModule Apache::Registry > > > Alias /perl/ /usr/local/apache/perl/ > > > > > > PerlSetEnv LD_LIBRARY_PATH /u01/app/oracle/product/8.1.6/lib > > > SetHandler perl-script > > > PerlHandler Apache::Registry > > > Options ExecCGI > > > PerlSendHeader On > > > allow from all > > > > > > > > > Does anyone know why PerlSetEnv does not work in this case? > > > How come SetEnv works for cgi scripts? What is the work around? > > > > > > Thanks for any info. > > > > > > Richard > > > > -- > > > > > > Yann Ramin [EMAIL PROTECTED] > > Atrus Trivalie Productions www.redshift.com/~yramin > > Monterey High ITwww.montereyhigh.com > > ICQ 46805627 > > AIM
Re: OT: Server-push client page reload
A very impressive 95 lines o' Perl Randal! Ed "Randal L. Schwartz" wrote: > >>>>> "Michael" == Michael Nachbaur <[EMAIL PROTECTED]> writes: > > Michael> This is off-topic, but I need an answer pretty quick, and I > Michael> *am* writing this app using mod_perl, so its sorta related > Michael> (also, I don't want the headache of re-subscribing to a new > Michael> list). > > Michael> You know those online web-based tech support chat systems? > Michael> Its commonly frame based, but its just like IRC, but over > Michael> HTML. when a user posts a message it immediatly pops up on > Michael> the chat frame, and you submit your message through a > Michael> regular-ol' HTML form. I don't think this is an applet, > Michael> because this works in all sorts of browsers. I think its > Michael> javascript, but I'm not sure. My main question, is when the > Michael> server knows that a new message has been posted, how does it > Michael> push that new page out to the client web browser? I'm used > Michael> to all page-views originating from the client...not the > Michael> server. > > Michael> Any ideas? > > Server push is not universal. Client Pull is more available. > See <http://www.perlmonks.org/index.pl?node_id=31083> for an example > of a very short client-pull webchat, from an upcoming WebTechniques > column (past columns at <http://www.stonehenge.com/merlyn/WebTechniques/>). > > -- > Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 > <[EMAIL PROTECTED]> http://www.stonehenge.com/merlyn/> > Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. > See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
Re: open(FH,'|qmail-inject') fails
Greg Stark wrote: > A better plan for such systems is to have a queue in your database for > parameters for e-mails to send. Insert a record in the database and let your > web server continue processing. > > Have a separate process possibly on a separate machine or possibly on multiple > machines do selects from that queue and deliver mail. I think the fastest way > is over a single SMTP connection to the mail relay rather than forking a > process to inject the mail. > > This keeps the very variable -- even on your own systems -- mail latency > completely out of the critical path for web server requests. Which is really > the key measure that dictates the requests/s you can serve. > Exactly, Greg. This is analogous to proxying HTTP requests. Ideally, the data/text should be relayed to a separate, dedicated mail server. This has come up repeatedly for me on performance tuning projects. If a number of mail processes are negotiating with remote hosts on the same machine you are serving the web from, you may degrade performance under significant load.
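A rough sketch of the queue Greg describes, with an invented table layout: the request handler only inserts a row, and a standalone worker drains the queue over a single SMTP connection (Net::SMTP comes with Perl's libnet; the relay host name is a placeholder):

```perl
# in the web request: enqueue and return immediately
# (mail_queue and its columns are hypothetical)
$dbh->do(q{INSERT INTO mail_queue (rcpt, subject, body) VALUES (?, ?, ?)},
         undef, $rcpt, $subject, $body);

# in a separate worker process, possibly on another machine:
use Net::SMTP;

my $smtp = Net::SMTP->new('mailrelay.example.com')
    or die "cannot reach relay";

my $rows = $dbh->selectall_arrayref(
    q{SELECT id, rcpt, subject, body FROM mail_queue});
for my $row (@$rows) {
    my ($id, $rcpt, $subject, $body) = @$row;
    $smtp->mail('www@example.com');               # envelope sender
    $smtp->to($rcpt);
    $smtp->data("Subject: $subject\n\n$body");    # one message per row
    $dbh->do(q{DELETE FROM mail_queue WHERE id = ?}, undef, $id);
}
$smtp->quit;
```

The web server's request latency then depends only on one local INSERT, never on a remote MX host.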
Re: Forking in mod_perl?
Hi David, Check out the guide at http://perl.apache.org/guide/performance.html#Forking_and_Executing_Subprocess The Eagle book also covers the C API subprocess details on page 622-631. Let us know if the guide is unclear to you, so we can improve it. Ed "David E. Wheeler" wrote: > Hi All, > > Quick question - can I fork off a process in mod_perl? I've got a piece > of code that needs to do a lot of processing that's unrelated to what > shows up in the browser. So I'd like to be able to fork the processing > off and return data to the browser, letting the forked process handle > the extra processing at its leisure. Is this doable? Is forking a good > idea in a mod_perl environment? Might there be another way to do it? > > TIA for the help! > > David > > -- > David E. Wheeler > Software Engineer > Salon Internet ICQ: 15726394 > [EMAIL PROTECTED] AIM: dwTheory
Re: Forking in mod_perl?
I hope it is clear that you don't want to fork the whole server! Mod_cgi goes to great pains to effectively fork a subprocess, and was, I believe, the major impetus for the development of the C subprocess API. It (the source code for mod_cgi) is a great place to learn some of the subtleties, as the Eagle book points out. As the Eagle book says, Apache is a complex beast. Mod_perl gives you the power to use the beast to your best advantage. Now you are faced with a trade-off. Is it more expensive to detach a subprocess, or to use the child cleanup phase to do some extra processing? I'd have to know more specifics to answer that with any modicum of confidence. Cheers, Ed "David E. Wheeler" wrote: > ed phillips wrote: > > > > Hi David, > > > > Check out the guide at > > > > http://perl.apache.org/guide/performance.html#Forking_and_Executing_Subprocess > > > > The Eagle book also covers the C API subprocess details on page 622-631. > > > > Let us know if the guide is unclear to you, so we can improve it. > > Yeah, it's a bit unclear. If I understand correctly, it's suggesting > that I do a system() call and have the perl script called detach itself > from Apache, yes? I'm not too sure I like this approach. I was hoping > for something a little more integrated. And how much overhead are we > talking about getting taken up by this approach? > > Using the cleanup phase, as Geoffey Young suggests, might be a bit > nicer, but I'll have to look into how much time my processing will > likely take, hogging up an apache fork while it finishes. > > Either way, I'll have to think about various ways to handle this stuff, > since I'm writing it into a regular Perl module that will then be called > from mod_perl... > > Thanks, > > David
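The "detach a subprocess" option looks roughly like this in plain Perl; it is a sketch of the general technique, not mod_perl-specific code (the worker function is hypothetical, and under mod_perl you would also want to close or replace any file descriptors inherited from Apache):

```perl
use POSIX qw(setsid);

# fork once, detach the child, and let the parent answer the browser
defined(my $pid = fork()) or die "fork failed: $!";

if ($pid == 0) {
    # child: break away from the server's session so we don't
    # hold the client connection or Apache's terminal open
    setsid() or die "setsid failed: $!";
    open STDIN,  '<', '/dev/null';
    open STDOUT, '>', '/dev/null';
    open STDERR, '>', '/dev/null';

    long_running_work();   # hypothetical expensive processing
    exit 0;
}

# parent: continue building the response immediately
```

To avoid zombies, either reap the child with waitpid, set $SIG{CHLD} = 'IGNORE', or use the classic double-fork so init adopts the grandchild.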
Re: Apache trouble reading in large cookie contents
Explicitly echoing Gunther: don't go there! Use cookies as flyweights: think crumbs of info. Significant chunks of data need to be passed and stored in other ways. Ed Gunther Birznieks wrote: > Caveat: even if you modify apache to do larger cookies, it's possible that > there will be a set of browsers that won't support it. > > At 04:48 PM 10/20/00 -0700, ___cliff rayman___ wrote: > >i'm not an expert with this, but, a quick grep for your error in > >the apache source (mine is still 1.3.9 ) and some digging yield: > > > >./include/httpd.h:#define DEFAULT_LIMIT_REQUEST_FIELDSIZE 8190 > > > >so you're right, 8K is currently the apache limit. if you try to change > >this value in > >the source code, you will probably also have to muck with IOBUFSIZE and > >possibly other things as well. IOBUFSIZE is 8192 and the > >DEFAULT_LIMIT_REQUEST_FIELDSIZE is set to 2 bytes below that to make > >room for the extra \r\n after the last header. > > > >looks like you'll have to take responsibility for mucking with the apache > >source, or > >sending smaller cookies and using some other techniques such as HIDDEN fields. > > > > > >-- > >___cliff [EMAIL PROTECTED]http://www.genwax.com/ > > > >"Biggs, Jody" wrote: > > > > > I'm having trouble when a browser sends a fair sized amount of data to > > > Apache as cookies - say around 8k. > > > > Apache then complains (and fails the request) with > > > a message of the sort: > > > > > [date] [error] [client 1.2.3.4] request failed: error reading the headers > > > > > I assume this is due to a compile time directive to Apache specifying the > > > maximum size of a header line. > > > > > __ > Gunther Birznieks ([EMAIL PROTECTED]) > eXtropia - The Web Technology Company > http://www.extropia.com/
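A sketch of the flyweight pattern, with Apache::Session as one common server-side store (backend choice, handles, and cookie name are assumptions): the cookie carries only an opaque session ID, and the bulky data stays on the server.

```perl
use Apache::Session::MySQL;   # one of several Apache::Session backends

# tie a server-side session hash; passing undef creates a new session
tie my %session, 'Apache::Session::MySQL', $session_id_or_undef,
    { Handle => $dbh, LockHandle => $dbh };

$session{cart} = \@items;     # the big data never leaves the server

# only the small generated ID travels in the cookie
print "Set-Cookie: session=$session{_session_id}; path=/\r\n";
```

On the next request you read the cookie, pass the ID back to tie(), and get the full session data, all well under the 8K header limit discussed above.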
[Http-webtest-general] [ANNOUNCE] HTTP-WebTest-Plugin-TagAttTest-1.00
The uploaded file HTTP-WebTest-Plugin-TagAttTest-1.00.tar.gz has entered CPAN as file: $CPAN/authors/id/E/EF/EFANCHE/HTTP-WebTest-Plugin-TagAttTest-1.00.tar.gz (size: 5312 bytes, md5: 940013aada679fdc09757f119d70686e)

NAME

HTTP::WebTest::Plugin::TagAttTest - WebTest plugin providing a higher-level tag and attribute search interface.

DESCRIPTION

See also http://search.cpan.org/search?query=HTTP%3A%3AWebTest&mode=all

This module is a plugin extending the functionality of the WebTest module to allow tests of the form:

my $webpage = 'http://www.ethercube.net';
my @result;

@result = (@result, { test_name => "title junk", url => $webpage, tag_forbid => [{ tag => "title", tag_text => "junk" }] });
@result = (@result, { test_name => "title test page", url => $webpage, tag_require => [{ tag => "title", text => "test page" }] });
@result = (@result, { test_name => "type att with xml in value", url => $webpage, tag_forbid => [{ attr => "type", attr_text => "xml" }] });
@result = (@result, { test_name => "type class with body in value", url => $webpage, tag_require => [{ attr => "class", attr_text => "body" }] });
@result = (@result, { test_name => "class att", url => $webpage, tag_require => [{ attr => "class" }] });
@result = (@result, { test_name => "script tag", url => $webpage, tag_forbid => [{ tag => "script" }] });
@result = (@result, { test_name => "script tag with attribute language=javascript", url => $webpage, tag_forbid => [{ tag => "script", attr => "language", attr_text => "javascript" }] });

my $tests = \@result;
my $params = { plugins => ["::FileRequest", "HTTP::WebTest::Plugin::TagAttTest"] };
my $webtest = HTTP::WebTest->new;
check_webtest(webtest => $webtest, tests => $tests, opts => $params, check_file => 't/test.out/1.out');
# $webtest->run_tests($tests, $params);

Ed Fancher
Ethercube Solutions
http://www.ethercube.net
PHP, Perl, MySQL, JavaScript solutions.
Re: Working directory of script is "/" !
On Wed, 30 Jul 2003, Stas Bekman wrote: > Perrin Harkins wrote: >> On Tue, 2003-07-29 at 07:23, Stas Bekman wrote: >> >>>That's correct. This is because $r->chdir_file in compat doesn't do >>>anything. The reason is that under threaded mpm, chdir() affects all >>>threads. Of course we could check whether the mpm is prefork and do >>>things the old way, but that means that the same code won't work the >>>same under threaded and non-threaded mpms. Hence the limbo. Still >>>waiting for Arthur to finish porting safecwd package, which should >>>resolve this problem. >> >> When he does finish it, won't we make the threaded MPM work just like >> this? It seems like it would be reasonable to get prefork working >> properly, even if the threaded MPM isn't ready yet. > > It's a tricky thing. If we do have a complete implementation then it's > cool. If not then we have a problem with people testing their code on > prefork mpm and then users getting the code malfunctioning on the > threaded mpms. > > I think we could have a temporary subclass of the registry (e.g.: > ModPerl::RegistryPrefork) which will be removed once the issue is > resolved. At least it'll remind the developers that their code won't > work on the threaded mpm setups. However if they make their code > working without relying on chdir then they can use Modperl::Registry > and the code will work everywhere. What's wrong with having the chdir code check for the threaded mpm, and, if it detects it, generate a warning that describes the situation? Admittedly, I have a difficult time understanding someone who tests under one mpm, and then releases under another mpm without testing. I realize there are people who do this sort of thing; I'm merely stating that I have difficulty understanding them. Ed
RE: sybase / mod_perl / linux question? [OFFTOPIC]
Interesting. I tested an identical setup of Apache/modperl/Embperl/Oracle on NT and Linux, and I experienced a huge slowdown on NT. When I looked into it, I found that the more database-intensive the page, the slower the relative performance of the NT platform. I took that to mean that it was actually Oracle on NT that was the root of the problem, but wasn't sure. I guess now I know. -Ed >Doesn't seem to have hindered it in any way for me. Linus has > discussed raw > partitions several times - the upshot being that they tend to be no faster > than using e2fs because e2fs is very fast anyway, and you'd simply have to > implement a lot of the e2fs functions inside of your database anyway. > > Anyhow - I don't know about all the internal workings of these things - > just that it works and is damn quick for me. > > BTW: Someone benchmarked Oracle on Linux vs Oracle on NT (using TPC code) > and found the Linux version to be about 4-7 (depending on the test) times > faster. So I guess that's a finger in the eye to raw partitions (I think > the NT version can use raw partitions - correct me if I'm wrong). > > -- > > > Details: FastNet Software Ltd - XML, Perl, Databases. > Tagline: High Performance Web Solutions > Web Sites: http://come.to/fastnet http://sergeant.org > Available for Consultancy, Contracts and Training. >
Re: Apache::DBI & MySQL
Even with MySQL, persistent connections are a performance boost, although you need to have a reason to use them. That is, only open a persistent connection when you have determined that you will use that connection enough to make a difference. Other connections may be better left one-time, as Ken said. Ed
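The usual way to get those persistent connections is Apache::DBI; the DSN and credentials below are placeholders. Load it before DBI in startup.pl and, optionally, open the handle when each child starts so the first request doesn't pay the connect cost:

```perl
# startup.pl -- Apache::DBI must be loaded before DBI
use Apache::DBI ();
use DBI ();

# optional: connect as each Apache child is spawned
Apache::DBI->connect_on_init(
    "dbi:mysql:database=myapp;host=dbhost",   # placeholder DSN
    "dbuser", "dbpass",                        # placeholder credentials
    { RaiseError => 1, AutoCommit => 1 },
);
```

Application code keeps calling DBI->connect() as usual; Apache::DBI overrides it and hands back the cached per-child handle instead of opening a new connection.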
weirdest bug i have seen in a long time: EmbPerl 1.2b10
First, I'd like to mention that I think EmbPerl is the greatest thing since sliced bread, and I'd like to thank Gerald for his time and effort. But here's an exceptionally strange bug that will cause the embperl/apache process to seg fault: ---BEGIN SCRIPT [- $hello %n; -] END SCRIPT- In fact, anything of the form: .+ %n.* will seg fault, but I have found no other combination (.* %t.*, etc.) that will seg fault. It's not a critical bug-- I don't know of any well-formed Perl statement that looks like that (I found it as a bug in my program)-- but it's a curious one. If anyone has an explanation, I'd be very interested in hearing the answer. My setup: Apache/1.3.9 (Unix) DAV/0.9.11 mod_perl/1.21 mod_ssl/2.4.2 OpenSSL/0.9.4 HTML::Embperl 1.2b10 Perl 5.00404 Linux kernel 2.0.36 (Red Hat 5.2) cheers, Ed
load/regression test builders, monitoring tools for mod_perl apps
Does anyone know of any good open source test builders for regression/performance-testing a mod_perl app? This is the essence of what I would want such a suite to do: RECORD: -set up a proxy server to forward HTTP requests to a mod_perl'd server. -capture all GET/POST requests from the client and log them to a file, along with the server's output. The server's output would be the 'master' copy. PLAYBACK (REGRESSION): -play back the GET/POST requests and capture the output. Compare the output against the master copy. Raise an error in the log file if the two differ. PLAYBACK (LOAD): -play back the GET/POST requests according to some load scheme to see how well the application holds up under load. If this doesn't exist, I think it would be easy enough to write using LWP; I just don't want to duplicate anyone's efforts. I'd also be interested to know if anyone knows of any good webserver monitoring programs that could automatically kill spinning httpds, short of a cron job. FYI-- I have encountered mystery spinning httpds as well, but I have always been able to pin it down on bad/risky code or thrashing. At any rate, I still need to be able to kill spinning httpds should it come to that. cheers, Ed
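If nothing turns up, the playback half really is only a few lines of LWP. A sketch of the regression pass, where the log format ("METHOD URL [body]", one request per line) and the master-copy helper are invented for illustration:

```perl
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;

open my $log, '<', 'requests.log' or die "requests.log: $!";
while (my $line = <$log>) {
    chomp $line;
    my ($method, $url, $body) = split ' ', $line, 3;

    # replay the recorded request
    my $res = $method eq 'POST'
        ? $ua->post($url, Content => $body)
        : $ua->get($url);

    # compare against the recorded master output
    my $master = slurp_master_copy($url);   # hypothetical helper
    warn "REGRESSION: $url differs from master\n"
        if $res->content ne $master;
}
```

The load variant is the same loop run from several forked clients with a sleep schedule between requests.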
Re: Server Stats
>this is like closing the gate after the horse has bolted without things >like decent locking and transactions. Although perhaps I'm mistaken and You can rest assured that they know what they are doing. :-) It is also worth upgrading to newer versions. The newest versions, not yet deemed stable, no longer use ISAM, are much faster, and will allow for a host of new features. Stay tuned. ed
Re: Spreading the load across multiple servers (was: Server Stats)
>I don't have any real answers - just a suggestion. What is wrong with the >classic RDBMS architecture of RAID 1 on multiple drives with MySQL - surely >it will be able to do that transparently? Yes, RAID is very helpful with MySQL. I spoke with Monty, the developer of MySQL, at the open source conference in Monterey, and he said that they are currently working on replication and mirroring features. It might be worth inquiring directly with them. Ed
Re: DBI
This is also not a mod_perl question. Depending on where your DBD::Oracle is installed, you can get away with certain liberties in the Oracle library department. Nonetheless, you should continue your inquiry on a DBI-related list. Thank you, Ed
RE: pool of DB connections ?
Oleg-- I don't know that this will help, but you could try using DBD::Proxy. The setup would theoretically be as follows: each apache process uses DBD::Proxy as the client; each creates a network connection to DBI::ProxyServer, which creates a few persistent connections to the db server using the connect_cached method. I have not tried this myself, but reference is made to it at http://www.crystaltech.com/perldocs/lib/site/DBD/Proxy.html For more help, you might try the DBI mailing list at http://www.symbolstone.org/technology/perl/DBI/ good luck -Ed

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Oleg Bartunov
> Sent: Monday, November 29, 1999 12:00 PM
> To: Leslie Mikesell
> Cc: [EMAIL PROTECTED]
> Subject: Re: pool of DB connections ?
>
> On Mon, 29 Nov 1999, Leslie Mikesell wrote:
>
> > Date: Mon, 29 Nov 1999 09:59:38 -0600 (CST)
> > From: Leslie Mikesell <[EMAIL PROTECTED]>
> > To: Oleg Bartunov <[EMAIL PROTECTED]>
> > Cc: [EMAIL PROTECTED]
> > Subject: Re: pool of DB connections ?
> >
> > According to Oleg Bartunov:
> >
> > > I'm using mod_perl, DBI, ApacheDBI and was quite happy
> > > with persistent connections httpd<->postgres until I used
> > > just one database. Currently I have 20 apache servers which
> > > handle 20 connections to database. If I want to work with
> > > another database I have to create another 20 connections
> > > with DB. Postgres is not a multithreading
> > > DB, so I will have 40 postgres backends. This is too much.
> > > Any experience ?
> >
> > Try the common trick of using a lightweight non-mod_perl apache
> > as a front end, proxying the program requests to a mod_perl
> > backend on another port. If your programs live under directory
> > boundaries you can use ProxyPass directives. If they don't you
> > can use RewriteRules with the [p] flag to selectively proxy
> > (or [L] to not proxy). This will probably allow you to cut
> > the mod_perl httpd's at least in half.
If you still have a > > problem you could run two back end httpds on different ports > > with the front end proxying the requests that need each database > > to separate backends. Or you can throw hardware at the problem > > and move the database to a separate machine with enough memory > > to handle the connections. > > I didn't write all details but of course I already have 2 servers setup. > > Regards, > > Oleg > > > > > Les Mikesell > >[EMAIL PROTECTED] > > > > _ > Oleg Bartunov, sci.researcher, hostmaster of AstroNet, > Sternberg Astronomical Institute, Moscow University (Russia) > Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/ > phone: +007(095)939-16-83, +007(095)939-23-83 >
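To make the DBD::Proxy arrangement above concrete, a minimal client-side sketch might look like this. The hostname, port, and inner DSN are placeholders, and DBI is loaded lazily so the file stands alone; the proxy server itself runs separately (the DBI distribution ships a dbiproxy script) and can use connect_cached so that many clients share a few real handles.

```perl
#!/usr/bin/perl
# Hypothetical sketch: each Apache child connects to a DBI::ProxyServer
# instead of the real database. Hostname, port, and inner DSN are invented.
use strict;
use warnings;

# The proxy DSN wraps the real DSN; DBD::Proxy forwards calls over the net.
my $proxy_dsn = 'dbi:Proxy:hostname=dbhost;port=3334;dsn=dbi:mysql:mydb';

sub get_dbh {
    require DBI;   # loaded lazily so this file stands alone
    return DBI->connect($proxy_dsn, 'user', 'password',
                        { RaiseError => 1, AutoCommit => 1 });
}

print "client DSN: $proxy_dsn\n";
```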
Embperl problem (newbie question?)
Just trying out Embperl, and I discovered that (in my test of dynamic tables) the $maxrow and $maxcol variables are being set to defaults of 100 and 10 respectively and then obeyed, even though the $tabmode variable is set to 17. According to the documentation, these variables should only be obeyed when tabmode contains bits in the 64 and 4 positions. My test called for 209 rows and 11 columns, and I was flummoxed for a bit until I started playing around with these variables. Am I missing something, or is it just a documentation inconsistency? Ed Greenberg
ApacheDBI vs DBI for TicketMaster
My apache children are seg faulting due to some combination of DBI usage and the cookie-based authentication/authorization scheme from chapter 6 in the "eagle" book by L. Stein and D. MacEachern. I have read the ApacheDBI documentation, scoured mailing lists, deja.com, etc., but I'm hoping someone might be able to illuminate what is going on here... The authentication/authorization process is working smoothly. Just after a ticketed client is revalidated (i.e., just after leaving Apache::TicketAccess with success), however, the apache child seg faults. If I comment out all DBI references in the Apache::Ticket* modules used for password querying, and just hard-code a guaranteed password match, the seg fault disappears. Without any authentication attempts at all, DBI works smoothly. The db connection is done within each child upon the first db query (NOT in the main process). In looking over ApacheDBI, it's not clear it is doing anything significantly different in terms of the timing of connections among children. Any clues? Am I missing something special that ApacheDBI does that would prevent this problem? Cheers, Ed Loehr
Re: ApacheDBI vs DBI for TicketMaster
Randy Harmon wrote: > On Sun, Jan 02, 2000 at 01:48:58AM -0600, Ed Loehr wrote: > > My apache children are seg faulting due to some combination of > > DBI usage and the cookie-based authentication/authorization > [...] > > child seg faults. If I comment out all DBI references in the > > Hm, are you connecting to your database prior to Apache's forking - i.e., in > your startup.pl? That could cause trouble, especially on older versions of > Apache and Apache::DBI. No. I'm only connecting in the child servers. I verified that via PIDs in the log file with debugging prints, and logically as well since the DBI::connect() command is only executed during a query reached via a handler... BTW, this is all on apache 1.3.9 with mod_ssl 2.4.9 and mod_perl 1.21 on Redhat 6.1 (2.2.12-20smp)... Anyone else have any suspicions/ideas/leads? Cheers, Ed Loehr
Re: ApacheDBI vs DBI for TicketMaster
Edmund Mergl wrote: > > > On Sun, Jan 02, 2000 at 01:48:58AM -0600, Ed Loehr wrote: > > > > My apache children are seg faulting due to some combination of > > > > DBI usage and the cookie-based authentication/authorization > > > [...] > > > > child seg faults. If I comment out all DBI references in the > > > > > > Hm, are you connecting to your database prior to Apache's forking > > > > No. BTW, this is all on apache 1.3.9 with mod_ssl 2.4.9 and mod_perl 1.21 on > Redhat 6.1 (2.2.12-20smp)... > > do you use rpm's or did you compile everything by yourself ? Compiled everything myself. Oh, and I am also using DBD::Pg 0.92... Does that suggest anything to anyone? Cheers, Ed Loehr
Re: Comparing arrays
Really, Dheeraj, this is not a mod_perl-specific question, and I don't know the all-important context into which this boilerplate code you are seeking to elicit from the list is to be dropped. Here is a boilerplate "find me keys that are not in both hashes": foreach (keys %hash_one) { push(@here_not_there, $_) unless exists $hash_two{$_}; } Shame on you. To expiate your sins, read perldoc pages for two hours every day for two weeks. ed
Re: Comparing arrays
Cliff, I wanted him to work for the rest of it, or at least go to another list. It looks like he wanted two arrays, @in_hash_one_alone and @in_hash_two_alone, so having him push to one array may confuse him. He's better off doing a little studying, methinks. ed
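For the record, a sketch of the fuller two-array version (the names are mine, not Dheeraj's): build a lookup hash for each list, then collect the elements unique to each side.

```perl
#!/usr/bin/perl
# Symmetric difference of two lists via hash lookups, as discussed above.
use strict;
use warnings;

sub diff_lists {
    my ($one, $two) = @_;
    my (%in_one, %in_two);
    @in_one{@$one} = ();   # hash slice: mark everything in list one
    @in_two{@$two} = ();
    my @in_one_alone = grep { !exists $in_two{$_} } @$one;
    my @in_two_alone = grep { !exists $in_one{$_} } @$two;
    return (\@in_one_alone, \@in_two_alone);
}

my ($a_only, $b_only) = diff_lists([qw(a b c)], [qw(b c d)]);
print "@$a_only | @$b_only\n";   # a | d
```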
Re: mysql.pm on Apache/mod_perl/perl win98
Hi Dave, I only do *nix, but I think that you should not need mysql.pm if you are using DBI/DBD. Jochen is quite helpful on the MySQL modules list; subscription info available at www.mysql.com. Good Luck, Ed
Re: modperl success story
>The troll vanisheth! ha! Reminds me of the Zen story of an old fisherman in a boat on a lake in a heavy, can't-see-your-hands fog. He bumps into another boat, and shouts at the other guy, "Look where you're going, would you! You almost knocked me over." He pulls up beside the boat and is about to give the other guy a piece of his mind, but when he looks in the other boat, he discovers that no one else is there. Flame trolls on mailing lists are virtual empty boats, whose only value is the sometimes humorous apoplexy elicited in the old sea salts on the list. Ed
Re: APACHE_ROOT
Ged, You are very entertaining. The code in question is also known as a combined copy and substitution. >Beware if you haven't got /src on the end of your source directory! If you don't have a match with the string or regexp, you'll just get a straight copy. Ed

Date: Sat, 15 Jan 2000 00:00:37 +0000 (GMT)
From: "G.W. Haywood" <[EMAIL PROTECTED]>

Hi there, On 14 Jan 2000, William P. McGonigle wrote: > Can someone explain what APACHE_ROOT is meant to be? I'm assuming > it's somehow different than APACHE_SRC (which I'm defining). The expression ($APACHE_ROOT = $APACHE_SRC) =~ s,/src/?$,,; sets the scalar $APACHE_ROOT to be equal to the scalar $APACHE_SRC and then chops off any "/src" or "/src/" from the end of it. The =~ binding operator (p27) tells perl to do the substitution s,/src/?$,, to the thing on the left-hand side of its expression. The parentheses (p77) mean the thing in them is a term, which has the highest precedence in perl, so the assignment has to be done first. The substitution then has to be done on the result, $APACHE_ROOT, and not $APACHE_SRC, er, obviously. The three commas are quotes (p41) for a substitution, presumably chosen because they can't easily appear in a filename. The pattern to match is /src/?$ The question mark is a quantifier (p63); it says we can have 0 or 1 trailing slash in the pattern we match - it's trailing at the end of a string because of the $ (p62). If our string matches, the matching bit is replaced with the bit between the second and third commas. There's nothing between the second and third commas, so it's replaced with nothing. Have a look at pages 72 to 74 especially for more about the s/// construct. The page numbers are from the Camel Book, second edition. I keep it on my desk at all times; it stops my papers blowing around.
You will help yourself a lot with these things if you read chapters one and two five or six times this year, as a kind of penance. So if $APACHE_SRC eq "/usr/local/apache/src/" or $APACHE_SRC eq "/usr/local/apache/src" then $APACHE_ROOT eq "/usr/local/apache" after the substitution. I just *love* Perl's pattern matching! 73, Ged.
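Ged's walk-through can be checked directly; here is the combined copy-and-substitution run on both of his example values:

```perl
#!/usr/bin/perl
# The copy-and-substitution discussed above: assign first (the parentheses
# make the assignment a term), then strip any trailing "/src" or "/src/"
# from the copy, leaving $APACHE_SRC itself untouched.
use strict;
use warnings;

for my $APACHE_SRC ('/usr/local/apache/src/', '/usr/local/apache/src') {
    (my $APACHE_ROOT = $APACHE_SRC) =~ s,/src/?$,,;
    print "$APACHE_SRC -> $APACHE_ROOT\n";
}
```

Both iterations print /usr/local/apache as the root, and a string with no trailing /src would come through as a straight copy.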
Re: splitting mod_perl and sql over machines
> > Most of my requests are served within 0.05-0.2 secs, but I'm afraid that > > adding a network (even a very fast one) to deliver mysql results will > > make the response time go much higher, so I'll need more httpd processes > > and I'll get back to the original situation where I don't have enough > > resources. Hints? > The network just has to match the load. If you go to a switched 100M > net you won't add much delay. You'll want to run persistent DBI > connections, of course, and do all you can with front-end proxies > to keep the number of working mod_perl's as low as possible. Yes. Even under low load, a 100M-connected separate db server will benchmark about the same as the intuitively "faster" unix socket connection (for MySQL) on the same box, for the reasons mentioned above. Special care can be taken to tune MySQL for the size and type of requests as well, increasing performance. See the MySQL docs. I asked a question earlier about a client-only (separate db) setup of DBD::Oracle and just which of the Oracle libraries are critical and which can be skipped. If you check the CLIENTS README you'll see some contradictory advice. I'm curious about that because I'd love to minimize the use of libraries on Oracle setups. This is not a mod_perl-specific question, so offlist replies would be great if any would be so kind. ed
Re: oracle : The lowdown
For those of you tired of this thread, please excuse me, but here is MySQL's current position statement on and discussion about transactions: Disclaimer: I just helped Monty write this, partly in response to some of the fruitful (to me) discussion on this list. I know this is not crucial to mod_perl, but I find the "wise men who are enquirers into many things" to be one of the great things about this list, to paraphrase old Heraclitus. I learn quite a bit about quite many things by following leads and hints here, as well as by seeing others' problems. I'd love to see your criticism of the below, either here or off the list. Ed - The question is often asked, by the curious and the critical, "Why is MySQL not a transactional database?" or "Why does MySQL not support transactions?" MySQL has made a conscious decision to support another paradigm for data integrity, "atomic operations." It is our thinking and experience that atomic operations offer equal or even better integrity with much better performance. We, nonetheless, appreciate and understand the transactional database paradigm and plan, in the next few releases, on introducing transaction-safe tables on a per-table basis. We will be giving our users the possibility to decide if they need the speed of atomic operations or if they need to use transactional features in their applications. How does one use the features of MySQL to maintain rigorous integrity, and how do these features compare with the transactional paradigm? First, in the transactional paradigm, if your applications are written in a way that is dependent on the calling of "rollback" instead of "commit" in critical situations, then transactions are more convenient. Moreover, transactions ensure that unfinished updates or corrupting activities are not committed to the database; the server is given the opportunity to do an automatic rollback and your database is saved.
MySQL, in almost all cases, allows you to solve potential problems by including simple checks before updates and by running simple scripts that check the databases for inconsistencies and automatically repair or warn if such occurs. Note that just by using the MySQL log, or even adding one extra log, one can normally fix tables perfectly with no data integrity loss. Moreover, "fatal" transactional updates can be rewritten to be atomic. In fact, we will go so far as to say that all integrity problems that transactions solve can be handled with LOCK TABLES or atomic updates, ensuring that you will never get an automatic abort from the database, which is a common problem with transactional databases. Not even transactions can prevent all loss if the server goes down. In such cases even a transactional system can lose data. The difference between systems lies in just how small the time-lag is during which they could lose data. No system is 100% secure, only "secure enough". Even Oracle, reputed to be the safest of transactional databases, is reported to sometimes lose data in such situations. To be safe with MySQL, you only need to have backups and have the update logging turned on. With this you can recover from any situation that you could with any transactional database. It is, of course, always good to have backups, independent of which database you use. The transactional paradigm has its benefits and its drawbacks. Many users and application developers depend on the ease with which they can code around problems where an "abort" appears or is necessary, and they may have to do a little more work with MySQL to either think differently or write more. If you are new to the atomic operations paradigm, or more familiar or more comfortable with transactions, do not jump to the conclusion that MySQL has not addressed these issues. Reliability and integrity are foremost in our minds.
Recent estimates are that there are more than 1,000,000 mysqld servers currently running, many of which are in production environments. We hear very, very seldom from our users that they have lost any data, and in almost all of those cases user error was involved. This is, in our opinion, the best proof of MySQL's stability and reliability. Lastly, in situations where integrity is of the highest importance, MySQL's current features allow for transaction-level or better reliability and integrity. If you lock tables with LOCK TABLES, all updates will stall until any integrity checks are made. If you only take a read lock (as opposed to a write lock), then reads and inserts are still allowed to happen. The newly inserted records will not be seen by any of the clients that have a READ lock until they release their read locks. With INSERT DELAYED you can queue inserts into a local queue until the locks are released, without making the client wait for the insert to complete. Atomic
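As an illustration of the LOCK TABLES style of atomic update described above, here is a hedged sketch. The accounts table, its columns, and the transfer logic are invented for the example; $dbh is assumed to be a DBI handle connected to MySQL.

```perl
#!/usr/bin/perl
# Hypothetical sketch: integrity via LOCK TABLES rather than transactions.
# The write lock makes the check-then-update pair atomic with respect to
# other clients; UNLOCK TABLES releases it even when the check fails.
use strict;
use warnings;

sub transfer {
    my ($dbh, $from, $to, $amount) = @_;
    $dbh->do('LOCK TABLES accounts WRITE');
    my ($bal) = $dbh->selectrow_array(
        'SELECT balance FROM accounts WHERE id = ?', undef, $from);
    my $ok = defined $bal && $bal >= $amount;
    if ($ok) {
        $dbh->do('UPDATE accounts SET balance = balance - ? WHERE id = ?',
                 undef, $amount, $from);
        $dbh->do('UPDATE accounts SET balance = balance + ? WHERE id = ?',
                 undef, $amount, $to);
    }
    $dbh->do('UNLOCK TABLES');
    return $ok;
}
```

The point is that the balance check and the two updates happen under one lock, so no other client can slip in between the check and the change.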
Can't upgrade that kind of scalar (and more)
I've scoured deja.com, FAQs, modperl list archives at forum.swarthmore.edu, 'perldoc mod_perl_traps', and experimented ad nauseam for 4 days now... this modperl newbie is missing something important... Lasting gratitude and a check in the mail for dinner on me to any of you who can offer any tips/help which unlock this riddle for me... Cheers, Ed Loehr

SYMPTOMS...
Spurious errors in my error_log with increasingly nasty consequences:
- Can't upgrade that kind of scalar at XXX line NN...
- Not a CODE reference at XXX line NN...
- Modification of a read-only value attempted at XXX line NN...
- Attempt to free unreferenced scalar.
- Attempt to free unreferenced scalar during global destruction.
- Attempt to free unreferenced scalar at XXX line NN...
Once upon a time, the server was fully functional even with these occasional error messages (which is why I ignored them originally). Now, they are frequent showstoppers, causing requests to fail altogether with 500 errors and occasional segfaults... I'm lumping these together because I suspect they are all related. In any case, the severest of these at present seems to be the "Can't upgrade that kind of scalar at XXX line NN..." message, which causes the request to fail and seems to foul up that child for the rest of its life.

FAILED REMEDIES...
- Turned off PerlFreshRestart
- Got rid of '$| = 1;'
- Got rid of #!/usr/bin/perl -w (!!)
- Checked 'use diagnostics' output
- Got rid of string regex optimization flags ( $key =~ m/^xyz/o )
- Replaced use of 'apachectl restart' with stop-sleep3-startssl
- use Carp (); local $SIG{__WARN__} = \&Carp::cluck;
- Changed all instances of global 'my $var = 0' to 'use vars qw($var); $var = 0;'
- Commented out Apache::Registry
(Most of these are just suggestions I found during my hunt...)

CONFIGURATION... (detailed config dumps below)
- mod_perl 1.21 (*NOT* Apache::StatINC)
- mod_ssl 2.4.9
- openssl 0.9.4
- perl 5.005_03
- DBI 1.13 (*NOT* using Apache::DBI)
- DBD::Pg 0.92
- Apache 1.3.9 (*VERY* lightly loaded)
- Linux 2.2.12-20smp (RH 6.1), 1Gb RAM, RAID 5 (*lots* of free mem)
- Dual PII 450 cpus
- Using modified TicketMaster scheme from Eagle book

OBSERVATIONS...
I'm convinced the code referenced by the error msgs (XXX line NN) is almost random; typically code that's worked flawlessly before (sometimes my code, often not). I suspect the line numbers in the error msg may be screwed up. #line did not clarify things. If I rearrange my code, I have been able to make the error "move" to another module (e.g., from Exporter.pm to CGI.pm). Smell like a stack corruption problem? Currently, the first unsuccessful statement is:

# ($r is the usual apache request object)
return ($retval,$msg) unless $ticket{'ip'} eq $r->connection->remote_ip;

In -X mode, once the server process hits one of these, it can no longer serve any modperl-generated page without a 500 error, occasionally segfaulting in the process. This also happens on both the production and development servers in slightly different manifestations (with slightly different httpd.conf files). Other change factors that may or may not be related: new firewall rules, increased number of open file handles (echo 8192 > /proc/sys/file-max), increased load, RAM upgrade, numerous modperl app src code changes, added more use of Time::HiRes to other modules, new SSL certificate, and more... Finally, totally commenting out my incarnation of the TicketMaster scheme from the Eagle book (cookie-based passworded sessions) *seems* to remove the problem, but it's a moving target so I'm not sure of that yet. Have been unable to determine what it might be within TicketMaster that is causing the problem.

NEXT STEPS...
- Try removing Logger.pm from Apache::Ticket*
- Whittle down until a minimal set produces the error? Autoload troubles?
- Find/try MacEachern's Apache::Leak? Hunting XS errors?
- Apache::Vmonitor?
- Relying on $_ in foreach loops?
- SSL certificate differences?
- Dreaded Last Step: set up debugger and chase ...

# /usr/local/apache_ssl/bin/httpd -l
Compiled-in modules:
  http_core.c
  mod_env.c
  mod_log_config.c
  mod_mime.c
  mod_negotiation.c
  mod_status.c
  mod_include.c
  mod_autoindex.c
  mod_dir.c
  mod_cgi.c
  mod_asis.c
  mod_imap.c
  mod_actions.c
  mod_userdir.c
  mod_alias.c
  mod_access.c
  mod_auth.c
  mod_setenvif.c
  mod_ssl.c
  mod_perl.c

# /usr/local/apache_ssl/bin/httpd -V
Server version: Apache/1.3.9 (Unix)
Server built: Dec 9 1999 11:40
does ssl encrypt basic auth?
Is a basic authentication password, entered via a connection to an https/SSL server, encrypted or plain text across the wire?
Re: does ssl encrypt basic auth?
[EMAIL PROTECTED] wrote: > > Ed Loehr wrote: > > > > Is a basic authentication password, entered via a connection to an > > https/SSL server, encrypted or plain text across the wire? > > > Encrypted - but that question really doesn't belong here. > It has nothing to do with modperl. Yes, some of your fellow off-topic police have already served notice privately. My unstated context was that mod_perl authentication was giving me fits, and in my effort to find an alternative, I (gasp) posted off-topic. I'm just glad you're watching. :(
Building Apache/modperl for SCO OS 5.05
Has anyone had any luck building Apache on SCO Open Server 5 with mod_perl? We have been unsuccessful, and are hoping to find a solution. r/ ed
$r->print delay?
Any ideas on why this output statement takes 15-20 seconds to send a 120kb page to a browser on the same host?

sub send_it {
    my ($r, $data) = @_;
    $| = 1;   # Don't buffer anything...send it asap...
    $r->print( $data );
}

modperl 1.21, apache/modssl 1.3.9-2.4.9...lightly loaded Linux (RH6.1) Dual PIII 450Mhz with local netscape 4.7 client...
Re: $r->print delay?
Ken Williams wrote: > > Are you sure it's waiting? You might try debug timestamps before & after the > $r->print(). You might also be interested in the send_fd() method if the data > are in a file. Fairly certain it's waiting there. I cut my debug timestamps out for ease on your eyes in my earlier post, but here's one output (of many like it) when I had the print sandwiched... Thu Feb 10 14:41:59.053 2000 [v1.3.7.1 2227:1 ed:1] INFO : Sending 120453 bytes to client... Thu Feb 10 14:42:14.463 2000 [v1.3.7.1 2227:1 ed:1] INFO : Send of 120453 bytes completed. Re send_fd(), it's all dynamically generated data, so that's not an option... Other clues? > [EMAIL PROTECTED] (Ed Loehr) wrote: > >Any ideas on why would this output statement takes 15-20 seconds to > >send a 120kb page to a browser on the same host? > > > >$| = 1; # Don't buffer anything...send it asap... > >$r->print( $data ); > > > >modperl 1.21, apache/modssl 1.3.9-2.4.9...lightly loaded Linux (RH6.1) > >Dual PIII 450Mhz with local netscape 4.7 client...
Re: $r->print delay?
Greg Stark wrote: > > Ed Loehr <[EMAIL PROTECTED]> writes: > > > > [EMAIL PROTECTED] (Ed Loehr) wrote: > > > >Any ideas on why would this output statement takes 15-20 seconds to > > > >send a 120kb page to a browser on the same host? > > > > > > > >$| = 1; # Don't buffer anything...send it asap... > > > >$r->print( $data ); > > You don't say how many lines your 120kb of data is. If it's about 40 > characters per line average that's 30k calls to write (because you turned > buffering off). Calls to write ought to be around 100us but if they were slow > for some reason and took 500us then that would explain the 15s. A number of the lines are 5-6K each. Context is a rather large html-based spreadsheet-like page. > Additionally you're probably filling the kernel buffer and forcing a context > switch several times. If the buffer is about 4k then there would be 3k context > switches each about 10ms on linux, or about 30s of latency. There was a debate > over the usefulness of the ProxyReceiveBuffer parameter, and even that isn't > the same as this buffer, but it ought to be possible for Apache to setsockopt > to ask for a larger buffer. > > In short, don't set $|=1, it will only slow down this process, and investigate > if there's any way to increase the size of the kernel buffer for the socket in > Apache. Lots of people would be interested to know if you turn up anything > there. With buffering on or off, and max "segment" size of 900 bytes in any one call to print() (or $r->print()), but doing 120kb worth of printing in rapid succession, the smaller-but-still-large delay shows up typically around 100kb into it. The only way I was able to make the large delays disappear so far is to sleep 1 every 50 prints or so...but that's not really disappearing, now is it... Experimentally, it appears this is simply too much data too quickly. I think I've seen $|=1 make a big difference with other non-local NT/MSIE clients, even on these 120kb pages. 
I've consistently been able to reproduce this problem in both buffered and non-buffered mode with linux netscape 4.7, both locally and remotely. I'll look into the kernel buffer issue... :( Cheers, Ed Loehr
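One workaround consistent with the findings above is to keep buffering on and send the page in a few large writes, flushing between them, rather than thousands of tiny unbuffered ones. A hedged sketch (the 8k chunk size is a guess to tune; $r->rflush is the mod_perl 1.x flush method):

```perl
#!/usr/bin/perl
# Send a large dynamically generated page in fixed-size chunks instead of
# many small prints, flushing each chunk to the client as it goes.
use strict;
use warnings;

sub send_in_chunks {
    my ($r, $data, $chunk) = @_;
    $chunk ||= 8192;                        # assumption: tune per workload
    for (my $off = 0; $off < length $data; $off += $chunk) {
        $r->print(substr $data, $off, $chunk);
        $r->rflush;                         # push this chunk out now
    }
}
```

Whether larger kernel socket buffers make this unnecessary is exactly the open question in the thread.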
Can't upgrade that kind of scalar
Aside from gdb, any fishing tips on how to track this fatal problem down? Can't upgrade that kind of scalar at XXX line NN... Happens intermittently, often on a call to one of these (maybe the first access of $r?): $r->server->server_hostname() $r->connection->remote_ip() I've tried turning off PerlFreshRestart, have _totally_ clean output from 'use diagnostics', reviewed The Guide, 'perldoc perldiag', FAQ, deja.com, swarthmore, removed /o, used Carp::cluck, handled global vars with 'use vars qw(...)'... Config: apache 1.3.9, mod_perl 1.21, mod_ssl 2.4.9, openssl 0.9.4, perl 5.005_03, DBI 1.13 (no Apache::DBI), DBD::Pg 0.92, Linux 2.2.12-20smp (RH 6.1)...
Re: $r->print delay?
"G.W. Haywood" wrote: > > On Thu, 10 Feb 2000, Ed Loehr wrote: > > > Fairly certain it's waiting there. I cut my debug timestamps out for > > ease on your eyes in my earlier post, but here's one output (of many > > like it) when I had the print sandwiched... > > > > Thu Feb 10 14:41:59.053 2000 [v1.3.7.1 2227:1 ed:1] INFO : Sending > > 120453 bytes to client... > > Thu Feb 10 14:42:14.463 2000 [v1.3.7.1 2227:1 ed:1] INFO : Send of > > 120453 bytes completed. > > > > Re send_fd(), it's all dynamically generated data, so that's not an > > option... > > So write a file...? Duh. In any event, send_fd() doesn't help. Thanks anyway. Cheers, Ed Loehr
[RFI] URI escaping modules?
I just noticed that Apache::Util::escape_uri does not escape embedded '&' characters as I'd expected. What is the preferred module for escaping '&', '?', etc. when embedded in strings? Regards, Ed Loehr
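For what it's worth, URI::Escape's uri_escape() escapes everything outside the RFC 2396 unreserved set by default, which includes '&' and '?'. If you want to see the mechanics (or avoid the module), a minimal hand-rolled component escaper looks like this:

```perl
#!/usr/bin/perl
# Percent-escape every character outside the RFC 2396 unreserved set
# (A-Za-z0-9 and -_.!~*'()), so '&', '?', '=', etc. embedded in a value
# are all encoded.
use strict;
use warnings;

sub escape_component {
    my ($s) = @_;
    $s =~ s/([^A-Za-z0-9\-_.!~*'()])/sprintf '%%%02X', ord $1/ge;
    return $s;
}

print escape_component('a&b?c=d'), "\n";   # a%26b%3Fc%3Dd
```

Note this is for escaping a single component (e.g. a query-string value), not a whole URI, which is precisely why it is safe for it to eat '&' and '?'.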
Re: Installation
> Annette wrote: > > I have been trying to install mod_perl for the last couple of weeks and > I still have not been successful. I am new to Linux and have installed > RedHat 6. I used the Custom set-up and installed mod_perl during the > installation. I entered the command 'perl -v' and it tells me that I > have perl loaded but not mod_perl. Does anyone know what I have to do to > enable mod_perl, tell if I have it enabled, or where I can read about > the installation under RedHat 6. I have tried installing it using src > files, followed the directions line by line and still nothing. Is this > the right mailing list to ask this question? Where should I go if not? > Apache is up and running just fine. Any input would be appreciated. The essential can't-run-modperl-without-it guide: http://perl.apache.org/guide Regards, Ed Loehr
RE: loss of shared memory in parent httpd
I believe I have the answer... The problem is that the parent httpd swaps, and any new children it creates load the portion of memory that was swapped out back from swap, which does not make it copy-on-write. The really annoying thing: when memory gets tight, the parent is the most likely httpd process to swap, because its memory is 99% idle. This issue afflicts Linux, Solaris, and a bunch of other OSes. The solution is mlockall(2), available under Linux, Solaris, and other POSIX.1b-compliant OSes. I've not experimented with calling it from perl, and I've not looked at Apache enough to consider patching it there, but this system call, if your process is run as root, will prevent any and all swapping of your process's memory. If your process is not run as root, it returns an error. The reason turning off swap works is that it forces the memory from the parent process that was swapped out to be swapped back in. It will not fix those processes that were sired after the shared memory loss, as of Linux 2.2.15 and Solaris 2.6. (I have not checked since then for behavior in this regard, nor have I checked other OSes.) Ed

On Thu, 14 Mar 2002, Bill Marrs wrote:
> >It's copy-on-write. The swap is a write-to-disk.
> >There's no such thing as sharing memory between one process on disk(/swap)
> >and another in memory.
>
> agreed. What's interesting is that if I turn swap off and back on again,
> the sharing is restored! So, now I'm tempted to run a crontab every 30
> minutes that turns the swap off and on again, just to keep the httpds
> shared. No Apache restart required!
>
> Seems like a crazy thing to do, though.
>
> >You'll also want to look into tuning your paging algorithm.
>
> Yeah... I'll look into it. If I had a way to tell the kernel to never swap
> out any httpd process, that would be a great solution. The kernel is
> making a bad choice here. By swapping, it triggers more memory usage
> because sharing is removed on the httpd process group (thus multiplied)...
>
> I've got MaxClients down to 8 now and it's still happening. I think my
> best course of action may be a crontab swap flusher.
>
> -bill
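For anyone wanting to experiment with mlockall(2) from Perl, here is a heavily hedged sketch. The syscall number is architecture-specific (151 on x86-64 Linux; check asm/unistd.h for yours), MCL_CURRENT|MCL_FUTURE is 3 on Linux, and as noted above the call fails unless the process runs as root; none of this is portable.

```perl
#!/usr/bin/perl
# Hypothetical sketch: lock all current and future pages of this process
# into RAM so the kernel can never swap them out. Run from startup.pl in
# the parent httpd, as root, before any children are forked.
use strict;
use warnings;

my $SYS_mlockall = 151;   # assumption: x86-64 Linux; differs per arch
my $MCL_CURRENT  = 1;
my $MCL_FUTURE   = 2;

my $rc = syscall($SYS_mlockall, $MCL_CURRENT | $MCL_FUTURE);
if ($rc == 0) {
    print "memory locked; this process can no longer be swapped out\n";
} else {
    print "mlockall failed: $! (not root, or wrong syscall number?)\n";
}
```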
Re: Berkeley DB 4.0.14 not releasing lockers under mod_perl
Does shutting down apache free up your locks? (As an aside, I'm not sure I'll ever get over undef being the proper way to close a database connection; it seems so synonymous with free([23]). I expect something like $db->db_close() or something.) Ed

On Thu, 21 Mar 2002, Dan Wilga wrote:
> At 2:03 PM -0500 3/21/02, Aaron Ross wrote:
>>>> I'm testing with the Perl script below, with the filename ending
>>>> ".mperl" (which, in my configuration, causes it to run as a mod_perl
>>>> registry script).
>>
>> I would re-write it as a handler and see if Apache::Registry is partly
>> to blame.
>
> I tried doing it as a handler, using the configuration below (and the
> appropriate changes in the source) and the problem persists. So it
> doesn't seem to be Registry's fault.
>
> SetHandler perl-script
> PerlHandler DanTest
>
> source code
>
> #!/usr/bin/perl
>
> package DanTest;
>
> use strict;
> use BerkeleyDB qw( DB_CREATE DB_INIT_MPOOL DB_INIT_CDB );
>
> my $dir='/home/httpd/some/path';
>
> sub handler {
>     system( "rm $dir/__db* $dir/TESTdb" );
>     foreach( 1..5 ) {
>         my $env = open_env($dir);
>         my %hash;
>         my $db = open_db( "TESTdb", \%hash, $env );
>         untie %hash;
>         undef $db;
>         undef $env;
>     }
>     print "HTTP/1.1 200\nContent-type: text/plain\n\n";
>     print `db_stat -c -h $dir`;
>     print "\n";
> }
>
> sub open_env {
>     my $env = new BerkeleyDB::Env(
>         -Flags=>DB_INIT_MPOOL|DB_INIT_CDB|DB_CREATE,
>         -Home=> $_[0],
>     );
>     die "Could not create env: $! ".$BerkeleyDB::Error."\n" if !$env;
>     return $env;
> }
>
> sub open_db {
>     my( $file, $Rhash, $env ) = @_;
>     my $db_key = tie( %{$Rhash}, 'BerkeleyDB::Btree',
>         -Flags=>DB_CREATE,
>         -Filename=>$file,
>         -Env=>$env );
>     die "Can't open $file: $! ".$BerkeleyDB::Error."\n" if !$db_key;
>     return $db_key;
> }
>
> 1;
>
> Dan Wilga [EMAIL PROTECTED]
> Web Technology Specialist http://www.mtholyoke.edu
> Mount Holyoke College    Tel: 413-538-3027
> South Hadley, MA 01075   "Seduced by the chocolate side of the Force"
Re: 'Pinning' the root apache process in memory with mlockall
ow the limit, it'll be killed immediately after > serving a single request (assuming that the > C<$CHECK_EVERY_N_REQUESTS> is set to one). This is a very bad > situation which will eventually lead to a state where the system won't > respond at all, as it'll be heavily engaged in swapping processes. Yes, this is why we want to lock the memory. Ed
Re: Performance...
On Sun, 24 Mar 2002, Kee Hinckley wrote: > At 2:27 PM -0500 3/23/02, Geoffrey Young wrote: >> >>you might be interested in Joshua Chamas' ongoing benchmark project: >> >>http://mathforum.org/epigone/modperl/sercrerdprou/ >>http://www.chamas.com/bench/ >> >>he has the results from a benchmark of Apache::Registry and plain >>handlers, as well as comparisons between HTML::Mason, Embperl, and >>other templating engines. > > Although there are lots of qualifiers on those benchmarks, I consider > them rather dangerous anyway. They are "Hello World" benchmarks, in > which startup time completely dominates the time. The things that That explains why Embperl did so poorly compared to PHP, yet when we replaced our PHP pages with Embperl and benchmarked with real user queries, sending the same queries through both the old and new pages, the new pages showed a 50% performance boost. Note: that gain was enough to saturate our test network. Our purpose for the benchmark was to determine whether it was an improvement, not to measure the exact improvement, so we don't really know what the real gain was. The same machines do several other tasks, and our monitoring at the time of the change was not very sophisticated, so we only really know it was a big win. Something on the order of 37 load issues the week before the change, most of which were fairly obviously web overload, and two the week after (those two being very obviously associated with other services the boxes are running). Ed
Re: Apache::DBI or What ?
On Sun, 24 Mar 2002, Andrew Ho wrote: >>What would be ideal is if the database would allow you to change the >>user on the current connection. I know PostgreSQL will allow this >>using the command line interface psql tool (just do \connect >> ), but I'm not sure if you can do this using DBI. >> >>Does anyone know if any databases support this sort of thing? > > This occurred to me in the case of Oracle (one of my co-workers was > facing a very similar problem in the preliminary stages of one of his > designs), and I actually had asked our DBAs about this (since the > Oracle SQL*Plus also allows you to change users). As I suspected (from > the similar "connect" terminology), our DBAs confirmed that Oracle > just does a disconnect and reconnect under the hood. I would bet the > psql client does the same thing. First, I'll suggest that there are hopefully other areas you can look at optimizing that will get you a bigger bang for your time - in my test environment (old hardware), it takes 7.4 ms per disconnect/reconnect/rebind and 4.8 ms per rebind. Admittedly, I'm dealing with LDAP instead of SQL, and I've no idea how they compare. If the TCP connection were retained, this could still be a significant win. *Any* reduction in the connection overhead is an improvement. If there are a million connects per day, and this saves a millisecond per connect (believable to me, as at least three packets don't need to be sent - syn, syn ack, and fin. My TCP's a bit fuzzy, but I think there are a couple more, and there's also the mod_perl disconnect/reconnect overhead), that's over 15 minutes of response time and about 560,000,000 bits of network bandwidth (assuming the DB is not on the same machine) saved. Admittedly, at 100Mb/s, that's only 6 seconds. 
It may, in some cases, still be necessary to move access control from the DB into one's application, so one can maintain a single connection which never rebinds, but I think it's better to utilize the security in the DB instead of coding one's own - more eyes have looked over it. We're talking about a fairly small unit of time; it may very well be better to throw money at the problem if you are near your performance limit. Ed
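[The back-of-the-envelope numbers above check out; a quick sketch, where the one-millisecond saving per connect is the post's assumption, not a measurement:]

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $connects_per_day    = 1_000_000;
my $saved_per_connect_s = 0.001;        # assumed: 1 ms saved per connect

my $saved_s = $connects_per_day * $saved_per_connect_s;   # 1000 seconds
printf "response time saved: %.1f minutes/day\n", $saved_s / 60;   # ~16.7

my $bits = 560_000_000;                 # bandwidth figure from the post
printf "at 100Mb/s, that is %.1f seconds\n", $bits / 100_000_000;  # ~5.6
```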
Re: [announce] mod_perl-1.99_01
On Mon, 8 Apr 2002, Stas Bekman wrote: > Ged Haywood wrote: >> Compilations should be SILENT unless something goes wrong. > > The build process takes time, if you don't give an indication > of what's going on users will try to do funky things. Since the > build process is comprised of many small sub-processes you cannot > really use something like a completion bar. As someone said, redirect the output to a temporary location. But, add to that one of those little | bars, which turns one position every time another build step completes (each file compiled, each dependency file built, etc.). However, in the case of an error, I would want the whole thing available. Possibly something along the lines of: the last build step and all output from then on printed to stdout (or stderr), ending with, "For the full build log, see /tmp/mod_perl.build.3942" or some such. > Also remember that mod_perl build prints very little, > it's the compilation messages that generated most of the output. > p.s. I don't recall seeing silent build processes at all. The only ones I've seen went too far the other way. I especially loved the one which used a shell script, which started out with a dozen large shell functions, then an 'exec >/dev/null 2>/dev/null', then a half-dozen more large shell functions, and ending with 'main "$@"'. When the shell script finished, its caller checked its exit code, and reported 'done.' or 'failed.' as appropriate. Admittedly, I wouldn't have minded too much, except that I'd gotten the latter answer. Ed
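[The spinner idea is only a few lines of Perl. A hedged sketch -- the tick() hook and the log path are illustrative, not anything in the mod_perl build:]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Rotate one of | / - \ on each completed build step, while the
# full build output goes to a log file for inspection on failure.
my @glyphs = ('|', '/', '-', "\\");
my $step   = 0;

sub tick {
    my $glyph = $glyphs[ $step++ % @glyphs ];
    print STDERR "\r$glyph";    # \r rewrites the same screen position
    return $glyph;
}

tick() for 1 .. 5;              # pretend five build steps completed
print STDERR "\rdone. For the full build log, see /tmp/mod_perl.build.$$\n";
```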
Re: mod perl load average too high
That looks like there's something that occasionally goes off and starts spinning, given the low memory usage and the fact that some processes using little cpu are also not swapped out. I suspect that one of your pages has a potential infinite loop that's being triggered. Try to catch at what point the load suddenly starts rising, and check what pages were accessed around that time. They're where you should start looking. Note that you should probably focus on the access and error log lines that correspond with processes that are using excessive amounts of cpu. Ed On Tue, 6 Aug 2002, Anthony E. wrote: > I'm using apache 1.3.26 and mod_perl 1.27 > > My apache processes seem to be taking up more and more > system resources as time goes on. > > Can someone help me determine why my server load is > going up? > > When i first start apache, my "load average" is about > .02, but after a couple of hours, it goes up to 4 or > 5, and after a couple of days, has been as high as > 155. > > I have the following directives configured in > httpd.conf: > > MaxKeepAliveRequests 100 > MinSpareServers 5 > MaxSpareServers 20 > StartServers 10 > MaxClients 200 > MaxRequestsPerChild 5000 > > Here is a snip of 'top' command: > 6:28pm up 46 days, 23:03, 2 users, load average: > 2.24, 2.20, 1.98 > 80 processes: 74 sleeping, 6 running, 0 zombie, 0 > stopped > CPU0 states: 99.3% user, 0.2% system, 0.0% nice, > 0.0% idle > CPU1 states: 100.0% user, 0.0% system, 0.0% nice, > 0.0% idle > Mem: 1029896K av, 711884K used, 318012K free, > 0K shrd, 76464K buff > Swap: 2048244K av, 152444K used, 1895800K free > 335796K cached > > PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM > TIME COMMAND > 25893 nobody 16 0 10188 9.9M 3104 R 95.5 0.9 > 21:55 httpd > 25899 nobody 16 0 9448 9448 3104 R 95.3 0.9 > 63:27 httpd > 25883 nobody 9 0 10468 10M 3096 S 2.5 1.0 > 0:16 httpd > 25895 nobody 9 0 10116 9.9M 3104 S 2.1 0.9 > 0:15 httpd > 25894 nobody 9 0 10240 10M 3104 S 1.9 0.9 > 0:16 httpd > 25898 nobody 9 0 10180 
9.9M 3100 S 1.7 0.9 > 0:13 httpd > > Also, I notice in my error_log i get this entry quite > frequently: > 26210 Apache::DBI new connect to > 'news:1.2.3.4.5userpassAutoCommit=1PrintError=1' > > What can i do to keep the server load low? > > > = > Anthony Ettinger > [EMAIL PROTECTED] > http://apwebdesign.com > home: 415.504.8048 > mobile: 415.385.0146 > > __ > Do You Yahoo!? > Yahoo! Health - Feel better, live better > http://health.yahoo.com >
Re: PerlChildInitHandler doesn't work inside VirtualHost?
On Thu, 8 Aug 2002, Rick Myers wrote: > On Aug 09, 2002 at 12:16:45 +1000, Cees Hek wrote: >> Quoting Jason W May <[EMAIL PROTECTED]>: >>> Running mod_perl 1.26 on Apache 1.3.24. >>> >>> I've found that if I place my PerlChildInitHandler inside a VirtualHost >>> block, it is never called. >> >> It doesn't really make sense to put a PerlChildInitHandler >> inside a VirtualHost directive. > > Why? The Eagle book says this is a perfectly valid concept. Well, for one thing, it would only call the handler if a request to that virtual host was the first request for that child. Assuming it works at all, that is; I'd think this would be a good candidate for a case that's never been tested, since it would not call the handler if the request that initiated the child was not to that virtual host... It would fail to work in all cases if Apache does not recognize which virtual host triggered the child until after child init. Looking over pages 59 through 61, 72 and 73, this appears to me to be the case. Yes, it does explicitly say that it's ok in virtual host blocks, but it doesn't say it works. Ed
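[If the intent is for the handler to fire for every spawned child, regardless of which virtual host triggered the spawn, the safe place for it is the top-level server configuration rather than a VirtualHost block. A sketch -- the module name is hypothetical:]

```apache
# httpd.conf -- at server (top) level, outside any <VirtualHost> block
PerlChildInitHandler My::ChildInit

<VirtualHost *>
    # per-vhost request-phase handlers are fine here;
    # child-lifecycle handlers generally are not
</VirtualHost>
```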
RE: Compiled-in but not recognized
On Sun, 11 Aug 2002, Colin wrote: >> -Original Message- >> From: Ged Haywood [mailto:[EMAIL PROTECTED]] >> Sent: Sunday, August 11, 2002 6:02 PM >> Subject: Re: Compiled-in but not recognized >> >> >> Hi there, >> >> On Sun, 11 Aug 2002, Colin wrote: >> >>> I know this is a recurring problem but bear with me ... >> >> :) >> >>> httpd -l >>> Compiled-in modules: >>> http_core.c >>> mod_so.c >>> mod_perl.c >> >> pwd? I think that Ged was suggesting you might have multiple httpd binaries on your system, and was suggesting that you verify you're running the binary you think you're running. It's really annoying when you're trying to debug a program, and the program you're running is not the one you're adding the debugging statements to. However, I suspect most of us have done it on occasion. Ed "How the #&@*! is it getting past all those debug statements without hitting any?!?!" - Me
Re: Apache::Registry() and strict
Ron, This is a grievous FAQ. Please read the guide at http://perl.apache.org/guide You'll find much more than this question answered. Ed Ron Rademaker wrote: > Hello, > > I'm just starting with mod_perl and I'm using Apache::Registry(). The > second line after #!/usr/bin/perl -w is use strict; > But somehow variables I use in the script are still defined if I execute > the script again, in one of the scripts I said undef $foo at the > end, but I don't think this is the way it should be done, but it did work. > Anyone knows what could be causing this?? > > Ron Rademaker > > PS. Please CC to me because I'm not subscribed to this mailinglist
enterprise mod_perl architectures
tensions that are generated when top management and VCs come knocking with questions. In this way, it should dovetail nicely with the mod_perl advocacy project. I am not yet certain whether the best forum for this is this mailing list, or whether I should try to create a private list of names for folks who are interested. Relevant considerations include: -The possible very off-topicness of pieces of the discussion. -At some point, some of us may want opinions from other folks on sensitive information (network diagrams, etc.) that Corporate won't allow us to show to the outside world except under NDA; if all the folks on a list signed an NDA, then we could speak freely all the time. -At any rate, I'd like to publish any methodologies we use and put any monitoring tools, performance benching tools, etc. into open-source. To that end, I'll be creating a page that publishes any code we come up with and summarizes our thoughts. I'd be happy to publish that page myself, but I could also just add it as a page-- 'Enterprise mod_perl architectures'-- to Matt's new site (modperl.sergeant.org). So, I'd like to get folks' thoughts on this project. Again, I am staking out very high ground on this project-- multimillion-dollar companies with multimillion-dollar budgets. I'm doing this not because I'm disparaging other companies, but because part of the reason behind doing this project is to establish mod_perl's credibility as an enterprise web platform and to describe some of the pitfalls and workarounds that allow mod_perl to scale to that level. To that end, I'd like to get a list of interested parties. In general, this should include the chief architects, CTOs, and/or senior engineers at different shops using mod_perl. Some of those folks don't read this list regularly, and in that case, I'd be happy to email them/call them directly if people could just point them my way. If any subset of folks are interested, I'd be more than happy to drive this project forward. 
This is a project that really describes one of my core responsibilities in my company right now, so I actually have a lot of time and the resources to devote to this as part of my job. Anyways, not to belabor the point-- I'd like y'alls input on this, specifically: 1) What do folks think about the project in general? 2) Should we keep it on this list, or should we create a separate mailing list for interested parties, or should we do a combination of the two? 3) Is there anyone who'd like to volunteer virtual space to host this? e.g. ftp, web, creating a mailing list, etc. I am not yet interested in specifics about peoples' architectures; I think that we need to frame the general discussion and create some infrastructure before we go into that. cheers, Ed - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
[OT]Re: mod_perl advocacy project resurrection
Aristotle from the Ars Rhetorica on money: Money will not make you wise, but it will bring a wise man to your door. Robin Berjon wrote: > At 12:39 06/12/2000 -0800, brian moseley wrote: > >> ActiveState has built a Perl/Python IDE out of Mozilla: > >> http://www.activestate.com/Products/Komodo/index.html > > > >too bad it's windows only :/ > > That's bound to change. I think AS will release it on all platforms where > Moz/Perl/Python run when it's finished. The current release is very > unstable anyway. > > -- robin b. > All paid jobs absorb and degrade the mind. -- Aristotle
RE: eval statements in mod_perl
This was a problem that I had when I was first starting out with mod_perl; i.e., it wouldn't work the first or second times through, and then it would magically start working. This was always caused for me by a syntax error in a library file. In your case, it could be caused by a syntax error in a library file used somewhere in your eval'd code. I highly suggest running > perl -c on all of your library files to check them for valid syntax. Note that perl -c only checks the first file named on its command line, so if all of your library files are in the same directory, loop over them in the shell, e.g. for f in *.pm; do perl -c $f; done. I'm not certain of the technical reason for this, but I believe it has something to do with the fact that syntax errors in the libraries are not in and of themselves considered a fatal condition for loading libraries in mod_perl, so the second or third time around the persistent mod_perl process thinks that it has successfully loaded the library. Obviously, some functions in that library won't work, but you won't know that unless you actually use them. Someone else might be able to shed more light on this. good luck, Ed -Original Message- From: Gunther Birznieks [mailto:[EMAIL PROTECTED]] Sent: Thursday, December 07, 2000 3:38 AM To: Hill, David T - Belo Corporate; '[EMAIL PROTECTED]' Subject: Re: eval statements in mod_perl Without knowing your whole program, this could be a variety of logic problems leading to this code. For example, perhaps $build{$nkey} is a totally bogus value the first 2 times and hence your $evalcode is also bogus the first two times -- and it's not a problem of eval at all! This is unclear for the snippet. At 10:52 AM 12/6/2000 -0600, Hill, David T - Belo Corporate wrote: >Howdy, > I am running mod_perl and have created a handler that serves all the >pages for our intranet. In this handler I load perl programs from file into >a string and run eval on the string (not in a block). The problem is that >for any session the code doesn't work the first or second time, then it >works fine. 
Is this a caching issue or compile-time vs. run-time issues? I >am sure this is a simple fix. What am I missing? > > Here is the nasty part (don't throw stones :) So that we can >develop, I put the eval in a loop that tries it until it returns true or >runs 3 times. I can't obviously leave it this way. Any suggestions? Here >is the relevant chunk of code: > > # Expect perl code. Run an eval on the code and execute it. > my $evalcode = ""; > my $line = ""; > open (EVALFILE, $build{"$nkey"}); > while ($line = <EVALFILE>) { > $evalcode .= $line; > } > my $evalresult = 0; > my $counter=0; > ># > # Temporary measure to overcome caching issue, try to ># > # run the eval code 3 times to get a true return. ># > ># > until (($evalresult) || ($counter eq 3)) { > $evalresult = eval $evalcode; > $counter++; > } > $pageHash{"Retries"} = $counter if $counter > 1; > $r->print($@) if $@; > close (EVALFILE); > >I appreciate any and all constructive comments. __ Gunther Birznieks ([EMAIL PROTECTED]) eXtropia - The Web Technology Company http://www.extropia.com/
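[One way to avoid the retry loop entirely is to surface eval errors the very first time through. A minimal sketch -- eval_file() and the throwaway temp file are illustrative, not the poster's code:]

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Slurp a file and eval it once, dying loudly on any error
# instead of silently retrying until it "works".
sub eval_file {
    my ($path) = @_;
    open my $fh, '<', $path or die "cannot open $path: $!";
    my $code = do { local $/; <$fh> };
    close $fh;
    my $result = eval $code;
    die "error in $path: $@" if $@;   # compile AND runtime errors land in $@
    return $result;
}

# demonstrate with a throwaway file
my ($fh, $tmp) = tempfile(UNLINK => 1);
print $fh "my \$x = 40; \$x + 2;\n";
close $fh;
print eval_file($tmp), "\n";   # 42
```

Checking $@ immediately after every eval is the key point; a false return from eval with an empty $@ just means the code evaluated to a false value, which the retry loop above conflates with failure.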
RE: connect_cached, mod_perl && Oracle connection pooling
Hey-- I know that this is mad late, but this caught my eye, and it doesn't look like anyone has responded since then. For anyone else-- if you've ever been in a situation where you've wanted to create persistent DBI connections to multiple Oracle schemas, read on. In short, here's the solution for that particular problem: there is a completely undocumented function that is particular to DBD::Oracle that allows you to change your default schema within a particular database connection. See http://www.geocrawler.com/archives/3/183/2000/4/0/3652431/ In particular, if you look at the official DBD::Oracle install-test directory (you can see it in /tmp/cpan/DBD-Oracle-1.03/t on titan), there's a file called t/reauth.t. Here's a small chunk of code from that: ok(0, ($dbh->selectrow_array("SELECT USER FROM DUAL"))[0] eq $uid1 ); ok(0, $dbh->func($dbuser_2, '', 'reauthenticate')); ok(0, ($dbh->selectrow_array("SELECT USER FROM DUAL"))[0] eq $uid2 ); Early tests indicate that reauthentication is not a very expensive function at all. We are currently testing this approach in development, and plan to put it into production in the near future. If you use it, lemme know how it works for you. hope this helps, Ed -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] Sent: Saturday, September 23, 2000 6:09 PM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: connect_cached, mod_perl && Oracle connection pooling DBI's connect_cached has been "new" for quite some time now and has been labeled with "exact behavior of this method is liable to change" since it first showed up last year. In November last year, Tim clarified his intentions for connect_cached and suggested that DBI::ProxyServer be enhanced to provide a pool from which connections can be checked in and out (or something like that). Well, I'm now looking at possibly having a multiplicity of connect strings in a mod_perl environment. 
So Apache::DBI doesn't sound suitable; I don't want every child to maintain connections nailed up for every connect string (20 apache children * 20 connect strings = 400 nailed up connections, yowza!). At any given time, the processing happening in the mod_perl apache child process will only need one of those connect strings. Persistent connections are important just because of the expense of setting up the Oracle connection. So, I'm wondering what folks think of non-persistent connections between mod_perl and the dbiproxy but persistent connections with connect_cached between dbiproxy and Oracle... does this make sense? I was thinking it'd be cool to be able to specify how many of each connection should be maintained in the pool. Is anybody doing this and care to share their experiences with it? thanks, -Ian -- Salon Internet http://www.salon.com/ Manager, Software and Systems "Livin' La Vida Unix!" Ian Kallen <[EMAIL PROTECTED]> / AIM: iankallen / Fax: (415) 354-3326
[OT] RE: Help needed with MAP expression
The point of this function is to right-align numbers in table-data cells and keep everything else left-aligned. Note that this is what Excel does by default (if you type in a number in Excel, it aligns to the right; if you type in a string, it aligns to the left). Technically, it should be used in the context of an arrayref that you are transforming into an HTML table, e.g.: use CGI qw(:all); $_ = ['1','abc','2.34']; print join "", map(/^[\.\d]+$/ ? td({-align=>'right'}, $_) : td($_), @$_); A useful and clever piece of code, that. But the author probably should have commented it. :) cheers, Ed -Original Message- From: bari [mailto:[EMAIL PROTECTED]] Sent: Thursday, December 07, 2000 12:47 PM To: [EMAIL PROTECTED] Subject: Help needed with MAP expression Hi there, Can anyone help me with what this MAP function does... map(/^[\.\d]+$/ ? td({-align=>'right'}, $_) : td($_), @$_) I am really confused by this one... your help would be appreciated.. Thank You, - Bari
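[The same logic written out without CGI.pm's td() helper, for anyone still puzzling over the condensed form -- the sample row is the one from the message:]

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $row = ['1', 'abc', '2.34'];

# Right-align cells that look numeric (digits and dots only);
# everything else gets the default left alignment.
my @cells = map {
    /^[\.\d]+$/
        ? qq(<td align="right">$_</td>)
        : qq(<td>$_</td>)
} @$row;

print join('', @cells), "\n";
```

Note the regex is deliberately loose: a string like "1.2.3" also matches, since it contains only digits and dots.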
[ANNOUNCE] new site: scaling mod_perl (+tool: mod_perl + DBD::Oracle)
The enterprise mod_perl architectures idea that I posted earlier has evolved into a slightly modified idea: a 'scaling mod_perl' site: http://www.lifespree.com/modperl. The point of this site will be to talk about & synthesize techniques for scaling, monitoring, and profiling large, complicated mod_perl architectures. So far, I've written up a basic scaling framework, and I've posted a particular development profiling tool that we wrote to capture, time, and explain all SQL select queries that occur on a particular page of a mod_perl + DBD::Oracle application: -http://www.lifespree.com/modperl/explain_dbitracelog.pl -http://www.lifespree.com/modperl/DBD-Oracle-1.06-perfhack.tar.gz Currently, I'm soliciting thoughts and code on the following subjects in particular: 1. Performance benchmarking code. In particular, I'm looking for tools that can read in an apache log, play it back in real time (by looking at the time between requests in the apache log), and simulate slow & simultaneous connections. I've started writing my own, but it would be cool if something else out there existed. 2. Caching techniques. I know that this is a topic that has been somewhat beaten to a pulp on this list, but it keeps coming up, and I don't know of any place where the current best thinking on the subject has been synthesized. I haven't used any caching techniques yet myself, but I intend to begin caching data at the mod_perl tier in the next version of my application, so I have a very good incentive to synthesize and benchmark various techniques. If folks could just send me pointers to various caching modules and code, I'll test them in a uniform environment and let folks know what I come up with. Or, if someone has already done all that work of testing, I'd appreciate it if you could point me to the results. I'd still like to run my own tests, though. 
If folks could point me towards resources/code for these topics (as well as any other topics you think might be relevant to the site), please let me know. I'm offering to do the legwork required to actually test, benchmark, and synthesize all of this stuff, and publish it on the page. I'm also still interested in actually talking with various folks. If anyone who has been through some significant mod_perl scaling exercise would like to chat for 15-30 minutes to swap war stories or tactical plans, I'd love to talk with you; send me a private email. cheers, Ed
RE: [ANNOUNCE] new site: scaling mod_perl will be moving to the Guide
I've gotten in touch with Stas, and the 'scaling mod_perl' site will eventually be folded into the Guide. woohoo! I'm going to spend several weeks fleshing it out and cleaning it up before it goes in, though. -Ed -Original Message- From: Perrin Harkins [mailto:[EMAIL PROTECTED]] Sent: Friday, December 08, 2000 12:36 PM To: Ed Park; [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Subject: Re: [ANNOUNCE] new site: scaling mod_perl (+tool: mod_perl + DBD::Oracle) > The enterprise mod_perl architectures idea that I posted earlier has evolved > into a slightly modified idea: a 'scaling mod_perl' site: > http://www.lifespree.com/modperl. > > The point of this site will be to talk about & synthesize techniques for > scaling, monitoring, and profiling large, complicated mod_perl > architectures. No offense, but the content you have here looks really well suited to be part of the Guide. It would fit nicely into the performance section. Making it a separate site kind of fragments the documentation. > So far, I've written up a basic scaling framework, and I've posted a > particular development profiling tool that we wrote to capture, time, and > explain all SQL select queries that occur on a particular page of a mod_perl > + DBD::Oracle application: > -http://www.lifespree.com/modperl/explain_dbitracelog.pl > -http://www.lifespree.com/modperl/DBD-Oracle-1.06-perfhack.tar.gz Take a look at DBIx::Profile as well. > 1. Performance benchmarking code. In particular, I'm looking for tools that > can read in an apache log, play it back realtime (by looking at the time > between requests in the apache log), and simulate slow & simultaneous > connections. I've started writing my own, but it would be cool if something > else out there existed. The mod_backhand project was developing a tool like this called Daquiri. > If folks could just send me pointers to various caching > modules and code, I'll test them in a uniform environment and let folks know > what I come up with. 
There are a bunch of discussions about this in the archives, including one this week. Joshua Chamas did some benchmarking on a dbm-based approach recently. - Perrin
[JOB] mod_perl folks wanted in Boston - athenahealth.com
In the spirit of all of this talk about certification, demand for mod_perl programmers, etc., I'd just like to say that I'm looking for programmers. More to the point, I'm looking for kickass folks who just happen to know mod_perl. If you know mod_perl very well, great, but generally speaking, I'm looking for folks who are just kickass hackers, know that they are kickass hackers, and are willing to do anything to drive a problem to extinction. Experience with mod_perl, Linux, Oracle, Solaris, Java, XML/SOAP, MQ Series, transaction brokers, systems administration, NT, DHTML, JavaScript, etc. etc. are all Good Things. But basically, we're looking for folks who are itching to prove themselves and have some sort of history that indicates that they can do it. As a backdrop: we just raised $30 million, and we were the top story in the latest Red Herring VC Dealflow. http://www.redherring.com/vc/2000/1206/vc-ltr-dealflow120600.html As you have probably gathered by now from my posts about the Scaling mod_perl page (http://www.lifespree.com/modperl/ - soon to be folded into the Guide), I'm currently starting up a scaling mod_perl project, and I have a lot of money and stock options to burn on good people and interesting toys. If you're interested, send me a private email & a resume and we'll talk. Unfortunately, you sort of have to be in the Boston area (or willing to move) to make this work. cheers, Ed
Apache::Session benchmarks
FYI-- here are some Apache::Session benchmark results. As with all benchmarks, this may not be applicable to you. Basically, though, the results show that you really ought to use a database to back your session stores if you run a high-volume site. Benchmark: This benchmark measures the time taken to do a create/read for 1000 sessions. It does not destroy sessions, i.e. it assumes a user base that browses around arbitrarily and then just leaves (i.e. does not log out, and so session cleanup can't easily be done). RESULTS: I tested the following configurations: Apache::Session::MySQL - Dual-PIII-600/512MB/Linux 2.2.14SMP: Running both the httpd and mysqld servers on this server. Average benchtime: 2.21 seconds (consistent) Apache::Session::Oracle - Ran the httpd on the dual-PIII-600/512MB/Linux 2.2.14SMP, running Oracle on a separate dual PIII-500/1G (RH Linux 6.2). Average benchtime: 3.1 seconds (consistent). (ping time between the servers: ~3ms) Apache::Session::File - Dual-PIII-600/512MB/Linux 2.2.14SMP: Ran 4 times. First time: ~2.2s. Second time: ~5.0s. Third time: ~8.4s. Fourth time: ~12.2s. Apache::Session::DB_File - Dual-PIII-600/512MB/Linux 2.2.14SMP: Ran 4 times. First time: ~20.0s. Second time: ~20.8s. Third time: ~21.9s. Fourth time: ~23.2s. The actual benchmarking code can be found at http://www.lifespree.com/modperl/ (warning - the site is in a terrible state right now, mostly a scratchpad for various techniques & benchmarks) Question: does anyone know how to pre-specify the _session_id for the session, rather than allowing Apache::Session to set it and read it? I saw some posts about it a while back, but no code... cheers, Ed
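[On the _session_id question: the ids Apache::Session generates are 32-character MD5 hex strings, so rolling your own in the same style is straightforward. Whether the module will accept a caller-supplied id for a brand-new session is not documented, so treat this as an exploratory sketch, not a supported interface:]

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Generate a 32-char hex id in the same style Apache::Session uses.
# The entropy sources below are illustrative only; do not treat this
# as cryptographically strong session-id generation.
my $session_id = md5_hex(time(), $$, rand(), 'some-site-secret');
print "$session_id\n";
```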
RE: Article idea: mod_perl + JSP
I've been thinking about this quite a bit recently. I agree with Gunther-- what is more important is not really the language that you use, but the high-level application framework you have built for yourself and how you use it. This is because most of the essential elements of any framework can be duplicated in any reasonably powerful programming language (mod_perl, java, tcl, even VBScript/VB to some extent). That said, my own experience and benchmarking showed that mod_perl is the best of these architectures for building extremely high-quality, reliable, _complicated_ 3-tier Internet applications in which the prototyping and release cycle are very highly compressed, because I can write the most high-quality, high-speed code per unit time in Perl. By 3-tier apps, I mean apps consisting of a browser, the app server (mod_perl), and an RDBMS. Perl doesn't have much support in the way of n-tier apps, which is why I find Nathan's question interesting, and why I have been thinking about it recently. From some recent experiences with using SOAP to integrate with an outside vendor, I believe that it is possible to create a best-of-breed n-tier solution using Perl as the glue layer. For those of you who don't know what SOAP is, it's essentially RPC over XML, and allows any app to talk to any other app in a standard, XML-based format. Go to http://www.soaplite.com for a very clean implementation of SOAP for Perl. To continue-- there are a few reasons that you might want to use Java as a component of a mod_perl app: -There are sometimes pre-written components for Java that you'd like to use because a vendor has written pre-specified hooks for Java. This could also be the case in which you have to integrate with any legacy systems. -Java has much better support for threading, and therefore in many cases makes a much better server (the simple example for this is a chat server). -Because of Java's threads, it can pool transaction resources (e.g. 
databases) better, and may therefore be more efficient in places where resources are tightly constrained, _especially_ if the database is queried relatively infrequently. For similar reasons, you might want to use a VB component in your application, etc. SOAP makes that possible. The point here is that I think that an awful lot of folks out there have straitjacketed themselves into thinking that if there's a complicated problem that needs to be solved, and a piece of that problem is best done in Java, then we ought to write the whole thing in Java. What I'm saying is that that's not necessarily true-- that it's actually possible to write best-of-breed solutions by introducing a communications-layer abstraction that enables you to build a clean n-tier architecture. CORBA promised this, but was sufficiently difficult to implement that it has not (to my knowledge) gained very wide acceptance in the Perl community. Also, the major ORBs (IONA, Visigenic) have largely overlooked creating Perl bindings for their apps. SOAP, however, makes distributed computing extremely easy and _very_ clean, and I think that it could change the way that people think about building complicated, high-quality applications in an extremely compressed time cycle. Using SOAP actually opens a number of other possibilities that don't require thinking outside of mod_perl, too. For example, one of the big selling points of Java is that it allows horizontal partitioning of classes on different machines. Using SOAP, you can actually partition your _perl_ logic so that different pieces run on different machines; or, you can write a component in Perl that is subsequently called by a Java component. OK, enough of my rambling... 
cheers, Ed -Original Message- From: Gunther Birznieks [mailto:[EMAIL PROTECTED]] Sent: Tuesday, December 12, 2000 11:42 PM To: Chris Winters; Nathan Torkington Cc: [EMAIL PROTECTED] Subject: Re: Article idea: mod_perl + JSP At 11:11 PM 12/12/2000 -0500, Chris Winters wrote: >* Nathan Torkington ([EMAIL PROTECTED]) [001212 22:09]: > > > > Anyone run such an installation? Anyone want to write about their > > setup and experiences? > > There are projects where we use mod_perl handlers for stuff like prototyping auth but then use servlets/JSPs for the app. But I believe that's too shallow for what Nat wants. We're a small shop of primarily fairly senior developers (at least several years experience in the languages we like to use)... and we've actually found that the Java web and the Perl web projects we've delivered on aren't necessarily THAT far off in project delivery time than some Perl people would have you believe. Of course, we have a toolkit that we use to develop apps in both Perl and Java which helps, but it's still interesting that business logic for people experienced in the language of their choice isn't that bad in term
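[Editor's sketch of the SOAP glue layer Ed describes above. The service class, item names, urn, and endpoint URL are all made up for illustration; the commented-out client call follows the calling convention documented at http://www.soaplite.com.]

use strict;

# A hypothetical service class -- any plain Perl class can be exposed
# over SOAP (e.g. via SOAP::Transport::HTTP from the soaplite.com kit).
package Example::PriceService;

sub get_price {
    my ($class, $item) = @_;
    my %prices = (widget => 1.99, gadget => 4.50);  # stand-in for real logic
    return $prices{$item};
}

package main;

# Client side -- the same method, invoked across the wire from another
# process or machine (endpoint and urn are invented for this sketch):
#
#   use SOAP::Lite;
#   my $price = SOAP::Lite
#       ->uri('urn:Example/PriceService')
#       ->proxy('http://server.example.com/soap')
#       ->get_price('widget')
#       ->result;

print Example::PriceService->get_price('widget'), "\n";

The same class can be called locally today and remotely tomorrow, which is the partitioning point made above.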
RE: Mod_perl tutorials
My two cents-- I really like the look of the take23 site as well, and I would be happy as a clam if we could get modperl.org. I'd even be willing to chip in some (money/time/effort) to see whether we could get modperl.org. More than that, though, I think that I would really like to see take23 in large measure replace the current perl.apache.org. I remember the first time I looked at perl.apache.org, it was not at all clear to me that I could build a fast database-backed web application using mod_perl. In contrast, when you click on PHP from www.apache.org, you are taken directly to a site that gives you the sense that there is a strong, vibrant community around php. (BTW, I also like the look and feel of take23 significantly more than php). Anyways, those are my own biases. The final bias is that the advocacy site should be hosted someplace _fast_; one of the reasons I initially avoided PHP was that their _site_ was dog slow, and I associated that with PHP being dog slow. Anyways, take23 is very fast for now. cheers, Ed
showing mod_perl execute time in access_log
quick, obvious trick: This is a trivial modification of Doug's original Apache::TimeIt script that allows you to show very precisely the Apache execution time of a page. This is particularly useful if you want to know which pages of your site you could optimize. Here's a question, though: does anyone know an easy way of measuring how long Apache keeps a socket to the client open, assuming that KeepAlive has been turned off? This is relevant because I want to know how long on average it is taking clients to receive certain pages in my application. I know that I can approximately calculate it from bandwidth, but I would expect the actual number to vary wildly throughout a given day due to Internet congestion. cheers, Ed

---

package AccessTimer;

# USAGE:
# Just put the following line into your .conf file:
#
#   PerlFixupHandler AccessTimer
#
# and use a custom Apache log (this logging piece is not at all
# mod_perl-based... see http://httpd.apache.org/docs/mod/mod_log_config.html):
#
#   CustomLog /path/to/your/log "%h %l %u %t \"%r\" %>s %b %{ELAPSED}e"

use strict;
use Apache::Constants qw(:common);
use Time::HiRes qw(gettimeofday tv_interval);
use vars qw($begin);

sub handler {
    my $r = shift;
    $begin = [gettimeofday];                     # mark the start of the request
    $r->push_handlers(PerlLogHandler => \&log);  # measure at log time
    return OK;
}

sub log {
    my $r = shift;
    my $elapsed = tv_interval($begin);
    $r->subprocess_env(ELAPSED => "$elapsed");   # exported to %{ELAPSED}e
    return DECLINED;
}

1;
Re: is morning bug still relevant?
Please use the MySQL modules list. Responses are timely. ;-) ed Subscribe: <mailto:[EMAIL PROTECTED]> Vivek Khera wrote: > >>>>> "SV" == Steven Vetzal <[EMAIL PROTECTED]> writes: > > SV> Greetings, > >> to say "ping doesn't work in all cases" without qualifiying why and/or > >> which drivers that applies to. > > SV> We've had to write our own ->ping method for the MySQL DBD > SV> driver. Our developer tried to track down a maintainer for the > SV> DBD::msql/mysql module to submit a diff, but to no avail. > > How old a version are you talking about? In any case, according to > CPAN, the DBD::mysql module is "owned" by > > Module id = DBD::mysql > DESCRIPTION Mysql Driver for DBI > CPAN_USERID JWIED (Jochen Wiedmann <[EMAIL PROTECTED]>) > CPAN_VERSION 2.0414 > CPAN_FILEJ/JW/JWIED/Msql-Mysql-modules-1.2215.tar.gz > DSLI_STATUS RmcO (released,mailing-list,C,object-oriented) > INST_FILE(not installed) > > and I *know* he's responsive to that email address at least as of a > month or so ago, as we exchanged correspondence on another matter. > > -- > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= > Vivek Khera, Ph.D.Khera Communications, Inc. > Internet: [EMAIL PROTECTED] Rockville, MD +1-240-453-8497 > AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/
RE: the edge of chaos
A few thoughts: In analyzing a few spikes on our site in the last few days, a clear pattern has emerged: the database spikes, and the database spikes induce a corresponding spike on the mod_perl server about 2-6 minutes later (because mod_perl requests start queuing up). This is exacerbated by the fact that as the site slows down, folks start double and triple-clicking on links and buttons, which of course just causes things to get much worse. This has a few ramifications. If your pages are not homogeneous in database usage (i.e., some pages are much heavier than others), then throttling by number of connections or throttling based on webserver load doesn't help that much. You need to throttle based on database server load. This requires some sort of mechanism whereby the webserver can sample the load on the database server and throttle accordingly. Currently, we just mount a common NFS fileserver, sample every minute, and restart the webserver if db load is too high, which works OK. The best course of action, though, is to tune your database, homogenize your pages, and buy a bigger box, which we're doing. -Ed -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Perrin Harkins Sent: Thursday, January 04, 2001 6:38 PM To: Justin Cc: [EMAIL PROTECTED] Subject: Re: the edge of chaos Justin wrote: > Thanks for the links! But. I wasnt sure what in the first link > was useful for this problem, and, the vacuum bots discussion > is really a different topic. > I'm not talking of vacuum bot load. This is real world load. > > Practical experiments (ok - the live site :) convinced me that > the well recommended modperl setup of fe/be suffer from failure > and much wasted page production when load rises just a little > above *maximum sustainable throughput* .. The fact that mod_proxy doesn't disconnect from the backend server when the client goes away is definitely a problem. 
I remember some discussion about this before but I don't think there was a solution for it. I think Vivek was correct in pointing out that your ultimate problem is the fact that your system is not big enough for the load you're getting. If you can't upgrade your system to safely handle the load, one approach is to send some people away when the server gets too busy and provide decent service to the ones you do allow through. You can try lowering MaxClients on the proxy to help with this. Then any requests going over that limit will get queued by the OS and you'll never see them if the person on the other end gets tired of waiting and cancels. It's tricky though, because you don't want a bunch of slow clients to tie up all of your proxy processes. It's easy to adapt the existing mod_perl throttling handlers to send a short static "too busy" page when there are more than a certain number of concurrent requests on the site. Better to do this on the proxy side though, so maybe mod_throttle could do it for you. - Perrin
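[Editor's sketch of the NFS-based sampling Ed mentions in this thread. The file path, threshold, and file format are assumptions; the idea is that the database box writes its load average to a shared file once a minute (e.g. from cron), and each web box checks it before deciding whether to shed load.]

use strict;

# Hypothetical shared file on the common NFS mount; the DB host
# rewrites it every minute with its current load average.
my $load_file = '/nfs/db-status/loadavg';
my $max_load  = 8.0;    # throttle above this (site-specific tuning)

# Returns 1 if the sampled DB load exceeds the limit, 0 otherwise.
sub db_overloaded {
    my ($file, $limit) = @_;
    open my $fh, '<', $file or return 0;   # no sample available: fail open
    my $load = <$fh>;
    close $fh;
    return 0 unless defined $load && $load =~ /(\d+(?:\.\d+)?)/;
    return $1 > $limit ? 1 : 0;
}

A throttling handler (or the restart cron job Ed describes) would call db_overloaded() and return a short "too busy" page, or trigger the restart, when it reports 1.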
getting rid of multiple identical http requests (bad users double-clicking)
Does anyone out there have a clean, happy solution to the problem of users jamming on links & buttons? Analyzing our access logs, it is clear that it's relatively common for users to click 2, 3, 4+ times on a link if it doesn't come up right away. This is not good for the system, for obvious reasons. I can think of a few ways around this, but I was wondering if anyone else had come up with anything. Here are the avenues I'm exploring: 1. Implement JavaScript disabling on the client side so that links become 'click-once' links. 2. Implement an MD5 hash of the request and store it on the server (e.g. in a MySQL server). When a new request comes in, check the MySQL server to see whether it matches an existing request and disallow as necessary. There might be some sort of timeout mechanism here, e.g. don't allow identical requests within the span of the last 20 seconds. Has anyone else thought about this? cheers, Ed
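[Editor's sketch of option 2 above. To keep it self-contained, the fingerprints live in a per-process hash rather than a MySQL table, and the 20-second window from the post is used; $uri and $content stand in for the request line and POST body.]

use strict;
use Digest::MD5 qw(md5_hex);

my %seen;           # request fingerprint => epoch time last accepted
my $window = 20;    # reject identical requests within this many seconds

# Returns 1 if the request should be served, 0 if it is a recent duplicate.
sub allow_request {
    my ($uri, $content, $now) = @_;
    my $fingerprint = md5_hex("$uri\0$content");
    return 0 if exists $seen{$fingerprint}
             && $now - $seen{$fingerprint} < $window;
    $seen{$fingerprint} = $now;
    return 1;
}

In a real deployment the %seen table would have to live in MySQL or some other store shared across all Apache children, with stale entries swept periodically; a per-process hash only catches duplicates that happen to hit the same child.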
mod_perl + multiple Oracle schemas (was RE: Edmund Mergl)
John-- Another thing you may want to look into is just doing an "alter session set current_schema" call at the top of your mod_perl page. This is actually significantly faster than Tim's reauthenticate solution (about 7X, according to my benchmarks). It has become a supported feature as of Oracle 8i. For details on what I did, see http://www.lifespree.com/modperl/ (which is still a total mess right now-- I'll get around to cleaning it up sometime soon, I promise!) cheers, Ed -Original Message- From: John D Groenveld [mailto:[EMAIL PROTECTED]] Sent: Wednesday, January 10, 2001 5:10 PM To: Edmund Mergl Cc: [EMAIL PROTECTED] Subject: Re: Edmund Mergl Good to see you alive, well, and still coding Perl. Months ago, about the time of the Perl conference so it may have slipped under everyone's radar, Jeff Horn from U of Wisconsin sent you some patches to Apache::DBI to use Oracle 8's re-authenticate function instead of creating and caching a separate Oracle connection for each user. Did you decide whether to incorporate them or to suggest another module name for him to use? I wasn't able to participate in the discussion at the time, but I now have need for that functionality. I don't know if Jeff Horn is still around, but I'll track him down if necessary and offer to work on it. Also, I sent you a small patch to fix Apache::DBI warnings under Perl5.6. I hate to be a pest, but I'm rolling out software where the installation procedure requires the user to fetch Perl from Active State and Apache::DBI from CPAN. I'd rather not ship my own version of yours or any CPAN module. Thanks, John [EMAIL PROTECTED]
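[Editor's sketch of the ALTER SESSION approach Ed describes above. The helper name is made up, and $dbh is assumed to be an already-connected DBI (or Apache::DBI-cached) handle to Oracle 8i or later; the schema-name check is the editor's addition, since ALTER SESSION takes no bind variables.]

use strict;

# Switch which schema unqualified object names resolve to, without
# re-authenticating -- per the post above, roughly 7x faster than
# Oracle 8's re-authenticate approach.
sub use_schema {
    my ($dbh, $schema) = @_;
    # ALTER SESSION cannot use placeholders, so validate by hand.
    die "suspicious schema name: $schema" unless $schema =~ /^\w+$/;
    $dbh->do("ALTER SESSION SET CURRENT_SCHEMA = $schema");
}

A handler would call use_schema($dbh, 'some_customer') at the top of the page, reusing one cached connection for all schemas.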
Re: Not even beginning - INSTALL HELP
If you are going to upgrade gcc for RH 7.0, I recommend the new source RPM for gcc to be found in the updates directory on any Red Hat mirror site. In fact, if you are sticking with RH you should see about updating a number of things. 23, Ed "G.W. Haywood" wrote: > Hi there, > > On Tue, 27 Feb 2001, A. Santillan Iturres wrote: > > > I have Apache 1.3.12 running on a RedHat 7.0 box with perl, v5.6.0 built for > > i386-linux > > I went to install mod_perl-1.25: > > When I did: > > perl Makefile.PL > > I've got a: > > Segmentation fault (core dumped) > > Did you build your Perl yourself? Sounds like there's a problem with > it. Check out the mod_perl List archives for problems with gcc (the C > compiler) that was shipped with RedHat 7.0. You should probably get > that replaced to start with. (Or use Slackware - sorry:) > > 73, > Ged.
Re: Variable scope & memory under mod_perl
agh! check the headers! Steven Zhu wrote: > How could I unsubscribe from [EMAIL PROTECTED]? Thank you so > much. Steven. > > -Original Message- >
Re: Fast DB access
Matthew Kennedy wrote: > I'm on several postgresql mailing lists and couldn't find a recent post > from you complaining about 6.5.3 performance problems (not even by an > archive search). Your benchmark is worthless until you try postgresql > 7.1. There have been two major releases of postgresql since 6.5.x (ie. > 7.0 and 7.1) and several minor ones over a total of 2-3 years. It's no > secret that they have tremendous performance improvements over 6.5.x. So > why did you benchmark 6.5.x? > > This is a good comparison of MySQL and PostgreSQL 7.0: > > "Open Source Databases: As The Tables Turn" -- > http://www.phpbuilder.com/columns/tim20001112.php3 > > > We haven't tried this one. We are doing a project on mysql. Our preliminary >assessment is, it's a shocker. They justify not having commit and rollback!! Makes us >think whether they are even lower end than MS-Access. > > Again, checkout PostgreSQL 7.1 -- I believe "commit" and "rollback" (as > you put it) are available. BTW, I would like to see that comment about > MS-Access posted to pgsql-general... I dare ya. :P > > Matthew You can scale any of these databases: Oracle, MySQL, or PostgreSQL. But please research each one thoroughly and tune it properly before you do your benchmarking. And, again, MySQL does support transactions now. Such chutzpah for them to have promoted an "atomic operations" paradigm for so long without supporting transactions! But that discussion is moot now. Please be advised that MySQL is threaded and must be tuned properly to handle many concurrent users on Linux. See the docs at http://www.mysql.com for details. The author of the PHP Builder column did not do his research, so his results for MySQL on Linux are way off. Happily, though, even he got some decent results from PostgreSQL 7.0. The kernel of wisdom here: If you are going to use one of the Open Source databases, please use the latest stable release (they improve quickly!) 
and please either hire someone with expertise in installing, administering, and tuning your database of choice on your platform of choice, or do the research thoroughly yourself. Ed
Re: Can AxKit be used as a Template Engine?
Michael Alan Dorman wrote: > Matt Sergeant <[EMAIL PROTECTED]> writes: > > It depends a *lot* on the type of content on your site. The above > > www.dorado.com is brochureware, so it's not likely to need to be > > re-styled for lighter browsers, or WebTV, or WAP, or... etc. So your > > content (I'm guessing) is pure HTML, with Mason used as a fancy way > > to do SSI, with Mason components for the title bars/menus, and so > > on. (feel free to correct me if I'm wrong). > > It is more sophisticated than that, but you're basically right. I do > pull some tagset-like tricks for individual pages, so it's not totally > pure HTML, but yeah, if we wanted to do WebTV we'd be fscked. > > > AxKit is just as capable of doing that sort of thing, but where it > > really shines is to provide the same content in different ways, > > because you can turn the XML based content into HTML, or WebTV HTML, > > or WML, or PDF, etc. > > Ah---well a web site that does all of that isn't what first comes to > mind when someone talkes about doing a "static site"---though now that > you've explained further, I believe I understand exactly what you > intended. > > > I talk about how the current Perl templating solutions (including > > Mason) aren't suited to this kind of re-styling in my AxKit talk, > > which I'm giving at the Perl conference, so go there and come see > > the talk :-) > > Heh. I agree entirely with this assesment---I can conceptualize a way > to do it in Mason, but the processing overhead would be unfortunate, > the amount of handwaving involved would be enormous, and it would > probably be rather fragile. > > > So I take back that people wouldn't be using Mason for static > > content. I was just trying to find a simple way to classify these > > tools, and to some people (I'd say most people), Mason is more on > > the dynamic content side of things, and AxKit is more on the static > > content side of things, but both tools can be used for both types of > > content. 
> > > > (I hate getting into these things - I wish I'd never brought up > Mason or EmbPerl) > Well I will say that you made an excellent point that hadn't really > occured to me---I use XML + XSL for a lot of stuff (the DTD I use for > my resume is a deeply reworked version of one I believe you had posted > at one time), but not web sites, in part because I'm not currently > obligated to worry about "other devices"---so I don't exactly regret > getting you to clarify things. > > Could I suggest that a better tagline would be that AxKit is superior > when creating easily (re-)targetable sites with mostly static content? > It might stave off more ignorant comments. > > Mike. Matt, I've also found your use of "static" to describe "transformable" or "re-targetable" (an unfortunate word) content to be confusing. This discussion helps clarify things, a little. ;-) Ed
Re: modperl/ASP and MVC design pattern
Francesco Pasqualini wrote: > - Original Message - > From: <[EMAIL PROTECTED]> > To: "Francesco Pasqualini" <[EMAIL PROTECTED]> > Cc: <[EMAIL PROTECTED]> > Sent: Friday, April 20, 2001 8:11 PM > Subject: Re: modperl/ASP and MVC design pattern > > > > > You can (I have) accomplish this with mod_perl and HTML::Mason, Mason's > > root level autohandler can play the role of the JSP model 2 > > "controller servlet": dispatching logic processing to Perl objects (er, > > beans) and "forwarding" to a view (with the Mason $m->call_next or > > $m->comp mechanisms). I think the Apache::Dispatch stuff can also > > perform this role (haven't played with it to say for certain). I'll > > qualify this by saying MVC is not a end in itself, there are a lot of > > modern requirements for flexible branding, client form factor appropriate > > and locale specific presentations that require the view/controller part to > > be a lot smarter than the traditional concepts of MVC that I've seen call > > for. I've been referring to these needs in my own engineering discussions > > as (yikes) MVC++ :) > > ... this is really interesting, can you point me to documentation about > "MVC++" > thanks > Francesco Francesco, I believe that Ian was joking, hence the yikes before the name, so the above post is the documentation! Ed
Re: modperl/ASP and MVC design pattern
[EMAIL PROTECTED] wrote: > > Francesco, I believe that Ian was joking, hence the yikes before the name, > > so the above post is the documentation! > > > > Ed > > > > .. so the best environment for the MVC++ design pattern is parrot/mod_parrot :) > http://www.oreilly.com/news/parrotstory_0401.html > > Thanks > Francesco > Exactly! Wasn't Ian the one responsible for the mod_parrot MVC++ API? ed
[ModPerl] missing POST args mystery
I'm stumped regarding some request object behavior in modperl, and after searching the Guide, Google, and the list archives without success, I'm hoping someone might offer another idea I could explore, or offer some helpful diagnostic questions. In a nutshell, my problem is that POSTed form key-value pairs are intermittently not showing up in the request object inside my handler subroutine. I have a modperl-generated form: ... ... ... Upon submission, the form data eventually flows to my PerlHandler...

sub handler {
    my $r = shift;
    my @argsarray = ($r->method eq 'POST' ? $r->content() : $r->args());
    ...
}

Now, if I examine (print) the form values retrieved from the request object upon entry into this handler (*after* I load them into @argsarray), 'id' is not present at all. I must be missing something trivially obvious to some of you. This is running Apache/1.3.19 (Unix) mod_perl/1.25 mod_ssl/2.8.3 OpenSSL/0.9.6a. Regards, Ed Loehr
Re: [ModPerl] missing POST args mystery
Ed Loehr wrote: > > > >I'm stumped ... > > >In a nutshell, my problem is that POSTed form key-value pairs are > > >intermittently not showing up in the request object inside my handler > > >subroutine. As I was puzzling over this, I saw this error message in the logs... (offline mode: enter name=value pairs on standard input) A google search turned up a note about needing to have "$CGI::NO_DEBUG = 1" before calling CGI::Cookie->parse(). Adding that line of code before my parse call seems to have fixed the problem. At a glance, looks like CGI.pm was strangely set to read from the command-line (default $CGI::NO_DEBUG = 0), probably triggering a call of Apache's request->args somewhere along the line. How the default setting may have changed I don't know, because I've been using CGI.pm for years without this problem; I may have upgraded that package, picking up a change accidentally. Regards, Ed Loehr
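[Editor's sketch of the fix Ed describes above, as it would sit near the top of a handler. The cookie string is read from the CGI environment here for illustration; under mod_perl it would come from the request headers.]

use strict;
use CGI ();          # load CGI.pm without importing anything
use CGI::Cookie ();

# Per the diagnosis above: keep CGI.pm out of its "offline" debugging
# mode, which otherwise waits on STDIN for name=value pairs and can
# end up swallowing the POSTed content.
$CGI::NO_DEBUG = 1;

my %cookies = CGI::Cookie->parse($ENV{HTTP_COOKIE} || '');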