[OT] Where to download Sablotron for AxKit

2000-12-23 Thread Philip Mak

This is off-topic, but I am having problems downloading Sablotron from its
website (Sablotron is a component that AxKit requires).

On http://www.gingerall.com/charlie-bin/get/webGA/act/download.act the
link for "Sablotron 0.50 - sources" and "Sablotron 0.50 - Linux
binary" redirects to download.gingerall.cz, which is an unknown host.

The Sablotron mailing list also appears to be busted, having an unknown
host problem.

Since several people mentioned AxKit on this list, I thought someone here
might know about Sablotron. Do you know where I can download it from? I
haven't been able to find any mirrors for it.

Thanks,

-Philip Mak ([EMAIL PROTECTED])




Re: can't flush buffers?

2000-12-23 Thread Les Mikesell


- Original Message -
From: "Wesley Darlington" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Saturday, December 23, 2000 1:44 PM
Subject: Re: can't flush buffers?


> Hi All,
>
> On Sat, Dec 23, 2000 at 09:38:11AM -0800, quagly wrote:
> > This is the relevant code:
> >
> > while ($sth->fetch) {
> >$r->print ("<TR>",
> >map("<TD>$_</TD>",@cols),
> >"</TR>");
> >   $r->rflush;
> > }
>
> A thought is knocking at the back of my head - browsers don't render
> tables until they've got the whole thing. I think. Try sending lots
> of single-row tables instead of one big table...?

Yes, this is most likely the real problem - if the browser has to
compute the column widths it can't do it until it has seen the end of
the table.  You can avoid it by specifying the widths in the table tag
or by closing and restarting the table after some reasonably sized
number of rows.
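Both fixes might look roughly like this (a sketch reusing $r, $sth and @cols from the original post; the column widths and chunk size are arbitrary):

```perl
# Fix 1: give the browser the column widths up front
$r->print(qq{<TABLE WIDTH="100%"><COL WIDTH="30%"><COL WIDTH="70%">});

# Fix 2: close and restart the table every $chunk rows
my ($chunk, $n) = (25, 0);
while ($sth->fetch) {
    $r->print("<TR>", map("<TD>$_</TD>", @cols), "</TR>");
    unless (++$n % $chunk) {
        $r->print("</TABLE>\n<TABLE>");   # the closed table can now render
        $r->rflush;
    }
}
$r->print("</TABLE>");
$r->rflush;
```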

Les Mikesell
[EMAIL PROTECTED]





Re: can't flush buffers?

2000-12-23 Thread Stas Bekman

On Sat, 23 Dec 2000, quagly wrote:

> 
>   I posted something like this a week ago, but typos in my message kept
> anyone from understanding the issue.
> 
>   I am trying to return each row to the client as it comes from the
> database, instead of waiting for all the rows to be returned before
> displaying them.  
> 
>   I have set $|=1 and added $r->flush; after every print statement ( I
> realize this is redundant ) but to no avail.  

gmm, may I suggest the guide?

http://perl.apache.org/guide/performance.html#Work_With_Databases

> This is the relevant code:
> 
> while ($sth->fetch) {
>$r->print ("<TR>",
>map("<TD>$_</TD>",@cols),
>"</TR>");
>   $r->rflush;
> }
> 
> Here is the complete package:
> [...snipped...]
> 



_
Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
http://stason.org/   mod_perl Guide  http://perl.apache.org/guide 
mailto:[EMAIL PROTECTED]   http://apachetoday.com http://logilune.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/  





Re: can't flush buffers?

2000-12-23 Thread Wesley Darlington

Hi All,

On Sat, Dec 23, 2000 at 09:38:11AM -0800, quagly wrote:
> This is the relevant code:
> 
> while ($sth->fetch) {
>$r->print ("<TR>",
>map("<TD>$_</TD>",@cols),
>"</TR>");
>   $r->rflush;
> }

A thought is knocking at the back of my head - browsers don't render
tables until they've got the whole thing. I think. Try sending lots
of single-row tables instead of one big table...?

Also, you may find it useful to send output in chunks. Keep a wee
counter in the loop and send output every $n$ items or so. Maybe.
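A sketch of those two ideas combined (variable names borrowed from quagly's post; the chunk size of 10 is a guess):

```perl
my $count = 0;
while ($sth->fetch) {
    # each row is a complete single-row table, so the browser
    # never has to wait for a closing table tag before rendering it
    $r->print("<TABLE><TR>", map("<TD>$_</TD>", @cols), "</TR></TABLE>");
    $r->rflush unless ++$count % 10;   # flush every 10th row, not every row
}
$r->rflush;   # push out whatever is left
```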

ATB,
Wesley.



Re: can't flush buffers?

2000-12-23 Thread Ken Williams

[EMAIL PROTECTED] (C. Jon Larsen) wrote:
>quagly wrote:
>>  I posted something like this a week ago, but typos in my message kept
>> anyone from understanding the issue.
>> 
>>  I am trying to return each row to the client as it comes from the
>> database, instead of waiting for all the rows to be returned before
>> displaying them.  
>
>Why would you want to do this ?
>
>Writing your application this way will ensure that:
>
>a. end users can crash your server/application.

Huh??

>b. your application will perform poorly on the network.
>

I presume that the application is already performing poorly (delivering
content as one chunk after 60 seconds, for example) and he wants it to
be friendlier (delivering 15 chunks, each of which takes 5 seconds).  

I admit I've never tried doing this (so I'm afraid I can't help quagly),
but I can imagine situations in which it might be appropriate.


  ------
  Ken Williams Last Bastion of Euclidity
  [EMAIL PROTECTED]The Math Forum



Problem after rebooting apache with Oracle

2000-12-23 Thread Thomas Moore

Everything works fine for about 1/2 an hour and then we start getting the
message below. We used to get an error that Oracle home was not found, so we
hard-coded it in and now we just get the message below.

Does anyone have any suggestions? If our code produces an oracle error, does
that corrupt the mod_perl process and therefore give any future users who
connect to that particular process the error below?

perl version 5.6.0
Oracle 8.1.5
mod_perl 1.21
apache 1.3.12


[Sat Dec 23 09:55:08 2000] [error] DBI->connect(RSPD1) failed: ORA-12154:
TNS:could not resolve service name (DBD ERROR: OCIServerAttach) at
/usr/local/lib/perl5/site_perl/5.6.0/sun4-solaris/DBI.pm line 411
    DBI::connect('DBI', 'dbi:Oracle:RSPD1', 'username', 'password', 'HASH(0x192a9b4)') called at /data/www/racesearch/htdocs/CGI/LIB/RSDBI.pm line 323
    RSDBI::connect('RSDBI') called at /data/www/racesearch/htdocs/CGI/search_modules/SBPN.pm line 72
    SBPN::main('SBPN', 'HASH(0x2574a84)') called at /data/www/racesearch/htdocs/CGImp/mhp line 132
    Apache::ROOTwww_2eracesearch_2ecom::CGImp::mhp::mode_sbpn('HASH(0x2574a84)') called at /data/www/racesearch/htdocs/CGImp/mhp line 61
    Apache::ROOTwww_2eracesearch_2ecom::CGImp::mhp::handler('Apache=SCALAR(0x24a04f8)') called at /usr/local/lib/perl5/site_perl/5.6.0/sun4-solaris/Apache/Registry.pm line 143
    require 0 called at /usr/local/lib/perl5/site_perl/5.6.0/sun4-solaris/Apache/Registry.pm line 143
    Apache::Registry::handler('Apache=SCALAR(0x24a04f8)') called at /dev/null line 0
    require 0 called at /dev/null line 0
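For what it's worth, a common cause of ORA-12154 under mod_perl is that the Oracle environment variables are not set in the Apache children before the first connect. A sketch of the usual workaround (the ORACLE_HOME path below is a placeholder, not taken from this post):

```perl
# startup.pl -- pulled in via "PerlRequire conf/startup.pl" in httpd.conf
# NOTE: the ORACLE_HOME path is a placeholder; use the real install path.
BEGIN {
    $ENV{ORACLE_HOME} ||= '/u01/app/oracle/product/8.1.5';
    $ENV{TNS_ADMIN}   ||= "$ENV{ORACLE_HOME}/network/admin"; # tnsnames.ora lives here
}

use DBI ();
use DBD::Oracle ();   # preload so every child sees the same environment

1;
```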




Re: can't flush buffers?

2000-12-23 Thread C. Jon Larsen


> 
>   I posted something like this a week ago, but typos in my message kept
> anyone from understanding the issue.
> 
>   I am trying to return each row to the client as it comes from the
> database, instead of waiting for all the rows to be returned before
> displaying them.  

Why would you want to do this ?

Writing your application this way will ensure that:

a. end users can crash your server/application.
b. your application will perform poorly on the network.

Buffer your output, and when all the output is collected, print it, and
let TCP deliver the data in network-friendly chunks. If your database is
so slow that you think you need an approach like this, investigate the
possibility of a caching server process that you can sit in front of the
actual db. 

You need to consider what happens when a user executes a query that can
return more rows than a browser can reasonably display. In other words,
having a query results pagination module or feature is probably a must.
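A buffered, paginated variant of the fetch loop might look like this (a sketch; the 'page' parameter and the page size are assumptions, and with MySQL you would normally push the windowing into the query with LIMIT instead of skipping rows client-side):

```perl
my $per_page = 50;
my $page     = $apr->param('page') || 1;   # hypothetical pagination parameter

# buffer the whole page of output, then print it in one go
my $html = "<TABLE>";
my $row  = 0;
while ($sth->fetch) {
    $row++;
    next if $row <= ($page - 1) * $per_page;   # skip rows before this page
    last if $row >   $page * $per_page;        # stop once the page is full
    $html .= "<TR>" . join("", map("<TD>$_</TD>", @cols)) . "</TR>";
}
$html .= "</TABLE>";
$r->print($html);   # one buffered write; TCP chunks it sensibly
```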

If you were writing a stand-alone application that ran on a single cpu
(like MS Access on a local file) in this style (no pagination, no
buffering), I would consider it marginally bad style. Inside a
web-based application, this approach is horrendous.

Just my 2 cents . . .

On Sat, 23 Dec 2000, quagly wrote:

> 
>   I have set $|=1 and added $r->flush; after every print statement ( I
> realize this is redundant ) but to no avail.  
> 
> This is the relevant code:
> 
> while ($sth->fetch) {
>$r->print ("<TR>",
>map("<TD>$_</TD>",@cols),
>"</TR>");
>   $r->rflush;
> }
> 
> Here is the complete package:
> [...snipped...]
> 




can't flush buffers?

2000-12-23 Thread quagly


I posted something like this a week ago, but typos in my message kept
anyone from understanding the issue.

I am trying to return each row to the client as it comes from the
database, instead of waiting for all the rows to be returned before
displaying them.  

I have set $|=1 and added $r->flush; after every print statement ( I
realize this is redundant ) but to no avail.  

This is the relevant code:

while ($sth->fetch) {
   $r->print ("<TR>",
   map("<TD>$_</TD>",@cols),
   "</TR>");
  $r->rflush;
}

Here is the complete package:

package Sql::Client;

use Apache::Request;
use strict;
use warnings;
use Apache::Constants qw(:common);

my $r;  #request
my $apr;   #Apache::Request 
my $host;  #hostname of remote user
my $sql;#sql to execute

$|=1;

sub getarray ($) { 

my $dbh;  # Database handle
my $sth;# Statement handle
my $p_sql; # sql statement passed as parameter
my @cols;  #column array to bind results
my $titles;   # array ref to column headers

$p_sql = shift;

# Connect
$dbh = DBI->connect (
"DBI:mysql:links_db::localhost",
"nobody",
"somebody",
{
PrintError => 1,# warn() on errors
RaiseError => 0,   # don't die on error
AutoCommit => 1,    # commit executes immediately
}
);

# prepare statement
$sth = $dbh->prepare($p_sql);

$sth->execute;

$titles = $sth->{NAME_uc};
#--
# for minimal memory use, do it this way
@cols[0..$#$titles] = ();
$sth->bind_columns(\(@cols));
$r->print("<TABLE>");
$r->print ("<TR>",
map("<TH>$_</TH>",@$titles),
"</TR>");
while ($sth->fetch) {
$r->print ("<TR>",
map("<TD>$_</TD>",@cols),
"</TR>");
$r->rflush;
}
$r->print ("</TABLE>");
return; 
}


sub handler {
$r = shift;
$apr =  Apache::Request->new($r);
$sql = $apr->param('sql') || 'SELECT';
$sql='SELECT' if  $apr->param('reset');

$r->content_type( 'text/html' );
$r->send_http_header;
return OK if $r->header_only;
$host = $r->get_remote_host;
$r->print(<<HTMLEND);
<HTML>
<HEAD>
<LINK REL="stylesheet" TYPE="text/css" HREF="/styles/lightstyle.css">
<TITLE>Hello $host</TITLE>
</HEAD>
<BODY>
<H1>Sql Client</H1>
<P>Enter your Select Statement:</P>
<FORM METHOD="POST">
<TEXTAREA NAME="sql">$sql</TEXTAREA>
<INPUT TYPE="submit" VALUE="Submit">
<INPUT TYPE="submit" NAME="reset" VALUE="Reset">
</FORM>
HTMLEND
$r->rflush;
getarray($sql) unless $sql =~ /^SELECT$/;

$r->print(<<HTMLEND);
</BODY>
</HTML>
HTMLEND
return OK;
}
1;



Re: Fwd: [speedycgi] Speedycgi scales a better benchmark

2000-12-23 Thread Nigel Hamilton

Hi, 
I think some of the 'threatened' replies to this thread speak
volumes more than any benchmark.

Sam has come up with a cool technology - it will help bridge
the technology adoption gap between traditional Perl CGI and mod_perl -
especially for ISPs.

Well done Sam!

Nige

Nigel Hamilton
__


On Fri, 22 Dec 2000, Ask Bjoern Hansen wrote:

> On Thu, 21 Dec 2000, Sam Horrocks wrote:
> 
> >  > Folks, your discussion is not short of wrong statements that can be easily
> >  > proved, but I don't find it useful.
> > 
> >  I don't follow.  Are you saying that my conclusions are wrong, but
> >  you don't want to bother explaining why?
> >  
> >  Would you agree with the following statement?
> > 
> > Under apache-1, speedycgi scales better than mod_perl with
> > scripts that contain un-shared memory 
> 
> Maybe; but for one thing the feature set seems to be very different
> as others have pointed out. Secondly then the test that was
> originally quoted didn't have much to do with reality and showed
> that whoever made it didn't have much experience with setting up
> real-world high traffic systems with mod_perl.
> 
> 
>   - ask
> 
> -- 
> ask bjoern hansen - 
> more than 70M impressions per day, 
> 




Re: Dynamic content that is static

2000-12-23 Thread barries

On Fri, Dec 22, 2000 at 09:51:55PM -0500, brian d foy wrote:
> 
> however, i have been talking to a few people about something like a
> mod_makefile. :)

I've used this approach successfully on a lower-volume site where it
was taking lots of time to build the final HTML but the data sources
didn't change much.  I have a module (Slay::Maker) I use for
exactly this purpose that takes a "makefile" written in Perl; I use
that to rebuild the pages, and if no page needs to be rebuilt, I can 304
the result.  A mod_makefile would be even nicer, being written in
C.

If you're looking for Perlish makes, Nick Ing-Simmons also has a
Make.pm, there's a Makepp project out there, and I have an unreleased
but releasable Make.pm that supports most of the GNU constructs.

Putting squid or something in front of a heavily trafficked site
(remembering to flush all or part of its cache when you change the back
end) would definitely help, and using a makefile approach on the backend
to avoid reading & writing lots of rarely updated data on every page view
should also help.
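The mtime-vs-dependencies check described above can be sketched generically like this (this is not the Slay::Maker API - just the bare makefile-style freshness test plus Apache's conditional-GET handling; the function name and signature are mine):

```perl
use Apache::Constants qw(OK NOT_FOUND);
use Apache::File ();   # adds update_mtime/set_last_modified/meets_conditions

# Rebuild $target from @deps when any dep is newer (makefile-style),
# then let Apache answer 304 Not Modified if the client's copy is current.
sub serve_built_page {
    my ($r, $target, $rebuild, @deps) = @_;
    my $t = (stat $target)[9] || 0;                     # target mtime, 0 if absent
    $rebuild->() if grep { ((stat $_)[9] || 0) > $t } @deps;

    $r->update_mtime((stat $target)[9]);
    $r->set_last_modified;
    my $rc = $r->meets_conditions;
    return $rc unless $rc == OK;                        # e.g. HTTP_NOT_MODIFIED

    my $fh = Apache::File->new($target) or return NOT_FOUND;
    $r->send_http_header('text/html');
    $r->send_fd($fh);
    return OK;
}
```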

- Barrie



Re: Dynamic content that is static

2000-12-23 Thread Matt Sergeant

On Fri, 22 Dec 2000, Philip Mak wrote:

> I realized something, though: Although the pages on my site are
> dynamically generated, they are really static. Their content doesn't
> change unless I change the files on the website. (For example,
> http://www.animewallpapers.com/wallpapers/ccs.htm depends on header.asp,
> footer.asp, series.dat and index.inc. If none of those files change, the
> content of ccs.htm remains the same.)

That's just the sort of layout AxKit is great for. It's basically how Take23
works - even though it looks like a dynamically generated site, it's not.
Everything just comes from static files, and when those files change,
AxKit recompiles the pages and caches the results (with some
intelligence about the parameters used, to determine whether this is
the same view of that URL).

-- 


/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\