Fast DB access

2000-10-11 Thread Differentiated Software Solutions Pvt. Ltd



Hi,

We have an application where we will have to 
service as many as 50 queries a second.
We've discovered that most databases just cannot 
keep pace.

The only option we know is to service queries out 
of flat files.
Can somebody give us pointers on what modules are 
available to create a flat-file based database.
Specifically we want a mechanism to service 
queries which can return rows where values are greater than a specified 
value.
We are currently experimenting with dbm and 
DB_File. These seem to handle hashes quite comfortably. How do we handle these 
inequality queries?

Thanks,

Murali

Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main, 2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com


Re: Fast DB access

2000-10-11 Thread Matt Sergeant

On Wed, 11 Oct 2000, Differentiated Software Solutions Pvt. Ltd wrote:

 Hi,
 
 We have an application where we will have to service as many as 50
 queries a second. We've discovered that most databases just cannot keep
 pace.
 
 The only option we know is to service queries out of flat files. Can
 somebody give us pointers on what modules are available to create a
 flat-file based database. Specifically we want a mechanism to service
 queries which can return rows where values are greater than a
 specified value. We are currently experimenting with dbm and
 DB_File. These seem to handle hashes quite comfortably. How do we
 handle these inequality queries.

I'd venture to suggest you look back at those RDBMSs again. What were you
using that couldn't handle 50 queries a second? What were your queries
like? How was the database optimised? Was the DB using the right indexes?

Most modern DBMS software should be able to handle 50 queries per second
on decent hardware, provided the conditions are right. You're not going to
get anything better with flat files.

-- 
Matt/

/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\




Re: Fast DB access

2000-10-11 Thread Francesc Guasch

 "Differentiated Software Solutions Pvt. Ltd" wrote:
 
 Hi,
 
 We have an application where we will have to service as high as 50
 queries a second.
 We've discovered that most database just cannot keep pace.
 
 The only option we know is to service queries out of flat files.

There is a DBD module: DBD::RAM. If you have enough memory
or there is not much data, it could be what you need.

I have also recently seen a post about a new DBD module for
CSV files, in addition to DBD::CSV; try

http://search.cpan.org

-- 
 - frankie -



RE: Wild Proposal :)

2000-10-11 Thread Stephen Anderson



 -Original Message-
 From: Perrin Harkins [mailto:[EMAIL PROTECTED]]
 Sent: 11 October 2000 04:45
 To: Ajit Deshpande
 Cc: [EMAIL PROTECTED]
 Subject: Re: Wild Proposal :)
 
 
 Hi Ajit,
 
 It's not entirely clear to me what problem you're trying to 
 solve here. 
 I'll comment on some of the specifics you've written down here, but I
 may be missing your larger point.

Ajit's examples aren't perfect, but the problem is a real one. The problem
is one of generalisation. Logically, you don't want to put an application
that is only 10% web-related entirely into mod_perl. So you can take the other 90%
out and stick it into an RPC server, but wouldn't it be nice if there were an
application server framework that handled connections, load balancing and
resource management for you?

 There's DBI::Proxy already.  Before jumping on the "we need pooled
 connections" bandwagon, you should read Jeffrey Baker's post on the
 subject here:

http://forum.swarthmore.edu/epigone/modperl/breetalwox/[email protected]

People always manage to miss the point on this one. It's not about saving
the cycles required to open the connection, as they're minimal at worst.
It's about saving the _time_ to open the connection. In a network
application, opening a connection is quite possibly your largest
latency. On a large application doing a lot of transactions per second, the
overhead involved in building connections and tearing them down can lose you
serious time. It also complicates scaling the database server. It's far
better to pay your overhead once and just re-use the connection.

Stephen.
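The pay-once connection overhead Stephen describes is what Apache::DBI provides under mod_perl: loaded before DBI, it transparently caches one handle per child process so later DBI->connect calls return the cached connection. A minimal sketch; the DSN, user and password are placeholders:

```perl
# In startup.pl, loaded via PerlRequire before any DBI-using code.
use Apache::DBI;   # must be loaded before DBI so connect() is overridden
use DBI;

# Optionally open the connection at child startup, so even the first
# request in each child skips the connect latency:
Apache::DBI->connect_on_init(
    'dbi:mysql:database=test;host=localhost',   # placeholder DSN
    'user', 'password',                         # placeholder credentials
    { RaiseError => 1, AutoCommit => 1 },
);

# Handlers then call DBI->connect as usual; Apache::DBI hands back the
# per-process cached handle instead of opening a new connection.
```

The design point is that the cache is per child process, not a shared pool: each Apache child keeps exactly one open handle per distinct DSN.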



hi all

2000-10-11 Thread Rajesh Mathachan

hi all,
we have a query which grows to 7KB, and we use MySQL and PHP; the server
is literally crashing when we do the process.
What is the other alternative for me?
The site is a quiz site.
regards
rajesh mathachan

--
QuantumLink Communications, Bombay, India





Re: Compiling apache staticly with mod_perl.

2000-10-11 Thread Peter Gebauer

On Tue, 10 Oct 2000, Paul Lindner wrote:

 On Tue, Oct 10, 2000 at 02:43:36PM -0400, Geoffrey Young wrote:
  
  
   -Original Message-
   From: Peter Gebauer [mailto:[EMAIL PROTECTED]]
   Sent: Tuesday, October 10, 2000 8:20 AM
   To: [EMAIL PROTECTED]
   Subject: Compiling apache staticly with mod_perl.
   
  [snip]
   
   Did anybody compile Apache + mod_perl + other modules or have
   documentation that is written for this specific purpouse (since the
   INSTALL file that comes with mod_perl is totally insufficient)?
  
  http://perl.apache.org/guide/install.html
  
  in general, the guide full of lots of good information for mod_perl users...
 
 Also see the following URLs
 
 http://people.redhat.com/plindner/apache/apache-heavy-1.3.12-3.i386.rpm
 http://people.redhat.com/plindner/apache/apache-heavy-1.3.12-3.src.rpm
 
 For a statically linked RPM.


Thanks to everybody who helped me! I have mod_perl activated and I
still have my PHP and SSL support. (yeah!)
I use Slackware, so RPMs aren't much fun to use on a system based solely
on tarballs and source. :-)

Regards Peter




RE: Adding parameters on an internal_redirect()

2000-10-11 Thread Geoffrey Young



 -Original Message-
 From: darren chamberlain [mailto:[EMAIL PROTECTED]]
 Sent: Friday, October 06, 2000 4:41 PM
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: Adding parameters on an internal_redirect()
 
 
 [EMAIL PROTECTED] ([EMAIL PROTECTED]) said something to this effect:
 What I'd like to do with a particular type of error is redirect with all the
 parameters passed to the error-inducing script plus one tacked on for good
 measure.
 
 So if /blah/foo.pl?bar=1 was the script that generates the error, something
 like $r->internal_redirect( '/error/error.pl?type=4' ) would hopefully pass
 error.pl both the 'bar' and 'type' params. As it stands now, I can get the
 'bar' param, but have had no luck getting the 'type' param added on.

no need for all of the pnotes stuff...

in error.pl use

my %orig = $r->prev->args if $r->prev;
my %error = $r->args;

which should work fine if foo.pl does an internal redirect or uses
ErrorDocument...
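
Spelled out, Geoff's suggestion for error.pl might look like this. A sketch for an Apache::Registry script under mod_perl 1.x; the merge order is a choice, letting the redirect's own params win on collision:

```perl
# /error/error.pl -- sketch, assuming Apache::Registry and mod_perl 1.x
use strict;

my $r = Apache->request;

# args() in list context returns parsed key/value pairs.
my %orig  = $r->prev ? $r->prev->args : ();  # e.g. bar=1 from foo.pl
my %error = $r->args;                        # e.g. type=4 from the redirect
my %args  = (%orig, %error);                 # redirect params override

$r->send_http_header('text/plain');
$r->print("$_ = $args{$_}\n") for sort keys %args;
```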

HTH
--Geoff

 
 what about something like
 
 my $error = "/error/error.pl?type=4";
 my %args  = $r->args;
 my $uri   = join '&', $error, map { "$_=$args{$_}" } keys %args;
 
 $r->internal_redirect($uri);
 
 ?
 
 Alternatively, how about taking a different approach:
 
 $r->pnotes('Original-Uri-QueryString' => $r->args);
 $r->internal_redirect('/error/error.pl?type=4');
 
 And then in /error/error.pl, look for 
 'Original-Uri-QueryString' in the
 pnotes table. Hint:
 
 # In /error/error.pl:
 my $orig_args = $r->pnotes('Original-Uri-QueryString');
 
 (darren)
 
 -- 
 There are two ways to write error-free programs.  Only the 
 third one works.
 



RE: mod_perl on RH7 fails make test

2000-10-11 Thread Geoffrey Young



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, October 10, 2000 11:35 PM
 To: [EMAIL PROTECTED]
 Subject: mod_perl on RH7 fails make test
 
 
 
 I am trying to build mod_perl-1.24 on apache_1.3.12 on
 RedHat Linux 7.0 (kernel 2.2.16-22), gcc version 2.96.
 All seems to build fine, but when I run make test it fails to start
 the server with lock error :
 
 [notice] Destruction->DESTROY called for $global_object
 [Fri Oct  6 10:39:06 2000] [warn] [notice] child_init for 
 process 3211, report any problems to [no address given]
 [Fri Oct  6 10:39:06 2000] [emerg] (22)Invalid argument: 
 fcntl: F_SETLKW: Error getting accept lock, exiting!  Perhaps 
 you need to use the LockFile directive to place your lock 
 file on a local disk!
 [notice] child process 3211 terminating
 [notice] push'd PerlChildExitHandler called, pid=3211
 [notice] push'd PerlChildExitHandler called, pid=3211
 [notice] END block called for startup.pl
 [notice] Destruction->DESTROY called for $global_object
 
 Any help, hints or pointers appreciated.

hmmm,  I've heard that there are issues with RedHat's new version of gcc,
just not sure exactly what they are :)

can you downgrade to the gcc packages from 6.2 and see if using that
compiler changes things?

--Geoff

 
 -- 
 Danny Aldham Providing Certified Internetworking 
 Solutions to Business
 www.postino.com  E-Mail, Web Servers, Web Databases, SQL PHP  Perl
 



Re: Fast DB access

2000-10-11 Thread Sean D. Cook



On Wed, 11 Oct 2000, Differentiated Software Solutions Pvt. Ltd wrote:

 Hi,
 
 We have an application where we will have to service as many as 50 queries a second.
 We've discovered that most databases just cannot keep pace.
 
 The only option we know is to service queries out of flat files.
 Can somebody give us pointers on what modules are available to create a flat-file based database.
 Specifically we want a mechanism to service queries which can return rows where values are greater than a specified value.
 We are currently experimenting with dbm and DB_File. These seem to handle hashes quite comfortably. How do we handle these inequality queries.
 
Something you may want to consider if you are doing large numbers of
read-only transactions on the DB: build the data into a large complex
structure and load it, pre-fork, into RAM.  Store it in a package
variable.  It is extremely fast and scalable.
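
Sean's pre-fork preload can be sketched like this. A minimal sketch; the package name, sample rows and threshold query are hypothetical, and in practice the rows would be loaded from the real data source in a PerlRequire'd startup.pl:

```perl
package My::Data;
use strict;

# Built once at server startup, before Apache forks, so every child
# shares the structure via copy-on-write memory.
our @ROWS = sort { $a->{value} <=> $b->{value} } (
    { id => 'a', value => 10 },
    { id => 'b', value => 25 },
    { id => 'c', value => 40 },
);

# Return all rows whose 'value' exceeds a threshold. Because @ROWS is
# kept sorted, a binary search finds the cut-off point in O(log n).
sub rows_greater_than {
    my ($threshold) = @_;
    my ($lo, $hi) = (0, scalar @ROWS);
    while ($lo < $hi) {
        my $mid = int(($lo + $hi) / 2);
        if ($ROWS[$mid]{value} > $threshold) { $hi = $mid }
        else                                 { $lo = $mid + 1 }
    }
    return @ROWS[$lo .. $#ROWS];
}

1;
```

A handler would then call My::Data::rows_greater_than($n) on each request without touching a database at all; the trade-off is that updates require a server restart or reload.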

Sean Cook
Systems Analyst
Edutest.com

Phone: 804.673.2253 / 1.888.335.8378
email: [EMAIL PROTECTED]
__
Save the whales.  Collect the whole set.





Re: Spawning

2000-10-11 Thread atli



--- On 10/10/2000 11:46:14 PM bcburke wrote: ---
You can use Perl's IPC::Shareable to share objects in memory across
processes:
http://theoryx5.uwinnipeg.ca/CPAN/data/IPC-Shareable/IPC/Shareable.html

Good luck,

Thanks for your reply,
 I read through the documentation and IPC-Shareable is exactly what I
need. I am having some problems getting it working. (Maybe that's why you
wished me Good luck). Seems to have to do with the size option. I can tie
simple variables but I get "Munged shared memory segment (size exceeded?)"
when I try to tie a package. Is that not possible or can I fiddle with the
size option? The default however would seem to be adequate (65536).
 And again thanks for the idea.
 Atli.






Re: Spawning

2000-10-11 Thread steven

On Thu, 12 Oct 2000 [EMAIL PROTECTED] wrote:

 Thanks for your reply,
  I read through the documentation and IPC-Shareable is exactly what I
 need. I am having some problems getting it working. (Maybe that's why you
 wished me Good luck). Seems to have to do with the size option. I can tie
 simple variables but I get "Munged shared memory segment (size exceeded?)"
 when I try to tie a package. Is that not possible or can I fiddle with the
 size option? The default however would seem to be adequate (65536).

I got the same error constantly when attempting to retrieve a key's value
(or testing with exists()) after it had been delete()ed. After various
different attempts at getting it working over 3-4 days I eventually gave
up. I hadn't exceeded any size constraint.

See if IPC::ShareLite does what you need; for the record, I ended up using
IPC::SharedCache, which does almost all of what I wanted.

-- 
steven




Re: Fast DB access

2000-10-11 Thread Joe Schaefer

"Differentiated Software Solutions Pvt. Ltd" [EMAIL PROTECTED] writes:

 We have an application where we will have to service as many as 50
 queries a second.
 We've discovered that most databases just cannot keep pace.
 
 The only option we know is to service queries out of flat files.
 Can somebody give us pointers on what modules are available to create a
 flat-file based database.
 Specifically we want a mechanism to service queries which can
 return rows where values are greater than a specified value.
 We are currently experimenting with dbm and DB_File. These seem to
 handle hashes quite comfortably. How do we handle these inequality
 queries.

You might look at BerkeleyDB's cursor implementation, although
50 queries/second should be doable with MySQL and optimized tables.

Also consider caching the results (as others have suggested),
if many of the queries are reused and not changed between queries.
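
Joe's cursor suggestion maps directly onto DB_File's BTREE mode: keep the keys in sorted order and use seq() with R_CURSOR to jump to the first key at or above a threshold, then walk forward with R_NEXT. A minimal sketch; the file path, zero-padded keys and row values are all illustrative (keys are padded so string order matches numeric order):

```perl
use strict;
use DB_File;                       # exports $DB_BTREE, R_CURSOR, R_NEXT
use Fcntl qw(O_CREAT O_RDWR);

my $file = '/tmp/values.btree';    # illustrative path
unlink $file;

# A BTREE keeps keys sorted, unlike the plain hash mode of a DBM file.
my %h;
my $db = tie %h, 'DB_File', $file, O_CREAT | O_RDWR, 0666, $DB_BTREE
    or die "Cannot open $file: $!";

# Zero-pad numeric keys so lexical order equals numeric order.
$h{ sprintf '%08d', $_ } = "row-$_" for 10, 25, 40;

# Position a cursor at the first key >= the threshold, then scan forward.
my $key = sprintf '%08d', 20;
my $val = '';
my @hits;
for (my $st = $db->seq($key, $val, R_CURSOR);
     $st == 0;
     $st = $db->seq($key, $val, R_NEXT)) {
    push @hits, $val;              # rows with key >= 00000020
}
print "@hits\n";
```

This is exactly the inequality query the original poster asked about: dbm/DB_File in the default hash mode cannot answer "values greater than X", but the BTREE mode can.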

-- 
Joe Schaefer




Why double requests?

2000-10-11 Thread Bill Moseley

mod_perl 1.24/1.3.12 perl 5.6

Here's my httpd.conf:
-

Listen 9000
<VirtualHost *:9000>
<Perl>
package My::DirectoryIndex;
use Apache::Constants qw( DECLINED );

sub handler {
    my $r = shift;

    $r->log_error(
        ( $r->is_initial_req ? 'initial:' : 'not initial:' )
        . $r->uri );

    return DECLINED;
}

package My::Hello;
use strict;
use Apache::Constants qw( OK );
sub handler {
    my $r = shift;
    $r->send_http_header('text/plain');
    $r->print('hello');
    return OK;
}
</Perl>

PerlTransHandler My::DirectoryIndex

<Location /test>
    Allow from all
    SetHandler perl-script
    PerlHandler My::Hello
    PerlSendHeader on
</Location>
</VirtualHost>

Here's the request:
---
GET /test/abc/123 http/1.0

HTTP/1.1 200 OK
Date: Wed, 11 Oct 2000 17:17:16 GMT
Server: Apache/1.3.12 (Unix) mod_perl/1.24
Connection: close
Content-Type: text/plain

hello

Here's the error_log

[Wed Oct 11 10:17:16 2000] [error] initial:/test/abc/123
[Wed Oct 11 10:17:16 2000] [error] not initial:/abc/123
[Wed Oct 11 10:17:16 2000] [error] [client 192.168.0.98] client denied by
server configuration: /usr/local/apache/htdocs/abc

Why the second request with the extra path?

This doesn't generate a second request:
GET /hello/abc/123 http/1.0

HTTP/1.1 403 Forbidden

[Wed Oct 11 10:25:22 2000] [error] initial:/hello/abc/123
[Wed Oct 11 10:25:22 2000] [error] [client 192.168.0.98] client denied by
server configuration: /usr/local/apache/htdocs/hello


Bill Moseley
mailto:[EMAIL PROTECTED]



Segfaults with mod_rewrite

2000-10-11 Thread Bill Moseley

I've been looking for this segfault for a while.

This may be related to my last post about the double requests under
mod_perl because if I use a cgi-script instead of a mod_perl handler I
don't get the segfault.  But there's an issue with mod_rewrite, too.

I have a url that can go to an advanced search page:

   http://localhost/search/advanced/extra/path

Everything after /search is extra path info.

I thought it would be nice to make it easy to use:

   http://localhost/advanced/extra/path


So, I was using this rewrite rule:

   RewriteRule ^/(foo|bar|advanced)(.*) /search/$1$2

But that's causing a segfault.

My guess is that 

/search/advanced/extra/path

ends up with a subrequest of 

/advanced/extra/path

And the rewrite_log shows that happening.

Interestingly, although this causes the segfault

   RewriteRule ^/(foo|bar|advanced)(.*) /search/$1$2

this works fine:

   RewriteRule ^/(advanced)(.*) /search/$1$2

Looks like a bug in mod_rewrite, too.

Here's the httpd.conf:
<VirtualHost *:9000>
RewriteEngine on
RewriteLogLevel 9
RewriteLog rewrite_log
RewriteRule ^/(foo|bar|advanced)(.*) /search/$1$2


<Perl>
package My::Hello;
use strict;
use Apache::Constants qw( OK );
sub handler {
    my $r = shift;
    $r->send_http_header('text/plain');
    $r->print('hello');
    return OK;
}
</Perl>

   
<Location /search>
    Allow from all
    SetHandler perl-script
    PerlHandler My::Hello
    PerlSendHeader on
</Location>
</VirtualHost>

And here's the backtrace:

Program received signal SIGSEGV, Segmentation fault.
0x4012c65c in memset (dstpp=0x0, c=0, len=3355583254) at
../sysdeps/i386/memset.c:64
64  ../sysdeps/i386/memset.c: No such file or directory.
(gdb) bt
#0  0x4012c65c in memset (dstpp=0x0, c=0, len=3355583254) at
../sysdeps/i386/memset.c:64
#1  0x809d25a in ap_pcalloc ()
#2  0x80bb39e in ap_pregsub ()
#3  0x807bfa6 in expand_backref_inbuffer ()
#4  0x807a8e8 in apply_rewrite_rule ()
#5  0x8079ea1 in apply_rewrite_list ()
#6  0x8078a37 in hook_uri2file ()
#7  0x80a2254 in run_method ()
#8  0x80a22d4 in ap_translate_name ()
#9  0x80b79d9 in ap_sub_req_method_uri ()
#10 0x80b7bb5 in ap_sub_req_lookup_uri ()
#11 0x80c0214 in ap_add_cgi_vars ()
#12 0x808b66a in perl_cgi_env_init ()
#13 0x8086042 in perl_setup_env ()
#14 0x8085722 in perl_per_request_init ()
#15 0x80858ea in perl_call_handler ()
#16 0x8085636 in perl_run_stacked_handlers ()
#17 0x808418d in perl_handler ()
#18 0x80a27b3 in ap_invoke_handler ()
#19 0x80b8af9 in process_request_internal ()
#20 0x80b8b5c in ap_process_request ()
#21 0x80af1f9 in child_main ()
#22 0x80af3b8 in make_child ()
#23 0x80af543 in startup_children ()
#24 0x80afbe4 in standalone_main ()
#25 0x80b0463 in main ()
#26 0x400e2313 in __libc_start_main (main=0x80b0080 main, argc=3,
argv=0xb8d4, init=0x8062188 _init, 
fini=0x8156ef8 _fini, rtld_fini=0x4000ac70 _dl_fini,
stack_end=0xb8cc) at ../sysdeps/generic/libc-start.c:90
(gdb) 






Bill Moseley
mailto:[EMAIL PROTECTED]



Handler is preventing redirects on missing trailing / ?

2000-10-11 Thread Clayton Mitchell

I installed a handler as such:


httpsd.conf:
-
PerlModule Tofu::Tofuhandler
<Location /userfiles>
    SetHandler perl-script
    PerlHandler Tofu::Tofuhandler
</Location>

<Location />
    AddHandler perl-script *.html
    PerlHandler Tofu::Tofuhandler
</Location>

I then noticed that URIs of directories lacking a trailing '/' were not
being redirected in the browser, and so relative links started to break.

As a workaround I installed the TransHandler listed below to modify the URI.  It
seems like I am re-inventing the wheel here, so I assume I did something
wrong to get to this point?
Thanks for any advice.


package Tofu::Tofugruff;

#use Apache::Constants qw(:common);
use Apache::Constants qw(:response);

use strict;


 sub handler {
    my $r = shift;
    my $f = $r->document_root . $r->uri;
    my $uri = $r->uri;
    if ( -d $f && $f !~ /.*\/$/ ) {
       # A directory that doesn't end in a '/'
       $uri .= '/';  # fix it
       $r->err_header_out("Pragma", "no-cache");
       $r->header_out("Location" => $uri);
       $r->status(REDIRECT);
       return DECLINED;
    }
    return DECLINED;
 }

1;
__END__





Re: Fast DB access

2000-10-11 Thread Sander van Zoest

On Wed, 11 Oct 2000, Matt Sergeant wrote:

 Most modern DBMS software should be able to handle 50 queries per second
 on decent hardware, provided the conditions are right. You're not going to
 get anything better with flat files.

Hmm... I guess it all depends on what your queries look like, but you can
get better results from flat files if you put them in a precise layout. 
Granted, if you are talking about having a million lines in a single
flat file, then I definitely agree with you.

I really think that sometimes going for a flat file layout *can* be much
more reliable and scalable than RDBMS software. It all really depends on
what you plan to do with the data and what you would like to get out of
it.
  
Cheers,

--
Sander van Zoest [[EMAIL PROTECTED]]
Covalent Technologies, Inc.   http://www.covalent.net/
(415) 536-5218 http://www.vanzoest.com/sander/




$r->dir_config at server startup?

2000-10-11 Thread Bill Moseley

Can I get the value of a PerlSetVar at startup?

# Main server config
PerlSetVar foo bar

<VirtualHost 80>

<Perl>
package My::Handler;
use strict;

# Is there a way to get at 'foo'?
my $foo = Apache->dir_config('foo');

sub handler {
  ...
}
</Perl>

Perl*Handler My::Handler;
...
</VirtualHost>



Bill Moseley
mailto:[EMAIL PROTECTED]



Re: Fast DB access

2000-10-11 Thread Matt Sergeant

On Wed, 11 Oct 2000, Sander van Zoest wrote:

 On Wed, 11 Oct 2000, Matt Sergeant wrote:
 
  Most modern DBMS software should be able to handle 50 queries per second
  on decent hardware, provided the conditions are right. You're not going to
  get anything better with flat files.
 
 Hmm... I guess it all depends on what your queries look like, but you can
 get better results from flat files if you put them in a precise layout. 
 Granted, if you are talking about having a million lines in a single
 flat file, then I definitely agree with you.

I think the limiting factors kick in quite a bit sooner than a million
lines. What I'm trying to get across is that developers should be
focussing on letting the DBMS do what a DBMS does best - queries. The DB
is far better placed (and generally better developed) to do the
optimisation than trying to come up with a flat file strategy that works
with your system.

 I really think that sometimes going for a flat file layout *can* be much
 more reliable and scalable then RDBMS software. It all really depends on
 what you plan to do with the data and what you would like to get out of
 it.

I think you chose the wrong words there. I think a flat file layout can be
more performant than an RDBMS, but I don't think it's going to be
more reliable or scalable than an RDBMS. There are far too many locking
issues and transaction issues necessary for the terms "reliable and
scalable", unless you're willing to spend a few years re-coding Oracle :-)

-- 
Matt/

/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\




Re: $r->dir_config at server startup?

2000-10-11 Thread Matt Sergeant

On Wed, 11 Oct 2000, Bill Moseley wrote:

 Can I get the value of a PerlSetVar at startup?
 
 # Main server config
 PerlSetVar foo bar
 
 VirtualHost 80
 
 perl
 package My::Handler;
 use strict;
 
 # Is there a way to get at 'foo'?
 my $foo = Apache->dir_config('foo');

Apache->server->dir_config('foo'); # IIRC

-- 
Matt/

/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\




Re: Fast DB access

2000-10-11 Thread Sander van Zoest

On Wed, 11 Oct 2000, Matt Sergeant wrote:

  I really think that sometimes going for a flat file layout *can* be much
  more reliable and scalable than RDBMS software. It all really depends on
  what you plan to do with the data and what you would like to get out of
  it.
 I think you chose the wrong words there. I think a flat file layout can be
 more performant than an RDBMS, but I don't think it's going to be
 more reliable or scalable than an RDBMS. There are far too many locking
 issues and transaction issues necessary for the terms "reliable and
 scalable", unless you're willing to spend a few years re-coding Oracle :-)

I actually think that there are times it can be all three. Notice how
I said there are *times* it can be all three; it definitely isn't the case
all the time. Neither are RDBMSs. ;-) 

Lots of places use databases for read-only queries. Having a database 
that gets lots of similar queries that are read-only makes it an
unnecessary single point of failure. Why not use the local disk and
use rsync to replicate the data around. This way if a machine goes down,
the others still have a full copy of the content and keep on running.

If you have a lot of data that you need to keep in sync and needs constant
updating with a random amount of different queries then you get some real
use out of a RDBMS.

I guess I am just saying that there are a gazillions of ways of doing things,
and each tool has something it is good at. File systems are really good
at serving up read-only content. So why re-invent the wheel? It all really
depends on what content you are dealing with and how you expect to query
it and use it.

There is a reason that table optimisation and tuning databases is such
a sought after skill. Most of these things require different things that
all rely on the type of content and their use. These things need to be
taken in consideration on a case by case basis.

You can do things terribly using Oracle and you can do things well using
Oracle. The same can be said about just about everything. ;-)


--
Sander van Zoest [[EMAIL PROTECTED]]
Covalent Technologies, Inc.   http://www.covalent.net/
(415) 536-5218 http://www.vanzoest.com/sander/




Re: Fast DB access

2000-10-11 Thread Matt Sergeant

On Wed, 11 Oct 2000, Sander van Zoest wrote:

 On Wed, 11 Oct 2000, Matt Sergeant wrote:
 
 Lots of places use databases for read-only queries. Having a database 
 that gets lots of similar queries that are read-only makes it an
 unnecessary single point of failure. Why not use the local disk and
 use rsync to replicate the data around. This way if a machine goes down,
 the others still have a full copy of the content and keep on running.

What is the actual use of the flat files in this case? Wouldn't generating
your HTML offline be better if your data is that static?

 You can do things terribly using Oracle and you can do things well using
 Oracle. The same can be said about just about everything. ;-)

You put your point well, and my only remaining point is that I think it's
far, far easier to screw up a flat file system by not taking into account
locking issues (just look at all those perl hit-counters that did it
wrong), and perhaps some reliability issues, than it is with a real
database. Caveat emptor, and all that.

-- 
Matt/

/||** Director and CTO **
   //||**  AxKit.com Ltd   **  ** XML Application Serving **
  // ||** http://axkit.org **  ** XSLT, XPathScript, XSP  **
 // \\| // ** Personal Web Site: http://sergeant.org/ **
 \\//
 //\\
//  \\




Re: Fast DB access

2000-10-11 Thread Sander van Zoest

On Wed, 11 Oct 2000, Matt Sergeant wrote:

 On Wed, 11 Oct 2000, Sander van Zoest wrote:
  On Wed, 11 Oct 2000, Matt Sergeant wrote:
  Lots of places use databases for read-only queries. Having a database 
  that gets lots of similar queries that are read-only makes it an
  unnecessary single point of failure. Why not use the local disk and
  use rsync to replicate the data around. This way if a machine goes down,
  the others still have a full copy of the content and keep on running.
 What is the actual use of the flat files in this case? Wouldn't generating
 your HTML offline be better if your data is that static?

The actual use of the flat files can vary. XML in some sense is a good
example; generated HTML is another. Sometimes a CSV or other format
works best. And other times, although technically not a flat file,
a dbm file can be a good/fast alternative as well. It really depends
on how flexible you need/want to be. XML is definitely becoming a
useful alternative here.

It is just that databases can carry a lot of unnecessary features/overhead
for results that can be pre-computed ahead of time.
  
  You can do things terribly using Oracle and you can do things well using
  Oracle. The same can be said about just about everything. ;-)
 You put your point well, and my only remaining point is that I think its
 far far easier to screw up a flat file system by not taking into account
 locking issues (just look at all those perl hit-counters that did it
 wrong), and perhaps some reliability issues, than it is with a real
 database. Caveat emptor, and all that.
  
I still have some locking issues with the mailing list archive. *grin*
I totally agree that it is far easier to screw up a flat file system.
It might not be as flexible as you really need it to be, because it was
built with a particular query, and performance on that query, in mind. Databases
are great and I am happy to have them. It is just that throwing down
the money for Oracle doesn't make it the answer to
all your problems. ;-)

Sometimes an LDAP system can make more sense than an RDBMS. Other times
a distributed system based on DNS. It all depends on what you value
most, how much you control your environment, and what you can live with
and what you can't.
 

Matt,

I am not sure if you are on Dean Gaudet's scalable mailing list
http://archive.covalent.net/new-httpd/2000/09/0478.xml, it is
definately a great place to see how people accomplish things with
their problem sets. In some odd way it reminds me of the old
alt.hackers days.

Cheers,

--
Sander van Zoest [[EMAIL PROTECTED]]
Covalent Technologies, Inc.   http://www.covalent.net/
(415) 536-5218 http://www.vanzoest.com/sander/




@INC startup under Win32

2000-10-11 Thread siberian.org

Caveat : I have built mod_perl on a gazillion unix boxes. This Win32 box is 
black magic to me, so I have no idea what I am doing; I just desperately need 
to get mod_perl running under NT. That said, here is my current situation.

Running

ActiveState Perl build 618
Apache 1.3.12

I ppm'd the mod_perl.ppd from theoryx5.uwinnipeg.ca and it installed properly.

I then add the LoadModule line to my httpd.conf

When apache tries to start up it says:

Can't locate Cwd.pm in @INC (@INC contains C:/WINNT/system32/lib .) at 
(eval 1) line 1

Ok, so it cannot find my libs, which live in C:\Perl\lib.

How do I get apache on startup to see the proper lib files?

John-




Re: @INC startup under Win32

2000-10-11 Thread Carlos Ramirez


Update your PATH environment variable to include C:\Perl\lib
-Carlos

"siberian.org" wrote:
Caveat : I have built modperl on a gazillion unix
boxes. This win32 is
black magic to me so I have no idea what I am doing, I just need to
get
mod_perl running under NT desperately. That said, here is my current
situation.
Running
ActiveState Perl build 618
Apache 1.3.12
I ppm'd the mod_perl.ppd from theoryx5.uwinnipeg.ca and it installed
properly.
I then add the LoadModule line to my httpd.conf
When apache tries to startup it says :
Can't location Cwd.pm in @INC (@INC contains C:/WINNT/system32/lib .)
at
(eval 1) line 1
Ok, so it can not find my libs which live in C:\Perl\lib.
How do I get apache on startup to see the proper lib files?
John-

--
---
Carlos Ramirez + Boeing + Reusable Space Systems + 714.372.4181
---
-- Don't make me use uppercase



Re: @INC startup under Win32

2000-10-11 Thread siberian.org

I tried that ( and again ) with no luck :

set PATH=C:\Perl;C:\Perl\lib;C:\Perl\bin;C:\WINNT;C:\WINNT\system32

but apache gives me the same error. @INC consists only of 
C:\winnt\system32\lib, which isn't even in my Path environment variable. I 
have no idea where Apache is getting its @INC from at startup..

Thanks though :(

John-

At 03:27 PM 10/11/00 -0700, Carlos Ramirez wrote:
Update your PATH evironment variable to include C:\Perl\lib

-Carlos


"siberian.org" wrote:
Caveat : I have built modperl on a gazillion unix boxes. This win32 is
black magic to me so I have no idea what I am doing, I just need to get
mod_perl running under NT desperately. That said, here is my current 
situation.

Running

ActiveState Perl build 618
Apache 1.3.12

I ppm'd the mod_perl.ppd from theoryx5.uwinnipeg.ca and it installed 
properly.

I then add the LoadModule line to my httpd.conf

When apache tries to startup it says :

Can't locate Cwd.pm in @INC (@INC contains C:/WINNT/system32/lib .) at
(eval 1) line 1

Ok, so it can not find my libs which live in C:\Perl\lib.

How do I get apache on startup to see the proper lib files?

John-

--
---
Carlos Ramirez + Boeing + Reusable Space Systems + 714.372.4181
---
-- Don't make me use uppercase




Re: @INC startup under Win32

2000-10-11 Thread Carlos Ramirez


Are you able run perl from a DOS terminal? i.e. perl -v (without having
to run set PATH ...)
If not, your Perl PATH is not set. Try setting your system environment
via:
right-click on My Computer and select Properties, then set the System
environment PATH variable by clicking on the Environment tab.
I think running set from the DOS terminal only sets your current session
and does not set the system-wide environment. (I think???)
-Carlos


"siberian.org" wrote:
I tried that ( and again ) with no luck :
set PATH=C:\Perl;C:\Perl\lib;C:\Perl\bin;C:\WINNT;C:\WINNT\system32
but apache gives me the same error. @INC consists only of
C:\winnt\system32\lib, which isn't even in my Path environment variable. I
have no idea where Apache is getting its @INC from at startup..
Thanks though :(
John-
At 03:27 PM 10/11/00 -0700, Carlos Ramirez wrote:
>Update your PATH environment variable to include C:\Perl\lib
>
>-Carlos
>
>
>"siberian.org" wrote:
>>Caveat : I have built modperl on a gazillion unix boxes. This win32
is
>>black magic to me so I have no idea what I am doing, I just need
to get
>>mod_perl running under NT desperately. That said, here is my current
>>situation.
>>
>>Running
>>
>>ActiveState Perl build 618
>>Apache 1.3.12
>>
>>I ppm'd the mod_perl.ppd from theoryx5.uwinnipeg.ca and it installed
>>properly.
>>
>>I then add the LoadModule line to my httpd.conf
>>
>>When apache tries to startup it says :
>>
>>Can't locate Cwd.pm in @INC (@INC contains C:/WINNT/system32/lib .) at
>>(eval 1) line 1
>>
>>Ok, so it can not find my libs which live in C:\Perl\lib.
>>
>>How do I get apache on startup to see the proper lib files?
>>
>>John-
>
>--
>---
>Carlos Ramirez + Boeing + Reusable Space Systems + 714.372.4181
>---
>-- Don't make me use uppercase

--
---
Carlos Ramirez + Boeing + Reusable Space Systems + 714.372.4181
---
-- Don't make me use uppercase



Re: @INC startup under Win32

2000-10-11 Thread siberian.org

Yup, from DOS shell everything works great. Perl runs great. I tried 
setting my path via the properties ( you were correct, DOS only sets it for 
the session and not for the system ) but it had no impact on apache. 
Apache.exe still looks only in the winnt\system32\lib directory.

Windows is a frustrating experience :(

John-

At 03:55 PM 10/11/00 -0700, Carlos Ramirez wrote:
Are you able to run perl from a DOS terminal? i.e. perl -v (without having to 
run set PATH ...)
If not, your Perl PATH is not set. Try setting your system environment via:

right-click on My Computer and select Properties, then set the System 
environment PATH variable by clicking on the Environment tab.

I think running set from the DOS terminal only sets your current session 
and does not set the system-wide environment. (I think???)

-Carlos



"siberian.org" wrote:
I tried that ( and again ) with no luck :

set PATH=C:\Perl;C:\Perl\lib;C:\Perl\bin;C:\WINNT;C:\WINNT\system32

but apache gives me the same error. @INC consists only of
C:\winnt\system32\lib, which isn't even in my Path environment variable. I
have no idea where Apache is getting its @INC from at startup..

Thanks though :(

John-

At 03:27 PM 10/11/00 -0700, Carlos Ramirez wrote:
 Update your PATH environment variable to include C:\Perl\lib
 
 -Carlos
 
 
 "siberian.org" wrote:
 Caveat : I have built modperl on a gazillion unix boxes. This win32 is
 black magic to me so I have no idea what I am doing, I just need to get
 mod_perl running under NT desperately. That said, here is my current
 situation.
 
 Running
 
 ActiveState Perl build 618
 Apache 1.3.12
 
 I ppm'd the mod_perl.ppd from theoryx5.uwinnipeg.ca and it installed
 properly.
 
 I then add the LoadModule line to my httpd.conf
 
 When apache tries to startup it says :
 
 Can't locate Cwd.pm in @INC (@INC contains C:/WINNT/system32/lib .) at
 (eval 1) line 1
 
 Ok, so it can not find my libs which live in C:\Perl\lib.
 
 How do I get apache on startup to see the proper lib files?
 
 John-
 
 --
 ---
 Carlos Ramirez + Boeing + Reusable Space Systems + 714.372.4181
 ---
 -- Don't make me use uppercase

--
---
Carlos Ramirez + Boeing + Reusable Space Systems + 714.372.4181
---
-- Don't make me use uppercase
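For what it's worth, the usual workaround here is to stop depending on the service's environment altogether: Apache running as an NT service reads the *System* environment at boot, not your DOS session, so a `set PATH` never reaches it. A sketch of the startup-script route follows — the file locations and lib paths are assumptions based on the default ActivePerl layout mentioned above, so adjust to taste:

```perl
# startup.pl -- loaded from httpd.conf with a line like:
#   PerlRequire c:/apache/conf/startup.pl
# (the startup.pl location is an assumption; put it wherever suits you)

# Prepend ActivePerl's library directories to @INC so Apache-as-a-service,
# which never sees your user PATH, can still find Cwd.pm and friends.
use lib 'C:/Perl/lib', 'C:/Perl/site/lib';

use Cwd ();   # preload core modules here so every request sees them

1;   # a PerlRequire'd file must return a true value
```

The other route is to edit the System (not User) PATH in the Environment tab and reboot, since the service only picks up environment changes at boot time.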




RE: Wild Proposal :)

2000-10-11 Thread Perrin Harkins

On Wed, 11 Oct 2000, Stephen Anderson wrote:
  There's DBI::Proxy already.  Before jumping on the "we need pooled
  connections" bandwagon, you should read Jeffrey Baker's post on the
  subject here:
 
 http://forum.swarthmore.edu/epigone/modperl/breetalwox/38B4DB3F.612476CE@acm.org
 
 People always manage to miss the point on this one. It's not about saving
 the cycles required to open the connection, as they're minimal at worst.
 It's about saving the _time_ to open the connection.

My point was that Apache::DBI already gives you persistent connections,
and when people say they want actual pooled connections instead they
usually don't have a good reason for it.

- Perrin
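(For readers new to the thread, a sketch of what Apache::DBI's persistence looks like in practice — the DSN and credentials below are placeholders, not anything from this discussion:)

```perl
# startup.pl fragment -- Apache::DBI must be loaded before DBI or any
# module that uses DBI, so its connect() override is installed first.
use Apache::DBI ();

# Optionally open the handle when each child starts, instead of on the
# first request (DSN/user/password here are placeholders):
Apache::DBI->connect_on_init(
    'dbi:Pg:dbname=mydb', 'user', 'password',
    { RaiseError => 1, AutoCommit => 1 },
);

1;
```

Handler code then calls DBI->connect(...) exactly as before; given the same arguments it gets the cached per-process handle back, so the connection cost is paid once per child rather than once per request — which is the persistence Perrin is referring to.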




Re: about a error

2000-10-11 Thread Joshua Chamas

Like the error message says, try to find the global.asa
that should have been in the ./eg directory and add it
there.  

Otherwise, try a fresh install using perl's CPAN,
"perldoc CPAN" for more info, for Apache::ASP to get
the global.asa and .htaccess that comes with the examples.
It may also be that the .htaccess is not being used, 
see http://www.apache-asp.org/install.html#Quick%20Start
for more info.

--Joshua

jsquan wrote:
 
 modperl,
I have installed mod_perl and apache_1.3.12.tgz (via pkg_add) and so on,
 but I hit a problem when visiting http://localhost/asp/site/eg/index.html.
Please help me. Thanks.
 
 Errors Output
 
  %EG is not defined, make sure you copied ./eg/global.asa correctly at (eval 8) 
line 5.
 , /usr/local/lib/perl5/site_perl/5.005/Apache/ASP.pm line 1229
 
 Debug Output
 
  RUN ASP (v0.18) for /usr/local/www/data/asp/site/eg/index.html
  GlobalASA package Apache::ASP::Demo
  ASP object created - GlobalASA: Apache::ASP::GlobalASA=HASH(0x8353348); Request: 
Apache::ASP::Request=HASH(0x831c730); Response: 
Apache::ASP::Response=HASH(0x831c634); Server: Apache::ASP::Server=HASH(0x831c508); 
basename: index.html; compile_includes: 1; dbg: 2; debugs_output: ARRAY(0x81bb83c); 
filename: /usr/local/www/data/asp/site/eg/index.html; global: /tmp; global_package: 
Apache::ASP::Demo; id: NoCache; includes_dir: ; init_packages: ARRAY(0x8286114); 
no_cache: 1; no_state: 1; package: Apache::ASP::Demo; pod_comments: 1; r: 
Apache=SCALAR(0x839f718); sig_warn: ; stat_inc: ; stat_inc_match: ; stat_scripts: 1; 
unique_packages: ; use_strict: ;
  parsing index.html
  runtime exec of dynamic include header.inc args ()
  parsing header.inc
  loaded module Apache::Symbol
  active undefing sub Apache::ASP::Demo::_tmp_header_inc code CODE(0x81fedbc)
  compile include header.inc sub _tmp_header_inc
  runtime exec of dynamic include footer.inc args ()
  parsing footer.inc
  active undefing sub Apache::ASP::Demo::_tmp_footer_inc code CODE(0x83b255c)
  compile include footer.inc sub _tmp_footer_inc
  active undefing sub Apache::ASP::Demo::NoCache code CODE(0x83b2688)
  compiling into package Apache::ASP::Demo subid Apache::ASP::Demo::NoCache
  executing NoCache
  %EG is not defined, make sure you copied ./eg/global.asa correctly at (eval 8) 
line 5.
 , /usr/local/lib/perl5/site_perl/5.005/Apache/ASP.pm line 1229
 
 ASP to Perl Program
 
   1: package Apache::ASP::Demo; ;; sub Apache::ASP::Demo::NoCache {  ;;  return(1) unless $_[0];  ;; no strict;;use vars qw($Application $Session $Response $Server $Request);;
   2: # split the page in 2 for nice formatting and english style sorting
   3: my(@col1, @col2);
   4: my @keys = sort keys %EG;
   5: @keys || die("\%EG is not defined, make sure you copied ./eg/global.asa correctly");
   6: my $half = int(@keys/2) + 1;
   7:
   8: for(my $i = 0; $i <= $#keys; $i++) {
   9:    if($i < $half) {
  10:        push(@col1, $keys[$i]);
  11:    } else {
  12:        push(@col2, $keys[$i]);
  13:    }
  14: }
  15: $Response->Debug('col1', \@col1, 'col2', \@col2);
  16: $title = 'Example ASP Scripts';
  17: $Response->Write('
  18:
  19: '); $Response->Include('header.inc', ); $Response->Write('
  20:
  21: <table border=0>
  22: '); while(@col1) {
  23:    my $col1 = shift @col1;
  24:    my $col2 = shift @col2;
  25:    $Response->Write('
  26:    <tr>
  27:    '); for([$col1, $EG{$col1}], '', [$col2, $EG{$col2}]) {
  28:        unless(ref $_) {
  29:            print "<td width=10>&nbsp;</td>";
  30:            next;
  31:        }
  32:        next unless $_->[0]; # last col / last row
  33:
  34:        # clean up the descriptions
  35:        $_->[1] =~ s/\s*\.\s*$//s;
  36:        $_->[1] .= '.';
  37:
  38:        $Response->Write('
  39:        <td valign=top>
  40:        <nobr>
  41:        <font size=-0>
  42:        <tt><b>
  43:        <a href='.($_->[0]).'>'.($_->[0]).'</a>
  44:        '); if($_->[0] =~ /\.(htm|asp|ssi|xml)$/) { $Response->Write('
  45:        &nbsp;
  46:        <i>
  47:        (<a href=source.asp?file='.($_->[0]).'>source</a>)
  48:        </i>
  49:        '); } $Response->Write('
  50:        </tt></b>
  51:        </font>
  52:        </nobr>
  53:        <br>
  54:        <font size=-1>'.($_->[1]).'</font>
  55:        </td>
  56:        '); } $Response->Write('
  57:        </tr>
  58: '); } $Response->Write('
  59: </table>
  60:
  61: '); $Response->Include('footer.inc', );  ;; }
 
 jsquan
 [EMAIL PROTECTED]



Re: hi all

2000-10-11 Thread Differentiated Software Solutions Pvt. Ltd

Hi,

We had a similar problem with postgres db.
We had a large query running to 3 kb and the query ran forever without ever
getting completed.
We solved this by breaking the query into parts and executing each part
separately... i.e., by creating a hash of the output of one step and filtering
it into the next step, and so on...

Hope this helps.

Murali

- Original Message -
From: Rajesh Mathachan [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, October 11, 2000 2:34 PM
Subject: hi all


 hi all,
 we have a query which goes to 7kb and we use mysql and php; the server
 is literally crashing when we do the process.
 what is the other alternative for me?
 The site is a quiz site.
 regards
 rajesh mathachan

 --
 QuantumLink Communications, Bombay, India






Re: [OT] hi all

2000-10-11 Thread Rodney Broom

RM> we have a query which goes to 7kb...

"7 kb"? I don't mean to be picky, but do you mean "seven kilo-bytes"? I'm
thinking that either you mean some much larger number, or that I'm missing
something terribly.

Either way, what does your query look like? Are you joining across 3 tables and
then back onto one of those tables again, and then using a bunch of LIKEs and
ORs? Or is this just a simple "select * from xyz"?

If it isn't obvious from your query what the problem is, then we should
probably know a bit about your server config. Like, "We're running on Win3.11".
;-)


Rodney Broom






Re: [OT] hi all

2000-10-11 Thread Dave Baker

On Wed, Oct 11, 2000 at 08:24:13PM -0700, Rodney Broom wrote:
 RM> we have a query which goes to 7kb...
 
 "7 kb"? I don't mean to be picky, but do you mean "seven kilo-bytes"? I'm
 thinking that either you mean some much larger number, or that I'm missing
 something terribly.
 

I read this as meaning the QUERY string is 7k in size, not the result set.

A 7k query is pretty hefty, however you slice it... the words 'stored
procedure' come to mind (but that's always another story)


dave


-- 

-  Dave Baker  :  [EMAIL PROTECTED]  :  http://dsb3.com/  :  AIM#duirwyrd  -
GPG: 1024D/D7BCA55D / 09CD D148 57DE 711E 6708  B772 0DD4 51D5 D7BC A55D




Re: [OT] hi all

2000-10-11 Thread Rodney Broom

DB> I read this as meaning the QUERY string is 7k in size, not the result set.

Hmm, I didn't think of that. Yes, that would be a big query.

DB> ...the words 'stored
DB> procedure' come to mind (but that's always another story)

Yes, no stored procedures in MySQL. But if this does refer to 7KB of text in
the query, then I have to think that there's a better way to write it. I wrote a
little search engine that did a bit of:

where (
  id = 3 or
  id = 5 or
  id = 2838
 ...
)
But that was to get around a bad LIKE statement. And it actually runs pretty
well. My thought would still be that the statement can probably be cleaned up a
bit. Hey Rajesh, I know that you probably don't want to share the exact details of
the query for business reasons, but any indication you can give would help, in my
opinion.


Rodney Broom
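For the record, the usual cleanup for that kind of OR chain is to collapse it into a single IN list with bind placeholders. A sketch (the table and column names are just the ones from the example above; only the SQL text is built here — under DBI you would pass it to $dbh->selectall_arrayref($sql, undef, @ids)):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build one "IN (?, ?, ...)" clause instead of emitting
# "id = 3 or id = 5 or ..." for every key.
my @ids = (3, 5, 2838);
my $placeholders = join ', ', ('?') x @ids;
my $sql = "SELECT * FROM xyz WHERE id IN ($placeholders)";

print "$sql\n";   # prints "SELECT * FROM xyz WHERE id IN (?, ?, ?)"
```

Placeholders also keep the statement text short and let the driver quote the values, which matters when the value list is what pushed the query toward 7KB in the first place.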