[Note Reply-to: dbi-users]

On Tue, Oct 19, 1999 at 10:58:42AM +0100, [EMAIL PROTECTED] wrote:
> Hi. I realise this is getting off-topic, so I suppose replies should go
> direct to me unless they'll interest the list.
> 
> I work on a site that makes use of mod_perl, Apache and MySQL. We are
> currently toying around with our server set-up, trying to spread the
> load across multiple machines. For web-serving, this is fairly simple,
> but we're concerned about our MySQL server. Currently, different apps
> sit on different boxes, each with its own MySQL. However, for ease of
> upgrading, we're thinking of moving all MySQL databases to dedicated
> machine(s).
> 
> On  7 Oct, Pascal Eeftinck wrote:
> 
> >  MySQL is quick, it's by far the fastest you can get at most operations. On the
> >  other hand, you can't easily spread the load over multiple servers
> 
> This is our current concern. Is the single machine a good way to go? If
> one app takes down MySQL (which unfortunately does happen once in a
> while) then all apps lose their database. But if the machine gets
> bogged down, we can throw more ram/disk space at it. Is it possible to
> run MySQL across multiple servers? Should we be looking at a solution
> with multiple database servers instead of one machine? At the
> hardware level, this would be more reliable, but at the script level,
> we'd have to keep track of multiple machines, and being a lazy perl
> monkey, I want all my scripts to talk to the database in an identical
> manner.
> 
> Basically, what I'm after is a few words of advice :)
> Perhaps I should take this to a MySQL mailing list, but given the
> frequency of the word "MySQL" on this list I thought I'd chance my arm.

There's a DBD::Multiplex in development (by Thomas Kishel and myself)
that's designed to allow you to spread the load across multiple
database servers and/or add resilience in case one goes down.

It's still rather alpha quality but it does work. It might be worth
a look. I've appended his last message on the subject.

Tim.


A newer version of DBD::Multiplex is available at:

ftp://not.tdlc.com/pub/Multiplex.pm

DBD::Multiplex is a Perl module which works with the DBI to provide
access to multiple datasources using singular DBI calls, like this:

$dsn1 = 'dbi:Pg:dbname=aaa;host=10.0.0.1;mx_id=db-aaa-1';
$dsn2 = 'dbi:Pg:dbname=bbb;host=10.0.0.2;mx_id=db-bbb-2';
$dsn3 = 'dbi:Pg:dbname=ccc;host=10.0.0.3;mx_id=db-ccc-3';
$dbh = DBI->connect("dbi:Multiplex:$dsn1|$dsn2|$dsn3", 'username', 'password');

This version allows you to pass in a reference to a custom error procedure.
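As a rough sketch of what that looks like (assuming the attribute is named
mx_error_proc, as in later CPAN releases of DBD::Multiplex; the handler name
and its argument list here are illustrative, not part of any documented API):

```perl
use DBI;

# Hypothetical error handler: called when one of the child datasources
# fails, so the application can log the failure and carry on with the rest.
sub my_error_proc {
    my (@error_info) = @_;    # assumed: details of the failing datasource
    warn "Multiplex child datasource failed: @error_info\n";
}

my $dbh = DBI->connect(
    "dbi:Multiplex:$dsn1|$dsn2|$dsn3",
    'username', 'password',
    { mx_error_proc => \&my_error_proc },    # assumed attribute name
);
```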

This version passes all but one test in the PostgreSQL test.pl script.
(The $dbh->{Name} test expects a single name; DBD::Multiplex returns an
array.)

It works layered above, but is not yet tested below, DBD::Proxy. Storable
complains about storing a reference to a procedure, but it still works.

This version includes a --preliminary-- 'randomize DSNs' mode.
(It randomly shuffles the DSN list before connecting. Selecting a DSN
at random after connecting does not yet return the expected result.)

Feel free to test and use it against other database drivers and report
your results. Discussion on the design for handling multiple result and
error sets would be appreciated, as would ideas for falling back to and
restoring broken connections. Thank you.
