[EMAIL PROTECTED] ("Anjan Dave") writes:
> I would like to know whether there are any significant performance
> advantages of compiling (say, 7.4) on your platform (being RH7.3, 8,
> and 9.0, and Fedora especially) versus getting the relevant binaries
(rpm) from the postgresql site? Hardware is Intel XEON (various
On Tuesday 03 February 2004 22:29, Kevin wrote:
> The mammoth replicator has been working well. I had tried
> the pgsql-r and had limited success with it, and dbmirror was just
> taking too long, having to do 4 db transactions just to mirror one
> command. I have eserv but was never really a java k
First just wanted to say thank you all for the quick and helpful
answers. With all the input I know I am on the right track. With that
in mind I created a perl script to do my migrations and to do it based
on moving from a db name to a schema name. I had done a lot of the
reading on converting
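For the db-name-to-schema-name move, the SQL side is small; a sketch with made-up client/table names (each former MySQL database becomes one schema in a single Postgres database):

```sql
-- One database; each former MySQL database becomes a schema.
CREATE SCHEMA client_acme;
CREATE TABLE client_acme.orders (id serial PRIMARY KEY, amount numeric);

-- Per connection, pick the client with search_path instead of reconnecting:
SET search_path TO client_acme;
SELECT count(*) FROM orders;   -- resolves to client_acme.orders
```

The migration script only has to issue CREATE SCHEMA once per client and qualify (or search_path) the rest.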
Hello,
I would like to know whether there are any significant performance
advantages of compiling (say, 7.4) on your platform (being RH7.3, 8, and
9.0, and Fedora especially) versus getting the relevant binaries (rpm)
from the postgresql site? Hardware is Intel XEON (various
[EMAIL PROTECTED] ("Kevin Carpenter") writes:
> I am doing a massive database conversion from MySQL to Postgresql for a
> company I am working for. This has a few quirks to it that I haven't
> been able to nail down the answers I need from reading and searching
> through previous list info.
>
> Fo
On Tuesday 03 February 2004 16:42, Kevin Carpenter wrote:
>
> Thanks in advance, will give more detail - just looking for some open
> directions and maybe some kicks to fuel my thought in other areas.
I've taken to doing a lot of my data manipulation (version conversions etc) in
PG even if the fi
Wow, I didn't know that (didn't get far enough to test any rollback).
That's not a good thing. But then again, it's MySQL; who
needs rollback anyway?
On Feb 2, 2004, at 5:44 PM, Christopher Kings-Lynne wrote:
One more thing that annoyed me. If you started a process, such as a
large DDL opera
Kevin,
> With the size of each single db, I don't
> know how I could put them all together under one roof, and if I was
> going to, what are the maximums that Postgres can handle for tables in
> one db? We track over 2 million new points of data (records) a day, and
> are moving to 5 million in
> > This is called a "materialized view". PostgreSQL doesn't support them
> > yet, but most people think it would be a Good Thing to have.
>
> There is a project on gborg (called "mview" iirc) though I don't know how
> far it's got - I think it's still pretty new.
tnx
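Until real materialized views land, one can be hand-rolled as a summary table kept current by a trigger. A minimal sketch in 7.4-era PL/pgSQL (table and function names are made up):

```sql
-- Base table and its hand-maintained "materialized view" (hypothetical names).
CREATE TABLE orders (customer_id integer, amount numeric);
CREATE TABLE order_totals (customer_id integer PRIMARY KEY, total numeric);

-- Trigger function keeps the summary in sync on every insert.
CREATE FUNCTION update_order_totals() RETURNS trigger AS '
BEGIN
    UPDATE order_totals SET total = total + NEW.amount
        WHERE customer_id = NEW.customer_id;
    IF NOT FOUND THEN
        INSERT INTO order_totals VALUES (NEW.customer_id, NEW.amount);
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER orders_totals AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE update_order_totals();
```

Queries then read the small order_totals table instead of aggregating orders each time; UPDATE/DELETE on the base table would need matching triggers.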
Josh Berkus wrote:
Folks,
I've had requests from a couple of businesses to see results of informal MySQL
+InnoDB vs. PostgreSQL tests. I know that we don't have the setup to do
full formal benchmarking, but surely someone in our community has gone
head-to-head on your own application?
Josh,
Jeff <[EMAIL PROTECTED]> writes:
> Not sure at what point it will topple, in my case it didn't matter if it
> ran good with 5 clients as I'll always have many more clients than 5.
I did some idle, very unscientific tests the other day that indicated
that MySQL insert performance starts to suck wi
On Tue, 03 Feb 2004 11:42:59 -0500
"Kevin Carpenter" <[EMAIL PROTECTED]> wrote:
> For starters, I am moving roughly 50 separate databases, each of which
> represents one of our clients and is roughly 500 megs to 3 gigs in
> size.
> Currently we are using the MySQL replication, and so I am looking
Jeff <[EMAIL PROTECTED]> writes:
> On Tue, 03 Feb 2004 11:46:05 -0500
> Tom Lane <[EMAIL PROTECTED]> wrote:
>> I did some idle, very unscientific tests the other day that indicated
>> that MySQL insert performance starts to suck with just 2 concurrent
>> inserters. Given a file containing 1 IN
Hello everyone,
I am doing a massive database conversion from MySQL to Postgresql for a
company I am working for. This has a few quirks to it that I haven't
been able to nail down the answers I need from reading and searching
through previous list info.
For starters, I am moving roughly 50 separate
On 2 Feb 2004 at 16:45, scott.marlowe wrote:
> Do you have the cache set to write back or write through? Write through
> can be a performance killer. But I don't think your RAID is the problem,
> it looks to me like postgresql is doing a lot of I/O. When you run top,
> do the postgresql proc
Czuczy Gergely <[EMAIL PROTECTED]> writes:
> to leave it unspecified, what value should I set in the paramTypes array?
> and could you insert this answer into the docs? it could be useful
It is in the docs:
paramTypes[] specifies, by OID, the data types to be assigned to the
parameter symbols
hello
to leave it unspecified, what value should I set in the paramTypes array?
and could you insert this answer into the docs? it could be useful
Bye,
Gergely Czuczy
mailto: [EMAIL PROTECTED]
PGP: http://phoemix.harmless.hu/phoemix.pgp
The point is, that geeks are not necessarily the outcasts
soc
Czuczy Gergely <[EMAIL PROTECTED]> writes:
> I've read in the docs that to use the proper indexes both types must match
> in the where clause; to achieve this the user can simply put a string on
> one side of the equals sign and pgsql will convert it automatically. My
> question is, when I'm using PQex
> script [I also decided to use this perl script for testing PG to be
> fair].
>
> For one client mysql simply screamed.
>
If you already have a test case set up, you could tell us at what point Postgres
starts to beat MySQL. Because if it still "screams" with 5 clients then I
would give it a try in cas
Well, when I prepared my PG presentation I did some testing of MySQL (So
I could be justified in calling it lousy :). I used the latest release
(4.0.something I think)
I was first bitten by my table type being MyISAM when I thought I had set
the default to InnoDB. But I decided that since my test was g
Put it on a RAM disk.
chris
David Teran wrote:
we are trying to speed up a database which has about 3 GB of data. The
server has 8 GB RAM and we wonder how we can ensure that the whole DB is
read into RAM. We hope that this will speed up some queries.
Neither the DBA nor postgresql has to do anything about it. Usually the OS cac
Hi,
we are trying to speed up a database which has about 3 GB of data. The
server has 8 GB RAM and we wonder how we can ensure that the whole DB
is read into RAM. We hope that this will speed up some queries.
regards David
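There is no supported way in 7.4 to pin the whole database in RAM; the usual list advice is to let the OS filesystem cache do the work and simply warm it up by reading the tables once. A crude sketch (table name made up):

```sql
-- Reading every row pulls the table's pages into the OS cache (and some
-- into PostgreSQL's own shared buffers):
SELECT count(*) FROM big_table;

-- In postgresql.conf, shared_buffers sets PostgreSQL's own cache size;
-- effective_cache_size does not allocate anything, it only tells the
-- planner how much the OS is likely to be caching.
```

With 8 GB of RAM and a 3 GB database, after one full read the data should stay cached as long as nothing else competes for memory.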
Hello
I've read in the docs that to use the proper indexes both types must match in
the where clause; to achieve this the user can simply put a string on one
side of the equals sign and pgsql will convert it automatically. My
question is, when I'm using PQexecParams, should I give all the values as
a
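The index/type-matching issue can be seen in plain SQL before touching PQexecParams. In 7.x the planner only considers an index when the literal's type matches the column's; a quoted literal stays untyped until the comparison, so it can be coerced to match. A sketch against a made-up int8 column:

```sql
CREATE TABLE events (id int8 PRIMARY KEY, payload text);

-- 7.x: a bare small literal is typed int4 and can defeat the int8
-- index, producing a sequential scan:
SELECT * FROM events WHERE id = 42;

-- Quoted, the literal is resolved against the column as int8 and the
-- index can be used:
SELECT * FROM events WHERE id = '42';
```

With PQexecParams the analogous move is leaving a paramTypes entry as 0 (or the whole array NULL), which lets the server infer each parameter's type from context.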
You could do high speed inserts with COPY command:
http://developer.postgresql.org/docs/postgres/sql-copy.html
Check whether your database adapter/client lib supports it (I guess it
does).
Note that it doesn't help very much if there are FKs/triggers on the
target table.