Corin wrote:
> Hi all,
> I'm running quite a large social community website (250k users, 16gb
> database). We are currently preparing a complete relaunch and thinking
> about switching from mysql 5.1.37 innodb to postgresql 8.4.2. The
The "relaunch" suggests you are nearing the end (the "launch") of
I've also observed the same behaviour on a very large table (200GB data,
170GB for 2 indexes)
I have a table which has 6 small columns, let's call them (a, b, c, d, e, f)
and about 1 billion rows. There is an index on (a, b, c, d) - not my idea;
Hibernate requires a primary key for every table
On Wed, 2010-03-17 at 16:49 -0400, Greg Smith wrote:
> Alvaro Herrera wrote:
> > Andres Freund escribió:
> >
> >
> >> I find it much easier to believe such issues exist on tables in
> >> contrast to indexes. The likelihood of getting sequential accesses
> >> on an index is small enough
On 18-3-2010 16:50 Scott Marlowe wrote:
It's different because it only takes pgsql 5 milliseconds to run the
query, and 40 seconds to transfer the data across to your application,
which THEN promptly throws it away. If you run it as
MySQL's client lib doesn't transfer over the whole thing.
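For what it's worth, the pattern Scott is describing can be sketched in a few lines. This uses sqlite3 from the Python standard library purely as a stand-in (an assumption, just to keep the snippet self-contained and runnable); with PostgreSQL the same idea is a server-side (named) cursor in psycopg, so the client never materializes the whole result set:

```python
# Sketch: process a large result in chunks with fetchmany() instead of
# fetchall(), so the client never holds all rows in memory at once.
# sqlite3 stands in for a real PostgreSQL connection here (assumption).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (n INTEGER)")
conn.executemany("INSERT INTO big VALUES (?)", ((i,) for i in range(50_000)))

cur = conn.execute("SELECT n FROM big")
total = 0
while True:
    chunk = cur.fetchmany(1000)   # pull 1000 rows at a time
    if not chunk:
        break
    total += sum(n for (n,) in chunk)

print(total)  # → 1249975000, i.e. sum of 0..49999
```

The peak memory on the client side stays bounded by the chunk size rather than by the result-set size, which is exactly the difference that matters when the query returns far more rows than the application really needs at once.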
On Thu, Mar 18, 2010 at 8:31 AM, Corin wrote:
> Hi all,
>
> [...]
On Thu, Mar 18, 2010 at 16:09, Stephen Frost wrote:
> Corin,
>
> * Corin (wakath...@gmail.com) wrote:
>> {"QUERY PLAN"=>"Total runtime: 5.847 ms"}
>
> This runtime is the amount of time it took for the backend to run the
> query.
>
>> 44.173002243042
>
> These times include all the time requ
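The gap between the two numbers can be made visible from the client side with a stopwatch around the query and the fetch. A minimal sketch follows; sqlite3 from the standard library is used only so the example is self-contained (an assumption), but with PostgreSQL you would compare the client-measured total against the "Total runtime" line from EXPLAIN ANALYZE in just the same way:

```python
# Sketch: the backend's own runtime covers only query execution; the
# client-side stopwatch additionally includes transferring and
# materializing the result set, which is where the missing ~38 ms
# in a 5.8 ms vs 44 ms comparison typically goes.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, "x" * 100) for i in range(100_000)))

t0 = time.perf_counter()
cur = conn.execute("SELECT * FROM t")     # statement issued
t_first = time.perf_counter() - t0
rows = cur.fetchall()                     # transfer + materialization
t_total = time.perf_counter() - t0

print(f"statement issued after {t_first * 1000:.2f} ms; "
      f"full fetch done after {t_total * 1000:.2f} ms ({len(rows)} rows)")
```

Timing only the execute() call and forgetting the fetch is the usual way such benchmarks end up comparing apples to oranges between two databases' client libraries.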
Dimitri Fontaine wrote:
> I still think the best tool around currently for this kind of testing is
> tsung
I am happy to say that for now, pgbench is the only actual testing tool
supported. Done; now I don't need tsung.
However, that doesn't actually solve any of the problems I was talking
abo
Corin,
* Corin (wakath...@gmail.com) wrote:
> [...]
The time that psql or pgAdmin shows is purely the PostgreSQL time.
The question here was about the actual application's time. Sometimes the
data transmission, fetch and processing on the app's side can take longer
than the 'postgresql' time.
On 18 March 2010 14:31, Corin wrote:
> Hi all,
>
> [...]
If you expect this DB to be memory resident, you should update
the cpu/disk cost parameters in postgresql.conf. There was a
post earlier today with some more reasonable starting values.
Certainly your test DB will be memory resident.
Ken
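For reference, a typical starting point for a fully cached database looks something like the fragment below; the values are illustrative assumptions for a 12gb machine, not tuned settings for this particular box:

```
# postgresql.conf -- illustrative starting values for a memory-resident DB
seq_page_cost = 1.0
random_page_cost = 1.0        # default 4.0 assumes spinning disks doing real seeks
effective_cache_size = 10GB   # roughly RAM minus shared_buffers and OS overhead
```

With random and sequential page costs close together, the planner stops penalizing index scans for I/O that will never actually hit disk.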
On Thu, Mar 18, 2010 at 03:31:18PM +0100, Corin wrote:
> Hi
I guess we need some more details about the test. Is the
connection/disconnection part of each test iteration? And how are the
databases connected (using a socket / localhost / a different host)?
Anyway measuring such simple queries will tell you almost nothing about
the general app performance - us
Hi all,
I'm running quite a large social community website (250k users, 16gb
database). We are currently preparing a complete relaunch and thinking
about switching from mysql 5.1.37 innodb to postgresql 8.4.2. The
database server is a dual dual-core Opteron 2216 with 12gb ram running on
debian
On Sun, 14 Mar 2010, David Newall wrote:
> nohup time pg_dump -f database.dmp -Z9 database
I presumed pg_dump was CPU-bound because of gzip compression, but a test I
ran makes that seem unlikely...
There was some discussion about this a few months ago at
http://archives.postgresql.org/pgsql-
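One way to sanity-check whether the -Z9 compression is the bottleneck is to time the compressor alone at different levels on a sample of the data. A rough sketch using Python's gzip module (the synthetic repetitive-SQL sample is an assumption; real dump text will compress differently, so treat the numbers as relative, not absolute):

```python
# Sketch: compare gzip CPU cost and output size across compression
# levels, to judge whether level 9 is worth its CPU time for a dump.
# The sample data is a synthetic stand-in for pg_dump output (assumption).
import gzip
import time

sample = b"INSERT INTO t VALUES (1, 'abcdefgh');\n" * 100_000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = gzip.compress(sample, compresslevel=level)
    dt = time.perf_counter() - t0
    print(f"level {level}: {len(sample)} -> {len(out)} bytes in {dt:.3f} s")
```

If level 9 turns out to dominate the wall-clock time, a common workaround on a multi-core box is to pipe an uncompressed `pg_dump` through an external compressor so the compression runs on a different CPU than the dump itself.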
Greg Smith writes:
> I'm not sure how to make progress on similar ideas about
> tuning closer to the filesystem level without having something automated
> that takes over the actual benchmark running and data recording steps; it's
> just way too time consuming to do those right now with every too