On Fri, Nov 9, 2012 at 7:15 AM, Craig Ringer wrote:
> On 11/08/2012 09:29 PM, Denis wrote:
>> Ok guys, it was not my intention to hurt anyone's feelings by mentioning
>> MySQL. Sorry about that.
> It's pretty silly to be upset by someone mentioning another DB product.
> I wouldn't worry.
>> There
On 11/08/2012 09:29 PM, Denis wrote:
> Ok guys, it was not my intention to hurt anyone's feelings by mentioning
> MySQL. Sorry about that.
It's pretty silly to be upset by someone mentioning another DB product.
I wouldn't worry.
> There simply was a project with a similar
> architecture built using
On 08-11-2012 13:38, Alvaro Herrera wrote:
Rodrigo Rosenfeld Rosas wrote:
On 07-11-2012 22:58, Tom Lane wrote:
Rodrigo Rosenfeld Rosas writes:
Ok, I could finally strip part of my database schema that will allow you
to run the explain query and reproduce the issue.
There is a simple
Rodrigo Rosenfeld Rosas wrote:
> On 07-11-2012 22:58, Tom Lane wrote:
> >Rodrigo Rosenfeld Rosas writes:
> >>Ok, I could finally strip part of my database schema that will allow you
> >>to run the explain query and reproduce the issue.
> >>There is a simple SQL dump in plain format that you
On 07-11-2012 22:58, Tom Lane wrote:
Rodrigo Rosenfeld Rosas writes:
Ok, I could finally strip part of my database schema that will allow you
to run the explain query and reproduce the issue.
There is a simple SQL dump in plain format that you can restore both on
9.1 and 9.2 and an example E
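The procedure Rodrigo describes — restoring the same plain-format dump into a 9.1 and a 9.2 instance and comparing plans — can be sketched as below. The ports, database name, and output filenames are illustrative assumptions, not details from the thread, and the query is left as a placeholder:

```shell
# Assumed layout: a 9.1 server on port 5491 and a 9.2 server on port 5492;
# schema_dump.sql stands in for the plain-format dump mentioned in the thread
createdb -p 5491 plantest
psql -p 5491 -d plantest -f schema_dump.sql
createdb -p 5492 plantest
psql -p 5492 -d plantest -f schema_dump.sql

# Capture the plan for the problem query on each version, then diff them
psql -p 5491 -d plantest -c 'EXPLAIN SELECT ...' > plan_91.txt
psql -p 5492 -d plantest -c 'EXPLAIN SELECT ...' > plan_92.txt
diff plan_91.txt plan_92.txt
```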
On 11/8/2012 6:58 AM, Shaun Thomas wrote:
On 11/07/2012 09:16 PM, David Boreham wrote:
bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48
Unfortunately without -S, you're not really testing the processors. A
regular pgbench can fluctuate more than that due to writing and
checkpoints.
On 11/07/2012 09:16 PM, David Boreham wrote:
bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48
Unfortunately without -S, you're not really testing the processors. A
regular pgbench can fluctuate more than that due to writing and
checkpoints.
For what it's worth, our X5675's perform
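Shaun's point about -S can be made concrete: a select-only run removes writes, and therefore checkpoint variance, from the measurement, so throughput tracks CPU and memory rather than disk. A sketch reusing the duration and client counts from the quoted command:

```shell
# -S switches pgbench to its built-in select-only script (point lookups),
# so no WAL writes or checkpoints disturb the measurement
/usr/pgsql-9.2/bin/pgbench -S -T 600 -j 48 -c 48
```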
On 08/11/12 09:36, Denis wrote:
We have a web application where we create a schema or a database with a
number of tables in it for each customer. Now we have about 2600 clients.
The problem we met using a separate DB for each client is that the creation
of a new DB can take up to 2 minutes, that i
Hello
2012/11/8 Denis :
> Samuel Gendler wrote
>> On Thu, Nov 8, 2012 at 1:36 AM, Denis < socsam@ > wrote:
>>
>>>
>>> P.S.
>>> Not to start a holy war, but FYI: in a similar project where we used MySQL
>>> now we have about 6000 DBs and everything works like a charm.
>>>
>>
>> You seem to
Samuel Gendler wrote
> On Thu, Nov 8, 2012 at 1:36 AM, Denis < socsam@ > wrote:
>
>>
>> P.S.
>> Not to start a holy war, but FYI: in a similar project where we used MySQL
>> now we have about 6000 DBs and everything works like a charm.
>>
>
> You seem to have answered your own question here.
On Thu, Nov 8, 2012 at 1:36 AM, Denis wrote:
>
> P.S.
> Not to start a holy war, but FYI: in a similar project where we used MySQL
> now we have about 6000 DBs and everything works like a charm.
>
You seem to have answered your own question here. If my recollection of a
previous discussion about
We have a web application where we create a schema or a database with a
number of tables in it for each customer. Now we have about 2600 clients.
The problem we met using a separate DB for each client is that the creation
of a new DB can take up to 2 minutes, that is absolutely unacceptable. Using
s
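One standard way to make per-client database creation fast — offered here as a sketch, not as what this project actually did — is to clone a pre-built template database instead of replaying all the DDL for each new client. The names and the DDL file below are hypothetical:

```shell
# One-time setup: a template database holding the full per-client table set
createdb client_template
psql -d client_template -f client_tables.sql   # hypothetical DDL file

# Per new client: CREATE DATABASE ... TEMPLATE does a file-level copy,
# typically much faster than re-running the DDL for every table
psql -c "CREATE DATABASE client_2601 TEMPLATE client_template"
```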
Tom Lane-2 wrote
> Denis <
> socsam@
> > writes:
>> Tom Lane-2 wrote
>>> Hmmm ... so the problem here isn't that you've got 2600 schemas, it's
>>> that you've got 183924 tables. That's going to take some time no matter
>>> what.
>
>> I wonder why pg_dump has to deal with all these 183924 t
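For context, the kind of per-schema dump under discussion looks like this (database and schema names are hypothetical). As the quoted exchange suggests, dump time can still scale with the total number of tables in the database, since pg_dump examines catalog entries beyond the schema being dumped:

```shell
# Dump a single client's schema with -n; per the thread, pg_dump's
# catalog processing still has to cope with all 183924 tables
pg_dump -n client_0042 -f client_0042.sql clientdb
```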
13 matches