Here are my new pgbench results:
pgbench -h 127.0.0.1 -p -U postgres -c 200 -t 100 -s 10 pgbench
Scale option ignored, using pgbench_branches table count = 10
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 10
query mode: simple
number of clients: 200
number of threads
On Mon, Dec 20, 2010 at 7:10 AM, tuanhoanganh wrote:
> Here are my new pgbench results:
>
> pgbench -h 127.0.0.1 -p -U postgres -c 200 -t 100 -s 10 pgbench
Your -c should always be the same as or lower than -s. Anything higher
and you're just thrashing your IO system waiting for locks. Note that…
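Following that advice, a complete invocation with -s no lower than -c might look like this (a sketch; the host, port, and database name are assumptions, and -i must be run once first since -s is otherwise ignored when the tables already exist):

```shell
# Initialize with a scale factor at least as large as the planned client count,
# then benchmark. Assumes a server on 127.0.0.1:5432 and a database "pgbench".
pgbench -i -s 200 -h 127.0.0.1 -p 5432 -U postgres pgbench
pgbench -h 127.0.0.1 -p 5432 -U postgres -c 200 -t 100 pgbench
```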
P.s. here's one of my two slower slave machines. It has dual quad
core opterons (2352 2.1GHz) and 32 Gig ram. Controller is an Areca
1680 with 512M battery backed cache and 2 disks for pg_xlog and 12 for
the data/base directory. Running Centos 5.4 or so.
pgbench -c 10 -t 1 test
starting vacuum…
> "MG" == Mladen Gogala writes:
MG> Good time accounting is the most compelling reason for having a wait
MG> event interface, like Oracle. Without the wait event interface, one
MG> cannot really tell where the time is spent, at least not without
MG> profiling the database code, which is not a…
On Mon, Dec 20, 2010 at 10:33:26AM -0500, James Cloos wrote:
> > "MG" == Mladen Gogala writes:
>
> MG> Good time accounting is the most compelling reason for having a wait
> MG> event interface, like Oracle. Without the wait event interface, one
> MG> cannot really tell where the time is spent…
On Fri, Dec 17, 2010 at 07:48, selvi88 wrote:
>
> My requirement is that more than 15 thousand queries will run:
> 5000 updates, 5000 inserts, and the rest selects.
>
>
What IO system are you running Postgres on? With that kind of write load you
should really be focusing on your storage solution…
On Sat, Dec 18, 2010 at 2:34 AM, selvi88 wrote:
>
>
> Thanks for your suggestion; I had already gone through that URL, and with
> its help I was able to make my configuration work for 5K queries/second.
> The parameters I changed were shared_buffers, work_mem, maintenance_work_mem
> and effective_cache_size…
Scott Marlowe wrote:
> I can sustain about 5,000 transactions per second on a machine with 8
> cores (2 years old) and 14 15k Seagate hard drives.
Right. You can hit 2 to 3000/second with a relatively inexpensive
system, so long as you have a battery-backed RAID controller and a few
hard drives…
On 2010-12-20 15:48, Kenneth Marshall wrote:
> And how exactly, given that the kernel does not know whether the CPU is
> active or waiting on RAM, could an application do so?
Exactly. I have only seen this data from hardware emulators. It would
be nice to have... :)
There's no reason that the c…
On Mon, Dec 20, 2010 at 10:49 AM, Greg Smith wrote:
> Scott Marlowe wrote:
>>
>> I can sustain about 5,000 transactions per second on a machine with 8
>> cores (2 years old) and 14 15k seagate hard drives.
>>
>
> Right. You can hit 2 to 3000/second with a relatively inexpensive system,
> so long…
2010/12/12 pasman pasmański :
>> UNION will remove all duplicates, so that the result additionally requires to
>> be sorted.
>
>> Right; to avoid the sort and unique operations you can use UNION ALL.
>
>
> By the way, might applying hashing to compute the UNION be better?
The planner already considers s…
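The distinction the thread is discussing can be seen with any SQL engine. A quick sketch using the sqlite3 CLI (an assumption for portability — the thread is about PostgreSQL, but the UNION semantics are the same):

```shell
# UNION removes duplicates, which requires a sort or hash step;
# UNION ALL simply concatenates the two result sets with no dedup work.
sqlite3 :memory: "SELECT 1 UNION SELECT 1;"      # one row
sqlite3 :memory: "SELECT 1 UNION ALL SELECT 1;"  # two rows
```

In PostgreSQL, comparing `EXPLAIN` output for the two forms shows the extra deduplication node that UNION ALL avoids.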
Is there a tool that works on Windows and can hold 200 open connections to
PostgreSQL, so that my application connects through it to reduce connection
time to PostgreSQL? (Because PostgreSQL starts a new process for every new
connection, I want this tool to open and keep 200 connections to PostgreSQL,
and my application to connect to this tool…
On Mon, Dec 20, 2010 at 8:31 PM, tuanhoanganh wrote:
> Is there a tool that works on Windows and can hold 200 open connections to
> PostgreSQL, so that my application connects through it to reduce connection
> time to PostgreSQL? (Because PostgreSQL starts a new process for every new
> connection, I want this tool to open and keep…