Hi,
I am using the libpq library with PostgreSQL 8.4 for my Linux application running
on ARM with 256 MB of RAM. I am just doing:
PQconnectdb();
PQexec(INSERT INTO table1 ); (0.009661 sec.)
PQexec(INSERT INTO table1 ); (0.004208 sec.)
PQexec(INSERT INTO table2 ); (0.007352 sec.)
PQexec(INSERT
How up to date are the statistics for the tables in question?
What value do you have for effective_cache_size?
My guess would be that the planner thinks the method it is using is right,
either for its current row-count estimates or for the amount of memory
it thinks it has to play with.
Hello,
(Apologies if this is an obvious question. I have gone through the archives
without seeing something that directly ties to this.)
We are running PostgreSQL on a 64-bit RHEL 5.2 server. uname -a:
--Linux xxx 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT
2008 x86_64 x86_64
On Thursday 07 July 2011 22:26:45 Guillaume Lelarge wrote:
On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
Hi,
I have a delete query taking 7.2G of RAM (and counting) but I do not
understand why so much memory is necessary. The server has 12G, and
I'm afraid it'll go into
On Thursday 07 July 2011 19:54:08 French, Martin wrote:
How up to date are the statistics for the tables in question?
What value do you have for effective_cache_size?
My guess would be that the planner thinks the method it is using is right,
either for its current row-count estimates, or the
On Friday 08 July 2011 10:05:47 Dean Rasheed wrote:
On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
Hi,
I have a delete query taking 7.2G of RAM (and counting) but I do not
understand why so much memory is necessary. The server has 12G, and
I'm afraid it'll go into swap.
On Friday 08 July 2011 10:31:33 French, Martin wrote:
If the query planner thinks it has the default amount of memory (128MB)
and the stats are out of date, then it will not be able to plan
execution properly.
I would recommend setting effective_cache_size to an appropriate
value,
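For reference, a hedged sketch of what that might look like in postgresql.conf. The 8GB figure is purely illustrative for the 12GB server described above; the usual rule of thumb is to set it to roughly the amount of memory available for disk caching (shared_buffers plus OS page cache):

```
# postgresql.conf -- illustrative values only, tune to your hardware
effective_cache_size = 8GB   # planner hint only; no memory is actually allocated
```

Note that effective_cache_size does not reserve any memory; it only tells the planner how much caching it can assume when costing index scans.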
On 07/06/2011 05:59 PM, Clem Dickey wrote:
On 07/05/2011 07:26 PM, Clem Dickey wrote:
 Column |  Type   | Modifiers
--------+---------+-----------
 y      | integer | not null
 x      | integer | not null
 k      | integer | not null
 j      | integer | not null
 z      | integer | not null
Indexes:
    "t_pkey" PRIMARY KEY, btree
I am doing some research that will hopefully lead to replacing a big
Oracle installation with a set of PostgreSQL servers.
The current Oracle installation consists of multiple RAC clusters
with 8 RAC nodes each. Each RAC node has 256 GB of
memory (to be doubled soon).
The nature of our
Hello
a) look at the COPY statement and the COPY API protocol - it can be 100x
faster than INSERTs
http://www.postgresql.org/docs/8.3/static/libpq-copy.html
b) if you can't use COPY:
* use an outer transaction - BEGIN, INSERT, INSERT ... COMMIT if this is possible
* use a prepared statement
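A hedged sketch of suggestion (b), assuming a hypothetical two-column table1(a integer, b integer). Wrapping the inserts in one transaction avoids a commit (and fsync) per row, and the prepared statement avoids re-parsing and re-planning each INSERT:

```sql
BEGIN;
PREPARE ins (integer, integer) AS
    INSERT INTO table1 (a, b) VALUES ($1, $2);
EXECUTE ins(1, 2);   -- one EXECUTE per row, plan built only once
EXECUTE ins(3, 4);
COMMIT;
DEALLOCATE ins;      -- release the prepared statement when done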
Thanks Tom and Heikki! I really appreciate your help.
I went ahead and loaded all the data. In the icecream table, I now have ~175
partitions, each with 4,041,866 records.
The data finished loading 12 hours ago, and I then ran the same query I gave
you guys, and it took 25 seconds since it used
Hello Tom!
Sat, 09 Jul 2011 12:23:18 -0400, you wrote:
Gael Le Mignot g...@pilotsystems.net writes:
Sat, 09 Jul 2011 11:06:16 +0200, you wrote:
BTW, what's your PostgreSQL release? I assume at least 8.3 since you're
using FTS?
It's 8.4 from Debian Squeeze.
8.4.what?
It's
On 2011-07-09 18:43, Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
On 09.07.2011 00:36, Anish Kejariwal wrote:
My guess as to what happened:
-because the icecream parent table has zero records, the query optimizer
chooses the incorrect execution plan
-when I do
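If the empty parent table really is throwing off the row estimates, re-collecting statistics after the bulk load is the usual first step. A sketch (icecream is the parent table named above; the child-partition name is hypothetical):

```sql
ANALYZE icecream;            -- refresh statistics on the parent table
ANALYZE icecream_part_001;   -- hypothetical partition name; repeat per child
```

On 8.4, ANALYZE on the parent does not cascade to children, so each partition needs its own ANALYZE (autovacuum normally handles this over time).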
Dear all,
could anyone help me?
When I use pgAdmin to execute a query it shows the total time for the query,
such as:
(select * from table_name) query total time is for example 100 ms
I want to know the command that can retrieve the query total time
in milliseconds
if I connect with
sergio mayoral smayo...@gmail.com writes:
I am using the libpq library with PostgreSQL 8.4 for my Linux application running
on ARM with 256 MB of RAM. I am just doing:
PQconnectdb();
PQexec(INSERT INTO table1 ); (0.009661 sec.)
PQexec(INSERT INTO table1 ); (0.004208 sec.)
PQexec(INSERT INTO
On Sun, Jul 10, 2011 at 4:41 AM, Radhya sahal rad_cs_2...@yahoo.com wrote:
Dear all,
could anyone help me?
When I use pgAdmin to execute a query it shows the total time for the query,
such as:
(select * from table_name) query total time is for example 100 ms
I want to know the
- Forwarded Message
From: Radhya sahal rad_cs_2...@yahoo.com
To: Samuel Gendler sgend...@ideasculptor.com
Sent: Sun, July 10, 2011 11:25:46 AM
Subject: Re: [PERFORM] query total time im milliseconds
Thanks
long startTime = System.currentTimeMillis();
//execute query
long executionTime = System.currentTimeMillis() - startTime;
I have since moved the WAL to its own EBS volume (ext4, data=writeback)
to make it easier to monitor IO.
The times when the SELECTs slow down coincide with heavy write traffic
to the WAL volume.
Maybe this has to do with WALInsertLock or WALWriteLock (or some other
lock).
Since the slowdown
I know this has been discussed various times...
We are maintaining a large multi-tenant database where *all* tables have
a tenant-id and all indexes and PKs lead with the tenant-id.
Statistics and counts for all the other columns are only really
meaningful within the context of the tenant they
On Sun, Jul 10, 2011 at 2:16 PM, lars lhofha...@yahoo.com wrote:
I know this has been discussed various times...
We are maintaining a large multi-tenant database where *all* tables have a
tenant-id and all indexes and PKs lead with the tenant-id.
Statistics and counts for all the other
On 10/07/2011 9:08 PM, Radhya sahal wrote:
Dear all,
could anyone help me?
When I use pgAdmin to execute a query it shows the total time for the query,
such as:
(select * from table_name) query total time is for example 100 ms
I want to know the command that can retrieve the query total
On 11/07/2011 2:26 AM, Radhya sahal wrote:
long startTime = System.currentTimeMillis();
//execute query
long executionTime = System.currentTimeMillis() - startTime;
this executionTime is not the actual time for the query;
it includes the time to access the PostgreSQL server
over JDBC
The
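One way to get the server-side execution time in milliseconds, independent of JDBC and network overhead, is EXPLAIN ANALYZE (a sketch; table_name stands in for whatever table is being queried):

```sql
EXPLAIN ANALYZE SELECT * FROM table_name;
-- on 8.4 the last line of the output reads "Total runtime: N ms",
-- which is the server-side execution time only
```

Alternatively, setting log_min_duration_statement = 0 in postgresql.conf makes the server log every statement together with its duration in milliseconds.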
On 11/07/2011 4:34 AM, lars wrote:
I have since moved the WAL to its own EBS volume (ext4, data=writeback)
to make it easier to monitor IO.
The times when the SELECTs slow down coincide with heavy write traffic
to the WAL volume.
In theory, UPDATEs shouldn't be blocking or slowing SELECTs.
Thanks Craig.
Yep, I am not seeing the SELECTs slow down (measurably) during checkpoints
(i.e. when dirty pages are flushed to disk), but only during writing of the WAL
files. The shared buffers and OS cache are big enough to hold the entire
database, so evicting cached data should not be