Hi,
I am using the libpq library and PostgreSQL 8.4 for my Linux application running
on ARM with 256 MB of RAM. I am just doing:
PQconnectdb();
PQexec(INSERT INTO table1 ); (0.009661 sec.)
PQexec(INSERT INTO table1 ); (0.004208 sec.)
PQexec(INSERT INTO table2 ); (0.007352 sec.)
PQexec(INSERT IN
How up to date are the statistics for the tables in question?
What value do you have for effective cache size?
My guess would be that the planner thinks the method it is using is right,
either for its current row-count estimates or for the amount of memory
it thinks it has to play with.
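A quick way to check both points from psql, as a sketch (the table names are hypothetical, and the cache value is illustrative only; size it to the RAM actually available for caching):

-- When were the tables last analyzed, and how many live rows
-- does the planner think they have?
SELECT relname, last_analyze, last_autoanalyze, n_live_tup
FROM pg_stat_user_tables
WHERE relname IN ('table1', 'table2');

-- Refresh statistics and give the planner a realistic cache estimate.
ANALYZE table1;
SET effective_cache_size = '8GB';  -- illustrative value only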
Hello,
(Apologies if this is an obvious question. I have gone through the archives
without seeing something that directly ties to this.)
We are running PostgreSQL on a 64-bit RHEL 5.2 server. "uname -a":
Linux xxx 2.6.18-92.el5 #1 SMP Tue Apr 29 13:16:15 EDT 2008 x86_64 x86_64
Are there any guidelines for sizing work_mem, shared_buffers, and other
configuration parameters with regard to very large records? I
have a table with a bytea column, and I am told that some of the values
in it exceed 400MB. I am having a problem on several
servers reading and
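One technique that often helps with values this large, shown as a sketch (the table, column, and key below are hypothetical): fetch the bytea in slices, so neither the server nor the client has to materialize 400MB at once.

-- Read the first 10MB slice of a large bytea value.
SELECT substring(payload FROM 1 FOR 10485760)
FROM blobs
WHERE id = 42;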
On Thursday 07 July 2011 22:26:45 Guillaume Lelarge wrote:
> On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
> > Hi,
> >
> > I have a delete query taking 7.2G of ram (and counting) but I do not
> > understand why so much memory is necessary. The server has 12G, and
> > I'm afraid it'll g
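A first diagnostic step, as a sketch (the actual DELETE is not shown in the thread, so the query below is hypothetical): plain EXPLAIN shows the chosen plan without executing it, and hash or materialize nodes in that plan are the usual memory suspects.

EXPLAIN
DELETE FROM table1
WHERE id IN (SELECT id FROM table2);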
On Thursday 07 July 2011 19:54:08 French, Martin wrote:
> How up to date are the statistics for the tables in question?
>
> What value do you have for effective cache size?
>
> My guess would be that the planner thinks the method it is using is right,
> either for its current row-count estimates, o
If the query planner thinks it has the default amount of memory (128MB)
and the stats are out of date, then it will have little chance of planning
proper execution.
I would recommend setting the effective_cache_size to an appropriate
value, running "analyze" on both tables with an appropriate stats
t
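The truncated advice is presumably heading toward a statistics target; as a sketch (table and column names are hypothetical), raising the per-column target makes ANALYZE sample the data in more detail:

ALTER TABLE table1 ALTER COLUMN id SET STATISTICS 500;
ANALYZE table1;
ANALYZE table2;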
On Friday 08 July 2011 10:05:47 Dean Rasheed wrote:
> > On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
> >> Hi,
> >>
> >> I have a delete query taking 7.2G of ram (and counting) but I do not
> >> understand why so much memory is necessary. The server has 12G, and
> >> I'm afraid it'll g
On Friday 08 July 2011 10:31:33 French, Martin wrote:
> If the query planner thinks it has the default amount of memory (128MB)
> and the stats are out of date, then it will have little chance of planning
> proper execution.
>
> I would recommend setting the effective_cache_size to an appropriate
> v
On 07/06/2011 05:59 PM, Clem Dickey wrote:
On 07/05/2011 07:26 PM, Clem Dickey wrote:
 Column |  Type   | Modifiers
--------+---------+-----------
 y      | integer | not null
 x      | integer | not null
 k      | integer | not null
 j      | integer | not null
 z      | integer | not null
Indexes:
    "t_pkey" PRIMARY KEY, btree
I am doing some research that will hopefully lead to replacing a big
Oracle installation with a set of PostgreSQL servers.
The current Oracle installation consists of multiple RAC clusters
with 8 RAC nodes each. Each RAC node has 256gb of
memory (to be doubled soon).
The nature of our service
Hello
a) look at the COPY statement and the COPY API protocol - it can be 100x
faster than INSERTs
http://www.postgresql.org/docs/8.3/static/libpq-copy.html
b) if you can't use COPY, use:
* an outer transaction - BEGIN, INSERT, INSERT ... COMMIT if this is possible
* a prepared statement (see the sketch below)
http://www.p
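A minimal sketch of both options in plain SQL (the table name, its columns, and the file path are hypothetical):

-- (a) server-side COPY, reading a file the server can access:
COPY table1 FROM '/tmp/data.csv' WITH CSV;

-- (b) many inserts inside one transaction, via a prepared statement:
BEGIN;
PREPARE ins (integer, text) AS INSERT INTO table1 VALUES ($1, $2);
EXECUTE ins (1, 'first row');
EXECUTE ins (2, 'second row');
-- ... one EXECUTE per row ...
COMMIT;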
Thanks Tom and Heikki! I really appreciate your help.
I went ahead and loaded all the data. In the icecream table, I now have ~175
partitions, each with 4041866 records.
The data finished loading 12 hours ago, and I then ran the same query I gave
you guys, and it took 25 seconds since it used the
Hello Tom!
Sat, 09 Jul 2011 12:23:18 -0400, you wrote:
> Gael Le Mignot writes:
>> Sat, 09 Jul 2011 11:06:16 +0200, you wrote:
BTW, what's your PostgreSQL release? I assume at least 8.3 since you're
using FTS?
>> It's 8.4 from Debian Squeeze.
> 8.4.what?
It's 8.4.8-0squeeze1
On 2011-07-09 18:43, Tom Lane wrote:
Heikki Linnakangas writes:
On 09.07.2011 00:36, Anish Kejariwal wrote:
My guess as to what happened:
-because the icecream parent table has zero records, the query optimizer
chooses the incorrect execution plan
-when I do select * from icecream, the optimiz
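If the empty parent's statistics are misleading the planner, the usual first checks on 8.4 are re-analyzing and confirming constraint exclusion; a sketch (the WHERE column is hypothetical):

ANALYZE icecream;
SET constraint_exclusion = partition;  -- the 8.4 default, but worth confirming
EXPLAIN SELECT * FROM icecream WHERE store_id = 42;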
Dear all,
could anyone help me?
When I use pgAdmin to execute a query it shows the total time for the query,
such as
(select * from table_name) query total time is for example 100 ms.
I want to know the command that can retrieve the query total time
in milliseconds
if I connect with post
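Two common ways to get this, as a sketch (table_name as in the question): psql's \timing meta-command prints each statement's elapsed time in milliseconds, and EXPLAIN ANALYZE reports the server-side execution time.

-- In psql:
--   \timing on
--   SELECT * FROM table_name;
--   Time: 100.123 ms

-- Or let the server execute the query and report its own runtime:
EXPLAIN ANALYZE SELECT * FROM table_name;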
sergio mayoral writes:
> I am using the libpq library and PostgreSQL 8.4 for my Linux application running
> on ARM with 256 MB of RAM. I am just doing:
> PQconnectdb();
> PQexec(INSERT INTO table1 ); (0.009661 sec.)
> PQexec(INSERT INTO table1 ); (0.004208 sec.)
> PQexec(INSERT INTO table2 ); (
On Sun, Jul 10, 2011 at 4:41 AM, Radhya sahal wrote:
> Dear all,
> could anyone help me?
> When I use pgAdmin to execute a query it shows the total time for the query,
> such as
> (select * from table_name) query total time is for example 100 ms.
> I want to know the command that can reten
- Forwarded Message
From: Radhya sahal
To: Samuel Gendler
Sent: Sun, July 10, 2011 11:25:46 AM
Subject: Re: [PERFORM] query total time im milliseconds
Thanks.
long startTime = System.currentTimeMillis();
//execute query
long executionTime = System.currentTimeMillis() - startTime;
I have since moved the WAL to its own EBS volume (ext4, data=writeback)
to make it easier to monitor IO.
The times when the SELECTs slow down coincide with heavy write traffic
to the WAL volume.
Maybe this has to do with WALInsertLock or WALWriteLock (or some other
lock).
Since the slowdown w
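Lightweight locks such as WALInsertLock don't show up in pg_locks, but a sketch of a first check on 8.4/9.0-era releases is to look for backends that are waiting at all:

SELECT procpid, waiting, current_query
FROM pg_stat_activity
WHERE waiting;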
I know this has been discussed various times...
We are maintaining a large multi-tenant database where *all* tables have
a tenant-id and all indexes and PKs lead with the tenant-id.
Statistics and counts for all the other columns are only really
meaningful within the context of the tenant they
On Sun, Jul 10, 2011 at 2:16 PM, lars wrote:
> I know this has been discussed various times...
>
> We are maintaining a large multi-tenant database where *all* tables have a
> tenant-id and all indexes and PKs lead with the tenant-id.
> Statistics and counts for all the other columns are only rea
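One hedged workaround for the per-tenant statistics problem (names are hypothetical, and this is a sketch rather than anything proposed in the thread): partial indexes scoped to a heavy tenant give the planner per-tenant structures to work with.

CREATE INDEX orders_tenant42_created_idx
    ON orders (created_at)
    WHERE tenant_id = 42;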
On 10/07/2011 9:08 PM, Radhya sahal wrote:
Dear all,
could anyone help me?
When I use pgAdmin to execute a query it shows the total time for the query,
such as
(select * from table_name) query total time is for example 100 ms.
I want to know the command that can retrieve the query total t
On 11/07/2011 2:26 AM, Radhya sahal wrote:
long startTime = System.currentTimeMillis();
//execute query
long executionTime = System.currentTimeMillis() - startTime;
this executionTime is not the actual time for the query;
it includes the time to access the PostgreSQL server
over JDBC.
The pg_stat_st
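Assuming the truncated reference is to the pg_stat_statements contrib module (available since 8.4), a sketch of pulling server-side timings from it:

-- total_time is in seconds on 8.4 (milliseconds in later releases).
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;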
On 11/07/2011 4:34 AM, lars wrote:
I have since moved the WAL to its own EBS volume (ext4, data=writeback)
to make it easier to monitor IO.
The times when the SELECTs slow down coincide with heavy write traffic
to the WAL volume.
In theory, UPDATEs shouldn't be blocking or slowing SELECTs. Whe
Thanks Craig.
Yep, I am not seeing the SELECTs slow down (measurably) during checkpoints
(i.e. when dirty pages are flushed to disk), but only during writing of the
WAL files. The shared buffers and OS cache are big enough to hold the entire
database, so
evicting cached data should not be necess