On Fri, Mar 25, 2011 at 12:05 PM, Adarsh Sharma wrote:
> Dear all,
>
> Today a query was run internally from my application by more than 10
> connections.
>
> But the query performed very badly. The sizes of the tables are as follows:
>
> pdc_uima=# select pg_size_pretty(pg_total_relation_size('c
Thanks Andreas, I was about to print the output, but it takes too much time.
Below is the output of explain analyze command :
pdc_uima=# explain analyze select c.clause, s.* from clause2 c, svo2 s
pdc_uima-# where c.clause_id=s.clause_id and s.doc_id=c.source_id
pdc_uima-# and c.sentence_id=s.sentence_id;
Adarsh Sharma wrote:
> Dear all,
>
> Today a query was run internally from my application by more than
> 10 connections.
>
> But the query performed very badly. The sizes of the tables are as follows:
>
> pdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));
> pg_size_pretty
> ----------------
Dear all,
Today a query was run internally from my application by more than
10 connections.
But the query performed very badly. The sizes of the tables are as follows:
pdc_uima=# select pg_size_pretty(pg_total_relation_size('clause2'));
 pg_size_pretty
----------------
 5858 MB
(1 row)
pdc_ui
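The truncated size check above can be rounded out with a sketch that reports the total on-disk size of both tables in the join. The table names come from the thread; the database name and connection details are assumptions.

```shell
# Hedged sketch: total on-disk size (heap + indexes + TOAST) of both
# tables in the join. Table names are from the thread; the database
# name and connection details are assumptions.
psql -d pdc_uima -c "
  SELECT relname,
         pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
   WHERE relname IN ('clause2', 'svo2');"
```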
Thank you for your research and for posting it. When I first encountered
this issue, your post gave me great insight.
gettimeofday() on my new box is slow. After further research we found that
when we set ACPI=Off, we got good clock performance even the
On Thu, Mar 24, 2011 at 5:30 PM, Nathan Boley wrote:
> Another approach, that hasn't been suggested yet, is some Bayesian
> update method. There, rather than calculating a specific parameter
> value ( like ndistinct ), you try to store the entire distribution and
> choose the plan that minimizes c
>> This could be GUC-controllable, like plan_safety=0..1 with a low default value.
>> This could influence the costs of plans whose cost changes dramatically with small
>> table changes and/or where the statistics are uncertain. It could also be used as a
>> direct "hint" for such dangerous queries by changing the GUC for se
2011/3/24 Віталій Тимчишин:
> 2011/3/23 Tom Lane
>>
>> Claudio Freire writes:
>> > On Wed, Mar 23, 2011 at 5:29 PM, Josh Berkus wrote:
>> >> On 3/23/11 10:35 AM, Claudio Freire wrote:
>> >>> * consider plan bailout: execute a tempting plan, if it takes too
>> >>> long or its effective cost ra
On 24-03-2011 11:40, Uwe Bartels wrote:
Or is there a dynamic way to put $PGDATA/base/pgsql_tmp into RAM without
blocking it completely like a ram disk?
Create a tablespace in a ram disk and set temp_tablespaces.
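Euler's suggestion can be sketched roughly as follows. The mount point, tmpfs size, and tablespace name are assumptions, not from the thread; run the mount steps as root and the SQL as a superuser.

```shell
# Rough sketch of the suggestion above. Mount point, tmpfs size, and
# tablespace name are assumptions; adjust to your system.
mkdir -p /mnt/pg_ram_temp
mount -t tmpfs -o size=8G,mode=0700 tmpfs /mnt/pg_ram_temp
chown postgres:postgres /mnt/pg_ram_temp

psql -U postgres <<'SQL'
CREATE TABLESPACE ram_temp LOCATION '/mnt/pg_ram_temp';
-- Send temporary files (sorts, hashes, temp tables) to the RAM disk;
-- set this in postgresql.conf to make it apply to all sessions.
SET temp_tablespaces = 'ram_temp';
SQL
```

Note that a tmpfs mount vanishes on reboot, so the tablespace directory has to be recreated before PostgreSQL starts.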
--
Euler Taveira de Oliveira
http://www.timbira.com/
On 03/24/2011 10:28 AM, Uwe Bartels wrote:
OK, sounds promising. On my machine this looks similar.
I'll try this.
I just realized I may have implied that DBSHM automatically defaults to
/db/shm/pgsql_tmp. It doesn't. I also have this at the very top of our
/etc/init.d/postgresql script:
if
OK, sounds promising. On my machine this looks similar.
I'll try this.
Thanks,
Uwe
On 24 March 2011 16:14, Shaun Thomas wrote:
> On 03/24/2011 09:40 AM, Uwe Bartels wrote:
>
>> Does anybody know of a way around this on Linux?
>> Or is there a dynamic way to put $PGDATA/base/pgsql_tmp into
On 03/24/2011 09:40 AM, Uwe Bartels wrote:
Does anybody know of a way around this on Linux?
Or is there a dynamic way to put $PGDATA/base/pgsql_tmp into RAM without
blocking it completely like a ram disk?
We put this in our startup script just before starting the actual database:
for x i
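The loop in the startup script is cut off above and cannot be recovered. A minimal sketch of the general idea might look like the following; the /db/shm/pgsql_tmp path matches the DBSHM default mentioned later in the thread, while the data directory path is an assumption.

```shell
# Minimal sketch (the original loop is truncated above). DBSHM matches
# the path mentioned later in the thread; the PGDATA path is an assumption.
DBSHM=/db/shm/pgsql_tmp
mkdir -p "$DBSHM"
chown postgres:postgres "$DBSHM"
chmod 0700 "$DBSHM"

# Replace each pgsql_tmp directory with a symlink into the
# shared-memory filesystem before the server starts:
for x in /db/pgdata/base/pgsql_tmp; do
  rm -rf "$x"
  ln -s "$DBSHM" "$x"
done
```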
OK, I didn't know that. Thanks for sharing that information.
Can anybody tell if we have this limitation on maintenance_work_mem as well?
Does anybody know of a way around this on Linux?
Or is there a dynamic way to put $PGDATA/base/pgsql_tmp into RAM without
blocking it completely like a ram
Uwe,
* Uwe Bartels (uwe.bart...@gmail.com) wrote:
> So I checked this again and afterwards raised maintenance_work_mem step by
> step up to 64GB.
> I logged in via psql, run the following statements
> set maintenance_work_mem = '64GB';
I believe maintenance_work_mem suffers from the same problem tha
Hi,
I see my application creating temporary files while creating an index.
LOG: temporary file: path "base/pgsql_tmp/pgsql_tmp7076.0", size 779853824
STATEMENT: CREATE INDEX IDX_LPA_LINKID ON NNDB.LPA (LINK_ID);
So I checked this again and afterwards raised maintenance_work_mem step by
step up
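The session-level experiment described above might look like this in psql. The CREATE INDEX statement is the one from the log earlier in the thread; the database name is an assumption.

```shell
# Hedged sketch of the experiment described above. The CREATE INDEX
# statement is from the log in the thread; the database name is an
# assumption.
psql -d nndb <<'SQL'
SET maintenance_work_mem = '64GB';
CREATE INDEX IDX_LPA_LINKID ON NNDB.LPA (LINK_ID);
SQL
```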
On Thursday 24 March 2011 13:39:19, Marti Raudsepp wrote:
> On Thu, Mar 24, 2011 at 11:11, Achilleas Mantzios
> wrote:
> > My problem had to do with the speed of gettimeofday. You might want to
> > apply some special settings regarding your box's way of reading the
> > hardware clock.
>
> J
On Thu, Mar 24, 2011 at 11:11, Achilleas Mantzios
wrote:
> My problem had to do with the speed of gettimeofday. You might want to
> apply some special settings regarding your box's way of reading the
> hardware clock.
Just for extra info, on x86, TSC is usually the "fast" timeofday
implementation
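One way to check which clocksource the kernel is using, and to get a rough feel for the cost of a userspace clock read, is the sketch below. The sysfs path is the standard Linux location; python3's time.time() is used here as a stand-in for a gettimeofday() loop.

```shell
# Show the kernel's current clocksource (tsc, hpet, acpi_pm, ...).
cat /sys/devices/system/clocksource/clocksource0/current_clocksource \
  2>/dev/null || echo "clocksource sysfs not available"

# Rough cost of one userspace clock read; time.time() stands in for
# a gettimeofday() call.
python3 - <<'EOF'
import time, timeit
n = 200000
secs = timeit.timeit(time.time, number=n)
print("%.0f ns per clock read" % (secs / n * 1e9))
EOF
```

With TSC as the clocksource the per-call cost is typically tens of nanoseconds; with acpi_pm or hpet it can be an order of magnitude or more slower, which is what makes EXPLAIN ANALYZE overhead vary so much between boxes.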
Hi Cédric,
OK, sounds promising. But all of these improvements are for the postgres
developers; as an administrator, I can't do a thing right now. OK.
Thanks for your suggestions. I think for batch jobs other than just COPY they
could speed up the process quite well if now the backend process
You might take a look here:
http://archives.postgresql.org/pgsql-admin/2011-01/msg00050.php
My problem had to do with the speed of gettimeofday. You might want to
apply some special settings regarding your box's way of reading the
hardware clock.
On Thursday 24 March 2011 04:04:21, DM wrote:
2011/3/23 Tom Lane
> Claudio Freire writes:
> > On Wed, Mar 23, 2011 at 5:29 PM, Josh Berkus wrote:
> >> On 3/23/11 10:35 AM, Claudio Freire wrote:
> >>> * consider plan bailout: execute a tempting plan, if it takes too
> >>> long or its effective cost raises well above the expected cost, bai
20 matches