Hi,
I'm having trouble with some SQL statements that use an expression with
many columns and DISTINCT in the select list:
select distinct col1, col2, ..., col20, col21
from table1 left join table2 on ...
where ...
;
The unwanted side effect is a big sort that spills to temporary files.
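One workaround worth trying (a sketch; the table, join condition, and
column names are placeholders) is to rewrite the DISTINCT as a GROUP BY
over the same column list; on older releases only GROUP BY can use a
HashAggregate instead of Sort + Unique, and either way EXPLAIN ANALYZE
will show which plan you get:

select col1, col2, col3 -- ... through col21
from table1 left join table2 on table1.id = table2.table1_id
group by col1, col2, col3; -- ... same list as the select

If a sort is unavoidable, raising work_mem for the session (e.g.
set work_mem = '256MB';) can at least keep it out of temporary files.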
Hey!
I'm having some trouble optimizing a query that uses a custom operator class.
#Postgres has given me a solution for natural sort -
http://www.rhodiumtoad.org.uk/junk/naturalsort.sql
I'm trying to run it over a huge table - when running it on demand,
the data needs to be dumped to memory and s…
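If the function that file defines is named naturalsort and marked
IMMUTABLE (expression indexes require that), one sketch, with
big_table/name_col as placeholder names, is to precompute the sort key
in an expression index instead of sorting on demand:

-- sort key computed once, at index build time
create index big_table_natural_idx on big_table (naturalsort(name_col));

-- the ORDER BY can then be satisfied from the index
select * from big_table order by naturalsort(name_col);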
hi all,
Setup:
Sparc, Solaris 10, Postgres 9.0.2, using streaming replication and hot
standby. 1 master 1 slave
Everything works fine (w.r.t. replication), but the pg_xlog size grows
continuously, though I had no operations going on. Also, archiving to the
standby filled up the filesystem on that side.
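For 9.0, the knobs that bound pg_xlog growth look roughly like this
(a sketch; values and paths are illustrative only):

# postgresql.conf on the master
checkpoint_segments = 16   # WAL segments recycled at checkpoints
wal_keep_segments   = 32   # extra segments kept for the streaming standby
archive_mode        = on
archive_command     = 'cp %p /archive/%f'  # if this fails, WAL piles up

# recovery.conf on the standby: prune archives already restored
archive_cleanup_command = 'pg_archivecleanup /archive %r'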
Forgot to include our non-default config settings and server info, not that
it probably makes a difference for this.
from pg_settings:
 name    | current_setting
---------+----------------------------------------------------------------
 version | PostgreSQL 9.0.3 on x86_64-unknown-linux-gnu, compiled by GCC
         | gcc (GCC) 4.1.2 20080…
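For reference, a common way to pull exactly the non-default settings
from pg_settings (a sketch using the standard view):

select name, current_setting(name) as current_setting
from pg_settings
where source not in ('default', 'override');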
Sorry, I meant with 32GB of memory.
Tim
On Tue, Mar 15, 2011 at 2:39 PM, Timothy Garnett wrote:
> Forgot to include our non-default config settings and server info, not that
> it probably makes a difference for this.
>
> from pg_settings:
> name | current_setting
>
> versi…
Hi all,
We added an index to a table (to support some different functionality) then
ran into cases where the new index (on month, bl_number in the schema below)
made performance of some existing queries ~20,000 times worse. While we do
have a workaround (using a CTE to force the proper index to be used)…
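For the record, the CTE workaround looks roughly like this (a sketch;
the WHERE clause stands in for the original filter). In 9.0 a CTE is an
optimization fence, so the planner cannot push the ORDER BY/LIMIT down
into it and pick the backward scan on (month, bl_number):

with filtered as (
  select * from customs_records
  where bl_number = 'XYZ123'  -- placeholder predicate
)
select * from filtered
order by month desc limit 100 offset 0;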
Dear all,
I am facing a problem while creating an index to make the query below
run faster. My table is about 1065 MB with 428467 rows.
explain analyze select count(*) from page_content where
publishing_date like '%2010%' and content_language='en' and content is
not null and i…
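Since a leading-wildcard LIKE '%2010%' can never use a plain btree
index, one sketch of a fix, assuming publishing_date is (or can become)
a real date column rather than text, is a sargable range plus a partial
index (the index name is a placeholder):

create index page_content_pubdate_en_idx
  on page_content (publishing_date)
  where content_language = 'en' and content is not null;

select count(*) from page_content
where publishing_date >= date '2010-01-01'
  and publishing_date <  date '2011-01-01'
  and content_language = 'en' and content is not null;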
On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:
> Dear all,
>
> I am facing a problem while creating an index to make the query below run
> faster. My table is about 1065 MB with 428467 rows.
>
> explain analyze select count(*) from page_content where publishing_date…
Timothy Garnett wrote:
>>>-> Index Scan Backward using
>>> index_customs_records_on_month_and_bl_number on customs_records
>>> (cost=0.00..78426750.74 rows=48623 width=908) (actual
>>> time=171344.182..3858893.588 rows=100 loops=1)
We've seen a lot of those lately -- Index Scan Backward performing far worse than alternatives.
"Kevin Grittner" writes:
> Timothy Garnett wrote:
> -> Index Scan Backward using
> index_customs_records_on_month_and_bl_number on customs_records
> (cost=0.00..78426750.74 rows=48623 width=908) (actual
> time=171344.182..3858893.588 rows=100 loops=1)
> We've seen a lot of those lately -- Index Scan Backward performing far worse than alternatives.
On 03/15/2011 01:23 PM, Timothy Garnett wrote:
   Column    |          Type
-------------+------------------------
 id          | integer
 bl_number   | character varying(16)
 month       | date
 …
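For context, the index named in the plans above would have been created
roughly like this (a sketch inferred from its name alone):

create index index_customs_records_on_month_and_bl_number
  on customs_records (month, bl_number);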
On 03/16/2011 05:13 AM, Adarsh Sharma wrote:
Dear all,
I am facing a problem while creating the index to make the below query run
faster. My table size is near about 1065 MB and 428467 rows.
explain analyze select count(*) from page_content where publishing_date like
'%2010%' and content_lang…
On 15-03-2011 12:09, Tech Madhu wrote:
[This is not a performance question; next time post to the appropriate list,
that is -general]
Everything works fine (w.r.t. replication), but the pg_xlog size grows
continuously, though I had no operations going on. Also the archiving to
the other sid…
Shaun Thomas writes:
> Ok. In your table description, you don't really talk about the
> distribution of bl_number. But this part of your query:
> ORDER BY month DESC LIMIT 100 OFFSET 0
> Is probably tricking the planner into using that index. But there's the
> fun thing about dates: we almost…
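A common counter-trick (a sketch; the predicate is a placeholder) is to
wrap the ORDER BY column in a no-op expression, so the planner can no
longer match it to the (month, bl_number) index:

select * from customs_records
where bl_number = 'XYZ123'  -- placeholder predicate
order by month + 0 desc     -- date + 0 is still a date, but defeats the index
limit 100 offset 0;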
Tom Lane wrote:
> "Kevin Grittner" writes:
>> We've seen a lot of those lately -- Index Scan Backward
>> performing far worse than alternatives.
>
> It's not clear to me that that has anything to do with Tim's
> problem. It certainly wouldn't be 2x faster if it were a
> forward scan.
We…
On 03/16/2011 12:44 PM, Kevin Grittner wrote:
Well, that's one way of looking at it. Another would be that the
slower plan with the backward scan was only estimated to be 14.5%
less expensive than the fast plan, so a pretty moderate modifier
would have avoided this particular problem.
I was w…
On Wed, Mar 16, 2011 at 3:34 PM, Shaun Thomas wrote:
> If not, it seems like a valid configurable. We set our random_page_cost to
> 1.5 once the DB was backed by NVRAM. I could see that somehow influencing
> precedence of a backwards index scan. But even then, SSDs and their ilk
> react more like…
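For reference, that is plain postgresql.conf material (the 1.5 value is
the one quoted above):

# postgresql.conf -- lower random_page_cost when storage is NVRAM/SSD,
# where random reads cost about the same as sequential ones
random_page_cost = 1.5
seq_page_cost    = 1.0   # default, shown for contrast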
Greetings.
I recently ran into a problem with the planner opting for a sequential scan
rather than a bitmap heap scan because the stats suggested that my delete
query was going to affect 33% of the rows, rather than the 1% it really
was. I was able to follow the planner's logic and came to the real…
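One standard remedy when the estimate is that far off (a sketch; table
and column names are placeholders) is to raise the per-column
statistics target and re-analyze:

-- more histogram buckets and MCV entries for the skewed column,
-- so 1%-vs-33% misestimates become less likely
alter table my_table alter column my_column set statistics 1000;
analyze my_table;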
Oh, I'm using 8.2
On Wed, Mar 16, 2011 at 3:40 PM, Derrick Rice wrote:
> Greetings.
>
> I recently ran into a problem with a planner opting for a sequential scan
> rather than a bitmap heap scan because the stats suggested that my delete
> query was going to affect 33% of the rows, rather than th…
Claudio Freire wrote:
> Forgive the naive question...
> but...
>
> Aren't all index scans, forward or backward, random IO?
No. Some could approach that; but, for example, an index scan
immediately following a CLUSTER on the index would be totally
sequential on the heap file access and would…
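As a concrete illustration (a sketch; names are placeholders):

-- physically reorder the heap to match the index; index scans in that
-- order then read the heap nearly sequentially
cluster my_table using my_index;
analyze my_table;  -- refresh statistics (including correlation) afterwards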
Derrick Rice wrote:
> I recently ran into a problem with a planner opting for a
> sequential scan rather than a bitmap heap scan because the stats
> suggested that my delete query was going to affect 33% of the
> rows, rather than the 1% it really was.
> could possibly react by updating the hi…
On Wed, Mar 16, 2011 at 12:09 AM, Tech Madhu wrote:
> hi all,
>
> Setup:
> Sparc, Solaris 10, Postgres 9.0.2, using streaming replication and hot
> standby. 1 master 1 slave
>
> Everything works fine (w.r.t. replication), but the pg_xlog size grows
> continuously, though I had no operations going o…
Thanks Marshall, would I need to change the data type of the *content*
column to tsvector and create a GiST index on it?
Best Regards,
Adarsh
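For what it's worth, the content column can stay text and the index can
be built over an expression instead (a sketch, assuming the default
'english' search configuration; GIN is usually preferred over GiST for
mostly-read text search):

create index page_content_content_fts_idx on page_content
  using gin (to_tsvector('english', content));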
Kenneth Marshall wrote:
On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:
Dear all,
I am facing a problem while creating an index to m…
Thanks, I understand it now :-)
But one doubt of mine is still unclear:
*Original query:*
select count(*) from page_content where (content like '%Militant%'
OR content like '%jihad%' OR content like '%Mujahid%' OR
content like '%fedayeen%' OR content like '%insurgent%' OR content
like '%t…
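Against the expression index sketched above, that chain of ORs
collapses into a single tsquery (a sketch; '|' means OR in tsquery
syntax):

select count(*) from page_content
where to_tsvector('english', content)
  @@ to_tsquery('english', 'militant | jihad | mujahid | fedayeen | insurgent');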