> On 05/30/2015 09:46 AM, Ashik S L wrote:
>> We are using PostgreSQL version 8.4.17.
>> The Postgres DB size is 900 MB and we are inserting 273 rows at once, and
>> each row is 60 bytes. Every time we insert 16380 bytes of data.
>
> Way back when, I was inserting a lot of rows of data (millions of rows)
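[The arithmetic in the quoted message is consistent: 273 × 60 = 16380 bytes per batch. If those 273 rows are currently sent as separate INSERT statements, one easy win even on 8.4 is a single multi-row VALUES insert inside one transaction — multi-row VALUES has been supported since 8.2. A minimal sketch; the table and column names are made up for illustration:]

```sql
BEGIN;
-- One statement carries the whole batch, so the server parses and
-- plans once instead of 273 times; only the first rows are shown.
INSERT INTO samples (id, payload) VALUES
    (1, 'row 1 payload'),
    (2, 'row 2 payload'),
    (3, 'row 3 payload');   -- ...and so on up to row 273
COMMIT;
```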
On 05/31/15 18:22, Tom Lane wrote:
Tomas Vondra writes:
> On 05/31/15 13:00, Peter J. Holzer wrote:
>> (There was no analyze on facttable_stat_fta4 (automatic or manual) on
>> facttable_stat_fta4 between those two tests, so the statistics on
>> facttable_stat_fta4 shouldn't have changed - only those for term.)
> So maybe there was
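[Whether an analyze — manual or auto — actually ran between the two tests can be checked directly rather than inferred; a sketch against the standard statistics views, with the table names taken from the thread:]

```sql
-- Last manual and automatic analyze times for the tables in question
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('facttable_stat_fta4', 'term');

-- Per-column statistics the planner actually uses
SELECT tablename, attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'term';
```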
On 05/31/15 13:00, Peter J. Holzer wrote:
[I've seen in -hackers that you already seem to have a fix]

On 2015-05-30 15:04:34 -0400, Tom Lane wrote:
> Tomas Vondra writes:
> > Why exactly does the second query use a much slower plan I'm not sure. I
> > believe I've found an issue in planning semi joins (reported to
> > pgsql-hackers a f
"Peter J. Holzer" writes:
>>> Merge Semi Join (cost=316864.57..319975.79 rows=1 width=81) (actual
>>> time=7703.917..30948.271 rows=2 loops=1)
>>> Merge Cond: ((t.term)::text = (f.berechnungsart)::text)
>>> -> Index Scan using term_term_idx on term t (cost=0.00..319880.73
>>> rows=636 wid
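[The original query is not shown in these excerpts, but a merge semi join with that merge condition typically comes from an EXISTS (or IN) subquery; a query of roughly this shape — column and table names taken from the plan above, the rest guessed — could produce such a plan:]

```sql
EXPLAIN ANALYZE
SELECT t.term
FROM term t
WHERE EXISTS (
    SELECT 1
    FROM facttable_stat_fta4 f
    WHERE f.berechnungsart = t.term   -- the Merge Cond from the plan
);
```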