just a
case of optimisations being removed in the RC?
Cheers,
Anton
;::text))
Rows Removed by Filter: 163989
Heap Blocks: exact=1434533 lossy=3404597
-> Bitmap Index Scan on idx_accepted_mid (cost=0.00..101148.46 rows=6301568 width=0) (actual time=4806.816..4806.816 rows=6680183 loops=1)
      Index Cond: ((message_id)::text > '20151225'::text)
Planning time: 76.707 ms
Execution time: 488939.880 ms
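One detail worth noting in the plan above: most Heap Blocks are lossy (3404597 lossy vs. 1434533 exact), which typically means work_mem was too small to hold an exact bitmap. A hedged sketch of testing that; the original query text is elided in this excerpt, so the statement below is only a placeholder:

```sql
-- Assumption: "..." stands for the original query, which is not shown
-- in the excerpt. Lossy bitmap pages usually shrink or disappear once
-- work_mem is large enough to hold an exact bitmap.
SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- re-run the original query
```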
Any suggestions on something else to try?
Thanks again,
Anton
As space isn't ever likely to be a problem, and there are no updates (only
copy) to these tables, I'll keep it like this to avoid having to reload the
entire DB.
Thanks very much for your help.
Cheers,
Anton
rmance degradations compared
with the RC. Not what I was hoping for with a much more powerful machine!
Were these optimisations really dangerous? Is there any (easy and safe) way
to get them back or would I need to reinstall an RC version?
Thanks again,
Anton
Does anyone have any ideas? All data are loaded into this table via copy
and no updates are done. Autovacuum settings weren't changed (and autovacuum is on
in both). Do I need to increase shared_buffers to half of available memory for
the planner to make certain optimisations? Anything else I'm missing or can
try? The new server has been running for almost two weeks now so I would
have thought things would have had a chance to settle down.
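Common guidance (not stated in this thread) is to give shared_buffers roughly a quarter of RAM on a dedicated box, not half. A sketch, assuming a hypothetical 32 GB machine:

```sql
-- Assumption: hypothetical server with 32 GB RAM. Unlike reloadable
-- settings, shared_buffers only takes effect after a server restart.
ALTER SYSTEM SET shared_buffers = '8GB';
```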
Cheers,
Anton
Hello PostgreSQL gurus, is it possible to tune a query with a join on a random
string?
I know that it is not a real-life example, but I need it for tests.
soe=# explain
soe-# SELECT ADDRESS_ID,
soe-# CUSTOMER_ID,
soe-# DATE_CREATED,
soe-# HOUSE_NO_OR_NAME,
soe-#
For 2 years I have tested 16 discs for speed only. I sell the discs
after the test. I got 6 returns for failure within those 2 years - it's really
happening to the mainstream discs.
--
Kind regards
Anton Rommerskirchen
--
Sent via pgsql-performance mailing list (pgsql-performance@p
Can anybody briefly explain to me how each postgres process allocates
memory for its needs?
I mean, what is the biggest size of malloc() it may want? How many
such chunks? What is the average size of the allocations?
I think that at first it allocates a contiguous piece of shared memory
for "shared buffers"
firmware).
Perhaps it's time to act and not only to complain about the fact.
(BTW: found funny bonnie++ results for my Intel 160 GB Postville and my Samsung PB22
after using the sam for now approx. 3 months+ ... my conclusion: NOT all SSDs
are equal ...)
best regards
anton
--
ATRSoft GmbH
Bi
I have to insert rows into a table where 95% fail with a primary key unique_violation.
I've tested the 2 examples below:
1)
BEGIN
    INSERT INTO main (name, created)
    VALUES (i_name, CURRENT_TIMESTAMP AT TIME ZONE 'GMT');
EXCEPTION WHEN unique_violation THEN
    RETURN 'error: already exists';
END;
RETURN 'ok: s
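For reference, a hedged completion of the fragment above as a full function; the function name, parameter name, and the success string are assumptions, since the original return string is truncated:

```sql
-- Hypothetical wrapper around example 1: the EXCEPTION block opens a
-- subtransaction on every call, which is what makes this pattern slow
-- when 95% of the inserts collide.
CREATE OR REPLACE FUNCTION insert_main(i_name text) RETURNS text AS $$
BEGIN
    INSERT INTO main (name, created)
    VALUES (i_name, CURRENT_TIMESTAMP AT TIME ZONE 'GMT');
    RETURN 'ok: inserted';
EXCEPTION WHEN unique_violation THEN
    RETURN 'error: already exists';
END;
$$ LANGUAGE plpgsql;
```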
On 20/12/2007, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Anton Melser wrote:
> > Hi,
> > Sorry but I couldn't find the answer to this...
> >
> > I would like to empty all stats (pg_stat_all_tables probably mostly)
> > so I can get an idea of what
Hi,
Sorry but I couldn't find the answer to this...
I would like to empty all stats (pg_stat_all_tables probably mostly)
so I can get an idea of what's going on now. Is this possible? I
didn't want to just go deleting without knowing what it would do...
Thanks
Anton
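For reference, the statistics counters for the current database can be zeroed with pg_stat_reset():

```sql
-- Resets the counters behind pg_stat_all_tables, pg_stat_all_indexes,
-- etc. for the database you are connected to (superuser required).
SELECT pg_stat_reset();
```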
--
echo '
2007/10/27, Tom Lane <[EMAIL PROTECTED]>:
> Anton <[EMAIL PROTECTED]> writes:
> > I want to ask about a problem with partitioned tables (it was discussed some
> > time ago, see below). Is it fixed somehow in 8.2.5 ?
>
> No. The patch you mention never was considered
> of index scan of a child table by recognizing sort
> order of the append node. Kurt Harriman did the work.
...
>
> On 8/24/07 3:38 AM, "Heikki Linnakangas" <[EMAIL PROTECTED]> wrote:
>
> > Anton wrote:
Just a random thought/question...
Are you running anything else on the machine? When you say "resource usage", do
you mean hd space, memory, processor, ???
What are your values in top?
More info...
Cheers
Anton
On 27/08/2007, Bill Moran <[EMAIL PROTECTED]> wrote:
> In response to
> > =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;
> >                         QUERY PLAN
> > ----------------------------------------------------------------
> > Limit (cost=824637.69..824637.69 rows=1 width=32)
> >
Hi.
I just created a partitioned table, n_traf, sliced by month
(n_traf_y2007m01, n_traf_y2007m02... and so on, see below). They are
indexed by the 'date_time' column.
Then I populated it (the last value has date 2007-08-...) and ran VACUUM
ANALYZE on n_traf_y2007..., all of it.
Now I try to select the latest va
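On releases where the append node cannot preserve sort order, a common workaround was to push the LIMIT into each partition by hand. A sketch using the partition names from the post (only two partitions shown):

```sql
-- Each branch can use its own partition's date_time index; the outer
-- sort then only has to order a handful of candidate rows.
SELECT date_time FROM (
    SELECT date_time FROM n_traf_y2007m01 ORDER BY date_time DESC LIMIT 1
    UNION ALL
    SELECT date_time FROM n_traf_y2007m02 ORDER BY date_time DESC LIMIT 1
) latest
ORDER BY date_time DESC LIMIT 1;
```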
out 200 bytes per row) is taking up 1.7gig!!!
Vive le truncate table, and vive le vacuum full!
:-)
Anton
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
?
3. Do you expand the cache over more than one NUMA node?
> Thanks,
>
> Alex
>
--
ATRSoft GmbH
Hello !
On Friday, 26 January 2007 at 12:28, John Parnefjord wrote:
> Hi!
>
> I'm planning to move from mysql to postgresql as I believe the latter
> performs better when it comes to complex queries. The mysql database
> that I'm running is about 150 GB in size, with 300 million rows in the
> largest
Hi. I have a performance problem with this simple query:
SELECT collect_time FROM n_traffic JOIN n_logins USING (login_id)
WHERE n_logins.account_id = '1655' ORDER BY collect_time LIMIT 1;
I must add that it occurs when there are no rows in n_traffic for these
login_ids. Where there is at least
Limit (cost=0.00..2026.57 rows=1 width=8) (actual
time=5828.681..5
> I have 2 tables - one with call numbers and another with call codes.
> The structure is almost like this:
...
How long does this query take?
SELECT code FROM a_voip_codes c, a_voip v where v.called_station_id
like c.code ||
'%' order by code desc limit 1
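As an aside (not from the thread): a btree index can serve LIKE 'prefix%' only with the pattern opclass (or in the C locale), and only when the pattern is a constant; with a per-row pattern like c.code || '%' the planner cannot use it directly. A sketch with the table name from the thread (the index name is hypothetical):

```sql
-- Helps constant-prefix queries of the form
--   WHERE called_station_id LIKE '7495%'
CREATE INDEX a_voip_called_station_idx
    ON a_voip (called_station_id text_pattern_ops);
```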
billing=# explain analyze SELECT code F
Hi.
I have 2 tables - one with call numbers and another with call codes.
The structure is almost like this:
billing=# \d a_voip
      Table "public.a_voip"
 Column |  Type  | Modifiers
--------+--------+-----------
Hi, all.
While working on the algorithm of my project I came to a question. Let there
be a table like this (the user+cookie pair is the primary key).
INT user
INT cookie
INT count
Periodically (with a period of 10 minutes) this PostgreSQL table is
updated with my information.
The main problem is that some of the pairs (us
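The update pattern described above is a classic upsert; on PostgreSQL 9.5 and later (well after this post) it can be written with ON CONFLICT. A sketch, with a hypothetical table name since the post does not give one ("user" is quoted because it is a reserved word):

```sql
-- Insert a new (user, cookie) pair, or add to the existing count
-- when the primary key already exists.
INSERT INTO counters ("user", cookie, count)
VALUES (42, 7, 5)
ON CONFLICT ("user", cookie)
DO UPDATE SET count = counters.count + EXCLUDED.count;
```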