go up with actual time, you're fine.
2. You should try to ensure that costs go up linearly with actual time.
3. You should try to ensure that costs are as close as possible to actual time.
4. The number "4".
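For what it's worth, the easiest way to put the two numbers side by side is
EXPLAIN ANALYZE; this example (and the questions table it uses) is mine, not
from the thread:

    -- Estimated cost vs. measured time for the same plan:
    EXPLAIN ANALYZE SELECT * FROM questions WHERE id = 1;
    -- "cost=0.00..8.27" is in arbitrary planner units, while "actual time" is
    -- in milliseconds, so only the relative ordering of plans is meaningful.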
Jay Levitt
Tom Lane wrote:
Jay Levitt writes:
If the query was more like
select questions.id
from questions
join (
    select sum(u.id) as id
    from users as u
    group by u.id
) as s
on s.id = questions.user_id
where questions.id = 1;
would you no longer be surprised that it scanned all user rows
ing" helps; we'll just have to rephrase the
query.
Jay
Kevin Grittner wrote:
Merlin Moncure wrote:
Well, this may not fit the OP's 'real' query
Right, if I recall correctly, the OP said it was simplified down as
far as it could be and still have the issue show.
but the inner subquery is probably better written as a semi-join
(WHERE EXISTS).
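A minimal sketch of that rewrite, assuming the simplified query Tom quoted
above rather than the OP's real query:

    select questions.id
    from questions
    where questions.id = 1
      and exists (
          select 1
          from users as u
          where u.id = questions.user_id
      );
    -- The semi-join lets the planner push questions.user_id down into the
    -- users scan, so an index on users.id can be used instead of reading
    -- every row.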
Kevin Grittner wrote:
Jay Levitt wrote:
I don't get why the GROUP BY in this subquery forces it to scan
the entire users table (seq scan here, index scan on a larger
table) when there's only one row in users that can match:
Are you sure there's a plan significantly fas
Jay Levitt wrote:
And yep! When I do a CREATE TABLE AS from that view, and add an index on
user_id, it works just as I'd like.
Or not. Feel free to kick me back over to pgsql-novice, but I don't get why
the GROUP BY in this subquery forces it to scan the entire users table (seq
anyway, so that's what I'll do.
Thanks for the push in the right direction..
Jay
that matches questions.user_id?
Jay Levitt
Jay Levitt wrote:
Yes, that patch works great! Oddly enough, the workaround now does NOT work;
functions returning SETOF named composite types don't get inlined, but
functions returning the equivalent TABLE do get inlined. Let me know if you
need a failcase, but the bug doesn't actually affect me now :)
Jay
>
> create type matcher_result as (user_id int, match int);
>
> create or replace function matcher() returns setof matcher_result as ...
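For comparison, a sketch of the RETURNS TABLE form that did get inlined; the
body below is only a placeholder assumption, since the real matching logic
isn't shown in the thread:

    create or replace function matcher()
    returns table (user_id int, match int) as $$
        -- placeholder body; the thread does not show the real query
        select u.id as user_id, 1 as match
        from users as u;
    $$ language sql stable;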
What other info can I
provide? id is int, gender is varchar(255), and it's happening on
9.0.4...
Tom Lane wrote (November 3, 2011, 2:41 PM):
Jay Levitt writes:
I'm confused. I have a now-trivial SQL function that, unrestricted, would
scan my whole users table. When I paste the body of the function as a
subquery and restrict it to one row, it only produces one row. When I paste
the body of the function into a view and restrict it to one row, it produ
l perform great but it will be difficult to
maintain, and it will be inelegant and a kitten will die. My tools
are CTEs, subqueries, aliases, SQL functions, PL/pgSQL functions, and
views (and other tools?) What optimizations do each of those prevent?
We're on 9.0 now but will happily upgrade to 9.1 if
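As one concrete example of such a fence on 9.0/9.1 (this illustration is
mine, not from the thread): WITH is always materialized there, so a
restriction applied outside the CTE never reaches the table, while the
equivalent subquery is flattened.

    -- CTE: users is scanned in full, then filtered (fence on 9.0/9.1):
    with s as (select id from users)
    select * from s where id = 1;

    -- Subquery: flattened, so the id = 1 restriction reaches users
    -- and can use its index:
    select * from (select id from users) as s where id = 1;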
frequency writes to a postgres database?
- Jay
I suspect your table has an index, or checkpoint_segments is small, which
leads PG to checkpoint frequently.
If the table has an index or constraint, drop it, do the COPY, and after the
copy finishes, recreate the index or constraint.
If checkpoint_segments is small, enlarge it.
And you can also turn fsync off wh
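A sketch of that load pattern; the table, index, and file names here are
made up:

    drop index if exists idx_big_table_user_id;        -- drop secondary indexes first
    copy big_table from '/tmp/big_table.csv' with csv; -- bulk load
    create index idx_big_table_user_id on big_table (user_id);
    -- and in postgresql.conf, something like: checkpoint_segments = 32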
Well, that's what I thought when I first started writing
the tables...
To Jay: Thanks for keeping it short and simple. "I'd like a sequence, which is
unique and uses just 8 bytes (bigint). When querying other tables it will be
faster, and the disk space smaller than a UUID (40 bytes)." I'm
22088-13072
TradeManager ID: jaymo   Taobao ID: jackem
Company website: www.alisoft.com
wiki: http://10.0.32.21:1688/confluence/pages/viewpage.action?pageId=10338
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jay
Sent: August 11, 2008 15:35
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Using PK value as a S
I have a table named table_Users:
CREATE TABLE table_Users (
UserID character(40) NOT NULL default '',
Username varchar(256) NOT NULL default '',
Email varchar(256) NOT NULL default ''
etc...
);
The UserID is a character(40) and is generated using UUID function. We
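A sketch of the bigint-sequence alternative being discussed; the exact DDL
below is mine, not from the thread:

    CREATE TABLE table_Users (
        UserID   bigserial     PRIMARY KEY,          -- 8-byte surrogate key
        UserUUID character(40) NOT NULL default '',  -- keep the UUID only if needed externally
        Username varchar(256)  NOT NULL default '',
        Email    varchar(256)  NOT NULL default ''
    );
    -- Other tables then join on the 8-byte UserID instead of a 40-byte string.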
ort transaction within function yet?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Heikki Linnakangas
Sent: June 25, 2008 18:11
To: jay
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgresql update op is very very slow
jay wrote:
> I've a table with abou
Conover
Sent: June 25, 2008 13:02
To: jay
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgresql update op is very very slow
On Jun 24, 2008, at 9:12 PM, jay wrote:
>
> I've a table with about 34601755 rows ,when I execute 'update
> msg_table set
>
I've a table with about 34601755 rows. When I execute 'update msg_table set
type=0;' it is very, very slow; it takes several hours and still does not
complete. Why is PostgreSQL so slow? Is it the PG MVCC problem?
But when I try it on MySQL, with the same table and rows, it only takes about
340 seconds.
Any idea for
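One common mitigation, offered as my own sketch rather than the thread's
advice: skip rows that already hold the target value, and update in
key-range batches so VACUUM can reclaim dead tuples between steps (id is an
assumed primary key column):

    -- Don't rewrite rows that are already correct:
    update msg_table set type = 0 where type is distinct from 0;

    -- Or work through the table in batches:
    update msg_table set type = 0
    where id between 1 and 1000000
      and type is distinct from 0;
    vacuum msg_table;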
Hey Richard,
Thanks again for the reply, it's great to hear some feedback. So once again,
here we go:
On 7/30/07, Richard Huxton <[EMAIL PROTECTED]> wrote:
>
> Jay Kang wrote:
> > Thanks for the reply Richard, but I guess I didn't explain myself well. I ha
e a better solution or is this wishful thinking?
On 7/30/07, Richard Huxton <[EMAIL PROTECTED]> wrote:
>
> Jay Kang wrote:
> > Hello,
> >
> > I'm currently trying to decide on a database design for tags in my web
> > 2.0 application. The problem I'm
on the schoolname field, because it accepts varchar
not integer. There seems to be some design that would better fit my needs.
I'm asking you guys for a little assistance.
--
Regards,
Jay Kang
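For reference, the usual normalized shape for that kind of tagging schema;
this sketch and its names are mine, not what the thread settled on:

    CREATE TABLE tags (
        tag_id serial PRIMARY KEY,
        name   varchar(64) NOT NULL UNIQUE
    );
    CREATE TABLE item_tags (
        item_id int NOT NULL,                          -- the tagged entity
        tag_id  int NOT NULL REFERENCES tags(tag_id),
        PRIMARY KEY (item_id, tag_id)
    );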
Thanks,
Jay.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Jeff Trout
Sent: Wednesday, February 15, 2006 6:23 AM
To: Jay Greenfield
Cc: 'Tom Lane'; 'Stephen Frost'; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgres sl
> Hmm, the large number of columns might have something to do with it ...
> what datatypes are the columns?
All sorts, but mostly float4 and varchar(2 to 10)
Jay
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 14, 2006 1:03 PM
To: Jay Greenfi
"Total runtime: 4472646.988 ms"
> Is this table exceptionally large in some way (ie: lots
> of columns)?
The table is 1.2 million rows X 246 columns. The only index is the primary
key. I will try to remove that index to see if that improves performance at
all.
Jay
-Origi
Is it possible to configure Postgres to behave like Access - a single user
and use as much of the resources as required?
Thanks,
Jay.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Scott Marlowe
Sent: Tuesday, February 14, 2006 8:05 AM
To: Jay
tested.
work_mem: 1,000 to 2,000,000
temp_buffers: 1,000 to 10,000
shared_buffers: 1,000 to 64,000
sort_mem: 1,024,000
fsync on / off
Why does Access run so much faster? How can I
get Postgres to run as fast as Access?
Thanks,
Jay
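For anyone replaying those tests, the session-level settings can be tried
with SET; the values below are only illustrative, and shared_buffers/fsync
require a postgresql.conf change plus a restart:

    SHOW shared_buffers;
    SET work_mem = '64MB';      -- per sort/hash operation, not per connection
    SET temp_buffers = '32MB';  -- must be set before temp tables are used
    -- fsync = off trades crash safety for speed; only for disposable test data.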