On 17/12/16 23:04, Tom Lane wrote:
> [...] so that you don't need to use
> SELECT DISTINCT? The sort/unique steps needed to do DISTINCT are
> eating a large part of the runtime,
Does a hash join result in a set of buckets that are then read out
in order? It might, unless the sort method takes advantage [...]
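For reference, the usual shape of the rewrite Tom is suggesting (a sketch with invented table names, not the original query): an EXISTS test produces each outer row once, so no de-duplication pass is needed.

select t.id
from t
where exists (select 1 from u where u.t_id = t.id);
-- rather than:
select distinct t.id
from t
join u on u.t_id = t.id;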
On 01/03/2013 10:54 PM, Alex Vinnik wrote:
I have implemented my first app using a PG DB and thought for a minute (maybe
two) that I knew something about PG, but the problem below totally destroyed my
confidence :). Please help me to restore it.
https://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Jeremy
On 22/11/2012 00:08, Craig Ringer wrote:
WITH
FENCE foo AS (SELECT ...),
bar AS (SELECT ...)
SELECT * FROM bar;
Are we fencing just foo? Or all expressions?
WITH foo AS (FENCED SELECT ...),
bar AS (SELECT ...)
SELECT ... ;
--
Jeremy
On 09/27/2012 10:22 PM, M. D. wrote:
On 09/27/2012 02:55 PM, Scott Marlowe wrote:
On Thu, Sep 27, 2012 at 2:46 PM, M. D. wrote:
select item.item_id,item_plu.number,item.description,
(select number from account where asset_acct = account_id),
(select number from account where expense_acct = account_id) [...]
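A common rewrite for correlated scalar subselects like these, assuming asset_acct and expense_acct are item columns referencing account.account_id (a sketch covering only the visible part of the query; the item_plu join condition is assumed):

select item.item_id, item_plu.number, item.description,
       asset.number   as asset_acct_number,
       expense.number as expense_acct_number
from item
join item_plu on item_plu.item_id = item.item_id      -- join condition assumed
left join account asset   on asset.account_id   = item.asset_acct
left join account expense on expense.account_id = item.expense_acct;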
On 2011-11-20 15:00, Jon Nelson wrote:
Do you happen to recall if disk I/O is
counted as user or system time?
Neither, traditionally. Those times are cpu times;
they only account for what the cpu was doing.
The disks could be working in parallel as a result
of cpu actions, and probably were [...]
On 2011-11-17 17:10, Jon Nelson wrote:
external sort ended, 7708696 disk blocks used: CPU 359.84s/57504.66u
sec elapsed 58966.76 sec
Am I to understand that the CPU portion of the sorting only took 6
minutes but the sort itself took almost 16.5 hours and used approx
60GB of disk space?
I real[...]
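(Reading that trace_sort line: the "s" figure, 359.84 s, is system CPU and the "u" figure, 57504.66 s, is user CPU, so CPU time accounts for roughly 57864 of the 58967 elapsed seconds; the six-minute reading counts only the system component.)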
On 2011-07-09 18:43, Tom Lane wrote:
Heikki Linnakangas writes:
On 09.07.2011 00:36, Anish Kejariwal wrote:
My guess as to what happened:
-because the icecream parent table has zero records, the query optimizer
chooses the incorrect execution plan
-when I do select * from icecream, the optimizer [...]
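A quick way to see what the planner believes about such a table, and the usual remedy (a sketch; only the table name comes from the thread):

select relname, reltuples, relpages
from pg_class
where relname like 'icecream%';  -- the estimates the planner works from
analyze icecream;                -- refresh them explicitly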
On 2011-02-03 21:51, Mark Kirkwood wrote:
The cases I've seen in production typically involve "outgrowing" optimizer
parameter settings (e.g. work_mem, effective_cache_size) as the application dataset gets
bigger over time.
An argument in favour of the DBMS maintaining a running estimate of s[...]
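Today the manual counterpart looks like this (values illustrative only; ALTER SYSTEM is PostgreSQL 9.4+, later than this thread):

alter system set work_mem = '64MB';
alter system set effective_cache_size = '24GB';
select pg_reload_conf();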
On 2011-02-03 23:29, Robert Haas wrote:
Yeah, but you'll be passing the entire table through this separate
process that may only need to see 1% of it or less on a large table.
It doesn't sound too impossible to pass only a percentage, starting high
and dropping towards 1% once the loaded size has [...]
On 2010-12-20 15:48, Kenneth Marshall wrote:
And how exactly, given that the kernel does not know whether the CPU is
active or waiting on ram, could an application do so?
Exactly. I have only seen this data from hardware emulators. It would
be nice to have... :)
There's no reason that the c[...]
On 10/04/2010 04:22 AM, Greg Smith wrote:
I had a brainstorming session on this subject with a few of the hackers in the community in this area a
while back, which I haven't had a chance to do anything with yet (it exists only as a pile of scribbled notes
so far). There are a couple of ways to collect [...]
On 03/28/2010 05:27 PM, Tom Lane wrote:
This is intentional: the size estimates for a never-yet-analyzed
table are *not* zero. This is because people frequently create and load
up a table and then immediately query it without an explicit ANALYZE.
Does the creation of an index also populate such estimates?
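One way to check that empirically (a sketch with an invented table): compare the stored estimates before and after the index build.

create table t (x int);
insert into t select generate_series(1, 100000);
select reltuples, relpages from pg_class where relname = 't';  -- before
create index t_x_idx on t (x);
select reltuples, relpages from pg_class where relname = 't';  -- after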
On 01/11/2010 02:53 AM, Robert Haas wrote:
On Sun, Jan 10, 2010 at 9:04 AM, Jeremy Harris wrote:
Needing to use an external (on-disk) sort method, when taking
only 90MB, looks odd.
[...]
Well, you'd need to have work_mem > 90 MB for that not to happen, and
very few people can afford [...]
On 01/10/2010 12:28 PM, Mathieu De Zutter wrote:
Sort (cost=481763.31..485634.61 rows=1548520 width=338) (actual
time=5423.628..6286.148 rows=1551923 loops=1)
Sort Key: event_timestamp
Sort Method: external merge  Disk: 90488kB
-> Seq Scan on log_event (cost=0.00..79085.92 rows=1548520 width=338) [...]
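A session-local way to test Robert's point (the 128MB value is illustrative; the in-memory footprint can exceed the 90488kB on-disk figure):

set work_mem = '128MB';
explain analyze
select * from log_event
order by event_timestamp;
-- hope for: Sort Method: quicksort  Memory: ...kB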
On 12/24/2009 05:12 PM, Richard Neill wrote:
Of course, with a server machine, it's nearly impossible to use mdadm
raid: you are usually compelled to use a hardware raid card.
Could you expand on that?
- Jeremy
On 12/02/2009 11:31 PM, Ashish Kumar Singh wrote:
While importing this dump into the database I have noticed that initially
query response time is very slow but it does improve with time.
Any suggestions to improve performance after the dump is imported into the
database will be highly appreciated!
Ana[...]
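The standard first step after a restore, and quite possibly where the truncated reply was headed (a sketch):

vacuum analyze;  -- gather planner statistics (and set hint bits) for all tables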
On 11/03/2009 07:16 PM, Subbiah Stalin-XCGF84 wrote:
All,
I'm trying to understand the free memory usage and why it falls below
17G sometimes and what could be causing it. Any pointers would be
appreciated.
root@prod1 # prtconf
System Configuration: Sun Microsystems sun4u
Memory size: 32768 Megabytes [...]
On 10/30/2009 08:01 PM, Greg Stark wrote:
On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu wrote:
Any relational database worth its salt has partitioning for a reason.
1. Maintenance. You will need to delete data at some point (cleanup).
Partitions are the only way to do it effectively.
This is t[...]
On 10/30/2009 12:43 PM, Merlin Moncure wrote:
On Thu, Oct 29, 2009 at 11:40 AM, Steve Crawford
wrote:
Use a parent table and 20 child tables. Create a new child every day and
drop the 20-day-old table. Table drops are far faster and lower-impact than
a DELETE FROM on a 120-million-row table. Index[...]
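A minimal sketch of that scheme using the inheritance partitioning of the era (names invented; current PostgreSQL would use declarative PARTITION BY RANGE instead):

create table events (created date not null, payload text);
create table events_20091030
  (check (created = date '2009-10-30'))
  inherits (events);
-- daily job: create tomorrow's child, then drop the oldest
drop table events_20091010;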
Scott Carey wrote:
Well, what does a revolution like this require of Postgres? That is the
question.
[...]
#1 Per-Tablespace optimizer tuning parameters.
... automatically measured?
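(Per-tablespace cost overrides did eventually appear, in PostgreSQL 9.0, though set by hand rather than measured; a sketch with an invented tablespace name:)

alter tablespace fast_ssd
  set (random_page_cost = 1.1, seq_page_cost = 1.0);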
Cheers,
Jeremy
John Beaver wrote:
I'm having a strange problem with a query. The query is fairly simple,
with a few constants and two joins. All relevant columns should be
indexed, and I'm pretty sure there aren't any type conversion issues.
But the query plan includes a fairly heavy seq scan. The only possib[...]
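A standard way to dig into an unwanted seq scan (the query and names below are invented stand-ins, since the excerpt doesn't show the real ones):

explain analyze
select * from t1 join t2 using (id) where t1.flag = 42;
set enable_seqscan = off;  -- diagnosis only, never for production
explain analyze
select * from t1 join t2 using (id) where t1.flag = 42;
-- if the index plan appears and is cheaper, suspect stale stats or cost settings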
bijayant kumar wrote:
select * from tablename takes 10-15 mins to give the output
There are better ways to dump data than using a database; that's
not a useful query.
Any query (select, insert, update), simple or complex, behaves in the same way.
Have you set up suitable indexes for your operations?
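In practice that means matching indexes to the predicates the queries actually use (an invented example):

create index orders_customer_idx on orders (customer_id);
explain analyze
select * from orders where customer_id = 42;  -- check for an index scan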
Tom Lane wrote:
One additional point: this means that one transaction in every 32K
writing transactions *does* have to do extra work when it assigns itself
an XID, namely create and zero out the next page of pg_clog. And that
doesn't just slow down the transaction in question, but the next few
g[...]
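(Where the 32K comes from: commit status takes two bits per transaction, i.e. four transactions per byte, so one 8192-byte clog page covers 8192 × 4 = 32768 transactions.)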
Bill Moran wrote:
This is a FAQ, it comes up on an almost weekly basis.
I don't think so; note the "where" clause:
- select count(*) from gene_prediction_view where gene_ref = 523
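Assuming gene_prediction_view is a plain table despite the name, the first things to check and try (a sketch):

explain select count(*) from gene_prediction_view where gene_ref = 523;
create index gene_prediction_view_gene_ref_idx
  on gene_prediction_view (gene_ref);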
Cheers,
Jeremy
Paul Lambert wrote:
"-> Merge Join
(cost=35.56..19732.91 rows=12 width=48) (actual time=0.841..2309.828
rows=206748 loops=1)"
I'm no expert, but in the interests of learning: why is the
rows estimate so far out for this join?
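A generic first response to a misestimate like that (a sketch, not from this thread; real table and join-column names would come from the full plan):

analyze t;                                            -- refresh stats first
alter table t alter column join_col set statistics 1000;  -- names invented
analyze t;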
Thanks,
Jeremy
Tilmann Singer wrote:
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [20070728 21:05]:
Let's try putting the sort/limit in each piece of the UNION to speed them up
separately.
SELECT * FROM (
(SELECT * FROM large_table lt
WHERE lt.user_id = 12345
ORDER BY created_at DESC LIMIT 10) AS q1
UNION
(SELECT [...]
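For completeness, the shape the suggestion takes once the excerpt cuts off; the second branch's condition is reconstructed, not quoted, and the original's per-branch aliases (AS q1) are dropped because UNION arms cannot be aliased:

SELECT * FROM (
  (SELECT * FROM large_table lt
   WHERE lt.user_id = 12345
   ORDER BY created_at DESC LIMIT 10)
  UNION
  (SELECT * FROM large_table lt
   WHERE lt.user_id IN (SELECT contact_id FROM contacts
                        WHERE user_id = 12345)  -- condition invented
   ORDER BY created_at DESC LIMIT 10)
) AS q
ORDER BY created_at DESC LIMIT 10;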