systems. I
haven't fully thought that through, so I'm wondering if anyone sees reasons
that wouldn't work, or if some other language would be a better or more
natural choice.
Cheers,
Ken
--
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/*
ken.tan
Hey John, and thanks for the input.
On Wed, Oct 16, 2013 at 11:00 AM, John Melesky john.mele...@rentrakmail.com
wrote:
On Wed, Oct 16, 2013 at 12:20 AM, Ken Tanzer ken.tan...@gmail.com wrote:
First and foremost (and primarily directed to people who are kind enough
to provide help
!
Cheers,
Ken
for any views or custom functions--I don't think they're in the
analyze output, but if they are please let me know.
Any comments or suggestions for improvement would be most welcome. Thanks.
Ken
p.s., This script runs fine on my computer (Ubuntu 13.04), but on a
Fedora 11 machine it dies with
pg_analyze_info.sh: line 18: unexpected EOF while looking for matching `)'
pg_analyze_info.sh: line 57: syntax error: unexpected end of file
If anyone knows why, or encounters a similar
This is a known limitation of partitioning. One solution is to use a
recursive stored proc, which can use indexes. Such a solution is
discussed here:
http://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php
Regards,
Ken
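The approach from that thread can also be sketched without a stored procedure. A rough illustration, with hypothetical partition and column names, that lets each child table use its own index instead of a sequential scan over the parent:

```sql
-- Hypothetical children measurements_2009_01 / _02: each inner min() can
-- use that child's index on ts, and the outer min() takes the least result.
SELECT min(child_min) AS overall_min
FROM (
    SELECT min(ts) AS child_min FROM measurements_2009_01
    UNION ALL
    SELECT min(ts) AS child_min FROM measurements_2009_02
) AS per_child;
```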
http://archives.postgresql.org/pgsql-performance/2009-09
the same whether one row is out of place or the table is completely disordered?
Thanks,
Ken
Richard,
What do you mean by summary table? Basically a cache of the query into a
table with replicated column names of all the joins? I'd probably have to
wipe out the table every minute and re-insert the data for each carrier in
the system. I'm not sure how expensive this operation would
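If the cache-table route is worth measuring, a minimal refresh sketch (all table and column names here are hypothetical) would be:

```sql
-- Rebuild the cached join inside one transaction; TRUNCATE is far cheaper
-- than DELETE, and readers block only until the COMMIT.
BEGIN;
TRUNCATE carrier_summary;
INSERT INTO carrier_summary (carrier_id, shipment_count, last_shipment)
SELECT c.id, count(s.id), max(s.date)
FROM carrier c
LEFT JOIN shipment s ON s.carrier_id = c.id
GROUP BY c.id;
COMMIT;
```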
2) Force PG to drop the merge join via SET ENABLE_MERGEJOIN = FALSE;
Actually, it was 312 milliseconds, so it got worse.
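For anyone repeating the experiment, the toggle is session-local and easy to undo; the query below is only a stand-in:

```sql
SET enable_mergejoin = false;   -- affects only the current session
EXPLAIN ANALYZE
SELECT s.* FROM shipment s JOIN carrier c ON c.id = s.carrier_id;
RESET enable_mergejoin;         -- back to the default
```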
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
I'm not sure what the answer is but maybe I can help? Would clustering the
name index make this faster? I thought that would bunch up the pages so the
names were more or less in order, which would improve search time. Just a
guess though.
Ken
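For reference, CLUSTER is a one-time physical reorder and has to be repeated as rows change. A sketch with hypothetical names, using the index-first syntax of the 7.x/8.0 releases under discussion:

```sql
CLUSTER case_name_idx ON case_data;  -- rewrite case_data in index order
ANALYZE case_data;                   -- refresh stats, including correlation
```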
- Original Message -
From: Jeremiah Jahn
Josh,
I did everything you said and my query does perform a bit better. I've been
getting speeds from 203 to 219 to 234 milliseconds now. I tried increasing
the work mem and the effective cache size from the values you provided, but
I didn't see any more improvement. I've tried looking
setup you recommended. That will probably be the
first adjustment to our server if we don't just replace the entire thing.
Thanks again,
Ken
of this from CVS on my
work server.
Ken
? The reason I ask is because this would really
reduce the number of rows it pulls out from the shipment table.
Thanks for comments. I'll try making that date explicit and change the
query to use between to see if that does anything.
Regards and many thanks,
Ken
left join is for eager loading so that I don't have to run a separate
query
to fetch the children for each shipment. This really does improve
performance because otherwise you'll have to make N+1 queries to the
database, and that's just too much overhead.
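Concretely, the trade-off looks like this (the child table and its columns are hypothetical):

```sql
-- N+1 pattern: one query for shipments, then one per shipment for details.
-- A LEFT JOIN collapses it into a single round trip:
SELECT s.id, s.date, d.id AS detail_id, d.quantity
FROM shipment s
LEFT JOIN shipment_detail d ON d.shipment_id = s.id
ORDER BY s.id;
```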
are you saying that you are actually
)
Index Cond: (355 =
person_id)
Total runtime: 312.000 ms
Ken
Ken Egervari [EMAIL PROTECTED] writes:
Okay, here is the explain analyze I managed to get from work.
What platform is this on? It seems very strange/fishy that all the
actual-time values are exact integral milliseconds.
regards, tom lane
My machine is WinXP Professional, Athlon XP 2100, but I get
, or is there something I can do to really speed it up?
Thanks again,
Ken
#random_page_cost = 4
I'm not sure what these do, but I'm guessing the last 2 affect the planner
to do different things with the statistics. Should I increase the first
two?
Regards,
Ken
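random_page_cost is the planner's guess at how much more a random page fetch costs than a sequential one, so lowering it makes index scans look cheaper. It can be tried per session before editing postgresql.conf (the query here is a stand-in):

```sql
SET random_page_cost = 2;  -- default is 4; lower values favor index scans
EXPLAIN ANALYZE SELECT * FROM shipment WHERE carrier_id = 355;
RESET random_page_cost;
```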
pretty
much stuck.
Best regards and many thanks,
Ken
Yes, I'm very well aware of VACUUM and VACUUM
ANALYZE. I've even clustered the date index and so on to ensure faster
performance.
- Original Message -
From: David Parker
To: Ken Egervari ; pgsql-performance@postgresql.org
Sent: Saturday, January 29, 2005 5:04 PM
Well, postgres does what you asked. It will be slow, because you have a
full table join. LIMIT does not change this because the rows have to be
sorted first.
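Put differently, ORDER BY forces the whole result to be sorted before LIMIT discards anything; only an index matching the sort order lets the scan stop early. Schematically, with hypothetical names:

```sql
-- Without a matching index: sort the entire join result, keep 100 rows.
SELECT * FROM shipment ORDER BY date DESC LIMIT 100;
-- With an index on shipment(date), the planner can walk the index
-- backwards and stop after 100 rows:
CREATE INDEX shipment_date_idx ON shipment (date);
```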
I am aware that limit doesn't really affect the execution time all that
much. It does speed up ORM though and keeps the rows to a
Thanks again for your response. I'll try to clarify some metrics; it took me
a few days to figure out what would be the best join order.
By running some count queries on the production database, I noticed there
were only 8 rows in release_code. The filtered column is unique, so that
means
in it. Will a query on child table B be slowed down by the
fact that it inherits from the same table as A. I'm sure the answer is
absolutely yes, and so I guess I'm just looking for corroboration.
Maybe I'll be surprised!
Thanks a bunch,
Ken
On Fri, 2005-01-21 at 08:14, Greg Stark wrote:
ken [EMAIL PROTECTED] writes:
From my understanding, all the data for these columns in all the child
tables will be stored in this one parent table
No, all the data is stored in the child table.
So if you perform a select * from parent
On Wed, 2004-10-13 at 02:21, Robin Ericsson wrote:
Hi,
I sent this to general earlier but I was redirected to performance.
The query has been running ok for quite some time, but after I did a
vacuum on the database, it's very, very slow.
Did you do a VACUUM FULL ANALYZE on the database
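For completeness, the heavyweight form rewrites the tables and refreshes statistics in one pass; on releases of that era it was also worth reindexing afterwards, since old-style VACUUM FULL could bloat indexes (the table name is hypothetical):

```sql
VACUUM FULL ANALYZE;        -- whole database: reclaim space, update stats
REINDEX TABLE measurement;  -- shrink the indexes back down afterwards
```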
not choose
the other index. Instead it reverts to doing a sequential scan of the
entire table and its really slow.
Again, sorry for the long post. Hope someone has experience with either
of these problems.
Ken
I don't have any advice to magically solve this problem. I would
suggest experimenting
and the result is again a very slow query.
I can provide those queries and results but that would only complicate
this already lengthy email and the above seems to be the crux of the
problem anyway.
Any help or thoughts would be greatly appreciated of course.
Thanks,
Ken Southerland
--
--s
Thanks Rod,
This setting has no effect however. If I set statistics to 1000, or
even 0, (and then reanalyze the table) I see no change in the behaviour
of the query plans, i.e., there is still the odd transition in the plans
at diagonalSize = 50.
Ken
On Wed, 2004-06-09 at 13:12, Rod Taylor
On Wed, 2004-06-09 at 13:56, Rod Taylor wrote:
On Wed, 2004-06-09 at 16:50, ken wrote:
Thanks Rod,
This setting has no effect however. If I set statistics to 1000, or
Okay, but you never did send EXPLAIN ANALYZE output. I want to know
what it is really finding.
Ah, sorry, missed
of magnitude in guessing how many rows there are going to
be in this case and thus is not using my index. How can I fix that?
Ken
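For the archives, the usual sequence is sketched below; nrgfeature and diagonalSize appear later in the thread, and the quoting assumes a mixed-case column name. The ANALYZE matters: the new statistics target only takes effect at the next analyze.

```sql
ALTER TABLE nrgfeature ALTER COLUMN "diagonalSize" SET STATISTICS 1000;
ANALYZE nrgfeature;
EXPLAIN ANALYZE SELECT * FROM nrgfeature WHERE "diagonalSize" > 50;
```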
On Wed, 2004-06-09 at 14:29, Rod Taylor wrote:
... and here is the plan with statistics set to 1000 ...
Seq Scan on nrgfeature f (cost=0.00..31675.57 rows
I'm running 7.4.2 on an older Linux box (450MHz AMD K6-III, 450MB RAM)
running kernel 2.6.5. My client is a Java/JDBC program on Windows.
I'm having trouble seeing where the bottleneck in my performance is.
The client uses about 30% CPU. The server uses 70% CPU plus 1.5% I/O
wait. The I/O
Richard Huxton wrote:
On Tuesday 06 April 2004 21:25, Ken Geis wrote:
I am trying to find an efficient way to draw a random sample from a
complex query. I also want it to be easy to use within my application.
So I've defined a view that encapsulates the query. The id in the
driving table
considering just making a sample table and creating an
analogous view around that. I'd like to be able to keep this as simple
as possible though.
Ken
Bruno Wolff III wrote:
I haven't come up with any great ideas for this one. It might be interesting
to compare the explain analyze output from the distinct on query with
and without seqscans enabled.
Can't do that comparison. Remember, with seqscan it fails. (Oh, and
that nested loops solution
method
took maybe 6.5x as long, but the estimate was that it would take 1400x
as long. I think instead of trying out his patch I might actually work
on my application!
Ken
enough to justify special
treatment, and could that be cleanly implemented? Perhaps the aggregate
function can request the data in a certain order, be told that it is
being passed data in a certain order, and return before seeing the
entire set of data.
Food for thought...
Thanks,
Ken Geis
Bruno Wolff III wrote:
On Thu, Aug 28, 2003 at 17:10:31 -0700,
Ken Geis [EMAIL PROTECTED] wrote:
The query I want to run is
select stock_id, min(price_date) from day_ends group by stock_id;
The fast way to do this is:
select distinct on (stock_id) stock_id, price_date
from day_ends
order by stock_id, price_date;
of the table. And yes, I *am* analyzed!
The answers are guaranteed to be correct. See:
http://developer.postgresql.org/docs/postgres/sql-select.html#SQL-DISTINCT
That's good to know. Thanks!
Ken
Bruno Wolff III wrote:
On Thu, Aug 28, 2003 at 20:00:32 -0700,
Ken Geis [EMAIL PROTECTED] wrote:
Bruno Wolff III wrote:
Not according to the optimizer! Plus, this is not guaranteed to return
the correct results.
For it to be fast you need an index on (stock_id, price_date) so that
you can use
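Spelled out (the index name is arbitrary):

```sql
CREATE INDEX day_ends_stock_date_idx ON day_ends (stock_id, price_date);
SELECT DISTINCT ON (stock_id) stock_id, price_date
FROM day_ends
ORDER BY stock_id, price_date;  -- index order makes each group's first row the min
```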
for a couple of years now, but I haven't run these
queries against it.
Ken
Ken Geis wrote:
I went through the code (7.4 beta2) that estimates the cost of an index
scan path. What I need to be sure of is that when running a query in
pgsql that uses only the columns that are in an index, the underlying
table need not be accessed. I know that Oracle does this.
Thinking
Ken Geis wrote:
When run on 7.3.3, forcing an index scan by setting
enable_seqscan=false, the query took 55 minutes to run. The index is
about 660M in size, and the table is 1G. As I mentioned before, with
table scans enabled, it bombs, running out of temporary space.
Man, I should wait
to
go through the table once and don't have to hit the index blocks.
It is still odd that you didn't get a big speed up for just the min though.
I found I'm suffering from an effect detailed in a previous thread titled
Does correlation mislead the optimizer on large tables?
Ken