On Mon, Oct 16, 2006 at 05:56:54PM -0400, Carlo Stonebanks wrote:
> >I think there's 2 things that would help this case. First, partition on
> > country. You can either do this on a table level or on an index level
> > by putting where clauses on the indexes (index method would be the
> > fastest one to test, since it's just new indexes).
> you have a two-part key on facility(country code, postal code),
> right?
Well, I'm glad you pointed it out, because I THOUGHT I had created it, but
apparently I haven't -- I only noticed that it was missing after I listed
all the other indexes. Looks like this query is one of the victims.
Sorry, I didn't point it out because an earlier post included the query with
documentation - that post got lost... or at least *I* can't see it.
The other half of the union renders the facilities that DO have addresses,
and because of the performance problem (which I have finally sorted out by
c
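For reference, the missing two-part key would look something like this (a
minimal sketch: mdx_core is the schema named elsewhere in the thread, but the
default_country_code and default_postal_code column names are assumptions
inferred from the default_* columns in the query, not confirmed schema):

-- hypothetical column names; adjust to the real facility columns
CREATE INDEX facility_country_postal_idx
  ON mdx_core.facility (default_country_code, default_postal_code);
ANALYZE mdx_core.facility;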
On Monday 16 October 2006 16:37, Carlo Stonebanks wrote:
> The facility_address_id is null statement is necessary, as this is a
> sub-query from a union clause and I want to optimise the query with
> the original logic intact. The value is not hard coded to true but
> rather to null.
Heh, you ne
>I think there's 2 things that would help this case. First, partition on
> country. You can either do this on a table level or on an index level
> by putting where clauses on the indexes (index method would be the
> fastest one to test, since it's just new indexes). That should shrink
> the size of that index.
> what is the facility_address_id is null all about? remove it since you
> hardcode it to true in select.
The facility_address_id is null statement is necessary, as this is a
sub-query from a union clause and I want to optimise the query with the
original logic intact. The value is not hard coded to true but rather to null.
On 10/15/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
/* this half of the union renders facilities that have no facility_address
row; the other half renders those that do, joined to the address table
that contains full address data */
select
f.facility_id,
null as facility_address_id,
null as address_id,
f.facility_type_code,
f.name,
null as address,
f.default_city as city,
f.default_state_code as state_code
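Pieced together from the fragments quoted in this thread, the whole query
appears to have roughly this shape (a hedged reconstruction: the left join in
the first half follows the "facility_address_id is null" discussion above, and
the second half's address table and its column names are assumptions):

select
  f.facility_id,
  null as facility_address_id,
  null as address_id,
  f.facility_type_code,
  f.name,
  null as address,
  f.default_city as city,
  f.default_state_code as state_code
from mdx_core.facility as f
left outer join mdx_core.facility_address as fa
  on fa.facility_id = f.facility_id
where fa.facility_address_id is null  -- only facilities with no address row
union
select
  f.facility_id,
  fa.facility_address_id,
  a.address_id,
  f.facility_type_code,
  f.name,
  a.address,
  a.city,
  a.state_code
from mdx_core.facility as f
join mdx_core.facility_address as fa
  on fa.facility_id = f.facility_id
join mdx_core.address as a
  on a.address_id = fa.address_id;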
I think there's 2 things that would help this case. First, partition on
country. You can either do this on a table level or on an index level
by putting where clauses on the indexes (index method would be the
fastest one to test, since it's just new indexes). That should shrink
the size of that index.
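The index-level version of that idea is a partial index per country, along
these lines (a sketch; the column names and the 'US' value are assumptions):

-- one small partial index per country; a query must repeat the WHERE
-- condition for the planner to consider using it
CREATE INDEX facility_us_postal_idx
  ON mdx_core.facility (default_postal_code)
  WHERE default_country_code = 'US';

Each such index covers only one country's rows, so it stays far smaller than a
single index over the whole table.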
> Can you try temporarily disabling bitmap scans and see what comes up?
Well, that's slowing everything down. I've got a few results, below:
1) Bitmap scan off, but seq scan enabled.
2) Bitmap scan and seq scan off.
3) Bitmap scan back on, seq scan back on, and a new index created.
4) VACUUM VERBOSE
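For anyone reproducing these scenarios, the toggles are ordinary per-session
planner settings, nothing schema-specific:

SET enable_bitmapscan = off;  -- scenario 1: bitmap scans off, seq scans allowed
SET enable_seqscan = off;     -- scenario 2: both off
-- re-run the problem query under EXPLAIN ANALYZE after each change
RESET enable_bitmapscan;      -- scenario 3: back to defaults
RESET enable_seqscan;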
On 10/15/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
Hi Merlin,
Well, I'm back. First of all, thanks for your dogged determination to help
me out - it is much appreciated. I owe you a beer or twelve.
The import has been running for a week. The import program got faster as I
tuned things. I capture the dynamic SQL statements generated by the app,
Hey Tom, thanks for jumping in. Nothing on TV on a Sunday afternoon? ;-)
Appreciate the input.
Here is vacuum verbose output for both the tables in question.
Carlo
INFO: vacuuming "mdx_core.facility"
INFO: index "facility_pkey" now contains 832399 row versions in 3179 pages
DETAIL: 0 index
"Carlo Stonebanks" <[EMAIL PROTECTED]> writes:
> Curiously, it's using index scans, and it really looks like a simple query
> to me. I am completely baffled. The two tables in question have about 800K
> rows each - not exactly an incredible number. The EXPLAIN is simple, but the
> performance is
Hi Merlin,
Well, I'm back. First of all, thanks for your dogged determination to help
me out - it is much appreciated. I owe you a beer or twelve.
The import has been running for a week. The import program got faster as I
tuned things. I capture the dynamic SQL statements generated by the app,
On 10/8/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Thu, Oct 05, 2006 at 09:30:45AM -0400, Merlin Moncure wrote:
> I personally only use explicit joins when doing outer joins and even
> them push them out as far as possible.
I used to be like that too, until I actually started using join syntax.
On 10/6/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
> how did you determine that it is done every 500 rows? this is the
The import program pages the import table - it is currently set at 500 rows
per page. With each page, I run an ANALYZE.
Right, I just wanted to make sure of something (yo
On Fri, Oct 06, 2006 at 02:53:35PM -0400, Merlin Moncure wrote:
> On 10/6/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> >On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:
> >> This didn't work right away, but DID work after running a VACUUM FULL. In
> >> other words, I was still stuck with a sequential scan until after the vacuum.
On Thu, Oct 05, 2006 at 09:30:45AM -0400, Merlin Moncure wrote:
> I personally only use explicit joins when doing outer joins and even
> them push them out as far as possible.
I used to be like that too, until I actually started using join syntax.
I now find it's *way* easier to identify what the
> how did you determine that it is done every 500 rows? this is the
The import program pages the import table - it is currently set at 500 rows
per page. With each page, I run an ANALYZE.
> default autovacuum parameter. if you followed my earlier
> recommendations, you are aware that autovacuum
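Concretely, the per-page refresh described above amounts to something like
this after every 500-row batch (a sketch; mdx_core.facility appears elsewhere
in the thread, and facility_address is an assumption about what the import
writes):

-- refresh planner statistics on the tables the batch just touched
ANALYZE mdx_core.facility;
ANALYZE mdx_core.facility_address;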
On 10/6/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:
> This didn't work right away, but DID work after running a VACUUM FULL. In
> other words, I was still stuck with a sequential scan until after the
> vacuum.
>
> I turned autovacuum off in order to help with the import, but was performing
On Fri, 2006-10-06 at 11:44, Carlo Stonebanks wrote:
> This didn't work right away, but DID work after running a VACUUM FULL. In
> other words, I was still stuck with a sequential scan until after the
> vacuum.
>
> I turned autovacuum off in order to help with the import, but was performing
> an ANALYZE with every 500 rows imported.
This didn't work right away, but DID work after running a VACUUM FULL. In
other words, I was still stuck with a sequential scan until after the
vacuum.
I turned autovacuum off in order to help with the import, but was performing
an ANALYZE with every 500 rows imported.
With autovacuum off for the import
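What finally unstuck the planner, as a one-line sketch: VACUUM FULL rewrites
the table and reclaims the dead row versions that ANALYZE alone never removes
(autovacuum itself is a postgresql.conf setting, not a per-session one):

-- compact the bloated table and refresh its statistics in one pass
VACUUM FULL VERBOSE ANALYZE mdx_core.facility;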
Oh, you hate explicit joins too? I started in Oracle and was dismayed to find
out what the SQL standard was. I especially miss the simplicity of (+) outer
joins.
I'll try adding the address_id index to facility_address and see what I get!
Carlo
""Merlin Moncure"" <[EMAIL PROTECTED]> wrote in me
"Carlo Stonebanks" <[EMAIL PROTECTED]> writes:
> Just to clarify: if I expect to join two tables that I expect to benefit from
> indexed scans, I should create indexes on the joined columns on BOTH sides?
Well, it all depends on the queries you plan to issue ... but for the
particular query shown
Just to clarify: if I expect to join two tables that I expect to benefit from
indexed scans, I should create indexes on the joined columns on BOTH sides?
Carlo
"Tom Lane" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> "Carlo Stonebanks" <[EMAIL PROTECTED]> writes:
>> I didn't create an index on facility_address.address_id because I expected
On 10/5/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
> do we have a multi-column index on
> facility_address(facility_id, address_id)? did you run analyze?
There is an index on facility_address on facility_id.
I didn't create an index on facility_address.address_id because I expected
joins to go in the other direction (from facility_address to address).
"Carlo Stonebanks" <[EMAIL PROTECTED]> writes:
> I didn't create an index on facility_address.address_id because I expected
> joins to go in the other direction (from facility_address to address).
Well, that's your problem right there ...
regards, tom lane
--
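Spelled out, the fix Tom is pointing at, plus the multi-column index Merlin
asked about (index names are mine):

-- let the planner probe facility_address from the address side too
CREATE INDEX facility_address_address_id_idx
  ON mdx_core.facility_address (address_id);
-- the two-column index covering both join directions
CREATE INDEX facility_address_fid_aid_idx
  ON mdx_core.facility_address (facility_id, address_id);
ANALYZE mdx_core.facility_address;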
> do we have a multi-column index on
> facility_address(facility_id, address_id)? did you run analyze?
There is an index on facility_address on facility_id.
I didn't create an index on facility_address.address_id because I expected
joins to go in the other direction (from facility_address to address).
On 10/5/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
Hi Merlin,
Here are the results. The query returned more rows (65 vs 12) because of the
vague postal_code.
Right. Interestingly, the index didn't work properly anyway.
Regardless, this is easily solvable, but it looks like we might be
lo
Hi Merlin,
Here are the results. The query returned more rows (65 vs 12) because of the
vague postal_code.
In reality, we would have to modify the postal_code logic to take advantage
of full zip codes when they were available, not unconditionally truncate
them.
Carlo
explain analyze select
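That postal_code change could look roughly like this (a sketch: '90210' stands
in for the application's input, and the three-character fallback length is an
assumption):

-- use the stored code verbatim when a full code was supplied; fall back
-- to a prefix match instead of truncating unconditionally
select f.facility_id, f.name
from mdx_core.facility f
where f.default_postal_code = '90210'
   or f.default_postal_code like substr('90210', 1, 3) || '%';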
On 10/4/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
> can you do explain analyze on the two select queries on either side of
> the union separately? the subquery is correctly written and unlikely
> to be a problem (in fact, good style imo). so let's have a look at
> both sides of facil query and see where the problem is.
> can you do explain analyze on the two select queries on either side of
> the union separately? the subquery is correctly written and unlikely
> to be a problem (in fact, good style imo). so let's have a look at
> both sides of facil query and see where the problem is.
Sorry for the delay, the se
Hi, Alex,
Alex Stapleton wrote:
>> explain analyze is more helpful because it prints the times.
>
> You can always use the \timing flag in psql ;)
Have you ever tried EXPLAIN ANALYZE?
\timing gives you one total timing, but EXPLAIN ANALYZE gives you
timings for sub-plans, including real row counts.
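The distinction in practice (the count(*) query is only a stand-in):

\timing
-- psql now prints a single wall-clock total per statement
explain analyze
select count(*) from mdx_core.facility;
-- prints the plan tree with per-node actual times and true row counts,
-- so estimates can be checked against reality node by node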
On 10/3/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
Please ignore sample 1 - now that I have the logging feature, I can see that
my query generator algorithm made an error.
can you do explain analyze on the two select queries on either side of
the union separately? the subquery is correctly written and unlikely
to be a problem (in fact, good style imo).
Please ignore sample 1 - now that I have the logging feature, I can see that
my query generator algorithm made an error.
The SQL of concern is now script 2.
> explain analyze is more helpful because it prints the times.
Sorry, this runs in-line in my code, and I didn't want to slow the
already-slow program with explain analyze. I have run it outside of the code
in its own query. The new results are below.
> sample 1, couple questions:
> what is the
On 3 Oct 2006, at 16:04, Merlin Moncure wrote:
On 10/3/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
Some very helpful people had asked that I post the troublesome
code that was
generated by my import program.
I installed a SQL log feature in my import program. I have
posted samples of the SQL statements that cause the biggest delays.
On 10/3/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote:
Some very helpful people had asked that I post the troublesome code that was
generated by my import program.
I installed a SQL log feature in my import program. I have
posted samples of the SQL statements that cause the biggest delays.
ex
Some very helpful people had asked that I post the troublesome code that was
generated by my import program.
I installed a SQL log feature in my import program. I have
posted samples of the SQL statements that cause the biggest delays.
Thanks for all of your help.
Carlo
--
Sample 1:
Thi