Hello Peter,
If you are dealing with timed data or similar, you may consider
partitioning your table(s).
In order to deal with large data, I've built a logical partitioning
system, where the target partition is determined by the date of my data
(the date is part of the filenames that I import...).
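As a sketch of what such date-driven partitioning could look like with the table inheritance PostgreSQL offered at the time (all table and column names here are invented for illustration):

```sql
-- Parent table; children hold one month of data each.
CREATE TABLE measurements (
    logdate  date NOT NULL,
    payload  text
);

-- One child per month, constrained to its date range.
CREATE TABLE measurements_2007_03 (
    CHECK (logdate >= DATE '2007-03-01' AND logdate < DATE '2007-04-01')
) INHERITS (measurements);

-- With constraint_exclusion enabled, queries that filter on logdate
-- skip the children whose CHECK constraints cannot match.
SET constraint_exclusion = on;
SELECT * FROM measurements WHERE logdate = DATE '2007-03-30';
```

The importing script would then route each file into the child table matching the date parsed from its filename.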
You may try to change the planner's opinion using subqueries. Something
like:
select * from
eventactivity,
(select * from
keyword_incidents,
eventmain,
eventgeo
where
Hi all,
take a look at those plans:
test=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE pvcp in
(select id from l_pvcp where value ilike '%pi%');
QUERY PLAN
------------------------------------------------------------
Hi all,
take a look at those plans:
Try changing random_page_cost from the default 4 to 2 in postgresql.conf:
random_page_cost = 2
The default in PostgreSQL is somewhat conservative. This setting
tells PostgreSQL how fast your disks are: the lower the value, the
faster they are assumed to be.
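To see whether the setting actually changes the plan before editing postgresql.conf, it can be tried per session, using the query from this thread:

```sql
-- Experiment at session level first; only persist the value in
-- postgresql.conf once it demonstrably improves the plan.
SET random_page_cost = 2;
EXPLAIN ANALYZE
SELECT COUNT(id) FROM t_oa_2_00_card
WHERE pvcp IN (SELECT id FROM l_pvcp WHERE value ILIKE '%pi%');
```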
Could this setting be
Gaetano Mendola wrote:
Hi all, take a look at those plans:
test=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE
pvcp in (select id from l_pvcp where value ilike '%pi%');
Hash IN Join  (cost=2.22..153835.49 rows=177404 width=8)
Claus Guttesen wrote:
Hi all,
take a look at those plans:
Try changing random_page_cost from the default 4 to 2 in postgresql.conf:
random_page_cost = 2
The default in PostgreSQL is somewhat conservative. This setting
indicates for
Richard Huxton wrote:
Gaetano Mendola wrote:
Hi all, take a look at those plans:
test=# explain analyze SELECT COUNT(id) FROM t_oa_2_00_card WHERE
pvcp in (select id from l_pvcp where value ilike
Gaetano Mendola wrote:
Richard Huxton wrote:
Now, why 19 rows from the subquery should produce such a large estimate
in the outer query I'm not sure. Any strange distribution of values on
pvcp?
I don't know what you mean by strange; this is the distribution:
test=# select count(*) from
I don't know about Postgres, but in Oracle it could be better to write:
SELECT COUNT(distinct c.id)
FROM t_oa_2_00_card c,l_pvcp l
WHERE l.value ilike '%pi%' and c.pvcp=l.id;
or
SELECT COUNT(c.id)
FROM t_oa_2_00_card c,
(select distinct id from l_pvcp where value ilike '%pi%') l
WHERE
On Fri, Mar 30, 2007 at 12:08:26PM +0200, Gaetano Mendola wrote:
Claus Guttesen wrote:
Try changing random_page_cost from the default 4 to 2 in postgresql.conf:
random_page_cost = 2
I have already tuned that number to 2.5; lowering it to 2 doesn't change
the plan.
The following
On Fri, Mar 30, 2007 at 04:46:11AM -0600, Michael Fuhr wrote:
Have you tried increasing the statistics target on l_pvcp.value?
I ran your queries against canned data in 8.2.3 and better statistics
resulted in more accurate row count estimates for this and other
parts of the plan. I don't
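Raising the per-column statistics target as suggested might look like this (the target value 1000 is an arbitrary choice for illustration):

```sql
-- Collect a larger sample for the misestimated column, then
-- re-analyze so the planner sees the new statistics.
ALTER TABLE l_pvcp ALTER COLUMN value SET STATISTICS 1000;
ANALYZE l_pvcp;
```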
Michael Fuhr wrote:
On Fri, Mar 30, 2007 at 12:08:26PM +0200, Gaetano Mendola wrote:
Claus Guttesen wrote:
Try changing random_page_cost from the default 4 to 2 in postgresql.conf:
random_page_cost = 2
I have tuned that number already at 2.5,
Michael Fuhr wrote:
On Fri, Mar 30, 2007 at 04:46:11AM -0600, Michael Fuhr wrote:
Have you tried increasing the statistics target on l_pvcp.value?
I ran your queries against canned data in 8.2.3 and better statistics
resulted in more accurate row
Gaetano Mendola wrote:
The 19 matches for '%pi%' are an estimate; the real matches are:
test=# select id from l_pvcp where value ilike '%pi%';
 id
----
 62
(1 row)
test=# select id from l_pvcp where value ilike 'pi';
 id
----
 62
(1 row)
so one row in both cases; that's why I expect for
You are right that the page-size constraint is lifted, in that
directio cuts the VM filesystem cache out of the path. However, the
Solaris kernel still issues I/O ops in terms of its logical block size
(which we have at the default 8K). It can issue I/O ops for fragments
as small as 1/8th of the block
Gaetano Mendola wrote:
Michael Fuhr wrote:
On Fri, Mar 30, 2007 at 04:46:11AM -0600, Michael Fuhr wrote:
Have you tried increasing the statistics target on l_pvcp.value?
I ran your queries against canned data in 8.2.3 and better statistics
resulted in more accurate row count estimates for this
On Mar 30, 2007, at 8:14 AM, Dimitri wrote:
You are right that the page-size constraint is lifted, in that
directio cuts the VM filesystem cache out of the path. However, the
Solaris kernel still issues I/O ops in terms of its logical block size
(which we have at the default 8K). It can issue I/O ops
The problem is that while your goal is to commit as fast as possible,
it's a pity to waste I/O speed just to keep a common block size...
Say your transaction's modifications fit into 512K: you'll be able to
write far more data per second as 512K blocks than as 8K blocks
(for the same
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Vincenzo Romano
Is there any workaround?
In my opinion, the later the query planner's decisions are taken, the
more effective they can be.
Delaying any query-planner decision could be an option for the
function (body).
Richard Huxton dev@archonet.com writes:
Ah, but it's got no way of knowing what matches you'll get for
'%anything%'. There's no easy way to get statistics for matching substrings.
8.2 actually tries the match on the most-common-values list, if said
list is big enough (I think the threshold is
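That most-common-values list can be inspected directly; a sketch against the column from this thread:

```sql
-- The planner's MCV list and frequencies for l_pvcp.value live in
-- pg_stats; a longer list gives LIKE/ILIKE estimation more to work with.
SELECT most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'l_pvcp' AND attname = 'value';
```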
On Mar 30, 2007, at 10:05 AM, Kenneth Marshall wrote:
On Fri, Mar 30, 2007 at 04:25:16PM +0200, Dimitri wrote:
The problem is that while your goal is to commit as fast as possible,
it's a pity to waste I/O speed just to keep a common block size...
Say your transaction's modifications
Hi all,
When I run multiple TPC-H queries (DBT3) on PostgreSQL, I found that
the system does not scale. My machine has 8GB of memory and 4 dual-core
Xeon processors (8 cores in total). The OS kernel is Linux 2.6.9;
PostgreSQL is 7.3.18. I run multiple
q2 queries simultaneously. The results are:
1
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Xiaoning Ding
Hi all,
When I run multiple TPC-H queries (DBT3) on PostgreSQL, I
found that the system does not scale. My machine has 8GB of memory
and 4 dual-core Xeon processors (8 cores in total). The OS kernel
is Linux
Xiaoning Ding [EMAIL PROTECTED] writes:
When I run multiple TPC-H queries (DBT3) on PostgreSQL, I found that
the system does not scale. My machine has 8GB of memory and 4 dual-core
Xeon processors (8 cores in total). The OS kernel is Linux 2.6.9;
PostgreSQL is 7.3.18.
If you are not running PG 8.1 or
Thanks guys,
I updated PG to 8.2.3. The results are much better now:
1 process: 0.94 seconds
2 processes: 1.32 seconds
4 processes: 2.03 seconds
8 processes: 2.54 seconds
Do you think they are good enough?
BTW, where can I find some info on what 8.2.3 did to improve
scalability compared with
On Fri, 2007-03-30 at 15:25, Xiaoning Ding wrote:
Hi all,
When I run multiple TPC-H queries (DBT3) on PostgreSQL, I found that
the system does not scale. My machine has 8GB of memory and 4 dual-core
Xeon processors (8 cores in total). The OS kernel is Linux 2.6.9;
PostgreSQL is 7.3.18. I
run
On Fri, 2007-03-30 at 16:38, Xiaoning Ding wrote:
Thanks guys,
I updated PG to 8.2.3. The results are much better now:
1 process: 0.94 seconds
2 processes: 1.32 seconds
4 processes: 2.03 seconds
8 processes: 2.54 seconds
Do you think they are good enough?
BTW, where can I find some
Erik,
You're welcome! However, I believe our situation is very different
from what you're testing, if I understand you correctly. Are you
saying that your entire database will fit in memory? If so, then
these are very different situations, as there is no way ours could
ever do that. In
Quoth [EMAIL PROTECTED] (Xiaoning Ding):
When I run multiple TPC-H queries (DBT3) on PostgreSQL, I found that
the system does not scale. My machine has 8GB of memory and 4 dual-core
Xeon processors (8 cores in total). The OS kernel is Linux 2.6.9;
PostgreSQL is 7.3.18. I
I think it might be caused by
On 30.03.2007, at 19:18, Christopher Browne wrote:
2. There are known issues with the combination of Xeon processors and
PAE memory addressing; that sort of hardware tends to be *way* less
speedy than the specs would suggest.
That is not true, as the current series of processors (Woodcrest