Thank you all for the very helpful advice. Upping work_mem made it possible
for me to generate the table within this century without bringing the server
to a near standstill. I have not yet experimented with GROUP BY, but I'll
do this next.
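[For readers hitting the same wall: the change was a work_mem bump. A sketch of how to apply it; the value shown is illustrative, not the one actually used here.]

```sql
-- Per-session setting; 256MB is an illustrative value, tune to your RAM.
SET work_mem = '256MB';
-- To make it persistent, set work_mem in postgresql.conf instead
-- and reload the server configuration.
```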
Cheers,
Kynn
Even printing it
out to a file takes forever, let alone creating an index for it.
Any words of wisdom on how to speed this up would be appreciated.
TIA!
Kynn
it took to execute?
Thanks!
Kynn
Andreas, Heikki:
Thanks!
Kynn
the number of exceptional cases is small enough to
warrant a second table? Of course, one could do a systematic profiling of
various possible scenarios, but as a first approximation what's your
rule-of-thumb?
TIA!
Kynn
On Fri, Mar 14, 2008 at 3:46 PM, Heikki Linnakangas [EMAIL PROTECTED]
wrote:
[tons of useful info snipped]
From a performance point of view, I would go with a single table with
NULL fields on PostgreSQL.
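[A minimal sketch of the single-table layout Heikki describes; the column names are hypothetical:]

```sql
-- Common attributes are NOT NULL columns; attributes that apply only
-- to the exceptional cases are nullable columns on the same table.
CREATE TABLE item (
    id        serial PRIMARY KEY,
    name      text NOT NULL,   -- common attribute
    rare_attr text,            -- NULL for the majority of rows
    rare_flag boolean          -- NULL when not applicable
);
```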
Wow. I'm so glad I asked! Thank you very much!
Kynn
On Fri, Mar 14, 2008 at 2:59 PM, Oleg Bartunov [EMAIL PROTECTED] wrote:
have you seen contrib/hstore? You can have one table with common
attributes
and hide the others in hstore
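[A sketch of the contrib/hstore approach Oleg mentions; the table, column, and key names are made up for illustration:]

```sql
-- hstore ships in contrib; on PostgreSQL 9.1+ install it with:
-- CREATE EXTENSION hstore;
CREATE TABLE item (
    id    serial PRIMARY KEY,
    name  text NOT NULL,  -- a common attribute stays a real column
    attrs hstore          -- rare attributes hide in here
);

INSERT INTO item (name, attrs)
VALUES ('widget', 'color=>red, weight=>3'::hstore);

-- Fetch one hidden attribute with the -> operator:
SELECT name, attrs -> 'color' AS color FROM item;
```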
That's interesting. I'll check it out. Thanks!
Kynn
down.
That's a very helpful reminder. Thanks.
Kynn
On Mon, Feb 25, 2008 at 11:56 AM, Matthew [EMAIL PROTECTED] wrote:
On Mon, 25 Feb 2008, Kynn Jones wrote:
This is just GREAT!!! It fits the problem to a tee.
It makes the queries quick then?
It is good that you ask. Clearly you know the story: a brilliant-sounding
optimization
On Mon, Feb 25, 2008 at 8:45 AM, Matthew [EMAIL PROTECTED] wrote:
On Fri, 22 Feb 2008, Kynn Jones wrote:
Hi. I'm trying to optimize...
(Q1) SELECT a1.word, a2.word
FROM T a1 JOIN T a2 USING ( zipk )
WHERE a1.type = int1
AND a2.type = int2;
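[Editorial aside: Matthew's actual suggestion is truncated below. One conventional way to speed up a self-join of this shape, offered here only as an assumption, is a composite index covering both the filter and the join column:]

```sql
-- Hypothetical: an index on (type, zipk) lets the planner locate the
-- rows for each type and join them on zipk without full scans of T.
CREATE INDEX t_type_zipk_idx ON T (type, zipk);
```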
Okay, try
On Fri, Feb 22, 2008 at 8:48 PM, Dean Gibson (DB Administrator)
[EMAIL PROTECTED] wrote:
On 2008-02-22 12:49, Kynn Jones wrote:
Of course, I expect that using views Vint1 and Vint2... would
result in a loss in performance relative to a version that used bona
fide tables Tint1 and Tint2
before I can post the EXPLAIN results.)
kynn
, and many thanks in advance
for your comments!
Kynn
P.S. Here is the actual form of the queries. They now include an initial
join with table S, and the join with Tint2 (or Vint2) is a left outer
join. Interestingly, even though the queries below that use views (i.e.
Q1*** and Q2
Consider these two very similar schemas:
Schema 1:
CREATE TABLE foo (
id serial PRIMARY KEY,
frobnitz character varying(100) NOT NULL UNIQUE
);
CREATE TABLE bar (
id serial PRIMARY KEY,
foo_id int REFERENCES foo(id)
);
Schema 2:
CREATE TABLE foo (
frobnitz character varying(100)
I want to optimize this simple join:
SELECT * FROM huge_table h, tiny_table t WHERE UPPER( h.id ) = UPPER( t.id )
huge_table has about 2.5 million records, can be assumed as fixed, and
has the following index:
CREATE INDEX huge_table_index ON huge_table( UPPER( id ) );
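[For what it's worth, a quick way to check whether the planner actually matches the expression index; this is illustrative, using standard PostgreSQL tooling:]

```sql
-- EXPLAIN shows whether UPPER(h.id) is resolved via huge_table_index
-- or whether the planner falls back to a sequential scan.
EXPLAIN
SELECT * FROM huge_table h, tiny_table t
WHERE UPPER(h.id) = UPPER(t.id);
```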
...while tiny_table
On 5/24/06, Tom Lane [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] writes:
Limit (cost=19676.75..21327.99 rows=6000 width=84)
  -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)
       Hash Cond: (upper((outer.id)::text) = upper((inner.id)::text))
       -> Seq Scan on huge_table h (cost=0.00..51292.43
On 5/24/06, Tom Lane [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] writes:
Limit (cost=19676.75..21327.99 rows=6000 width=84)
  -> Hash Join (cost=19676.75..1062244.81 rows=3788315 width=84)
       Hash Cond: (upper((outer.id)::text) = upper((inner.id)::text))
       -> Seq Scan on