Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> First, it's not a particular problem with pgpool. As far as I know any
> connection pool solution has exactly the same problem. Second, it's
> easy to fix if PostgreSQL provides functionality such as: "drop all
> temporary tables if any".
I don't like th
I don't really think it would be a useful plan anyway. What *would* be
useful is to support HashAggregate as an implementation alternative for
DISTINCT --- currently I believe we only consider that for GROUP BY.
The DISTINCT planning code is fairly old and crufty and hasn't been
redesigned lately
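The GROUP BY workaround discussed in this thread can be sketched as follows. This is illustrative only: SQLite (via Python's sqlite3 module) stands in for PostgreSQL, and the table and column names are made up; the point is just that the two query forms return the same rows, while PostgreSQL circa 8.0 considered a hash-based plan only for the GROUP BY form.

```python
import sqlite3

# SELECT DISTINCT col and SELECT col ... GROUP BY col are equivalent in
# result; the thread's point is that only the latter could use HashAggregate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(v,) for v in [3, 1, 2, 3, 1]])

distinct_rows = conn.execute("SELECT DISTINCT col FROM t ORDER BY col").fetchall()
group_by_rows = conn.execute("SELECT col FROM t GROUP BY col ORDER BY col").fetchall()

assert distinct_rows == group_by_rows == [(1,), (2,), (3,)]
```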
Consider how the fact table is going to be used, and review hacking it up
based on usage. Fact tables should be fairly narrow, so if there are extra
columns beyond keys and dimension keys consider breaking it into parallel
tables (vertical partitioning).
Horizontal partitioning is your friend; esp
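The vertical partitioning suggested above can be sketched like this (SQLite via Python as a stand-in; all table and column names are hypothetical): the wide fact table is split into a narrow core of keys plus measures, and a parallel table holding rarely-read extra columns, joined on the same primary key only when needed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- narrow core: dimension keys and the measure, scanned by most queries
    CREATE TABLE sales_fact (
        sale_id     INTEGER PRIMARY KEY,
        date_key    INTEGER,
        product_key INTEGER,
        amount      REAL
    );
    -- parallel table: wide, infrequently-read attributes
    CREATE TABLE sales_fact_extra (
        sale_id  INTEGER PRIMARY KEY REFERENCES sales_fact(sale_id),
        note     TEXT,
        raw_line TEXT
    );
""")
conn.execute("INSERT INTO sales_fact VALUES (1, 20041007, 42, 9.99)")
conn.execute("INSERT INTO sales_fact_extra VALUES (1, 'promo', 'raw...')")

# Most queries touch only the narrow table; the join cost is paid only
# when the extra columns are actually wanted.
row = conn.execute("""
    SELECT f.amount, e.note
    FROM sales_fact f JOIN sales_fact_extra e ON e.sale_id = f.sale_id
""").fetchone()
assert row == (9.99, 'promo')
```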
Pierre-Frédéric Caillaud <[EMAIL PROTECTED]> writes:
> Present state is that DISTINCT and UNION are slow with or without using
> the GROUP BY trick. Including the index skip scan in the planning options
> would only happen when appropriate cases are detected. This detect
Alan Stange wrote:
Here's a few numbers from the Opteron 250. If I get some time I'll
post a more comprehensive comparison including some other systems.
The system is a Sun v20z. Dual Opteron 250, 2.4 GHz, Linux 2.6, 8 GB
memory. I did a compile and install of pg 8.0 beta 3. I created a
dat
Pierre-Frédéric Caillaud <[EMAIL PROTECTED]> writes:
> I see this as a minor annoyance only because I can write GROUP BY
> instead of DISTINCT and get the speed boost. It probably annoys people
> trying to port applications to postgres though, forcing them to rewrite
> their queries.
Yeah,
At 13.30 07/10/2004, Aaron Werman wrote:
Consider how the fact table is going to be used, and review hacking it up
based on usage. Fact tables should be fairly narrow, so if there are extra
columns beyond keys and dimension keys consider breaking it into parallel
tables (vertical partitioning).
Hmm
On Thursday, 7 October 2004 at 14:01, Pierre-Frédéric Caillaud wrote:
> Side Note :
>
> What do you think about the idea of an "UniqueSort" which would do
> sort+unique in one pass ?
This is what Oracle does and it is quite fast with it...
--
Ole Langbehn
freiheit.com technologies gmbh
Theodo
Ole Langbehn <[EMAIL PROTECTED]> writes:
>> What do you think about the idea of an "UniqueSort" which would do
>> sort+unique in one pass ?
> This is what Oracle does and it is quite fast with it...
Hashing is at least as fast, if not faster.
regards, tom lane
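The two DISTINCT strategies being compared here can be sketched in Python. The "UniqueSort" idea is a sort followed by a single pass that emits each value the first time it differs from its predecessor; the hash alternative collects values into a hash table and needs no sort at all, which is why it can be at least as fast.

```python
def unique_sort(values):
    """Sort+unique in one pass over the sorted data (the 'UniqueSort' idea)."""
    out, prev = [], object()          # sentinel: compares unequal to anything
    for v in sorted(values):          # O(n log n) sort, then one linear pass
        if v != prev:
            out.append(v)
            prev = v
    return out

def unique_hash(values):
    """Hash-based dedup: O(n) expected, no sort; output order is arbitrary."""
    return list(set(values))

data = [3, 1, 2, 3, 1, 2, 2]
assert unique_sort(data) == [1, 2, 3]
assert sorted(unique_hash(data)) == [1, 2, 3]
```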
On Thu, Oct 07, 2004 at 11:48:41AM -0400, Bill Montgomery wrote:
> Alan Stange wrote:
>
> The same test on a Dell PowerEdge 1750, Dual Xeon 3.2 GHz, 512k cache,
> HT on, Linux 2.4.21-20.ELsmp (RHEL 3), 4GB memory, pg 7.4.5:
>
> Far less performance than the Dual Opterons with a low number of
>
Gabriele,
> That's another interesting argument. Again, I had in mind the space
> efficiency principle and I decided to use null IDs for dimension tables if
> I don't have the information. I noticed though that in those cases I can't
> use any index and performance turns out very poor.
For one thin
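The trouble with NULL dimension keys described above can be demonstrated concretely (SQLite via Python as a stand-in; names are hypothetical): an equality join, and therefore an index lookup, never matches NULL, so fact rows with NULL keys silently drop out of joins unless the query is rewritten. A common alternative is a dedicated "unknown" dimension row with a real surrogate key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim (key INTEGER PRIMARY KEY, label TEXT);
    CREATE TABLE fact (dim_key INTEGER, amount REAL);
    INSERT INTO dim VALUES (1, 'known'), (0, 'unknown');
    INSERT INTO fact VALUES (1, 10.0), (NULL, 5.0);
""")

# Equality never matches NULL: the NULL-keyed fact row vanishes from the join.
joined = conn.execute(
    "SELECT d.label, f.amount FROM fact f JOIN dim d ON f.dim_key = d.key"
).fetchall()
assert joined == [('known', 10.0)]

# With a surrogate 'unknown' key instead of NULL, nothing is lost and an
# ordinary (indexable) equality join sees every fact row.
conn.execute("UPDATE fact SET dim_key = 0 WHERE dim_key IS NULL")
joined = conn.execute(
    "SELECT d.label, f.amount FROM fact f JOIN dim d"
    " ON f.dim_key = d.key ORDER BY f.amount"
).fetchall()
assert joined == [('unknown', 5.0), ('known', 10.0)]
```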
- Original Message -
From: "Gabriele Bartolini" <[EMAIL PROTECTED]>
To: "Aaron Werman" <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>
Sent: Thursday, October 07, 2004 1:07 PM
Subject: Re: [PERFORM] Data warehousing requirements
> At 13.30 07/10/2004, Aaron Werman wrote:
> >Consider how the fa
Josh Berkus <[EMAIL PROTECTED]> writes:
> For one thing, this is false optimization; a NULL isn't saving you any table
> size on an INT or BIGINT column. NULLs are only smaller on variable-width
> columns.
Uh ... not true. The column will not be stored, either way. Now if you
had a row that
Tom,
Well, I sit corrected. Obviously I misread that.
> It's not so much that they are necessarily inefficient as that they
> constrain the planner's freedom of action. You need to think a lot more
> carefully about the order of joining than when you use inner joins.
I've also found that OUTE
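Tom's point about outer joins constraining the planner can be shown with a small example (SQLite via Python as a stand-in): inner joins commute, so a planner may reorder them freely, while swapping the operands of an outer join changes which side's unmatched rows are preserved, so the written order pins down part of the plan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (2), (3);
""")

def rows(sql):
    # Compare as sets: row order is irrelevant to join semantics.
    return set(conn.execute(sql).fetchall())

# Inner join: swapping the operands never changes the result set.
assert rows("SELECT a.id FROM a JOIN b ON a.id = b.id") == \
       rows("SELECT a.id FROM b JOIN a ON a.id = b.id")

# Outer join: swapping the operands changes which rows survive unmatched.
left_ab = rows("SELECT a.id, b.id FROM a LEFT JOIN b ON a.id = b.id")
left_ba = rows("SELECT a.id, b.id FROM b LEFT JOIN a ON a.id = b.id")
assert left_ab == {(1, None), (2, 2)}
assert left_ba == {(2, 2), (None, 3)}
```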