Tom Lane <[EMAIL PROTECTED]> writes:
> > I still don't really know why it failed, but after two days building the
> > index I gave up.
>
> Sounds like a bug to me. Could you put together a test case?
At the time I contacted one of the GiST authors and we went over things for a
while. They diag
Litao Wu <[EMAIL PROTECTED]> writes:
> How about changing:
> CURRENT_TIMESTAMP - INTERVAL '10 minutes'
> to
> 'now'::timestamptz - INTERVAL '10 minutes'
> It seems to me that Postgres will treat it as
> a constant.
Yeah, that works too, though again it might burn you if used inside a
function or prepared statement. What you're doing here is to push the
freezing of the "now" value even further upstream, namely to initial
parsing of the command.
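A minimal sketch of the parse-time freezing Tom describes, using a
hypothetical log(ts timestamptz) table:

    -- 'now' is coerced to timestamptz while the statement is parsed, so
    -- inside a prepared statement the cutoff is fixed at PREPARE time:
    PREPARE recent AS
        SELECT * FROM log WHERE ts > 'now'::timestamptz - INTERVAL '10 minutes';

    EXECUTE recent;    -- fine immediately after preparing
    EXECUTE recent;    -- an hour later: still filters against the old cutoff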
Hi,
How about changing:
CURRENT_TIMESTAMP - INTERVAL '10 minutes'
to
'now'::timestamptz - INTERVAL '10 minutes'
It seems to me that Postgres will treat it as
a constant.
Thanks,
Greg Stark <[EMAIL PROTECTED]> writes:
> I would be curious to know how it goes. My own project uses
> denormalized sets stored as arrays as well, though in my case they're
> precalculated from the fully normalized data. I tried to use GiST
> indexes but ran into problems combining the btree-GiST c
"Ross J. Reedstrom" <[EMAIL PROTECTED]> writes:
> These groups _really are_ ideal for Joe Conway's work on arrays: we need
> ordered vectors, so we'd be sorting all the time, otherwise. They're
> static, and they're read only. The one thing they're not is fixed, known
> size (Sorry Merlin). They w
> SELECT * from content where 42 = ANY (authors);
Postgres does have a way to do what you ask, though. It involves GiST
indexes and the operators from the contrib/intarray directory from the
Postgres source.
I have tried to use these indexes, and the performance was very good. It
can be faste
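For reference, a sketch of what that looks like, assuming the
content(authors int[]) table from this thread (on 7.4 contrib/intarray
was installed by running its SQL script; CREATE EXTENSION is the modern
spelling):

    CREATE EXTENSION intarray;

    CREATE INDEX content_authors_idx
        ON content USING gist (authors gist__int_ops);

    -- "42 = ANY (authors)" cannot use this index; the indexable form is
    -- intarray's contains operator (spelled @ in the 7.4-era contrib):
    SELECT * FROM content WHERE authors @> ARRAY[42];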
On Mon, Jul 26, 2004 at 02:27:20AM -0400, Greg Stark wrote:
>
> "Ross J. Reedstrom" <[EMAIL PROTECTED]> writes:
>
> > In the new schema, the same thing is:
> >
> > SELECT * from content where 42 = ANY (authors);
> >
> > Works fine, but for the life of me I can't find nor figure out how to
> > b
THAT WAS IT!!
Thank you very much.
Is there a way to change the type of "CURRENT_TIMESTAMP" to "timestamp
without time zone" so that casting isn't needed?
BTW, isn't this a bug?
-- Harmon
Stephan Szabo wrote:
On Mon, 26 Jul 2004, Harmon S. Nine wrote:
However, we can't get the planner to do a timestamp-based index scan.
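CURRENT_TIMESTAMP is defined by the SQL standard to carry a time zone,
so its type can't be changed, but there are standard spellings that
already yield timestamp without time zone; a sketch with the names from
this thread:

    SELECT LOCALTIMESTAMP;                -- current time, already "without time zone"
    SELECT CURRENT_TIMESTAMP::timestamp;  -- same instant, via an explicit cast

    SELECT * FROM "eventtable"
    WHERE "timestamp" > LOCALTIMESTAMP - INTERVAL '10 minutes';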
On Mon, 2004-07-26 at 08:20, Gaetano Mendola wrote:
> Hervé Piedvache wrote:
SNIP
> > sort_mem = 512000
>
> This is too much; you are instructing Postgres to use 512MB
> for each backend (and at times each backend can use this amount
> more than once).
Agreed. If any one process needs this mu
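For scale: sort_mem is measured in kilobytes and the memory can be
claimed once per sort operation per backend, so a sketch of a far more
conservative per-session setting (the parameter was renamed work_mem in
8.0) would be:

    SET sort_mem = 8192;   -- 8 MB per sort, versus ~512 MB with sort_mem = 512000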
"Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> VACUUM FULL ANALYZE every 3 hours seems a little severe.
If rows are only deleted once a day, that's a complete waste of time,
indeed.
I'd suggest running a plain VACUUM just after the deletion pass is done.
ANALYZEs are a different matter and p
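As a nightly job, that suggestion amounts to something like the
following sketch (the purge predicate is made up; only the ordering of
the steps is the point):

    DELETE FROM "eventtable"
        WHERE "timestamp" < LOCALTIMESTAMP - INTERVAL '30 days';

    VACUUM "eventtable";    -- plain VACUUM: marks the dead rows' space reusable
    ANALYZE "eventtable";   -- statistics refresh, handled separately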
On Mon, 26 Jul 2004, Harmon S. Nine wrote:
> However, we can't get the planner to do a timestamp-based index scan.
>
> Anyone know what to do?
I'd wonder if the type conversion is causing you problems.
CURRENT_TIMESTAMP - INTERVAL '10 minutes' is a timestamp with time zone
while the column is timestamp without time zone.
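The fix Harmon confirmed upthread ("THAT WAS IT!!") was presumably along
these lines: cast the expression to the column's type so the comparison
stays within timestamp without time zone and the index qualifies:

    SELECT * FROM "eventtable"
    WHERE "timestamp" > (CURRENT_TIMESTAMP - INTERVAL '10 minutes')::timestamp;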
We were getting a little desperate, so we engaged in overkill to rule
out lack-of-analyze as a cause for the slow queries.
Thanks for your advice :)
-- Harmon
Matthew T. O'Connor wrote:
VACUUM FULL ANALYZE every 3 hours seems a little severe. You will
probably be served just as well by VACUUM ANALYZE.
Thank you for your response :)
This improves the row estimation, but it is still using a sequential scan.
It really seems like the query would go faster if an index scan was
used, given the number of rows fetched (both estimated and actual) is
significantly less than the number of rows in the table.
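A standard way to test that hunch is to make sequential scans look
prohibitively expensive for one session; if EXPLAIN ANALYZE still picks
a sequential scan, the index cannot be used for this query at all (as
the type mismatch found later in the thread would explain):

    SET enable_seqscan = off;
    EXPLAIN ANALYZE
        SELECT * FROM "eventtable"
        WHERE "timestamp" > CURRENT_TIMESTAMP - INTERVAL '10 minutes';
    SET enable_seqscan = on;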
VACUUM FULL ANALYZE every 3 hours seems a little severe. You will
probably be served just as well by VACUUM ANALYZE. But you probably
don't need the VACUUM part most of the time. You might try doing an
ANALYZE on the specific tables you are having issues with. Since
ANALYZE should be mu
Harmon S. Nine wrote:
monitor=# explain analyze select * from "eventtable" where timestamp >
CURRENT_TIMESTAMP - INTERVAL '10 minutes';
QUERY PLAN
Try
SELECT * FROM eventtable where timestamp BETWEEN (CURRENT_TIMESTAMP -
INTERVAL '10 minutes') AND CURRENT_TIMESTAMP;
Ross wrote:
> Hi all -
> I've got a schema I'm working on modifying, and I need some help
> getting the best performance out. The original schema has a
> many-to-many linkage between a couple of tables, using a two-column
> linkage table. This is used to represent groups of people and their
> relationships
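For readers joining here, the before and after under discussion look
roughly like this sketch (all names hypothetical):

    -- Original: many-to-many via a two-column linkage table
    CREATE TABLE content_people (
        content_id integer NOT NULL,
        person_id  integer NOT NULL,
        PRIMARY KEY (content_id, person_id)
    );

    -- New schema: the group is denormalized into an integer array
    ALTER TABLE content ADD COLUMN authors integer[];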
Hello --
To increase query (i.e. select) performance, we're trying to get
postgres to use an index based on a timestamp column in a given table.
Event-based data is put into this table several times a minute, with the
timestamp indicating when a particular row was placed in the table.
The table
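The index being discussed is an ordinary btree; a minimal sketch with
the names used in this thread:

    CREATE INDEX eventtable_timestamp_idx ON "eventtable" ("timestamp");
    ANALYZE "eventtable";   -- give the planner fresh statistics to work with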
Hervé Piedvache wrote:
Josh,
On Tuesday 13 July 2004 19:10, Josh Berkus wrote:
What can I do to get better results? (configuration options and/or a
hardware upgrade?)
What more information can I give you to help me?
1) What PostgreSQL version are you using?
v7.4.3
2) What's
Josh Berkus wrote:
Herve'
I forgot to ask about your hardware. How much RAM, and what's your disk
setup? CPU?
sort_mem = 512000
Huh? Sort_mem is in K. The above says that you've allocated 512MB sort
mem. Is this process the *only* thing going on on the machine?
And also is not system
Brian Hirt wrote:
I've been using the adaptec ZCR raid cards in our servers for a while
now, mostly small systems with 3 or 6 disks, and we've been very happy
with them. However, we're building a new DB machine with 14 U320 15K
SCA drives, and we've run into a performance bottleneck with the