Josh,
Thanks again for the feedback.
Well, the list of ones which are good is shorter: pretty much LSI and 3Ware
(for SATA). You can suffer with Adaptec if you have to.
Good. We don't plan on using IDE, but I've pondered Firewire.
If we went with a single CPU, like an Athlon 64/Opteron, would
On Jan 26, 2005, at 10:27, Van Ingen, Lane wrote:
Clarification: I am talking about SQL coding practices in Postgres
(how to write queries for best results), not tuning-related
considerations (although that would be welcomed too).
Your question is a bit too vague. At this point in your
Hello,
If I have the following (simple) table layout:
create table a (
id serial primary key
);
create table b (
id integer references a,
test text
);
create view c as
select a.id,b.test from a
left join b
on a.id = b.id;
So if I do a select * from c I get the following:
test=# EXPLAIN
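Setting the truncated EXPLAIN output aside, the view's left-join behavior can be checked with a couple of throwaway rows; these INSERTs are my illustration, not from the original post:

```sql
-- Illustration only: rows of a with no partner in b come back with NULL test.
INSERT INTO a DEFAULT VALUES;  -- id 1
INSERT INTO a DEFAULT VALUES;  -- id 2
INSERT INTO b (id, test) VALUES (1, 'hello');

SELECT * FROM c ORDER BY id;
--  id | test
-- ----+-------
--   1 | hello
--   2 |
```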
On Thu, 27 Jan 2005, PFC wrote:
That's a really good idea: just store a list of the sorted ids in the
temp table - a small amount of data to insert... I like it!
Alex Turner
NetEconomist
The best part is that you can skip the LIMIT/OFFSET entirely if you
put page numbers in your cache table while
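For what it's worth, the page-number variant might look like this; every name in the sketch is invented, since the post doesn't show its schema:

```sql
-- Hypothetical cache of a sorted result set, one row per result,
-- with the page number precomputed when the cache is filled.
CREATE TABLE result_cache (
    search_id integer NOT NULL,  -- which cached search this row belongs to
    page_n    integer NOT NULL,  -- precomputed page number
    result_id integer NOT NULL   -- id of the row in the real table
);
CREATE INDEX result_cache_idx ON result_cache (search_id, page_n);

-- Fetching page 5 of search 42 with no LIMIT/OFFSET at all:
SELECT result_id
  FROM result_cache
 WHERE search_id = 42 AND page_n = 5;
```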
Alexandre wrote:
Here is a normal listing of design.product_department_time:
 product_id | department_id | req_time
------------+---------------+----------
        906 | A             |     3000
        906 | C             |     3000
        906 | D             |     1935
        907 | A
On Thu, 27 Jan 2005 10:23:34 -0500, Alexandre Leclerc
[EMAIL PROTECTED] wrote:
Here is a normal listing of design.product_department_time:
 product_id | department_id | req_time
------------+---------------+----------
        906 | A             |     3000
        906 | C             |     3000
However, it seems that integer integer[] does not exist :
Try intset(id) int[]. intset is an undocumented function :)
I'm going to add intset() to README.
SELECT * FROM table WHERE id int[]
Mm.
intset(x) seems to be like array[x] ?
Actually what I want is the opposite. I have a btree
Unfortunately we will need a rowtype with all the departments:
CREATE DOMAIN departments AS (a int, b int, c int, d int, ...);
I think you mean CREATE TYPE departments...
Chris
Steve,
You mentioned earlier that to get around the CS bug, avoid the query
structures which trigger it. Dumb question: How do you isolate this?
In real terms, it's generally triggered by a query joining against a very
large table requiring a seq scan.
You can probably find the bad queries
On Thu, Jan 27, 2005 at 08:56:03AM -0800, Josh Berkus wrote:
It's well understood. See the archives of this list. The problem is that
implementing the solution is very, very hard -- 100+ hours from a top-notch
programmer. I'm still hoping to find a corporate sponsor for the issue ...
Steve wrote:
Okay. Darn. While I don't write the queries for the application, I do
interact with the company frequently. They're considering moving the
queries into the database with PL/pgSQL. Currently their queries are
done through ProvIV development using ODBC. Will context switching be
On Thu, 27 Jan 2005 17:27:40 +0100, Dawid Kuroczko [EMAIL PROTECTED] wrote:
On Thu, 27 Jan 2005 10:23:34 -0500, Alexandre Leclerc
[EMAIL PROTECTED] wrote:
Here is a normal listing of design.product_department_time:
 product_id | department_id | req_time
------------+---------------+----------
On Thu, 27 Jan 2005 10:44:45 -0500, Merlin Moncure
[EMAIL PROTECTED] wrote:
Alexandre wrote:
Here is a normal listing of design.product_department_time:
 product_id | department_id | req_time
------------+---------------+----------
        906 | A             |     3000
        906 | C
Alexandre wrote:
On Thu, 27 Jan 2005 10:44:45 -0500, Merlin Moncure
[EMAIL PROTECTED] wrote:
Alexandre wrote:
Ok, you have a couple of different options here. The first thing that
jumps out at me is to cheat using arrays.
Let's start with the normalized result set.
On Fri, Jan 21, 2005 at 02:00:03AM -0500, Tom Lane wrote:
got absolutely zero flak about their use of Postgres in connection
with the .mobi bid, after having endured very substantial bombardment
Well, absolutely zero is probably overstating it, but Tom is right
that PostgreSQL is not the sort
On Fri, Jan 21, 2005 at 03:23:30PM -0800, Kevin Brown wrote:
beefier CPU setup would be in order. But in my (limited) experience,
the disk subsystem is likely to be a bottleneck long before the CPU is
in the general case, especially these days as disk subsystems haven't
improved in
PFC [EMAIL PROTECTED] writes:
intset(x) seems to be like array[x] ?
Actually what I want is the opposite.
What you want is called UNNEST. It didn't get done in time for 8.0. But if
what you have is an array of integers the int_array_enum() function I quoted
in the other post is
Greg Stark wrote:
test=# create or replace function array_push (anyarray, anyelement)
returns anyarray as 'select $1 || $2' language sql immutable strict;
CREATE FUNCTION
test=# create aggregate array_aggregate (basetype=anyelement,
sfunc=array_push, stype=anyarray, initcond = '{}');
CREATE AGGREGATE
What you want is called UNNEST. It didn't get done in time for 8.0.
But if what you have is an array of integers, the int_array_enum()
function I quoted in the other post is basically that.
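For readers following along, a sketch of what that expansion looks like in practice; int_array_enum() ships with contrib/intagg, and the table in the example is hypothetical:

```sql
-- Expand an int[] column back into one row per element --
-- the set-returning inverse of aggregating ids into an array.
-- Assumes contrib/intagg is installed and a hypothetical table
-- product_departments(product_id int, department_ids int[]).
SELECT product_id,
       int_array_enum(department_ids) AS department_id
  FROM product_departments;
```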
Yes, I used it, thanks. That's what I wanted. The query plans are good.
You don't really need the
On Thu, 27 Jan 2005 13:02:48 -0500, Merlin Moncure
[EMAIL PROTECTED] wrote:
Alexandre wrote:
On Thu, 27 Jan 2005 10:44:45 -0500, Merlin Moncure
[EMAIL PROTECTED] wrote:
Alexandre wrote:
Let's start with the normalized result set.
 product_id | department_id | sum
------------+---------------+-----
        924
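That normalized set is presumably just a GROUP BY rollup of the design.product_department_time table quoted earlier; this is my reconstruction, not copied from the thread:

```sql
-- One row per (product, department), with the times summed.
SELECT product_id, department_id, sum(req_time)
  FROM design.product_department_time
 GROUP BY product_id, department_id;
```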
Alexandre wrote:
Right. Expanding departments into columns is basically a dead end.
First of all, SQL is not really designed to do this, and second of all
(comments continued below)
Ok, I got it. The basic message is to avoid making columns out of rows
Yes. This is wrong.
like
On Wed, Jan 19, 2005 at 10:42:26AM -0500, Alan Stange wrote:
I'm fairly sure that the pi and po numbers include file IO in Solaris,
because of the unified VM and file systems.
That's correct.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
When my information changes, I alter my conclusions.
On Thu, 27 Jan 2005 16:05:09 -0500, Merlin Moncure
[EMAIL PROTECTED] wrote:
Alexandre wrote:
like I'm doing right now, that de-normalizing in an array is the way
to go.
Only sometimes. Looping application code is another tactic. There may
be other things to do as well that don't involve
Merlin Moncure [EMAIL PROTECTED] writes:
what about
CREATE AGGREGATE array_accum (
sfunc = array_append,
basetype = anyelement,
stype = anyarray,
initcond = '{}'
);
Huh, that is faster. It's only 14x slower than the C implementation.
For completeness, here are the
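A usage sketch of the array_accum aggregate against the design.product_department_time table from earlier in the thread; the commented output shows the intended shape, with array order following scan order:

```sql
-- Collapse one row per (product, department) into one row per product,
-- gathering the departments and times into parallel arrays.
SELECT product_id,
       array_accum(department_id) AS departments,
       array_accum(req_time)      AS req_times
  FROM design.product_department_time
 GROUP BY product_id;
-- e.g. 906 | {A,C,D} | {3000,3000,1935}
```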
PFC wrote:
Supposing your searches display results which are rows coming from one
specific table, you could create a cache table:
search_id  serial primary key
index_n    position of this result in the global result set
result_id  id of the resulting row
Then, making a
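Transcribed into DDL for concreteness; the column types beyond the serial key are my assumption, as the post only names the fields:

```sql
-- One row per cached result, as described in the post.
CREATE TABLE search_cache (
    search_id serial PRIMARY KEY,  -- per the post; one row per cached result
    index_n   integer,             -- position of this result in the global result set
    result_id integer              -- id of the resulting row
);
```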
quote from manual:
--
Unfortunately, there is no similarly trivial query
that can be used to improve the performance of count()
when applied to the entire table
--
Does count(1) also cause a sequential scan of the
entire table? It should be able to just use the
primary keys.
-Zavier
I'm involved in an implementation of trigger-based counting as a
substitute for count(*) in real time in an application. My
trigger-based counts seem to be working fine and dramatically improve
the performance of displaying the counts in the application layer.
The problem comes in
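A minimal sketch of such a count-maintaining trigger, since the post doesn't show its implementation; all names are invented, the items table is hypothetical, and plpgsql is assumed to be installed:

```sql
-- Hypothetical: keep a single-row summary table in sync with items.
CREATE TABLE item_count (n bigint NOT NULL);
INSERT INTO item_count VALUES (0);

CREATE FUNCTION bump_item_count() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE item_count SET n = n + 1;
    ELSIF TG_OP = ''DELETE'' THEN
        UPDATE item_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- AFTER trigger; return value is ignored
END;
' LANGUAGE plpgsql;

CREATE TRIGGER item_count_trig
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE bump_item_count();
```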
On Thu, Jan 27, 2005 at 21:17:56 -0800,
Zavier Sheran [EMAIL PROTECTED] wrote:
quote from manual:
--
Unfortunately, there is no similarly trivial query
that can be used to improve the performance of count()
when applied to the entire table
--
does count(1) also cause a sequential scan
Thomas,
Would it be absurd to drop the triggers during import and recreate them
afterward and update the counts in a summary update based on
information from the import process?
That's what I'd do.
Also, might I suggest storing the counts in memcached (see the pgmemcache
project on
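The drop-and-recreate approach suggested above, in outline; names are placeholders, and it assumes a count-maintaining trigger function already exists:

```sql
-- Placeholder names throughout: drop, bulk-load, resync, recreate.
DROP TRIGGER item_count_trig ON items;

COPY items FROM '/path/to/import.dat';

-- One-time resync of the summary count after the bulk load.
UPDATE item_count SET n = (SELECT count(*) FROM items);

CREATE TRIGGER item_count_trig
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE bump_item_count();
```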
Thomas F. O'Connell wrote:
The problem comes in importing new data into the tables for which the
counts are maintained. The current import process does some
preprocessing and then does a COPY from the filesystem to one of the
tables on which counts are maintained. This means that for each row
I forgot to mention that I'm running 7.4.6. The README includes the
caveat that pgmemcache is designed for use with 8.0. My instinct is to
be hesitant to use something like that in a production environment
without some confidence that people have done so with good and reliable
success or