I have a database of hundreds of millions of web links (between sites)
in Postgres. For each link, we record the URL, the referer, and the
most recent date on which the link was seen. I'm having some serious
performance issues when it comes to writing new data into the database.
One machine is simply not
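A common remedy for slow per-row writes at this scale is to batch each load through a staging table with COPY and fold it into the main table in one statement. A minimal sketch, assuming a hypothetical `links (url, referer, last_seen)` table — the email does not give the actual schema, so all names here are guesses — and a Postgres release with `ON CONFLICT` (9.5+; older releases need a separate UPDATE plus `INSERT ... WHERE NOT EXISTS`):

```sql
-- Stage the incoming batch; a temp table has no indexes to maintain.
CREATE TEMP TABLE links_stage (url text, referer text, last_seen date);
COPY links_stage FROM '/tmp/batch.csv' WITH CSV;

-- Fold the whole batch into the main table in a single upsert.
INSERT INTO links (url, referer, last_seen)
SELECT url, referer, last_seen FROM links_stage
ON CONFLICT (url, referer)
  DO UPDATE SET last_seen = GREATEST(links.last_seen, EXCLUDED.last_seen);
```

One multi-row statement per batch amortizes the per-statement and per-transaction overhead that dominates when rows are inserted one at a time.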
Hi Everyone
The machine is an IBM x345 with a ServeRAID 6i (128MB cache) and 6 SCSI 15k
disks.
2 disks are in RAID1 and hold the OS, swap, and pg_xlog;
4 disks are in RAID10 and hold the cluster itself.
The DB will have two major tables, one with 10 million rows and one with
100 million rows.
All the
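With pg_xlog already split onto its own spindles as described above, the remaining write-side tuning is usually in postgresql.conf. The values below are illustrative starting points only, not tuned numbers — they assume roughly 4GB of RAM (the email does not say how much the box has) and use parameter names from recent Postgres releases:

```
# postgresql.conf -- starting points for a write-heavy load, assuming ~4GB RAM
shared_buffers = 1GB                 # ~25% of RAM is a common rule of thumb
wal_buffers = 16MB                   # larger WAL buffer for bursty writes
checkpoint_completion_target = 0.9   # spread checkpoint I/O over the interval
effective_cache_size = 3GB           # tells the planner how much the OS caches
```

Each of these should be validated against the actual workload; the BBU cache on the ServeRAID controller matters at least as much as any of them.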
Hello all,
Mostly Postgres makes sense to me. But now and then it does something
that boggles my brain. Take the statements below. I have a table
(agent) with 5300 rows. The primary key is agent_id. I can do SELECT
agent_id FROM agent and it returns all PK values in less than half a
second
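When a statement like this is surprisingly fast or slow, `EXPLAIN ANALYZE` shows both the plan the planner chose and the actual run time, which usually resolves the mystery. Using the table named in the message:

```sql
EXPLAIN ANALYZE SELECT agent_id FROM agent;
-- On a 5300-row table this normally shows a sequential scan:
-- reading the whole table is cheaper than walking the PK index,
-- so the planner ignores the index even though one exists.
```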
On Fri, Sep 16, 2005 at 08:34:14PM -0500, Martin Nickel wrote:
Hello all,
Mostly Postgres makes sense to me. But now and then it does something
that boggles my brain. Take the statements below. I have a table
(agent) with 5300 rows. The primary key is agent_id. I can do SELECT
agent_id
I have a query that looks roughly like this (I've removed irrelevant
SELECT clause material and obfuscated names, trying to keep them
consistent where altered in EXPLAIN output):
SELECT u.emma_member_id, h.action_ts
FROM user as u, history as h
WHERE u.user_id = h.user_id
AND h.action_id =
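For a query of this shape — a join on `user_id` filtered by `action_id` — a composite index whose leading column matches the filter often lets the planner reach the qualifying `history` rows directly. A sketch, using the (obfuscated) names from the quoted query; the real schema may differ:

```sql
-- Leading column serves the equality filter, second column the join key.
CREATE INDEX history_action_user_idx ON history (action_id, user_id);
ANALYZE history;  -- refresh statistics so the planner considers the index
```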
On Fri, 16 Sep 2005, Martin Nickel wrote:
Hello all,
Mostly Postgres makes sense to me. But now and then it does something
that boggles my brain. Take the statements below. I have a table
(agent) with 5300 rows. The primary key is agent_id. I can do SELECT
agent_id FROM agent and it
Thomas F. O'Connell [EMAIL PROTECTED] writes:
Clearly, if the index on the timestamp field is there, postgres wants
to use it for the ORDER BY, even though the performance is worse. How
is this preference made internally? If both indexes exist, will
postgres always prefer the index on an
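One way to answer this empirically is to make the planner show its second choice: the `enable_*` settings are session-local switches meant exactly for this kind of diagnosis. A sketch, with a hypothetical query standing in for the one under discussion:

```sql
-- Plan the planner prefers (index scan on the timestamp column):
EXPLAIN ANALYZE SELECT * FROM history ORDER BY action_ts DESC LIMIT 50;

-- Forbid index scans for this session and compare the alternative plan:
SET enable_indexscan = off;   -- diagnostic only, never for production
EXPLAIN ANALYZE SELECT * FROM history ORDER BY action_ts DESC LIMIT 50;
RESET enable_indexscan;
```

Comparing the two `EXPLAIN ANALYZE` timings shows whether the planner's cost estimates match reality; if they don't, statistics or cost parameters are the next place to look.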
[EMAIL PROTECTED] writes:
We currently have around 400,000 tables
and I would estimate that the vast majority of these tables are
relatively small (less than 200 rows).
Stop right there, and go redesign your schema. This is unbelievably
wrong :-(
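The usual redesign for hundreds of thousands of small per-entity tables is a single table with a discriminator column, so the entity id becomes data instead of schema. A sketch with hypothetical names (the thread does not describe the actual tables):

```sql
-- Instead of 400,000 tables like measurements_<id>, one table keyed by id:
CREATE TABLE measurements (
    entity_id   integer   NOT NULL,
    recorded_at timestamp NOT NULL,
    value       numeric,
    PRIMARY KEY (entity_id, recorded_at)
);

-- Per-entity access stays cheap via the leading primary-key column:
-- SELECT * FROM measurements WHERE entity_id = 12345;
```

This keeps the system catalogs small and lets the planner gather useful statistics, which it cannot do across 400,000 separate 200-row tables.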
[EMAIL PROTECTED] wrote:
I have a database of hundreds of millions of web links (between sites)
in Postgres. For each link, we record the URL, the referer, and the
most recent date on which the link was seen. I'm having some serious
performance issues when it comes to writing new data into the
bm\mbn wrote:
Hi Everyone
The machine is an IBM x345 with a ServeRAID 6i (128MB cache) and 6 SCSI 15k
disks.
2 disks are in RAID1 and hold the OS, swap, and pg_xlog;
4 disks are in RAID10 and hold the cluster itself.
The DB will have two major tables, one with 10 million rows and one with
100 million