just wondering if there's a special tweak i can do to force more usage
of indexes to do BITMAP ands?
I have a table like
A int
B int
C int
D int
E int
F int
G int
where A/B/C/D/E are indexed.
There are ~20 million rows in the table.
Queries are something like this:
select * from table
where A=X
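For context, a BitmapAnd plan only appears when a query filters on two or more indexed columns at once, and only when the planner estimates that combining the bitmaps is cheaper than a single index scan; there is no switch that forces it directly, though lowering random_page_cost or improving statistics can nudge the estimate. A minimal sketch (table and index names are illustrative, not from the original post):

```sql
-- Hypothetical table matching the poster's description
CREATE TABLE t (
    a int, b int, c int, d int, e int, f int, g int
);
CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);

-- A query filtering on two indexed columns may use a BitmapAnd,
-- combining t_a_idx and t_b_idx, if the planner finds it cheapest:
EXPLAIN SELECT * FROM t WHERE a = 42 AND b = 7;
```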
On Mon, 7 Apr 2008, Ow Mun Heng wrote:
just wondering if there's a special tweak i can do to force more usage
of indexes to do BITMAP ands?
There's no need to post this again. You have already had a couple of
useful answers.
Matthew
--
All of this sounds mildly turgid and messy and
We have a PostgreSQL 8.2.7 database (~230 GB) running on a machine with 8 Intel
Xeon Cores and 32 GB RAM (64-bit Linux 2.6.18). Data is stored on an EMC²
CLARiiON on RAID 1/0 (8 x 146 GB 15k rpm).
When we do random I/O with a small test tool (reading random 8k blocks from big
files in 200
Hell, Robert wrote:
We have a PostgreSQL 8.2.7 database (~230 GB) running on a machine with 8
Intel Xeon Cores and 32 GB RAM (64-bit Linux 2.6.18). Data is stored on an
EMC² CLARiiON on RAID 1/0 (8 x 146 GB 15k rpm).
When we do random I/O with a small test tool (reading random 8k blocks
Removing the constants definitely did take care of the issue on 8.3 (still
same query plan on 8.1). Thanks for your help in getting this resolved, and
sorry again for not including all relevant information on my initial request
On Fri, Apr 4, 2008 at 10:20 PM, Tom Lane [EMAIL PROTECTED] wrote:
Hi folks,
Here is the executive summary:
* RHEL5 (postgresql 8.1, .conf tweaked for performance [1])
* 2x Intel E5410 @ 2.33GHz (8 cores), 8GB RAM, 15KRPM SAS disks
* 4.9 million records in a table (IP address info)
* composite primary key: primary key(ipFrom, ipTo)
* ipFrom/ipTo are
kevin kempter wrote:
One of the things we need to query is the min date from the master table
- we may explore alternatives for this particular query, however even if
we fix this query I think we have a fundamental issue with the use of
indexes (actually the non-use) by the planner.
We had a
On Mon, 7 Apr 2008, Dimi Paun wrote:
* bad performance on queries of the form:
select * from ipTable where ipFrom <= val and val <= ipTo
This type of query is very hard for a normal B-tree index to answer. For
example, say val is half-way between min and max values. If you have an
index
On Mon, 7 Apr 2008, Dimi Paun wrote:
* bad performance on queries of the form:
select * from ipTable where ipFrom <= val and val <= ipTo
Oh yes, if you can guarantee that no two entries overlap at all, then
there is a simpler way. Just create a B-tree index on ipFrom as usual,
sort by
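Matthew's trick (assuming the ranges never overlap) can be sketched as follows; his message is truncated above, so the exact shape of the lookup is an assumption:

```sql
-- With non-overlapping ranges there is at most one candidate row:
-- the one with the largest ipFrom not exceeding the value. A plain
-- B-tree index on ipFrom answers this with a single backward probe.
SELECT *
FROM ipTable
WHERE ipFrom <= 123456
ORDER BY ipFrom DESC
LIMIT 1;
-- The caller (or an outer query) then checks that ipTo >= 123456
-- to reject a value that falls in a gap between ranges.
```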
Matthew wrote:
On Mon, 7 Apr 2008, Dimi Paun wrote:
* bad performance on queries of the form:
select * from ipTable where ipFrom <= val and val <= ipTo
This type of query is very hard for a normal B-tree index to answer. For
example, say val is half-way between min and max values. If you
On Mon, 2008-04-07 at 17:32 +0100, Heikki Linnakangas wrote:
If I understood the original post correctly, the ipFrom and ipTo
columns actually split a single linear ip address space into
non-overlapping chunks. Something like this:
ipFrom  ipTo
     1    10
    10    20
    20    50
    50
On Mon, 7 Apr 2008, Heikki Linnakangas wrote:
In that case, a regular index on (ipFrom, ipTo) should work just fine, and
that's what he's got. Actually, an index on just ipFrom would probably work
just as well. The problem is that the planner doesn't know about that special
relationship
Hi. I am looking for information about using table partitions in Postgres,
in particular w.r.t. performance when querying a partitioned table.
I implemented table partitioning following the documentation, which
is quite good and easy to follow (Chapter 5.9). I am doing some
testing, so at this
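In the 8.x series that chapter describes partitioning via table inheritance plus CHECK constraints; a minimal sketch (table and column names are illustrative):

```sql
-- Parent table plus one child per range, per the 8.x docs (Chapter 5.9)
CREATE TABLE measurement (
    logdate date NOT NULL,
    value   int
);
CREATE TABLE measurement_2008_04 (
    CHECK (logdate >= DATE '2008-04-01' AND logdate < DATE '2008-05-01')
) INHERITS (measurement);

-- With constraint_exclusion = on, the planner skips child tables
-- whose CHECK constraints contradict the WHERE clause:
SET constraint_exclusion = on;
SELECT * FROM measurement WHERE logdate = DATE '2008-04-15';
```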
On Monday 07 April 2008, Janet Jacobsen wrote:
no. of partitions   constraint_exclusion off   constraint_exclusion on
2                   0.597 ms                   0.427 ms
4                   0.653 ms                   0.414 ms
8
Hi
I have written a program that imputes (or rather corrects) data within
my database.
I am using a temporary table into which I put data from another partitioned
table. I then query this table to get the desired data. But the thing
is this temporary table has to be created for every record that I need
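One common way around per-record table creation (a sketch, assuming the workload fits this pattern; names are illustrative) is to create the temporary table once per session and let COMMIT clear it between records:

```sql
-- Created once per session; rows vanish at each COMMIT,
-- so the table can be reused instead of re-created per record.
CREATE TEMPORARY TABLE scratch (
    id      int,
    payload text
) ON COMMIT DELETE ROWS;

-- Per record: fill, query, commit (which empties the table).
BEGIN;
INSERT INTO scratch SELECT id, payload FROM source_partition WHERE id = 42;
SELECT * FROM scratch;
COMMIT;
```

This avoids the catalog churn of repeated CREATE/DROP TEMPORARY TABLE, which is usually what makes the per-record approach slow.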
The http://www.1-800-save-a-pet.com site is hosted with FreeBSD and
PostgreSQL and has a geo-spatial search as a central feature.
One thing that made a substantial performance improvement was switching
from the geo_distance() search in the earthdistance contrib, to use
the cube based geo-spatial
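The cube-based approach from the earthdistance contrib looks roughly like this (column names and the radius are illustrative, and this uses the modern `@>` spelling of cube containment):

```sql
-- Requires the cube and earthdistance contrib modules.
-- Index each point as an "earth" value (a cube under the hood):
CREATE INDEX pets_location_idx ON pets
    USING gist (ll_to_earth(lat, lon));

-- Find rows within ~10 miles (16093 metres) of a point:
-- earth_box gives an indexable bounding box, earth_distance the
-- exact great-circle distance for the final filter.
SELECT * FROM pets
WHERE earth_box(ll_to_earth(40.7, -74.0), 16093) @> ll_to_earth(lat, lon)
  AND earth_distance(ll_to_earth(40.7, -74.0), ll_to_earth(lat, lon)) < 16093;
```

The bounding-box test is what the GiST index accelerates; geo_distance() by contrast must be evaluated per row.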
When traffic to our PostgreSQL-backed website spikes, the first resource
we see being exhausted is the DB slots on the master server (currently
set to about 400).
I expect that as new Apache/mod_perl children are being put to use, they
are creating new database connections.
I'm interested in
On Mon, 07 Apr 2008 14:36:00 -0400
Mark Stosberg [EMAIL PROTECTED] wrote:
I'm particularly interested in review of DBD::Gofer, which seems like
it would help with this in our Perl application:
http://search.cpan.org/dist/DBI/lib/DBD/Gofer.pm
I realize it has limitations, like no
Mark,
do you know about our sky segmentation code Q3C,
see details http://www.sai.msu.su/~megera/wiki/SkyPixelization
We use it for billions of objects in our database and are quite happy with it.
Oleg
On Mon, 7 Apr 2008, Mark Stosberg wrote:
The http://www.1-800-save-a-pet.com site is hosted with FreeBSD and
* Dimi Paun:
* 4.9 million records in a table (IP address info)
You should use the ip4r type for that.
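ip4r stores an IP range in a single GiST-indexable column; a sketch of how it would replace the (ipFrom, ipTo) pair (table and column names are illustrative; `>>=` is ip4r's contains-or-equals operator):

```sql
-- Requires the ip4r contrib module.
CREATE TABLE ip_info (
    range ip4r,   -- replaces the (ipFrom, ipTo) pair
    info  text
);
CREATE INDEX ip_info_range_idx ON ip_info USING gist (range);

-- Range-contains-address lookups become a single indexed probe:
SELECT info FROM ip_info WHERE range >>= '192.168.1.17';
```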
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
I would stick to proven postgresql technologies such as pgbouncer.
Thanks for the fast recommendation, Joshua. I'll consider it.
Our application is Slony-replicated web/db project with two slaves.
Does this design seem sensible?
- Run one pgbouncer server on the master, with settings to
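A minimal pgbouncer configuration for that layout might look like this (the values are illustrative guesses, not recommendations from the thread):

```ini
; pgbouncer.ini (sketch)
[databases]
webdb = host=127.0.0.1 port=5432 dbname=webdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction      ; reuse server slots between transactions
max_client_conn = 1000       ; Apache children connect here...
default_pool_size = 50       ; ...but only this many reach the master
```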
I tried different other tools for random IO (including a self written one which
does random lseek and read).
This tool, started during one of our tests, achieves 2 iops (8k each).
Started alone I get something about 1,500 iops with an avg latency of 100 ms.
We are using SAN (EMC CLARiiON CX
In response to Hell, Robert [EMAIL PROTECTED]:
I tried different other tools for random IO (including a self written one
which does random lseek and read).
This tool, started during one of our tests, achieves 2 iops (8k each).
Started alone I get something about 1,500 iops with an avg
On Mon, 7 Apr 2008, Bill Moran wrote:
You know, with all the performance problems people have been bringing up
with regard to SANs, I'm putting SAN in the same category as RAID-5 ...
Not really fair, because unlike RAID5 it's at least *possible* to get good
write performance out of a SAN.
In response to Greg Smith [EMAIL PROTECTED]:
On Mon, 7 Apr 2008, Bill Moran wrote:
You know, with all the performance problems people have been bringing up
with regard to SANs, I'm putting SAN in the same category as RAID-5 ...
Not really fair, because unlike RAID5 it's at least
Hi,
I have a performance problem with a script that does massive bulk
insert in 6 tables. When the script starts the performance is really
good but will degrade minute after minute and take almost a day to
finish!
I almost tried everything suggested on this list, changed our external
raid array
Christian Bourque wrote:
Hi,
I have a performance problem with a script that does massive bulk
insert in 6 tables. When the script starts the performance is really
good but will degrade minute after minute and take almost a day to
finish!
Would I be correct in guessing that there are foreign
Craig Ringer wrote:
Christian Bourque wrote:
Hi,
I have a performance problem with a script that does massive bulk
insert in 6 tables. When the script starts the performance is really
good but will degrade minute after minute and take almost a day to
finish!
Would I be correct in guessing
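If foreign keys or indexes do turn out to be the cause, the usual remedy is to remove the per-row work during the load and rebuild afterwards (a sketch; object names are illustrative):

```sql
-- Drop or disable the expensive per-row work before the load...
ALTER TABLE child DROP CONSTRAINT child_parent_fkey;
DROP INDEX child_lookup_idx;

-- ...do the load via COPY or large transactions...
COPY child FROM '/tmp/child.csv' WITH CSV;

-- ...then rebuild in one pass, which is far cheaper than
-- maintaining the index and checking the FK row by row.
CREATE INDEX child_lookup_idx ON child (lookup_col);
ALTER TABLE child ADD CONSTRAINT child_parent_fkey
    FOREIGN KEY (parent_id) REFERENCES parent (id);
```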
Hi all,
I need to do a bulk data load of around 704 GB (log file size), at present within 8 hrs (1 am - 9 am). The data file size may increase 3 to 5 times in future.
Using COPY it takes 96 hrs to finish the task.
What is the best way to do it ?
HARDWARE: SUN THUMPER/ RAID10
OS : SOLARIS 10.
DB:
I use 1 row, which has a big blob.
2008-04-08
bitaoxiao
From: Chris
Sent: 2008-04-08 11:35:57
To: Christian Bourque
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] bulk insert performance problem
Craig Ringer wrote:
Christian Bourque wrote:
Hi,
I have a performance problem