At 12:52 AM 12/6/2005, Thomas Harold wrote:
David Lang wrote:
in that case you logically have two disks, so see the post from Ron
earlier in this thread.
And it's a very nice performance gain. Percent spent waiting
according to top is down around 10-20% instead of 80-90%. While
I'm not
Jenny wrote:
I'm running PostgreSQL 8.0.3 on i686-pc-linux-gnu (Fedora Core 2). I've been
dealing with Psql for more than 2 years now, but I've never had this case
before.
I have a table that has about 20 rows in it.
Table public.s_apotik
Column | Type
Thanks Bruno,
Issuing VACUUM FULL seems to have no influence on the time.
I've added to my script a VACUUM ANALYZE every 100 UPDATEs and ran the
test again (on a different record) and the time still increases.
Any other ideas?
Thanks,
Assaf.
-Original Message-
From: Bruno Wolff III
Hi,
I setup a database server using the following configuration.
Redhat 9.0
Postgresql 8.0.3
Then, I setup a client workstation to access this database server with
the following configuration.
Redhat 9.0
unixODBC 2.2.11
psqlodbc-08.01.0101
and write a C++ program to run database query.
In
I need to slice up a web server's disk space to provide space for
postgres and storing binaries such as images and sound files. I'm
thinking of using logical volume management (LVM) to help manage the
amount of space I use between postgres and the data volumes.
The server has a 250GB RAID10 (LSI
Hi,
Is it possible to get this query run faster than it does now, by adding
indexes, changing the query?
SELECT customers.objectid FROM prototype.customers, prototype.addresses
WHERE
customers.contactaddress = addresses.objectid
ORDER BY zipCode asc, housenumber asc
LIMIT 1 OFFSET 283745
Hi,
You might try these steps
1. Do a vacuum full analyze
2. Reindex the index on id column
3. Cluster the table based on this index
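As a sketch (table and index names below are placeholders, not the poster's actual ones), those three steps on an 8.0 server would look like this; note that VACUUM FULL and CLUSTER both take exclusive locks on the table:

```sql
-- 1. Reclaim dead space and refresh planner statistics
VACUUM FULL ANALYZE my_table;

-- 2. Rebuild the index on the id column to shed any bloat
REINDEX INDEX my_table_id_idx;

-- 3. Physically reorder the table rows to match that index
--    (8.0 syntax: CLUSTER indexname ON tablename)
CLUSTER my_table_id_idx ON my_table;
```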
On 12/5/05, Assaf Yaari [EMAIL PROTECTED] wrote:
Hi,
I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.
My application updates counters in DB. I left a test
Hi,
Is it possible to get this query run faster than it does now, by adding
indexes, changing the query?
SELECT customers.objectid FROM prototype.customers, prototype.addresses
WHERE
customers.contactaddress = addresses.objectid
ORDER BY zipCode asc, housenumber asc
LIMIT 1 OFFSET 283745
Joost,
Why do you use an offset here? I guess you're traversing the table
somehow; in this case it would be better to remember the last zipcode +
housenumber and add a condition to get the next one bigger than
the last one you've got... that would use the index on
zipcode+housenumber
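A hedged sketch of that keyset approach, assuming a composite index on (zipCode, housenumber); the `:last_*` placeholders stand for the values remembered from the previously fetched row. The expanded OR form is used rather than a row-value comparison, since row comparisons did not follow standard semantics before 8.2:

```sql
CREATE INDEX addresses_zip_hnr ON prototype.addresses (zipCode, housenumber);

SELECT customers.objectid
FROM prototype.customers, prototype.addresses
WHERE customers.contactaddress = addresses.objectid
  AND (zipCode > :last_zipcode
       OR (zipCode = :last_zipcode AND housenumber > :last_housenumber))
ORDER BY zipCode ASC, housenumber ASC
LIMIT 1;
```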
Hi,
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf
Of Joost Kraaijeveld
Sent: Tuesday, December 6, 2005 10:44
To: Pgsql-Performance
Subject: [PERFORM] Can this query go faster???
SELECT customers.objectid FROM prototype.customers,
On Tue, 2005-12-06 at 10:52 +0100, Csaba Nagy wrote:
Joost,
Why do you use an offset here? I guess you're traversing the table
somehow; in this case it would be better to remember the last zipcode +
housenumber and add a condition to get the next one bigger than
the last one you've
Joost Kraaijeveld wrote:
On Tue, 2005-12-06 at 10:52 +0100, Csaba Nagy wrote:
Joost,
Why do you use an offset here? I guess you're traversing the table
somehow; in this case it would be better to remember the last zipcode +
housenumber and add a condition to get the next one bigger
On Tue, Dec 06, 2005 at 01:40:47PM +0300, Olleg wrote:
I can't understand why bigger is better. For instance in a search by
index: the index points to a page and I need to load the page to get one row. Thus I
load 8kb from disk for every row, and then keep it in cache. You
recommend 64kb. With your
Hi Tino,
On Tue, 2005-12-06 at 11:32 +0100, Tino Wildenhain wrote:
You can have a row number in postgres easily too. For example if you
just include a serial for the row number.
Not if the order of things is determined runtime and not at insert time...
Cursor would work too but you would need
Joost Kraaijeveld wrote:
Hi Tino,
..
Cursor would work too but you would need to have a persistent connection.
I just tried it: a cursor is not faster (which does not surprise me at
all, as the amount of work looks the same to me)
Actually no, if you scroll forward, you just ask the
On Tue, 6 Dec 2005, Steinar H. Gunderson wrote:
On Tue, Dec 06, 2005 at 01:40:47PM +0300, Olleg wrote:
I can't understand why bigger is better. For instance in a search by
index: the index points to a page and I need to load the page to get one row. Thus I
load 8kb from disk for every row, and then keep it in
Joost Kraaijeveld wrote:
On Tue, 2005-12-06 at 12:36 +0100, Tino Wildenhain wrote:
I just tried it: a cursor is not faster (which does not surprise me at
all, as the amount of work looks the same to me)
Actually no, if you scroll forward, you just ask the database for the
next rows to
On 12/6/2005 4:08 AM, Assaf Yaari wrote:
Thanks Bruno,
Issuing VACUUM FULL seems to have no influence on the time.
I've added to my script a VACUUM ANALYZE every 100 UPDATEs and ran the
test again (on a different record) and the time still increases.
I think he meant
- run VACUUM FULL once,
On Tue, 2005-12-06 at 13:20, Joost Kraaijeveld wrote:
[snip]
Ah, a misunderstanding: I only need to calculate an index if the user
wants a record that is not in or adjacent to the cache (in which case I
can do a select for values past the last value in the cache). So I must always
materialize all rows
At 04:43 AM 12/6/2005, Joost Kraaijeveld wrote:
Hi,
Is it possible to get this query run faster than it does now, by adding
indexes, changing the query?
SELECT customers.objectid FROM prototype.customers, prototype.addresses
WHERE
customers.contactaddress = addresses.objectid
ORDER BY zipCode
On Tue, 2005-12-06 at 11:32 +0100, Tino Wildenhain wrote:
You can have a row number in postgres easily too. For example if you
just include a serial for the row number.
Not if the order of things is determined runtime and not at insert
time...
Cursor would work too but you would need to
Kathy Lo [EMAIL PROTECTED] writes:
I found that the memory of the database server was nearly used up (total 2G RAM).
After I stopped the program, the used memory was not freed.
I see no particular reason to believe that you are describing an actual
memory leak. More likely, you are just seeing the
Ron wrote:
For accuracy's sake, which exact config did you finally use?
How did you choose the config you finally used? Did you test the three
options or just pick one?
(Note: I'm not the original poster.)
I just picked the option of putting the data/pg_xlog directory (WAL) on
a 2nd set
Hi,
After few test, the difference is explained by the
effective_cache_size parameter.
with effective_cache_size=1000 (default)
the planner chooses the following plan
postgres=# explain select count(*) from (select distinct on (val) *
from test) as foo;
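For reference, the parameter can be changed per session to watch the plan flip; the value below is purely illustrative (effective_cache_size is measured in 8 kB pages, so 196608 corresponds to roughly 1.5 GB of expected OS cache):

```sql
SET effective_cache_size = 196608;  -- illustrative value, in 8kB pages

EXPLAIN SELECT count(*) FROM (SELECT DISTINCT ON (val) * FROM test) AS foo;
```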
Greetings all,
I'm going to do a performance comparison with DocMgr and PG81/TSearch2 on
one end, and Apache Lucene on the other end.
In order to do this, I'm going to create a derivative of the
docmgr-autoimport script so that I can specify one file to import at a
time. I'll then create
[EMAIL PROTECTED] writes:
On Mon, 5 Dec 2005, Tom Lane wrote:
(Note to self: it is a bit odd that fac_id=261 is pushed down to become
an indexqual in one case but not the other ...)
I speculate that the seq_scan wasn't really the slow part
compared to not using both parts of the index
Has anyone ever compared TSearch2 to Lucene, as far as performance is
concerned?
I'll stay away from TSearch2 until it is fully integrated in the
postgres core (like create index foo_text on foo (texta, textb) USING
TSearch2). Because a full integration is unlikely to happen in the near
Folks,
tsearch2 and Lucene are very different search engines, so it would be an unfair
comparison. If you need full access to metadata and instant indexing,
you'll probably find tsearch2 more suitable than Lucene. But if
you can live without those features and need to search read-only
archives, you
Oleg Bartunov wrote:
Folks,
tsearch2 and Lucene are very different search engines, so it would be an unfair
comparison. If you need full access to metadata and instant indexing,
you'll probably find tsearch2 more suitable than Lucene. But if
you can live without those features and need to
This didn't get through the first time around, so resending it again.
Sorry for any duplicate entries.
Hello,
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full) on all
of my tables. However, it's noted in
On Tue, 2005-12-06 at 03:22, Kathy Lo wrote:
Hi,
In this program, it will access this database server using simple and
complex (joining tables) SQL Select statement and retrieve the matched
rows. For each access, it will connect to the database and then disconnect.
I found that the memory of
Bruce Momjian pgman@candle.pha.pa.us writes:
Oleg Bartunov wrote:
Tsearch2 integration into pgsql would be cool, but, I see no problem to
use tsearch2 as an official extension module.
Agreed. There isn't anything magical about a plug-in vs something
integrated, as least in PostgreSQL.
The
Ameet Kini [EMAIL PROTECTED] writes:
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full) on all
of my tables. However, it's noted in the docs (e.g.
Bruce Momjian wrote:
Oleg Bartunov wrote:
Folks,
tsearch2 and Lucene are very different search engines, so it would be an unfair
comparison. If you need full access to metadata and instant indexing,
you'll probably find tsearch2 more suitable than Lucene. But if
you can live without those
Michael Riess wrote:
Bruce Momjian wrote:
Oleg Bartunov wrote:
Folks,
tsearch2 and Lucene are very different search engines, so it would be an unfair
comparison. If you need full access to metadata and instant indexing,
you'll probably find tsearch2 more suitable than Lucene. But if
On Dec 6, 2005, at 12:44 PM, Ameet Kini wrote:
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full)
on all
of my tables. However, it's noted in the docs (e.g.
I ran a fairly exhaustive pgbench on 2 test machines I have (quad dual-core
Intel and Opteron). Of course the Opteron was much faster, but
interestingly, it was experiencing 3x more context switches than the
Intel box (up to 100k, versus ~30k avg on Dell). Both are RH4.0
64bit/PG8.1 64bit.
Sun (v40z):
Hello,
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full) on all
of my tables. However, it's noted in the docs (e.g.
http://developer.postgresql.org/docs/postgres/routine-reindex.html)
and on the lists here
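The maintenance being discussed would amount to something like the following (the table name is a placeholder); note that REINDEX takes an exclusive lock, which is exactly why scheduling it is the question here:

```sql
VACUUM ANALYZE my_table;  -- routine, non-blocking maintenance
REINDEX TABLE my_table;   -- rebuilds all indexes on the table; locks it exclusively
```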
On Dec 6, 2005, at 2:04 PM, Anjan Dave wrote:
interestingly, it was experiencing 3x more context switches than the
Intel box (up to 100k, versus ~30k avg on Dell). Both are RH4.0
I'll assume that's context switches per second... so for the opteron
that's 6540 cs's and for the Dell
On Dec 6, 2005, at 11:14 AM, Ameet Kini wrote:
need for vacuums. However, it'd be great if there was a similar
automatic
reindex utility, like say, a pg_autoreindex daemon. Are there any
plans
for this feature? If not, then would cron scripts be the next best
what evidence do you have
Tom Lane wrote:
[EMAIL PROTECTED] writes:
On Mon, 5 Dec 2005, Tom Lane wrote:
I speculate that the seq_scan wasn't really the slow part
compared to not using both parts of the index in the
second part of the plan. The table point_features is tens of
thousands of rows, while the
Rory Campbell-Lange wrote:
The server has a 250GB RAID10 (LSI 320-I + BBU) volume which I am
thinking of slicing up in the following way (Linux 2.6 kernel):
/ : ext3 : 47GB (root, home etc)
/boot : ext3 : 1GB
/tmp : ext2 : 2GB
/usr : ext3 : 4GB
Ron Mayer [EMAIL PROTECTED] writes:
The point_features table is pretty dynamic and it's possible
that the data changed between my 'explain analyze' statement in
the first post in this thread. However since both of them
show an estimate of rows=948 and returned an actual of 917 I
don't think
On 6 Dec 2005, at 16:47, Joshua Kramer wrote:
Has anyone ever compared TSearch2 to Lucene, as far as performance
is concerned?
In our experience (small often-updated documents) Lucene leaves
tsearch2 in the dust. This probably has a lot to do with our usage
pattern though. For our usage
On Tue, Dec 06, 2005 at 10:52:57 +0100,
Csaba Nagy [EMAIL PROTECTED] wrote:
Joost,
Why do you use an offset here? I guess you're traversing the table
somehow; in this case it would be better to remember the last zipcode +
housenumber and add a condition to get the next one bigger
Vivek Khera wrote:
On Dec 6, 2005, at 11:14 AM, Ameet Kini wrote:
need for vacuums. However, it'd be great if there was a similar
automatic
reindex utility, like say, a pg_autoreindex daemon. Are there any plans
for this feature? If not, then would cron scripts be the next best
what
Ameet Kini wrote:
This didn't get through the first time around, so resending it again.
Sorry for any duplicate entries.
Hello,
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full) on all
of my tables.
Tom Lane wrote:
...planner is actually going to choose based on the ultimate join cost,
not on the subplan costs...
In this explanation, the reason for the change in plans over time could
be a change in the statistics for the other table. Is facets more
dynamic than point_features?
In total
Alan Stange [EMAIL PROTECTED] writes:
Vivek Khera wrote:
what evidence do you have that you are suffering index bloat?
The files for the two indices on a single table used 7.8GB of space
before a reindex, and 4.4GB after.
That's not bloat ... that's pretty nearly in line with the normal
We're running a dual Xeon machine with hyperthreading enabled and
PostgreSQL 8.0.3. Below is the type of CPUs:
processor : 3
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 3.20GHz
stepping : 1
cpu MHz
Ron Mayer [EMAIL PROTECTED] writes:
Tom Lane wrote:
In this explanation, the reason for the change in plans over time could
be a change in the statistics for the other table. Is facets more
dynamic than point_features?
Facets is a big table with rather odd distributions of values.
Many of
Tom Lane wrote:
If you have background tasks doing ANALYZEs then this explanation seems
plausible enough. I'm willing to accept it anyway ...
Yup, there are such tasks. I could dig through logs to try to confirm
or reject it; but I think it's reasonably likely that this happened.
Basically,
Hi August. Thanks very much for your mail.
On 06/12/05, August Zajonc ([EMAIL PROTECTED]) wrote:
Rory Campbell-Lange wrote:
The server has a 250GB RAID10 (LSI 320-I + BBU) volume which I am
thinking of slicing up in the following way (Linux 2.6 kernel):
/ : ext3 : 47GB (root,
Tom Lane wrote:
Alan Stange [EMAIL PROTECTED] writes:
Vivek Khera wrote:
what evidence do you have that you are suffering index bloat?
The files for the two indices on a single table used 7.8GB of space
before a reindex, and 4.4GB after.
That's not bloat ... that's
t == [EMAIL PROTECTED] writes:
t Brandon Metcalf [EMAIL PROTECTED] writes:
t We've been tuning the kernel (2.4 SMP flavor) and have improved
t performance quite a bit. I'm now wondering if turning off HT will
t improve performance even more. Based on the vmstat output below, is
t the
On Tue, Dec 06, 2005 at 03:01:02PM -0600, Brandon Metcalf wrote:
We're running a dual Xeon machine with hyperthreading enabled and
PostgreSQL 8.0.3.
The two single most important things that will help you with high rates of
context switching:
- Turn off hyperthreading.
- Upgrade to 8.1.
On Tue, 6 Dec 2005, Thomas Harold wrote:
Ron wrote:
For accuracy's sake, which exact config did you finally use?
How did you choose the config you finally used? Did you test the three
options or just pick one?
(Note: I'm not the original poster.)
I just picked the option of putting the
Anjan Dave [EMAIL PROTECTED] writes:
-bash-3.00$ time pgbench -c 1000 -t 30 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 1000
number of transactions per client: 30
number of transactions actually processed: 3/3
tps =
On Tue, Dec 06, 2005 at 04:03:22PM -0600, Ameet Kini wrote:
I'm running postgresql v8.0 and my problem is that running vacuum on my
indices are blazing fast (upto 10x faster) AFTER running reindex. For a
table with only 1 index, the time to do a vacuum (without full) went down
from 45 minutes to
On Tue, Dec 06, 2005 at 09:36:23PM +, Rory Campbell-Lange wrote:
Yes, we don't have any spare disks unfortunately. We have enabled the
BBU write, so we are hoping for good performance.
Even if you don't use separate disks you'll probably get better
performance by putting the WAL on a
I would argue that you almost certainly won't by doing that, as you will
create a new place even further away for the disk head to seek to,
instead of just another file on the same FS that is probably closer to
the current head position.
Alex
On 12/6/05, Michael Stone [EMAIL PROTECTED] wrote:
On Tue,
On Tue, Dec 06, 2005 at 07:52:25PM -0500, Alex Turner wrote:
I would argue that you almost certainly won't by doing that, as you will
create a new place even further away for the disk head to seek to,
instead of just another file on the same FS that is probably closer to
the current head position.
I
...
So you'll avoid a non-core product and instead only use another non-core
product...?
Chris
Michael Riess wrote:
Has anyone ever compared TSearch2 to Lucene, as far as performance is
concerned?
I'll stay away from TSearch2 until it is fully integrated in the
postgres core (like
On Sat, 2005-12-03 at 23:00 +, Rodrigo Madera wrote:
CREATE TABLE person(
id bigint PRIMARY KEY,
first_name TEXT,
age INT,
mother bigint REFERENCES person,
father bigint REFERENCES person,
siblings array of bigints (don't remember the syntax, but you get the
point)
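For what it's worth, the array syntax the poster couldn't recall is `bigint[]`; a cleaned-up sketch of that schema might read:

```sql
CREATE TABLE person(
    id         bigint PRIMARY KEY,
    first_name TEXT,
    age        INT,
    mother     bigint REFERENCES person,
    father     bigint REFERENCES person,
    siblings   bigint[]  -- array of sibling ids
);
```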
I'm running PostgreSQL 8.0.3 on i686-pc-linux-gnu (Fedora Core 2). I've been
dealing with Psql for more than 2 years now, but I've never had this case
before.
I have a table that has about 20 rows in it.
Table public.s_apotik
Column | Type|
I run the VACUUM as you suggested, but still no response from the server. So, I
decided to DROP the database. I got a message that the database is being used.
I closed every application that was accessing it, but the message remained.
I checked the server processes (ps -ax). There were lots of
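One way to see which backends are still holding the database open is to query pg_stat_activity (column names as in 8.0; 'mydb' is a placeholder, and current_query is only populated when stats_command_string is enabled). The listed processes can then be shut down before retrying the DROP:

```sql
-- list backends still connected to the target database
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE datname = 'mydb';
```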
Hi,
I'm trying to optimize some selects between 2 tables and the best way I
found was to alter the first table and add the fields of the 2nd table. I
adjusted the contents and now I have only one table with all the info that
I need. Now comes my problem: because of legacy queries I decided to
Query fails with error: calloc - Cannot allocate memory
Hi
I am trying to run a query that selects 26 million rows from a
table with 68 byte rows.
When run on the Server via psql the following error occurs:
calloc : Cannot allocate memory
When run via ODBC from Cognos Framework
Michael Stone wrote:
Note that the WAL will
wander around the disk as files are created and deleted, whereas tables
are basically updated in place.
Huh? I was rather under the impression that the WAL files (in
pg_xlog, right?) were reused once they'd been created, so their
locations on the
If you're trying to retrieve 26 million rows into RAM in one go of
course it'll be trouble.
Just use a cursor. (DECLARE/FETCH/MOVE)
Chris
Howard Oblowitz wrote:
Hi …
I am trying to run a query that selects 26 million rows from a
table with 68 byte rows.
When run on the Server via psql
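A minimal sketch of the cursor approach Chris suggests (the table name is a placeholder); rows are pulled in batches instead of materializing all 26 million at once:

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 1000 FROM big_cur;   -- repeat until it returns no rows
MOVE 100000 IN big_cur;    -- or skip ahead without transferring rows
CLOSE big_cur;
COMMIT;
```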
Tom Lane wrote:
Anjan Dave [EMAIL PROTECTED] writes:
-bash-3.00$ time pgbench -c 1000 -t 30 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 1000
number of transactions per client: 30
number of transactions actually processed:
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
1. You don't want number of clients (-c) much higher than scaling factor
(-s in the initialization step).
Should we throw a warning when someone runs the test this way?
Not a bad idea (though of course only for the standard
On Mon, 2005-12-05 at 09:42 +0200, Howard Oblowitz wrote:
I am trying to run a query that selects 26 million rows from a
table with 68 byte rows.
When run on the Server via psql the following error occurs:
calloc : Cannot allocate memory
That's precisely what I'd expect: the backend will
No, my problem is that using TSearch2 interferes with other core
components of postgres like (auto)vacuum or dump/restore.
...
So you'll avoid a non-core product and instead only use another non-core
product...?
Chris
Michael Riess wrote:
Has anyone ever compared TSearch2 to Lucene, as