No, my problem is that using TSearch2 interferes with other core
components of postgres like (auto)vacuum or dump/restore.
...
So you'll avoid a non-core product and instead only use another non-core
product...?
Chris
Michael Riess wrote:
Has anyone ever compared TSearch2 to Lucene, as far as performance is
concerned?
On Mon, 2005-12-05 at 09:42 +0200, Howard Oblowitz wrote:
> I am trying to run a query that selects 26 million rows from a
> table with 68 byte rows.
>
> When run on the Server via psql the following error occurs:
>
> calloc : Cannot allocate memory
That's precisely what I'd expect: the backend
Bruce Momjian writes:
> Tom Lane wrote:
>> 1. You don't want number of clients (-c) much higher than scaling factor
>> (-s in the initialization step).
> Should we throw a warning when someone runs the test this way?
Not a bad idea (though of course only for the "standard" scripts).
Tatsuo, what
Would someone add a comment in the code about this, or research it?
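For reference, keeping the scaling factor at or above the client count looks
like this; the numbers and database name are placeholders, not from this thread:
$ pgbench -i -s 100 pgbench      # initialize with scaling factor 100 (-s)
$ pgbench -c 50 -t 1000 pgbench  # then keep clients (-c) at or below -s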
---
Tom Lane wrote:
> I wrote:
> > Looking at this, I wonder if there isn't a bug or at least an
> > inefficiency in 8.1. The KeysEqual short circuit tests
Tom Lane wrote:
> "Anjan Dave" <[EMAIL PROTECTED]> writes:
> > -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
> > starting vacuum...end.
> > transaction type: TPC-B (sort of)
> > scaling factor: 1
> > number of clients: 1000
> > number of transactions per client: 30
> number of transactions actually processed: 3/3
If you're trying to retrieve 26 million rows into RAM in one go of
course it'll be trouble.
Just use a cursor. (DECLARE/FETCH/MOVE)
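A minimal sketch of the cursor approach; the table name and batch size are
made up for illustration:
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 10000 FROM big_cur;   -- process this batch; repeat until no rows come back
MOVE 10000 IN big_cur;      -- or skip a batch without fetching it
CLOSE big_cur;
COMMIT;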
Chris
Howard Oblowitz wrote:
Hi …
I am trying to run a query that selects 26 million rows from a
table with 68 byte rows.
When run on the Server via psql the following error occurs:
calloc : Cannot allocate memory
Michael Stone wrote:
> Note that the WAL will
> wander around the disk as files are created and deleted, whereas tables
> are basically updated in place.
Huh? I was rather under the impression that the WAL files (in
pg_xlog, right?) were reused once they'd been created, so their
locations on the
Title: Query Fails with error calloc - Cannot allocate memory
Hi …
I am trying to run a query that selects 26 million rows from a
table with 68 byte rows.
When run on the Server via psql the following error occurs:
calloc : Cannot allocate memory
When run via ODBC from Cognos Framework Manager
Hi,
I'm trying to optimize some selects between 2 tables, and the best way I
found was to alter the first table and add the fields of the 2nd table. I
adjusted the contents and now I have only one table with all the info that
I need. Now my problem arises: because of legacy queries I decided to make
I run the VACUUM as you suggested, but still no response from the server. So, I
decided to DROP the database. I got a message that the database is being used.
I closed every application that was accessing it, but the message remains.
I checked the server processes (ps -ax). There were lots of 'UPDAT
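The sessions still holding the database open can usually be spotted via
pg_stat_activity; a sketch, where 'mydb' is a placeholder and current_query
is only filled in when stats_command_string is enabled on 8.0-era servers:
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE datname = 'mydb';
-- the reported procpid values are the backend PIDs seen in ps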
I'm running PostgreSQL 8.0.3 on i686-pc-linux-gnu (Fedora Core 2). I've been
dealing with Psql for over 2 years now, but I've never had this case
before.
I have a table that has about 20 rows in it.
Table "public.s_apotik"
Column | Type | Modifiers
On Sat, 2005-12-03 at 23:00 +, Rodrigo Madera wrote:
> CREATE TABLE person(
>id bigint PRIMARY KEY,
>first_name TEXT,
>age INT,
>mother bigint REFERENCES person,
>father bigint REFERENCES person,
>siblings array of bigints (don't remember the syntax, but you get the
>
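For reference, the array column the poster couldn't recall the syntax for
can be declared as bigint[]; a sketch of the table as quoted above:
CREATE TABLE person(
    id bigint PRIMARY KEY,
    first_name TEXT,
    age INT,
    mother bigint REFERENCES person,
    father bigint REFERENCES person,
    siblings bigint[]  -- an array of bigints
);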
...
So you'll avoid a non-core product and instead only use another non-core
product...?
Chris
Michael Riess wrote:
Has anyone ever compared TSearch2 to Lucene, as far as performance is
concerned?
I'll stay away from TSearch2 until it is fully integrated in the
postgres core (like "cre
On Tue, Dec 06, 2005 at 07:52:25PM -0500, Alex Turner wrote:
I would argue that you almost certainly won't by doing that, as you will
create a new place even further away for the disk head to seek to,
instead of just another file on the same FS that is probably closer to
the current head position.
I
I would argue that you almost certainly won't by doing that, as you will
create a new place even further away for the disk head to seek to,
instead of just another file on the same FS that is probably closer to
the current head position.
Alex
On 12/6/05, Michael Stone <[EMAIL PROTECTED]> wrote:
> On Tu
On Tue, Dec 06, 2005 at 09:36:23PM +, Rory Campbell-Lange wrote:
Yes, we don't have any spare disks unfortunately. We have enabled the
BBU write cache, so we are hoping for good performance.
Even if you don't use separate disks you'll probably get better
performance by putting the WAL on a separate partition.
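The usual way to do that is to relocate pg_xlog and leave a symlink behind
while the server is stopped; the paths here are hypothetical:
$ pg_ctl stop -D /var/lib/pgsql/data
$ mv /var/lib/pgsql/data/pg_xlog /wal/pg_xlog
$ ln -s /wal/pg_xlog /var/lib/pgsql/data/pg_xlog
$ pg_ctl start -D /var/lib/pgsql/data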
On Tue, Dec 06, 2005 at 04:03:22PM -0600, Ameet Kini wrote:
I'm running postgresql v8.0 and my problem is that running vacuum on my
indices is blazing fast (up to 10x faster) AFTER running reindex. For a
table with only 1 index, the time to do a vacuum (without full) went down
from 45 minutes to
"Anjan Dave" <[EMAIL PROTECTED]> writes:
> -bash-3.00$ time pgbench -c 1000 -t 30 pgbench
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> number of clients: 1000
> number of transactions per client: 30
> number of transactions actually processed: 3/3
> tps
> what evidence do you have that you are suffering index bloat? or are
> you just looking for solutions to problems that don't exist as an
> academic exercise? :-)
Well, firstly, it's not an academic exercise - it's very much a real
problem that needs a real solution :)
I'm running postgresql
On Tue, 6 Dec 2005, Thomas Harold wrote:
Ron wrote:
For accuracy's sake, which exact config did you finally use?
How did you choose the config you finally used? Did you test the three
options or just pick one?
(Note: I'm not the original poster.)
I just picked the option of putting the data/pg_xlog directory (WAL) on
On Tue, Dec 06, 2005 at 03:01:02PM -0600, Brandon Metcalf wrote:
> We're running a dual Xeon machine with hyperthreading enabled and
> PostgreSQL 8.0.3.
The two most important things that will help you with high rates of
context switching:
- Turn off hyperthreading.
- Upgrade to 8.1.
t == [EMAIL PROTECTED] writes:
t> "Brandon Metcalf" <[EMAIL PROTECTED]> writes:
t> > We've been tuning the kernel (2.4 SMP flavor) and have improved
t> > performance quite a bit. I'm now wondering if turning off HT will
t> > improve performance even more. Based on the vmstat output below, is
Tom Lane wrote:
Alan Stange <[EMAIL PROTECTED]> writes:
Vivek Khera wrote:
what evidence do you have that you are suffering index bloat?
The files for the two indices on a single table used 7.8GB of space
before a reindex, and 4.4GB after.
That's not bloat ... that's pretty nearly in line with the norm
Hi August. Thanks very much for your mail.
On 06/12/05, August Zajonc ([EMAIL PROTECTED]) wrote:
> Rory Campbell-Lange wrote:
> >The server has a 250GB RAID10 (LSI 320-I + BBU) volume which I am
> >thinking of slicing up in the following way (Linux 2.6 kernel):
> >
> >/ : ext3 : 47GB
Tom Lane wrote:
If you have background tasks doing ANALYZEs then this explanation seems
plausible enough. I'm willing to accept it anyway ...
Yup, there are such tasks. I could dig through logs to try to confirm
or reject it; but I think it's reasonably likely that this happened.
Basically,
"Brandon Metcalf" <[EMAIL PROTECTED]> writes:
> We've been tuning the kernel (2.4 SMP flavor) and have improved
> performance quite a bit. I'm now wondering if turning off HT will
> improve performance even more. Based on the vmstat output below, is
> the context switching typical or too high?
G
Ron Mayer <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> In this explanation, the reason for the change in plans over time could
>> be a change in the statistics for the other table. Is "facets" more
>> dynamic than "point_features"?
> Facets is a big table with rather odd distributions of values
We're running a dual Xeon machine with hyperthreading enabled and
PostgreSQL 8.0.3. Below is the type of CPUs:
processor : 3
vendor_id : GenuineIntel
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 3.20GHz
stepping: 1
cpu MHz
Alan Stange <[EMAIL PROTECTED]> writes:
> Vivek Khera wrote:
>> what evidence do you have that you are suffering index bloat?
> The files for the two indices on a single table used 7.8GB of space
> before a reindex, and 4.4GB after.
That's not bloat ... that's pretty nearly in line with the norm
Tom Lane wrote:
...planner is actually going to choose based on the ultimate join cost,
not on the subplan costs...
In this explanation, the reason for the change in plans over time could
be a change in the statistics for the other table. Is "facets" more
dynamic than "point_features"?
In to
Ameet Kini wrote:
This didn't get through the first time around, so resending it again.
Sorry for any duplicate entries.
Hello,
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full) on all
of my tables.
Vivek Khera wrote:
On Dec 6, 2005, at 11:14 AM, Ameet Kini wrote:
need for vacuums. However, it'd be great if there were a similar
automatic reindex utility, like say, a pg_autoreindex daemon. Are there
any plans for this feature? If not, then would cron scripts be the next best
what evidence do you have that you are suffering index bloat?
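Until such a daemon exists, a cron entry wrapping REINDEX is the obvious
stopgap; the database, table, and schedule below are placeholders, and
REINDEX takes an exclusive lock, so it belongs in a quiet window:
30 2 * * * psql -d mydb -c 'REINDEX TABLE my_big_table'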
On Tue, Dec 06, 2005 at 10:52:57 +0100,
Csaba Nagy <[EMAIL PROTECTED]> wrote:
> Joost,
>
> Why do you use an offset here ? I guess you're traversing the table
> somehow, in this case it would be better to remember the last zipcode +
> housenumber and put an additional condition to get the next bigger than
> the last one you've got...
On Tue, Dec 06, 2005 at 11:08:07 +0200,
Assaf Yaari <[EMAIL PROTECTED]> wrote:
> Thanks Bruno,
>
> Issuing VACUUM FULL seems to have no influence on the time.
That was just to get the table size back down to something reasonable.
> I've added to my script VACUUM ANALYZE every 100 UPDATE's and run the
> test again (on a different record) and the time still increases.
On 6 Dec 2005, at 16:47, Joshua Kramer wrote:
Has anyone ever compared TSearch2 to Lucene, as far as performance
is concerned?
In our experience (small often-updated documents) Lucene leaves
tsearch2 in the dust. This probably has a lot to do with our usage
pattern though. For our usage it
Ron Mayer <[EMAIL PROTECTED]> writes:
> The "point_features" table is pretty dynamic and it's possible
> that the data changed between my 'explain analyze' statement in
> the first post in this thread. However since both of them
> show an estimate of "rows=948" and returned an actual of 917 I
> d
Rory Campbell-Lange wrote:
The server has a 250GB RAID10 (LSI 320-I + BBU) volume which I am
thinking of slicing up in the following way (Linux 2.6 kernel):
/ : ext3 : 47GB (root, home etc)
/boot : ext3 : 1GB
/tmp : ext2 : 2GB
/usr : ext3 : 4GB
/
Tom Lane wrote:
[EMAIL PROTECTED] writes:
On Mon, 5 Dec 2005, Tom Lane wrote:
I speculate that the seq_scan wasn't really the slow part
compared to not using both parts of the index in the
second part of the plan. The table point_features is tens of
thousands of rows, while the table
On Dec 6, 2005, at 11:14 AM, Ameet Kini wrote:
need for vacuums. However, it'd be great if there were a similar
automatic reindex utility, like say, a pg_autoreindex daemon. Are there
any plans for this feature? If not, then would cron scripts be the next best
what evidence do you have that you are suffering index bloat?
On Dec 6, 2005, at 2:04 PM, Anjan Dave wrote:
interestingly, it was experiencing 3x more context switches than the
Intel box (up to 100k, versus ~30k avg on the Dell). Both are RH4.0
I'll assume that's context switches per second... so for the opteron
that's 6540 cs's and for the Dell that's
Hello,
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full) on all
of my tables. However, it's noted in the docs (e.g.
http://developer.postgresql.org/docs/postgres/routine-reindex.html)
and on the lists here t
I ran a fairly exhaustive pgbench on 2 test machines I have (quad dual core
Intel and Opteron). Of course the Opteron was much faster, but
interestingly, it was experiencing 3x more context switches than the
Intel box (up to 100k, versus ~30k avg on the Dell). Both are RH4.0
64bit/PG8.1 64bit.
Sun (v40z):
On Dec 6, 2005, at 12:44 PM, Ameet Kini wrote:
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full) on all
of my tables. However, it's noted in the docs (e.g.
http://developer.postgresql.org/docs/postgres/routine-reindex.html)
Michael Riess wrote:
> Bruce Momjian schrieb:
> > Oleg Bartunov wrote:
> >> Folks,
> >>
> >> tsearch2 and Lucene are very different search engines, so it'd be an
> >> unfair comparison. If you need full access to metadata and instant
> >> indexing you'll probably find tsearch2 more suitable than Lucene.
Bruce Momjian wrote:
Oleg Bartunov wrote:
Folks,
tsearch2 and Lucene are very different search engines, so it'd be an unfair
comparison. If you need full access to metadata and instant indexing
you'll probably find tsearch2 more suitable than Lucene. But, if
you could live without those features
Ameet Kini <[EMAIL PROTECTED]> writes:
> I have a question on postgres's performance tuning, in particular, the
> vacuum and reindex commands. Currently I do a vacuum (without full) on all
> of my tables. However, it's noted in the docs (e.g.
> http://developer.postgresql.org/docs/postgres/routine-reindex.html)
Bruce Momjian writes:
> Oleg Bartunov wrote:
>> Tsearch2 integration into pgsql would be cool, but I see no problem with
>> using tsearch2 as an official extension module.
> Agreed. There isn't anything magical about a plug-in vs something
> integrated, at least in PostgreSQL.
The quality gap bet
On Tue, 2005-12-06 at 03:22, Kathy Lo wrote:
> Hi,
>
> In this program, it will access this database server using simple and
> complex (joining tables) SQL Select statements and retrieve the matched
> rows. For each access, it will connect to the database and disconnect.
>
> I found that the memory of the database server was nearly used up
This didn't get through the first time around, so resending it again.
Sorry for any duplicate entries.
Hello,
I have a question on postgres's performance tuning, in particular, the
vacuum and reindex commands. Currently I do a vacuum (without full) on all
of my tables. However, it's noted in the docs (e.g.
Oleg Bartunov wrote:
> Folks,
>
> tsearch2 and Lucene are very different search engines, so it'd be an unfair
> comparison. If you need full access to metadata and instant indexing
> you'll probably find tsearch2 more suitable than Lucene. But, if
> you could live without those features and need to
Folks,
tsearch2 and Lucene are very different search engines, so it'd be an unfair
comparison. If you need full access to metadata and instant indexing
you'll probably find tsearch2 more suitable than Lucene. But, if
you could live without those features and need to search read-only
archives you
Has anyone ever compared TSearch2 to Lucene, as far as performance is
concerned?
I'll stay away from TSearch2 until it is fully integrated in the
postgres core (like "create index foo_text on foo (texta, textb) USING
TSearch2"). Because a full integration is unlikely to happen in the near
f
[EMAIL PROTECTED] writes:
> On Mon, 5 Dec 2005, Tom Lane wrote:
>> (Note to self: it is a bit odd that fac_id=261 is pushed down to become
>> an indexqual in one case but not the other ...)
> I speculate that the seq_scan wasn't really the slow part
> compared to not using both parts of the
Greetings all,
I'm going to do a performance comparison with DocMgr and PG81/TSearch2 on
one end, and Apache Lucene on the other end.
In order to do this, I'm going to create a derivative of the
docmgr-autoimport script so that I can specify one file to import at a
time. I'll then create a
Hi,
After a few tests, the difference is explained by the
effective_cache_size parameter.
with effective_cache_size=1000 (default)
the planner chooses the following plan
postgres=# explain select count(*) from (select distinct on (val) *
from test) as foo;
QUERY PLAN
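For anyone reproducing this: effective_cache_size is measured in 8kB pages
and can be set per session, e.g. assuming roughly 1GB of OS cache:
postgres=# set effective_cache_size = 131072;  -- 131072 pages * 8kB = 1GB
postgres=# explain select count(*) from (select distinct on (val) *
from test) as foo;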
Ron wrote:
For accuracy's sake, which exact config did you finally use?
How did you choose the config you finally used? Did you test the three
options or just pick one?
(Note: I'm not the original poster.)
I just picked the option of putting the data/pg_xlog directory (WAL) on
a 2nd set o
Kathy Lo <[EMAIL PROTECTED]> writes:
> I found that the memory of the database server was nearly used up (total 2G RAM).
> After I stopped the program, the used memory was not freed.
I see no particular reason to believe that you are describing an actual
memory leak. More likely, you are just seeing the
> On Tue, 2005-12-06 at 11:32 +0100, Tino Wildenhain wrote:
> > You can have a row number in postgres easily too. For example if you
> > just include a serial for the row number.
> Not if the order of things is determined at runtime and not at insert
> time...
>
> > Cursor would work too but you would
At 04:43 AM 12/6/2005, Joost Kraaijeveld wrote:
Hi,
Is it possible to get this query run faster than it does now, by adding
indexes, changing the query?
SELECT customers.objectid FROM prototype.customers, prototype.addresses
WHERE
customers.contactaddress = addresses.objectid
ORDER BY zipCode asc, housenumber asc
On Tue, 2005-12-06 at 13:20, Joost Kraaijeveld wrote:
[snip]
> Ah, a misunderstanding: I only need to calculate an index if the user
> wants a record that is not in or adjacent to the cache (in which case I
> can do a "select values > last value in the cache". So I must always
> materialize all rows
On 12/6/2005 4:08 AM, Assaf Yaari wrote:
Thanks Bruno,
Issuing VACUUM FULL seems to have no influence on the time.
I've added to my script VACUUM ANALYZE every 100 UPDATE's and run the
test again (on a different record) and the time still increases.
I think he meant
- run VACUUM FULL once,
Joost Kraaijeveld wrote:
On Tue, 2005-12-06 at 12:36 +0100, Tino Wildenhain wrote:
I just tried it: a cursor is not faster (which does not surprise me at
all, as the amount of work looks the same to me)
Actually no, if you scroll forward, you just ask the database for the
next rows to materialize.
On Tue, 2005-12-06 at 12:36 +0100, Tino Wildenhain wrote:
> >
> > I just tried it: a cursor is not faster (which does not surprise me at
> > all, as the amount of work looks the same to me)
>
> Actually no, if you scroll forward, you just ask the database for the
> next rows to materialize. So if
On Tue, 6 Dec 2005, Steinar H. Gunderson wrote:
On Tue, Dec 06, 2005 at 01:40:47PM +0300, Olleg wrote:
I can't understand why "bigger is better". For instance, in a search by
index: the index points to a page, and I need to load the page to get one
row. Thus I load 8kb from disk for every row, and keep it then in
Joost Kraaijeveld wrote:
Hi Tino,
..
Cursor would work too but you would need to have a persistent connection.
I just tried it: a cursor is not faster (which does not surprise me at
all, as the amount of work looks the same to me)
Actually no, if you scroll forward, you just ask the dat
Hi Tino,
On Tue, 2005-12-06 at 11:32 +0100, Tino Wildenhain wrote:
> You can have a row number in postgres easily too. For example if you
> just include a serial for the row number.
Not if the order of things is determined at runtime and not at insert time...
> Cursor would work too but you would ne
On Tue, Dec 06, 2005 at 01:40:47PM +0300, Olleg wrote:
> I can't understand why "bigger is better". For instance, in a search by
> index: the index points to a page, and I need to load the page to get one
> row. Thus I load 8kb from disk for every row, and keep it then in cache.
> You recommend 64kb. With your rec
Ron wrote:
In general, and in a very fuzzy sense, "bigger is better". pg files are
laid down in 1GB chunks, so there's probably one limitation.
Hm, I'd expect results of tests on other platforms, but if there's a
theoretical dispute...
I can't understand why "bigger is better". For instance in a search b
Joost Kraaijeveld wrote:
On Tue, 2005-12-06 at 10:52 +0100, Csaba Nagy wrote:
Joost,
Why do you use an offset here ? I guess you're traversing the table
somehow, in this case it would be better to remember the last zipcode +
housenumber and put an additional condition to get the next bigger
On Tue, 2005-12-06 at 10:52 +0100, Csaba Nagy wrote:
> Joost,
>
> Why do you use an offset here ? I guess you're traversing the table
> somehow, in this case it would be better to remember the last zipcode +
> housenumber and put an additional condition to get the next bigger than
> the last one you've got...
Hi,
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On behalf
> of Joost Kraaijeveld
> Sent: Tuesday, 6 December 2005 10:44
> To: Pgsql-Performance
> Subject: [PERFORM] Can this query go faster???
> SELECT customers.objectid FROM prototype.custo
Joost,
Why do you use an offset here ? I guess you're traversing the table
somehow, in this case it would be better to remember the last zipcode +
housenumber and put an additional condition to get the next bigger than
the last one you've got... that would go for the index on
zipcode+housenumber a
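Concretely, the suggestion replaces the OFFSET with a condition on the last
row seen. A sketch, assuming zipCode and housenumber are columns of
addresses; :last_zip and :last_hnr stand for the remembered values, and the
expanded OR form is used because 8.x releases may not use an index for a
row-value comparison:
SELECT customers.objectid FROM prototype.customers, prototype.addresses
WHERE customers.contactaddress = addresses.objectid
AND (addresses.zipCode > :last_zip
     OR (addresses.zipCode = :last_zip AND addresses.housenumber > :last_hnr))
ORDER BY zipCode asc, housenumber asc
LIMIT 1;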
Hi,
Is it possible to get this query run faster than it does now, by adding
indexes, changing the query?
SELECT customers.objectid FROM prototype.customers, prototype.addresses
WHERE
customers.contactaddress = addresses.objectid
ORDER BY zipCode asc, housenumber asc
LIMIT 1 OFFSET 283745
Explain
Hi,
You might try these steps
1. Do a vacuum full analyze
2. Reindex the index on id column
3. Cluster the table based on this index
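Spelled out, with the table and index names as placeholders:
VACUUM FULL ANALYZE counters;
REINDEX INDEX counters_id_idx;
CLUSTER counters_id_idx ON counters;  -- pre-8.3 syntax; CLUSTER also rewrites
                                      -- the indexes, so the separate REINDEX
                                      -- may be redundant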
On 12/5/05, Assaf Yaari <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm using PostgreSQL 8.0.3 on Linux RedHat WS 3.0.
>
> My application updates counters in DB. I left
I need to slice up a web server's disk space to provide space for
postgres and storing binaries such as images and sound files. I'm
thinking of using logical volume management (LVM) to help manage the
amount of space I use between postgres and the data volumes.
The server has a 250GB RAID10 (LSI 320-I + BBU)
Hi,
I setup a database server using the following configuration.
Redhat 9.0
Postgresql 8.0.3
Then, I setup a client workstation to access this database server with
the following configuration.
Redhat 9.0
unixODBC 2.2.11
psqlodbc-08.01.0101
and wrote a C++ program to run database queries.
In thi
Thanks Bruno,
Issuing VACUUM FULL seems to have no influence on the time.
I've added to my script VACUUM ANALYZE every 100 UPDATE's and run the
test again (on a different record) and the time still increases.
Any other ideas?
Thanks,
Assaf.
> -Original Message-
> From: Bruno Wolff III [m
Jenny wrote:
I'm running PostgreSQL 8.0.3 on i686-pc-linux-gnu (Fedora Core 2). I've been
dealing with Psql for over 2 years now, but I've never had this case
before.
I have a table that has about 20 rows in it.
Table "public.s_apotik"
Column | Type
At 12:52 AM 12/6/2005, Thomas Harold wrote:
David Lang wrote:
in that case you logically have two disks, so see the post from Ron
earlier in this thread.
And it's a very nice performance gain. Percent spent waiting
according to "top" is down around 10-20% instead of 80-90%. While
I'm not pr