Re: [PERFORM] tsearch2/GIST performance factors?

2005-10-18 Thread Craig A. James
Oleg wrote:
> Did you consider *decreasing* SIGLENINT? Size of index will diminish and performance could be increased. I use in current project SIGLENINT=15

The default value for SIGLENINT actually didn't work at all. It was only by increasing it that I got any performance at all. An
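For context, SIGLENINT is a compile-time constant in contrib/tsearch2's headers (the exact file varies by version), so changing it means rebuilding the module and recreating the index. A sketch of the knob Oleg describes, with illustrative commentary:

```c
/* Illustrative fragment only -- the constant lives in contrib/tsearch2's
 * source (check your version's header file for the exact location).
 * SIGLENINT is the GiST signature length in 32-bit words, so
 * SIGLENINT=15 gives 15 * 32 = 480-bit signatures. Smaller signatures
 * make a smaller, shallower index, at the cost of more hash collisions
 * and therefore more heap rechecks on lookup.
 */
#define SIGLENINT  15
```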

Re: [PERFORM] Sequential scan on FK join

2005-10-18 Thread Richard Huxton
Martin Nickel wrote: When I turn off seqscan it does use the index - and it runs 20 to 30% longer. Based on that, the planner is correctly choosing a sequential scan - but that's just hard for me to comprehend. I'm joining on an int4 key, 2048 per index page - I guess that's a lot of reads -
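The planner's choice can be made concrete with a back-of-envelope page count (the numbers below are made up for illustration; this is not the planner's actual cost model):

```python
# Rough page-fetch estimate for a join that touches many rows: an index
# scan pays for index pages *plus* scattered heap pages (worst case, one
# heap fetch per matching row), while a sequential scan reads every heap
# page exactly once, in order. All figures are hypothetical.

rows_matched = 1_000_000      # rows the join must fetch (illustrative)
keys_per_index_page = 2048    # int4 keys per index page, from the post
table_heap_pages = 100_000    # total heap pages in the table (illustrative)

# Index scan: index leaf pages, plus up to one heap page per row because
# matching rows are scattered around the heap.
index_scan_pages = rows_matched // keys_per_index_page + rows_matched

# Sequential scan: read the whole heap once.
seq_scan_pages = table_heap_pages

print(index_scan_pages, seq_scan_pages)
```

On top of the raw page counts, sequential pages are cheaper to read than random ones, which is why the planner can be right even when the index path "uses the index".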

Re: [PERFORM] Help tuning postgres

2005-10-18 Thread Robert Treat
Reindex should be faster, since you're not dumping/reloading the table contents on top of rebuilding the index; you're just rebuilding the index.

Robert Treat
emdeon Practice Services
Alachua, Florida

On Wed, 2005-10-12 at 13:32, Steve Poe wrote:
> Would it not be faster to do a dump/reload
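The difference Robert describes is visible in the commands themselves (table and index names below are placeholders):

```sql
-- Rebuild only the indexes; the table heap is left untouched.
REINDEX TABLE some_table;          -- every index on the table
REINDEX INDEX some_table_pkey;     -- or a single index

-- versus dump/reload, which rewrites the heap *and* then rebuilds the
-- indexes from scratch as part of the restore:
--   pg_dump -t some_table mydb > t.sql
--   (drop/recreate the table, then restore t.sql)
```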

Re: [PERFORM] Help tuning postgres

2005-10-18 Thread Csaba Nagy
In the light of what you've explained below about nonremovable row versions reported by vacuum, I wonder if I should worry about the following type of report:

INFO:  vacuuming public.some_table
INFO:  some_table: removed 29598 row versions in 452 pages
DETAIL:  CPU 0.01s/0.04u sec elapsed 18.77

Re: [PERFORM] Help tuning postgres

2005-10-18 Thread Csaba Nagy
First of all thanks all for the input. I probably can't afford even the reindex till Christmas, when we have about 2 weeks of company holiday... but I guess I'll have to do something until Christmas. The system should at least look like working all the time. I can have downtime, but only for

Re: [PERFORM] Help tuning postgres

2005-10-18 Thread Andrew Sullivan
On Tue, Oct 18, 2005 at 05:21:37PM +0200, Csaba Nagy wrote:
> INFO:  vacuuming public.some_table
> INFO:  some_table: removed 29598 row versions in 452 pages
> DETAIL:  CPU 0.01s/0.04u sec elapsed 18.77 sec.
> INFO:  some_table: found 29598 removable, 39684 nonremovable row versions in 851 pages
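A common cause of nonremovable row versions is an old transaction still holding a snapshot. On servers of that era, a query along these lines (column names are from the 8.0-era pg_stat_activity and vary by version; later releases renamed procpid and current_query) helps spot long-running backends:

```sql
-- List backends ordered by how long their current query has been
-- running; idle-in-transaction sessions show up as "<IDLE> in
-- transaction" in current_query on 8.0-era servers.
SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;
```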

[PERFORM] Inefficient escape codes.

2005-10-18 Thread Rodrigo Madera
Hello there, This is my first post in the list. I have a deep low-level background in computer programming, but I am a total newbie to SQL databases. I am using postgres because of its commercial license. My problem is with storing large values. I have a database that stores large amounts of

Re: [PERFORM] Inefficient escape codes.

2005-10-18 Thread Michael Fuhr
On Tue, Oct 18, 2005 at 06:07:12PM +, Rodrigo Madera wrote:
> 1) Is there any way for me to send the binary field directly without needing escape codes?

In 7.4 and later the client/server protocol supports binary data transfer. If you're programming with libpq you can use PQexecParams() to
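The overhead the escape codes add is easy to quantify with a toy model of the old textual bytea encoding (simplified here: non-printable bytes and backslashes become a five-character double-backslash octal sequence inside the SQL literal; the real escaping rules are slightly more involved):

```python
def escaped_len(data: bytes) -> int:
    """Length of a *simplified* textual-bytea encoding of `data`.

    Toy model of PostgreSQL's pre-binary-protocol escape format:
    printable ASCII passes through as one character; everything else,
    and the backslash itself, becomes a 5-character escape like \\123
    in the quoted SQL literal. Illustration only, not the exact rules.
    """
    n = 0
    for b in data:
        if 32 <= b <= 126 and b != 0x5C:   # printable ASCII, not backslash
            n += 1
        else:
            n += 5                          # \\ooo in the literal
    return n

blob = bytes(range(256))                    # near-worst case: all byte values
print(len(blob), escaped_len(blob))         # 256 raw vs ~3.5x inflated text
```

With binary transfer (PQexecParams() with the parameter's format set to binary), the same 256 bytes cross the wire as 256 bytes, with no escaping or unescaping on either end.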

Re: [PERFORM] Inefficient escape codes.

2005-10-18 Thread Michael Fuhr
[Please copy the mailing list on replies so others can participate in and learn from the discussion.]

On Tue, Oct 18, 2005 at 07:09:08PM +, Rodrigo Madera wrote:
> > What language and API are you using?
> I'm using libpqxx. A nice STL-style library for C++ (I am 101% C++).

I've only dabbled