Vacuum memory usage is tuned by the maintenance_work_mem parameter. I
suggest you look at
http://www.postgresql.org/docs/8.2/static/runtime-config-resource.html and
http://www.postgresql.org/docs/8.2/static/kernel-resources.html#AEN19338.
Thanks Sander, I've read so many of these pages
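(For reference, not from the thread itself: maintenance_work_mem can also be lowered for just one session before running the vacuum, which avoids touching postgresql.conf. A minimal sketch; the table name is a placeholder.)
-- check the current value, lower it for this session only, then vacuum
show maintenance_work_mem;
set maintenance_work_mem = '16MB';
vacuum analyze mytable;   -- "mytable" is a placeholder name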
On 8/14/07, Sander Steffann [EMAIL PROTECTED] wrote:
Hi Lim,
It might also be in /etc/security/limits.conf.
Thanks. I see these two lines in that file:
postgres soft nofile 8192
postgres hard nofile 8192
How should I change these values? I am not sure how
If this is only a PostgreSQL database server, don't limit the postgres user.
Don't tweak these limits unless you know exactly what you are doing.
Unfortunately, it is not. It runs other applications as well, including Apache
and so on. I tried not setting the ulimits at all, but they seem to be
required
The DEFAULT value of a column in a table definition cannot be generated by
passing another column's value through my own function. So I am trying to do
this with a rule, as follows. The name of my function in this example is
MYFUNCTION.
drop table test cascade;
create table test (id
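(For illustration only, not the poster's code: the more common alternative to a rule here is a BEFORE INSERT trigger. A minimal sketch, assuming a source column "src" and a derived column "derived", both hypothetical; myfunction() stands for the poster's MYFUNCTION, and plpgsql must already be installed in the database.)
create table test (
    id      serial primary key,
    src     text,
    derived text    -- filled from src by the trigger below
);

create or replace function test_fill_derived() returns trigger as $$
begin
    new.derived := myfunction(new.src);   -- placeholder for the poster's own function
    return new;
end;
$$ language plpgsql;

create trigger test_fill_derived
    before insert on test
    for each row execute procedure test_fill_derived();
The trigger runs for each inserted row, so the derived value is computed from the row actually being inserted, which a plain column DEFAULT cannot see.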
On 8/13/07, John Coulthard [EMAIL PROTECTED] wrote:
The part of the PHP code for the connection is:
$dbconn = pg_connect("dbname=lumbribase host=localhost port=5432
user=postgres password=$PG_PASS");
if ( ! $dbconn ) {
    echo "Error connecting to the database !<br>";
    printf("%s",
On 8/13/07, Gregory Stark [EMAIL PROTECTED] wrote:
Lim Berger [EMAIL PROTECTED] writes:
Hi
I am getting the following error while running queries such as vacuum
analyze TABLE, even on small tables with a piddly 35,000 rows!
The error message:
--
ERROR: out of memory
DETAIL
On 8/13/07, Tom Lane [EMAIL PROTECTED] wrote:
Lim Berger [EMAIL PROTECTED] writes:
ERROR: out of memory
DETAIL: Failed on request of size 67108860.
Apparently, this number:
maintenance_work_mem = 64MB
is more than your system can actually support. Which is a bit odd for
any modern
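(The arithmetic behind Tom's point: 64MB is 67,108,864 bytes, so the failed request of 67,108,860 bytes is, to within a few bytes, a single allocation of the entire maintenance_work_mem setting.)
select 64 * 1024 * 1024 as bytes_in_64mb;   -- 67108864, vs. the failed request of 67108860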
On 8/14/07, Lim Berger [EMAIL PROTECTED] wrote:
On 8/14/07, Alvaro Herrera [EMAIL PROTECTED] wrote:
Lim Berger escribió:
Thanks. I did su postgres and ran the ulimit command again. All
values are the same, except for open files which is double in the
case of this user (instead
On 8/14/07, Sander Steffann [EMAIL PROTECTED] wrote:
Hi Lim,
Lim Berger [EMAIL PROTECTED] writes:
Wow, you are right! Running su - postgres showed wildly
different values! Most notably, max user processes is only 20!!
Whereas for the regular user it was above 14000. Would
Hi,
I've googled and yahooed, and most of the performance tweaks suggested
cover SELECT speed; some cover COPY speed with things like turning
fsync off and such. But I still have not found how to improve regular
INSERT speed on PostgreSQL.
I have a table in MySQL with three compound indexes. I
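(Not from the thread: one standard way to speed up many single-row INSERTs, without touching fsync, is to batch them inside one explicit transaction so they share a single commit flush. A minimal sketch with placeholder table and column names.)
begin;
insert into cache_table (id, col1, col2) values (1, 'a', 'x');
insert into cache_table (id, col1, col2) values (2, 'b', 'y');
-- ... many more rows ...
commit;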
On 8/14/07, Andrej Ricnik-Bay [EMAIL PROTECTED] wrote:
On 8/14/07, Lim Berger [EMAIL PROTECTED] wrote:
INSERTing into MySQL takes 0.0001 seconds per insert query.
INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
What can I do to improve this performance? What
On 8/14/07, Tom Lane [EMAIL PROTECTED] wrote:
Lim Berger [EMAIL PROTECTED] writes:
I have a table in MySQL with three compound indexes. I have only three
columns from this table also in PostgreSQL, which serves as a cache of
sorts for fast queries, and this table has only ONE main index
On 8/14/07, Lim Berger [EMAIL PROTECTED] wrote:
On 8/14/07, Tom Lane [EMAIL PROTECTED] wrote:
Lim Berger [EMAIL PROTECTED] writes:
I have a table in MySQL with three compound indexes. I have only three
columns from this table also in PostgreSQL, which serves as a cache of
sorts