Hi,
We have been running PostgreSQL 7.3.4 on a 64-bit Mac OS X G5 with dual
processors and 8GB of RAM for a while.
Lately, we have noticed that consistently only about 4GB of RAM is used,
even when the CPUs are maxed out
by postgres processes and pageouts start to happen. Here is a
portion of the output f
Hi, there,
I am running PostgreSQL 7.3.4 on a Mac OS X G5 with dual processors and
8GB of memory. The shared buffer was set to 512MB.
The database has been running great until about 10 days ago when our
developers decided to add some indexes to some tables to speed up
certain uploading ops.
Now the CPU
Josh:
Sorry for replying under the existing subject line!
The newly added indexes have made all other queries much slower except
the uploading ops.
As a result, all the CPUs are running like crazy but not much is getting
done, and our Application
Server waits for a certain time and then times out. Customer
Hi, there,
I am running PostgreSQL 7.3.4 on a Mac OS X G5 with dual processors and
8GB of memory. The shared buffer was set to 512MB.
The database has been running great until about 10 days ago when our
developers decided to add some indexes to some tables to speed up
certain uploading ops.
Now the CPU
Hello,
I have recently configured my PG7.3 on a G5 (8GB RAM) with
shmmax set to 512MB and shared_buffer=5, sort_mem=4096
and effective cache size = 1. It seems to be working great so far, but
I am wondering if I should make effective cache size larger myself.
Thanks!
Qing
On Apr 21, 2004, at 9:29 A
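For reference, a minimal sketch of how those settings look in postgresql.conf under 7.3/7.4, where shared_buffers and effective_cache_size are counted in 8 kB pages and sort_mem in kB; the values below are illustrative only, not the figures from this thread:

    shared_buffers = 32768          # 32768 x 8 kB = 256 MB, must fit under SHMMAX
    sort_mem = 4096                 # 4 MB of memory per sort operation
    effective_cache_size = 524288   # 524288 x 8 kB = 4 GB, a hint about the OS disk cache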
Tom:
I used sysctl -A to see the kernel state, and I got:
kern.sysv.shmmax: -1
It looks like the value is too big!
Thanks!
Qing
On Apr 13, 2004, at 12:55 PM, Tom Lane wrote:
Qing Zhao <[EMAIL PROTECTED]> writes:
My suspicion is that the change I made in /etc/rc does not take
effect. Is there a
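A plausible reading, assuming this OS release stores kern.sysv.shmmax as a signed 32-bit integer: 4294967296 is exactly 2^32, so it overflows and reads back as -1, meaning the /etc/rc change did take effect but the value itself is out of range. A value that fits in 31 bits, written into /etc/rc the same way, should read back unchanged (the figure below is just an example):

    sysctl -w kern.sysv.shmmax=536870912   # 512 MB, fits in a signed 32-bit int
    sysctl kern.sysv.shmmax                # should now report 536870912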
Hi, all,
I have got a new Mac OS X G5 with 8GB RAM, so I tried to increase
the shmmax in the kernel so that I can take advantage of the RAM.
I searched the web and read the manual for PG7.4, chapter 16.5.1.
After that, I edited the /etc/rc file:
sysctl -w kern.sysv.shmmax=4294967296   # bytes
sysctl -w kern.sy
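As an aside, /etc/rc edits of this kind usually set the whole SysV shared-memory family together, along the lines the manual section above discusses; a sketch with illustrative values only (not the poster's):

    sysctl -w kern.sysv.shmmax=536870912   # max segment size in bytes (512 MB)
    sysctl -w kern.sysv.shmmin=1           # minimum segment size
    sysctl -w kern.sysv.shmmni=32          # max number of segments system-wide
    sysctl -w kern.sysv.shmseg=8           # max segments per process
    sysctl -w kern.sysv.shmall=131072      # total shared memory, in 4 kB pages (512 MB)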
d Max connections=32, it gives me an error when I try
to start PG using
pg_ctl start as postgres. It keeps saying this is bigger than the system
shared memory. So finally
I started PG using SystemStarter start PostgreSQL and it seems to start
OK. Any idea?
Thanks a lot!
Qing
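As a rough check of why the startup request can exceed SHMMAX even with only 32 connections: if shared_buffers is the 512 MB (65536 buffers) mentioned elsewhere in these threads, then using the approximate figures from the 7.4 manual's shared-memory table (about 8.2 kB per buffer and 14.2 kB per connection, treated here as ballpark assumptions):

    65536 buffers x 8.2 kB   ~ 537,000 kB  (about 525 MB, already above a 512 MB SHMMAX)
    32 connections x 14.2 kB ~     455 kB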
Thanks a lot! We have been migrating to Postgres from Oracle, and
every now and then we run into something that we do not
understand completely; it is a learning process for us.
Your responses have made things much clearer for us. BTW, do you
think that it's better for us just to rewrite everything so we
It is 7.3.4 on Mac OS X (Darwin). The patch we applied is hier-Pg7.3-0.5, which allows
us to perform hierarchical queries on PgSQL using Oracle's syntax.
Thanks!
Qing
On Mar 25, 2004, at 2:57 PM, Stephan Szabo wrote:
On Thu, 25 Mar 2004, Qing Zhao wrote:
select
_level_ as l,
ne
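For readers who have not seen the patch, a sketch of the Oracle-style form it enables, assuming it follows Oracle's START WITH / CONNECT BY syntax as described above; the table and column names are invented for illustration:

    SELECT _level_ AS l,          -- depth pseudo-column exposed by the patch
           id, parent_id, name
    FROM   parts                  -- hypothetical table with a self-referencing parent_id
    START WITH parent_id IS NULL  -- roots of the hierarchy
    CONNECT BY PRIOR id = parent_id;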
Tom,
Thanks for your help!
It's not through one client. I am using JDBC. But the same things
happen when I use a client like psql.
Qing
On Mar 25, 2004, at 10:20 AM, Tom Lane wrote:
Qing Zhao <[EMAIL PROTECTED]> writes:
I have a query which gets data from a single table.
When I
I have a query which gets data from a single table.
When I try to get data for an RFQ which has around 5000 rows, it is breaking off at the 18th row.
If I reduce some columns, then it returns all the rows and is not so slow.
I have tried with different sets of columns and there is no pattern base
I am new here. I have a question related to this in some way.
Our web site needs to upload a large volume of data into Postgres at a
time. The performance deteriorates as the number of rows becomes larger.
When it reaches 2500 rows, it never comes back to the GUI. Since the tests
were run through the GUI, m
13 matches