On Thu, Feb 19, 2009 at 10:02 PM, Tena Sakai wrote:
Hi Everybody,

I am running postgres v8.3.3 on Red Hat Linux (Dell hardware)
with 4 CPUs. This machine is terribly bogged down and I
would like a bit of help as to what can be done.

For the last 18 or so hours, there have been 24 queries running.
What's odd is that 21 of them are identical queries.
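For a situation like this, the 8.3 statistics views can show what those backends are doing. A minimal diagnostic sketch (assuming `track_activities` is on, which is the 8.3 default) groups the live backends by query text, so the 21 identical queries surface as one row, along with whether any of them are blocked on a lock:

```sql
-- Group current backends by query text to spot repeated queries
-- and see whether they are waiting on locks (PostgreSQL 8.3 columns).
SELECT current_query,
       count(*)         AS backends,
       bool_or(waiting) AS any_waiting,
       min(query_start) AS oldest_start
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
GROUP BY current_query
ORDER BY backends DESC;
```

If `any_waiting` is true for the repeated query, the next place to look is `pg_locks` to find which backend holds the conflicting lock.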
Hello,

I want to see statistics about the use of my PostgreSQL 8.3 installation on an
Ubuntu 8.04.1 (Hardy) server.

In my postgresql.conf I have the following configuration related to statistics:
"""
#---
---
# RUNTIME STATISTICS
#-
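With the statistics collector enabled (`track_counts = on`, the 8.3 default), per-table usage is exposed through the `pg_stat_*` views rather than through postgresql.conf itself. A minimal sketch of reading them (the 10-row limit is an arbitrary choice for illustration):

```sql
-- Show the most-used user tables: scan counts and row churn
-- come from the 8.3 statistics collector.
SELECT relname,
       seq_scan, idx_scan,
       n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY seq_scan + idx_scan DESC
LIMIT 10;
```

Database-wide counters are available in `pg_stat_database`, and per-index usage in `pg_stat_user_indexes`.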
On Thu, Feb 19, 2009 at 11:01 AM, Rafael Domiciano wrote:
I used to run VACUUM FULL on one of my databases, but for the past month I have
been running only VACUUM ANALYZE, and the number of necessary pages is
increasing every day; it is now at 311264. Is there any problem with it
increasing like this?

When I ran REINDEX a few days ago, this num
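One way to watch where that growth is going (a sketch against the standard 8.3 catalogs; the 10-row limit is arbitrary) is to compare each table's on-disk pages with its dead-tuple count over time. Tables whose `relpages` keeps climbing while `n_dead_tup` stays high are candidates for a more aggressive vacuum or a higher `max_fsm_pages` setting:

```sql
-- Largest tables with their dead-tuple counts: steady growth here
-- despite plain VACUUM suggests the free space map is too small.
SELECT c.relname,
       c.relpages,
       s.n_dead_tup
FROM pg_class c
JOIN pg_stat_user_tables s ON s.relid = c.oid
WHERE c.relkind = 'r'
ORDER BY c.relpages DESC
LIMIT 10;
```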
On Thu, Feb 19, 2009 at 9:35 AM, Jessica Richard wrote:
I am running "vacuum full" via a shell script for a list of large databases
now, and I may run out of my scheduled system downtime.

If I don't finish all the databases and kill the script in the middle, am I
going to cause any table corruption, since "vacuum full" is rebuilding the
tables?
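A sketch of the underlying behavior: VACUUM FULL processes one table at a time inside a transaction, so interrupting the script rolls back only the table in progress and leaves already-finished tables fully vacuumed; no corruption results. Running it table by table bounds what is lost to the table in flight (the table names below are hypothetical placeholders):

```sql
-- Each statement commits independently; killing the session mid-run
-- only rolls back the VACUUM FULL currently executing.
VACUUM FULL VERBOSE public.big_table_1;
VACUUM FULL VERBOSE public.big_table_2;
VACUUM FULL VERBOSE public.big_table_3;
```

The same property holds for a shell script that calls `vacuumdb --full` once per database: each completed database stays done when the script is killed.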
Hi, Scott

Slony is good software overall; we are using it now to replicate live data to a
dedicated report server and to a data center, and it works very well.
But when we have downtime such as a hardware crash, the time needed to bring
the slave node up, even when it is less than 1 minute, is too long for the co