Hi,

Sorry, I didn't catch the original message, so I'm not sure if the original
poster mentioned the Postgres version that he's using.

I just thought that I'd contribute this observation.

I have a DB that takes several hours to restore under 7.1 but completes in
around
Vivek,
> Do I need a correspondingly large checkpoint timeout then? Or does
> that matter much?
Yes, you do.
> And does this advice apply if the pg_xlog is on the same RAID partition
> (mine currently is not, but perhaps will be in the future)
Not as much, but it's still a good idea to serialize the load.
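Since moving pg_xlog to its own disk keeps coming up in this thread, this is roughly what that looks like on the filesystem -- a minimal sketch only, assuming a PGDATA of /var/lib/pgsql/data and a second disk mounted at /mnt/disk2 (both hypothetical paths), and with the postmaster stopped first:

```shell
#!/bin/sh
# Relocate the WAL directory to a separate spindle, leaving a symlink
# behind so the server still finds it under $PGDATA/pg_xlog.
PGDATA=${PGDATA:-/var/lib/pgsql/data}      # hypothetical data directory
XLOG_NEW=${XLOG_NEW:-/mnt/disk2/pg_xlog}   # hypothetical fast second disk

if [ -d "$PGDATA/pg_xlog" ] && [ ! -L "$PGDATA/pg_xlog" ]; then
    mv "$PGDATA/pg_xlog" "$XLOG_NEW"       # move the WAL segments
    ln -s "$XLOG_NEW" "$PGDATA/pg_xlog"    # server-visible path unchanged
fi
```

The point of the symlink is that the server needs no configuration change; it keeps writing to $PGDATA/pg_xlog, which now lives on the other disk.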
Josh Berkus <[EMAIL PROTECTED]> writes:
> Not as much, but it's still a good idea to serialize the load. With too few
> segments, you get a pattern like:
> Fill up segments
> Write to database
> Recycle segments
> Fill up segments
> Write to database
> Recycle segments
> etc.
Actually I think
Vivek,
> The biggest improvement in speed to restore time I have discovered is
> to increase the checkpoint segments. I bump mine to about 50. And
> moving the pg_xlog to a separate physical disk helps a lot there, too.
Don't leave it at 50; if you have the space on your log array, bump it up t
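In postgresql.conf terms, the advice above amounts to something like this -- the 50 is from Vivek's message; the timeout value is an illustrative guess, not a number from the thread:

```
# postgresql.conf fragment -- illustrative values
checkpoint_segments = 50    # each segment is 16 MB of WAL; raise further
                            # if the log array has the space
checkpoint_timeout = 600    # seconds; raise along with checkpoint_segments
```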
> "RC" == Rodrigo Carvalhaes <[EMAIL PROTECTED]> writes:
RC> Hi!
RC> I am using PostgreSQL with proprietary ERP software in Brazil. The
RC> database has around 1,600 tables (each with +/- 50 columns).
RC> My problem now is the time that takes to restore a dump. My customer
RC> database
On Sun, 2004-12-05 at 21:43, Rodrigo Carvalhaes wrote:
> Hi !
>
> Thanks for all the tips I received on this matter.
>
...
> There is something more that I can try to improve this performance?
Check the speed of your IDE drive; maybe tweak some params with
/sbin/hdparm. Sometimes the def
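The usual hdparm incantations for that look like the following -- a sketch only: /dev/hda is a placeholder for your actual drive, the tuning flags need root, and the block skips quietly when hdparm or the device isn't present:

```shell
#!/bin/sh
DEV=${DEV:-/dev/hda}   # placeholder device name; substitute your drive
if command -v hdparm >/dev/null 2>&1 && [ -b "$DEV" ]; then
    hdparm -Tt "$DEV"          # benchmark cached vs. raw sequential reads
    hdparm -c1 -d1 -u1 "$DEV"  # 32-bit I/O, DMA, IRQ unmasking
fi
```

Run the -Tt benchmark before and after toggling the flags, so you can tell whether the change actually helped on your hardware.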
Hi !
Thanks for all the tips I received on this matter.
Some points:
1. I bumped sort_mem and vacuum_mem to 202800 (200 MB each) and the
performance was about the same; the total difference was 10 minutes.
2. I made the restore without the indexes and the total time was 3 hours,
so, I d
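For the record, pg_restore can do that no-index restore directly when the dump is in custom format, by filtering the archive's table of contents -- a sketch, with hypothetical file and database names:

```shell
#!/bin/sh
# Split a custom-format archive's TOC into a data pass and an index pass,
# so every index is built once, after its data is already loaded.
if command -v pg_restore >/dev/null 2>&1 && [ -f erp.dump ]; then
    pg_restore -l erp.dump > toc.full        # full table of contents
    grep -v ' INDEX ' toc.full > toc.noindex # everything except indexes
    grep    ' INDEX ' toc.full > toc.indexes # just the index builds
    pg_restore -L toc.noindex -d erp erp.dump
    pg_restore -L toc.indexes -d erp erp.dump
fi
```

Building each index in one pass over loaded data is generally much faster than maintaining it incrementally during the load.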
On Wed, 2004-12-01 at 09:16 -0200, Rodrigo Carvalhaes wrote:
>
> I am using PostgreSQL with proprietary ERP software in Brazil. The
> database has around 1,600 tables (each with +/- 50 columns).
...
> max_fsm_pages = 2
> max_fsm_relations = 1000
Hi,
I doubt that this will improve y
Rodrigo,
> Our machine is a Dell PowerEdge 1600sc server (Xeon 2.4 GHz, with 1 GB
> memory, 7200 RPM disk). I don't think that there is a machine problem,
> because it's a server dedicated to the database and the CPU utilization
> during the restore is around 30%.
In addition to Tom and Shridhar
--- Shridhar Daithankar <__> wrote:
> On Wednesday 01 Dec 2004 4:46 pm, Rodrigo Carvalhaes wrote:
> > I need to find a solution for this because I am convincing
> > customers that are using SQL Server, DB2 and Oracle to change to
> > PostgreSQL, but these customers have databases of 5GB!!! I am
Shridhar Daithankar <[EMAIL PROTECTED]> writes:
> On Wednesday 01 Dec 2004 4:46 pm, Rodrigo Carvalhaes wrote:
>> I need to find a solution for this because I am convincing customers
>> that are using SQL Server, DB2 and Oracle to change to PostgreSQL, but
>> these customers have databases of 5GB!!! I
On Wednesday 01 Dec 2004 4:46 pm, Rodrigo Carvalhaes wrote:
> I need to find a solution for this because I am convincing customers
> that are using SQL Server, DB2 and Oracle to change to PostgreSQL, but
> these customers have databases of 5GB!!! I am thinking that even with a
> better server, the re
Hi!
I am using PostgreSQL with proprietary ERP software in Brazil. The
database has around 1,600 tables (each with +/- 50 columns).
My problem now is the time it takes to restore a dump. My customer's
database is around 500 MB (on disk, not the dump file) and I am
making the dump wit
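Rodrigo's dump command is cut off above; for reference, a typical custom-format dump/restore pair looks like this (database and file names are hypothetical, and -Fc is what makes compression and the selective-restore tricks available later):

```shell
#!/bin/sh
# Hypothetical names throughout; skip quietly when the tools or the
# source database aren't present.
if command -v pg_dump >/dev/null 2>&1 && psql -l 2>/dev/null | grep -q erp; then
    pg_dump -Fc erp > erp.dump       # compressed custom-format archive
    pg_restore -d erp_new erp.dump   # restore into a new database
fi
```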