Thanks everyone. Sorry for the late reply.
Do you have indexes on all the referencing columns?
I had thought so, but it turns out no, and this appears to be the main
cause of the slowness. After adding a couple of extra indexes on the bigger
tables, things are going much more smoothly.
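For anyone hitting the same thing: ON UPDATE CASCADE has to find every
referencing row, so each referencing column needs its own index. A minimal
sketch, with an assumed child table and column (orders.user_id is
illustrative, not a name from this thread):

    -- Index the referencing side of the foreign key so cascaded key changes
    -- don't need a sequential scan of the child table for every parent row.
    CREATE INDEX CONCURRENTLY orders_user_id_idx ON orders (user_id);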
On Mon, Jun 26, 2017 at 07:26:08PM -0700, Joshua D. Drake wrote:
> Alternatively, and ONLY do this if you take a backup right beforehand, you
> can set the table unlogged, make the changes and, assuming success, make the
> table logged again. That will greatly increase the write speed and reduce WAL
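A minimal sketch of that unlogged-table trick (PostgreSQL 9.5 or later, and
only with a fresh backup in hand, since an unlogged table is truncated after
a crash):

    ALTER TABLE users SET UNLOGGED;   -- stop WAL-logging writes to this table
    -- ... run the bulk key updates here ...
    ALTER TABLE users SET LOGGED;     -- rewrites the table and re-enables WAL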
On 06/26/2017 06:29 PM, Andrew Sullivan wrote:
On Tue, Jun 27, 2017 at 10:17:49AM +1200, Craig de Stigter wrote:
We're doing a large migration on our site which involves changing most of
the primary key values. We've noticed this is a *very* slow process.
You can make it faster through a num
On Tue, Jun 27, 2017 at 10:17:49AM +1200, Craig de Stigter wrote:
> We're doing a large migration on our site which involves changing most of
> the primary key values. We've noticed this is a *very* slow process.
Indeed.
Does the database need to be online when this is happening?
If it were me,
Craig de Stigter writes:
> We're doing a large migration on our site which involves changing most of
> the primary key values. We've noticed this is a *very* slow process.
> Firstly we've set up all the foreign keys to use `on update cascade`. Then
> we essentially do this on every table:
> UPDA
Hi folks
We're doing a large migration on our site which involves changing most of
the primary key values. We've noticed this is a *very* slow process.
Firstly we've set up all the foreign keys to use `on update cascade`. Then
we essentially do this on every table:
UPDATE users SET id = id
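A hedged sketch of the pattern being described, with an assumed child table
(the orders/user_id names and the offset are illustrative, not details from
the thread):

    -- Make the foreign key cascade key changes to the referencing rows.
    ALTER TABLE orders
        DROP CONSTRAINT orders_user_id_fkey,
        ADD CONSTRAINT orders_user_id_fkey
            FOREIGN KEY (user_id) REFERENCES users (id) ON UPDATE CASCADE;

    -- Then rewrite the primary key values; the cascade fixes orders.user_id.
    UPDATE users SET id = id + 1000000;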
Randy Johnson wrote:
> in the config file for 7.4 we have an entry:
>
> shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each
>
> in 9.1 the default is:
>
> shared_buffers = 32MB
>
>
> max_connections is the default 100
>
> Do I need to make any adjustments or can I leave it at
Hello,
in the config file for 7.4 we have an entry:
shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each
in 9.1 the default is:
shared_buffers = 32MB
max_connections is the default 100
Do I need to make any adjustments or can I leave it at the default?
The machine is dedicat
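For anyone comparing the two settings: the 7.4-era value is a count of 8 kB
buffers, so the old line works out to roughly 8 MB, while newer versions take
memory units directly. An illustrative side-by-side, not a recommendation for
this machine:

    shared_buffers = 1000      # 7.4 style: 1000 buffers x 8 kB = ~8 MB
    shared_buffers = 32MB      # 9.1 style: sized directly in memory units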
On 28 May 2010, at 15:40, Tom Wilcox wrote:
> Hi,
>
> I am fighting with Postgres on a 64-bit Windows (Server 2008) machine with
> 96GB trying to get it to use as much memory as possible (I am the only user
> and I am running complex queries on large tables). [See my previous thread
> for deta
Hi Stephen,
Thanks for the response. Unfortunately, we are somewhat tied to a
Windows platform, and I would expect us to switch to SQL Server sooner
than move to Linux/Unix/BSD. Although (in complete contrast to what I
just said), I am toying with the idea of a dual boot or
virtuali
* Tom Wilcox (hungry...@googlemail.com) wrote:
> Can anyone tell me what might be going on and how I can fix it so that
> postgres uses as much memory and processing power as poss... in a stable
> manner?
I realize this probably isn't the answer you're looking for, and
hopefully someone can come u
Hi,
I am fighting with Postgres on a 64-bit Windows (Server 2008) machine with
96GB trying to get it to use as much memory as possible (I am the only user
and I am running complex queries on large tables). [See my previous thread
for details "Out of Memory and Configuration Problems (Big Computer)
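A rough sketch only, with illustrative numbers rather than settings from this
thread: large shared_buffers values have historically helped less on Windows
than on Unix, so one common approach is to keep shared_buffers modest and let
the planner know about the big OS cache instead:

    shared_buffers = 512MB          # Windows builds tend not to benefit from multi-GB values
    effective_cache_size = 64GB     # planner hint: most of the 96 GB ends up as OS cache
    work_mem = 256MB                # per-sort memory; only safe-ish because there is one user
    maintenance_work_mem = 2GB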
BuyAndRead Test wrote:
This is a virtual server, so I could give it as much as 8 GB of memory if
this will give much higher performance. What should shared_buffers be set to
if I use 8 GB, as much as 4 GB?
John R Pierce wrote:
I'd keep it around 1-2GB shared_buffers, and let the rest of the m
BuyAndRead Test wrote:
This is a virtual server, so I could give it as much as 8 GB of memory if
this will give much higher performance. What should shared_buffers be set to
if I use 8 GB, as much as 4 GB?
I'd keep it around 1-2GB shared_buffers, and let the rest of the memory
be used as f
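Translated into postgresql.conf terms, that suggestion would look roughly like
this (a sketch of the advice above, not tested settings):

    shared_buffers = 2GB            # upper end of the 1-2 GB suggestion
    effective_cache_size = 6GB      # the rest of the 8 GB, assumed to be OS filesystem cache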
> Subject: Re: [GENERAL] Config help
>
> On Sun, Nov 15, 2009 at 2:43 PM, BuyAndRead Test wrote:
> > Hi
> >
> > I need some help with our postgresql.conf file. I would appreciate if
> > someone could look at the values and tell me if it looks alright or if
On Sun, Nov 15, 2009 at 2:43 PM, BuyAndRead Test wrote:
> Hi
>
> I need some help with our postgresql.conf file. I would appreciate if
> someone could look at the values and tell me if it looks alright or if I
> need to change anything.
>
> The db server has 4 GB of memory and one quad core CPU (2
Hi
I need some help with our postgresql.conf file. I would appreciate if
someone could look at the values and tell me if it looks alright or if I
need to change anything.
The db server has 4 GB of memory and one quad core CPU (2.53 GHz).
The hard drives are on an iSCSI array and are configured as f
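When posting a config for review like this, it can help to list only the
settings that differ from the built-in defaults; a simple query against the
pg_settings view does that:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE source <> 'default'
    ORDER BY name;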
Thanks, we're at 128 now but I'll see how bumping that up goes.
On Nov 28, 2007, at 9:46 AM, Vivek Khera wrote:
On Nov 27, 2007, at 3:30 PM, Erik Jones wrote:
I'm just wondering what is considered the general wisdom on config
setting for large pg_restore runs. I know to increase
maintena
On Nov 27, 2007, at 3:30 PM, Erik Jones wrote:
I'm just wondering what is considered the general wisdom on config
setting for large pg_restore runs. I know to increase
maintenance_work_mem and turn off autovacuum and stats collection.
Should checkpoint_segments and checkpoint_time
On Tue, 27 Nov 2007, Erik Jones wrote:
> I'm just wondering what is considered the general wisdom on config setting
> for large pg_restore runs.
I think the first thing you can do is to set "fsync=off" temporarily. But
do remember to turn it back on when you're done restoring.
Regards
Tometzky
--
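For what it's worth, that is a one-line, temporary change in postgresql.conf,
followed by a reload; revert it as soon as the restore finishes, because
running with fsync off can corrupt the whole cluster on a crash:

    fsync = off        # restore-time only; never leave this on in production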
Hi,
I'm just wondering what is considered the general wisdom on config
setting for large pg_restore runs. I know to increase
maintenance_work_mem and turn off autovacuum and stats collection.
Should checkpoint_segments and checkpoint_timeout be
increased? Would twiddling shared_
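To make the settings under discussion concrete, a hedged sketch of typical
restore-time overrides (values are illustrative; checkpoint_segments applies
to the 8.x-era releases in this thread):

    maintenance_work_mem = 1GB      # faster index builds during the restore
    autovacuum = off                # re-enable and ANALYZE afterwards
    checkpoint_segments = 64        # fewer, larger checkpoints while loading
    checkpoint_timeout = 30min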
Thanks to everyone for giving me a starting point.
Here's what I tried so far:
changed the CFLAGS in the src/template/freebsd file to:
CFLAGS='-O0 -pipe'
did
./configure --with-template=freebsd
configure succeeded.
Did a make and started to build. During the build, there were a ton of messag
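For anyone retracing those steps, they amount to roughly the following against
a 7.0-era source tree (the template files and the --with-template option belong
to that old build system):

    # Edit src/template/freebsd so it contains:
    #   CFLAGS='-O0 -pipe'
    ./configure --with-template=freebsd
    make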
Michael Engelhart <[EMAIL PROTECTED]> writes:
> Thanks Adam. Yeah, I know that it uses a mach kernel and variant of
> freebsd runs atop the kernel. I would attempt the FreeBSD template
> but the other snag is that it has to compile on PowerPC.
FreeBSD template seems like it'd be a good starting
Hi,
I just got the Mac OS X public beta running on my home computer and want to compile
PostgreSQL for it but don't know where to start. I have installed PostgreSQL on Linux
boxes but they always just work because there are configs for them. Since v7.0.2
doesn't know about Mac OS X I'm assu