Your biggest benefit was dropping the indexes before the load, most likely.
-Original Message-
From: Jonathan Hoover [mailto:jhoo...@yahoo-inc.com]
Sent: Friday, 5 November 2010 2:16 PM
To: Samuel Stearns; pgsql-admin@postgresql.org
Cc: Kenneth Marshall
Subject: RE: [ADMIN] Disk Performance
I am in the middle of re-creating the indexes now, and what is interesting is
that atop is no longer reporting heavy use of the hard drive. Instead, I see
postgres using 80% of the CPU (instead of 8% earlier), and drive usage is now
20+ MB/s read and 16+ MB/s write (instead of 0.1 and 3.0 respectively earlier).
Yep!
Coming from a previous Oracle job into Postgres, discovering the transactional
DDL is indeed pretty cool.
-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com]
Sent: Friday, 5 November 2010 2:12 PM
To: Samuel Stearns
Cc: Jonathan Hoover; pgsql-admin@postgresql.org
On Thu, Nov 4, 2010 at 9:33 PM, Samuel Stearns
wrote:
> TRUNCATE doesn't generate any rollback data, which makes it lightning fast.
> It just de-allocates the data pages used by the table.
Also, TRUNCATE, like nearly everything in pgsql, can be rolled back. I
still remember showing my oracle co-workers ...
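A minimal sketch of that point, using a hypothetical `events` table (the table name is an assumption, not from the thread):

```sql
-- In PostgreSQL, TRUNCATE is transactional: inside an explicit
-- transaction it can be rolled back (in Oracle it implicitly commits).
BEGIN;
TRUNCATE TABLE events;
SELECT count(*) FROM events;   -- 0 while inside the transaction
ROLLBACK;
SELECT count(*) FROM events;   -- original row count is back
```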
Actually, this is a better definition of TRUNCATE than my previous post. From
the doco:
TRUNCATE quickly removes all rows from a set of tables. It has the same effect
as an unqualified DELETE on each table, but since it does not actually scan the
tables it is faster. Furthermore, it reclaims disk space immediately, rather
than requiring a subsequent VACUUM operation.
How does TRUNCATE differ from DELETE FROM? Sorry, probably an easy RTFM
question, but I'll ask anyhow.
-Original Message-
From: Samuel Stearns [mailto:sstea...@internode.com.au]
Sent: Thursday, November 04, 2010 10:27 PM
To: Jonathan Hoover; pgsql-admin@postgresql.org
Cc: Kenneth Marshall
On Thu, Nov 4, 2010 at 9:03 PM, Jonathan Hoover wrote:
> 1. I have now set maintenance_work_mem to 256 MB (it was previously
> commented out in the default config)
> 2. The version is PostgreSQL 8.1.18 on x86_64-redhat-linux-gnu, compiled by
> GCC gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46)
If ...
TRUNCATE doesn't generate any rollback data, which makes it lightning fast. It
just de-allocates the data pages used by the table.
Sam
-Original Message-
From: Jonathan Hoover [mailto:jhoo...@yahoo-inc.com]
Sent: Friday, 5 November 2010 1:59 PM
To: Samuel Stearns; pgsql-admin@postgresql.org
TRUNCATE removes all data from the table, leaving the schema structure in place.
What helped the most was probably the drop of the indexes.
-Original Message-
From: pgsql-admin-ow...@postgresql.org
[mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Jonathan Hoover
Sent: Friday, 5 November 2010
Just FYI, I removed the PK and the indexes for now. Load time for 1M rows is
now 7 SECONDS instead of 7 MINUTES (or just never finishing). Granted, I made
the changes in #1 below, but WOW! So, question: what helped the most: 1) no PK,
2) no indexes, or 3) maintenance_work_mem being uncommented?
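The drop-load-rebuild pattern described here can be sketched as follows; the table, index, and file names are assumptions for illustration:

```sql
-- 1. Drop the PK constraint and secondary indexes before the bulk load.
ALTER TABLE anid_table DROP CONSTRAINT anid_table_pkey;
DROP INDEX anid_table_time_idx;

-- 2. Bulk load: COPY appends rows without per-row index maintenance.
COPY anid_table FROM '/tmp/rows.csv' WITH CSV;

-- 3. Rebuild afterwards: one bulk sort per index is far cheaper than 1M
--    incremental b-tree insertions (this is where maintenance_work_mem helps).
ALTER TABLE anid_table ADD PRIMARY KEY (anid);
CREATE INDEX anid_table_time_idx ON anid_table ("time");
```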
1. I have now set maintenance_work_mem to 256 MB (it was previously
commented out in the default config)
2. The version is PostgreSQL 8.1.18 on x86_64-redhat-linux-gnu, compiled by GCC
gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46)
3. What would be the best and cheapest thing I could do for I/O performance?
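For reference, the setting from item 1 lives in postgresql.conf. A sketch for 8.1, where the value is a plain integer in kilobytes (unit suffixes such as 256MB only arrived in 8.2):

```
# postgresql.conf -- memory used by maintenance operations
# (CREATE INDEX, VACUUM, ALTER TABLE ADD FOREIGN KEY):
maintenance_work_mem = 262144    # 256 MB, expressed in KB on 8.1
```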
Panos Katergiathis writes:
> Unfortunately, it has been a while since I last compiled and installed
> PostgreSQL on Linux machines. I remember that selecting the proper
> locale and encoding for the database cluster was a matter to be
> decided at build time. Is it still so for version 9?
No, see ...
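To expand the answer: in current versions (including 9.0), locale and encoding are not compile-time choices; they are set per cluster at initdb time and per database at creation time. A sketch with hypothetical paths:

```shell
# Chosen when the cluster is created, not when PostgreSQL is built:
initdb -D /var/lib/pgsql/9.0/data --locale=en_US.UTF-8 --encoding=UTF8

# Or per database (template0 allows overriding the cluster default):
createdb --encoding=UTF8 --template=template0 testdb
```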
"Jonathan Hoover" wrote:
> I have a simple database, with one table for now. It has 4
> columns:
>
> anid serial primary key unique,
> time timestamp,
> source varchar(5),
> unitid varchar(15),
> guid varchar(32)
>
> There is a btree index on each.
>
> I am loading data 1,000,000 (1M) rows at a time ...
On Thu, Nov 04, 2010 at 01:42:49PM -0700, Jonathan Hoover wrote:
> Hello,
>
> I have a RHEL 5 box, 4 GB RAM, single SATA hard drive, Intel Core 2 Duo 2.4 GHz.
> A basic workstation.
>
> I have a simple database, with one table for now. It has 4 columns:
>
> anid serial primary key unique,
> time timestamp, ...
Hello,
I have a RHEL 5 box, 4 GB RAM, single SATA hard drive, Intel Core 2 Duo 2.4 GHz.
A basic workstation.
I have a simple database, with one table for now. It has 4 columns:
anid serial primary key unique,
time timestamp,
source varchar(5),
unitid varchar(15),
guid varchar(32)
There is a btree index on each.
Ramiro,
Yes, this will pose no problem. We do it commonly. You can simply install your
v9.0 server, as Szymon has suggested, on another port to effect your testing.
Be sure you've set up a 9.0-compatible cluster of course; you can't share data.
Lou Picciano
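What Lou describes (a second cluster on another port) might look roughly like this; the paths, port number, and install locations are assumptions for illustration:

```shell
# Initialize a separate 9.0 cluster -- data directories cannot be
# shared between 8.4 and 9.0.
/usr/pgsql-9.0/bin/initdb -D /var/lib/pgsql/9.0/data

# Start it on a non-default port next to the running 8.4 server:
/usr/pgsql-9.0/bin/pg_ctl -D /var/lib/pgsql/9.0/data -o "-p 5433" \
    -l /var/log/pgsql-9.0.log start

# Connect to each instance by port:
psql -p 5432   # existing 8.4
psql -p 5433   # new 9.0 test instance
```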
rbarr...@gmail.com (Ramiro Barreca) writes:
> We need to have, for migration, both an instance of our current PG 8.4.4 and
> a new PG 9.0 instance for testing.
> Where can we find a document to help us?
> Our platform is CentOS 5.4, x86_64
The "complication" to this is that it is quite likely ...
On Thu, 2010-11-04 at 12:13 -0300, Ramiro Barreca wrote:
> We need to have, for migration, both an instance of our current PG
> 8.4.4 and a new PG 9.0 instance for testing.
> Where can we find a document to help us?
> Our platform is CentOS 5.4, x86_64
Barring a bug that prevents yum from installing ...
and of course install the binaries somewhere else, use a different PGDATA
dir ... you're soon losing simplicity ;-) . Some distros support this
natively. Gentoo supports this; I don't know about CentOS.
On Thu, Nov 4, 2010 at 4:50 PM, Szymon Guz wrote:
>
>
> On 4 November 2010 16:13, Ramiro Barreca wrote: ...
On 4 November 2010 16:13, Ramiro Barreca wrote:
> We need to have, for migration, both an instance of our current PG 8.4.4
> and a new PG 9.0 instance for testing.
> Where can we find a document to help us?
> Our platform is CentOS 5.4, x86_64
> --
> Ramiro Barreca
> rbarr...@gmail.com
>
I ...
We need to have, for migration, both an instance of our current PG 8.4.4
and a new PG 9.0 instance for testing.
Where can we find a document to help us?
Our platform is CentOS 5.4, x86_64
--
Ramiro Barreca
rbarr...@gmail.com
Admin Friends,
Can someone point to a few 'bullet points' regarding the state of play of the
PostgreSQL debugger?
Specifically, we'd like to implement a debugger on our UNIX deployments, and
remember - I think - that there were issues with the module, or something, on
UNIX?
Perhaps so ...
Kevin Grittner ha scritto:
Scott Marlowe wrote:
Silvio Brandani wrote:
we have developed a script to execute VACUUM FULL on all tables
VACUUM FULL is more of a recovery / offline command and is to be
used sparingly, especially before 9.0.
And before 9.0, most ...