2011/2/4 Robert Haas robertmh...@gmail.com
On Fri, Feb 4, 2011 at 4:19 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
Why do you expect such invasive code changes? I know little about
postgresql code layering, but what I propose (with changing delete to
truncate) is:
1) Leave tuple addressing as it is now,
i.e. a block number and a
03.02.11 20:42, Robert Haas wrote:
2011/1/30 Віталій Тимчишин tiv...@gmail.com:
I was thinking if a table file could be deleted if it has not a single live
row, and if this could be done by vacuum. In this case vacuum on a table that
was fully updated recently could be almost as good as cluster - any scan
would skip such
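For readers following the proposal, here is a minimal psql sketch (table, sizes, and row counts hypothetical) of the behavior it targets: after a full-table UPDATE, plain VACUUM makes dead space reusable but does not shrink the file, because live rows now sit at its end.

```sql
-- Hypothetical illustration of the bloat under discussion.
CREATE TABLE t (id bigint, payload text);
INSERT INTO t SELECT g, repeat('x', 100) FROM generate_series(1, 1000000) g;
SELECT pg_relation_size('t') AS size_before;

UPDATE t SET payload = payload || '!';  -- rewrites every row: the old
                                        -- versions stay in the front of the
                                        -- file, new versions land after them
VACUUM t;                               -- dead space becomes reusable, but
                                        -- the file cannot be truncated since
                                        -- the live rows are now at its tail
SELECT pg_relation_size('t') AS size_after;  -- roughly double size_before

VACUUM FULL t;  -- rewrites the table compactly, but takes an exclusive lock
```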
There is a process in Oracle which essentially allows you to do the
equivalent of a CLUSTER in Postgres, but without locking the table, and so
updates can continue throughout the process. It requires a bit of manual
setup and fiddling (at least in Oracle 10g), but this would probably scratch
a lot
2011/1/28 Scott Carey sc...@richrelevance.com
On 1/28/11 9:28 AM, Stephen Frost sfr...@snowman.net wrote:
* Scott Marlowe (scott.marl...@gmail.com) wrote:
There's nothing wrong with whole table updates as part of an import
process, you just have to know to clean up after you're done, and
Robert,
* Robert Schnabel (schnab...@missouri.edu) wrote:
Once the bulk data is inserted into the tables I generally
do some updates on columns to set values which characterize the
data.
Please tell me you're not running actual full-table UPDATE statements...
You would be *much* better off
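The advice is cut off above, but presumably points at folding the transformation into the load itself rather than UPDATEing afterwards. A hedged sketch, with a hypothetical staging table and columns standing in for the genetic data:

```sql
-- Sketch (hypothetical names): instead of COPY followed by a full-table
-- UPDATE -- which leaves a dead copy of every row behind -- compute the
-- derived values in one pass while materializing the final table.
CREATE TABLE staging (sample_id bigint, raw_call text);
COPY staging FROM '/tmp/chunk.csv' WITH CSV;

CREATE TABLE genotypes AS
SELECT sample_id,
       raw_call,
       CASE WHEN raw_call = 'AA' THEN 0
            WHEN raw_call = 'AB' THEN 1
            ELSE 2 END AS coded_call   -- the column that would otherwise
FROM staging;                          -- be set by a post-load UPDATE
```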
Putting the WAL on a second controller does help, if you're write-heavy.
I tried separating indexes and data once on one server and didn't
really notice that it helped much. Managing the space was problematic.
I would suggest putting those together on a single RAID-10 of all the
300GB
On Fri, Jan 28, 2011 at 9:39 AM, Robert Schnabel schnab...@missouri.edu wrote:
I can't do outside the database. So yes, once the upload is done I run
queries that update every row for certain columns, not every column. After
I'm done with a table I run a VACUUM ANALYZE. I'm really not
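One knob not raised in the thread that can soften this update-then-vacuum pattern: a lower fillfactor leaves free space in every page, so the post-load UPDATEs can often write the new row version into the same page (a HOT update) instead of bloating the table. A hedged sketch, table name hypothetical:

```sql
-- Assumption: the columns being updated are not indexed, so HOT applies.
ALTER TABLE genotypes SET (fillfactor = 60);  -- leave ~40% of each page free
VACUUM FULL genotypes;       -- one-time rewrite so existing pages honor it
-- subsequent whole-column UPDATEs can reuse the in-page free space;
-- a plain VACUUM ANALYZE afterwards keeps statistics current
VACUUM ANALYZE genotypes;
```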
Robert,
* Robert Schnabel (schnab...@missouri.edu) wrote:
Depends on what you mean by that. The tables that I'm concerned with look
something like bigint x2, char var x13, int x24, real x8, smallint x4 by
about 65M rows, each. I only do the updates on one table at a time. The
* Scott Marlowe (scott.marl...@gmail.com) wrote:
There's nothing wrong with whole table updates as part of an import
process, you just have to know to clean up after you're done, and
regular vacuum can't fix this issue, only vacuum full or reindex or
cluster.
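The three remedies named above, spelled out as commands (table and index names hypothetical); all of them take exclusive locks, so checking size first tells you whether the rewrite is worth it:

```sql
VACUUM FULL bigtable;                  -- rewrite the table compactly
CLUSTER bigtable USING bigtable_pkey;  -- rewrite, ordered by an index
REINDEX TABLE bigtable;                -- rebuild only the indexes

-- How bloated is it, including indexes and TOAST?
SELECT pg_size_pretty(pg_total_relation_size('bigtable'));
```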
Just to share my experiences-
On 1/27/11 4:11 PM, Alan Hodgson ahodg...@simkin.ca wrote:
On January 27, 2011, Robert Schnabel schnab...@missouri.edu wrote:
So my questions are 1) am I crazy for doing this, 2) would you change
anything and 3) is it acceptable to
On Fri, Jan 28, 2011 at 10:44 AM, Scott Carey sc...@richrelevance.com wrote:
If you go this route, I suggest two equally sized RAID 10's on different
controllers for index + data, with software raid-0 on top of that. RAID 10
will max out a controller after 6 to 10 drives, usually. Using the
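If one instead keeps the two RAID-10 arrays as separate filesystems, the index/data split mentioned earlier in the thread is expressed in SQL with tablespaces. A sketch with hypothetical mount points (the directories must exist, be empty, and be owned by the postgres user):

```sql
CREATE TABLESPACE data_ts  LOCATION '/mnt/raid10_a/pgdata';
CREATE TABLESPACE index_ts LOCATION '/mnt/raid10_b/pgindex';

-- Hypothetical table: heap on one array, index on the other.
CREATE TABLE genotypes (sample_id bigint, coded_call smallint)
    TABLESPACE data_ts;
CREATE INDEX genotypes_sample_idx ON genotypes (sample_id)
    TABLESPACE index_ts;
```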
On 1/28/2011 11:14 AM, Stephen Frost wrote:
It got lost from the original post but my database (9.0.0) is currently on
my Windows XP 64-bit workstation in my office on a 16 drive Seagate 15k.5
RAID5, no comments needed, I know, I'm moving it :-). I'm moving it to my
server
Hi,
I use PostgreSQL basically as a data warehouse to store all the genetic
data that our lab generates. The only person that accesses the database
is myself and therefore I've had it housed on my workstation in my
office up till now. However, it's getting time to move it to bigger
On January 27, 2011, Robert Schnabel schnab...@missouri.edu wrote:
So my questions are 1) am I crazy for doing this, 2) would you change
anything and 3) is it acceptable to put the xlog wal (and perhaps tmp
filespace) on a different controller than everything else? Please keep
in mind I'm
Date: Thu, 27 Jan 2011 15:19:32 -0800 (PST)
From: da...@lang.hm
To: Robert Schnabel schnab...@missouri.edu
Cc: pgsql-performance pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to best use 32 15k.7 300GB drives?