On 9/12/2011 1:57 PM, Andy Colson wrote:
On 9/12/2011 1:22 PM, Robert Schnabel wrote:
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits.
On 9/12/2011 3:58 PM, Scott Marlowe wrote:
On Mon, Sep 12, 2011 at 11:33 AM, Robert Schnabel wrote:
The recent "data warehouse" thread made me think about how I use work_mem
for some of my big queries. So I tried SET work_mem = '4GB' for a session
and got
ERROR: 4194304 is outside the valid range for parameter "work_mem" (64 .. 2097151)
On 9/12/2011 12:57 PM, Shaun Thomas wrote:
On 09/12/2011 12:47 PM, Andy Colson wrote:
work_mem is not the total a query can use. I believe each step can
use that much, and each backend can use it for multiple bits. So if
you had two backends, each doing 2 sorts, you'd use 2*2 = 4 * 2GB =
8GB.
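As a rough illustration of that multiplication, here is a minimal sketch (the table and column names are invented; only the plan shape matters): a single query whose plan contains a hash join, a hash aggregate, and a sort can allocate up to work_mem for each of those steps, in every backend that runs it.

SET work_mem = '1GB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT g.chromosome, count(*)
FROM genotypes g                    -- hypothetical warehouse table
JOIN samples s USING (sample_id)    -- hash join: one work_mem-sized allowance
GROUP BY g.chromosome               -- hash aggregate: another
ORDER BY count(*) DESC;             -- sort: yet another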
The recent "data warehouse" thread made me think about how I use
work_mem for some of my big queries. So I tried SET work_mem = '4GB'
for a session and got
ERROR: 4194304 is outside the valid range for parameter "work_mem" (64 .. 2097151)
A bit of searching turned up the "Allow sorts to use
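For reference, work_mem is specified in kilobytes, so the ceiling reported in that error message is just under 2GB. Staying inside the accepted range looks something like the following sketch, which is generic and not tied to any poster's configuration:

SET work_mem = '2097151kB';  -- the largest value that error message allows, roughly 2GB
-- or raise it only for one big session and put it back afterwards:
SET work_mem = '1GB';
RESET work_mem;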
On 8/22/2011 9:42 PM, David Boreham wrote:
I'm buying a bunch of new machines (all will run an application that heavily
writes to PG). These machines will have 2 spindle groups in a RAID-1 config.
Drives will be either 15K SAS or 10K SATA (I haven't decided if it is better
to buy the faster drives
On 7/15/2011 2:10 AM, Greg Smith wrote:
chris wrote:
My employer is a university with little funding and we have to find a
cheap way to scale for the next 3 years, so the SAN seems like a good
choice to us.
A SAN is rarely ever the cheapest way to scale anything; you're paying
extra for reliability i
On January 27, 2011, Robert Schnabel <schnab...@missouri.edu> wrote:
> So my questions are 1) am I crazy for doing this, 2) would you change
> anything and 3) is it acceptable
On 1/28/2011 11:14 AM, Stephen Frost wrote:
It got lost from the original post but my database (9.0.0) is currently on
my Windows XP 64-bit workstation in my office on a 16 drive Seagate 15k.5
RAID5, no comments needed, I know, I'm moving it :-). I'm moving it to my
server which
On 1/28/2011 11:00 AM, Scott Marlowe wrote:
On Fri, Jan 28, 2011 at 9:39 AM, Robert Schnabel wrote:
I can't do outside the database. So yes, once the upload is done I run
queries that update every row for certain columns, not every column. After
I'm done with a table I run a VACUUM
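The load-then-derive cycle described here can be sketched roughly as follows; the table, column, and file names are invented for illustration and are not from the thread.

-- Bulk-load the raw data, derive characterizing values, then clean up.
COPY genotypes FROM '/data/run01/genotypes.tsv';

UPDATE genotypes
   SET call_quality = CASE WHEN read_depth >= 10 THEN 'high' ELSE 'low' END;

-- A whole-table UPDATE leaves a dead version of every row behind,
-- so reclaim the space and refresh statistics before the next table.
VACUUM ANALYZE genotypes;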
On 1/28/2011 7:14 AM, Stephen Frost wrote:
Robert,
* Robert Schnabel (schnab...@missouri.edu) wrote:
Once the bulk data is inserted into the tables I generally
do some updates on columns to set values which characterize the
data.
Please
On 1/27/2011 5:19 PM, da...@lang.hm wrote:
On Thu, 27 Jan 2011, Robert Schnabel wrote:
Hi,
I use PostgreSQL basically as a data warehouse to store all the genetic data
that our lab generates. The only person who accesses the database is myself
and therefore I've had it housed
tmp filespace) on a different controller than everything else? Please keep
in mind I'm a geneticist who happens to know a little bit about
bioinformatics and not the reverse. :-)
Thanks!
Bob
--
Robert Schnabel
Research Assistant Professor
Jochen Erwied wrote:
Tuesday, November 24, 2009, 10:34:00 PM you wrote:
Aberdeen is the builder I use. They'll put any card in you want
(within reason) including our preference here, Areca. Perhaps you
meant Areca?
I knew Areca only for their internal arrays (which o
Scott Marlowe wrote:
On Mon, Nov 16, 2009 at 1:32 PM, Robert Schnabel wrote:
Ok, so you have sufficiently sparked my curiosity as to whether Diskeeper
will in any way cause Postgres to fail the power cord test. Unfortunately
I have some deadlines to meet so won't be able
Scott Marlowe wrote:
On Mon, Nov 16, 2009 at 1:04 PM, Robert Schnabel wrote:
So the short answer is yes, I have it running with
PostgreSQL and have not had any problems.
Have you unplugged the power cord a few times in the middle of heavy
write activity?
...Robert
Nope. Forgive my ignorance but isn't that what a UPS is for anyway?
Along with a B
cb wrote:
I'm curious if anyone else has used Diskeeper's Automatic Mode in
combination with PostgreSQL to defrag and keep the drive defragged
while PostgreSQL is actually running.
Thanks!
-chris
I've been a Diskeeper customer for about 10 years now and consider it
'must have' software
Glyn Astill wrote:
Hi Chaps,
I'm putting together some new servers, and whilst I've been happy with our
current config of Adaptec 5805's with BBU, I've noticed these 5805Z cards;
apparently the contents of DRAM are copied into onboard flash upon power failure.
Just wondered if anyone had any experience
If you want to partition your huge data set by "time", and the data
isn't already ordered by "time" on disk, you could do this:
SET work_mem TO something very large, like 10GB, since you have 32GB RAM
(check your shared_buffers etc. first), then:
CREATE TABLE tmp AS SELECT * FROM bigTable ORDER BY "time";
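Picking up from that ordered staging copy, the rest of the recipe might look roughly like the sketch below; it assumes hand-maintained range children (the usual approach on 9.0-era servers), and the child-table name and date range are invented.

-- One child table per time range, filled from the ordered copy.
CREATE TABLE bigTable_2011_01 (LIKE bigTable INCLUDING DEFAULTS);

INSERT INTO bigTable_2011_01
SELECT * FROM tmp
WHERE "time" >= '2011-01-01' AND "time" < '2011-02-01';

-- Repeat for the remaining ranges, ANALYZE each child,
-- then DROP TABLE tmp once everything has been moved.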
On Fri, Jun 5, 2009 at 9:38 AM, Bruce Momjian wrote:
Laszlo Nagy wrote:
On an 8-processor system, my stats collector is always at 100% CPU.
Meanwhile disk I/O is very low. We have many databases and they are
accessed frequently. Sometimes there are big table updates, but most
of the time
Greg Smith wrote:
On Fri, 22 May 2009, Scott Marlowe wrote:
It's much less common to see such a change in server class drives
This is a good point, and I just updated
http://wiki.postgresql.org/wiki/SCSI_vs._IDE/SATA_Disks with a section
about this topic (the last one under "ATA Disks").
An
Greg Smith wrote:
On Thu, 21 May 2009, Robert Schnabel wrote:
A word of warning for anyone out there considering the Seagate 1.5TB
SATA drives (ST31500341AS)...I'm going through a fiasco right now
with these drives and I wish I had purchased more when I did.
I don't think you came to
Matthew Wakeling wrote:
On Thu, 21 May 2009, Linos wrote:
I have to buy a new server and, with the (small) budget I have, I
have to select one of these two options:
- 4 SAS 146GB 15k rpm drives in RAID10.
- 8 SAS 146GB 10k rpm drives in RAID10.
It depends what you are doing. I think in most situations, the second