On 2012-03-12, Carson Gross carsongr...@gmail.com wrote:
We've got a postgres database with *a lot* of data in one table. On the
order of 100 million rows at this point. Postgres is, of course, handling
it with aplomb.
ALTER TABLE my_table ALTER COLUMN id TYPE bigint;
However, given the size of this table, I have no idea how long something
like this might take.
On 03/13/12 8:41 PM, Carson Gross wrote:
Does anyone have a reasonable guess as to the inserts per second
postgres is capable of these days on middle-of-the-road hardware? Any
order of magnitude would be fine: 10, 100, 1000, 10,000.
my dedicated database server in my lab, which is a 2U dual …
I felt pretty good about my server until I read this.
On Wed, 2012-03-14 at 00:24 -0700, John R Pierce wrote:
On 03/13/12 8:41 PM, Carson Gross wrote:
Does anyone have a reasonable guess as to the inserts per second
postgres is capable of these days on middle-of-the-road hardware? Any
Heh. OK, so I'll plan on about 100 writes per second... *gulp*
Thanks a bunch for the info guys.
Cheers,
Carson
On Wed, Mar 14, 2012 at 7:54 AM, Bret Stern
bret_st...@machinemanagement.com wrote:
I felt pretty good about my server until I read this.
On Wed, 2012-03-14 at 00:24 -0700, John R Pierce wrote:
On 03/14/12 12:24 AM, John R Pierce wrote:
that's my definition of a middle-of-the-road database server. I have
no idea what yours is.
let me add... this server was under $7000 plus the disk drives (it
actually has 25 drives, 20 are in the raid10 used for the database
testing). we …
On 03/14/2012 12:04 PM, John R Pierce wrote:
On 03/14/12 12:24 AM, John R Pierce wrote:
that's my definition of a middle-of-the-road database server. I have
no idea what yours is.
let me add... this server was under $7000 plus the disk drives (it
actually has 25 drives...
My car was …
On 03/14/12 12:16 PM, Steve Crawford wrote:
I was just looking at some modest-sized 15k SAS drives that priced out
in the $400-550 range. 25 of them would add a minimum of $10,000 to
the price tag. Still under 6-figures, though.
those disks aren't any cheaper when they are behind an EMC or …
On Wed, Mar 14, 2012 at 1:41 PM, John R Pierce pie...@hogranch.com wrote:
On 03/14/12 12:16 PM, Steve Crawford wrote:
I was just looking at some modest-sized 15k SAS drives that priced out in
the $400-550 range. 25 of them would add a minimum of $10,000 to the price
tag. Still under 6-figures, though.
On 03/14/12 12:53 PM, Scott Marlowe wrote:
Note that if you don't need a lot of storage you can often use 300G
15k SAS drives which are around $300 each. 20 of those in a RAID-10
gives you ~3TB of storage which is plenty for most transactional
applications.
I'm actually using 25 x 146gb 15k …
On Wed, Mar 14, 2012 at 2:34 PM, John R Pierce pie...@hogranch.com wrote:
On 03/14/12 12:53 PM, Scott Marlowe wrote:
Note that if you don't need a lot of storage you can often use 300G
15k SAS drives which are around $300 each. 20 of those in a RAID-10
gives you ~3TB of storage which is plenty for most transactional
applications.
John,
Thanks, I'll clarify my language around that.
Still hoping that there is a way to get a rough estimate of how long
converting an integer column to a bigint will take. Not possible?
Thanks guys,
Carson
On Mon, Mar 12, 2012 at 6:13 PM, John R Pierce pie...@hogranch.com wrote:
As a follow up, is the upgrade from integer to bigint violent? I assume
so: it has to physically resize the column on disk, right?
Thanks,
Carson
On Tue, Mar 13, 2012 at 9:43 AM, Carson Gross carsongr...@gmail.com wrote:
John,
Thanks, I'll clarify my language around that.
Still hoping that there is a way to get a rough estimate of how long
converting an integer column to a bigint will take.
On 03/13/12 6:10 PM, Carson Gross wrote:
As a follow up, is the upgrade from integer to bigint violent? I
assume so: it has to physically resize the column on disk, right?
I think we've said several times, any ALTER TABLE ADD/ALTER COLUMN like
that will cause every single tuple (row) of the table to be rewritten.
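One way to get a feel for how much data such a rewrite would have to move is to check the table's on-disk footprint first. A minimal sketch, reusing the `my_table` name from earlier in the thread (the built-in size functions are standard Postgres):

```sql
-- How much would a full-table rewrite have to copy?
-- pg_relation_size: the table heap alone;
-- pg_total_relation_size: heap plus indexes and TOAST.
SELECT pg_size_pretty(pg_relation_size('my_table'))       AS table_size,
       pg_size_pretty(pg_total_relation_size('my_table')) AS with_indexes;
```

A rewrite roughly re-copies the heap and rebuilds every index, so the second number is closer to the real amount of I/O involved.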
Got it.
Thank you, that's very helpful: we could delete quite a few of the rows
before we did the operation and cut way down on the size of the table
before we issue the update. Trimming the table size down seems obvious
enough, but that's good confirmation that it will very much help. And …
OK, last post on this topic, I promise. I'm doing some math, and I think
I'll have about 100 million rows in the table to deal with.
Given a table that size, I'd like to do the following math:
100 million rows / inserted rows per second = total seconds
Does anyone have a reasonable guess as to the inserts per second
postgres is capable of these days on middle-of-the-road hardware? Any
order of magnitude would be fine: 10, 100, 1000, 10,000.
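Plugging the ~100 writes/second figure suggested upthread into that formula (purely illustrative; bulk loads with COPY or batched inserts usually run far faster than single-row inserts):

```sql
-- 100 million rows at ~100 rows/second (illustrative figure from upthread)
SELECT 100000000 / 100                        AS total_seconds,  -- 1,000,000 s
       round(100000000 / (100 * 86400.0), 1)  AS days;           -- ~11.6 days
```

So at the pessimistic single-row rate it's on the order of days, which is why trimming the table first and batching the work matters so much.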
Hello All,
I've looked through the docs, but I'm unable to find complete answers to my
questions, so thanks in advance if you can lend any expertise.
Here's the situation I'm in (always a good opener, right? :) ):
We've got a postgres database with *a lot* of data in one table. On the
order of 100 million rows at this point. Postgres is, of course, handling
it with aplomb.
However, given the size of this table, I have no idea how long something
like this might take. In general I've had a tough time getting feedback
from postgres on the progress of a query, how long something might take,
etc.
You can always do this, which would result in minimal hassle.
On 03/12/12 1:25 PM, Tim Uckun wrote:
create a new bigint field.
copy all the IDs to it.
index it in the background
at a frequency of your choosing, sync the id field to the new field to keep it up to date.
at a time of your choosing set the default for the new field to be
serial starting at max(id)
drop the old id column, and rename the new one to id.
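Sketched in SQL, that recipe might look like the following. Table and column names are illustrative, the trigger is one possible way to keep the two columns in step, and sequence/primary-key bookkeeping is elided:

```sql
-- 1. add the new column (cheap: nullable, no default, so no table rewrite)
ALTER TABLE my_table ADD COLUMN id_big bigint;

-- 2. backfill in batches to avoid one huge, long-locking update
UPDATE my_table SET id_big = id
 WHERE id_big IS NULL AND id BETWEEN 1 AND 1000000;
-- ... repeat for successive id ranges ...

-- 3. build the index without blocking writes
CREATE UNIQUE INDEX CONCURRENTLY my_table_id_big_idx ON my_table (id_big);

-- 4. keep new/updated rows in sync via a trigger
CREATE OR REPLACE FUNCTION sync_id_big() RETURNS trigger AS $$
BEGIN
  NEW.id_big := NEW.id;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_sync_id_big
  BEFORE INSERT OR UPDATE ON my_table
  FOR EACH ROW EXECUTE PROCEDURE sync_id_big();

-- 5. in one short transaction, swap the columns
--    (re-attaching the sequence and PK constraint left as an exercise)
BEGIN;
DROP TRIGGER my_table_sync_id_big ON my_table;
ALTER TABLE my_table DROP COLUMN id;
ALTER TABLE my_table RENAME COLUMN id_big TO id;
COMMIT;
```

The payoff is that the only exclusive-lock work is the quick catalog-level swap at the end, instead of a multi-hour full-table rewrite.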
Tim,
Commando. I like it. Thanks a ton for that suggestion. I'd still like to
hear if anyone has a good way to estimate the performance of these
operations, but I'll explore what it would mean to do exactly that.
John: thankfully this is a table without any fks in, although it is indexed
to …
On 03/12/12 5:01 PM, Carson Gross wrote:
We are also considering sharding the table and maybe the right thing
is to simply fix it when we do the sharding.
postgres generally calls that partitioning... Sharding usually means
splitting data across multiple servers.
--
john r pierce