Hi,
I have a database cluster running PostgreSQL 8.2
and I have a new virtualized Linux database environment running PostgreSQL
9.0
My question is how to ensure that the database schemas remain performant and
scalable, and that the databases are fully optimized and completely migrated
Thanks in advance!
Hany
I have been a MySQL user for years, including owning a few
multi-gigabyte databases for my websites, and using it to host
algebra.com (about 12 GB database).
I have had my ups and downs with MySQL. The ups were ease of use and
decent performance for small databases such as algebra.com. The downs
On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov ichu...@gmail.com wrote:
I have a server with about 18 TB of storage and 48 GB of RAM, and 12
CPU cores.
1 or 2 fast cores is plenty for what you're doing. But the drive
array and how it's configured etc are very important. There's a huge
For a 10 TB table and 3 hours, the disks should have a transfer rate of about 1 GB/s (seqscan).
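A quick sanity check of that figure (a sketch, assuming decimal units and a pure sequential scan with no other overhead):

```python
# Rough check of the 10 TB / 3 hours figure quoted above.
table_bytes = 10 * 10**12   # 10 TB, decimal
seconds = 3 * 3600          # 3 hours
rate_gb_s = table_bytes / seconds / 10**9
print(round(rate_gb_s, 2))  # 0.93, i.e. roughly 1 GB/s
```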
2011/9/11, Scott Marlowe scott.marl...@gmail.com:
On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov ichu...@gmail.com wrote:
I have a server with about 18 TB of storage and 48 GB of RAM, and 12
CPU cores.
1 or 2 fast
On Sun, Sep 11, 2011 at 7:52 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Sun, Sep 11, 2011 at 6:35 AM, Igor Chudov ichu...@gmail.com wrote:
I have a server with about 18 TB of storage and 48 GB of RAM, and 12
CPU cores.
1 or 2 fast cores is plenty for what you're doing.
I need
2011/9/11 pasman pasmański pasma...@gmail.com
For a 10 TB table and 3 hours, the disks should have a transfer rate of about 1 GB/s
(seqscan).
I have 6 Gb/s disk drives, so it should not be too far off, maybe 5 hours for a
seqscan.
i
2011/9/11, Scott Marlowe scott.marl...@gmail.com:
On Sun, Sep 11, 2011 at
On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov ichu...@gmail.com wrote:
Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6
configuration.
They are managed by a 3ware 9750 RAID card.
I would say that I am not very concerned with linear relationship of read
speed to disk
On 09/11/2011 07:35 AM, Igor Chudov wrote:
I have been a MySQL user for years, including owning a few
multi-gigabyte databases for my websites, and using it to host
algebra.com (about 12 GB database).
I have had my ups and downs with MySQL. The ups were ease of use and
decent
On Sun, Sep 11, 2011 at 9:16 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov ichu...@gmail.com wrote:
Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a
RAID-6
configuration.
They are managed by a 3WARE 9750 RAID CARD.
I
On 09/11/2011 08:59 AM, Igor Chudov wrote:
I do not plan to do a lot of random writing. My current design is that my perl
scripts write to a temporary table every week, and then I do INSERT..ON
DUPLICATE KEY UPDATE.
By the way, does that INSERT UPDATE functionality or something like this
On Sun, Sep 11, 2011 at 4:16 PM, Andy Colson a...@squeakycode.net wrote:
Upgrading across major versions of PG may or may not be painful. (MySQL
sometimes works seamlessly between versions, which appears brilliant. But I
have had problems with an update, and when it goes bad, you don't have a lot
of
On 09/11/2011 09:21 AM, Igor Chudov wrote:
On Sun, Sep 11, 2011 at 9:16 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov ichu...@gmail.com wrote:
Well, right now, my server has
On Sun, Sep 11, 2011 at 4:21 PM, Igor Chudov ichu...@gmail.com wrote:
Quantitatively, what would you say is the write speed difference between
RAID 10 and RAID 6?
https://support.nstein.com/blog/archives/73
There you can see a comparison with 4 drives, and RAID 10 is twice as fast.
Since raid
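That gap generalizes: for small random writes, each logical write costs roughly 2 disk I/Os on RAID 10 versus 6 on RAID 6 (read the data block and both parity blocks, then rewrite all three). A back-of-envelope sketch, with a purely hypothetical per-drive IOPS figure:

```python
# Rough random-write IOPS estimate; 100 IOPS/drive is a
# hypothetical ballpark for a 7200 RPM disk, not a measurement.
drives = 12
iops_per_drive = 100
raid10_write_iops = drives * iops_per_drive / 2   # write penalty 2
raid6_write_iops = drives * iops_per_drive / 6    # write penalty 6
print(raid10_write_iops, raid6_write_iops)  # 600.0 200.0
```

Sequential-write throughput differs much less; the penalty bites hardest on small scattered writes.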
On Sun, Sep 11, 2011 at 17:23, Andy Colson a...@squeakycode.net wrote:
On 09/11/2011 08:59 AM, Igor Chudov wrote:
By the way, does that INSERT UPDATE functionality or something like this
exist in Postgres?
You have two options:
1) write a function like:
create function doinsert(_id integer,
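The function body is cut off in the archive; a minimal sketch of what such a pre-9.5 upsert function typically looked like (table and column names here are hypothetical):

```sql
CREATE FUNCTION doinsert(_id integer, _val text) RETURNS void AS $$
BEGIN
    -- try the update first; insert only if no row matched
    UPDATE mytable SET val = _val WHERE id = _id;
    IF NOT FOUND THEN
        INSERT INTO mytable (id, val) VALUES (_id, _val);
    END IF;
END;
$$ LANGUAGE plpgsql;
```

Note this is racy under concurrent writers; the usual refinement loops and catches unique_violation on the INSERT.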
On Sep 11, 2011, at 9:21 AM, Igor Chudov wrote:
On Sun, Sep 11, 2011 at 9:16 AM, Claudio Freire klaussfre...@gmail.com
wrote:
On Sun, Sep 11, 2011 at 3:59 PM, Igor Chudov ichu...@gmail.com wrote:
Well, right now, my server has twelve 7,200 RPM 2TB hard drives in a RAID-6
On Sun, Sep 11, 2011 at 1:36 PM, Ogden li...@darkstatic.com wrote:
As someone who migrated a RAID 5 installation to RAID 10, I am getting far
better read and write performance on heavy calculation queries. Writing on
the RAID 5 really made things crawl. For lots of writing, I think RAID 10 is
Sorry, meant to send this to the list.
For really big data-warehousing, this document really helped us:
http://pgexperts.com/document.html?id=49
I want to thank the members of this list who helped me benchmark and conclude
that RAID 10 on an XFS filesystem was the way to go over what we had prior.
We have been using PostgreSQL with Perl for the last 8 years and it has been
nothing but outstanding for us. Things have definitely worked out
2011/9/11 pasman pasmański pasma...@gmail.com:
For 10 TB table and 3hours, disks should have a transfer about 1GB/s
(seqscan).
Random data point. Our primary servers were built for OLTP with 48
cores and 32 15kSAS drives. We started out on Arecas but the
Supermicro 1Us we were using didn't
On Sun, Sep 11, 2011 at 1:50 PM, Ogden li...@darkstatic.com wrote:
I want to thank the members of this list who helped me benchmark and conclude
that RAID 10 on an XFS filesystem was the way to go over what we had prior.
We have been using PostgreSQL with Perl for the last 8 years and it has
I doubt you'll get much useful feedback because your question is too
broad for a mailing list answer. If you're looking for basic
performance guidelines, I recommend Greg Smith's PostgreSQL 9.0 High
Performance [1] (disclaimer: I used to work with Greg and got a free
copy), although I don't think
We've currently got PG 8.4.4 running on a whitebox hardware setup, with two
5410 Xeons and 16GB of RAM. It's also got four 7200RPM SATA drives, using
the onboard IDE controller and ext3.
A few weeks back, we purchased two refurbished HP DL360 G5s, and were hoping
to set them up with PG 9.0.2,
Thanks Maciek.
I really do not know where to start or how to explain my question
I am newbie to Postgres. I will try to get more information from the
development team and SA's
Cheers
Hany
On Mon, Sep 12, 2011 at 10:22 AM, Maciek Sakrejda msakre...@truviso.com wrote:
I doubt you'll get much
* Anthony Presley (anth...@resolution.com) wrote:
I was really hoping that with hardware RAID that something would be faster
(loading times, queries, etc...). What am I doing wrong?
ext3 and ext4 do NOT perform identically out of the box. You might be
running into the write barriers problem
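For context: ext4 enables write barriers by default, while many ext3 kernels of that era did not, so an out-of-the-box ext4 volume can look much slower on fsync-heavy loads even on better hardware. A sketch of making the choice explicit in /etc/fstab (device and mount point hypothetical; only disable barriers if the controller has a battery-backed write cache, otherwise a power failure can corrupt the database):

```
# /etc/fstab -- keep barriers on unless the RAID card has a BBU
/dev/sda1  /var/lib/pgsql  ext4  defaults,barrier=1  0 2
```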
* Igor Chudov (ichu...@gmail.com) wrote:
Right now I have a personal (one user) project to create a 5-10
Terabyte data warehouse. The largest table will consume the most space
and will take, perhaps, 200,000,000 rows.
I run data-warehouse databases on that order (current largest single
On Sun, Sep 11, 2011 at 6:01 PM, Stephen Frost sfr...@snowman.net wrote:
* Igor Chudov (ichu...@gmail.com) wrote:
Right now I have a personal (one user) project to create a 5-10
Terabyte data warehouse. The largest table will consume the most space
and will take, perhaps, 200,000,000 rows.
On 12.9.2011 00:44, Anthony Presley wrote:
We've currently got PG 8.4.4 running on a whitebox hardware setup,
with two 5410 Xeons and 16GB of RAM. It's also got four 7200RPM
SATA drives, using the onboard IDE controller and ext3.
A few weeks back, we purchased two refurb'd HP DL360's
On Mon, Sep 12, 2011 at 1:16 AM, Igor Chudov ichu...@gmail.com wrote:
I could, say, set work_mem to 30 GB? (64-bit Linux)
I don't think you'd want that. Remember, work_mem is the amount of
memory *per sort*.
Queries can request several times that much memory, one allocation per
sort they need to perform.
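Because the limit applies per sort/hash operation and per backend, the usual pattern is a modest global default plus a per-session bump for known-heavy queries, e.g. (values here are illustrative, not recommendations):

```sql
-- postgresql.conf: modest cluster-wide default
-- work_mem = 64MB

-- in the session running one big sort or aggregate:
SET work_mem = '1GB';
-- ... run the heavy query ...
RESET work_mem;
```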
On September 11, 2011 03:44:34 PM Anthony Presley wrote:
First thing I noticed is that it takes the same amount of time to load the
db (about 40 minutes) on the new hardware as the old hardware. I was
really hoping with the faster, additional drives and a hardware RAID
controller, that this
On Sun, Sep 11, 2011 at 4:44 PM, Anthony Presley anth...@resolution.com wrote:
We've currently got PG 8.4.4 running on a whitebox hardware setup, with two
5410 Xeons and 16GB of RAM. It's also got four 7200RPM SATA drives, using
the onboard IDE controller and ext3.
A few weeks back, we
* Igor Chudov (ichu...@gmail.com) wrote:
Can I partition data by month (or quarter), without that month being part of
PRIMARY KEY?
The way partitioning works in PG is by using CHECK constraints. Not
sure if you're familiar with those (not sure if MySQL has them), so
here's a quick example:
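The example itself is cut off in the archive; a sketch of the PG 9.0-era inheritance approach it describes (names chosen to match the parent/table2 query quoted later in the thread):

```sql
CREATE TABLE parent (id bigint, date date, val text);

-- one child table per month, each constrained to its own range
CREATE TABLE table1 (
    CHECK (date >= DATE '2009-12-01' AND date < DATE '2010-01-01')
) INHERITS (parent);

CREATE TABLE table2 (
    CHECK (date >= DATE '2010-01-01' AND date < DATE '2010-02-01')
) INHERITS (parent);
```

Rows are inserted into the appropriate child (directly, or via a trigger on parent), and the CHECK constraints let the planner skip children that can't match a query's WHERE clause.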
* Claudio Freire (klaussfre...@gmail.com) wrote:
I don't think you'd want that. Remember, work_mem is the amount of
memory *per sort*.
Queries can request several times that much memory, one allocation per
sort they need to perform.
You can set it really high, but not 60% of your RAM - that wouldn't
On Sun, Sep 11, 2011 at 5:22 PM, Maciek Sakrejda msakre...@truviso.com wrote:
performance guidelines, I recommend Greg Smith's PostgreSQL 9.0 High
Performance [1] (disclaimer: I used to work with Greg and got a free
copy)
I'll second that. PostgreSQL 9.0 High Performance is an excellent
Hi,
On 12 September 2011 12:28, Stephen Frost sfr...@snowman.net wrote:
Once those are done, you can query against the 'parent' table with
something like:
select * from parent where date = '2010-01-01';
And PG will realize it only has to look at table2 to get the results for
that query.
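That pruning depends on constraint exclusion being enabled; the default setting of 'partition' (8.4+) applies it to inheritance children. A quick way to verify the planner is skipping the other children:

```sql
SET constraint_exclusion = partition;
EXPLAIN SELECT * FROM parent WHERE date = '2010-01-01';
-- the plan should scan only table2 (plus the empty parent table)
```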
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Anthony Presley
Sent: Sunday, September 11, 2011 4:45 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] RAID Controller (HP P400) beat by SW-RAID?
We've currently got PG 8.4.4