I think your notion that you have an HP CCISS driver in this older
kernel that just doesn't drive your card very fast is worth exploring.
What I sometimes do in the situation you're in is boot a Linux
distribution that comes with a decent live CD, such as Debian or
Ubuntu. Just mount the
-----Original Message-----
From: Greg Smith [mailto:g...@2ndquadrant.com]
Sent: Monday, August 08, 2011 9:42 PM
To: mark
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] benchmark woes and XFS options
I think your notion that you have an HP CCISS driver in this older
kernel
Hi.
Not strictly connected to your tests, but:
As for ZFS, our experience is that it degrades over time after random
updates, because files become non-linear and sequential reads turn into
random reads.
There are also questions about the ZFS block size: setting it to 8K makes the first problem
worse; setting it
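For reference, the recordsize knob under discussion is set per dataset; a minimal sketch, assuming hypothetical pool/dataset names (and note recordsize only applies to files written after the change):

```
# Match the ZFS record size to PostgreSQL's 8 kB page for the data
# directory; leave the WAL dataset at the larger default, since WAL
# writes are sequential.  "tank/pgdata" and "tank/pgwal" are made up.
zfs set recordsize=8k tank/pgdata
zfs set recordsize=128k tank/pgwal
zfs get recordsize tank/pgdata    # verify the setting
```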
Joao,
Wow, thanks for doing this!
In general, your tests seem to show that there isn't a substantial
penalty for using ZFS as of version 8.0.
If you have time for more tests, I'd like to ask you for a few more tweaks:
(1) change the following settings according to conventional wisdom:
Hi,
The tests were made without the -s parameter (so a scale of 1 is assumed). I'm running
the numbers again on CentOS, with the optimized config, and I'll also test
different scale values. I'll also be able to repeat the test on FreeBSD
with ZFS with the new options and a different scale,
but
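For anyone following along: the -s flag only matters at initialization time, which is why a scale of 1 gets baked in if it is omitted. A sketch of the two steps (the database name "bench" is made up):

```
# Initialize the pgbench tables at scale factor 100 (roughly 1.5 GB),
# then run a 60-second test with 8 clients.
pgbench -i -s 100 bench
pgbench -c 8 -T 60 bench
```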
James Mansion wrote:
Ivan Voras wrote:
I wish that, when people got the idea to run a simplistic benchmark
like this, they would at least have the common sense to put the
database on a RAM drive to avoid problems with different cylinder
speeds of rotational media and fragmentation from
On 01/27/10 14:28, Thom Brown wrote:
Had a quick look at a benchmark someone put together of MySQL vs
PostgreSQL, and while PostgreSQL is generally faster, I noticed the bulk
delete was very slow:
http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html
I wish that, when people got the
On Wed, 27 Jan 2010, Thom Brown wrote:
Had a quick look at a benchmark someone put together of MySQL vs PostgreSQL,
and while PostgreSQL is generally faster, I noticed the bulk delete was very
slow: http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html
Is this normal?
On the
Thom Brown thombr...@gmail.com wrote:
Had a quick look at a benchmark someone put together of MySQL vs
PostgreSQL, and while PostgreSQL is generally faster, I noticed
the bulk delete was very slow:
http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html
Is this normal?
It is if
On Wed, Jan 27, 2010 at 9:54 AM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
It is if you don't have an index on the foreign key column of the table
that references the table you're deleting from. The author of the
benchmark apparently didn't realize that
MySQL
On Wednesday 27 January 2010 15:49:06 Matthew Wakeling wrote:
On Wed, 27 Jan 2010, Thom Brown wrote:
Had a quick look at a benchmark someone put together of MySQL vs
PostgreSQL, and while PostgreSQL is generally faster, I noticed the bulk
delete was very slow:
Ivan Voras wrote:
I wish that, when people got the idea to run a simplistic benchmark
like this, they would at least have the common sense to put the
database on a RAM drive to avoid problems with different cylinder
speeds of rotational media and fragmentation from multiple runs.
Huh?
It's
Kevin Grittner wrote:
It is if you don't have an index on the foreign key column of the table
that references the table you're deleting from. The author of the
benchmark apparently didn't realize that
MySQL automatically adds such an index to the dependent table, while
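The fix being described is a one-line index. A minimal sketch with hypothetical table names (PostgreSQL indexes the PRIMARY KEY automatically, but not the referencing column):

```sql
-- Without the last line, every DELETE on parent forces a scan of
-- child to verify no rows still reference the deleted key.
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child  (id integer PRIMARY KEY,
                     parent_id integer REFERENCES parent (id));
CREATE INDEX child_parent_id_idx ON child (parent_id);
```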
On Mon, Feb 23, 2009 at 1:29 PM, Sergio Lopez sergio.lo...@nologin.es wrote:
On Sat, 21 Feb 2009 21:04:49 -0500
I've taken down the article and I'll bring it up again when I've
collected new numbers.
Please do, this subject is very interesting.
Regards.
--
Sent via pgsql-performance mailing
On Sat, 21 Feb 2009 21:04:49 -0500
Jonah H. Harris jonah.har...@gmail.com wrote:
On Fri, Feb 20, 2009 at 8:40 PM, Denis Lussier
denis.luss...@enterprisedb.com wrote:
Hi all,
As the author of BenchmarkSQL and the founder of EnterpriseDB I
can assure you that BenchmarkSQL was
Hi all,
As the author of BenchmarkSQL and the founder of EnterpriseDB I
can assure you that BenchmarkSQL was NOT written specifically for
PostgreSQL. It is intended to be a completely database-agnostic,
TPC-C-like, Java-based benchmark.
However, as Jonah correctly points out in painstaking
On Fri, Feb 20, 2009 at 8:40 PM, Denis Lussier
denis.luss...@enterprisedb.com wrote:
Hi all,
As the author of BenchmarkSQL and the founder of EnterpriseDB I
can assure you that BenchmarkSQL was NOT written specifically for
PostgreSQL. It is intended to be a completely database
On Friday 20 February 2009, Sergio Lopez sergio.lo...@nologin.es wrote:
Hi,
I've made a benchmark comparing PostgreSQL, MySQL and Oracle under three
environments: GNU/Linux-x86, Solaris-x86 (same machine as GNU/Linux) and
Solaris-SPARC. I think you might find it interesting:
On Fri, 20 Feb 2009 08:36:44 -0800
Alan Hodgson ahodg...@simkin.ca wrote:
On Friday 20 February 2009, Sergio Lopez sergio.lo...@nologin.es
wrote:
Hi,
I've made a benchmark comparing PostgreSQL, MySQL and Oracle under
three environments: GNU/Linux-x86, Solaris-x86 (same machine as
On Fri, Feb 20, 2009 at 6:28 AM, Sergio Lopez sergio.lo...@nologin.es wrote:
Hi,
I've made a benchmark comparing PostgreSQL, MySQL and Oracle under three
environments: GNU/Linux-x86, Solaris-x86 (same machine as GNU/Linux) and
Solaris-SPARC. I think you might find it interesting:
On Fri, 20 Feb 2009 12:39:41 -0500
Jonah H. Harris jonah.har...@gmail.com wrote:
On Fri, Feb 20, 2009 at 6:28 AM, Sergio Lopez
sergio.lo...@nologin.es wrote:
Hi,
I've made a benchmark comparing PostgreSQL, MySQL and Oracle under
three environments: GNU/Linux-x86, Solaris-x86 (same
First of all, you need to do some research on the benchmark kit itself,
rather than blindly downloading and using one. BenchmarkSQL has significant
bugs in it which affect the result. I can say that authoritatively as I
worked on/with it for quite a while. Don't trust any result that comes
On Fri, Feb 20, 2009 at 1:15 PM, Sergio Lopez sergio.lo...@nologin.es wrote:
On the other hand, I've never said that what I've done is the
Perfect-Marvelous-Definitive Benchmark; it's just a personal project,
and I don't have an infinite amount of time to invest on it.
When you make comments
On Fri, Feb 20, 2009 at 2:35 PM, Robert Haas robertmh...@gmail.com wrote:
First of all, you need to do some research on the benchmark kit itself,
rather than blindly downloading and using one. BenchmarkSQL has
significant
bugs in it which affect the result. I can say that authoritatively
On Fri, Feb 20, 2009 at 2:48 PM, Jonah H. Harris jonah.har...@gmail.com wrote:
Having this said, the benchmark is not as unfair as you thought. I've
taken care to prepare all databases to meet similar values for their
cache, buffers and I/O configuration (to what's possible given their
On Fri, Feb 20, 2009 at 2:48 PM, Jonah H. Harris jonah.har...@gmail.com wrote:
On Fri, Feb 20, 2009 at 1:15 PM, Sergio Lopez sergio.lo...@nologin.es
wrote:
On the other hand, I've never said that what I've done is the
Perfect-Marvelous-Definitive Benchmark; it's just a personal project,
and
On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure mmonc...@gmail.com wrote:
ISTM you are the one throwing out unsubstantiated assertions without
data to back them up. The OP ran the benchmark, showed hardware/configs,
and demonstrated the result. He was careful to hedge expectations and gave
rationale for
On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris jonah.har...@gmail.com wrote:
On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure mmonc...@gmail.com wrote:
ISTM you are the one throwing out unsubstantiated assertions without
data to back them up. The OP ran the benchmark, showed hardware/configs, and
On Fri, Feb 20, 2009 at 2:54 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris jonah.har...@gmail.com
wrote:
On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure mmonc...@gmail.com wrote:
ISTM you are the one throwing out unsubstantiated assertions
Robert Haas wrote:
The biggest flaw in the benchmark by far has got to be that it was
done with a ramdisk, so it's really only measuring CPU consumption.
Measuring CPU consumption is interesting, but it doesn't have a lot to
do with throughput in real-life situations.
... and memory
On Fri, 20 Feb 2009 14:48:06 -0500
Jonah H. Harris jonah.har...@gmail.com wrote:
On Fri, Feb 20, 2009 at 1:15 PM, Sergio Lopez
sergio.lo...@nologin.es wrote:
Having this said, the benchmark is not as unfair as you thought. I've
taken care to prepare all databases to meet similar values
On Fri, 20 Feb 2009 16:54:58 -0500
Robert Haas robertmh...@gmail.com wrote:
On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris
jonah.har...@gmail.com wrote:
On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure
mmonc...@gmail.com wrote:
ISTM you are the one throwing out unsubstantiated
[EMAIL PROTECTED] wrote:
WAL is on a RAID 0 drive along with the OS
Isn't that just as unsafe as having the whole lot on RAID0?
On Sun, Mar 16, 2008 at 12:04:44PM -0700, Craig James wrote:
Just out of curiosity: Last time I did research, the word seemed to be that
xfs was better than ext2 or ext3. Is that not true? Why use ext2/3 at all
if xfs is faster for Postgres?
For the WAL, the filesystem is largely
OK, I'm showing my ignorance of Linux. On Ubuntu I can't seem to figure
out whether the XFS file system is installed and, if not, how to get it
installed.
I would like to see the difference between XFS and ext2 performance
numbers.
Any pointers would be nice. I'm not going to reinstall the OS.
Justin wrote:
OK, I'm showing my ignorance of Linux. On Ubuntu I can't seem to figure
out whether the XFS file system is installed and, if not, how to get it
installed.
Hm? Installed/not installed? You can select that when you are preparing
your partitions. If you run the automated partitioner
On 17/03/2008, Justin [EMAIL PROTECTED] wrote:
OK, I'm showing my ignorance of Linux. On Ubuntu I can't seem to figure
out whether the XFS file system is installed and, if not, how to get it
installed.
...
Any pointers would be nice. I'm not going to reinstall the OS. Nor do
I want to
Justin wrote:
OK, I'm showing my ignorance of Linux. On Ubuntu I can't seem to figure
out whether the XFS file system is installed and, if not, how to get it
installed.
There are two parts to the file system, really. One is the kernel driver
for the file system. This is almost certainly available,
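On a Debian-flavoured system the two parts can be checked roughly like this; a sketch only, with /dev/sdb1 as a placeholder device (mkfs.xfs destroys existing data):

```
# Kernel driver: is xfs known to the running kernel?
grep xfs /proc/filesystems || sudo modprobe xfs
# Userspace tools: mkfs.xfs, xfs_db, etc. live in the xfsprogs package
sudo apt-get install xfsprogs
# Format and mount a spare partition (placeholder device name!)
sudo mkfs.xfs /dev/sdb1
sudo mount -t xfs /dev/sdb1 /mnt/pgdata
```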
Well, everything worked right up to the point where I tried to mount the
file system:
Warning: xfs_db: /dev/sdb1 contains a mounted file system
fatal error -- couldn't initialize XFS library.
I think I'm missing something?
Craig Ringer wrote:
Justin wrote:
OK, I'm showing my ignorance of
On 17-Mar-08, at 2:50 PM, Justin wrote:
Just out of curiosity: Last time I did research, the word seemed to
be that xfs was better than ext2 or ext3. Is that not true? Why
use ext2/3 at all if xfs is faster for Postgres?
Craig
Ext2 vs XFS: on my setup there is a difference in the
Justin wrote:
2000 tps ??? do you have fsync turned off ?
Dave
No its turned on.
Unless I'm seriously confused, something is wrong with these numbers. That's
the sort of performance you expect from a good-sized RAID 10 six-disk array.
With a single 7200 rpm SATA disk and XFS, I get 640
Craig James wrote:
Justin wrote:
2000 tps ??? do you have fsync turned off ?
Dave
No its turned on.
Unless I'm seriously confused, something is wrong with these numbers.
That's the sort of performance you expect from a good-sized RAID 10
six-disk array. With a single 7200 rpm SATA
Just out of curiosity: Last time I did research, the word seemed to be
that xfs was better than ext2 or ext3. Is that not true? Why use
ext2/3 at all if xfs is faster for Postgres?
Craig
Ext2 vs XFS: on my setup there is a difference in the performance between
the two file systems, but
Hi Justin,
On 17 Mar 2008, at 20:38, Justin wrote:
it is a RAID 10 controller with 6 SAS 10K 73 gig drives. The
server is 3 weeks old now.
it has 16 gigs of RAM
2 quad-core Xeon 1.88 GHz processors
2 gig Ethernet cards. RAID controller perc 6/i with battery backup
On Mon, Mar 17, 2008 at 2:58 PM, Enrico Sirola [EMAIL PROTECTED] wrote:
Hi Justin,
On 17 Mar 2008, at 20:38, Justin wrote:
it is a RAID 10 controller with 6 SAS 10K 73 gig drives. The
server is 3 weeks old now.
it has 16 gigs of RAM
2 quad-core Xeon 1.88 GHz
On 16-Mar-08, at 2:19 AM, Justin wrote:
I decided to reformat the RAID 10 into ext2 to see if there was any
real big difference in performance, as some people have noted. Here
are the test results.
Please note the WAL files are still on the RAID 0 set, which is still
in the ext3 file system
Dave Cramer wrote:
On 16-Mar-08, at 2:19 AM, Justin wrote:
I decided to reformat the RAID 10 into ext2 to see if there was any
real big difference in performance, as some people have noted. Here are
the test results.
Please note the WAL files are still on the RAID 0 set, which is still
in
Craig James wrote:
Dave Cramer wrote:
On 16-Mar-08, at 2:19 AM, Justin wrote:
I decided to reformat the RAID 10 into ext2 to see if there was any
real big difference in performance, as some people have noted. Here
are the test results.
Please note the WAL files are still on the RAID 0 set
On 16-Mar-08, at 3:04 PM, Craig James wrote:
Dave Cramer wrote:
On 16-Mar-08, at 2:19 AM, Justin wrote:
I decided to reformat the RAID 10 into ext2 to see if there was
any real big difference in performance, as some people have noted.
Here are the test results.
Please note the WAL files
On Sun, Mar 16, 2008 at 1:36 PM, Dave Cramer [EMAIL PROTECTED] wrote:
On 16-Mar-08, at 3:04 PM, Craig James wrote:
Just out of curiosity: Last time I did research, the word seemed to
be that xfs was better than ext2 or ext3. Is that not true? Why
use ext2/3 at all if xfs is faster
I decided to reformat the RAID 10 into ext2 to see if there was any real
big difference in performance, as some people have noted. Here are the
test results.
Please note the WAL files are still on the RAID 0 set, which is still in
ext3 file system format. These tests were run with the fsync
On Thu, Mar 13, 2008 at 4:53 PM, justin [EMAIL PROTECTED] wrote:
I ran pgbench from my laptop to the new server.
My laptop is dual-core with 2 gigs of RAM and a 1-gig Ethernet connection to
the server, so I don't think the network is going to be a problem in the test.
When I look at the
On Thu, Mar 13, 2008 at 3:09 PM, justin [EMAIL PROTECTED] wrote:
I chose to use ext3 on these partitions
You should really consider another file system. ext3 has two flaws
that mean I can't really use it properly. A 2TB file system size
limit (at least on the servers I've tested) and it locks
On Fri, Mar 14, 2008 at 12:17 AM, Jesper Krogh [EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
On Thu, Mar 13, 2008 at 3:09 PM, justin [EMAIL PROTECTED] wrote:
I chose to use ext3 on these partitions
You should really consider another file system. ext3 has two flaws
that mean I
On Fri, Mar 14, 2008 at 12:19 AM, Scott Marlowe [EMAIL PROTECTED] wrote:
On Fri, Mar 14, 2008 at 12:17 AM, Jesper Krogh [EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
On Thu, Mar 13, 2008 at 3:09 PM, justin [EMAIL PROTECTED] wrote:
I chose to use ext3 on these partitions
Scott Marlowe wrote:
On Fri, Mar 14, 2008 at 12:17 AM, Jesper Krogh [EMAIL PROTECTED] wrote:
Scott Marlowe wrote:
On Thu, Mar 13, 2008 at 3:09 PM, justin [EMAIL PROTECTED] wrote:
I chose to use ext3 on these partitions
You should really consider another file system. ext3 has two
Scott Marlowe wrote:
On Thu, Mar 13, 2008 at 3:09 PM, justin [EMAIL PROTECTED] wrote:
I chose to use ext3 on these partitions
You should really consider another file system. ext3 has two flaws
that mean I can't really use it properly. A 2TB file system size
limit (at least on the servers
On Fri, 14 Mar 2008, Justin wrote:
I played with shared_buffers and never saw much of an improvement from
100 all the way up to 800 megs; moved the checkpoint settings from 3 to 30
and still saw no movement in the numbers.
Increasing shared_buffers normally improves performance as the size of
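As a rough sketch of the era's conventional wisdom for a 16 GB machine (values are illustrative only, not taken from this thread's actual configs):

```
# postgresql.conf fragment -- illustrative starting points
shared_buffers = 2GB          # ~25% of RAM is the usual rule of thumb
checkpoint_segments = 30      # fewer, larger checkpoints
effective_cache_size = 8GB    # a hint about the OS cache, not an allocation
```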
I recently got a new server, also from Dell, 2 weeks ago;
went with more memory, a slower CPU, and smaller hard drives.
have not run pgbench
Dell PE 2950 III
2 Quad Core 1.866 Ghz
16 gigs of ram.
8 hard drives 73Gig 10k RPM SAS
2 drives mirrored for OS, binaries, and WAL
6 in
I did not run into one install problem; I read a thread about people having
problems, but the thread is over a year old now.
I used the 7.10 Gutsy amd64 server version.
I then installed the GNOME desktop because it's not installed by default. I'm a
Windows admin; I have to have my GUI.
then
Justin,
This may be a bit out of context, but did you run into any troubles
getting your Perc 6/i RAID controller to work under Ubuntu 7.10? I've
heard there were issues with that.
Thanks,
Will
On Mar 13, 2008, at 3:11 AM, Justin Graf wrote:
I recent just got a new server also from dell 2
Joshua D. Drake wrote:
On Wed, 12 Mar 2008 21:55:18 -0700
Craig James [EMAIL PROTECTED] wrote:
Diffs from original configuration:
max_connections = 1000
shared_buffers = 400MB
work_mem = 256MB
max_fsm_pages = 100
max_fsm_relations = 5000
Absolutely on the battery backup.
I did not load the Linux drivers from Dell; it works, so I figured I was not
going to worry about it. This server is so oversized for its load it's
unreal. I have always gone way overboard on server specs and making sure
it's redundant.
The difference in
On Thu, 13 Mar 2008 12:01:50 -0400 (EDT)
Greg Smith [EMAIL PROTECTED] wrote:
On Thu, 13 Mar 2008, Craig James wrote:
wal_sync_method = open_sync
There was a bug report I haven't had a chance to investigate yet that
suggested some recent
On Thu, 13 Mar 2008, Joshua D. Drake wrote:
Greg Smith [EMAIL PROTECTED] wrote:
wal_sync_method = open_sync
There was a bug report I haven't had a chance to investigate yet that
suggested some recent Linux versions have issues when using
open_sync. I'd suggest popping that back to the
- Original Message -
From: Greg Smith [EMAIL PROTECTED]
To: pgsql-performance@postgresql.org
Sent: Thursday, March 13, 2008 4:27 PM
Subject: Re: [PERFORM] Benchmark: Dell/Perc 6, 8 disk RAID 10
On Thu, 13 Mar 2008, Joshua D. Drake wrote:
Greg Smith [EMAIL PROTECTED] wrote
On Wed, Mar 12, 2008 at 9:55 PM, Craig James [EMAIL PROTECTED] wrote:
I just received a new server and thought benchmarks would be interesting. I
think this looks pretty good, but maybe there are some suggestions about the
configuration file. This is a web app, a mix of read/write, where
On Wed, 12 Mar 2008 21:55:18 -0700
Craig James [EMAIL PROTECTED] wrote:
Diffs from original configuration:
max_connections = 1000
shared_buffers = 400MB
work_mem = 256MB
max_fsm_pages = 100
max_fsm_relations = 5000
wal_buffers = 256kB
On Mon, 4 Feb 2008 15:09:58 -0500 (EST)
Greg Smith [EMAIL PROTECTED] wrote:
On Mon, 4 Feb 2008, Simon Riggs wrote:
Would anybody like to repeat these tests with the latest production
versions of these databases (i.e. with PGSQL 8.3)
Do you have any suggestions on how people should run
On Mon, 04 Feb 2008 17:33:34 -0500
Jignesh K. Shah [EMAIL PROTECTED] wrote:
Hi Simon,
I have some insight into TPC-H on how it works.
First of all I think it is a violation of TPC rules to publish numbers
without auditing them first. So even if I do the test to show the
better
On Thursday 07 February 2008, Greg Smith wrote:
On Wednesday 06 February 2008, Dimitri Fontaine wrote:
In other cases, a logical line is a physical line, so we start after the
first newline found from the given lseek start position, and continue reading
past the last lseek position until a newline.
On Thu, Feb 07, 2008 at 12:06:42PM -0500, Greg Smith wrote:
On Thu, 7 Feb 2008, Dimitri Fontaine wrote:
I was thinking of not even reading the file content from the controller
thread, just decide splitting points in bytes (0..ST_SIZE/4,
ST_SIZE/4+1..2*ST_SIZE/4, etc.) and let the reading
On Thu, 7 Feb 2008, Dimitri Fontaine wrote:
I was thinking of not even reading the file content from the controller
thread, just decide splitting points in bytes (0..ST_SIZE/4,
ST_SIZE/4+1..2*ST_SIZE/4, etc.) and let the reading thread fine-tune by
beginning to process input after having read
I was thinking of not even reading the file content from the controller
thread, just decide splitting points in bytes (0..ST_SIZE/4,
ST_SIZE/4+1..2*ST_SIZE/4, etc.) and let the reading thread fine-tune by
beginning to process input after having read the first newline, etc.
The problem I was
On Thu, 7 Feb 2008, Greg Smith wrote:
The problem I was pointing out is that if chunk #2 moved forward a few bytes
before it started reading in search of a newline, how will chunk #1 know that
it's supposed to read up to that further point? You have to stop #1 from
reading further when it
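One common resolution needs no coordination at all: agree on a convention where each reader owns every line that *begins* inside its byte range, skips forward past its start offset to the next line beginning, and reads one line past its end offset when a line straddles it. A sketch of that idea (my own illustration, not pgloader's actual code):

```python
import io

def split_points(size, n):
    """Nominal byte offsets dividing a file of `size` bytes into n chunks."""
    return [size * i // n for i in range(n + 1)]

def read_chunk(f, start, end):
    """Return the lines owned by the byte range [start, end).

    Convention: a chunk owns every line that begins at or after its
    start offset and before its end offset.  Both neighbours apply the
    same rule at a shared boundary, so chunk #1 never needs to know how
    far chunk #2 skipped forward.
    """
    if start == 0:
        f.seek(0)
    else:
        # Back up one byte and discard through the next newline: this
        # skips any line straddling `start` (the previous chunk owns it).
        f.seek(start - 1)
        f.readline()
    lines = []
    while f.tell() < end:
        line = f.readline()
        if not line:          # end of file
            break
        lines.append(line)
    return lines
```

With this rule every line is read exactly once, even though the nominal split points usually land mid-line.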
On Wednesday 06 February 2008, Greg Smith wrote:
pgloader is a great tool for a lot of things, particularly if there's any
chance that some of your rows will get rejected. But the way things pass
through the Python/psycopg layer made it uncompetitive (more than 50%
slowdown) against the
On Wed, 2008-02-06 at 12:27 +0100, Dimitri Fontaine wrote:
Multi-Threading behavior and CE support
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Now, pgloader will be able to run N threads, each one loading some data to a
partitioned child-table target. N will certainly be configured depending on
Hi,
I've been thinking about this topic some more, and as I don't know when I'll
be able to go and implement it, I want to publish the ideas here. This way
I'll be able to find them again :)
On Tuesday 05 February 2008, Dimitri Fontaine wrote:
On Tuesday 05 February 2008, Simon Riggs wrote:
On Wednesday 06 February 2008, Simon Riggs wrote:
For me, it would be good to see a --parallel=n parameter that would
allow pg_loader to distribute rows in round-robin manner to n
different concurrent COPY statements. i.e. a non-routing version.
What happens when you want at most N parallel
On Wed, 6 Feb 2008, Dimitri Fontaine wrote:
Did you compare to COPY or \copy?
COPY. If you're loading a TB, if you're smart it's going onto the server
itself if at all possible and loading directly from there. Would probably
get a closer comparison against psql \copy, but recognize
On Wed, 6 Feb 2008, Simon Riggs wrote:
For me, it would be good to see a --parallel=n parameter that would
allow pg_loader to distribute rows in round-robin manner to n
different concurrent COPY statements. i.e. a non-routing version.
Let me expand on this. In many of these giant COPY
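The round-robin distribution itself is trivial; a sketch of the proposed (and at this point hypothetical) --parallel=n behaviour, with each bucket standing in for one concurrent COPY stream:

```python
from itertools import cycle

def round_robin(rows, n):
    """Distribute rows across n buckets in round-robin order.

    Each bucket stands in for one concurrent COPY stream in the
    hypothetical --parallel=n mode; no routing logic is applied,
    row i simply lands in bucket i % n.
    """
    buckets = [[] for _ in range(n)]
    for bucket, row in zip(cycle(buckets), rows):
        bucket.append(row)
    return buckets
```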
Hi Greg,
On 2/6/08 7:56 AM, Greg Smith [EMAIL PROTECTED] wrote:
If I'm loading a TB file, odds are good I can split that into 4 or more
vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4 loaders
at once, and get way more than 1 disk worth of throughput reading. You
have to
Greg Smith wrote:
On Wed, 6 Feb 2008, Simon Riggs wrote:
For me, it would be good to see a --parallel=n parameter that would
allow pg_loader to distribute rows in round-robin manner to n
different concurrent COPY statements. i.e. a non-routing version.
Let me expand on this. In many of
On Wednesday 06 February 2008, Greg Smith wrote:
COPY. If you're loading a TB, if you're smart it's going onto the server
itself if at all possible and loading directly from there. Would probably
get a closer comparison against psql \copy, but recognize you're always
going to be compared
On Wednesday 06 February 2008, Greg Smith wrote:
If I'm loading a TB file, odds are good I can split that into 4 or more
vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4 loaders
at once, and get way more than 1 disk worth of throughput reading.
pgloader already supports
[mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 06, 2008 12:41 PM Eastern Standard Time
To: pgsql-performance@postgresql.org
Cc: Greg Smith
Subject: Re: [PERFORM] Benchmark Data requested --- pgloader CE design
ideas
On Wednesday 06 February 2008, Greg Smith wrote:
If I'm
On Wednesday 06 February 2008 18:37:41, Dimitri Fontaine wrote:
On Wednesday 06 February 2008, Greg Smith wrote:
If I'm loading a TB file, odds are good I can split that into 4 or more
vertical pieces (say rows 1-25%, 25-50%, 50-75%, 75-100%), start 4
loaders at once, and get
On Wednesday 06 February 2008 18:49:56, Luke Lonergan wrote:
Improvements are welcome, but to compete in the industry, loading will need
to speed up by a factor of 100.
Oh, I meant to compete with the internal COPY command instead of the \copy
one, not with the competition. AIUI, competing
On Wed, 6 Feb 2008, Dimitri Fontaine wrote:
In fact, the -F option works by having pgloader read the given number of
lines but skip processing them, which is not at all what Greg is talking
about here, I think.
Yeah, that's not useful.
Greg, what would you think of a pgloader which will
On Mon, 2008-02-04 at 17:33 -0500, Jignesh K. Shah wrote:
First of all I think it is a violation of TPC rules to publish numbers
without auditing them first. So even if I do the test to show the
better performance of PostgreSQL 8.3, I cannot post it here or any
public forum without doing
On Mon, 2008-02-04 at 17:55 -0500, Jignesh K. Shah wrote:
Doing it at low scales is not attractive.
Commercial databases are publishing at a scale factor of 1000 (about 1 TB)
up to 10000 (10 TB), with one at 30 TB. So ideally, right now, tuning
should start at the 1000 scale factor.
I don't
Hi,
On Monday 04 February 2008, Jignesh K. Shah wrote:
Single stream loader of PostgreSQL takes hours to load data. (Single
stream load... wasting all the extra cores out there)
I wanted to work on this at the pgloader level, so the CVS version of
pgloader is now able to load data in parallel,
On Tue, 2008-02-05 at 14:43 +, Richard Huxton wrote:
Simon Riggs wrote:
On Tue, 2008-02-05 at 15:06 +0100, Dimitri Fontaine wrote:
On Monday 04 February 2008, Jignesh K. Shah wrote:
Multiple table loads (1 per table) spawned via script are a bit better
but hit WAL problems.
On Tue, 5 Feb 2008, Richard Huxton wrote:
In the case of a bulk upload to an empty table (or partition?) could you not
optimise the WAL away?
Argh. If I hadn't had to retype my email, I would have suggested that
before you.
;)
Matthew
On Tue, 2008-02-05 at 15:06 +0100, Dimitri Fontaine wrote:
Hi,
On Monday 04 February 2008, Jignesh K. Shah wrote:
Single stream loader of PostgreSQL takes hours to load data. (Single
stream load... wasting all the extra cores out there)
I wanted to work on this at the pgloader level,
Simon Riggs wrote:
On Tue, 2008-02-05 at 14:43 +, Richard Huxton wrote:
Simon Riggs wrote:
On Tue, 2008-02-05 at 15:06 +0100, Dimitri Fontaine wrote:
On Monday 04 February 2008, Jignesh K. Shah wrote:
Multiple table loads (1 per table) spawned via script are a bit better
but hit WAL
On Tue, 2008-02-05 at 15:05 +, Richard Huxton wrote:
Only by locking the table, which serializes access, which then slows you
down or at least restricts other options. Plus if you use pg_loader then
you'll find only the first few rows optimized and all the rest not.
Hmm - the
On Tue, 5 Feb 2008, Simon Riggs wrote:
In the case of a bulk upload to an empty table (or partition?) could you
not optimise the WAL away? That is, shouldn't the WAL basically be a
simple transformation of the on-disk blocks? You'd have to explicitly
sync the file(s) for the table/indexes of
Simon Riggs wrote:
On Tue, 2008-02-05 at 15:05 +, Richard Huxton wrote:
Only by locking the table, which serializes access, which then slows you
down or at least restricts other options. Plus if you use pg_loader then
you'll find only the first few rows optimized and all the rest not.
Hmm