On Mon, Jun 14, 2010 at 15:59, Tom Lane t...@sss.pgh.pa.us wrote:
Magnus Hagander mag...@hagander.net writes:
On Sun, Jun 13, 2010 at 21:19, David Jarvis thanga...@gmail.com wrote:
I prefer to_timestamp and to_date over the more verbose construct_timestamp.
Yeah, I agree with that.
Those
On Jun 15, 4:18 pm, j...@agliodbs.com (Josh Berkus) wrote:
On 6/15/10 10:37 AM, Chris Browne wrote:
I'd like to see some figures about WAL on RAMfs vs. simply turning off
fsync and full_page_writes. Per Gavin's tests, PostgreSQL is already
close to TokyoCabinet/MongoDB performance just with
On 16/06/10 18:30, jgard...@jonathangardner.net wrote:
On Jun 15, 4:18 pm, j...@agliodbs.com (Josh Berkus) wrote:
On 6/15/10 10:37 AM, Chris Browne wrote:
I'd like to see some figures about WAL on RAMfs vs. simply turning off
fsync and full_page_writes. Per Gavin's tests, PostgreSQL is
Have you tried connecting using a UNIX socket instead of a TCP socket on
localhost ? On such very short queries, the TCP overhead is significant.
Actually UNIX sockets are the default for psycopg2, had forgotten that.
I get 7400 using UNIX sockets and 3000 using TCP (host=localhost)
jgard...@jonathangardner.net wrote:
NOTE: If I do one giant commit instead of lots of littler ones, I get
much better speeds for the slower cases, but I never exceed 5,500
which appears to be some kind of wall I can't break through.
That's usually about where I run into the upper limit on
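The commit-batching effect described above is easy to demonstrate with any transactional store. A self-contained sketch using the standard library's sqlite3 (not PostgreSQL, but the same one-commit-per-row vs. one-big-commit pattern; table and row counts are arbitrary):

```python
import sqlite3
import time

def insert_rows(conn, n, commit_every):
    """Insert n rows, committing after every `commit_every` rows."""
    cur = conn.cursor()
    start = time.time()
    for i in range(n):
        cur.execute("INSERT INTO test (data) VALUES (?)", (i,))
        if (i + 1) % commit_every == 0:
            conn.commit()
    conn.commit()
    return time.time() - start

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (data int not null)")

# One commit per row vs. a single commit for the whole batch.
per_row = insert_rows(conn, 10000, commit_every=1)
batched = insert_rows(conn, 10000, commit_every=10000)
print(f"per-row: {per_row:.3f}s  batched: {batched:.3f}s")
```

Each commit is a synchronization point, so committing per row caps the insert rate at roughly the commit rate the storage (or WAL) can sustain.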
Fair enough. How about something like make_timestamp? It's at least
shorter and easier than construct :-)
Agreed.
Dave
FYI I've tweaked this program a bit :
import psycopg2
from time import time
conn = psycopg2.connect(database='peufeu')
cursor = conn.cursor()
cursor.execute("CREATE TEMPORARY TABLE test (data int not null)")
conn.commit()
cursor.execute("PREPARE ins AS INSERT INTO test VALUES ($1)")
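The quoted script breaks off after the PREPARE; presumably the remainder times a loop of EXECUTE calls and reports a rate. A generic version of such a timing loop (a sketch that runs without PostgreSQL — a no-op callable stands in for the psycopg2 cursor work):

```python
import time

def ops_per_second(fn, n):
    """Call fn() n times and return the measured rate (calls/second)."""
    start = time.time()
    for _ in range(n):
        fn()
    elapsed = time.time() - start
    return n / elapsed if elapsed > 0 else float("inf")

# With psycopg2 one would pass e.g.
#   lambda: cursor.execute("EXECUTE ins (%s)", (42,))
# Here a no-op stands in so the sketch is self-contained.
rate = ops_per_second(lambda: None, 100_000)
print(f"{rate:.0f} ops/s")
```

Timing the whole loop once, rather than calling time.time() per operation, keeps the clock-read overhead out of the measurement — a later message in this thread notes that time.time() itself can dominate such micro-benchmarks.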
Excerpts from jgard...@jonathangardner.net's message of Wed Jun 16 02:30:30
-0400 2010:
NOTE: If I do one giant commit instead of lots of littler ones, I get
much better speeds for the slower cases, but I never exceed 5,500
which appears to be some kind of wall I can't break through.
If
* fsync=off = 5,100
* fsync=off and synchronous_commit=off = 5,500
Now, this *is* interesting ... why should synch_commit make a difference
if fsync is off?
Anyone have any ideas?
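For reference, the settings being toggled in these tests live in postgresql.conf; a sketch of the relevant lines (all three trade durability for speed, so they are only sensible for throwaway data like this in-memory test):

```
fsync = off                 # don't force WAL writes out to disk
synchronous_commit = off    # report COMMIT success before the WAL flush
full_page_writes = off      # skip full-page images after checkpoints
```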
tmpfs, WAL on same tmpfs:
* Default config: 5,200
* full_page_writes=off = 5,200
* fsync=off = 5,250
*
Good morning List
In relation to the process of tuning the PostgreSQL database engine, especially
version 7.3.7, which is currently in use, I would appreciate clarification of a
procedure.
If I have a server with 2 GB of RAM, it is said that the shared memory segment
for
the engine of the database on a
On Wed, Jun 16, 2010 at 1:27 AM, Greg Smith g...@2ndquadrant.com wrote:
I normally just write little performance test cases in the pgbench scripting
language, then I get multiple clients and (in 9.0) multiple driver threads
all for free.
See, this is why I love these mailing lists. I totally
On Wed, Jun 16, 2010 at 4:22 AM, Pierre C li...@peufeu.com wrote:
import psycopg2
from time import time
conn = psycopg2.connect(database='peufeu')
cursor = conn.cursor()
cursor.execute("CREATE TEMPORARY TABLE test (data int not null)")
conn.commit()
cursor.execute(PREPARE ins AS INSERT INTO
On Wed, Jun 16, 2010 at 12:00 PM, Josh Berkus j...@agliodbs.com wrote:
* fsync=off = 5,100
* fsync=off and synchronous_commit=off = 5,500
Now, this *is* interesting ... why should synch_commit make a difference
if fsync is off?
Anyone have any ideas?
I may have stumbled upon this by my
Juan Pablo Sandoval Rivera juapab...@tutopia.com wrote:
In relation to the process of tuning the PostgreSQL database engine,
especially version 7.3.7, which is currently in use,
Do you really mean PostgreSQL version 7.3.7 (not 8.3.7)? If so, you
should really consider upgrading.
http://www.postgresql.org/docs/current/static/wal-async-commit.html
the server waits for the transaction's WAL records to be flushed to permanent
storage before returning a success indication to the client.
I think with fsync=off, whether WAL gets written to disk or not is still
controlled by
On Wed, Jun 16, 2010 at 7:46 AM, Juan Pablo Sandoval Rivera
juapab...@tutopia.com wrote:
Good morning List
In relation to the process of tuning the PostgreSQL database engine,
especially
version 7.3.7, which is currently in use, I would appreciate clarification of a procedure
I concur with the other
On Wed, Jun 16, 2010 at 12:51 AM, Pierre C li...@peufeu.com wrote:
Have you tried connecting using a UNIX socket instead of a TCP socket on
localhost ? On such very short queries, the TCP overhead is significant.
Unfortunately, this isn't an option for my use case. Carbonado only
supports TCP
On 6/16/10 12:00 PM, Josh Berkus wrote:
* fsync=off = 5,100
* fsync=off and synchronous_commit=off = 5,500
Now, this *is* interesting ... why should synch_commit make a difference
if fsync is off?
Anyone have any ideas?
I found that pgbench has noise of about 20% (I posted about this a
I'm not surprised that Python add is so slow, but I am surprised that
I didn't remember it was... ;-)
it's not the add(), it's the time.time()...
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
David Jarvis thanga...@gmail.com writes:
Fair enough. How about something like make_timestamp? It's at least
shorter and easier than construct :-)
Agreed.
No objection here either.
regards, tom lane
Hello, I will have a web application having postgres 8.4+ as backend. At any
given time, there will be a max of 1000 parallel web-users interacting with the
database (read/write). I wish to do performance testing of 1000 simultaneous
read/write operations to the database.
I can do a simple unix script on the
Excerpts from Tom Lane's message of Mon Jun 14 23:57:11 -0400 2010:
Scott Carey sc...@richrelevance.com writes:
Great points. There is one other option that is decent for the WAL:
If splitting out a volume is not acceptable for the OS and WAL --
absolutely split those two out into their
We have a fairly unique need for a local, in-memory cache. This will
store data aggregated from other sources. Generating the data only
takes a few minutes, and it is updated often. There will be some
fairly expensive queries of arbitrary complexity run at a fairly high
rate. We're looking for
Thanks. I will try with a more sensible value of wal_buffers.. I was
hoping to keep more in memory and therefore reduce the frequency of disk
IOs..
Any suggestions for good monitoring software for linux?
On 15/06/2010 00:08, Robert Haas wrote:
On Mon, Jun 14, 2010 at 2:53 PM, Tom
Balkrishna Sharma b...@hotmail.com wrote:
I wish to do performance testing of 1000 simultaneous read/write
to the database.
You should definitely be using a connection pool of some sort. Both
your throughput and response time will be better that way. You'll
want to test with different pool
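The pooling idea can be sketched in a few lines of Python (a toy illustration only; in practice one would use pgbouncer or psycopg2's psycopg2.pool, and the stub connection factory below stands in for a real connect call):

```python
import queue

class ConnectionPool:
    """A fixed set of connections shared by many clients: callers block
    until one is free, which caps the number of concurrent sessions."""

    def __init__(self, connect, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())

    def acquire(self):
        return self._idle.get()  # blocks while all connections are busy

    def release(self, conn):
        self._idle.put(conn)

# Stub factory standing in for a real psycopg2.connect(...) call.
pool = ConnectionPool(connect=lambda: object(), size=5)
conn = pool.acquire()
# ... run queries on conn ...
pool.release(conn)
```

This is why both throughput and response time improve: the database sees a small, steady number of sessions instead of 1000 competing backends.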
On Jun 14, 7:14 pm, jgard...@jonathangardner.net
jgard...@jonathangardner.net wrote:
We have a fairly unique need for a local, in-memory cache. This will
store data aggregated from other sources. Generating the data only
takes a few minutes, and it is updated often. There will be some
fairly