Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2011-01-27 Thread Jeff Janes
On Tue, Jan 25, 2011 at 5:32 PM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On Wed, Jan 19, 2011 at 12:07 PM, Bruce Momjian br...@momjian.us wrote:

        http://developer.postgresql.org/pgdocs/postgres/non-durability.html

 This sentence looks to me like it should be removed, or perhaps clarified:

     This does affect database crash transaction durability.

 Uh, doesn't it affect database crash transaction durability?  I have
 applied the attached patch to clarify things.  Thanks.

I think the point that was trying to be made there was that the other
parameters only lose and corrupt data when the machine crashes.
With synchronous commit turned off, data can be lost on a mere
PostgreSQL server crash; it doesn't take a machine-level crash to
cause data loss.
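
(As an aside, synchronous_commit is the only one of these settings that
can be changed per session or even per transaction, so the trade-off can
be scoped narrowly.  A minimal sketch, assuming a throwaway table t:

    BEGIN;
    SET LOCAL synchronous_commit = off;
    INSERT INTO t VALUES (1);
    COMMIT;  -- may return before the commit's WAL record reaches disk

A server crash shortly after the COMMIT can lose that transaction, but
the database itself stays consistent.)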

Indeed, the currently committed doc is quite misleading.

 The following are configuration changes you can make
to improve performance in such cases;  they do not invalidate
commit guarantees related to database crashes, only abrupt operating
system stoppage, except as mentioned below

We've now removed the thing being mentioned below, but did not remove
the promise that we would be mentioning those things.

Cheers,

Jeff



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2011-01-27 Thread Bruce Momjian
Jeff Janes wrote:
 On Tue, Jan 25, 2011 at 5:32 PM, Bruce Momjian br...@momjian.us wrote:
  Robert Haas wrote:
  On Wed, Jan 19, 2011 at 12:07 PM, Bruce Momjian br...@momjian.us wrote:
 
    http://developer.postgresql.org/pgdocs/postgres/non-durability.html
 
  This sentence looks to me like it should be removed, or perhaps clarified:
 
      This does affect database crash transaction durability.
 
  Uh, doesn't it affect database crash transaction durability?  I have
  applied the attached patch to clarify things.  Thanks.
 
 I think the point that was trying to be made there was that the other
 parameters only lose and corrupt data when the machine crashes.
 With synchronous commit turned off, data can be lost on a mere
 PostgreSQL server crash; it doesn't take a machine-level crash to
 cause data loss.
 
 Indeed, the currently committed doc is quite misleading.
 
  The following are configuration changes you can make
 to improve performance in such cases;  they do not invalidate
 commit guarantees related to database crashes, only abrupt operating
 system stoppage, except as mentioned below
 
 We've now removed the thing being mentioned below, but did not remove
 the promise that we would be mentioning those things.

Excellent point.  The old wording was just too clever and even I forgot
why I was making that point.  I have updated the docs to clearly state
why this setting is different from the ones above.  Thanks for spotting
this.

Applied patch attached.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index 0a10457..1bec5b1 100644
*** a/doc/src/sgml/perform.sgml
--- b/doc/src/sgml/perform.sgml
*************** SELECT * FROM x, y, a, b, c WHERE someth
*** 1157,1165 ****
  
    <listitem>
     <para>
!     Turn off <xref linkend="guc-synchronous-commit">;  there is no
      need to write the <acronym>WAL</acronym> to disk on every
!     commit.
     </para>
    </listitem>
   </itemizedlist>
--- 1157,1166 ----
  
    <listitem>
     <para>
!     Turn off <xref linkend="guc-synchronous-commit">;  there might be no
      need to write the <acronym>WAL</acronym> to disk on every
!     commit.  This does enable possible transaction loss in case of
!     a <emphasis>database</> crash.
     </para>
    </listitem>
   </itemizedlist>



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2011-01-25 Thread Bruce Momjian
Robert Haas wrote:
 On Wed, Jan 19, 2011 at 12:07 PM, Bruce Momjian br...@momjian.us wrote:
  Chris Browne wrote:
  gentosa...@gmail.com (A B) writes:
   If you just wanted PostgreSQL to go as fast as possible WITHOUT any
   care for your data (you accept 100% dataloss and datacorruption if any
   error should occur), what settings should you use then?
 
  Use /dev/null.  It is web scale, and there are good tutorials.
 
  But seriously, there *are* cases where blind speed is of use.
  Loading data into a fresh database is a good time for this; if things
  fall over, it may be pretty acceptable to start from scratch with
  mkfs/initdb.
 
  I'd:
  - turn off fsync
  - turn off synchronous commit
  - put as much as possible onto Ramdisk/tmpfs/similar
 
  FYI, we do have a documentation section about how to configure Postgres
  for improved performance if you don't care about durability:
 
        http://developer.postgresql.org/pgdocs/postgres/non-durability.html
 
 This sentence looks to me like it should be removed, or perhaps clarified:
 
 This does affect database crash transaction durability.

Uh, doesn't it affect database crash transaction durability?  I have
applied the attached patch to clarify things.  Thanks.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index 2699828..fb55598 100644
*** a/doc/src/sgml/perform.sgml
--- b/doc/src/sgml/perform.sgml
*************** SELECT * FROM x, y, a, b, c WHERE someth
*** 1159,1165 ****
     <para>
      Turn off <xref linkend="guc-synchronous-commit">;  there might be no
      need to write the <acronym>WAL</acronym> to disk on every
!     commit.  This does affect database crash transaction durability.
     </para>
    </listitem>
   </itemizedlist>
--- 1159,1165 ----
     <para>
      Turn off <xref linkend="guc-synchronous-commit">;  there might be no
      need to write the <acronym>WAL</acronym> to disk on every
!     commit.  This can cause transaction loss after a server crash.
     </para>
    </listitem>
   </itemizedlist>



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2011-01-20 Thread Robert Haas
On Wed, Jan 19, 2011 at 12:07 PM, Bruce Momjian br...@momjian.us wrote:
 Chris Browne wrote:
 gentosa...@gmail.com (A B) writes:
  If you just wanted PostgreSQL to go as fast as possible WITHOUT any
  care for your data (you accept 100% dataloss and datacorruption if any
  error should occur), what settings should you use then?

 Use /dev/null.  It is web scale, and there are good tutorials.

 But seriously, there *are* cases where blind speed is of use.
 Loading data into a fresh database is a good time for this; if things
 fall over, it may be pretty acceptable to start from scratch with
 mkfs/initdb.

 I'd:
 - turn off fsync
 - turn off synchronous commit
 - put as much as possible onto Ramdisk/tmpfs/similar

 FYI, we do have a documentation section about how to configure Postgres
 for improved performance if you don't care about durability:

        http://developer.postgresql.org/pgdocs/postgres/non-durability.html

This sentence looks to me like it should be removed, or perhaps clarified:

This does affect database crash transaction durability.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2011-01-19 Thread Bruce Momjian
Chris Browne wrote:
 gentosa...@gmail.com (A B) writes:
  If you just wanted PostgreSQL to go as fast as possible WITHOUT any
  care for your data (you accept 100% dataloss and datacorruption if any
  error should occur), what settings should you use then?
 
 Use /dev/null.  It is web scale, and there are good tutorials.
 
 But seriously, there *are* cases where blind speed is of use.
 Loading data into a fresh database is a good time for this; if things
 fall over, it may be pretty acceptable to start from scratch with
 mkfs/initdb.
 
 I'd:
 - turn off fsync
 - turn off synchronous commit
 - put as much as possible onto Ramdisk/tmpfs/similar

FYI, we do have a documentation section about how to configure Postgres
for improved performance if you don't care about durability:

http://developer.postgresql.org/pgdocs/postgres/non-durability.html

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2011-01-19 Thread Fabrízio de Royes Mello
2011/1/19 Bruce Momjian br...@momjian.us


 FYI, we do have a documentation section about how to configure Postgres
 for improved performance if you don't care about durability:

http://developer.postgresql.org/pgdocs/postgres/non-durability.html



Some time ago I wrote on my blog [1] (sorry, it's available only in
pt-br) about how to create an in-memory database with PostgreSQL. This
short article is based on a post by Robert Haas on this topic [2].

[1]
http://fabriziomello.blogspot.com/2010/06/postgresql-na-memoria-ram-in-memory.html
[2]
http://rhaas.blogspot.com/2010/06/postgresql-as-in-memory-only-database_24.html

-- 
Fabrízio de Royes Mello
 IT blog: http://fabriziomello.blogspot.com
 LinkedIn profile: http://br.linkedin.com/in/fabriziomello


Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-15 Thread Robert Haas
On Fri, Nov 5, 2010 at 8:12 AM, Jon Nelson jnelson+pg...@jamponi.net wrote:
 On Fri, Nov 5, 2010 at 7:08 AM, Guillaume Cottenceau g...@mnc.ch wrote:
 Marti Raudsepp marti 'at' juffo.org writes:

 On Fri, Nov 5, 2010 at 13:32, A B gentosa...@gmail.com wrote:
 I was just thinking about the case where I will have almost 100%
 selects, but still need something better than a plain key-value
 storage so I can do some SQL queries.
 The server will just boot, load data, and run; hopefully it won't
 crash, but if it does, just start over with load and run.

 If you want fast read queries then changing
 fsync/full_page_writes/synchronous_commit won't help you.

 That illustrates how knowing the reasoning behind this particular
 request makes new suggestions worthwhile, while previous ones
 are now seen as useless.

 I disagree that they are useless - the stated mechanism was start,
 load data, and run. Changing the params above won't likely change
 much in the 'run' stage but would they help in the 'load' stage?

Yes, they certainly will.  And they might well help in the run stage,
too, if there are temporary tables in use, or checkpoints flushing
hint bit updates, or such things.

It's also important to crank up checkpoint_segments and
checkpoint_timeout very high, especially for the bulk data load but
even afterwards if there is any write activity at all.  And it's
important to set shared_buffers correctly, too, which helps on
workloads of all kinds.  But as said upthread, turning off fsync,
full_page_writes, and synchronous_commit is what specifically trades
reliability away to get speed.
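
For the bulk-load case, a postgresql.conf sketch might look like this
(the values are illustrative starting points only, not recommendations):

    checkpoint_segments = 64    # fewer, cheaper checkpoints during the load
    checkpoint_timeout = 30min
    shared_buffers = 2GB        # commonly ~25% of RAM; helps all workloads

Unlike the three settings above, the checkpoint settings are safe: they
cost only recovery time after a crash, not data integrity.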

In 9.1, I'm hopeful that we'll have unlogged tables, which will be even
better than turning these parameters off, and for which I just posted
a patch to -hackers.  Instead of generating WAL and writing WAL to the
OS and then NOT trying to make sure it hits the disk, we just won't
generate it in the first place.  But if PostgreSQL or the machine it's
running on crashes, you won't need to completely blow away the cluster
and start over; instead, the particular tables that you chose to
create as unlogged will be truncated, and the rest of your data,
including the system catalogs, will still be intact.
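
Under the proposed patch, usage would look something like this (a
sketch; the table name is made up):

    CREATE UNLOGGED TABLE scratch (id int, payload text);
    -- no WAL is generated for this table; after a crash it comes back empty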

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-15 Thread Andy Colson

On 11/15/2010 9:06 AM, Robert Haas wrote:

In 9.1, I'm hopeful that we'll have unlogged tables, which will be even
better than turning these parameters off, and for which I just posted
a patch to -hackers.  Instead of generating WAL and writing WAL to the
OS and then NOT trying to make sure it hits the disk, we just won't
generate it in the first place.  But if PostgreSQL or the machine it's
running on crashes, you won't need to completely blow away the cluster
and start over; instead, the particular tables that you chose to
create as unlogged will be truncated, and the rest of your data,
including the system catalogs, will still be intact.



If I am reading this right, that means we can run our db safely (with
fsync and full_page_writes enabled) except for tables of our choosing?


If so, I am very +1 for this!

-Andy



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-15 Thread Robert Haas
On Mon, Nov 15, 2010 at 2:27 PM, Andy Colson a...@squeakycode.net wrote:
 On 11/15/2010 9:06 AM, Robert Haas wrote:

 In 9.1, I'm hopeful that we'll have unlogged tables, which will be even
 better than turning these parameters off, and for which I just posted
 a patch to -hackers.  Instead of generating WAL and writing WAL to the
 OS and then NOT trying to make sure it hits the disk, we just won't
 generate it in the first place.  But if PostgreSQL or the machine it's
 running on crashes, you won't need to completely blow away the cluster
 and start over; instead, the particular tables that you chose to
 create as unlogged will be truncated, and the rest of your data,
 including the system catalogs, will still be intact.


 If I am reading this right, that means we can run our db safely (with
 fsync and full_page_writes enabled) except for tables of our choosing?

 If so, I am very +1 for this!

Yep.  But we need some vic^H^Holunteers to review and test the patches.

https://commitfest.postgresql.org/action/patch_view?id=424

Code review, benchmarking, or just general tinkering and reporting
what you find out on the -hackers thread would be appreciated.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-08 Thread Lello, Nick
How about either:-

a)   Size the pool so all your data fits into it.

b)   Use a RAM-based filesystem (i.e. a memory disk or SSD) for the
data storage [a memory disk will be faster] with a smaller pool.
Your seed data should be a copy of the datastore on a disk filesystem;
at startup, copy the storage files from the physical disk to memory.

A bigger gain can probably be had if you have a tightly controlled
suite of queries that will be run against the database and you can
spend the time to tune each to ensure it performs no sequential scans
(i.e. every query uses index lookups).
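
For example (a sketch with a hypothetical table and index):

    CREATE INDEX items_customer_idx ON items (customer_id);
    EXPLAIN SELECT * FROM items WHERE customer_id = 42;
    -- you want "Index Scan using items_customer_idx", not "Seq Scan on items"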


On 5 November 2010 11:32, A B gentosa...@gmail.com wrote:
 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then?



 I'm just curious, what do you need that for?

 regards
 Szymon

 I was just thinking about the case where I will have almost 100%
 selects, but still need something better than a plain key-value
 storage so I can do some SQL queries.
 The server will just boot, load data, and run; hopefully it won't
 crash, but if it does, just start over with load and run.





-- 


Nick Lello | Web Architect
o +1 503.284.7581 x418 / +44 (0) 8433309374 | m +44 (0) 7917 138319
Email: nick.lello at rentrak.com
RENTRAK | www.rentrak.com | NASDAQ: RENT



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-08 Thread Dimitri Fontaine
Lello, Nick nick.le...@rentrakmail.com writes:
 A bigger gain can probably be had if you have a tightly controlled
 suite of queries that will be run against the database and you can
 spend the time to tune each to ensure it performs no sequential scans
 (i.e. every query uses index lookups).

Given a fixed pool of queries, you can prepare them in advance so that
you don't usually pay the parsing and planning costs. I've found that
the planning is easily more expensive than the execution when all data
fits in RAM.
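
A sketch, assuming a hypothetical users table:

    PREPARE get_user (int) AS
        SELECT * FROM users WHERE id = $1;
    EXECUTE get_user(42);  -- parsing and planning were paid once, at PREPARE time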

Enter pgbouncer and preprepare:
  http://wiki.postgresql.org/wiki/PgBouncer
  http://preprepare.projects.postgresql.org/README.html

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-08 Thread Klaus Ita
Use a replicated setup?

On Nov 8, 2010 4:21 PM, Lello, Nick nick.le...@rentrakmail.com wrote:

How about either:-

a)   Size the pool so all your data fits into it.

b)   Use a RAM-based filesystem (i.e. a memory disk or SSD) for the
data storage [a memory disk will be faster] with a smaller pool.
Your seed data should be a copy of the datastore on a disk filesystem;
at startup, copy the storage files from the physical disk to memory.

A bigger gain can probably be had if you have a tightly controlled
suite of queries that will be run against the database and you can
spend the time to tune each to ensure it performs no sequential scans
(i.e. every query uses index lookups).



On 5 November 2010 11:32, A B gentosa...@gmail.com wrote:
 If you just wanted PostgreSQL to g...
--


Nick Lello | Web Architect
o +1 503.284.7581 x418 / +44 (0) 8433309374 | m +44 (0) 7917 138319
Email: nick.lello at rentrak.com
RENTRAK | www.rentrak.com | NASDAQ: RENT




Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-06 Thread Craig Ringer

On 11/05/2010 07:32 PM, A B wrote:


The server will just boot, load data, and run; hopefully it won't
crash, but if it does, just start over with load and run.


Have you looked at VoltDB? It's designed for fast in-memory use.

--
Craig Ringer



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Thom Brown
On 5 November 2010 10:59, A B gentosa...@gmail.com wrote:

 Hi there.

 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then?


Turn off fsync and full_page_writes (i.e. running with scissors).

Also depends on what you mean by as fast as possible.  Fast at doing
what?  Bulk inserts, selecting from massive tables?

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935


Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Thom Brown
On 5 November 2010 11:14, Thom Brown t...@linux.com wrote:

 On 5 November 2010 10:59, A B gentosa...@gmail.com wrote:

 Hi there.

 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then?


 Turn off fsync and full_page_writes (i.e. running with scissors).

 Also depends on what you mean by as fast as possible.  Fast at doing
 what?  Bulk inserts, selecting from massive tables?


Oh, and turn synchronous_commit off too.
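
Taken together, the "running with scissors" trio in postgresql.conf
would be (a sketch, not a recommendation):

    fsync = off                # don't force WAL to disk
    full_page_writes = off     # don't write full page images after checkpoints
    synchronous_commit = off   # don't wait for WAL flush at COMMIT

The first two risk corruption if the machine crashes; the third risks
losing only the most recent commits, not corruption.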

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935


Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Guillaume Cottenceau
A B gentosaker 'at' gmail.com writes:

 Hi there.

 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then?

Don't use PostgreSQL; just drop your data. You will end up with
the same results and be even faster than any use of PostgreSQL.
If anyone needs data, then just say you had data corruption, and
that since 100% dataloss is accepted, all's well.

-- 
Guillaume Cottenceau



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Szymon Guz
On 5 November 2010 11:59, A B gentosa...@gmail.com wrote:

 Hi there.

 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then?



I'm just curious, what do you need that for?


regards
Szymon


Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread A B
 Turn off fsync and full_page_writes (i.e. running with scissors).
 Also depends on what you mean by as fast as possible.  Fast at doing
 what?  Bulk inserts, selecting from massive tables?

I guess some tuning has to be done to make it work well with the
particular workload (in this case, mostly selects). But thanks for the
suggestions on the more general parameters.

"Running with scissors" sounds nice :-)



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Craig Ringer
On 05/11/10 18:59, A B wrote:
 Hi there.
 
 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then?

Others have suggested appropriate parameters (running with scissors).

I'd like to add something else to the discussion: have you looked at
memcached yet? Or pgpool? If you haven't, start there.

-- 
Craig Ringer

Tech-related writing: http://soapyfrogs.blogspot.com/



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread A B
 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then?



 I'm just curious, what do you need that for?

 regards
 Szymon

I was just thinking about the case where I will have almost 100%
selects, but still need something better than a plain key-value
storage so I can do some SQL queries.
The server will just boot, load data, and run; hopefully it won't
crash, but if it does, just start over with load and run.



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread A B
 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then?

 Others have suggested appropriate parameters (running with scissors).

 I'd like to add something else to the discussion: have you looked at
 memcached yet? Or pgpool? If you haven't, start there.


memcached has been mentioned in some discussions, but I have not studied it yet.



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Marti Raudsepp
On Fri, Nov 5, 2010 at 13:32, A B gentosa...@gmail.com wrote:
 I was just thinking about the case where I will have almost 100%
 selects, but still need something better than a plain key-value
 storage so I can do some SQL queries.
 The server will just boot, load data, and run; hopefully it won't
 crash, but if it does, just start over with load and run.

If you want fast read queries then changing
fsync/full_page_writes/synchronous_commit won't help you.

Just follow the regular tuning guide. shared_buffers,
effective_cache_size, work_mem, default_statistics_target can make a
difference.

http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
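
For instance (illustrative values for a machine with ~8GB of RAM, not
recommendations):

    shared_buffers = 2GB              # PostgreSQL's own buffer cache
    effective_cache_size = 6GB        # planner's estimate of OS + PG caching
    work_mem = 32MB                   # per sort/hash operation, so mind concurrency
    default_statistics_target = 200   # more detailed statistics for better plans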

Regards,
Marti



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Thom Brown
On 5 November 2010 11:36, Marti Raudsepp ma...@juffo.org wrote:

 On Fri, Nov 5, 2010 at 13:32, A B gentosa...@gmail.com wrote:
  I was just thinking about the case where I will have almost 100%
  selects, but still need something better than a plain key-value
  storage so I can do some SQL queries.
  The server will just boot, load data, and run; hopefully it won't
  crash, but if it does, just start over with load and run.

 If you want fast read queries then changing
 fsync/full_page_writes/synchronous_commit won't help you.


Yes, those will be for write-performance only, so useless in this case.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935


Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Marti Raudsepp
On Fri, Nov 5, 2010 at 13:11, Guillaume Cottenceau g...@mnc.ch wrote:
 Don't use PostgreSQL; just drop your data. You will end up with
 the same results and be even faster than any use of PostgreSQL.
 If anyone needs data, then just say you had data corruption, and
 that since 100% dataloss is accepted, all's well.

You're not helping. There are legitimate reasons for trading off
safety for performance.

Regards,
Marti



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Guillaume Cottenceau
Marti Raudsepp marti 'at' juffo.org writes:

 On Fri, Nov 5, 2010 at 13:11, Guillaume Cottenceau g...@mnc.ch wrote:
 Don't use PostgreSQL; just drop your data. You will end up with
 the same results and be even faster than any use of PostgreSQL.
 If anyone needs data, then just say you had data corruption, and
 that since 100% dataloss is accepted, all's well.

 You're not helping. There are legitimate reasons for trading off
 safety for performance.

Accepting 100% data loss and data corruption deserves a little
reasoning; otherwise I'm afraid I'm right in suggesting it makes
little difference whether you use PG or drop the data altogether.

-- 
Guillaume Cottenceau



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Guillaume Cottenceau
Marti Raudsepp marti 'at' juffo.org writes:

 On Fri, Nov 5, 2010 at 13:32, A B gentosa...@gmail.com wrote:
 I was just thinking about the case where I will have almost 100%
 selects, but still need something better than a plain key-value
 storage so I can do some SQL queries.
 The server will just boot, load data, and run; hopefully it won't
 crash, but if it does, just start over with load and run.

 If you want fast read queries then changing
 fsync/full_page_writes/synchronous_commit won't help you.

That illustrates how knowing the reasoning behind this particular
request makes new suggestions worthwhile, while previous ones
are now seen as useless.

-- 
Guillaume Cottenceau



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Jon Nelson
On Fri, Nov 5, 2010 at 7:08 AM, Guillaume Cottenceau g...@mnc.ch wrote:
 Marti Raudsepp marti 'at' juffo.org writes:

 On Fri, Nov 5, 2010 at 13:32, A B gentosa...@gmail.com wrote:
 I was just thinking about the case where I will have almost 100%
 selects, but still need something better than a plain key-value
 storage so I can do some SQL queries.
 The server will just boot, load data, and run; hopefully it won't
 crash, but if it does, just start over with load and run.

 If you want fast read queries then changing
 fsync/full_page_writes/synchronous_commit won't help you.

 That illustrates how knowing the reasoning behind this particular
 request makes new suggestions worthwhile, while previous ones
 are now seen as useless.

I disagree that they are useless - the stated mechanism was start,
load data, and run. Changing the params above won't likely change
much in the 'run' stage but would they help in the 'load' stage?


-- 
Jon



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Devrim GÜNDÜZ
On Fri, 2010-11-05 at 11:59 +0100, A B wrote:
 
 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then? 

You can initdb to ramdisk, if you have enough RAM. It will be fast, really.
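
A sketch of that, assuming Linux and a completely disposable instance
(paths and sizes are examples only):

    mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk
    initdb -D /mnt/ramdisk/pgdata
    pg_ctl -D /mnt/ramdisk/pgdata start
    # everything vanishes on reboot or unmount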

-- 
Devrim GÜNDÜZ
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
PostgreSQL RPM Repository: http://yum.pgrpms.org
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Chris Browne
gentosa...@gmail.com (A B) writes:
 If you just wanted PostgreSQL to go as fast as possible WITHOUT any
 care for your data (you accept 100% dataloss and datacorruption if any
 error should occur), what settings should you use then?

Use /dev/null.  It is web scale, and there are good tutorials.

But seriously, there *are* cases where blind speed is of use.
Loading data into a fresh database is a good time for this; if things
fall over, it may be pretty acceptable to start from scratch with
mkfs/initdb.

I'd:
- turn off fsync
- turn off synchronous commit
- put as much as possible onto Ramdisk/tmpfs/similar
-- 
output = reverse(moc.liamg @ enworbbc)
http://linuxfinances.info/info/lsf.html
43% of all statistics are worthless.



Re: [PERFORM] Running PostgreSQL as fast as possible no matter the consequences

2010-11-05 Thread Mladen Gogala

Devrim GÜNDÜZ wrote:
 On Fri, 2010-11-05 at 11:59 +0100, A B wrote:
  If you just wanted PostgreSQL to go as fast as possible WITHOUT any
  care for your data (you accept 100% dataloss and datacorruption if any
  error should occur), what settings should you use then?

 You can initdb to ramdisk, if you have enough RAM. It will be fast, really.

That is approximately the same thing as the answer to the question of
whether a Ford Taurus can reach 200 mph.

It can, just once, if you drive it off a cliff.

--

Mladen Gogala 
Sr. Oracle DBA

1500 Broadway
New York, NY 10036
(212) 329-5251
http://www.vmsinfo.com 
The Leader in Integrated Media Intelligence Solutions




