[PERFORM] OpenMP in PostgreSQL-8.4.0

2009-11-28 Thread Reydan Cankur

Hi,

I am trying to run PostgreSQL functions with threads by using OpenMP.
I tried to parallelize the slot_deform_tuple function
(src/backend/access/common/heaptuple.c) and added the lines below to
the code.


#pragma omp parallel
{
    #pragma omp sections
    {
        #pragma omp section
        values[attnum] = fetchatt(thisatt, tp + off);

        #pragma omp section
        off = att_addlength_pointer(off, thisatt->attlen, tp + off);
    }
}
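
A note on the snippet above: the two sections are not actually
independent. fetchatt() reads the same off that att_addlength_pointer()
advances, so executing them as concurrent sections is a data race on
off. To make the hazard concrete, here is a small self-contained OpenMP
program (hypothetical, not PostgreSQL code) with the same shape;
compiled with icc -openmp or gcc -fopenmp, its output can vary from run
to run:

#include <stdio.h>

int main(void)
{
    long off = 0, sum = 0;

    for (int i = 0; i < 1000; i++) {
        /* Same shape as the patched loop in heaptuple.c: the second
         * section advances the offset that the first section reads,
         * so the two must run in order, not in parallel. */
        #pragma omp parallel sections shared(off, sum)
        {
            #pragma omp section
            sum += off;        /* reads off (racy) */

            #pragma omp section
            off = off + 4;     /* advances off (racy) */
        }
    }
    /* A race-free program would print the same sum on every run; here
     * the result depends on which section happens to execute first. */
    printf("off=%ld sum=%ld\n", off, sum);
    return 0;
}

If tuple deformation races like this during initdb's catalog bootstrap,
garbage attribute values are a plausible source of the duplicated
values that the unique-index build reports below, though the more
fundamental problem is the one Tom Lane raises later in this thread.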

During ./configure I saw the following informational message for
heaptuple.c:

OpenMP defined section was parallelized.

Below is the configure command that I ran:

./configure CC="/path/to/icc -openmp" CFLAGS=-O2 \
  --prefix=/path/to/pgsql --bindir=/path/to/pgsql/bin \
  --datadir=/path/to/pgsql/share --sysconfdir=/path/to/pgsql/etc \
  --libdir=/path/to/pgsql/lib --includedir=/path/to/pgsql/include \
  --mandir=/path/to/pgsql/man --with-pgport=65432 \
  --with-readline --without-zlib

(Note: CC="/path/to/icc -openmp" is quoted here; without the quotes the
-openmp flag would not be passed as part of CC.)


After configure I ran gmake and gmake install, and the build reported
that the PostgreSQL installation was complete.


When I then try to initialize the database cluster and run the command
below:

  /path/to/pgsql/bin/initdb -D /path/to/pgsql/data

I get the following error:

The files belonging to this database system will be owned by user "reydan.cankur".
This user must also own the server process.

The database cluster will be initialized with locale en_US.UTF-8.
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to english.

fixing permissions on existing directory /path/to/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 32MB
creating configuration files ... ok
creating template1 database in /path/to/pgsql/data/base/1 ... FATAL:  could not create unique index "pg_type_typname_nsp_index"
DETAIL:  Table contains duplicated values.
child process exited with exit code 1
initdb: removing contents of data directory /path/to/pgsql/data

I cannot see the connection between the change I made and the initdb
failure.

I would appreciate your help in resolving this issue.

Thanks in advance,
Reydan


Re: [PERFORM] SSD + RAID

2009-11-28 Thread Bruce Momjian
Greg Smith wrote:
> Merlin Moncure wrote:
>> I am right now talking to someone on postgresql irc who is measuring
>> 15k iops from x25-e and no data loss following power plug test.
>
> The funny thing about Murphy is that he doesn't visit when things are
> quiet.  It's quite possible the window for data loss on the drive is
> very small.  Maybe you only see it one out of 10 pulls with a very
> aggressive database-oriented write test.  Whatever the odd conditions
> are, you can be sure you'll see them when there's a bad outage in actual
> production though.
>
> A good test program that is a bit better at introducing and detecting
> the write cache issue is described at
> http://brad.livejournal.com/2116715.html

Wow, I had not seen that tool before.  I have added a link to it from
our documentation, and also added a mention of our src/tools/fsync test
tool to our docs.
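
For anyone who wants to reproduce that test: diskchecker.pl is run
against a helper machine that stays up, so the host under test can be
power-cycled mid-write. Roughly (the verbs and flags below are
paraphrased from memory of the linked page, so check it before relying
on them):

  # on a helper machine that stays powered:
  ./diskchecker.pl -l
  # on the machine under test, writing through the suspect drive:
  ./diskchecker.pl -s helper-host create test_file 500
  # pull the plug mid-run, power back up, then see what really got synced:
  ./diskchecker.pl -s helper-host verify test_file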

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/config.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.233
diff -c -c -r1.233 config.sgml
*** doc/src/sgml/config.sgml	13 Nov 2009 22:43:39 -	1.233
--- doc/src/sgml/config.sgml	28 Nov 2009 16:12:46 -
***
*** 1432,1437 
--- 1432,1439 
  The default is the first method in the above list that is supported
  by the platform.
The <literal>open_</>* options also use <literal>O_DIRECT</> if available.
+ The utility <filename>src/tools/fsync</> in the PostgreSQL source tree
+ can do performance testing of various fsync methods.
This parameter can only be set in the <filename>postgresql.conf</>
  file or on the server command line.
   </para>
Index: doc/src/sgml/wal.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/wal.sgml,v
retrieving revision 1.59
diff -c -c -r1.59 wal.sgml
*** doc/src/sgml/wal.sgml	9 Apr 2009 16:20:50 -	1.59
--- doc/src/sgml/wal.sgml	28 Nov 2009 16:12:57 -
***
*** 86,91 
--- 86,93 
 ensure data integrity.  Avoid disk controllers that have non-battery-backed
 write caches.  At the drive level, disable write-back caching if the
 drive cannot guarantee the data will be written before shutdown.
+    You can test for reliable I/O subsystem behavior using <ulink
+    url="http://brad.livejournal.com/2116715.html">diskchecker.pl</ulink>.
    </para>
  
    <para>

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] OpenMP in PostgreSQL-8.4.0

2009-11-28 Thread Tom Lane
Reydan Cankur reydan.can...@gmail.com writes:
> I am trying to run PostgreSQL functions with threads by using OpenMP.

This is pretty much doomed to failure.  It's *certainly* doomed to
failure if you just hack up one area of the source code without dealing
with the backend's general lack of support for threading.

regards, tom lane



Re: [PERFORM] Analyse without locking?

2009-11-28 Thread Tom Lane
Richard Neill rn...@cam.ac.uk writes:
> Now, I understand that increasing checkpoint_segments is generally a
> good thing (subject to some limit), but doesn't that just mean that
> instead of say a 1 second outage every minute, it's a 10 second outage
> every 10 minutes?

In recent PG versions you can spread the checkpoint I/O out over a
period of time, so it shouldn't be an outage at all, just background
load.  Other things being equal, a longer checkpoint cycle is better
since it improves the odds of being able to coalesce multiple changes
to the same page into a single write.  The limiting factor is your
threshold of pain on how much WAL-replay work would be needed to recover
after a crash.
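
(For 8.3 and later, that spreading is governed by
checkpoint_completion_target; a sketch of the relevant postgresql.conf
lines follows, with purely illustrative values that should be weighed
against the WAL-replay pain threshold mentioned above:)

  checkpoint_segments = 64            # longer cycle: fewer, larger checkpoints,
                                      # but more WAL to replay after a crash
  checkpoint_completion_target = 0.9  # spread each checkpoint's writes over
                                      # ~90% of the checkpoint interval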

> Is it possible (or even sensible) to do a manual vacuum analyze with
> nice/ionice?

There's no support for that in PG.  You could try manually renice'ing
the backend that's running your VACUUM but I'm not sure how well it
would work; there are a number of reasons why it might be
counterproductive.  Fooling with the vacuum_cost_delay parameters is the
recommended way to make a vacuum run slower and use less of the machine.
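
(Concretely, those knobs can be set per session before a manual run.
The values below are illustrative only, and my_table is a placeholder:)

  SET vacuum_cost_delay = 20;   -- sleep 20 ms each time the cost limit is
                                -- reached; the default of 0 disables throttling
  SET vacuum_cost_limit = 200;  -- page-cost budget between sleeps (the default)
  VACUUM ANALYZE my_table;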

regards, tom lane



Re: [PERFORM] OpenMP in PostgreSQL-8.4.0

2009-11-28 Thread Reydan Cankur
You mean that the backend does not support threading, and that
everything I try along these lines is useless?

Is there a way to overcome this issue?
Is there anything I can adjust in the backend to enable threading?
Is there any documentation you would recommend?

Best Regards,
Reydan


On Nov 28, 2009, at 6:42 PM, Tom Lane wrote:

> Reydan Cankur reydan.can...@gmail.com writes:
>> I am trying to run PostgreSQL functions with threads by using OpenMP.
>
> This is pretty much doomed to failure.  It's *certainly* doomed to
> failure if you just hack up one area of the source code without dealing
> with the backend's general lack of support for threading.
>
> 			regards, tom lane





Re: [PERFORM] Analyse without locking?

2009-11-28 Thread Richard Neill

Thanks for your explanations.

Tom Lane wrote:
> Richard Neill rn...@cam.ac.uk writes:
>> Now, I understand that increasing checkpoint_segments is generally a
>> good thing (subject to some limit), but doesn't that just mean that
>> instead of say a 1 second outage every minute, it's a 10 second outage
>> every 10 minutes?
>
> In recent PG versions you can spread the checkpoint I/O out over a
> period of time, so it shouldn't be an outage at all, just background
> load.  Other things being equal, a longer checkpoint cycle is better
> since it improves the odds of being able to coalesce multiple changes
> to the same page into a single write.  The limiting factor is your
> threshold of pain on how much WAL-replay work would be needed to recover
> after a crash.


That makes sense. I think that checkpoint_segments = 64 is sane: it
means crash recovery takes less than 1 minute, yet we aren't seeing the
warning that checkpoints are too frequent.


>> Is it possible (or even sensible) to do a manual vacuum analyze with
>> nice/ionice?
>
> There's no support for that in PG.  You could try manually renice'ing
> the backend that's running your VACUUM but I'm not sure how well it
> would work; there are a number of reasons why it might be
> counterproductive.  Fooling with the vacuum_cost_delay parameters is the
> recommended way to make a vacuum run slower and use less of the machine.


I see why it might not work well - priority inversion etc.

What I was trying to achieve is to say that vacuum can have all the 
spare idle CPU/IO that's available, but must *immediately* back off when 
something else needs the CPU/IO/Locks.


For example,
  nice -n 20 yes > /dev/null
  ionice -c 3 dd if=/dev/zero > tmp.del

will both get quite a lot of work done on a medium-loaded system (try 
this on your own laptop), but have zero impact on other processes.


On the other hand, changing vacuum_cost_delay means that vacuum runs
slowly even if the CPU is otherwise idle, yet it still impacts the
responsiveness of some queries.



Richard
