Re: [HACKERS] Transaction Speed

2006-07-21 Thread Enver ALTIN
Hi,

On Thu, Jul 20, 2006 at 02:36:53PM +0200, moises wrote:
 For example Inserts, Update, delete, etc.

If you need a storage for structured data, database servers are good to
go. If you need a very fast flow of not-so-needed and
okay-to-miss-we-can-regenerate type of data storage you can go with
in-memory storage such as memcache[1] or PostgreSQL's temporary
table[2] feature. I think the latter would slightly perform better.

Enjoy,

[1] http://www.danga.com/memcached/
[2] http://www.postgresql.org/docs/8.1/static/sql-createtable.html
-- 
Enver

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL

2006-07-21 Thread Robert Lor

Tom Lane wrote:

 Tatsuo Ishii [EMAIL PROTECTED] writes:
  18% in s_lock is definitely bad :-(.  Were you able to determine which
  LWLock(s) are accounting for the contention?

 Sorry for the delay. Finally I got the oprofile data. It's
 huge (34MB). If you are interested, I can put it somewhere. Please let me
 know.

I finally got a chance to look at this, and it seems clear that all the
traffic is on the BufMappingLock.  This is essentially the same problem
we were discussing with respect to Gavin Hamill's report of poor
performance on an 8-way IBM PPC64 box (see hackers archives around
2006-04-21).  If your database is fully cached in shared buffers, then
you can do a whole lot of buffer accesses per unit time, and even though
all the BufMappingLock acquisitions are in shared-LWLock mode, the
LWLock's spinlock ends up being heavily contended on an SMP box.

It's likely that CVS HEAD would show somewhat better performance because
of the btree change to cache local copies of index metapages (which
eliminates a fair fraction of buffer accesses, at least in Gavin's test
case).   Getting much further than that seems to require partitioning
the buffer mapping table.  The last discussion stalled on my concerns
about unpredictable shared memory usage, but I have some ideas on that
which I'll post separately.  In the meantime, thanks for sending along
the oprofile data!
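The contention mechanism described here can be sketched in plain C: even a shared-mode acquisition must briefly take the LWLock's internal spinlock, so on an SMP box every CPU bounces the same cache line. This is an illustration only — a pthread mutex stands in for PostgreSQL's s_lock, and FakeLWLock is not actual backend code:

```c
#include <assert.h>
#include <pthread.h>

/* Toy LWLock: even shared acquirers serialize on 'mutex' (standing in
 * for the s_lock spinlock), which is where the profile time goes. */
typedef struct
{
    pthread_mutex_t mutex;      /* stand-in for the LWLock's spinlock */
    int             shared_holders;
} FakeLWLock;

static void
acquire_shared(FakeLWLock *lock)
{
    pthread_mutex_lock(&lock->mutex);   /* serialization point */
    lock->shared_holders++;
    pthread_mutex_unlock(&lock->mutex);
}

static void
release_shared(FakeLWLock *lock)
{
    pthread_mutex_lock(&lock->mutex);   /* and again on release */
    lock->shared_holders--;
    pthread_mutex_unlock(&lock->mutex);
}
```

With many backends hammering acquire_shared() on the single BufMappingLock, the mutex line above is the hot spot even though no acquirer ever blocks another logically (compile with -lpthread).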

regards, tom lane
 

I ran pgbench and fired up a DTrace script using the lwlock probes we've 
added, and it looks like BufMappingLock is the most contended lock, but  
CheckpointStartLocks are held for longer duration!


Lock Id              Mode        Count
ControlFileLock      Exclusive       1
SubtransControlLock  Exclusive       1
BgWriterCommLock     Exclusive       6
FreeSpaceLock        Exclusive       6
FirstLockMgrLock     Exclusive      48
BufFreelistLock      Exclusive      74
BufMappingLock       Exclusive      74
CLogControlLock      Exclusive     184
XidGenLock           Exclusive     184
CheckpointStartLock  Shared        185
WALWriteLock         Exclusive     185
ProcArrayLock        Exclusive     368
CLogControlLock      Shared        552
SubtransControlLock  Shared       1273
WALInsertLock        Exclusive    1476
XidGenLock           Shared       1842
ProcArrayLock        Shared       3160
SInvalLock           Shared       3684
BufMappingLock       Shared      14578

Lock Id              Combined Time (ns)
ControlFileLock                    7915
BgWriterCommLock                  43438
FreeSpaceLock                        39
BufFreelistLock                  448530
FirstLockMgrLock                2879957
CLogControlLock                 4237750
SubtransControlLock             6378042
XidGenLock                      9500422
WALInsertLock                  16372040
SInvalLock                     23284554
ProcArrayLock                  32188638
BufMappingLock                113128512
WALWriteLock                  142391501
CheckpointStartLock          4171106665


Regards,
-Robert

 




[HACKERS] BF Failure on Bandicoot

2006-07-21 Thread Dave Page
Bandicoot is currently failing on HEAD with the following error shown in
a message box during make check:

---
postgres.exe - Entry Point Not Found
---
The procedure entry point ldap_start_tls_sA could not be located in the
dynamic link library wldap32.dll. 
---
OK   
---

The failure hangs up the build farm run so it doesn't report back. It
appears to occur in initdb, which subsequently cannot be killed :-(

Regards, Dave.



Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace patch

2006-07-21 Thread Peter Eisentraut
Robert Lor wrote:
 I've attached a patch along with two new files. This patch should
 reflect what we discussed at the Summit. Please let me know if I missed
 anything.

I would prefer to drop the PG_ prefixes on PG_TRACE and pg_trace.h.  We 
know which software we're dealing with.

 1) The current logic in src/backend/Makefile will only work for
 Solaris versions with DTrace, and Peter has offered to help fix this
 one.

We should probably move the probes file to a subdirectory.  Anyone know 
a good place?

Also, again, the pgsql prefix should be dropped.

 2) Currently an environment variable called DTRACE_DATA_MODEL is used
 in src/backend/Makefile to tell the dtrace command whether to
 generate a 32 or 64 bit binary.  This may not be a reliable approach
 since a user can forget to set this variable. Perhaps adding a flag
 like DTRACEFLAGS to the configure script is a better approach.

Certainly doable, but will that be more reliable?  Can't we convince 
dtrace to create binaries for the host platform by default?

 3)  When using --enable-depend, gmake clean removes all *.d files,

I'm working on renaming the dependency files.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] BF Failure on Bandicoot

2006-07-21 Thread Sandeep Jakkaraju(Navolve)
Hi All,

I am looking for a C/C++ library which can talk to postgresql/postgis
other than libpqxx!!

thanx in advance
sandeep


Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace patch

2006-07-21 Thread Martijn van Oosterhout
On Fri, Jul 21, 2006 at 01:42:26PM +0200, Peter Eisentraut wrote:
 Robert Lor wrote:
  I've attached a patch along with two new files. This patch should
  reflect what we discussed at the Summit. Please let me know if I missed
  anything.
 
 I would prefer to drop the PG_ prefixes on PG_TRACE and pg_trace.h.  We 
 know which software we're dealing with.

I don't know. "trace" is a fairly generic word; how do you know that
none of the dozen other libraries we include already has a
trace.h or a TRACE() macro, on any of our supported platforms?

Debian already counts more than a dozen files called trace.h. While
none are in libraries we're likely to use, this is just one platform.
If it were in a subdirectory (say utils/trace.h) that would be OK
too...

  1) The current logic in src/backend/Makefile will only work for
  Solaris versions with DTrace, and Peter has offered to help fix this
  one.
 
 We should probably move the probes file to a subdirectory.  Anyone know 
 a good place?
 
 Also, again, the pgsql prefix should be dropped.

The prefix here is redundant. We know which directory it's in.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [HACKERS] plPHP and plRuby

2006-07-21 Thread Jeff Trout


On Jul 20, 2006, at 8:49 PM, Joshua D. Drake wrote:


It could be interesting to have something like this:

./configure --with-plruby


and it would actually fetch the latest plruby sources from the net  
and build. Ala Ports.




Or if we didn't want to develop that infrastructure of auto-fetching and
whatnot, we could have --with-plruby output some info like "download
foo, put it there, and rerun configure" (essentially what would be in the
src/pl*/README.txt idea).  A lot of folks would look at the output
of configure --help to see what's available instead of poking around
src/pl/*.


--
Jeff Trout [EMAIL PROTECTED]
http://www.dellsmartexitin.com/
http://www.stuarthamm.net/






Re: [HACKERS] Further reduction of bufmgr lock contention

2006-07-21 Thread Simon Riggs
On Thu, 2006-07-20 at 15:41 -0400, Tom Lane wrote:

 Usage of a partitioned hash table would then be like
 
   compute hashtable lookup key;
   entryhashcode = calc_hash(lookup key);
   partitionnumber = entryhashcode % NumPartitions;
   LWLockAcquire(PartitionLock[partitionnumber], ...);
   manipulate hashtable;
   LWLockRelease(PartitionLock[partitionnumber]);
 
 We could do this without changing the API of hash_search, but then we'd
 be computing the entry hashcode twice, so I'm inclined to provide an
 additional entry point that takes a precalculated hashcode.

That should be an additional win anyway, since hash_any() uses about 1%
CPU on tests I've seen - so we will hold locks slightly shorter
duration.
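For reference, the scheme quoted above in compilable form — pthread mutexes stand in for LWLocks, and the partition count and function names are illustrative, not the actual bufmgr code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define NUM_PARTITIONS 16       /* illustrative partition count */

static pthread_mutex_t PartitionLock[NUM_PARTITIONS];

static void
init_partition_locks(void)
{
    for (int i = 0; i < NUM_PARTITIONS; i++)
        pthread_mutex_init(&PartitionLock[i], NULL);
}

/* Stand-in for calc_hash(): any decent integer mix works here. */
static uint32_t
calc_hash(uint32_t key)
{
    key ^= key >> 16;
    key *= 0x45d9f3bU;
    key ^= key >> 16;
    return key;
}

/* Touch the table entry for 'key' under its partition lock only;
 * returns the partition used.  Note the hash is computed once, so it
 * could be handed to a hash_search() variant to avoid hashing twice. */
static int
lookup(uint32_t key)
{
    uint32_t hashcode = calc_hash(key);
    int      partition = hashcode % NUM_PARTITIONS;

    pthread_mutex_lock(&PartitionLock[partition]);
    /* ... manipulate the hashtable partition here ... */
    pthread_mutex_unlock(&PartitionLock[partition]);
    return partition;
}
```

Lookups for different keys now contend only 1/NUM_PARTITIONS of the time, at the cost of two lock cycles when an operation spans two partitions.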

 Potential downsides of applying this idea to the buffer mapping table:
 
 1. Reassigning a buffer to a new page will (usually) require two cycles
 of LWLockAcquire/Release for the two different partitions involved.
 Since this path also requires at least a read() kernel call (maybe a
 write() too), I don't think there'll be any meaningful slowdown.

 3. Taking the freelist spinlock is new computation that wasn't there
 before.  But, again, it's only needed in code paths that will also be
 doing a kernel call.

...So the additional overhead sounds acceptable, given that we will save
somewhat on the hash_any() calls.

 If we do this we should probably also handle the lmgr lock tables the
 same way (partially reverting my partition-the-LockMgrLock patch of a
 couple months ago).  However, downside #3 might be a stronger objection
 for lmgr, since it can create or destroy lock objects without necessarily
 doing any I/O.

We should be in a position to test this soon.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com




Re: [HACKERS] set search_path in dump output considered harmful

2006-07-21 Thread Jim C. Nasby
On Thu, Jul 13, 2006 at 07:17:31PM -0400, Tom Lane wrote:
 There has been talk of attaching a search_path setting to each function
 so that it's independent of the caller's search_path, but the
 performance hit seems a bit daunting.  In any case it's not pg_dump's
 fault that this feature doesn't exist...

Since most (all?) of the stuff in functions ends up as prepared statements,
you'd only have to set search_path the first time the function was run,
no? That doesn't seem nearly as bad.

And FWIW, I agree that it would be better if pg_dump quit using
search_path.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] BF Failure on Bandicoot

2006-07-21 Thread Hannu Krosing
On Fri, 2006-07-21 at 17:14, Sandeep Jakkaraju (Navolve) wrote:
 Hi All 
 
 
 I am looking for  a C/C++ library which can talk to postgresql/postgis
 other than libpqxx!!

Why?

 thanx in advance
 
 sandeep
-- 

Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia

Skype me:  callto:hkrosing
Get Skype for free:  http://www.skype.com





Re: [HACKERS] How does the planner deal with multiple possible indexes?

2006-07-21 Thread Jim C. Nasby
On Wed, Jul 19, 2006 at 07:54:49PM -0400, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  Indeed, if I find a case where there's a large enough number of rows it
  will choose the smaller index. But I'm wondering if it would be better
  to always favor the smaller index, since it would (presumably) be easier
  to keep it in cache?
 
 AFAICS, in existing releases that should happen, because the cost
 estimate varies with the size of the index.  And it does happen for me
 in simple tests.  You did not provide the requested information to help
 us find out why it's not happening for you.
 
 (I'm a bit worried about whether CVS HEAD may have broken this behavior
 with the recent changes in the indexscan cost equations ... but unless
 you are working with HEAD that's not relevant.)

No, this is 8.1.3, and it's a production machine so I'd prefer not to go
about dropping indexes to get cost comparisons; unless there's some way
to disable the use of an index in a given backend? Otherwise I'll try
and come up with a test case.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



[HACKERS] Transaction Speed and real time database

2006-07-21 Thread moises








Thanks for your answer.

I have experience with postgres and I know perfectly well that it is not
a toy. But some concepts of real time systems are not based only on
speed.

Real time databases need some other kinds of semantics and features that
postgres does not have. Postgres does not support real time constraint
semantics in transactions. On the other hand, concurrent transactions do
not work well based on task priorities. The transaction scheduler would
need to be based on real time mechanisms like those of systems such as
RTAI or RTLinux, and much more. Postgres would need to be extended for
real time databases.

I want to know, for a hypothetical server, how many transactions
postgres supports, as a first approximation. I found this data for the
MySQL and DB4o databases, but I can't find any for Postgres. It may be
that postgres is slow, but I need some numbers; insertion speed and
update speed are the most important part.

Thanks
M.


From: Adnan DURSUN [mailto:[EMAIL PROTECTED]
Sent: Thursday, July 20, 2006 23:05
To: moises; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Transaction Speed

 This depends on your server capability and performance. You can use
 PostgreSQL as a real time database. It is real, not a toy :-)

 Adnan DURSUN
 ASRIN Bilisim Ltd.
 Turkey

 - Original Message -
 From: moises
 To: pgsql-hackers@postgresql.org
 Sent: Thursday, July 20, 2006 3:36 PM
 Subject: [HACKERS] Transaction Speed

  Can anybody tell me how many transactions postgres makes in a second?
  For example Inserts, Update, delete, etc.

  I'm very interested in this data, because I want to use postgres for
  a real time database for process control.

  Thanks and regards

  M.Sc. Moisés Herrera Vázquez
  Especialista Superior en Automática
  Empresa de Automatización Integral, CEDAI
  Sucursal Villa Clara
  Teléfono: (53) 42-203311

Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL

2006-07-21 Thread Tom Lane
Robert Lor [EMAIL PROTECTED] writes:
 I ran pgbench and fired up a DTrace script using the lwlock probes we've 
 added, and it looks like BufMappingLock is the most contended lock, but  
 CheckpointStartLocks are held for longer duration!

Those numbers look a bit suspicious --- I'd expect to see some of the
LWLocks being taken in both shared and exclusive modes, but you don't
show any such cases.  You sure your script is counting correctly?
Also, it'd be interesting to count time spent holding shared lock
separately from time spent holding exclusive.

regards, tom lane



Re: [HACKERS] Freezing tuples on pages dirtied by vacuum

2006-07-21 Thread Jim C. Nasby
On Wed, Jul 19, 2006 at 07:45:24PM -0400, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  ISTM that as soon as vacuum dirties a page, it might as well update all
  tuples it can (anywhere Xmin < GetOldestXmin()), since that won't take
  much time compared to the cost of writing the page out.
 
 Perhaps not, but what it will do is destroy data that you might wish you
 had later.  Check the archives and note how often we ask people for xmin
 values when trying to debug a problem.  I don't think it's a good idea
 for aggressive freezing of tuples to be the default behavior.  Moreover,
 I can't see that there'd be any real gain from having done it --- it
 doesn't look to me like it would save any vacuum-to-prevent-wraparound
 operations, since nothing would happen at non-dirty pages.

For any tables that see even a trivial rate of updates spread through
the table, odds are that all tuples will end up frozen well before 1B
transactions have passed. Yes, you'll still need to vacuum every 1B
transactions, but that vacuum wouldn't need to dirty any pages just to
freeze tuples.

For clobbering xmin too early, we could make it so that only tuples
older than some threshold would be subject to 'early freezing'. One
possibility is that early freeze is at 1B transactions and we push
forced-freeze back to 1.5B transactions (the current forced-freeze at 1B
transactions seems rather aggressive anyway, now that the server will
refuse to issue new commands rather than lose data due to wraparound).

BTW, the freeze limits for vacuum and autovac are currently defined in
different places; should I submit a patch to refactor that into one
place? (Presumably vacuum.c)
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] Units in postgresql.conf

2006-07-21 Thread korry






  
  Time units is easy:
  1h = 60min = 3600s = 3600000ms

 We don't need anything larger than seconds at the moment.

Except for log_rotation_age perhaps?
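Conversions like these reduce to one multiplier per suffix; a minimal sketch for illustration (not the actual guc.c parsing code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Convert a setting like "1h", "60min", "3600s" or "3600000ms" to
 * milliseconds; returns -1 for an unrecognized unit. */
static long
time_setting_to_ms(const char *s)
{
    char *unit;
    long  n = strtol(s, &unit, 10);     /* 'unit' points past digits */

    if (strcmp(unit, "h") == 0)
        return n * 60L * 60L * 1000L;
    if (strcmp(unit, "min") == 0)
        return n * 60L * 1000L;
    if (strcmp(unit, "s") == 0)
        return n * 1000L;
    if (strcmp(unit, "ms") == 0)
        return n;
    return -1;                          /* unknown unit */
}
```

Larger units like days for log_rotation_age would just be one more entry in the table.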

  -- Korry





Re: [HACKERS] Transaction Speed and real time database

2006-07-21 Thread Albe Laurenz
 Real time databases need some other kinds of semantics and
 features that postgres doesn't have.
 
 Postgres doesn't support real time constraint semantics in
 transactions. On the other hand, concurrent transactions
 don't work well based on task priorities.
 
 The transaction scheduler would need to be based on real time
 mechanisms like those of systems such as RTAI or RTLinux, and much more.
 
 Postgres would need to be extended for real time databases.

PostgreSQL is not a program for real time applications. Period.

Yours,
Laurenz Albe



Re: [HACKERS] How does the planner deal with multiple possible

2006-07-21 Thread Hannu Krosing
On Fri, 2006-07-21 at 08:29, Jim C. Nasby wrote:

 No, this is 8.1.3, and it's a production machine so I'd prefer not to go
 about dropping indexes to get cost comparisons; unless there's some way
 to disable the use of an index in a given backend?

Currently the closest thing is 

BEGIN;
DROP INDEX xxx;

test query here

ABORT;

  Otherwise I'll try
 and come up with a test case.
-- 

Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia

Skype me:  callto:hkrosing
Get Skype for free:  http://www.skype.com






Re: [HACKERS] Freezing tuples on pages dirtied by vacuum

2006-07-21 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 For clobbering xmin too early, we could make it so that only tuples
 older than some threshold would be subject to 'early freezing'.

OK, that might be acceptable.

 One
 possibility is that early freeze is at 1B transactions and we push
 forced-freeze back to 1.5B transactions (the current forced-freeze at 1B
 transactions seems rather aggresive anyway, now that the server will
 refuse to issue new commands rather than lose data due to wraparound).

No, the freeze-at-1B rule is the maximum safe delay.  Read the docs.
But we could do early freeze at 0.5B and forced freeze at 1B and
probably still get the effect you want.

However, I remain unconvinced that this is a good idea.  You'll be
adding very real cycles to regular vacuum processing (to re-scan tuples
already examined) in hopes of obtaining a later savings that is really
pretty hypothetical.  Where is your evidence that writes caused solely
by tuple freezing are a performance issue?

regards, tom lane



Re: [PATCHES] [HACKERS] 8.2 features?

2006-07-21 Thread Jim C. Nasby
On Thu, Jul 20, 2006 at 08:46:13PM -0400, Tom Lane wrote:
 Joe Conway [EMAIL PROTECTED] writes:
  I'm liking this too. But when you say jointree node, are you saying to 
  model the new node type after NestLoop/MergeJoin/HashJoin nodes? These 
  are referred to as join nodes in ExecInitNode. Or as you mentioned a 
  couple of times, should this look more like an Append node?
 
 No, I guess I confused you by talking about the executor representation
 at the same time.  This is really unrelated to the executor.  The join
 tree I'm thinking of here is the data structure that dangles off
 Query.jointree --- it's a representation of the query's FROM clause,
 and (at present) can contain RangeTblRef, FromExpr, and JoinExpr nodes.
 See the last hundred or so lines of primnodes.h for some details.
 The jointree is used by the planner to compute the plan node tree that
 the executor will run, but it's not the same thing.
 
 There are basically two ways you could go about this:
 1. Make a new jointree leaf node type to represent a VALUES construct,
and dangle the list of lists of expressions off that.
 2. Make a new RangeTblEntry type to represent a VALUES construct, and
just put a RangeTblRef to it into the jointree.  The expressions
dangle off the RangeTblEntry.
 
 Offhand I'm not certain which of these would be cleanest.  The second
 way has some similarities to the way we handle set operation trees
 (UNION et al), so it might be worth looking at that stuff.  However,
 being a RangeTblEntry has a lot of baggage (eg, various routines expect
 to find an RTE alias, column names, column types, etc) and maybe we
 don't need all that for VALUES.

I misread that to include SRFs, but it got me thinking... another
possibility would be to change VALUES() so that it was treated as a
function, and allow it to have an arbitrary number of parameters. That
would automatically allow the case of SELECT * FROM VALUES(...). INSERT
would need to learn how to accept SRFs, but that would have the nice
side-effect of allowing INSERT INTO table set_returning_function();

Of course, adding the ability for functions to have an arbitrary
argument list could well be more complex than any of the options
discussed thus far... though it would be a very handy feature to have.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [HACKERS] How does the planner deal with multiple possible indexes?

2006-07-21 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 No, this is 8.1.3, and it's a production machine so I'd prefer not to go
 about dropping indexes to get cost comparisons; unless there's some way
 to disable the use of an index in a given backend?

The traditional hack for that is

begin;
drop index foo;
explain whatever;
rollback;

The DROP acquires an exclusive lock on the table, but it's only held for
a very short time while you EXPLAIN (you might want to put the whole
thing in a script file instead of relying on human typing speed).  So
unless you've got seriously strict response time requirements, this is
generally OK even in production DBs.  You do have to watch out for long
running transactions holding non-exclusive locks, eg don't try this
while a VACUUM is running on the table --- else the DROP blocks on the
vacuum and all other accesses start to queue up behind the DROP.

If the online-index-build patch gets in, there will be a cleaner option
which is to just mark the index disabled in pg_index.  That doesn't
require any exclusive lock, indeed won't be visible to other backends at
all if you do it within a transaction as above.

regards, tom lane



Re: [HACKERS] Transaction Speed and real time database

2006-07-21 Thread Martijn van Oosterhout
On Fri, Jul 21, 2006 at 09:38:41AM +0200, moises wrote:
 I want to know, for a hypothetical server, how many transactions postgres
 supports, as a first approximation.
 
 I found this data for the MySQL and DB4o databases but I can't find any
 for Postgres.

I think you're asking the wrong question. I think what you want is
what performance postgresql gets on benchmark X with hardware Y.
I don't believe you can actually get performance numbers for other
databases without a whole list of constraining factors. You need to be
a lot more specific about what you want to know. You can find
benchmarks on the web, but really you need to evaluate whether they're
testing what you want to know.

But ok, here's a number: I can insert 100,000 rows per second on this
machine next to me. How does that help you? It doesn't: you don't know
the machine or the software or how it's configured or what I'm
inserting. You don't even know if this machine exists (hint: it
doesn't).

Also, insert/update speeds are useless measurements. If all you want is
that, please look at Berkeley DB. PostgreSQL is a SQL-compliant
database.

Anyway, here are some sites to help you formulate your question better:

http://www.geocities.com/mailsoftware42/db/
http://www.devx.com/dbzone/Article/29480?trk=DXRSS_DB

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [HACKERS] BF Failure on Bandicoot

2006-07-21 Thread Tom Lane
Dave Page dpage@vale-housing.co.uk writes:
 Bandicoot is currently failing on HEAD with the following error shown in
 a message box during make check:

 The procedure entry point ldap_start_tls_sA could not be located in the
 dynamic link library wldap32.dll. 

Just guessing from the function name, but does this go away if you don't
configure --with-ldap?

If so it's presumably a portability issue in this recent patch:

2006-03-06 12:41  momjian

* configure, configure.in, src/backend/libpq/auth.c,
src/backend/libpq/hba.c, src/backend/libpq/pg_hba.conf.sample,
src/include/pg_config.h.in, src/include/libpq/hba.h: This patch
adds native LDAP auth, for those platforms that don't have PAM
(such as Win32, but also unixen without PAM). On Unix, uses
OpenLDAP. On win32, uses the builin WinLDAP library.

Magnus Hagander

I'd counsel just leaving --with-ldap off until Magnus gets back from
vacation.  We can always revert the patch later if he can't fix it.
(There's some fairly ugly stuff going on at lines 69-89 of
backend/libpq/auth.c, which I bet is the root of the trouble, but
I'm not going to dig further --- I've wasted more than enough time
on Windows this week ;-))

regards, tom lane



Re: [HACKERS] BF Failure on Bandicoot

2006-07-21 Thread Dave Page
 

 -Original Message-
 From: Tom Lane [mailto:[EMAIL PROTECTED] 
 Sent: 21 July 2006 15:20
 To: Dave Page
 Cc: pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] BF Failure on Bandicoot 
 
 Dave Page dpage@vale-housing.co.uk writes:
  Bandicoot is currently failing on HEAD with the following error shown in
  a message box during make check:
 
  The procedure entry point ldap_start_tls_sA could not be located in the
  dynamic link library wldap32.dll.
 
 Just guessing from the function name, but does this go away if you don't
 configure --with-ldap?
 
 If so it's presumably a portability issue in this recent patch:
 
 2006-03-06 12:41  momjian
 
   * configure, configure.in, src/backend/libpq/auth.c,
   src/backend/libpq/hba.c, src/backend/libpq/pg_hba.conf.sample,
   src/include/pg_config.h.in, src/include/libpq/hba.h: This patch
   adds native LDAP auth, for those platforms that don't have PAM
   (such as Win32, but also unixen without PAM). On Unix, uses
   OpenLDAP. On win32, uses the builin WinLDAP library.
   
   Magnus Hagander
 
 I'd counsel just leaving --with-ldap off until Magnus gets back from
 vacation.  We can always revert the patch later if he can't fix it.
 (There's some fairly ugly stuff going on at lines 69-89 of
 backend/libpq/auth.c, which I bet is the root of the trouble, but
 I'm not going to dig further --- I've wasted more than enough time
 on Windows this week ;-))

Yeah, I spotted that ugliness. I'll turn it off for now, and CC this to
Magnus so he has no excuse for forgetting :-p

Regards, Dave.



Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace patch

2006-07-21 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 On Fri, Jul 21, 2006 at 01:42:26PM +0200, Peter Eisentraut wrote:
 I would prefer to drop the PG_ prefixes on PG_TRACE and pg_trace.h.  We
 know which software we're dealing with.

 I don't know. "trace" is a fairly generic word; how do you know that
 none of the dozen other libraries we include already has a
 trace.h or a TRACE() macro, on any of our supported platforms?

I concur with Martijn.  We've already regretted using ERROR as a macro
name, let's not make the same mistake with TRACE.  PG_TRACE is good,
and so is pg_trace.h.  (But invoking it as utils/trace.h would be ok.)
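For illustration, a pg_trace.h along these lines keeps the prefix and compiles away to nothing when DTrace support is absent — the macro and probe names here are hypothetical sketches, not the actual patch:

```c
#include <assert.h>

/* Hypothetical pg_trace.h sketch: with DTrace enabled, PG_TRACE would
 * expand to the probe macros generated by "dtrace -h"; without it, the
 * probes vanish entirely at compile time. */
#ifdef ENABLE_DTRACE
#include "probes.h"             /* generated header (assumed name) */
#define PG_TRACE(name)          POSTGRESQL_##name()
#define PG_TRACE1(name, arg1)   POSTGRESQL_##name(arg1)
#else
#define PG_TRACE(name)          ((void) 0)
#define PG_TRACE1(name, arg1)   ((void) 0)
#endif

static int
do_work(int x)
{
    PG_TRACE(work_start);       /* no cost in a non-DTrace build */
    x *= 2;
    PG_TRACE1(work_done, x);
    return x;
}
```

The prefixed names cannot collide with a TRACE() macro from any third-party header, which is the point of keeping PG_.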

regards, tom lane



Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace

2006-07-21 Thread korry






  
  On Fri, Jul 21, 2006 at 01:42:26PM +0200, Peter Eisentraut wrote:
   I would prefer to drop the PG_ prefixes on PG_TRACE and pg_trace.h.  We
   know which software we're dealing with.
 
  I don't know. "trace" is a fairly generic word; how do you know that
  none of the dozen other libraries we include already has a
  trace.h or a TRACE() macro, on any of our supported platforms?
 
 I concur with Martijn.  We've already regretted using ERROR as a macro
 name, let's not make the same mistake with TRACE.  PG_TRACE is good,
 and so is pg_trace.h.  (But invoking it as utils/trace.h would be ok.)

How about the obvious DTRACE( ) or some similar variant?

  -- Korry





Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL

2006-07-21 Thread Jim C. Nasby
On Fri, Jul 21, 2006 at 12:56:56AM -0700, Robert Lor wrote:
 I ran pgbench and fired up a DTrace script using the lwlock probes we've 
 added, and it looks like BufMappingLock is the most contended lock, but  
 CheckpointStartLocks are held for longer duration!
 
Not terribly surprising given that that lock can generate a substantial
amount of IO (though looking at the numbers, you might want to make
bgwriter more aggressive). Also, that's a shared lock, so it won't have
nearly the impact that BufMappingLock does.

 Lock Id              Mode        Count
 ControlFileLock      Exclusive       1
 SubtransControlLock  Exclusive       1
 BgWriterCommLock     Exclusive       6
 FreeSpaceLock        Exclusive       6
 FirstLockMgrLock     Exclusive      48
 BufFreelistLock      Exclusive      74
 BufMappingLock       Exclusive      74
 CLogControlLock      Exclusive     184
 XidGenLock           Exclusive     184
 CheckpointStartLock  Shared        185
 WALWriteLock         Exclusive     185
 ProcArrayLock        Exclusive     368
 CLogControlLock      Shared        552
 SubtransControlLock  Shared       1273
 WALInsertLock        Exclusive    1476
 XidGenLock           Shared       1842
 ProcArrayLock        Shared       3160
 SInvalLock           Shared       3684
 BufMappingLock       Shared      14578
 
 Lock Id                  Combined Time (ns)
 ControlFileLock                        7915
 BgWriterCommLock                      43438
 FreeSpaceLock                            39
 BufFreelistLock                      448530
 FirstLockMgrLock                    2879957
 CLogControlLock                     4237750
 SubtransControlLock                 6378042
 XidGenLock                          9500422
 WALInsertLock                      16372040
 SInvalLock                         23284554
 ProcArrayLock                      32188638
 BufMappingLock                    113128512
 WALWriteLock                      142391501
 CheckpointStartLock              4171106665
 
 
 Regards,
 -Robert
 
  
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
   http://archives.postgresql.org
 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL

2006-07-21 Thread Sven Geisler
Hi,

Tom Lane schrieb:
 Robert Lor [EMAIL PROTECTED] writes:
 I ran pgbench and fired up a DTrace script using the lwlock probes we've 
 added, and it looks like BufMappingLock is the most contended lock, but  
 CheckpointStartLocks are held for longer duration!
 
 Those numbers look a bit suspicious --- I'd expect to see some of the
 LWLocks being taken in both shared and exclusive modes, but you don't
 show any such cases.  You sure your script is counting correctly?
 Also, it'd be interesting to count time spent holding shared lock
 separately from time spent holding exclusive.

Is there a test case which shows the contention for these fully cached
tables? It would be nice to have measurable numbers like context
switches and queries per second.

Sven.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace

2006-07-21 Thread Tom Lane
korry [EMAIL PROTECTED] writes:
 How about the obvious DTRACE(  ) or some similar variant?

Because it's supposed to be generic, ie, not strictly tied to DTrace.
(I'm not sure there is any realistic other alternative at the moment,
but that's the idea...)

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace patch

2006-07-21 Thread Robert Lor

Peter Eisentraut wrote:

I would prefer to drop the PG_ prefixes on PG_TRACE and pg_trace.h.  We 
know which software we're dealing with.


 


I also agree with Martijn and Tom to keep the PG_ prefixes.

We should probably move the probes file to a subdirectory.  Anyone know 
a good place?


Also, again, the pgsql prefix should be dropped.
 

To keep it consistent with the header file, perhaps it can be renamed to 
pg_probes.d


Certainly doable, but will that be more reliable?  Can't we convince 
dtrace to create binaries for the host platform by default?
 

The user needs to have the flexibility to build a 32-bit PG binary even 
when running a 64-bit kernel. If I understand you correctly, your 
suggestion would not allow a 32-bit binary to be built on a 64-bit OS.



3)  When using --enable-depend, gmake clean removes all *.d files,
   



I'm working on renaming the dependency files.

 


Excellent!

Thanks Peter for your help!

Regards,
-Robert


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace

2006-07-21 Thread Robert Lor

korry wrote:


How about the obvious DTRACE(  ) or some similar variant?


The idea is to keep the macro name generic since it can be mapped to 
other tracing facilities on other platforms.


Regards,
-Robert

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[HACKERS] [RTLWS8-CFP] Eighth Real-Time Linux Workshop 2nd CFP

2006-07-21 Thread mcguire

We apologize for multiple receipts.







  Eighth Real-Time Linux Workshop

October 12-15, 2006
 Lanzhou University - SISE
  Tianshui South Road 222
   Lanzhou, Gansu 73
 P.R.China


  General

   Following the meetings of developers and users at the previous seven
   successful real-time Linux workshops held in Vienna, Orlando, Milano,
   Boston, Valencia, Singapore, and Lille, the Real-Time Linux Workshop
   for 2006 will come back to Asia, to be held at the School of
   Information Science and Engineering, Lanzhou University, in Lanzhou,
   China.

   Embedded and real-time Linux is rapidly gaining traction in the Asia
   Pacific region. Embedded systems in both automation/control and
   entertainment are moving to 32/64-bit systems, opening the door for
   the use of full-featured OSes like GNU/Linux on COTS-based systems.
   With real-time capabilities a common demand for embedded systems, the
   soft and hard real-time variants are an important extension to the
   versatile GNU/Linux GPOS.

   Authors  are  invited  to  submit  original  work dealing with general
   topics  related  to  real-time  Linux  research,  experiments and case
   studies,  as  well  as issues of integration of real-time and embedded
   Linux.  A  special focus will be on industrial case studies. Topics of
   interest include, but are not limited to:

 * Modifications and variants of the GNU/Linux operating system
   extending its real-time capabilities,
 * Contributions to real-time Linux variants, drivers and extensions,
 * User-mode real-time concepts, implementation and experience,
 * Real-time Linux applications, in academia, research and industry,
 * Work in progress reports, covering recent developments,
 * Educational material on real-time Linux,
 * Tools for embedding Linux or real-time Linux and embedded
   real-time Linux applications,
 * RTOS core concepts, RT-safe synchronization mechanisms,
 * RT-safe interaction of RT and non RT components,
 * IPC mechanisms in RTOS,
 * Analysis and Benchmarking methods and results of 
   real-time GNU/Linux variants,
 * Debugging techniques and tools, both for code and temporal
   debugging of core RTOS components, drivers and real-time
   applications,
 * Real-time related extensions to development environments.
  
  Further information:
 
  EN: http://www.realtimelinuxfoundation.org/events/rtlws-2006/ws.html 
  CN: http://dslab.lzu.edu.cn/rtlws8/index.html

  Awarded papers

  The Programme Committee will award a best paper in the category Real-
  Time Systems Theory. This best paper will be invited for publication
  in the Real-Time Systems Journal, RTSJ.

  The Programme Committee will award a best paper in the category Real-
  Time Systems Application. This best paper will be invited for publication
  in Dr. Dobb's Journal. Moreover, the publication of the other papers in
  a special issue of Dr. Dobb's Journal is under discussion.

  Abstract submission

  In order to register an abstract, please go to:
  http://www.realtimelinuxfoundation.org/rtlf/register-abstract.html

  Venue

  Lanzhou University Information Building, School of Information Science
  and Engineering, Lanzhou University, http://www.lzu.edu.cn/.

  Registration

  In order to participate in the workshop, please register on the
  registration page at:
  http://www.realtimelinuxfoundation.org/rtlf/register-participant.html

  Accommodation

  Please refer to the Lanzhou hotel page for accommodation at
  http://dslab.lzu.edu.cn/rtlws8/hotels/hotels.htm

  Travel information

  For travel information and directions how to get to Lanzhou from an 
  international airport in China please refer to:
  http://www.realtimelinuxfoundation.org/events/rtlws-2006/

  Important dates

  August 28:     Abstract submission
  September 15:  Notification of acceptance
  September 29:  Final paper

  Panel Participants:

 o Roberto Bucher - Scuola Universitaria Professionale della Svizzera
   Italiana, Switzerland, RTAI/ADEOS/RTAI-Lab.

 o Alfons Crespo Lorente - University of Valencia, Spain, Departament
   d'Informàtica de Sistemes i Computadors, XtratuM.

 o Herman Haertig - Technical University Dresden, Germany, Institute for
   System Architecture, L4/Fiasco/L4Linux.

 o Nicholas Mc Guire - Lanzhou University, P.R. China, Distributed and
   Embedded Systems Lab, RTLinux/GPL.

 o Douglas Niehaus - University of Kansas, USA, Information and
   Telecommunication Technology Center, RT-preempt.

  Organization committee:

 * Prof. Li LIAN (Co-Chair), (SISE, Lanzhou University, CHINA)
 * Xiaoping ZHANG, LZU, CHINA
 * 

Re: [HACKERS] contrib promotion?

2006-07-21 Thread Jim C. Nasby
On Fri, Jul 14, 2006 at 08:08:11PM +0200, Stefan Kaltenbrunner wrote:
 Greg Sabino Mullane wrote:
  
  Doesn't our inclusion of md5() pretty much blow that argument away?
  (Just asking).
  I don't think so because md5 is just a one way hash function. There
  is no method to decrypt anything :).
  
  Actually, I've had to install pgcrypto on more than one occasion for
  clients who needed to have sha1 instead of md5. I've had to install
  pgcrypto for other functions as well, so +1 for me on coring it, but
  at the least please consider adding in sha1.
 
 I don't have a very strong opinion on that, but sha1() is something I
 need on a regular basis too from pgcrypto.

sha1 would be nice, as would actual datatypes for them (though the
datatypes are probably better left to pgFoundry).
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace patch

2006-07-21 Thread Peter Eisentraut
Robert Lor wrote:
 The user needs to have the flexibility to build a 32 bit PG binary
 even when he run the 64 bit kernel. If I understand you correctly,
 your suggestion will not allow a 32 bit binary to be built on a 64
 bit OS.

I'm not sure about the context.  How do you control whether the 
PostgreSQL binaries you are about to build end up 32 bit or 64 bit?  
Presumably there is some default, and you switch it using CFLAGS or 
LDFLAGS.  Then it would make sense to let dtrace be controlled by 
DTRACE_FLAGS or some such.  But what does dtrace do if no flag at all 
is given?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Loading the PL/pgSQL debugger (and other plugins)

2006-07-21 Thread korry




Sorry to poke - but I'd like to get a patch submitted next week. Any
more comments? Thanks.

   -- Korry

Thanks for the quick feedback.
  

  1) I think the most straightforward way to load an instrumentation 
plugin is to define a new custom GUC variable (using the 
custom_variable_classes mechanism).



This seems a bit messy and special-purpose.  
  
Agreed, I'm not crazy about using a custom_variable_class variable
either.
  
I see no good reason to tie
it to plpgsql; we'll just need another one for every other language.
  
  
Hmmm... but the plugins themselves would be language-specific. I can't
imagine that a plugin (say a profiler) for PL/python would work for
PL/pgSQL. It seems to me that, even if we come up with a common
mechanism, we'll still need a separate GUC variable *name* for each
PL. Or am I not understanding something? Can you post an example of
what you are thinking (what would such a GUC variable look like)?
  
  
IMHO what we want is something with similar properties to preload_libraries,
but processed on a per-backend basis instead of once at postmaster start.
(You could almost just tell people to select the plugin they want by
LOADing it, but that is hard to use if you're trying to debug a
non-interactive application.  A GUC variable can be set for an app
without much cooperation from the app.)
  
  
Agreed. 
  
When the plugin's shared library gets loaded, one way or the other,
it should construct the function-pointer struct and then pass it to a
function defined by plpgsql (this lets us hide/postpone the decision
about whether there can be more than one active plugin).
  
  
But there's a timing issue there. If you ask the plugin to call a
call-handler function, then you can't load the plugin at backend
startup because the PL/pgSQL call-handler isn't loaded until it's
required. Since both the plugin and the call-handler are dynamically
loaded, I think one of them has to load the other. We already have a
mechanism for loading call-handlers on demand - it seems kind of messy
to introduce another mechanism for loading plugins (that in turn load
the call-handlers).
  
The PL/pgSQL call-handler has a convenient initialization function that
could read the GUC variable and load the referenced plugin (that's what
I'm doing right now).
  
What I'm thinking is that the plpgsql_init() function would look
something like this (the plugin-loading code is the new part):

  PLpgSQL_plugin pluginHooks;
  typedef void (*plugin_loader_func)(PLpgSQL_plugin *hooks);

  void
  plpgsql_init(void)
  {
      static char *pluginName = NULL;
      plugin_loader_func plugin_loader;

      /* Do initialization only once */
      if (!plpgsql_firstcall)
          return;

      plpgsql_HashTableInit();
      RegisterXactCallback(plpgsql_xact_cb, NULL);
      plpgsql_firstcall = false;

      /* Load any instrumentation plugins */
      DefineCustomStringVariable("plpgsql.plugin",
                                 "Name of instrumentation plugin to use "
                                 "when a PL/pgSQL function is invoked",
                                 NULL,
                                 &pluginName,
                                 PGC_USERSET,
                                 NULL,
                                 NULL);

      EmitWarningsOnPlaceholders("plpgsql");

      if (pluginName)
      {
          plugin_loader = (plugin_loader_func)
              load_external_function(pluginName, "plugin_loader", false, NULL);

          if (plugin_loader)
              (*plugin_loader)(&pluginHooks);
      }
  }
  
(Ignore the custom variable stuff for now)
  
Each plugin would export a plugin_loader() function - that function,
given a pointer to a PLpgSQL_plugin structure, would fill in that
structure with the required function pointers. 
  
One issue that needs to be thought about with either this proposal or
your original is what permissions are needed to set the GUC variable.
I don't think we dare allow non-superusers to specify LOADing of
arbitrary shared libraries, so there has to be some filter function.

Perhaps a better way is that the GUC variable specifies a (list of)
initialization functions to call at backend start, and then the
superuserness is involved with installing the init functions into
pg_proc, and the GUC variable itself needs no special permissions.
Again, a plugin's init function would just register its function-pointer
struct with plpgsql.
  
  
You're right, privileges are an issue. Is it safe enough if we force
all plugins to reside in $libdir? Each plugin could enforce additional
security as needed that way, but you'd have to hold enough privileges
to get your plugin into $libdir to begin with so you can't write your
own nasty plugin to gain more privileges than you ought to have.
  
We should also think about a deregistration function.  This would allow
you to turn debugging on and off within an interactive session.  The
GUC variable is really only for coercing non-interactive applications
into being debuggable --- I don't see it as being important for
interactive debugging, as compared to just "select plugin_init();" ...
  
  
Ok.
  

  3) Any comments on the PLpgSQL_plugin structure?  Should it include (as 
its first member) a 

Re: [HACKERS] Units in postgresql.conf

2006-07-21 Thread Josh Berkus
Gavin, Peter,

 I would imagine that Peter intends to handle backward compatibility by
 processing values without explicit units in the units assumed pre 8.2.

Aha, I misunderstood.

-- 
Josh Berkus
PostgreSQL @ Sun
San Francisco

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Transaction Speed and real time database

2006-07-21 Thread moises
Hello, 
Thanks for your answer.
Maybe I am asking the wrong question, but I just need some rough ideas;
like you say, "Postgres can insert 100,000 rows per second" would be a good
approximation...
On the other hand, if you visit some links like
http://developer.db4o.com/forums/post/25599.aspx
you will find something like this (translated from Spanish):
"You can store 250 million objects with just a constant 40 MB of RAM and
constant insertion rates of about 8,000 objects per second -- these
measurements were taken on a 5600 rpm HDD. 100,000 objects were stored with
less than a constant 2 MB of RAM. Db4o can store 10 million objects in 400
milliseconds."

As you can see, these numbers are just what I want, but for Postgres.

I know that hardware is very important, together with the OS and the
configuration of shared memory, System V settings, etc., but I am just
looking for an idea. Suppose everybody tells me that Postgres is too slow
for real-time databases; then I will have my hands full trying to solve
these problems with Postgres, don't you think?

Thanks again.

Moises

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On behalf of Martijn van
Oosterhout
Sent: Friday, July 21, 2006 16:19
To: moises
CC: 'Adnan DURSUN'; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Transaction Speed and real time database

On Fri, Jul 21, 2006 at 09:38:41AM +0200, moises wrote:
 I want to know, for a hypothetical server, how many transactions postgres
 supports, as a first approximation.
 
 I found this data for MySQL and DB4o databases but I can't find any for
 Postgres.

I think you're asking the wrong question. I think what you want is
"what performance does postgresql get on benchmark X with hardware Y".
I don't believe you actually get performance numbers for other
databases without a whole list of constraining factors. You need to be
a lot more specific about what you want to know. You can find
benchmarks on the web, but really you need to evaluate whether they're
testing what you want to know.

But ok, here's a number: I can insert 100,000 rows per second on this
machine next to me. How does that help you? It doesn't: you don't know
the machine or the software or how it's configured or what I'm
inserting. You don't even know if this machine exists (hint: it
doesn't).

Also, insert/update speeds are useless measurements. If all you want is
that, please look at Berkeley DB. PostgreSQL is an SQL compliant
database.

Anyway, here are some sites to help you formulate your question better:

http://www.geocities.com/mailsoftware42/db/
http://www.devx.com/dbzone/Article/29480?trk=DXRSS_DB

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to
litigate.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Transaction Speed and real time database

2006-07-21 Thread Csaba Nagy
 [snip] Suppose everybody tells me that Postgres is too slow for real-time
 databases; then I will have my hands full trying to solve these problems
 with Postgres, don't you think?

I think you didn't understand correctly: postgres is not slow, it is
just not suitable for real RT applications for a few reasons, which in
fact make other databases also unsuitable for this purpose.

The main concern is that an RT application usually needs predictable
response times, possibly with guarantees for upper bounds on response
times... and most databases which are transactional and offer
concurrent access won't give you such guarantees, due to locking issues.

The question is whether your application is really RT in the proper sense
of the word, or just an OLTP application which needs to be fast but
won't cause a nuclear explosion if one response in 100 is slower
than expected... in that case postgres might be good for you.

Cheers,
Csaba.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Transaction Speed and real time database

2006-07-21 Thread Andrew Dunstan


What you are asking is essentially the equivalent of asking "How long is 
a piece of string?" The question is meaningless and so will be any 
answer. The fact that there are web sites which are happy to supply you 
with meaningless information doesn't mean we should follow suit.


And frankly, I would be very dubious about using PostgreSQL or just 
about any other RDBMS in a Real Time System, hard or soft.


cheers

andrew

moises wrote:
Hello, 
Thanks for your answer.

Maybe I am asking the wrong question, but I just need some rough ideas;
like you say, "Postgres can insert 100,000 rows per second" would be a good
approximation...
On the other hand, if you visit some links like
http://developer.db4o.com/forums/post/25599.aspx
you will find something like this (translated from Spanish):
"You can store 250 million objects with just a constant 40 MB of RAM and
constant insertion rates of about 8,000 objects per second -- these
measurements were taken on a 5600 rpm HDD. 100,000 objects were stored with
less than a constant 2 MB of RAM. Db4o can store 10 million objects in 400
milliseconds."

As you can see, these numbers are just what I want, but for Postgres.

I know that hardware is very important, together with the OS and the
configuration of shared memory, System V settings, etc., but I am just
looking for an idea. Suppose everybody tells me that Postgres is too slow
for real-time databases; then I will have my hands full trying to solve
these problems with Postgres, don't you think?

Thanks again.

Moises

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On behalf of Martijn van
Oosterhout
Sent: Friday, July 21, 2006 16:19
To: moises
CC: 'Adnan DURSUN'; pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Transaction Speed and real time database

On Fri, Jul 21, 2006 at 09:38:41AM +0200, moises wrote:
  

I want to know, for a hypothetical server, how many transactions postgres
supports, as a first approximation.

I found this data for MySQL and DB4o databases but I can't find any for
Postgres.



I think you're asking the wrong question. I think what you want is
"what performance does postgresql get on benchmark X with hardware Y".
I don't believe you actually get performance numbers for other
databases without a whole list of constraining factors. You need to be
a lot more specific about what you want to know. You can find
benchmarks on the web, but really you need to evaluate whether they're
testing what you want to know.

But ok, here's a number: I can insert 100,000 rows per second on this
machine next to me. How does that help you? It doesn't: you don't know
the machine or the software or how it's configured or what I'm
inserting. You don't even know if this machine exists (hint: it
doesn't).

Also, insert/update speeds are useless measurements. If all you want is
that, please look at Berkeley DB. PostgreSQL is an SQL compliant
database.

Anyway, here are some sites to help you formulate your question better:

http://www.geocities.com/mailsoftware42/db/
http://www.devx.com/dbzone/Article/29480?trk=DXRSS_DB

Hope this helps,
  



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Loading the PL/pgSQL debugger (and other plugins)

2006-07-21 Thread John DeSoi

Hi Korry,

On Jul 21, 2006, at 12:51 PM, korry wrote:

Sorry to poke - but I'd like to get a patch submitted next week.   
Any more comments?  Thanks.



I'm unqualified to comment on the server side design, but I was  
wondering if there was consensus on how the client interface to the  
debugger would work. From previous threads I saw DBGP mentioned  
(http://xdebug.org/docs-dbgp.php), but I don't recall seeing any  
final commitment to it.


Thanks,

John




John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Loading the PL/pgSQL debugger (and other plugins)

2006-07-21 Thread korry


I'm unqualified to comment on the server side design, but I was 
wondering if there was consensus on how the client interface to the 
debugger would work. From previous threads I saw DBGP mentioned 
(http://xdebug.org/docs-dbgp.php), but I don't recall seeing any final 
commitment to it.
The patch that I'll be submitting for 8.2 will implement a way to 
instrument PL/pgSQL (and that idea can be extended to other PL 
languages).  'Instrumentation' can mean different things - it may be a 
debugger, a profiler, a coverage analyzer, a tracer, ... 

EnterpriseDB has developed a few plugins that we'll be contributing soon 
(a debugger, a profiler, and a tracer).  The debugger is by far the 
largest plugin that we've developed and we implemented it before we had 
the idea to use a modular architecture (we're still in the process of 
converting the debugger to modular form, at the moment it's pretty 
heavily integrated into the PL/pgSQL interpreter).  As soon as we get a 
patch in for the plugin architecture, we'll open-source at least one or 
two of the plugins so others can use them and/or write more (the 
debugger will take a little longer). 

That means that we (i.e. the community) haven't made a firm commitment 
to the debugger client protocol.  I can tell you a little about the 
protocol that we are currently using, but it may change by the time 
we're ready to open-source the debugger.  I gave a presentation at the 
anniversary summit that described the overall architecture and also 
showed the client/server protocol - the slides and audio should be 
available at the conference web site real soon now. 

The most important part, from your perspective (assuming that you might 
want to add a debugger to pgEdit), is the method that a debugger client 
application uses to interact with the debugger server.  That's done 
through a collection of server-side functions that you can call from any 
libpq application.  For example, to set a breakpoint, you would:


   SELECT * FROM pldbg_set_breakpoint( sessionHandle, functionOID, 
lineNumber, processID );


to step/over:

   SELECT * FROM pldbg_step_over( sessionHandle );

to step/into:

   SELECT * FROM pldbg_step_into( sessionHandle );

to get a copy of all local variables:

   SELECT * FROM pldbg_get_variables( sessionHandle, stackFrame );

and so on.  There are a few functions that you can call to attach your 
debugger client to a target server and to set global breakpoints.


I'll be posting more information as we get closer to releasing this stuff. 


-- Korry

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] Loading the PL/pgSQL debugger (and other plugins)

2006-07-21 Thread Tom Lane
korry [EMAIL PROTECTED] writes:
 I see no good reason to tie
 it to plpgsql; we'll just need another one for every other language.
 
 Hmmm... but the plugins themselves would be language-specific.

You miss my point.  The plugins will be language-specific but the
mechanism for selecting/loading them shouldn't be.

 When the plugin's shared library gets loaded, one way or the other,
 it should construct the function-pointer struct and then pass it to a
 function defined by plpgsql (this lets us hide/postpone the decision
 about whether there can be more than one active plugin).
 
 But there's a timing issue there.  If you ask the plugin to call a 
 call-handler function, then you can't load the plugin at backend startup 
 because the PL/pgSQL call-handler isn't loaded until it's required.  
 Since both the plugin and the call-handler are dynamically loaded, I 
 think one of them has to load the other.

Right, but if you set up the mechanism such that each individual PL is
responsible for loading plugins, then we'll have to duplicate all that
code each time we instrument another PL.  I want to do as much as
possible of the work in the core code so that we don't end up with
duplicate code to maintain.

That being the case, I don't see anything wrong with having the
selection mechanism pull in the selected plugin(s) and then those
force loading of the language handlers so that they can call the plugin
installation function.  Sure, sometimes this would result in loading
a plugin and handler that don't get used in the current session, but
given that people would only load plugins they intend to use, I don't
see that as a significant objection.

I'm thinking that the cleanest way to handle this would be to add
another column to pg_language containing the OID of the plugin receptor
function for each PL.  Then the plugin just calls that function passing
its constructed function-pointer struct.  This eliminates the need for
hard-wired assumptions about function names and so forth, and also lets
you use the existing fmgr functionality to pull in the PL's handler
library.  OTOH this requires extending the syntax of CREATE LANGUAGE
and so on.  That is all doable (it's basically the same kind of work
that got done when we added validator functions for PLs) but it might
be more work than we think the plugin idea is worth.

To do it without a pg_language column, we'd need code in each plugin to
identify the language shared library (by looking in pg_language), force
loading of same (using existing fmgr code), and look up and call a
plugin receptor function given an expected C-code name for it (again,
most of this already exists in fmgr).  It's not a huge amount of code,
probably, but again duplicating it in each plugin seems unappealing.
I suppose we could make fmgr export a general function to find a plugin
receptor function given the PL name and the expected C symbol.

Comments anyone?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[HACKERS] gin support for tsearch2

2006-07-21 Thread Monica D'Arcy
I thought I had successfully added the patch for gin support to my  
8.1.4 installation, however I get the error: access method gin does  
not exist.  There is no record for the gin index in the pg_am  
table, which I assume needs to be present.  I have restarted  
postgres.  Is there anything additional I should be doing in order to  
have gin functionality?


Thanks for any help... I am very excited about getting this to work,
Monica

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] gin support for tsearch2

2006-07-21 Thread Oleg Bartunov

You need initdb

Oleg
On Fri, 21 Jul 2006, Monica D'Arcy wrote:

I thought I had successfully added the patch for gin support to my 8.1.4 
installation, however I get the error: access method gin does not exist. 
There is no record for the gin index in the pg_am table, which I assume 
needs to be present.  I have restarted postgres.  Is there anything 
additional I should be doing in order to have gin functionality?


Thanks for any help... I am very excited about getting this to work,
Monica

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq


Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] gin support for tsearch2

2006-07-21 Thread Oleg Bartunov

I forgot to note that this patch doesn't contain the latest improvements Teodor
made at the Code Sprint - a significant speedup of index creation.

Oleg

On Fri, 21 Jul 2006, Monica D'Arcy wrote:

I thought I had successfully added the patch for gin support to my 8.1.4 
installation, however I get the error: access method gin does not exist. 
There is no record for the gin index in the pg_am table, which I assume 
needs to be present.  I have restarted postgres.  Is there anything 
additional I should be doing in order to have gin functionality?


Thanks for any help... I am very excited about getting this to work,
Monica

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq


Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] Loading the PL/pgSQL debugger (and other plugins)

2006-07-21 Thread korry

  When the plugin's shared library gets loaded, one way or the other,
it should construct the function-pointer struct and then pass it to a
function defined by plpgsql (this lets us hide/postpone the decision
about whether there can be more than one active plugin).


But there's a timing issue there.  If you ask the plugin to call a 
call-handler function, then you can't load the plugin at backend startup 
because the PL/pgSQL call-handler isn't loaded until it's required.  
Since both the plugin and the call-handler are dynamically loaded, I 
think one of them has to load the other.

Right, but if you set up the mechanism such that each individual PL is
responsible for loading plugins, then we'll have to duplicate all that
code each time we instrument another PL.  I want to do as much as
possible of the work in the core code so that we don't end up with
duplicate code to maintain.

I think I'm missing something important here. 

At minimum, you need a way to identify a plugin (or a list of plugins),
and, if we generalize the mechanism, a way to identify the language
that that plugin is associated with (like, this profiler works with
PL/tcl, this debugger works with PL/Java, ...).

Once you have that, you've got two choices: 

1) The plugin loads the language
  or
2) The language loads the plugin

You are suggesting option 1. That means that we must:

a) come up with a way to identify the set of plugins
desired (probably some GUC variables?)
b) Extend the pg_language structure
c) Extend the CREATE LANGUAGE statement
d) come up with a way for the backend to load the plugins (the backend
already knows how to load a language-handler)
e) add loader code to each plugin (there should be more plugins than
languages eventually)
f) add loader code to each language (at least each language that wants
to support a plugin)


On the other hand, if the language loads the plugin, we must:

a) come up with a way to identify the set of plugins
desired (probably some GUC variables?)
b) add loader code to each plugin
c) add loader code to each language (that wants to support a plugin)


In either case, the loader code in the language-handlers and the loader
code
in the plugins could be simple calls to common functions that are
defined in the core, avoiding a lot of duplicate code. For example,
each language handler (in its initialization code) could include a
call such as:

 pl_load_plugins( "pl/pgsql", functionPointers );

or

 pl_load_plugins( "pl/java", functionPointers );

pl_load_plugins() would reside in the core, it would find the list of
plugins, load each one, find the plugin's initialization function, and
call that function with functionPointers (the initializer would
fill in the functionPointers structure).
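The flow described above - the core finds each plugin, loads it, and hands its initializer a structure of function pointers to fill in - can be sketched abstractly. This is Python purely for illustration (the real code would be backend C using the existing dynamic-loader machinery), and every name here is hypothetical:

```python
# Illustrative sketch of the plugin-loading flow; all names are made up.

class FunctionPointers:
    """Stands in for the struct of hooks a plugin fills in."""
    def __init__(self):
        self.func_beg = None   # called when a PL function starts
        self.func_end = None   # called when a PL function ends

def trace_plugin_init(ptrs):
    """A 'plugin' initializer: fills in only the hooks it cares about."""
    ptrs.func_beg = lambda name: print("enter %s" % name)
    ptrs.func_end = lambda name: print("leave %s" % name)

# Registry standing in for "find the list of plugins for this PL,
# load each shared library, look up its initialization function".
PLUGINS = {"pl/pgsql": [trace_plugin_init]}

def pl_load_plugins(language, ptrs):
    """Core-provided helper: let each plugin registered for this PL
    fill in the function-pointer structure."""
    for init in PLUGINS.get(language, []):
        init(ptrs)
    return ptrs

def run_pl_function(ptrs, name):
    """The language handler invokes whichever hooks are set."""
    if ptrs.func_beg:
        ptrs.func_beg(name)
    # ... interpret the PL function body here ...
    if ptrs.func_end:
        ptrs.func_end(name)

if __name__ == "__main__":
    hooks = pl_load_plugins("pl/pgsql", FunctionPointers())
    run_pl_function(hooks, "my_func")
```

The point of the sketch is that the registry lookup and the iteration live in one shared helper, while each language handler only makes the single `pl_load_plugins(...)` call.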

So what am I missing? What's the advantage to having the plugin load
the language?


To do it without a pg_language column, we'd need code in each plugin to
identify the language shared library (by looking in pg_language), force
loading of same (using existing fmgr code), and look up and call a
plugin receptor function given an expected C-code name for it (again,
most of this already exists in fmgr).  It's not a huge amount of code,
probably, but again duplicating it in each plugin seems unappealing.
I suppose we could make fmgr export a general function to find a plugin
receptor function given the PL name and the expected C symbol.

That's what I was thinking too. But we could avoid hard-coded names
using a syntax similar to preload_libraries (each entry in
preload_libraries can contain the name of an optional initialization
function). If you specify libraryName:functionName, we would assume
that functionName was the loader function, if you just specify
libraryName, we could look for a hard-coded default.
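The preload_libraries-style syntax suggested here - `libraryName:functionName`, falling back to a hard-coded default when only the library is named - might be parsed like this (Python, illustrative only; the default symbol name is an assumption):

```python
DEFAULT_INIT = "_PG_plugin_init"   # hypothetical hard-coded default name

def parse_plugin_entry(entry):
    """Split a 'library:function' entry; when no function is given,
    fall back to the default initializer name."""
    lib, sep, func = entry.partition(":")
    return lib, (func if sep else DEFAULT_INIT)

print(parse_plugin_entry("plugin_debugger:debugger_init"))
print(parse_plugin_entry("plugin_profiler"))
```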


(Oh, and any more comments on security? Is it enough to require that
all plugins live in $libdir?)

   -- Korry




Re: [HACKERS] contrib promotion?

2006-07-21 Thread Jim C. Nasby
On Tue, Jul 18, 2006 at 03:37:52PM +0300, Marko Kreen wrote:
 On 7/14/06, Tom Lane [EMAIL PROTECTED] wrote:
 I don't see a strong need for moving pgcrypto into core, and there's at
 least one argument against it: if someone needs a crypto-free version of
 postgres for use someplace with benighted laws, they would be screwed.
 
 Image of hypothetical evil government is not a thing to base decisions on :)
 
 Although I've tried to develop pgcrypto to be easily mergable into core,
 I don't want to push it myself, the push should come from users.
 
 That said, there is one situation that is badly handled in the current
 setup - storing passwords in the database.  There is an md5() function in
 core, and everything in /contrib is basically invisible on the website
 and in the official docs.  So even PG core devs suggest using md5() for
 this task.  But this is inadequate - brute-forcing an md5 hash can be
 done pretty easily on today's desktop computers.  PostgreSQL itself
 can get away with it only because its regular users can't see the hash.
 But that is not so for ordinary apps.
 
 So I would like either some mention of the more useful/stable modules
 in core docs or a way for contrib modules to become 'official' add-on
 modules (like PL-s are).
 
This is actually an issue that goes way beyond pgcrypto. I think the
manual should formally mention both /contrib and pgFoundry.org as
someplace to get add-on features. An even better long-term solution
would be something akin to CPAN, but I'm not holding my breath for
that...
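On the password-storage point above, pgcrypto already provides salted, adaptive hashing via crypt() and gen_salt(). A hedged SQL sketch (the table and column names are made up; assumes the pgcrypto contrib module is installed):

```sql
-- Store a salted Blowfish crypt() hash instead of a raw md5() digest:
UPDATE users SET pw_hash = crypt('secret', gen_salt('bf'));

-- Verify a password by re-hashing with the stored value as the salt:
SELECT pw_hash = crypt('secret', pw_hash) AS ok FROM users;
```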

 Full merge into core would fix this also, but indeed there are not many
 technical reasons for it.  (And editing pg_proc.h is a PITA - I'd consider
 it a technical reason against it ;)
 
 -- 
 marko
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend
 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Transaction Speed and real time database

2006-07-21 Thread Hannu Krosing
On Fri, 2006-07-21 at 13:29, Andrew Dunstan wrote:
 What you are asking is essentially the equivalent of asking "How long is 
 a piece of string?" The question is meaningless and so will be any 
 answer. The fact that there are web sites which are happy to supply you 
 with meaningless information doesn't mean we should follow suit.
 
 And frankly, I would be very dubious about using PostgreSQL or just 
 about any other RDBMS in a Real Time System, hard or soft.

It would probably be possible to get constant-time inserts into an
indexless table, once checkpoints are spread out over the whole
checkpoint interval, as described by the NTT staff at the conference, but
this is probably the only RT compatible scenario you can expect from an
RDBMS.

Another way to get RT is just to use long required completion times and a
light enough load on the db that you always meet your time limit.

Remember, RT does not necessarily mean Fast - it just needs to be
Predictable!

-- 

Hannu Krosing
Database Architect
Skype Technologies OÜ
Akadeemia tee 21 F, Tallinn, 12618, Estonia

Skype me:  callto:hkrosing
Get Skype for free:  http://www.skype.com


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] contrib promotion?

2006-07-21 Thread Joshua D. Drake



So I would like either some mention of the more useful/stable modules
in core docs or a way for contrib modules to become 'official' add-on
modules (like PL-s are).
 
This is actually an issue that goes way beyond pgcrypto. I think the

manual should formally mention both /contrib and pgFoundry.org as
someplace to get add-on features. An even better long-term solution
would be something akin to CPAN, but I'm not holding my breath for
that...


The manual does talk about pgFoundry, especially in 8.2 (that was my 
patch ;)).


It talks about it in 8.1 as well but it is not as apparent.

Joshua D. Drake







Full merge into core would fix this also, but indeed there are not many
technical reasons for it.  (And editing pg_proc.h is a PITA - I'd consider
it a technical reason against it ;)

--
marko

---(end of broadcast)---
TIP 6: explain analyze is your friend






--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Transaction Speed and real time database

2006-07-21 Thread Andrew Dunstan

Hannu Krosing wrote:

On Fri, 2006-07-21 at 13:29, Andrew Dunstan wrote:
  
What you are asking is essentially the equivalent of asking "How long is 
a piece of string?" The question is meaningless and so will be any 
answer. The fact that there are web sites which are happy to supply you 
with meaningless information doesn't mean we should follow suit.


And frankly, I would be very dubious about using PostgreSQL or just 
about any other RDBMS in a Real Time System, hard or soft.



It would probably be possible to get constant-time inserts into an
indexless table, once checkpoints are spread out over the whole
checkpoint interval, as described by the NTT staff at the conference, but
this is probably the only RT compatible scenario you can expect from an
RDBMS.

Another way to get RT is just to use long required completion times and a
light enough load on the db that you always meet your time limit.


Remember, RT does not necessarily mean Fast - it just needs to be
Predictable!


Increasing required time constraints only lowers the likelihood that you
will fail to meet RT requirements, rather than eliminating it.

I suspect that for genuine RT performance we'd need to build in some
sort of preemption mechanism (although I freely confess this isn't my
area of expertise).

cheers

andrew


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] Loading the PL/pgSQL debugger (and other plugins)

2006-07-21 Thread Hiroshi Saito
Hi korry-san.

From: korry 


 
  I'm unqualified to comment on the server side design, but I was 
  wondering if there was consensus on how the client interface to the 
  debugger would work. From previous threads I saw DBGP mentioned 
  (http://xdebug.org/docs-dbgp.php), but I don't recall seeing any final 
  commitment to it.
 The patch that I'll be submitting for 8.2 will implement a way to 
 instrument PL/pgSQL (and that idea can be extended to other PL 
 languages).  'Instrumentation' can mean different things - it may be a 
 debugger, a profiler, a coverage analyzer, a tracer, ... 

This sounds very good. It seems the workstation (edb-debugger) could probably
be realized with the addition of some language parsers. :-)

 
 EnterpriseDB has developed a few plugins that we'll be contributing soon 
 (a debugger, a profiler, and a tracer).  The debugger is by far the 
 largest plugin that we've developed and we implemented it before we had 
 the idea to use a modular architecture (we're still in the process of 
 converting the debugger to modular form, at the moment it's pretty 
 heavily integrated into the PL/pgSQL interpreter).  As soon as we get a 
 patch in for the plugin architecture, we'll open-source at least one or 
 two of the plugins so others can use them and/or write more (the 
 debugger will take a little longer). 
 
 That means that we (i.e. the community) haven't made a firm commitment 
 to the debugger client protocol.  I can tell you a little about the 
 protocol that we are currently using, but it may change by the time 
 we're ready to open-source the debugger.  I gave a presentation at the 
 anniversary summit that described the overall architecture and also 
 showed the client/server protocol - the slides and audio should be 
 available at the conference web site real soon now. 

Great! 
Your session was wonderful. People who were not able to hear it will want to
see the slides and audio.

 
 The most important part, from your perspective (assuming that you might 
 want to add a debugger to pgEdit), is the method that a debugger client 
 application uses to interact with the debugger server.  That's done 
 through a collection of server-side functions that you can call from any 
 libpq application.  For example, to set a breakpoint, you would:
 
 SELECT * FROM pldbg_set_breakpoint( sessionHandle, functionOID, 
 lineNumber, processID );
 
 to step/over:
 
 SELECT * FROM pldbg_step_over( sessionHandle );
 
 to step/into:
 
 SELECT * FROM pldbg_step_into( sessionHandle );
 
 to get a copy of all local variables:
 
 SELECT * FROM pldbg_get_variables( sessionHandle, stackFrame );
 
 and so on.  There are a few functions that you can call to attach your 
 debugger client to a target server and to set global breakpoints.
 
 I'll be posting more information as we get closer to releasing this stuff. 

I regard this as a very great contribution! 
I was very pleased with the feel of the workstation (edb-debugger), and I
hope the same can be done for pgAdmin. I am looking forward to its
evolution. :-)

Thanks!!

Regards,
Hiroshi Saito




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[HACKERS] hot standby system

2006-07-21 Thread Chris Campbell
The documentation [1] says this about On-line backup and point-in- 
time recovery:


If we continuously feed the series of WAL files to another machine  
that has been loaded with the same base backup file, we have a hot  
standby system: at any point we can bring up the second machine  
and it will have a nearly-current copy of the database.


Is this possible today in a stable and robust way? If so, can we  
document the procedure? If not, should we alter the documentation so  
it's not misleading? I've had several people ask me where to enable  
the hot standby feature, not realizing that PostgreSQL only has  
some of the raw materials that could be used to architect such a thing.


Thanks!

- Chris

[1] http://www.postgresql.org/docs/8.1/interactive/backup-online.html
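For reference, the raw materials in 8.1 amount to continuous WAL archiving on the primary plus a hand-rolled restore loop on the standby. A rough, hedged sketch (paths and the wait script are illustrative assumptions, not a supported recipe):

```
# postgresql.conf on the primary: ship each completed WAL segment
archive_command = 'rsync %p standby:/var/lib/pgsql/wal_archive/%f'

# recovery.conf on the standby: a restore_command that blocks until each
# segment arrives keeps the standby replaying WAL indefinitely
restore_command = '/usr/local/bin/wait_for_wal.sh /var/lib/pgsql/wal_archive/%f %p'
```

The fragile parts - detecting primary failure, ending recovery, and handling partially shipped segments - are exactly what the documentation leaves to the reader, which is the gap this thread is pointing at.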


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] hot standby system

2006-07-21 Thread Joshua D. Drake


Is this possible today in a stable and robust way? If so, can we 
document the procedure? If not, should we alter the documentation so 
it's not misleading? I've had several people ask me where to enable the 
hot standby feature, not realizing that PostgreSQL only has some of 
the raw materials that could be used to architect such a thing.


Well it works fine depending on how you set it up :) Please feel free to 
submit a patch to the docs.


Sincerely,

Joshua D. Drake




Thanks!

- Chris

[1] http://www.postgresql.org/docs/8.1/interactive/backup-online.html


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org




--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] [COMMITTERS] pgsql: Ah, I finally realize why Magnus wanted to add a --bindir option

2006-07-21 Thread Peter Eisentraut
Tom Lane wrote:
 Ah, I finally realize why Magnus wanted to add a --bindir option to
 pg_regress: there's no other way to cope with testing a relocated
 installation.  Seems better to call it --psqldir though, since the
 only thing we need to find in that case is psql.

Well, if the ecpg tests materialize, maybe not.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [COMMITTERS] pgsql: Ah, I finally realize why Magnus

2006-07-21 Thread Andrew Dunstan

Peter Eisentraut wrote:

Tom Lane wrote:
  

Ah, I finally realize why Magnus wanted to add a --bindir option to
pg_regress: there's no other way to cope with testing a relocated
installation.  Seems better to call it --psqldir though, since the
only thing we need to find in that case is psql.



Well, if the ecpg tests materialize, maybe not.

Maybe installbindir or some such. As Tom has no doubt observed, this 
change also manages to handle the problem we faced in running 
installcheck under Msys, even when the installation is not relocated 
(buildfarm does not relocate the install - it installs in the configured 
path).


cheers

andrew

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] Units in postgresql.conf

2006-07-21 Thread Robert Treat
On Thursday 20 July 2006 18:16, Ron Mayer wrote:
 Peter Eisentraut wrote:
  I think it would be useful to allow units to be added to these settings,
  for example...
  shared_buffers = 512MB
  which is a bit cumbersome to calculate right now (you'd need shared_buffers = 65536).
 
  I haven't thought yet how to parse or implement this, but would people
  find this useful?

 Would this extend to things like random_page_cost and similar?

 If the random_page_cost were specifiable in seconds or ms it might be
 easier to someday write a program to measure such values on particular
 hardware platforms.   (though I guess for that to work, the config file
 would also need to add the reference cost (is it a non-random page access)
 as well...)


I'd think no, since random page cost doesn't actually map to any real world 
value.  Unless of course we wanted to add MV for magic value, but then 
people would want to use that for everything ;-D
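For the memory settings, though, the conversion the original proposal would automate is simple: strip the unit suffix and divide by the 8 kB block size. A Python sketch of the idea (illustrative only, not the eventual GUC code; assumes the default BLCKSZ):

```python
BLCKSZ = 8192  # default PostgreSQL block size in bytes

UNITS = {"kB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3}

def to_buffer_pages(setting):
    """Convert e.g. '512MB' into a number of shared_buffers pages."""
    for suffix, factor in UNITS.items():
        if setting.endswith(suffix):
            return int(setting[: -len(suffix)]) * factor // BLCKSZ
    return int(setting)  # bare number: already in pages

print(to_buffer_pages("512MB"))  # 65536, matching Peter's example
```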

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[HACKERS] cygwin initdb failure

2006-07-21 Thread Kris Jurka


The cygwin buildfarm is failing:

http://pgbuildfarm.org/cgi-bin/show_log.pl?nm=eel&dt=2006-07-21%2015:00:01

initdb failed
Examine ./log/initdb.log for the reason.

initdb.log is empty, but a dialog box pops up saying:

initdb.exe - Unable To Locate DLL
The dynamic link library cygpq.dll could not be found in the specified 
path


[where I checked out the 
source]\src\test\regress\tmp_check\install\home\mingfarm\tmp\inst\bin;
.;C:\WINNT\system; C:\WINNT; C:\cygwin\usr\local\bin; C:\cygwin\bin; 
C:\cygwin\usr\usr\X11R6\bin; [The rest of my %PATH% environment setting]


Kris Jurka

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL

2006-07-21 Thread Robert Lor

Tom Lane wrote:


Also, it'd be interesting to count time spent holding shared lock
separately from time spent holding exclusive.

 


Tom,

Here is the break down between exclusive  shared LWLocks. Do the 
numbers look reasonable to you?


Regards,
-Robert

bash-3.00# time ./Tom_lwlock_acquire.d `pgrep -n postgres`
** LWLock Count: Exclusive **
Lock Id                    Mode       Count
ControlFileLock   Exclusive   1
  FreeSpaceLock   Exclusive   9
 XidGenLock   Exclusive 202
CLogControlLock   Exclusive 203
   WALWriteLock   Exclusive 203
   BgWriterCommLock   Exclusive 222
BufFreelistLock   Exclusive 305
 BufMappingLock   Exclusive 305
  ProcArrayLock   Exclusive 405
   FirstLockMgrLock   Exclusive 670
  WALInsertLock   Exclusive1616

** LWLock Count: Shared **
Lock Id                    Mode       Count
CheckpointStartLock  Shared 202
CLogControlLock  Shared 450
SubtransControlLock  Shared 776
 XidGenLock  Shared2020
  ProcArrayLock  Shared3778
 SInvalLock  Shared4040
 BufMappingLock  Shared   40838

** LWLock Time: Exclusive **
Lock Id   Combined Time (ns)
ControlFileLock 8301
  FreeSpaceLock80590
CLogControlLock  1603557
   BgWriterCommLock  1607122
BufFreelistLock  1997406
 XidGenLock  2312442
 BufMappingLock  3161683
   FirstLockMgrLock  5392575
  ProcArrayLock  6034396
  WALInsertLock 12277693
   WALWriteLock324869744

** LWLock Time: Shared **
Lock Id   Combined Time (ns)
CLogControlLock  3183788
SubtransControlLock  6956229
 XidGenLock 12012576
 SInvalLock 35567976
  ProcArrayLock 45400779
 BufMappingLock300669441
CheckpointStartLock   4056134243


real0m24.718s
user0m0.382s
sys 0m0.181s



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace patch

2006-07-21 Thread Robert Lor

Peter Eisentraut wrote:


Robert Lor wrote:
 


The user needs to have the flexibility to build a 32-bit PG binary
even when running a 64-bit kernel. If I understand you correctly,
your suggestion will not allow a 32-bit binary to be built on a
64-bit OS.
   



I'm not sure about the context.  How do you control whether the 
PostgreSQL binaries you are about to build end up 32 bit or 64 bit?  
Presumably there is some default, and you switch it using CFLAGS or 
LDFLAGS. 

To build 64 bit binary, I use the following flag depending on the 
compiler. Without -m64 or -xtarget=native64, it defaults to 32 bit.

CC='gcc -m64'
CC='/path_to_sun_compiler/cc -xtarget=native64'

Then it would make sense to let dtrace be controlled by 
DTRACE_FLAGS or some such.  But what does dtrace do if no flag at all 
is given?
 


We want to be able to set DTRACEFLAGS to 32 or 64 (e.g. DTRACEFLAGS='64').

If DTRACEFLAGS is not set, can we provide a default value of 32? 
Otherwise, the compile will fail. It's also possible that the CC and 
DTRACEFLAGS are in conflict, and in that case the compile will also 
fail, which is probably okay.


Regards,
-Robert


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] Generic Monitoring Framework with DTrace patch

2006-07-21 Thread Robert Lor

Peter, I'll test the patch on Solaris.  Thanks!

Regards,
-Robert


Peter Eisentraut wrote:

Here is a consolidated patch that contains all the files.  I made some 
configure and makefile adjustments and put standard comment headers in 
all the files.  You can use DTRACEFLAGS to pass options to configure, 
which should help sorting out the 32/64-bit issue.  The problem of the 
*.d files is already gone in CVS.


Since I don't have access to a Solaris system, this is untested for the 
DTrace-enabled case.  The only thing left to do besides actually 
testing that case would be moving the probes.d file to a different 
location, since we probably don't want to have special-purpose files in 
src/backend.


 




diff -uNr ../cvs-pgsql/configure ./configure
--- ../cvs-pgsql/configure  2006-07-21 23:35:48.0 +0200
+++ ./configure 2006-07-22 01:21:54.0 +0200
@@ -314,7 +314,7 @@
# include unistd.h
#endif

-ac_subst_vars='SHELL PATH_SEPARATOR PACKAGE_NAME PACKAGE_TARNAME 
PACKAGE_VERSION PACKAGE_STRING PACKAGE_BUGREPORT exec_prefix prefix 
program_transform_name bindir sbindir libexecdir datadir sysconfdir 
sharedstatedir localstatedir libdir includedir oldincludedir infodir mandir 
build_alias host_alias target_alias DEFS ECHO_C ECHO_N ECHO_T LIBS 
configure_args build build_cpu build_vendor build_os host host_cpu host_vendor 
host_os PORTNAME docdir enable_nls WANTED_LANGUAGES default_port enable_shared 
enable_rpath enable_debug CC CFLAGS LDFLAGS CPPFLAGS ac_ct_CC EXEEXT OBJEXT CPP 
GCC TAS autodepend INCLUDES enable_thread_safety with_tcl with_perl with_python 
with_krb5 krb_srvtab with_pam with_ldap with_bonjour with_openssl with_zlib 
EGREP ELF_SYS LDFLAGS_SL AWK FLEX FLEXFLAGS LN_S LD with_gnu_ld ld_R_works 
RANLIB ac_ct_RANLIB TAR STRIP ac_ct_STRIP STRIP_STATIC_LIB STRIP_SHARED_LIB 
YACC YFLAGS PERL perl_archlibexp perl_privlibexp perl_useshrplib 
perl_embed_ldflags PYTHON python_version python_configdir python_includespec 
python_libdir python_libspec python_additional_libs HAVE_IPV6 LIBOBJS 
acx_pthread_config PTHREAD_CC PTHREAD_LIBS PTHREAD_CFLAGS HAVE_POSIX_SIGNALS 
MSGFMT MSGMERGE XGETTEXT localedir TCLSH TCL_CONFIG_SH TCL_INCLUDE_SPEC 
TCL_LIB_FILE TCL_LIBS TCL_LIB_SPEC TCL_SHARED_BUILD TCL_SHLIB_LD_LIBS NSGMLS 
JADE have_docbook DOCBOOKSTYLE COLLATEINDEX SGMLSPL vpath_build LTLIBOBJS'
+ac_subst_vars='SHELL PATH_SEPARATOR PACKAGE_NAME PACKAGE_TARNAME 
PACKAGE_VERSION PACKAGE_STRING PACKAGE_BUGREPORT exec_prefix prefix 
program_transform_name bindir sbindir libexecdir datadir sysconfdir 
sharedstatedir localstatedir libdir includedir oldincludedir infodir mandir 
build_alias host_alias target_alias DEFS ECHO_C ECHO_N ECHO_T LIBS 
configure_args build build_cpu build_vendor build_os host host_cpu host_vendor 
host_os PORTNAME docdir enable_nls WANTED_LANGUAGES default_port enable_shared 
enable_rpath enable_debug DTRACE DTRACEFLAGS enable_dtrace CC CFLAGS LDFLAGS 
CPPFLAGS ac_ct_CC EXEEXT OBJEXT CPP GCC TAS autodepend INCLUDES 
enable_thread_safety with_tcl with_perl with_python with_krb5 krb_srvtab 
with_pam with_ldap with_bonjour with_openssl with_zlib EGREP ELF_SYS LDFLAGS_SL 
AWK FLEX FLEXFLAGS LN_S LD with_gnu_ld ld_R_works RANLIB ac_ct_RANLIB TAR STRIP 
ac_ct_STRIP STRIP_STATIC_LIB STRIP_SHARED_LIB YACC YFLAGS PERL perl_archlibexp 
perl_privlibexp perl_useshrplib perl_embed_ldflags PYTHON python_version 
python_configdir python_includespec python_libdir python_libspec 
python_additional_libs HAVE_IPV6 LIBOBJS acx_pthread_config PTHREAD_CC 
PTHREAD_LIBS PTHREAD_CFLAGS HAVE_POSIX_SIGNALS MSGFMT MSGMERGE XGETTEXT 
localedir TCLSH TCL_CONFIG_SH TCL_INCLUDE_SPEC TCL_LIB_FILE TCL_LIBS 
TCL_LIB_SPEC TCL_SHARED_BUILD TCL_SHLIB_LD_LIBS NSGMLS JADE have_docbook 
DOCBOOKSTYLE COLLATEINDEX SGMLSPL vpath_build LTLIBOBJS'
ac_subst_files=''

# Initialize some variables set by options.
@@ -865,6 +865,7 @@
  --disable-rpath do not embed shared library search path in executables
  --disable-spinlocks do not use spinlocks
  --enable-debug  build with debugging symbols (-g)
+  --enable-dtrace build with DTrace support
  --enable-depend turn on automatic dependency tracking
  --enable-cassertenable assertion checks (for debugging)
  --enable-thread-safety  make client libraries thread-safe
@@ -1947,6 +1948,82 @@


#
+# DTrace
+#
+
+
+
+# Check whether --enable-dtrace or --disable-dtrace was given.
+if test "${enable_dtrace+set}" = set; then
+  enableval=$enable_dtrace
+
+  case $enableval in
+yes)
+
+cat >>confdefs.h <<\_ACEOF
+#define ENABLE_DTRACE 1
+_ACEOF
+
+for ac_prog in dtrace
+do
+  # Extract the first word of "$ac_prog", so it can be a program name with 
args.
+set dummy $ac_prog; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_prog_DTRACE+set}" = set; then
+  echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+  if 

Re: [HACKERS] [COMMITTERS] pgsql: Ah, I finally realize why Magnus

2006-07-21 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 Peter Eisentraut wrote:
 Tom Lane wrote:
 Ah, I finally realize why Magnus wanted to add a --bindir option to
 pg_regress: there's no other way to cope with testing a relocated
 installation.  Seems better to call it --psqldir though, since the
 only thing we need to find in that case is psql.
 
 Well, if the ecpg tests materialize, maybe not.
 
 Maybe installbindir or some such.

The problem with --bindir is that it confuses the purpose with the
build/install-time bindir, which is actually nearly unrelated --- it'll
likely have a similar path tail, but path head could be completely
different.  (Confusing this purpose with bindir was why I failed to grok
the point to start with.)  Call it what you like so long as it ain't
bindir ...

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings