Re: [HACKERS] Replication

2006-08-21 Thread Fujii Masao

Joshua D. Drake wrote:
Modern systems *must* scale beyond a single computer, and the PostgreSQL 
support shipped in modern Linux distros is completely incapable of this.


Slony-I is quite capable as a production class FOSS replication system 
and is in use widely.


Slony-I is not enough, because it can cause data inconsistency between
servers.
IMO, log-based replication is needed for PostgreSQL too, just like MySQL.


Regards;


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Replication

2006-08-21 Thread Stefan Kaltenbrunner
Fujii Masao wrote:
 Joshua D. Drake wrote:
 Modern systems *must* scale beyond a single computer, and the
 PostgreSQL support shipped in modern Linux distros is completely
 incapable of this.

 Slony-I is quite capable as a production class FOSS replication system
 and is in use widely.
 
 Slony-I is not enough, because it can cause data inconsistency
 between servers.

hmm, what are you referring to here? Slony-I does row-level replication
(something that MySQL cannot do until 5.1, which is still beta) - so it
should not be possible for it to cause data inconsistency.
It is, however, async replication, so you can lose data committed on the
master but not yet replicated to the slaves if you lose the master
completely.


Stefan

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] BF Failure on Bandicoot

2006-08-21 Thread Magnus Hagander
  The procedure entry point ldap_start_tls_sA could not be
 located in
 the
  dynamic link library wldap32.dll.
 
  I'd counsel just leaving --with-ldap off until Magnus gets back
 from
  vacation.  We can always revert the patch later if he can't fix
 it.
 
  Looking at the MSDN documentation it seems this function only
 exists
  on Windows 2003 and Windows XP. Which is very interesting,
 because I
  have it working on one of my 2000 boxes. But I guess I got it
 through
  some patch or other.
 
  I'm going to have to redo it to dynamically load that function
 and
  exit with a proper error message if it's not found. Shouldn't be
 too
  hard, I'll get to it as soon as I can get down for a coding
 session.
 
 On my Windows 2000 (5.00.2195) Service Pack 4, the LDAP library
 does not contain any ldap_start_tls*
 
 File version of WLDAP32.DLL is 5.0.2195.7017.

Yeah, I have somehow got a version 5.2 DLL in there. I wonder where the
heck it came from :-) I'm set up with a different test box now that
clearly shows the problem, so I'll get to work on fixing it.
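The runtime-loading approach described above might look roughly like this - a sketch only, assuming the Windows Platform SDK headers; the pointer type follows MSDN's documentation of ldap_start_tls_sA, and lookup_start_tls is just an illustrative name:

```c
#include <windows.h>
#include <winldap.h>

/* Pointer type matching ldap_start_tls_sA as documented on MSDN. */
typedef ULONG (LDAPAPI *start_tls_fn)(PLDAP, PULONG, LDAPMessage **,
                                      PLDAPControlA *, PLDAPControlA *);

/* Look the function up at runtime instead of importing it, so the
 * binary still loads against a Windows 2000 wldap32.dll that lacks
 * the symbol.  Returns NULL when StartTLS is unavailable. */
static start_tls_fn
lookup_start_tls(void)
{
    HMODULE mod = LoadLibraryA("wldap32.dll");

    if (mod == NULL)
        return NULL;
    return (start_tls_fn) GetProcAddress(mod, "ldap_start_tls_sA");
}

/* The caller can then fail with a proper error message:
 *
 *   start_tls_fn start_tls = lookup_start_tls();
 *   if (start_tls == NULL)
 *       report "LDAP over TLS is not supported on this platform";
 */
```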

//Magnus

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Bug with initDB under windows 2003

2006-08-21 Thread Magnus Hagander
  I'd be interested in seeing the output from the command:
  Subinacl /service NULL
 
  On a system where this does not work.
 
 
 Here is the output for Subinacl /service NULL
 Both, for the Administrator user and for the Postgres user:

Thanks.


 Postgres user:
 Subinacl /service NULL
 SeSecurityPrivilege : Access is denied.
 WARNING :Unable to set SeSecurityPrivilege privilege. This
 privilege may be required.
 Error OpenSCManager : Access is denied.

That's quite normal - the postgres user doesn't have permission to open
the SC Manager to view the permissions, because it's not a Power User.


 Administrator user:

This is good. It shows one very clear difference from what I have on a
working system, which is:

 /pace =authenticated users  ACCESS_ALLOWED_ACE_TYPE-0x0
 SERVICE_USER_DEFINED_CONTROL-0x0100

On my system, I have:
/pace =authenticated users  ACCESS_ALLOWED_ACE_TYPE-0x0
SERVICE_QUERY_CONFIG-0x1   SERVICE_QUERY_STATUS-0x4
SERVICE_ENUMERATE_DEPEND-0x8   SERVICE_INTERROGATE-0x80
READ_CONTROL-0x2   SERVICE_USER_DEFINED_CONTROL-0x0100



So this is the problem. Now to figure out how to fix it :-) From what I
can tell, it simply needs the missing ACE flags added back. This command
should hopefully work (not tested apart from the syntax, since I don't
have a good testing place, but please try it, and if it doesn't work see
if you can figure out what to change):

Subinacl /service NULL /grant="authenticated users"=QSEILU


You need to run this as administrator of course, but it should hopefully
unlock the NUL device again.

//Magnus


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[HACKERS] Optimizer degradation since 8.0

2006-08-21 Thread Teodor Sigaev

Table with one million rows:

wow=# \d _document83
  Table public._document83
   Column   |Type | Modifiers
+-+---
 _idrref| bytea   | not null
 _marked| boolean | not null
 _date_time | timestamp without time zone | not null
 _number| character(10)   | not null
 _posted| boolean | not null
Indexes:
_document83ng_pkey PRIMARY KEY, btree (_idrref)
_document83_bydocdate_tr btree (_date_time, _idrref)
qq btree (_date_time)


Query:
SELECT
    _Date_Time,
    _IDRRef
FROM
    _Document83
WHERE
    _Date_Time = '2006-06-21 11:24:56'::timestamp AND
    _IDRRef > '\\234?PPTP00\\021\\333\\000\\367\\004:v:'::bytea
    OR _Date_Time > '2006-06-21 11:24:56'::timestamp
LIMIT 20;

Explain analyze in postgres 8.0:
 Limit  (cost=0.00..0.63 rows=20 width=44) (actual time=0.250..0.485 rows=20 loops=1)
   ->  Index Scan using qq, qq on _document83  (cost=0.00..6679.90 rows=211427 width=44) (actual time=0.238..0.416 rows=20 loops=1)
         Index Cond: ((_date_time = '2006-06-21 11:24:56'::timestamp without time zone) OR (_date_time > '2006-06-21 11:24:56'::timestamp without time zone))
         Filter: (((_date_time = '2006-06-21 11:24:56'::timestamp without time zone) AND (_idrref > '\\234?PPTP00\\021\\333\\000\\367\\004:v:'::bytea)) OR (_date_time > '2006-06-21 11:24:56'::timestamp without time zone))

 Total runtime: 2.313 ms


Explain analyze in postgres 8.1:
 Limit  (cost=0.00..2.82 rows=20 width=44) (actual time=1448.897..1610.386 rows=20 loops=1)
   ->  Seq Scan on _document83  (cost=0.00..29729.04 rows=210782 width=44) (actual time=1448.889..1610.314 rows=20 loops=1)
         Filter: ((_date_time > '2006-06-21 11:24:56'::timestamp without time zone) OR ((_date_time = '2006-06-21 11:24:56'::timestamp without time zone) AND (_idrref > E'\\234?PPTP00\\021\\333\\000\\367\\004:v:'::bytea)))

 Total runtime: 1610.524 ms
(4 rows)

With ENABLE_SEQSCAN=OFF:
 Limit  (cost=1319.83..1321.23 rows=20 width=44) (actual time=193.261..193.382 rows=20 loops=1)
   ->  Bitmap Heap Scan on _document83  (cost=1319.83..16029.62 rows=210782 width=44) (actual time=193.253..193.314 rows=20 loops=1)
         Recheck Cond: (((_date_time = '2006-06-21 11:24:56'::timestamp without time zone) AND (_idrref > E'\\234?PPTP00\\021\\333\\000\\367\\004:v:'::bytea)) OR (_date_time > '2006-06-21 11:24:56'::timestamp without time zone))
         ->  BitmapOr  (cost=1319.83..1319.83 rows=210788 width=0) (actual time=191.203..191.203 rows=0 loops=1)
               ->  Bitmap Index Scan on _document83_bydocdate_tr  (cost=0.00..2.18 rows=30 width=0) (actual time=2.470..2.470 rows=43 loops=1)
                     Index Cond: ((_date_time = '2006-06-21 11:24:56'::timestamp without time zone) AND (_idrref > E'\\234?PPTP00\\021\\333\\000\\367\\004:v:'::bytea))
               ->  Bitmap Index Scan on qq  (cost=0.00..1317.65 rows=210758 width=0) (actual time=188.720..188.720 rows=275800 loops=1)
                     Index Cond: (_date_time > '2006-06-21 11:24:56'::timestamp without time zone)

 Total runtime: 193.872 ms

So, 8.0 is at least 100 times better here. The expected number of rows is
close to the real value (~27). Rewriting the query with UNION ALL gives
better performance (about 1 ms):
 Limit  (cost=0.00..0.73 rows=20 width=44) (actual time=0.654..0.851 rows=20 loops=1)
   ->  Append  (cost=0.00..7712.53 rows=210788 width=44) (actual time=0.648..0.791 rows=20 loops=1)
         ->  Index Scan using qq on _document83  (cost=0.00..6.42 rows=30 width=44) (actual time=0.645..0.733 rows=20 loops=1)
               Index Cond: (_date_time = '2006-06-21 11:24:56'::timestamp without time zone)
               Filter: (_idrref > E'\\234?PPTP00\\021\\333\\000\\367\\004:v:'::bytea)
         ->  Index Scan using qq on _document83  (cost=0.00..5598.23 rows=210758 width=44) (never executed)
               Index Cond: (_date_time > '2006-06-21 11:24:56'::timestamp without time zone)

 Total runtime: 1.059 ms
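For reference, the UNION ALL rewrite implied by this plan would look roughly like the following (reconstructed from the plan above, so treat it as a sketch):

```sql
SELECT _Date_Time, _IDRRef
  FROM _Document83
 WHERE _Date_Time = '2006-06-21 11:24:56'::timestamp
   AND _IDRRef > '\\234?PPTP00\\021\\333\\000\\367\\004:v:'::bytea
UNION ALL
SELECT _Date_Time, _IDRRef
  FROM _Document83
 WHERE _Date_Time > '2006-06-21 11:24:56'::timestamp
LIMIT 20;
```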
But it's not always possible to rewrite an automatically generated query...

After adding an ORDER BY, postgres uses the index, but the plan becomes the
same as the one above with enable_seqscan=off.


Can I tweak something in 8.1, or is this a bug?



--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Replication

2006-08-21 Thread Fujii Masao

Stefan Kaltenbrunner wrote:

It is, however, async replication, so you can lose data committed on the
master but not yet replicated to the slaves if you lose the master
completely.


Yes, that is a shortcoming of Slony-I, I think.
Most systems will not permit committed data to be lost, so its use is limited.




IMO, log-based replication is needed also for PostgreSQL just like MySQL.


Well, I had misunderstood MySQL. Its replication is also asynchronous.

regards;

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread mark
On Mon, Aug 21, 2006 at 02:56:10PM +1200, Andrej Ricnik-Bay wrote:
 On 8/21/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Is there an interest, or any active project to examine PostgreSQL in
 the area of 64-bit processors? Has it already been done? I don't recall
 seeing a reference to it in my travels. I'm also not sure on what to
 expect for results, as the territory is still new. 64-bit processors
 have existed for a while, but 32-bit processors have been the popular
 choice, making 64-bit support an afterthought?
 That's certainly just a reference to the wintel world? AIX, HP-UX
 and Solaris-Sparc have been 64-bit for a while now...

I don't think so. In the Open Source world, most projects are still 32-bit
centric, regardless of how many years the products have been supported on
64-bit platforms.

What application were you thinking of that takes full advantage of 64-bit,
making the 64-bit application significantly faster than the 32-bit one?
The only area I am aware of is video processing.

It's often a surprise to people that an upgrade to 64-bit, regardless of
CPU architecture, too often ends up slower, rather than faster.

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread mark
On Sun, Aug 20, 2006 at 11:00:26PM -0400, Douglas McNaught wrote:
 [EMAIL PROTECTED] writes:
  Is there an interest, or any active project to examine PostgreSQL in
  the area of 64-bit processors? Has it already been done? I don't recall
  seeing a reference to it in my travels. I'm also not sure on what to
  expect for results, as the territory is still new. 64-bit processors
  have existed for a while, but 32-bit processors have been the popular
  choice, making 64-bit support an afterthought?
 I find this question a bit amusing, since PG has run on 64-bit
 architectures such as MIPS, Sparc, Alpha and PA-RISC for quite a while
 now. :)

I don't think so. Software can be designed to take best advantage of the
hardware. Recompiling it for a different architecture, running test
cases, and declaring support is not the same as optimizing for it.

 As I said in a private email to Naz, the main advantage I think you'd
 see from 64-bit is the ability to run with more than 2GB or so of
 shared buffers on a system with lots of RAM.  Whether you'd want to do
 that, or let the OS do most of the buffering, is an open question...

This is what I mean by afterthought. PostgreSQL is designed for
32-bit processors. Which is fine. I'm not complaining. The question
was whether there is an interest in pursuing 64-bit-specific
optimizations. In the PostgreSQL code, a quick check points me only to
HAVE_LONG_INT_64 as a 64-bit source code #ifdef. Of the six places
that reference it, five actually slow down the code, as they
check for overflow of the 'long int' result beyond 4 bytes of
data. The sixth place is used to define the 64-bit type used by
PostgreSQL, which I suspect is infrequently used.

I believe the answer is no. No or few 64-bit optimization possibilities
have been chased down, probably because some or many of these would:

1) require significant re-architecture

2) reduce the performance in a 32-bit world

It's a question that only half interests me. As with most projects, I
don't think this one is ready to re-architect for this
purpose. Perhaps once 50%+ of people are running PostgreSQL in 64-bit
mode, the question will be more serious to more people.

As a half-interesting question, I'm defending it as a valid one.
Please don't write it off, but it is fine to say "not yet, we have more
important things to work on".

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] snapshot replication with pg_dump

2006-08-21 Thread Paul Silveira

Yes the needs are simple.  I was also thinking about using DBI.  The most
important thing to me is that everything is kept in a transaction so that
users can still read the data while I'm snapshotting it at the same time. 
If my transaction is isolated from all the reads happening, then it
shouldn't matter how long it takes for me to move the data over (granted,
that will increase latency, but in this project that's not really too
sensitive) and it will be transparent to the end users.  

Does anyone have any examples of using pg_dump in a transaction with a
DELETE or TRUNCATE command?  I have begun writing this to get the job
done...

cat DELETE.sql COPYDATA.sql | psql -Upostgres -dMyDBName -hTestServer2

This command will combine the two sql files that I have (the first one just
deletes all from a certain table and the second one is a COPY command from a
previous pg_dump of a specific table) and then it pipes that out to psql to
run it on the remote server.  

I like what I have so far but would like to make it more dynamic.  If I
could eliminate the need for the two .sql files and make it all happen
within the command line, that would rock.  

I guess I'd need something like this... (Pseudo code...)

cat "DELETE FROM MyTable" <(pg_dump MyDBName -hTestServer1 -a -tMyTableName) |
psql -Upostgres -dMyDBName -hTestServer2


I'm not sure how to cat the DELETE onto the beginning of the COPY command
that would be delivered by pg_dump, and then pipe that complete thing to the
remote server to be executed as a transaction, so that users could still read
from that table while my command was running.

Any ideas???
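One way to sketch the "single stream, single transaction" idea in plain shell (untested; the table, file, and server names are just the examples above, and DELETE rather than TRUNCATE is deliberate, since TRUNCATE takes an exclusive lock that would block concurrent readers):

```shell
# Emit one transactional SQL stream on stdout: clear the table, then
# reload it from the COPY data file produced by an earlier
# "pg_dump -a -t MyTableName" run.
build_snapshot_sql() {
    table="$1"
    datafile="$2"
    echo 'BEGIN;'
    echo "DELETE FROM $table;"
    cat "$datafile"        # the COPY ... FROM stdin block from pg_dump
    echo 'COMMIT;'
}

# Hypothetical usage, piping the stream to the remote server:
#   build_snapshot_sql MyTable COPYDATA.sql | psql -Upostgres -dMyDBName -hTestServer2
```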

Thanks in advance,

Paul






Christopher Browne-4 wrote:
 
 [EMAIL PROTECTED] (Paul Silveira) writes:
 Does anyone have any good examples of implementing snapshot
 replication. I know that PostgreSQL does not have snapshot
 replication and that Slony-I is the recommended replication scenario,
 but I've configured it and it seems rather advanced for a shop that
 is implementing PostgreSQL for the first time.  I have an
 application that will be mostly reads and snapshot replication would
 probably be simple enough and would work.  I was thinking about just
 using pg_dump to do the trick because the DB should not get very
 large.  Does anyone have any advanced examples of doing something
 like this? Also, does anyone have any comments they'd like to share
 about this...
 
 If your database is small, and your needs simple, then using pg_dump
 to generate snapshots is a perfectly reasonable idea.
 
 I suppose the primary complication is whether or not you have multiple
 databases around on the cluster...  If you don't, or if they all need
 to be snapshotted, you might consider using pg_dumpall, which also
 creates users and databases.
 
 If pg_dumpall is unsuitable, then you'll still need to grab user
 information that isn't part of pg_dump output...
 -- 
 (reverse (concatenate 'string gro.mca @ enworbbc))
 http://www3.sympatico.ca/cbbrowne/postgresql.html
 This .signature is  shareware.  Send in $20 for  the fully registered
 version...
 
 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
 
 

-- 
View this message in context: 
http://www.nabble.com/snapshot-replication-with-pg_dump-tf2090351.html#a5907049
Sent from the PostgreSQL - hackers forum at Nabble.com.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread Florian G. Pflug

[EMAIL PROTECTED] wrote:

This is what I mean by afterthought. PostgreSQL is designed for
32-bit processors. Which is fine. I'm not complaining. The question
was whether there is an interest in pursuing 64-bit-specific
optimizations. In the PostgreSQL code, a quick check points me only to
HAVE_LONG_INT_64 as a 64-bit source code #ifdef. Of the six places
that reference it, five actually slow down the code, as they
check for overflow of the 'long int' result beyond 4 bytes of
data. The sixth place is used to define the 64-bit type used by
PostgreSQL, which I suspect is infrequently used.

I believe the answer is no. No or few 64-bit optimization possibilities
have been chased down, probably because some or many of these would:

1) require significant re-architecture

2) reduce the performance in a 32-bit world


Just out of interest - what areas in postgres do you think could be
improved (performance-wise) on 64-bit machines? The only area that
I can see is the int64 datatype - it's stored in palloc()'ed memory
on 32-bit machines, AFAIK - I'm not sure if it uses the long long
datatype on 64-bit archs. But I can't imagine any other area that
could be tuned by making use of (native) 64-bit ints.

greetings, Florian Pflug

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread Martijn van Oosterhout
On Mon, Aug 21, 2006 at 09:16:46AM -0400, [EMAIL PROTECTED] wrote:
 This is what I mean by afterthought. PostgreSQL is designed for
 32-bit processors. Which is fine. I'm not complaining. The question
 was whether there is an interest in pursuing 64-bit-specific
 optimizations. In the PostgreSQL code, a quick check points me only to
 HAVE_LONG_INT_64 as a 64-bit source code #ifdef. Of the six places
 that reference it, five actually slow down the code, as they
 check for overflow of the 'long int' result beyond 4 bytes of
 data. The sixth place is used to define the 64-bit type used by
 PostgreSQL, which I suspect is infrequently used.

There are two defines, the end result being to declare an int64 type
which is used a fair bit around the place - bigint and bigserial
being the obvious ones.

The checks I see relate to strtol, where the code only wants an int4.
There's no strtoi, so on 32-bit the range check is built in, but if long
is 64 bits you have to do the check separately.

That's just an interface problem, there's not a lot we can do about
that really.
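The separate range check being described can be sketched like this (illustrative only; parse_int32 is a made-up helper for this example, not PostgreSQL's actual input function):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse a string into an int4.  strtol() only range-checks against
 * LONG_MIN/LONG_MAX, so when long is 64 bits the int32 overflow check
 * must be done separately -- the extra test discussed above.
 * Returns 1 on success, 0 on syntax error or overflow. */
static int parse_int32(const char *s, int32_t *out)
{
    char *end;
    long v;

    errno = 0;
    v = strtol(s, &end, 10);
    if (errno == ERANGE || end == s || *end != '\0')
        return 0;
    /* Redundant when long is 32 bits; essential when it is 64 bits. */
    if (v < INT32_MIN || v > INT32_MAX)
        return 0;
    *out = (int32_t) v;
    return 1;
}
```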

 I believe the answer is no. No or few 64-bit optimization possibilities
 have been chased down, probably because some or many of these would:
 
 1) require significant re-architecture
 
 2) reduce the performance in a 32-bit world

Can you think of any places at all where 64-bit would make a difference
to processing? 64-bit gives you more memory, and on some x86 chips, more
registers, but that's it.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [HACKERS] snapshot replication with pg_dump

2006-08-21 Thread Martijn van Oosterhout
On Mon, Aug 21, 2006 at 06:40:22AM -0700, Paul Silveira wrote:
 
 Yes the needs are simple.  I was also thinking about using DBI.  The most
 important thing to me is that everything is kept in a transaction so that
 users can still read the data while I'm snapshotting it at the same time. 
 If my transaction is isolated from all the reads happening, then it
 shouldn't matter how long it takes for me to move the data over (granted,
 that will increase latency, but in this project that's not really too
 sensitive) and it will be transparent to the end users.  

Looks to me like the -c option to pg_dump should do what you want.

snip

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


[HACKERS] Unable to post to -patches (was: Visual C++ build files)

2006-08-21 Thread Magnus Hagander
One more attempt to send this, now without the attachment. It's getting
eaten somewhere. I assume it's not the moderation queue, because someone
would hopefully have gotten to it during the four weeks I was gone...

Anyway. Third take on the message itself: if it goes through, could
someone who has access please look to see where the other two emails
went? Exact dates in CEST (UTC+2) are in the forwarded headers below. For
the latest send, I got this in my mailserver logs:

Aug 21 15:51:23 altair postfix/smtp[8141]: 221DCF4104:
to=[EMAIL PROTECTED]sql.org, relay=svr1.postgresql.org[200.46.204.71],
delay=2, status=sent (250 Ok: queued as 3F45C9FB2BB)

Contains a postfix queue id that could be tracked.

//Magnus

 -Original Message-
 From: Magnus Hagander
 Sent: Monday, August 21, 2006 3:51 PM
 To: pgsql-patches@postgresql.org
 Subject: FW: Visual C++ build files
 
 Argh!
 
 I sent this patch before I left on vacation, but for some reason it
 seems it never came through - I can't seem to find it in the
 archives :-(
 
 Let's see if I can get it through this time.
 
 //Magnus
 
  -Original Message-
  From: Magnus Hagander
  Sent: Thursday, July 13, 2006 10:32 PM
  To: pgsql-patches@postgresql.org
  Subject: Visual C++ build files
 
  Attached is the current version of my scripts to build the
 complete
  PostgreSQL system using Visual C++. See archives for reasons
 why...
 
  The basic idea is to parse as much information as possible out of
 the
  existing Makefiles and configure.in, to make sure we don't add a
 huge
  maintenance headache. Most things are parsed, some are not (some
 not
  possible,some just too hard IMHO). There's also code that
 replaces
  what configure needs to do, and shellscripts like gen_fmgroids.
 
  It generates solution and project files for Visual C++ 2005,
 which
  should work fine with both the commercial and the free versions
 (some
  optimiser features etc not available in the free one, of course,
 but
  that's MS for you). System can be built both from the GUI and
 from the
  commandline.
 
  Currently only a DEBUG build can be built. Naturally, the plan is
 to
  allow RELEASE builds as well :-) That part is just not finished
 yet,
  and for now debug builds are really the only thing interesting...
 
  Per previous message, this patch will require some further work,
 but I
  wanted to get it submitted in good time before feature freeze for
  people to look over. My hope is to have it in for 8.2, but most
 likely
  we'll continue to build the release binaries with mingw for one
 more
  release at least. It would be very good to be able to build
 proper
  debug-enabled binaries to help people debug issues, though - and
 to
  get it on the buildfarm properly.
 
  So, the plan is to update the buildfarm code to deal with
 building and
  testing the VC builds as well. After speaking with Andrew about
 it, it
  seems this shouldn't be all too hard to do. Some required changes
 are
  already made for other reasons (decreasing dependency on external
  tools), others still needed but not started yet.
 
  All files should be uncompressed into a directory named vcbuild
  in the root of the tree. No changes are made to any other files,
 so
  there should be no effect at all on other parts of the system.
 
 
  //Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 Can you think of any places at all where 64-bit would make a difference
 to processing? 64-bit gives you more memory, and on some x86 chips, more
 registers, but that's it.

It would be interesting to look into making int8 and float8 be
pass-by-value datatypes to save palloc overhead on machines where
Datum is going to be 8 bytes wide anyway.  Of course this would
only help queries that use those datatypes extensively, and it
might be an insignificant savings anyhow :-(

For the most part though I think that the main advantage of 64-bit
for a database is the ability to address more memory.  We've been
working slowly towards allowing PG to make better use of large
numbers of shared buffers, for instance.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Bug with initDB under windows 2003

2006-08-21 Thread dror


Hi Magnus,

After trying to unlock the nul device using:

Subinacl /service NULL /grant="authenticated users"=QSEILU

it doesn't solve the problem (even after restarting the machine).

Here is the new output from running "Subinacl /service NULL" after the change.

For Administrator:

==+Service NULL==
/control=0x0
/owner =system
/primary group =system
/audit ace count =1
/aace =everyone SYSTEM_AUDIT_ACE_TYPE-0x2 FAILED_ACCESS_ACE_FLAG-0x80
      FAILED_ACCESS_ACE_FLAG-0x0x80 SERVICE_ALL_ACCESS
/perm. ace count =6
/pace =system ACCESS_ALLOWED_ACE_TYPE-0x0 SERVICE_QUERY_CONFIG-0x1
      SERVICE_QUERY_STATUS-0x4 SERVICE_ENUMERATE_DEPEND-0x8 SERVICE_START-0x10
      SERVICE_STOP-0x20 SERVICE_PAUSE_CONTINUE-0x40 SERVICE_INTERROGATE-0x80
      READ_CONTROL-0x2 SERVICE_USER_DEFINED_CONTROL-0x0100
/pace =builtin\administrators ACCESS_ALLOWED_ACE_TYPE-0x0 SERVICE_ALL_ACCESS
/pace =interactive ACCESS_ALLOWED_ACE_TYPE-0x0 SERVICE_QUERY_CONFIG-0x1
      SERVICE_QUERY_STATUS-0x4 SERVICE_ENUMERATE_DEPEND-0x8
      SERVICE_INTERROGATE-0x80 READ_CONTROL-0x2
      SERVICE_USER_DEFINED_CONTROL-0x0100
/pace =service ACCESS_ALLOWED_ACE_TYPE-0x0 SERVICE_QUERY_CONFIG-0x1
      SERVICE_QUERY_STATUS-0x4 SERVICE_ENUMERATE_DEPEND-0x8
      SERVICE_INTERROGATE-0x80 READ_CONTROL-0x2
      SERVICE_USER_DEFINED_CONTROL-0x0100
/pace =builtin\power users ACCESS_ALLOWED_ACE_TYPE-0x0 SERVICE_QUERY_CONFIG-0x1
      SERVICE_QUERY_STATUS-0x4 SERVICE_ENUMERATE_DEPEND-0x8 SERVICE_START-0x10
      SERVICE_STOP-0x20 SERVICE_PAUSE_CONTINUE-0x40 SERVICE_INTERROGATE-0x80
      READ_CONTROL-0x2 SERVICE_USER_DEFINED_CONTROL-0x0100
/pace =authenticated users ACCESS_ALLOWED_ACE_TYPE-0x0 SERVICE_QUERY_CONFIG-0x1
      SERVICE_QUERY_STATUS-0x4 SERVICE_ENUMERATE_DEPEND-0x8
      SERVICE_INTERROGATE-0x80 READ_CONTROL-0x2
      SERVICE_USER_DEFINED_CONTROL-0x0100
Elapsed Time: 00 00:00:00
Done: 1, Modified 0, Failed 0, Syntax errors 0
Last Done : NULL

Anyway, it is obvious that this is some kind of permission issue. But even if
this command had solved the problem, it would still mean that the installer
shell must run that command before calling initdb (as it is written today), in
order to validate that the user can run initdb; or add a limitation/warning to
the user for the known issue and suggest the workaround;
Or
change the initdb code so that it tests the nul device permissions before
redirecting output there;
Or
fix the code as I suggested (but unfortunately rejected by Tom).

Regards
Dror





Re: [HACKERS] Unable to post to -patches (was: Visual C++ build files)

2006-08-21 Thread Magnus Hagander
Another take on this. Dave convinced me that I had access to svr1, which
it turns out I did, so I managed to check the logs myself. And I find:

Aug 21 10:51:24 postgresql postfix/smtp[52021]: 3F45C9FB2BB:
to=pgsql-patches-p[EMAIL PROTECTED], orig_to=pgsql-patches@postgresql.org,
relay=maia.hub.org[200.46.208.251], delay=1, status=sent (250 2.7.1 Ok,
discarded, id=25835-09 - BANNED: P=p003,L=1,M=multipart/mixed |
P=p002,L=1/2,M=application/x-gzip,T=gz,N=vcbuild.tar.gz | P=p...)


Seems -patches is rejecting any mail with attached .tar.gz files, if I
read that correctly? That's just *wrong* IMNSHO - that's exactly what
-patches is *for* :-) (Assuming I read the log line right, that is -
I can't say I really recognize the message well enough to be sure.)

Is there a list somewhere of what kind of stuff gets discarded without
notification?


Anyway. I've put the patch on
http://www.hagander.net/pgsql/vcbuild.tar.gz. I'll be happy to repost it
to the list once it's fixed, but for now please consider using that link
to review...


//Magnus


 -Original Message-
 From: Magnus Hagander
 Sent: Monday, August 21, 2006 4:00 PM
 To: pgsql-hackers@postgresql.org
 Cc: Marc G. Fournier
 Subject: Unable to post to -patches (was: Visual C++ build files)
 
 One more attempt to send this, now without the attachment. It's
 getting eaten somewhere. I assume it's not the moderation queue,
 because someone would hopefully have gotten to it during the four
 weeks I was gone..
 
 Anyway. Third take on the message itself; if it goes through, could
 someone who has access please look to see where the other two
 emails went? Exact dates in CEST (UTC+2) in the forward headers
 below. For the latest send, I got this in my mailserver logs:
 
 
 Aug 21 15:51:23 altair postfix/smtp[8141]: 221DCF4104: to=pgsql-
 [EMAIL PROTECTED] sql.org, relay=svr1.postgresql.org[200.46.204.71],
 delay=2, status=sent (250 Ok
 : queued as 3F45C9FB2BB)
 
 Contains a postfix queue id that could be tracked.
 
 //Magnus
 
  -Original Message-
  From: Magnus Hagander
  Sent: Monday, August 21, 2006 3:51 PM
  To: pgsql-patches@postgresql.org
  Subject: FW: Visual C++ build files
 
  Argh!
 
  I sent this patch before I left on vacation, but for some reason
 it
  seems it never came through - I can't seem to find it in the
 archives
  :-(
 
  Let's see if I can get it through this time.
 
  //Magnus
 
   -Original Message-
   From: Magnus Hagander
   Sent: Thursday, July 13, 2006 10:32 PM
   To: pgsql-patches@postgresql.org
   Subject: Visual C++ build files
  
   Attached is the current version of my scripts to build the
  complete
   PostgreSQL system using Visual C++. See archives for reasons
  why...
  
   The basic idea is to parse as much information as possible out
 of
  the
   existing Makefiles and configure.in, to make sure we don't add
 a
  huge
   maintenance headache. Most things are parsed, some are not
 (some
  not
   possible, some just too hard IMHO). There's also code that
  replaces
   what configure needs to do, and shellscripts like gen_fmgroids.
  
   It generates solution and project files for Visual C++ 2005,
  which
   should work fine with both the commercial and the free versions
  (some
   optimiser features etc not available in the free one, of
 course,
  but
   that's MS for you). System can be built both from the GUI and
  from the
   commandline.
  
   Currently only a DEBUG build can be built. Naturally, the plan
 is
  to
   allow RELEASE builds as well :-) That part is just not finished
  yet,
   and for now debug builds are really the only thing
 interesting...
  
   Per previous message, this patch will require some further
 work,
  but I
   wanted to get it submitted in good time before feature freeze
 for
   people to look over. My hope is to have it in for 8.2, but most
  likely
   we'll continue to build the release binaries with mingw for one
  more
   release at least. It would be very good to be able to build
  proper
   debug-enabled binaries to help people debug issues, though -
 and
  to
   get it on the buildfarm properly.
  
   So, the plan is to update the buildfarm code to deal with
  building and
   testing the VC builds as well. After speaking with Andrew about
  it, it
   seems this shouldn't be all too hard to do. Some required
 changes
  are
   already made for other reasons (decreasing dependency on
 external
   tools), others still needed but not started yet.
  
   All files should be uncompressed into a directory named
 vcbuild
   in the root of the tree. No changes are made to any other
 files,
  so
   there should be no effect at all on other parts of the system.
  
  
   //Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Optimizer degradation since 8.0

2006-08-21 Thread Tom Lane
Teodor Sigaev [EMAIL PROTECTED] writes:
 Can I tweak something in 8.1 or it's a bug?

It's not a bug, it's an intentional change:

2005-04-24 21:30  tgl

* src/: backend/commands/explain.c,
backend/executor/nodeBitmapIndexscan.c,
backend/executor/nodeIndexscan.c, backend/nodes/copyfuncs.c,
backend/nodes/outfuncs.c, backend/optimizer/path/allpaths.c,
backend/optimizer/path/indxpath.c,
backend/optimizer/path/orindxpath.c,
backend/optimizer/plan/createplan.c,
backend/optimizer/plan/setrefs.c,
backend/optimizer/plan/subselect.c,
backend/optimizer/util/pathnode.c,
backend/optimizer/util/restrictinfo.c,
backend/utils/adt/selfuncs.c, include/executor/nodeIndexscan.h,
include/nodes/execnodes.h, include/nodes/plannodes.h,
include/nodes/relation.h, include/optimizer/paths.h,
include/optimizer/planmain.h, include/optimizer/restrictinfo.h:
Remove support for OR'd indexscans internal to a single IndexScan
plan node, as this behavior is now better done as a bitmap OR
indexscan.  This allows considerable simplification in
nodeIndexscan.c itself as well as several planner modules concerned
with indexscan plan generation.  Also we can improve the sharing of
code between regular and bitmap indexscans, since they are now
working with nigh-identical Plan nodes.

Your example shows a case where a plain indexscan's zero startup cost
is very useful.  I'm disinclined to revert the above patch completely,
because the OR'd-indexscan code was a mess, but perhaps we could
consider ways to make bitmap scans start delivering tuples sooner
than after finishing the whole index scan.  Not for 8.2 though :-(

regards, tom lane



Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread Douglas McNaught
[EMAIL PROTECTED] writes:

 I believe the answer is no. No or few 64-bit optimization possibilities
 have been chased down, probably because some or many of these would:

 1) require significant re-architecture

 2) reduce the performance in a 32-bit world

Honestly, I think the main optimizations happen automatically: the
compiler uses the extra registers (if present) and pointers in LP64
mode are automatically 64-bit, which allows much larger memory areas.
That's probably 95% of the win right there.  What other
optimizations need doing?

People have been running PG with big databases on 64-bit iron for
years; don't you think any glaring infelicities would have been
addressed?

 It's a question that only half interests me. As with most projects, I
 don't think the projects are ready to re-architect for this
 purpose.

What re-architecting would be needed?

-Doug



Re: [HACKERS] Replication

2006-08-21 Thread Joshua D. Drake

Fujii Masao wrote:

Stefan Kaltenbrunner wrote:

It is however async replication so you can lose data committed on the
master but not yet replicated to the slaves in case you lose the master
completely.


Yes, this is a weak point of Slony-I, I think.
Most systems will not permit committed data to be lost, so its use is
limited.


Wanna bet?

It is very, very common to have asynchronous replication. I would say
the need for synchronous is far more limited (although more strongly desired).


Joshua D. Drake






IMO, log-based replication is needed also for PostgreSQL just like 
MySQL.


Well, I had misunderstood MySQL. Its replication is also asynchronous.

regards;





--

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





Re: [HACKERS] pg_dump versus SERIAL, round N

2006-08-21 Thread Morus Walter
On Sat, 2006-08-19 at 16:01 -0400, Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  The only complaint I can see is that someone
  who wants pg_dump to dump out SERIAL so it appears just as he created
  the table, doesn't get that.  Could we have pg_dump do that if the
  sequences all match the creation (weren't modified)?
 
 pg_dump's output is often very far from what you typed in anyway.
 My response to that person would be "get a life" ...
 
As the one who brought up the issue (again) I'd like to add that the
suggested solution sounds good to me.

Thanks a lot for your great work on postgres. You rock!

Morus





Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread Alexander Kirpa
 WRT 64-bit and Postgres, it depends on the CPU as to whether you
 see a simple performance benefit.  On the Opteron you will see a
 benefit when doing CPU bound work.  When doing the CPU portion, the
 additional registers of the Opteron running in 64-bit mode are used
 by the compiler to produce a 20-30% boost in performance.  On the
 Xeon in 64-bit mode, the same regions of execution will slow down
 by about 5%.

 Postgres benefits automatically from the larger memory addressing
 of the 64-bit kernel by using the larger I/O cache of Linux.

The main benefit of Postgres in 64-bit mode is possible only on a dedicated
DB server with more than 3GB of RAM, using most of that RAM for
shared buffers and avoiding the constant movement of buffers between the OS
cache and shared memory. On a system with less than 2-3GB of RAM it is
difficult to find a serious gain in performance.

Best regards,
 Alexander Kirpa




Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread mark
On Mon, Aug 21, 2006 at 10:23:16AM -0400, Douglas McNaught wrote:
 [EMAIL PROTECTED] writes:
  I believe the answer is no. No or few 64-bit optimization possibilities
  have been chased down, probably because some or many of these would:
  1) require significant re-architecture
  2) reduce the performance in a 32-bit world
 Honestly, I think the main optimizations happen automatically: the
 compiler uses the extra registers (if present) and pointers in LP64
 mode are automatically 64-bit, which allows much larger memory areas.
 That's probably 95% of the win right there.  What other
 optimizations need doing?

Compilers are getting smarter, but code that has been profiled and
hand-tuned for 32 bits does not automatically get compiled, by a 64-bit
compiler, into code that has been profiled and hand-tuned for 64 bits.

An example of this is in how algorithms are implemented. For
Intel-style processors at least, using array offsets is cheaper than
using many pointers. For 64-bit processors, 32-bit array offsets would
use less memory.

For re-architecture, the first thing that came to mind was Datum
increasing in size. Copy by value instead of allocating a pointer,
and passing the pointer. The pointer will be 64-bits, so Datum
could support 64-bits as well. Tom caught this, but he did point
out what I missed. This would only benefit data types that are
64-bits in length, which are still likely uncommon (most people
would use serial, not bigserial).

Has anybody taken any of the 64-bit processor optimization manuals,
and walked through it, cross referencing it against bottleneck
programs in PostgreSQL? To counter this, I suggest to you that almost
every attempt to optimize PostgreSQL for performance likely
considered variations in algorithms measured on a 32-bit processor,
finally selecting the version that was simplest, and best performing,
on a 32-bit processor. This is normal evolution for a piece of
software. It is naturally optimized for the processor that is most
used by the users.

I'm not the expert on the matter. I don't see evidence that anybody
has taken this route though.

 People have been running PG with big databases on 64-bit iron for
 years; don't you think any glaring infelicities would have been
 addressed?

I doubt there are 'glaring infelicities'. I doubt any changes required
to provide a noticeable speed improvement would be one liners. They will
not be obvious. They will require a complete understanding of the many
architectural differences between common 32-bit processors, and common
64-bit processors. Few have this information in their head. I don't.
I suspect that anybody who claims that the only difference is the amount
of addressable memory, and the number of registers doesn't. :-)

  It's a question that only half interests me. As with most projects, I
  don't think the projects are ready to re-architect for this
  purpose.
 What re-architecting would be needed?

I'm asking that it be admitted that it has not been looked at. Not
seriously. Nor is this unique to PostgreSQL. I expect the performance
for Linux applications in general to slowly improve on 64-bit
processors as more and more people begin to use it. I recall a few
threads on other mailing lists where the primary developers admitted
to not having ready access to a 64-bit machine. Until they do, it is
unlikely that these applications will be hand-tuned for 64-bit.

I'm not claiming that I have the answers. I am claiming that a few of
you share this weakness, and that an assumption that PostgreSQL is
designed for 64-bit would be incorrect. :-)

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/




Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread mark
On Mon, Aug 21, 2006 at 12:00:58PM +0300, Alexander Kirpa wrote:
  WRT 64-bit and Postgres, it depends on the CPU as to whether you
  see a simple performance benefit.  On the Opteron you will see a
  benefit when doing CPU bound work.  When doing the CPU portion, the
  additional registers of the Opteron running in 64-bit mode are used
  by the compiler to produce a 20-30% boost in performance.  On the
  Xeon in 64-bit mode, the same regions of execution will slow down
  by about 5%.
 
  Postgres benefits automatically from the larger memory addressing
  of the 64-bit kernel by using the larger I/O cache of Linux.
 
 The main benefit of Postgres in 64-bit mode is possible only on a dedicated
 DB server with more than 3GB of RAM, using most of that RAM for
 shared buffers and avoiding the constant movement of buffers between the OS
 cache and shared memory. On a system with less than 2-3GB of RAM it is
 difficult to find a serious gain in performance.

This is the main difference between PostgreSQL today - designed for
32-bit - when recompiled with a 64-bit compiler.

The additional registers are barely enough to counter the increased
cost of processing in 64-bits.

Cheers,
mark





Re: [HACKERS] snapshot replication with pg_dump

2006-08-21 Thread Paul Silveira

Can you do that if you have functions tied to the table?  Also would that be
in a transaction?  I need to allow seamless usability to the data while I'm
doing this snapshot.  Not sure the -c option (Clean Drop schema) would work
here.  I want to only drop a table and not the entire db so that I'm not
moving data that doesn't need to be moved.

The goal is to snapshot only the data in tables that has changed.  I would like
to wrap that in a transaction.  

-Paul






Martijn van Oosterhout wrote:
 
 On Mon, Aug 21, 2006 at 06:40:22AM -0700, Paul Silveira wrote:
 
 Yes the needs are simple.  I was also thinking about using DBI.  The most
 important thing to me is that everything is kept in a transaction so that
 users can still read the data while I'm snapshotting it at the same time. 
 If my transaction is isolated from all the reads happening, then it
 shouldn't matter how long it takes for me to move the data over (granted,
 that will increase latency, but in this project that's not really too
 sensitive) and it will be transparent to the end users.  
 
 Looks to me like the -c option to pg_dump should do what you want.
 
 snip
 
 Have a nice day,
 -- 
 Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to
 litigate.
 
 





Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread Martijn van Oosterhout
On Mon, Aug 21, 2006 at 10:46:56AM -0400, [EMAIL PROTECTED] wrote:
 Compilers are getting smarter, but having code profiled, and hand tuned
 for 32-bits does not automatically get compiled with a 64-bit compiler,
 to code that has been profile, and hand tuned for 64-bit.

I don't see any evidence that anyone has done any kind of hand-tuning,
not for 32-bit, not for 64-bit. That's the compiler's job really.

That's said, if you want to go hand-tuning, go for the low-hanging
fruit first: using likely()/unlikely() in often used code paths
(heap_get_attr for example), find the parts of the code that could
benefit from strict-aliasing, etc.

In general we avoid architecture-specific code because the cost of
maintenance outweighs the benefits.

 For re-architecture, the first thing that came to mind was Datum
 increasing in size. Copy by value instead of allocating a pointer,
 and passing the pointer. The pointer will be 64-bits, so Datum
 could support 64-bits as well. Tom caught this, but he did point
 out what I missed. This would only benefit data types that are
 64-bits in length, which are still likely uncommon (most people
 would use serial, not bigserial).

A Datum must be 64-bit on a 64-bit arch: how else is it going to hold
the pointer for a pass-by-ref value?

On the whole, PostgreSQL is still in the phase where we're trying to
find the best algorithms. When that's done (if ever), we can start
worrying about processor optimisations...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [HACKERS] Replication

2006-08-21 Thread AgentM


On Aug 21, 2006, at 10:30 , Joshua D. Drake wrote:


Fujii Masao wrote:

Stefan Kaltenbrunner wrote:
It is however async replication so you can lose data committed on the
master but not yet replicated to the slaves in case you lose the master
completely.

Yes, this is a weak point of Slony-I, I think.
Most systems will not permit committed data to be lost, so its use
is limited.


Wanna bet?

It is very, very common to have asynchronous replication. I would
say the need for synchronous is far more limited (although more
strongly desired).


I would imagine that multi-master synchronous replication would be  
fairly trivial to implement with 2PC and wal-shipping available, no?




Re: [HACKERS] Replication

2006-08-21 Thread Joshua D. Drake


It is very, very common to have asynchronous replication. I would say
the need for synchronous is far more limited (although more strongly desired).


I would imagine that multi-master synchronous replication would be 
fairly trivial to implement with 2PC and wal-shipping available, no?


Trivial? I would say... no. There is a reason it hasn't been done yet, 
and a reason why CMD (for example) has zero plans to even try.


Sincerely,

Joshua D. Drake








Re: [HACKERS] Replication

2006-08-21 Thread Markus Schiltknecht

Hi,

AgentM wrote:
I would imagine that multi-master synchronous replication would be 
fairly trivial to implement with 2PC and wal-shipping available, no?


Yes, that could be done. And AFAIK either pgpool or PgCluster (1) tries to
do sync, multi-master replication that way.


The problem is that such an implementation is very network intensive and 
scales very badly for writing transactions. If you're interested in a 
good sync, multi-master replication algorithm, you might want to have a 
look at what Slony-II [1] or Postgres-R [2] try to do. I also recommend 
reading "Don't be lazy, be consistent" by Bettina Kemme [3].


Regards

Markus

[1]: http://www.slony2.org/wiki/index.php?title=Documentation
[2]: http://www.postgres-r.org/
[3]: http://www.cs.mcgill.ca/~kemme/papers/vldb00.html



Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread mark
On Mon, Aug 21, 2006 at 05:13:58PM +0200, Martijn van Oosterhout wrote:
 On the whole, PostgreSQL is still in the phase where we're trying to
 find the best algorithms. When that's done (if ever), we can start
 worrying about processor optimisations...

I don't disagree. :-)

You claimed that PostgreSQL is not tuned to a specific processor, and
relies on the kernel, the compiler, and libc to perform some tuning. I
agree with this as well.

Cheers,
mark





Re: [HACKERS] Replication

2006-08-21 Thread Stefan Kaltenbrunner
Fujii Masao wrote:
 Stefan Kaltenbrunner wrote:
 It is however async replication so you can lose data committed on the
 master but not yet replicated to the slaves in case you lose the master
 completely.
 
 Yes, this is a weak point of Slony-I, I think.
 Most systems will not permit committed data to be lost, so its use is
 limited.

Not sure I agree with "most systems" here - a _LOT_ of use cases
actually want async (and note that slony1 can do a controlled failover
without losing any transactions).

Nevertheless there are also arguments for having sync replication, but
calling slony1 insufficient in that regard is a bit much, since it is
actually designed to be async and does quite a good job at it.


Stefan



[HACKERS] Fix linking of OpenLDAP libraries

2006-08-21 Thread Albe Laurenz
I have realized that my modifications in configure.in and
src/interfaces/libpq/Makefile to link libpq against
OpenLDAP are buggy.

Here is a proposed patch to fix it.

I write this to pgsql-hackers too because I want to share
the difficulty I'm facing - maybe somebody has a better
idea.

To handle thread safety, OpenLDAP comes with a second
library libldap_r. The thread safe API is identical to
the normal API, the difference is that you must link
against libldap_r instead of libldap to get thread safety.

These are my problems:
- While libpq should be thread safe when ./configured with
  --enable_thread_safety, the backend should be linked
  against the normal libldap.
- At least on RedHat Linux, you need to link against the
  pthread library too if you want to link against libldap_r,
  because the latter has unresolved dependencies.

My solution:
- If thread safety is not desired, I link against libldap.
  No problem.
- If thread safety is desired, I first try to link against
  libldap_r without the thread libraries, and only if that
  fails add the thread libraries to LIBS.
- I tweak src/backend/Makefile so that it strips libldap_r
  and the thread libs from LIBS and replace it with
  libldap if necessary.

That means that if --enable_thread_safety and --with-ldap
are both specified, all executables except 'postgres' will
be linked against libldap_r (and the thread libs, if
necessary).

I tested my patch on RedHat Enterprise Linux 3 and AIX 5.3.

The behaviour for Windows (use the native WLDAP32.DLL)
is unchanged.

Yours,
Laurenz Albe


ldaplink.patch
Description: ldaplink.patch



Re: [HACKERS] [PATCHES] [PATCH] Provide 8-byte transaction IDs to user level

2006-08-21 Thread Tom Lane
Marko Kreen [EMAIL PROTECTED] writes:
 Following patch exports 8 byte txid and snapshot to user level
 allowing its use in regular SQL.  It is based on Slony-I xxid
 module.  It provides special 'snapshot' type for snapshot but
 uses regular int8 for transaction ID's.

Per discussion, I've applied a patch that just implements tracking of
XID epoch in checkpoints.  This should be sufficient to let xxid be
handled as an external module.

regards, tom lane



Re: [HACKERS] [PATCHES] [PATCH] Provide 8-byte transaction IDs to

2006-08-21 Thread Marko Kreen

On 8/21/06, Tom Lane [EMAIL PROTECTED] wrote:

Bruce Momjian [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 (I wouldn't do it like this though --- TransactionIdAdvance itself is
 the place to bump the secondary counter.)

 Agreed.

I reconsidered after trying to do it that way --- although fixing
TransactionIdAdvance itself to maintain a 2-word counter isn't hard,
there are a whole lot of other places that can advance nextXid,
mostly bits like this in WAL recovery:

    /* Make sure nextXid is beyond any XID mentioned in the record */
    max_xid = xid;
    for (i = 0; i < xlrec->nsubxacts; i++)
    {
        if (TransactionIdPrecedes(max_xid, sub_xids[i]))
            max_xid = sub_xids[i];
    }
    if (TransactionIdFollowsOrEquals(max_xid,
                                     ShmemVariableCache->nextXid))
    {
        ShmemVariableCache->nextXid = max_xid;
        TransactionIdAdvance(ShmemVariableCache->nextXid);
    }

We could hack all these places to know about maintaining an XID-epoch
value, but it's not looking like a simple single-place-to-touch fix :-(


As I was asked to rework the patch, I planned to use
TransactionIdAdvance(ShmemVariableCache), although that would
be conceptually ugly.  The Right Thing for this approach would be
to have a special struct, but that would touch half the codebase.

That was also the reason I did not want to go that path.


There's still a lot more cruft in the submitted patch than I think
belongs in core, but I'll work on extracting something we can apply.


The only cruft I see is the snapshot on-disk compression and maybe
the pg_sync_txid() functionality.  Dropping the compression would not
matter much, snapshots would waste space, but at least for our
usage it would not be a problem.  The rest of the functions are all
required for efficient handling.

Dropping pg_sync_txid() would be a loss, because it means that the
user cannot just dump and restore the data and continue where
it left off.  Maybe it's not a problem for replication, but generic
queueing would need delicate juggling when restoring a backup.

Although I must admit the pg_sync_txid() is indeed ugly part
of the patch, and it creates new mode for failure - wrapping
epoch.  So I can kind of agree for removing it.

I hope you don't mean that none of the user-level functions belong
in core.  It's not like there are several ways to expose the info.
And it's not like there are much more interesting ways of using
the long xid at the C level.  Having the long xid available at the SQL
level means that efficient async replication can be done without any
use of C.

Now that I am back from vacation I can do some coding myself,
if you give hints what needs rework.

--
marko



Re: [HACKERS] [PATCHES] [PATCH] Provide 8-byte transaction IDs to

2006-08-21 Thread Tom Lane
Marko Kreen [EMAIL PROTECTED] writes:
 Dropping the pg_sync_txid() would be loss, because that means that
 user cannot just dump and restore the data and just continue where
 it left off.  Maybe its not a problem for replication but for generic
 queueing it would need delicate juggling when restoring backup.

I'm not following the point here.  Dump and restore has never intended
to preserve the transaction counter, so why should it preserve
high-order bits of the transaction counter?

There is another problem with pg_sync_txid, too: because it is willing
to advance the extended XID counter in multiples of 4G XIDs, it turns
wraparound of the extended counter from a never-will-happen scenario
into something that could happen in a poorly-managed installation.
If you've got to be prepared to cope with wraparound of the extended
counter, then what the heck is the point at all?  You might as well just
work with XIDs as they stand.

So I think pg_sync_txid is a bad idea.  In the patch as committed,
anyone who's really intent on munging the epoch can do it with
pg_resetxlog, but there's not a provision for doing it short of that.

regards, tom lane



Re: [HACKERS] Replication

2006-08-21 Thread Jeff Davis
On Mon, 2006-08-21 at 11:33 -0400, AgentM wrote:
 I would imagine that multi-master synchronous replication would be  
 fairly trivial to implement with 2PC and wal-shipping available, no?
 

How does WAL shipping help synchronous replication?

Regards,
Jeff Davis




Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread Tom Lane
[EMAIL PROTECTED] writes:
 What re-architecting would be needed?

 I'm asking that it be admitted that it has not been looked at. Not
 seriously. Nor is this unique to PostgreSQL. I expect the performance
 for Linux applications in general to slowly improve on 64-bit
 processors as more and more people begin to use it. I recall a few
 threads on other mailing lists where the primary developers admitted
 to not having ready access to a 64-bit machine. Until they do, the
 chance that these applications are hand-tuned for 64-bit is unlikely.

What you seem to be asking for is that we engage in a lot of
machine-specific tuning for some particular processor or other.
That has not happened and will not happen.  This project believes
in portable code, not optimized for Xeon-of-the-month code.

regards, tom lane



Re: [HACKERS] [PATCHES] [PATCH] Provide 8-byte transaction IDs to

2006-08-21 Thread Marko Kreen

On 8/21/06, Tom Lane [EMAIL PROTECTED] wrote:

Marko Kreen [EMAIL PROTECTED] writes:
 Dropping the pg_sync_txid() would be loss, because that means that
 user cannot just dump and restore the data and just continue where
 it left off.  Maybe its not a problem for replication but for generic
 queueing it would need delicate juggling when restoring backup.

I'm not following the point here.  Dump and restore has never intended
to preserve the transaction counter, so why should it preserve
high-order bits of the transaction counter?


It guarantees that any newly issued large txids will be larger
than the existing ones in tables, so code can depend on monotonic
growth.


There is another problem with pg_sync_txid, too: because it is willing
to advance the extended XID counter in multiples of 4G XIDs, it turns
wraparound of the extended counter from a never-will-happen scenario
into something that could happen in a poorly-managed installation.
If you've got to be prepared to cope with wraparound of the extended
counter, then what the heck is the point at all?  You might as well just
work with XIDs as they stand.


Indeed.  I also don't like that scenario.


So I think pg_sync_txid is a bad idea.  In the patch as committed,
anyone who's really intent on munging the epoch can do it with
pg_resetxlog, but there's not a provision for doing it short of that.


I like it.  It is indeed better than having pg_dump issue a function
call.  This is fully satisfactory.

--
marko
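The extended-XID scheme under discussion, a 32-bit epoch prepended to the 32-bit XID, can be sketched as below. The function name is illustrative, not an actual PostgreSQL API:

```python
# Sketch of the 8-byte txid construction discussed in this thread:
# the high 32 bits hold an "epoch" that is bumped each time the 32-bit
# XID counter wraps, so the combined value grows monotonically.
# Illustrative names only, not PostgreSQL internals.

XID_BITS = 32

def txid_from_epoch_xid(epoch: int, xid: int) -> int:
    """Combine epoch and xid into one 64-bit, monotonically growing value."""
    return (epoch << XID_BITS) | xid

# Before wraparound: epoch 0, xid near the 32-bit limit.
before = txid_from_epoch_xid(0, 2**32 - 1)
# After wraparound: epoch bumped to 1, xid restarts low.
after = txid_from_epoch_xid(1, 3)

# Even though the raw xid went "backwards", the 8-byte txid still grows.
assert after > before
```

This is why bumping the epoch (e.g. via pg_resetxlog) is enough to keep new txids above any values already stored in tables.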



Re: [HACKERS] [PATCHES] [PATCH] Provide 8-byte transaction IDs to

2006-08-21 Thread Tom Lane
Marko Kreen [EMAIL PROTECTED] writes:
 On 8/21/06, Tom Lane [EMAIL PROTECTED] wrote:
 I'm not following the point here.  Dump and restore has never intended
 to preserve the transaction counter, so why should it preserve
 high-order bits of the transaction counter?

 It guarantees that any newly issued large txids will be larger
 than the existing ones in tables, so code can depend on monotonic
 growth.

Within a single installation, sure, but I don't buy that we ought to try
to preserve XIDs across installations.

regards, tom lane



Re: [HACKERS] Replication

2006-08-21 Thread Markus Schiltknecht

Jeff Davis wrote:
  How does WAL shipping help synchronous replication?

The WAL is written _before_ commit, logging all the changes the 
transaction wants to write to the disk. This makes it look very similar 
to what is needed for synchronous replication.


Instead of waiting for confirmation from the disk, that your WAL has 
been written, you would have to wait from the network to confirm that 
the changes have been delivered via the network.


Regards

Markus
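A toy sketch of that idea: COMMIT is reported to the client only after the WAL record is durable locally and acknowledged over the network. Everything here is an in-memory stand-in, not PostgreSQL internals:

```python
# Toy model of synchronous WAL shipping: COMMIT is reported to the client
# only after the WAL record is durable locally *and* acknowledged by the
# replica. In-memory stand-ins, not PostgreSQL code.

class Replica:
    def __init__(self) -> None:
        self.wal: list[str] = []

    def receive_and_ack(self, record: str) -> bool:
        self.wal.append(record)   # replica writes (and would fsync) the record
        return True               # ...then acknowledges over the network

class Primary:
    def __init__(self, replica: Replica) -> None:
        self.wal: list[str] = []
        self.replica = replica

    def commit(self, record: str) -> bool:
        self.wal.append(record)                       # local WAL write + fsync
        acked = self.replica.receive_and_ack(record)  # block until the ack arrives
        return acked                                  # only now report COMMIT

primary = Primary(Replica())
assert primary.commit("xlog record 1")
assert primary.replica.wal == primary.wal   # replica saw everything we committed
```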




Re: [HACKERS] ISBN/ISSN/ISMN/EAN13 module

2006-08-21 Thread Bruce Momjian

Do we want to replace our /contrib/isbn with this, or have it pgfoundry?

---

Jeremy Kronuz wrote:
 I worked on this ISBN/ISSN/ISMN/EAN13 module more than a year
 ago, and I was wondering if it could be made official. I honestly think
 it's better than the ISBN/ISSN currently included in the official
 release; plus mine would also probably support UPC codes, and it already
 supports the new ISBN13 codes.
 
 Check my old post: New ISBN/ISSN/ISMN/EAN13 module. at
 http://archives.postgresql.org/pgsql-hackers/2004-11/msg00153.php
 
 In that post I explain what the module does... I was also describing
 some problems I had, but the module is working now.
 
 Please, share your thoughts.  Kronuz.
 
 _
 Express yourself instantly with MSN Messenger! Download today it's
 FREE!  http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
--
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Replication

2006-08-21 Thread Alvaro Herrera
Markus Schiltknecht wrote:
 Jeff Davis wrote:
   How does WAL shipping help synchronous replication?
 
 The WAL is written _before_ commit, logging all the changes the 
 transaction wants to write to the disk. This makes it look very similar 
 to what is needed for synchronous replication.
 
 Instead of waiting for confirmation from the disk, that your WAL has 
 been written, you would have to wait from the network to confirm that 
 the changes have been delivered via the network.

But the confirmation that needs to come is that the WAL changes have
been applied (fsync'ed), so the performance will be terrible.  So bad,
that I don't think anyone will want to use such a replication system ...

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Altering view ownership doesn't work ...

2006-08-21 Thread Bruce Momjian

Reminding folks this bug is still open.

---

Tom Lane wrote:
 ... because nowhere does it update the checkAsUser fields in the
 view's query to be the OID of the new owner.  This means that
 permission checks about whether the view can access its underlying
 tables will still be done as the old owner.  An example:
 
 regression=# create user u1;
 CREATE ROLE
 regression=# create user u2;
 CREATE ROLE
 regression=# \c - u1
 You are now connected to database regression as user u1.
 regression= create table t1(f1 int);
 CREATE TABLE
 regression= create view v1 as select * from t1;
 CREATE VIEW
 regression= grant select on v1 to u2;
 GRANT
 
 -- at this point u2 can select from v1 but not directly from t1
 
 regression= \c - postgres
 You are now connected to database regression as user postgres.
 regression=# alter table v1 owner to u2;
 ALTER TABLE
 regression=# \c - u2
 You are now connected to database regression as user u2.
 regression= select * from v1;
  f1 
 
 (0 rows)
 
 -- this is WRONG, u2 should not have any ability to select from t1
 
 The same problem applies to all rules, really, not only a view's
 ON SELECT rule.
 
 This is particularly bad because pg_dump is relying heavily on
 ALTER OWNER these days.  After a dump/restore, it is likely that
 every view's original owner will be a superuser, and thus that
 all permission checking is effectively disabled for accesses
 from views.  It wouldn't be too much of a stretch to call that
 a security loophole.
 
 I can think of two basic ways to fix this:
 
 1. Add a bunch of code to ALTER OWNER to update every rule attached to
 the target table.
 
 2. Run setRuleCheckAsUser during rule load rather than rule store.
 
 #2 is a lot simpler, and would fix the problem for existing broken rules
 whereas #1 would not, so I'm kind of inclined to go with that.  I doubt
 there'd be any meaningful performance hit --- parsing the stored form
 of a rule is relatively expensive anyway, so we cache the results.
 
 Comments?
 
   regards, tom lane
 

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread mark
On Mon, Aug 21, 2006 at 01:05:13PM -0400, Tom Lane wrote:
 [EMAIL PROTECTED] writes:
  What re-architecting would be needed?
 
  I'm asking that it be admitted that it has not been looked at. Not
  seriously. Nor is this unique to PostgreSQL. I expect the performance
  for Linux applications in general to slowly improve on 64-bit
  processors as more and more people begin to use it. I recall a few
  threads on other mailing lists where the primary developers admitted
  to not having ready access to a 64-bit machine. Until they do, the
  chance that these applications are hand-tuned for 64-bit is unlikely.
 
 What you seem to be asking for is that we engage in a lot of
 machine-specific tuning for some particular processor or other.
 That has not happened and will not happen.  This project believes
 in portable code, not optimized for Xeon-of-the-month code.

Hi Tom.

I'm asking for a statement such as the above, that you provide. :-)

The question was whether PostgreSQL was optimized for 64-bit Linux.
The answer is not specifically.

The answer is quite acceptable to me. It's not the original answer that
was given to the original poster though... :-)

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/




Re: [HACKERS] PostgreSQL on 64 bit Linux

2006-08-21 Thread Tom Lane
[EMAIL PROTECTED] writes:
 On Mon, Aug 21, 2006 at 01:05:13PM -0400, Tom Lane wrote:
 What you seem to be asking for is that we engage in a lot of
 machine-specific tuning for some particular processor or other.

 The question was whether PostgreSQL was optimized for 64-bit Linux.
 The answer is not specifically.

Well, my notion of optimizing for 64 bit seems to have little to do
with yours.  To me, stuff like making sure we can use lots of buffers
effectively is a 64-bit optimization.  What you seem to have in mind is
assembly-code-level optimization for specific processors.

regards, tom lane



Re: [HACKERS] [PATCHES] [PATCH] Provide 8-byte transaction IDs to

2006-08-21 Thread Marko Kreen

On 8/21/06, Tom Lane [EMAIL PROTECTED] wrote:

Marko Kreen [EMAIL PROTECTED] writes:
 On 8/21/06, Tom Lane [EMAIL PROTECTED] wrote:
 I'm not following the point here.  Dump and restore has never intended
 to preserve the transaction counter, so why should it preserve
 high-order bits of the transaction counter?

 It guarantees that any newly issued large txids will be larger
 than the existing ones in tables, so code can depend on monotonic
 growth.

Within a single installation, sure, but I don't buy that we ought to try
to preserve XIDs across installations.


I think you are right in the respect that we should not do it
automatically.

But now that the long txids may end up in data tables, a user may need
to dump and restore them into another installation.  If the application
is, e.g., a Slony-like queue that depends on txid growth, the user needs
to be able to bump the epoch, or the application must provide its own
migration support.  With neither, he basically has to extract the old
contents by hand (as the app would not work reliably) and reset everything.

Probably the right thing would be for the application to provide a
we moved, fix everything function.  But bumping the epoch is such a
simple way of fixing it that it should still be available.

And pg_resetxlog is fine for that.  Especially as using it signals
It's dangerous what you are doing!

--
marko



Re: [HACKERS] Replication

2006-08-21 Thread Gregory Maxwell

On 8/21/06, Alvaro Herrera [EMAIL PROTECTED] wrote:

But the confirmation that needs to come is that the WAL changes have
been applied (fsync'ed), so the performance will be terrible.  So bad,
that I don't think anyone will want to use such a replication system ...


Okay. I give up... Why is waiting for fsync on a fast local network
which takes 15us to send a message (infiniband is cheap..) an
unimaginable delay when we tolerate a local 8ms fsync delay on systems
without writeback cache?
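The back-of-envelope arithmetic behind this argument: when a local commit already pays about 8 ms for an fsync, a roughly 15 us network hop barely moves the total. The figures below are the ones quoted in the thread, not benchmarks, and the remote fsync is assumed to overlap the local one:

```python
# Latency figures as quoted in the thread (assumptions, not measurements).
local_fsync_s = 8e-3    # local fsync on a disk without writeback cache
network_hop_s = 15e-6   # one message on a fast interconnect (e.g. InfiniBand)

local_commit = local_fsync_s
# Send the WAL record, let the remote fsync overlap the local one,
# then wait for the acknowledgement: two network hops of extra latency.
replicated_commit = local_fsync_s + 2 * network_hop_s

relative_overhead = replicated_commit / local_commit - 1
assert relative_overhead < 0.01   # well under 1% added commit latency
```

If the remote fsync cannot overlap the local one, the commit roughly doubles instead, which is closer to Alvaro's objection.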



Re: [HACKERS] Replication

2006-08-21 Thread Jeff Davis
On Mon, 2006-08-21 at 19:42 +0200, Markus Schiltknecht wrote:
 Jeff Davis wrote:
How does WAL shipping help synchronous replication?
 
 The WAL is written _before_ commit, logging all the changes the 
 transaction wants to write to the disk. This makes it look very similar 
 to what is needed for synchronous replication.
 
 Instead of waiting for confirmation from the disk, that your WAL has 
 been written, you would have to wait from the network to confirm that 
 the changes have been delivered via the network.

Synchronous replication (to me) means that the data has been written to
permanent storage on all masters and all slaves before any master or
slave reports a successful COMMIT. Are you suggesting that you ship the
WAL over the network, wait for it to be written to the slave, and then
report a successful commit?

Regards,
Jeff Davis






Re: [HACKERS] Replication

2006-08-21 Thread D'Arcy J.M. Cain
On Mon, 21 Aug 2006 14:46:05 -0400
Gregory Maxwell [EMAIL PROTECTED] wrote:
 On 8/21/06, Alvaro Herrera [EMAIL PROTECTED] wrote:
  But the confirmation that needs to come is that the WAL changes have
  been applied (fsync'ed), so the performance will be terrible.  So bad,
  that I don't think anyone will want to use such a replication system ...
 
 Okay. I give up... Why is waiting for fsync on a fast local network
 which takes 15us to send a message (infiniband is cheap..) an
 unimaginable delay when we tolerate a local 8ms fsync delay on systems
 without writeback cache?

OK, that solves your problem.  How about my problem where replication
has to happen on servers in three countries on two continents and
thousands of updates a second have to happen in less than 10ms? This is
the critical issue with replication - one size does not fit all.
Synchronous replication, in particular, fits almost no one.

My experience is that any replication needs to be based on your business
rules which will vary widely.

-- 
D'Arcy J.M. Cain darcy@druid.net |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.



Re: [HACKERS] Replication

2006-08-21 Thread Markus Schiltknecht

Alvaro Herrera wrote:

But the confirmation that needs to come is that the WAL changes have
been applied (fsync'ed), so the performance will be terrible.  So bad,
that I don't think anyone will want to use such a replication system ...


Yeah, that's the big problem of sync, multi-master replication. IMHO the 
key to high-performance sync, multi-master replication is to minimize 
the effort needed to get that confirmation.


A hard drive has the built-in functionality to confirm what has been 
written (and hopefully it does not lie about that). A cluster does not 
have this ability. Now, what does it mean to get a confirmation from the 
cluster at all? First of all, you have to be sure the cluster has 
received your message, then you also need to be sure it can apply your 
changes (the so called WriteSet).


Unfortunately 'the cluster' consists of many independent machines. If 
you wait for every single machine in the cluster to confirm being able 
to apply your WriteSet, you won't get beyond single-machine performance 
for writing transactions.


Postgres-R uses a Group Communication System to get very fast 
'confirmation' by optimizing the logic a little: like with patches, you 
can be sure a WriteSet can be applied if you try to apply it to the same 
base. So Postgres-R uses totally ordered communication to ensure all 
transactions are processed in the very same order on all nodes. Another 
optimization is buried in the depth of the GCS: it only guarantees that 
the message you sent _will_be_delivered_. So thanks to the GCS you don't 
have to wait for all other nodes, but only for the GCS to confirm that 
your message will be delivered on the other nodes. Of course the GCS 
also needs to send messages around to be able to confirm that, but GCSes 
are designed to do exactly that very efficiently.


I hope to have brought some light to the sync, multi-master replication 
problem. Please ask if you have more questions. I propose to continue 
that discussion on the Postgres-R mailing list I have set up.


Regards

Markus
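The total-order property described above can be illustrated with a small simulation: as long as every node applies writesets in the sequencer's order, the replicas converge to the same state. This is a sketch of the idea, not Postgres-R code:

```python
# Toy illustration of totally ordered delivery: a single global sequence
# stands in for the GCS. Each node applies writesets strictly in that
# order, so all nodes deterministically reach the same state.
# A simulation of the concept, not Postgres-R code.

def apply_writeset(state: dict, writeset: dict) -> None:
    state.update(writeset)  # apply row changes; last write per key wins

# Writesets from different origin nodes, already totally ordered by the GCS.
total_order = [
    {"row1": "a"},   # from node A
    {"row2": "b"},   # from node B
    {"row1": "c"},   # from node A again; the ordering decides the winner
]

node_states = [{}, {}, {}]          # three replicas
for node in node_states:
    for ws in total_order:          # every node sees the same order
        apply_writeset(node, ws)

# All replicas converge to the same state.
assert all(n == node_states[0] for n in node_states)
```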



Re: [HACKERS] [PATCHES] psql 'none' as a HISTFILE special case

2006-08-21 Thread Tom Lane
Neil Conway [EMAIL PROTECTED] writes:
 On Mon, 2006-08-21 at 19:27 +0300, Martin Atukunda wrote:
 If readline is used by psql, a history file is automatically used.
 This patch adds the special file name 'none', which if set as the
 HISTFILE parameter, will cause psql not to use the history file.

 I think it would be cleaner to use a separate \set variable to control
 whether a history file is written, rather than needlessly overloading
 the meaning of HISTFILE.

Why is this useful at all?  There's already the -n (don't use readline)
switch.

regards, tom lane



Re: [HACKERS] [PATCHES] psql 'none' as a HISTFILE special case

2006-08-21 Thread Bruce Momjian
Tom Lane wrote:
 Neil Conway [EMAIL PROTECTED] writes:
  On Mon, 2006-08-21 at 19:27 +0300, Martin Atukunda wrote:
  If readline is used by psql, a history file is automatically used.
  This patch adds the special file name 'none', which if set as the
  HISTFILE parameter, will cause psql not to use the history file.
 
  I think it would be cleaner to use a separate \set variable to control
  whether a history file is written, rather than needlessly overloading
  the meaning of HISTFILE.
 
 Why is this useful at all?  There's already the -n (don't use readline)
 switch.

Seems he wants readline without history, perhaps for security.  Doesn't
setting HISTFILE to /dev/null work?

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
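For reference, the /dev/null approach can be made permanent in ~/.psqlrc. Whether it fully suppresses history writing depends on the psql/readline build, so treat it as a workaround to verify rather than a guarantee:

```
-- in ~/.psqlrc: send command history to the bit bucket
\set HISTFILE /dev/null
```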



Re: [HACKERS] Replication

2006-08-21 Thread Markus Schiltknecht

Gregory Maxwell wrote:

infiniband is cheap..


Can I get one? I'd love to run some tests with Postgres-R ;-)



Re: [HACKERS] Replication

2006-08-21 Thread AgentM


On Aug 21, 2006, at 15:00 , D'Arcy J.M. Cain wrote:


On Mon, 21 Aug 2006 14:46:05 -0400
Gregory Maxwell [EMAIL PROTECTED] wrote:

On 8/21/06, Alvaro Herrera [EMAIL PROTECTED] wrote:

But the confirmation that needs to come is that the WAL changes have
been applied (fsync'ed), so the performance will be terrible.  So bad,
that I don't think anyone will want to use such a replication system ...


Okay. I give up... Why is waiting for fsync on a fast local network
which takes 15us to send a message (infiniband is cheap..) an
unimaginable delay when we tolerate a local 8ms fsync delay on systems
without writeback cache?


OK, that solves your problem.  How about my problem where replication
has to happen on servers in three countries on two continents and
thousands of updates a second have to happen in less than 10ms? This is
the critical issue with replication - one size does not fit all.
Synchronous replication, in particular, fits almost no one.

My experience is that any replication needs to be based on your business
rules which will vary widely.


Sure- and more specifically, replication rules may differ on every
table according to those rules. The current solutions are on/off for
a list of tables. I wonder if the various pgsql replication engines
have any problems co-existing...


-M




Re: [HACKERS] Replication

2006-08-21 Thread Markus Schiltknecht

Jeff Davis wrote:

Synchronous replication (to me) means that the data has been written to
permanent storage on all masters and all slaves before any master or
slave reports a successful COMMIT. Are you suggesting that you ship the
WAL over the network, wait for it to be written to the slave, and then
report a successful commit?


I'm not suggesting doing replication like that. But I think that's what 
people mean when they propose that multi-master synchronous replication 
would be fairly trivial to implement with 2PC and wal-shipping.


This is more or less trivial to implement, yes. But it won't scale.

Regards

Markus



Re: [HACKERS] Replication

2006-08-21 Thread Alvaro Herrera
AgentM wrote:
 
 On Aug 21, 2006, at 15:00 , D'Arcy J.M. Cain wrote:
 
 On Mon, 21 Aug 2006 14:46:05 -0400
 Gregory Maxwell [EMAIL PROTECTED] wrote:
 On 8/21/06, Alvaro Herrera [EMAIL PROTECTED] wrote:
 But the confirmation that needs to come is that the WAL changes have
 been applied (fsync'ed), so the performance will be terrible.  So bad,
 that I don't think anyone will want to use such a replication system ...
 
 Okay. I give up... Why is waiting for fsync on a fast local network
 which takes 15us to send a message (infiniband is cheap..) an
 unimaginable delay when we tolerate a local 8ms fsync delay on systems
 without writeback cache?
 
 OK, that solves your problem.  How about my problem where replication
 has to happen on servers in three countries on two continents and
 thousands of updates a second have to happen in less than 10ms? This is
 the critical issue with replication - one size does not fit all.
 Synchronous replication, in particular, fits almost no one.
 
 My experience is that any replication needs to be based on your business
 rules which will vary widely.
 
 Sure- and more specifically, replication rules may differ on every  
 table according to those rules. The current solutions are on/off for  
 a list of tables. I wonder if the various pgsql replication engines  
 have any problems co-existing...

Although I have never tried, I am sure Mammoth Replicator could
coexist relatively sanely with Slony-I.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [PATCHES] [HACKERS] BF Failure on Bandicoot

2006-08-21 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Attached is a patch that fixes this issue at least in my test machine
 (yes, I found one 2000 machine that was broken as well)

Applied.  Dave, would you re-enable the LDAP option on bandicoot so
we can verify this fixes it?

regards, tom lane



Re: [HACKERS] Replication

2006-08-21 Thread D'Arcy J.M. Cain
On Mon, 21 Aug 2006 15:14:10 -0400
AgentM [EMAIL PROTECTED] wrote:
  My experience is that any replication needs to be based on your  
  business
  rules which will vary widely.
 
 Sure- and more specifically, replication rules may differ on every  
 table according to those rules. The current solutions are on/off for  

In fact this is exactly what I did for my app.  Some tables were
multi-master and some were required to be modified from a single master.

-- 
D'Arcy J.M. Cain darcy@druid.net |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.



Re: [PATCHES] [HACKERS] BF Failure on Bandicoot

2006-08-21 Thread Dave Page
 

 -Original Message-
 From: Tom Lane [mailto:[EMAIL PROTECTED] 
 Sent: 21 August 2006 20:24
 To: pgsql-hackers@postgresql.org; Magnus Hagander
 Cc: Dave Page; pgsql-patches@postgresql.org
 Subject: Re: [PATCHES] [HACKERS] BF Failure on Bandicoot 
 
 Magnus Hagander [EMAIL PROTECTED] writes:
  Attached is a patch that fixes this issue at least in my 
 test machine
  (yes, I found one 2000 machine that was broken as well)
 
 Applied.  Dave, would you re-enable the LDAP option on bandicoot so
 we can verify this fixes it?

Done. I think the next run is in a few hours on that box.

Regards, Dave.



Re: [HACKERS] ISBN/ISSN/ISMN/EAN13 module

2006-08-21 Thread mdean

Bruce Momjian wrote:


Do we want to replace our /contrib/isbn with this, or have it pgfoundry?

---

Jeremy Kronuz wrote:
 


I worked on this ISBN/ISSN/ISMN/EAN13 module more than a year
ago, and I was wondering if it could be made official. I honestly think
it's better than the ISBN/ISSN currently included in the official
release; plus mine would also probably support UPC codes, and it already
supports the new ISBN13 codes.

Check my old post: New ISBN/ISSN/ISMN/EAN13 module. at
http://archives.postgresql.org/pgsql-hackers/2004-11/msg00153.php

In that post I explain what the module does... I was also describing
some problems I had, but the module is working now.

Please, share your thoughts.  Kronuz.

_
Express yourself instantly with MSN Messenger! Download today it's
FREE!  http://messenger.msn.click-url.com/go/onm00200471ave/direct/01/
   


--
 Bruce Momjian   [EMAIL PROTECTED]
 EnterpriseDBhttp://www.enterprisedb.com

 + If your life is a hard drive, Christ can be your backup. +


 

As a publisher in the last throes of moving to 13 digits, I believe that 
this issue needs to be resolved before the ISBN deadline of January 2007. 







[HACKERS] Dumping old contrib code update

2006-08-21 Thread Josh Berkus
Tom, all,

BTW, the reason that the CVS code for the old contrib projects isn't up on 
pgFoundry yet is that we're having an issue with permissions for which I 
don't have a solution yet.  I am still planning to fix this as soon as I 
can figure out a workaround.

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco



Re: [HACKERS] ISBN/ISSN/ISMN/EAN13 module

2006-08-21 Thread Jeremy Kronuz


I suppose having it replace the current contrib/isbn would be a good option; 13-digit ISBNs will be the standard by 2007, and some publishers have already been issuing them since last year.

The module I created uses int64 instead of strings for the numbers, so the performance might be better too, though I haven't tested for speed.
Please let me know if it will be included as a contrib, as I have updated the ISBN range numbers to include the most recent ones.

Kronuz.



Re: [HACKERS] COPY FROM view

2006-08-21 Thread Bruce Momjian

So we don't want COPY FROM VIEW in 8.2?  Even if we later support COPY
(SELECT ...), aren't we still going to want to copy from a view?  I
guess not because you would just do COPY (SELECT * FROM view)?

---

Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  I've been having the COPY FROM patch that was posted on pgsql-patches
  some time ago (I think from Hannu Krossing), sitting on my machine, with
  the intention to commit it for 8.2.  However there's something I'm not
  very sure about -- the patch creates an execution plan by passing a
  literal SELECT * FROM view to pg_parse_query, pg_analyze_and_rewrite,
  and finally planner().
 
  I'm sure we can make this work appropriately, patching over the quoting
  issues that the patch doesn't deal with, but I'm unsure if this is an
  acceptable approach.  (Actually I think it isn't.)  But what is an
  acceptable way to do it?
 
 It seems to me that we had decided that COPY FROM VIEW is not even the
 conceptually right way to think about the missing feature.  It forces
 you to create a view (at least a temporary one) in order to do what you
 want.  Furthermore it brings up the question of why can't you COPY TO
 VIEW.  The correct way to think about it is to have a way of dumping the
 output of any arbitrary SELECT statement in COPY-like format.
 
 There was some previous discussion of exactly how to go about that;
 check the archives.  Offhand I think we might have liked the syntax
   COPY (parenthesized-SELECT-statement) TO ...
 but there was also some argument in favor of using a separate statement
 that basically sets the output mode for a subsequent SELECT.  I'm
 not sure if anyone thought about how it would play with psql's \copy
 support, but that's obviously something to consider.
 
   regards, tom lane
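The parenthesized form mentioned above would look something like this; at the time of this thread it was only proposed syntax, not a released feature:

```sql
-- Proposed syntax under discussion: dump an arbitrary SELECT in COPY
-- format, with no need to create even a temporary view first.
COPY (SELECT f1, count(*) FROM t1 GROUP BY f1) TO '/tmp/t1_counts.copy';
```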
 

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] ISBN/ISSN/ISMN/EAN13 module

2006-08-21 Thread mdean

Jeremy Kronuz wrote:

I suppose having it replace the current contrib/isbn would be a 
good option; 13-digit ISBNs will be the standard by 2007, and 
some publishers have already been issuing them since last year.
 
The module I created uses int64 instead of strings for the numbers, 
so the performance might be better too, though I haven't tested 
for speed.
Please let me know if it will be included as a contrib, as I have 
updated the ISBN range numbers to include the most recent ones.
 


I do hope that your algorithm for generating 13-digit codes from 
10-digit ones has been validated with isbn.org, since all the check 
digits will change.  I believe it is crucial for PostgreSQL to generate 
ISBN codes in both 10 and 13 digits, and in the two separate ways 
required of publishers on their bar codes, from just the 10-digit input.  
This is just a special case of the overall classification system, so the 
initial 3-digit code for book publishers should not be hard-wired.  
(I know, as very competent engineers you have thought of all this; I am 
just being obsessive.)
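For reference, the 10-to-13 conversion Michael alludes to is mechanical: drop the ISBN-10 check digit, prepend the 978 "Bookland" EAN prefix, and recompute the check digit with the EAN-13 weighting. A minimal sketch (illustrative only, not the contrib module's actual code):

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    # Keep the nine data digits, discarding the old ISBN-10 check digit,
    # and prepend the 978 "Bookland" EAN prefix.
    core = "978" + isbn10.replace("-", "")[:9]
    # EAN-13 check digit: weight the twelve digits alternately 1 and 3,
    # then take the complement of the sum modulo 10.
    total = sum((3 if i % 2 else 1) * int(d) for i, d in enumerate(core))
    return core + str((10 - total % 10) % 10)

print(isbn10_to_isbn13("0-306-40615-2"))  # → 9780306406157
```

Note the new check digit (7) differs from the ISBN-10 one (2), which is exactly why the conversion must recompute it rather than carry it over.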


Incidentally, we are looking to publish manuscripts across a wide range 
of subject areas, so if anyone has anything not committed to the big 
names, let me know.  We are not a vanity publisher, and our contract 
provides for 20 percent royalties.  We publish under the Creative Commons 
License.  Our newest book, Chess for Bright Children of Any Age, is 
forthcoming.  We got the idea for the title from the New Testament.

Michael




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


[HACKERS] Open 8.2 items

2006-08-21 Thread Bruce Momjian
I will try to generate a list of open 8.2 items in 7-10 days so we can
start focusing on beta.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] ISBN/ISSN/ISMN/EAN13 module

2006-08-21 Thread Michael Glaesemann


On Aug 22, 2006, at 2:52 , Bruce Momjian wrote:



Do we want to replace our /contrib/isbn with this, or have it on  
pgfoundry?


If contrib/isbn is made obsolete by the pgfoundry ISBN/ISSN/ISMN/ 
EAN13 code, then unless there's a compelling reason for the pgfoundry  
code to be part of the distribution, I recommend removing contrib/isbn  
from the core distribution and pointing people to pgfoundry.


Michael Glaesemann
grzm seespotcode net




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Unable to post to -patches (was: Visual C++ build files)

2006-08-21 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 a.hub.org[200.46.208.251], delay=1, status=sent (250 2.7.1 Ok,
 discarded, id=258
 35-09 - BANNED: P=p003,L=1,M=multipart/mixed |
 P=p002,L=1/2,M=application/x-gzip
 ,T=gz,N=vcbuild.tar.gz | P=p...)

 Seems -patches is rejecting any mail with attached .tar.gz files, if I
 read that correctly?

Hm, I just managed to send a patch labeled application/octet-stream
without any problem.  Not sure what's the point in banning
application/x-gzip, unless that's a common virus signature?
Anyway try the other MIME type.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] Use of backslash in tsearch2

2006-08-21 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 As part of the move to support standard-conforming strings and treat
 backslash literally, I reviewed the tsearch2 code and found two places
 that seemed to use \' rather than '', and generated the attached patch.

I thought we had decided that that code should not be changed.  It has
nothing to do with SQL literals, and changing it will create unnecessary
backwards-compatibility problems.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] Use of backslash in tsearch2

2006-08-21 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  As part of the move to support standard-conforming strings and treat
  backslash literally, I reviewed the tsearch2 code and found two places
  that seemed to use \' rather than '', and generated the attached patch.
 
 I thought we had decided that that code should not be changed.  It has
 nothing to do with SQL literals, and changing it will create unnecessary
 backwards-compatibility problems.

I don't remember any comment regarding that.  I think it does relate to
SQL literals because it is creating a literal inside a literal.  Also, at
the time, this was a core-only discussion, and I am hoping for a comment
from the tsearch2 folks.
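For context, the two escaping styles at issue in this thread differ as follows; this is a toy illustration (Python, not the tsearch2 code) of quoting a string for inclusion in an SQL literal:

```python
def quote_literal(s: str) -> str:
    # Standard-conforming SQL: an embedded single quote is doubled ('').
    return "'" + s.replace("'", "''") + "'"

def quote_literal_backslash(s: str) -> str:
    # The legacy, non-standard style (\') that standard_conforming_strings
    # treats as a literal backslash followed by a closing quote.
    return "'" + s.replace("'", "\\'") + "'"

print(quote_literal("it's"))            # → 'it''s'
print(quote_literal_backslash("it's"))  # → 'it\'s'
```

With standard_conforming_strings on, only the doubled form remains a valid way to embed a quote, which is what motivated the patch under discussion.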

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] TODO: Add pg_get_acldef(),

2006-08-21 Thread Bruce Momjian
Joshua D. Drake wrote:
 Tom Lane wrote:
  Joshua D. Drake [EMAIL PROTECTED] writes:
  Trying to get back on point. What is the scope of work for the TODO 
  item? Forget everything else I brought up. What is the goal of the 
  existing TODO?
  
  I'm not sure that the TODO item has a reason to live at all, but surely
  the first item of work for it should be to figure out what its use-case
  is.  If pg_dump isn't going to use these functions, what will?
 
 Well I can't think of a reason to use the functions as a way to deliver 
 CREATE statements.
 
 Anyone else have thoughts?

They seem useful because they allow abstract access to system
information without mucking with the system tables.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Open 8.2 items

2006-08-21 Thread Alvaro Herrera
Bruce Momjian wrote:
 I will try to generate a list of open 8.2 items in 7-10 days so we can
 start focusing on beta.

FYI, you have three emails about updatable views in the queue, but you
are missing the one I sent today, which contains an updated patch that is
substantially better than those in the queue.  If somebody is going to
work on that item, I suggest starting from that one.

My patch is at
http://archives.postgresql.org/pgsql-patches/2006-08/msg00255.php

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] [PATCHES] COPY view

2006-08-21 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 OK, based on this feedback, I am adding COPY VIEW to the patches queue.

I think we have other things that demand our attention more than a
half-baked feature.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend