[HACKERS] Add the FET timezone abbreviation

2012-10-06 Thread Marc Balmer
FET stands for Further-eastern European Time and is the official time in
e.g. Belarus (Europe/Minsk).

The attached patch adds FET to the Default list of timezone abbreviations.

- Marc Balmer

diff --git a/src/timezone/tznames/Default b/src/timezone/tznames/Default
index 1369f47..7223ce5 100644
--- a/src/timezone/tznames/Default
+++ b/src/timezone/tznames/Default
@@ -615,6 +615,8 @@ EETDST  10800 D  # East-Egypt Summertime
  # (Europe/Uzhgorod)
  # (Europe/Vilnius)
  # (Europe/Zaporozhye)
+FET 10800# Further-eastern European Time
+ # (Europe/Minsk)
 MEST 7200 D  # Middle Europe Summer Time (not in zic)
 MET  3600# Middle Europe Time (not in zic)
 METDST   7200 D  # Middle Europe Summer Time (not in zic)


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Add FET to Default and Europe.txt

2012-10-06 Thread Marc Balmer
The attached patch would add the FET timezone abbreviation to the
Default list _and_ the list of European abbreviations.

- mb
diff --git a/src/timezone/tznames/Default b/src/timezone/tznames/Default
index 1369f47..7223ce5 100644
--- a/src/timezone/tznames/Default
+++ b/src/timezone/tznames/Default
@@ -615,6 +615,8 @@ EETDST  10800 D  # East-Egypt Summertime
  # (Europe/Uzhgorod)
  # (Europe/Vilnius)
  # (Europe/Zaporozhye)
+FET 10800# Further-eastern European Time
+ # (Europe/Minsk)
 MEST 7200 D  # Middle Europe Summer Time (not in zic)
 MET  3600# Middle Europe Time (not in zic)
 METDST   7200 D  # Middle Europe Summer Time (not in zic)
diff --git a/src/timezone/tznames/Europe.txt b/src/timezone/tznames/Europe.txt
index 88abecca..6c35ab1 100644
--- a/src/timezone/tznames/Europe.txt
+++ b/src/timezone/tznames/Europe.txt
@@ -154,6 +154,8 @@ EETDST  10800 D  # East-Egypt Summertime
  # (Europe/Uzhgorod)
  # (Europe/Vilnius)
  # (Europe/Zaporozhye)
+FET 10800# Further-eastern European Time
+ # (Europe/Minsk)
 GMT 0# Greenwich Mean Time
  # (Africa/Abidjan)
  # (Africa/Bamako)



Re: [HACKERS] Bad Data back Door

2012-10-06 Thread Heikki Linnakangas

On 06.10.2012 05:14, John R Pierce wrote:

I'd like to see some encoding validation and substitution functions in
postgres. For instance, one that can take any supported encoding and
convert it to the database encoding, generating an error on any invalid
character. This translation could be the identity (e.g., UTF8 to UTF8),
whereupon it would just validate.


See pg_any_to_server() in mbutils.c. At the SQL level, there's the 
convert(bytea, name, name) function.



A second function would do the same, but replace errors with the
substitution character in the target charset instead of raising an error.


Hmm, I don't think we have that.
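As a standalone illustration (not part of the backend), the substitute-on-error semantics being asked for could look like the following C sketch, which replaces each invalid UTF-8 byte with '?'; a real implementation would use the target encoding's designated substitution character, and the helper names here are ours:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Length of the UTF-8 sequence starting at s (n bytes available), or 0
 * if it is invalid or truncated.  Simplified: continuation bytes are
 * checked, but overlong encodings and surrogates are not rejected. */
static size_t
utf8_seq_len(const unsigned char *s, size_t n)
{
    size_t len, i;

    if (s[0] < 0x80)
        len = 1;
    else if ((s[0] & 0xE0) == 0xC0)
        len = 2;
    else if ((s[0] & 0xF0) == 0xE0)
        len = 3;
    else if ((s[0] & 0xF8) == 0xF0)
        len = 4;
    else
        return 0;               /* stray continuation or invalid lead byte */

    if (len > n)
        return 0;               /* truncated at end of input */
    for (i = 1; i < len; i++)
        if ((s[i] & 0xC0) != 0x80)
            return 0;
    return len;
}

/* Copy src into dst, replacing each invalid byte with '?'.
 * dst must have room for srclen + 1 bytes. */
static char *
utf8_substitute(char *dst, const char *src, size_t srclen)
{
    const unsigned char *s = (const unsigned char *) src;
    char   *d = dst;
    size_t  i = 0;

    while (i < srclen)
    {
        size_t len = utf8_seq_len(s + i, srclen - i);

        if (len == 0)
        {
            *d++ = '?';         /* substitute and resynchronize */
            i++;
        }
        else
        {
            memcpy(d, s + i, len);
            d += len;
            i += len;
        }
    }
    *d = '\0';
    return dst;
}
```

A validating-only variant would simply error on the first byte where utf8_seq_len() returns 0, which is essentially what pg_any_to_server() does today.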

- Heikki




Re: [HACKERS] Add FET to Default and Europe.txt

2012-10-06 Thread Bruce Momjian

The Postgres community does not maintain the timezone database;  we ship
copies of the IANA timezone database;  you will have to request the
changes from them:

http://www.iana.org/time-zones

---

On Sat, Oct  6, 2012 at 11:18:43AM +0200, Marc Balmer wrote:
 The attached patch would add the FET timezone abbreviation to the
 Default list _and_ the list of european abbreviations.
 
 - mb

 diff --git a/src/timezone/tznames/Default b/src/timezone/tznames/Default
 index 1369f47..7223ce5 100644
 --- a/src/timezone/tznames/Default
 +++ b/src/timezone/tznames/Default
 @@ -615,6 +615,8 @@ EETDST  10800 D  # East-Egypt Summertime
   # (Europe/Uzhgorod)
   # (Europe/Vilnius)
   # (Europe/Zaporozhye)
 +FET 10800# Further-eastern European Time
 + # (Europe/Minsk)
  MEST 7200 D  # Middle Europe Summer Time (not in zic)
  MET  3600# Middle Europe Time (not in zic)
  METDST   7200 D  # Middle Europe Summer Time (not in zic)
 diff --git a/src/timezone/tznames/Europe.txt b/src/timezone/tznames/Europe.txt
 index 88abecca..6c35ab1 100644
 --- a/src/timezone/tznames/Europe.txt
 +++ b/src/timezone/tznames/Europe.txt
 @@ -154,6 +154,8 @@ EETDST  10800 D  # East-Egypt Summertime
   # (Europe/Uzhgorod)
   # (Europe/Vilnius)
   # (Europe/Zaporozhye)
 +FET 10800# Further-eastern European Time
 + # (Europe/Minsk)
  GMT 0# Greenwich Mean Time
   # (Africa/Abidjan)
   # (Africa/Bamako)

 


-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Resetting libpq connections after an app error

2012-10-06 Thread Daniele Varrazzo
On Sat, Jul 21, 2012 at 5:36 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Martijn van Oosterhout klep...@svana.org writes:
 On Sat, Jul 21, 2012 at 01:08:58AM +0100, Daniele Varrazzo wrote:
 In a libpq application, if there is an application error between
 PQsendQuery and PQgetResult, is there a way to revert a connection
 back to a usable state (i.e. from transaction status ACTIVE to IDLE)
 without using the network in a blocking way? We are currently doing

 There is PQreset(), which also exists in a non-blocking variant.

 Note that PQreset essentially kills the connection and establishes a new
 one, which might not be what Daniele is looking for.  The alternative
 approach is to issue PQcancel and then just let the query flush out as
 you normally would in an async application, ie PQisBusy/PQconsumeInput
 until ready, then PQgetResult (which you throw away), repeat until you
 get NULL.

I'm back on this issue. I've tested the PQcancel approach, but it blocks
on send()/recv() as well, and given the constrained resources it is
bound to (it must be callable from a signal handler), I assume there is
no workaround.

The PQreset approach has the shortcoming of discarding the session
configuration that somebody may have created in Pythonland: a state
with a connection made but not configured would be something a client
program is probably not prepared to handle.

I think a plain disconnection would be much easier to handle: a
long-running Python program should probably already cope with a broken
connection, and a short-lived one would benefit from a working SIGINT.

If you have other suggestions I'd be glad to know, thank you.
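For reference, the non-blocking drain that Tom describes above (issue PQcancel, then PQisBusy/PQconsumeInput until ready, then PQgetResult until NULL) looks roughly like this in libpq C. This is a sketch only: the helper name cancel_and_drain is ours, error handling is minimal, and it needs a live connection, so it is shown untested:

```c
#include <libpq-fe.h>

/* Cancel the in-flight query and drain its results.  Returns 0 on
 * success, -1 on a broken connection.  A real async application would
 * wait for readability on PQsocket(conn) with select()/poll() at the
 * marked spot instead of spinning. */
static int
cancel_and_drain(PGconn *conn)
{
    char      errbuf[256];
    PGcancel *cancel = PQgetCancel(conn);

    if (cancel == NULL)
        return -1;
    if (!PQcancel(cancel, errbuf, sizeof(errbuf)))
    {
        PQfreeCancel(cancel);
        return -1;              /* errbuf holds the reason */
    }
    PQfreeCancel(cancel);

    /* Let the cancelled query flush out. */
    for (;;)
    {
        PGresult *res;

        while (PQisBusy(conn))
        {
            if (!PQconsumeInput(conn))
                return -1;      /* connection broken */
            /* here: wait for readability on PQsocket(conn) */
        }
        res = PQgetResult(conn);
        if (res == NULL)
            break;              /* connection is idle again */
        PQclear(res);
    }
    return 0;
}
```

Note that PQcancel itself opens a new socket to the server to deliver the cancel request, which is where the blocking Daniele observes can occur; the drain loop afterwards is fully non-blocking.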


-- Daniele




Re: [HACKERS] Add FET to Default and Europe.txt

2012-10-06 Thread Marc Balmer
Bruce,

 The Postgres community does not maintain the timezone database;  we ship
 copies of the IANA timezone database;  you will have to request the
 changes from them:
 
   http://www.iana.org/time-zones

Please take a second look at the diffs: they do *NOT* change the files
in the timezone database, they change the Default set of timezone
abbreviations that PostgreSQL uses.

These files are maintained by PostgreSQL; there is even a README with an
explicit mention that changes should be reported to pgsql-hackers.



 
 ---
 
 On Sat, Oct  6, 2012 at 11:18:43AM +0200, Marc Balmer wrote:
 The attached patch would add the FET timezone abbreviation to the
 Default list _and_ the list of european abbreviations.

 - mb
 
 diff --git a/src/timezone/tznames/Default b/src/timezone/tznames/Default
 index 1369f47..7223ce5 100644
 --- a/src/timezone/tznames/Default
 +++ b/src/timezone/tznames/Default
 @@ -615,6 +615,8 @@ EETDST  10800 D  # East-Egypt Summertime
   # (Europe/Uzhgorod)
   # (Europe/Vilnius)
   # (Europe/Zaporozhye)
 +FET 10800# Further-eastern European Time
 + # (Europe/Minsk)
  MEST 7200 D  # Middle Europe Summer Time (not in zic)
  MET  3600# Middle Europe Time (not in zic)
  METDST   7200 D  # Middle Europe Summer Time (not in zic)
 diff --git a/src/timezone/tznames/Europe.txt 
 b/src/timezone/tznames/Europe.txt
 index 88abecca..6c35ab1 100644
 --- a/src/timezone/tznames/Europe.txt
 +++ b/src/timezone/tznames/Europe.txt
 @@ -154,6 +154,8 @@ EETDST  10800 D  # East-Egypt Summertime
   # (Europe/Uzhgorod)
   # (Europe/Vilnius)
   # (Europe/Zaporozhye)
 +FET 10800# Further-eastern European Time
 + # (Europe/Minsk)
  GMT 0# Greenwich Mean Time
   # (Africa/Abidjan)
   # (Africa/Bamako)
 

 
 





Re: [HACKERS] Switching timeline over streaming replication

2012-10-06 Thread Amit Kapila
On Thursday, October 04, 2012 7:22 PM Heikki Linnakangas wrote:
  On Wednesday, October 03, 2012 8:45 PM Heikki Linnakangas wrote:
  On Tuesday, October 02, 2012 4:21 PM Heikki Linnakangas wrote:
   Thanks for the thorough review! I committed the xlog.c refactoring
   patch now. Attached is a new version of the main patch; comments on
   specific points below. I didn't adjust the docs per your comments
   yet, will do that next.
  
  I have some doubts regarding the comments fixed by you and some more
  new review comments.
  After this I shall focus mainly on testing of this patch.
 
 
 Testing
 ---

One more test seems to have failed. Apart from this, the other tests passed.

2. a. Master M-1 
   b. Standby S-1 follows M-1 
   c. insert 10 records on M-1. verify all records are visible on M-1,S-1 
   d. Stop S-1 
   e. insert 2 records on M-1. 
   f. Stop M-1 
   g. Start S-1 
   h. Promote S-1 
   i. Make M-1 recovery.conf such that it should connect to S-1 
   j. Start M-1. Below error comes on M-1 which is expected as M-1 has more
data. 
  LOG:  database system was shut down at 2012-10-05 16:45:39 IST 
  LOG:  entering standby mode 
  LOG:  consistent recovery state reached at 0/176A070 
  LOG:  record with zero length at 0/176A070 
  LOG:  database system is ready to accept read only connections 
  LOG:  streaming replication successfully connected to primary 
  LOG:  fetching timeline history file for timeline 2 from primary
server 
  LOG:  replication terminated by primary server 
  DETAIL:  End of WAL reached on timeline 1 
  LOG:  walreceiver ended streaming and awaits new instructions 
  LOG:  new timeline 2 forked off current database system timeline 1
before current recovery point 0/176A070 
  LOG:  re-handshaking at position 0/100 on tli 1 
  LOG:  replication terminated by primary server 
  DETAIL:  End of WAL reached on timeline 1 
  LOG:  walreceiver ended streaming and awaits new instructions 
  LOG:  new timeline 2 forked off current database system timeline 1
before current recovery point 0/176A070 
   k. Stop M-1. Start M-1. It is able to successfully connect to S-1 which
is a problem. 
   l. check in S-1. Records inserted in step-e are not present. 
   m. Now insert records in S-1. M-1 doesn't receive any records. On M-1
server following log is getting printed. 
  LOG:  out-of-sequence timeline ID 1 (after 2) in log segment
00020001, offset 0 
  LOG:  out-of-sequence timeline ID 1 (after 2) in log segment
00020001, offset 0 
  LOG:  out-of-sequence timeline ID 1 (after 2) in log segment
00020001, offset 0 
  LOG:  out-of-sequence timeline ID 1 (after 2) in log segment
00020001, offset 0 
  LOG:  out-of-sequence timeline ID 1 (after 2) in log segment
00020001, offset 0
 
 
 With Regards,
 Amit Kapila.





Re: [HACKERS] Add FET to Default and Europe.txt

2012-10-06 Thread Bruce Momjian
On Sat, Oct  6, 2012 at 02:46:03PM +0200, Marc Balmer wrote:
 Bruce,
 
  The Postgres community does not maintain the timezone database;  we ship
  copies of the IANA timezone database;  you will have to request the
  changes from them:
  
  http://www.iana.org/time-zones
 
 Please take a second look at the diffs, they do *NOT* change the files
 in the timezone database, they change the Default set of timezones that
 PostgreSQL uses.
 
 These files are maintained by PostgreSQL, there is even a README with an
 explicit mention that changes should be reported to pgsql-hackers
 

Oops, I see what you mean now.  I thought everything under src/timezone
was copied from others, but obviously I was wrong.

---

 
 
  
  ---
  
  On Sat, Oct  6, 2012 at 11:18:43AM +0200, Marc Balmer wrote:
  The attached patch would add the FET timezone abbreviation to the
  Default list _and_ the list of european abbreviations.
 
  - mb
  
  diff --git a/src/timezone/tznames/Default b/src/timezone/tznames/Default
  index 1369f47..7223ce5 100644
  --- a/src/timezone/tznames/Default
  +++ b/src/timezone/tznames/Default
  @@ -615,6 +615,8 @@ EETDST  10800 D  # East-Egypt Summertime
# (Europe/Uzhgorod)
# (Europe/Vilnius)
# (Europe/Zaporozhye)
  +FET 10800# Further-eastern European Time
  + # (Europe/Minsk)
   MEST 7200 D  # Middle Europe Summer Time (not in zic)
   MET  3600# Middle Europe Time (not in zic)
   METDST   7200 D  # Middle Europe Summer Time (not in zic)
  diff --git a/src/timezone/tznames/Europe.txt 
  b/src/timezone/tznames/Europe.txt
  index 88abecca..6c35ab1 100644
  --- a/src/timezone/tznames/Europe.txt
  +++ b/src/timezone/tznames/Europe.txt
  @@ -154,6 +154,8 @@ EETDST  10800 D  # East-Egypt Summertime
# (Europe/Uzhgorod)
# (Europe/Vilnius)
# (Europe/Zaporozhye)
  +FET 10800# Further-eastern European Time
  + # (Europe/Minsk)
   GMT 0# Greenwich Mean Time
# (Africa/Abidjan)
# (Africa/Bamako)
  
 
  
  
 

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Re: [WIP] Performance Improvement by reducing WAL for Update Operation

2012-10-06 Thread Amit kapila
On Thursday, October 04, 2012 8:03 PM Heikki Linnakangas wrote:
On Wednesday, October 03, 2012 9:33 PM Amit Kapila wrote:
On Friday, September 28, 2012 7:03 PM Amit Kapila wrote:
  On Thursday, September 27, 2012 6:39 PM Amit Kapila wrote:
   On Thursday, September 27, 2012 4:12 PM Heikki Linnakangas wrote:
   On 25.09.2012 18:27, Amit Kapila wrote:
 If you feel it is a must to do the comparison, we can do it the same
 way as we identify for HOT?
  


  Now I shall do the various tests for the following and post them here:
  a. Attached patch in the mode where it takes advantage of the history
  tuple
  b. By changing the logic for modified-column calculation to use
  memcmp()



 1. Please find the results (pgbench_test.htm) for point 2, where there
 is one fixed-column update (last few bytes are random) and the second
 column update is a 32-byte random string. The results for 50 and 100 are
 still in progress; the others are attached to this mail.

Please find the readings of the LZ patch along with the Xlog-Scale patch.
The comparison, for update operations, is between:
base code + Xlog Scale patch
base code + Xlog Scale patch + Update WAL optimization (LZ compression)

The readings have been taken based on below data.
pgbench_xlog_scale_50 -
a. Updated Record size 50, Total Record size 1800
b. Threads 8, 1 ,2 
c. Synchronous_commit - off, on

pgbench_xlog_scale_250 - 
a. Updated Record size 250, Total Record size 1800
b. Threads 8, 1 ,2 
c. Synchronous_commit - off, on

pgbench_xlog_scale_500- 
a. Updated Record size 500, Total Record size 1800
b. Threads 8, 1 ,2 
c. Synchronous_commit - off, on 

Observations
--
a. There is still a good performance improvement even if we do the
Update WAL optimization on top of the Xlog Scaling patch.
b. There is a slight performance dip for 1 thread (only with
synchronous_commit = off) with the Update WAL optimization (LZ
compression), but for 2 threads there is a performance increase.
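The optimization under test writes a delta of the old and new tuple versions to WAL instead of the full new tuple. As a rough illustration of why this shrinks update WAL (not the patch's actual encoding, which is LZ-style against the old tuple as history; the helper name tuple_delta is ours), a prefix/suffix delta can be computed like this:

```c
#include <stddef.h>

/* Compute how many leading and trailing bytes two tuple images share.
 * Only the middle (changed) region of the new tuple then needs to go to
 * WAL, plus the two shared lengths.  Illustration only. */
static void
tuple_delta(const char *oldt, size_t oldlen,
            const char *newt, size_t newlen,
            size_t *prefix, size_t *suffix)
{
    size_t maxcmp = oldlen < newlen ? oldlen : newlen;
    size_t p = 0, s = 0;

    /* longest common prefix */
    while (p < maxcmp && oldt[p] == newt[p])
        p++;
    /* longest common suffix of what remains */
    while (s < maxcmp - p &&
           oldt[oldlen - 1 - s] == newt[newlen - 1 - s])
        s++;
    *prefix = p;
    *suffix = s;
}
```

In the pgbench scenarios above (updated record size 50/250/500 out of an 1800-byte row), the changed middle region is a small fraction of the tuple, which is where the WAL savings come from.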


With Regards,
Amit Kapila.



Re: [HACKERS] Support for REINDEX CONCURRENTLY

2012-10-06 Thread Jeremy Harris

On 10/05/2012 09:03 PM, Tom Lane wrote:

Note that allowing subsequent requests to jump the queue would not be a
good fix for this; if you do that, it's likely the ex-lock will never be
granted, at least not till the next system idle time.


Offering that option to the admin sounds like a good thing, since
(as Alvaro points out) the build of the replacement index could take
considerable time but be done without the lock.  The swap could then be
done in the first quiet period (without further admin action), and the
drop started.

One size doesn't fit all.  It doesn't need to be the only method.
--
Cheers,
Jeremy





Re: [HACKERS] Add FET to Default and Europe.txt

2012-10-06 Thread Tom Lane
Marc Balmer m...@msys.ch writes:
 These files are maintained by PostgreSQL, there is even a README with an
 explicit mention that changes should be reported to pgsql-hackers

What the README file actually suggests is that periodically we should
re-evaluate the set of default abbreviations.  I'm not that thrilled
with making one-off changes on a first-to-complain basis.

I have no particular objection to adding FET, since it doesn't seem to
have any conflicts with other zone abbreviations; but we're really way
overdue for another global look at the abbreviation lists.

regards, tom lane




Re: [HACKERS] Bad Data back Door

2012-10-06 Thread David E. Wheeler
On Oct 5, 2012, at 6:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Probably not so much assumed as nobody thought about it.  In
 e.g. plperl we expend the cycles to do encoding validity checking on
 *every* string entering the system from Perl.  I'm not sure why foreign
 tables ought to get a pass on that, especially when you consider the
 communication overhead that the encoding check would be amortized
 against.

Yes, that’s what I was thinking.

 Now, having said that, I think it has to be the reponsibility of the FDW
 to apply any required check ... which makes this a bug report against
 oracle_fdw, not the core system.  (FWIW, contrib/file_fdw depends on the
 COPY code, which will check encoding.)

I agree that this is a bug in oracle_fdw (well, potentially; ultimately, it’s 
Oracle that’s lying about the encoding of those text values). But I think that 
it would be much more useful overall -- not to mention more database-like -- 
for PostgreSQL to provide a way to enforce it. That is, to consider foreign 
tables to be an input like COPY or SQL, and to validate values before 
displaying them.

Best,

David





Re: [HACKERS] Bad Data back Door

2012-10-06 Thread Atri Sharma
On Sat, Oct 6, 2012 at 1:34 PM, David E. Wheeler da...@justatheory.com wrote:
 On Oct 5, 2012, at 6:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Probably not so much assumed as nobody thought about it.  In
 e.g. plperl we expend the cycles to do encoding validity checking on
 *every* string entering the system from Perl.  I'm not sure why foreign
 tables ought to get a pass on that, especially when you consider the
 communication overhead that the encoding check would be amortized
 against.

 Yes, that’s what I was thinking.

 Now, having said that, I think it has to be the reponsibility of the FDW
 to apply any required check ... which makes this a bug report against
 oracle_fdw, not the core system.  (FWIW, contrib/file_fdw depends on the
 COPY code, which will check encoding.)

 I agree that this is a bug in oracle_fdw (well, potentially; ultimately, it’s 
 Oracle that’s lying about the encoding of those text values). But I think 
 that it would be much more useful overall -- not to mention more 
 database-like -- for PostgreSQL to provide a way to enforce it. That is, to 
 consider foreign tables to be an input like COPY or SQL, and to validate 
 values before displaying them.

 Best,

 David





+1
-- 
Regards,

Atri
l'apprenant




[HACKERS] Regarding identifying a foreign scan

2012-10-06 Thread Atri Sharma
Hi,

I am trying to identify foreign scans uniquely. I am trying to do that
via struct ForeignScanState, but I am confused as to how I can identify
the scan.

Is there a member of ForeignScanState that can be used for this purpose?

Atri

-- 
Regards,

Atri
l'apprenant




Re: [HACKERS] ALTER command reworks

2012-10-06 Thread Kohei KaiGai
2012/10/2 Alvaro Herrera alvhe...@2ndquadrant.com:
 Excerpts from Alvaro Herrera's message of vie sep 28 18:17:32 -0300 2012:
 Excerpts from Kohei KaiGai's message of lun sep 10 08:08:32 -0300 2012:

  As attached, I split off the original one into three portions; for 
  set-schema,
  set-owner and rename-to. Please apply them in order of patch filename.

 Hmm, in the first patch, it seems to me we can simplify
 AlterObjectNamespace's signature: instead of passing all the details of
 the object class (cache Ids and attribute numbers and so on), just do

 AlterObjectNamespace(catalog-containing-object, objectId, newNamespaceOid)

 Like in the attached reworked version, in which I renamed the function
 to AlterObjectNamespace_internal because the other name seemed a bit
 wrong in the fact of the existing AlterObjectNamespace_oid.

 I also made the ALTER FUNCTION case go through
 AlterObjectNamespace_internal; it seems pointless to have a separate
 code path to go through when the generic one does just fine (also, this
 makes functions identical to collations in implementation).  That's one
 less hook point for sepgsql, I guess.

Thanks for your reviewing, and sorry for my late response.

I definitely agree with your solution. The reason why my original patch
had a separate code path for functions and collations was that they took
additional elements (such as a function's argument list) to check for
duplicate names. So, I think it is a wise idea to invoke the common
code after the name-duplication checks.

Best regards,
-- 
KaiGai Kohei kai...@kaigai.gr.jp




Re: [HACKERS] Bad Data back Door

2012-10-06 Thread John R Pierce

On 10/06/12 3:45 AM, Heikki Linnakangas wrote:

At the SQL level, there's the convert(bytea, name, name) function.


ahhh, right.  (forehead slap)




a 2nd function would do the same, but replace
errors with the substitution character in the target charset and not 
error.


Hmm, I don't think we have that.


Methinks this would be extremely useful for importing 'dirty' data.
That, or a per-connection flag (or an option on the COPY command?) that
says substitute-on-error, for the likes of UTF-8 imports from CSV.




--
john r pierce  N 37, W 122
santa cruz ca mid-left coast





Re: [HACKERS] Bad Data back Door

2012-10-06 Thread Tom Lane
David E. Wheeler da...@justatheory.com writes:
 On Oct 5, 2012, at 6:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Now, having said that, I think it has to be the reponsibility of the FDW
 to apply any required check ... which makes this a bug report against
 oracle_fdw, not the core system.  (FWIW, contrib/file_fdw depends on the
 COPY code, which will check encoding.)

 I agree that this is a bug in oracle_fdw (well, potentially; ultimately, it’s 
 Oracle that’s lying about the encoding of those text values). But I think 
 that it would be much more useful overall -- not to mention more 
 database-like -- for PostgreSQL to provide a way to enforce it. That is, to 
 consider foreign tables to be an input like COPY or SQL, and to validate 
 values before displaying them.

It is the FDW's responsibility to deal with this.  We expect it to hand
back valid tuples; it is not reasonable to disassemble them looking for
mistakes (and we couldn't catch most mistakes, anyway).  If the
interface were defined in terms of text, we could do checking above the
FDW level ... but it isn't.

regards, tom lane




Re: [HACKERS] Bugs in CREATE/DROP INDEX CONCURRENTLY

2012-10-06 Thread Simon Riggs
On 6 October 2012 00:56, Tom Lane t...@sss.pgh.pa.us wrote:
 1.  These operations think they can use ordinary heap_update operations
 to change pg_index entries when they don't have exclusive lock on the
 parent table.  The lack of ex-lock means that another backend could be
 currently loading up its list of index OIDs for the table --- and since
 it scans pg_index with SnapshotNow to do that, the heap_update could
 result in the other backend failing to see this index *at all*.  That's
 okay if it causes the other backend to not use the index for scanning...
 but not okay if it causes the other backend to fail to make index
 entries it is supposed to make.

 I think this could possibly be fixed by using nontransactional
 update-in-place when we're trying to change indisvalid and/or
 indisready, but I've not really thought through the details.

 2.  DROP INDEX CONCURRENTLY doesn't bother to do
 TransferPredicateLocksToHeapRelation until long after it's invalidated
 the index.  Surely that's no good?  Is it even possible to do that
 correctly, when we don't have a lock that will prevent new predicate
 locks from being taken out meanwhile?

I'm in the middle of reviewing other fixes there, so will comment
soon, just not right now.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Regarding identifying a foreign scan

2012-10-06 Thread Tom Lane
Atri Sharma atri.j...@gmail.com writes:
 I am trying to identify foreign scans uniquely.

What do you mean by identify?  What are you trying to accomplish,
and in what context?

regards, tom lane




Re: [HACKERS] Regarding identifying a foreign scan

2012-10-06 Thread Atri Sharma
On Sat, Oct 6, 2012 at 3:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Atri Sharma atri.j...@gmail.com writes:
 I am trying to identify foreign scans uniquely.

 What do you mean by identify?  What are you trying to accomplish,
 and in what context?

 regards, tom lane

Hi Tom,

I am trying to identify the situation where a query has multiple
foreign scans. In that case, I need to check whether the current scan is
the same as a previous scan or not. If not, then I think it means that
multiple scans are in progress on the same backend.

Atri

-- 
Regards,

Atri
l'apprenant




Re: [HACKERS] Bad Data back Door

2012-10-06 Thread Andrew Dunstan


On 10/06/2012 03:35 PM, Tom Lane wrote:

David E. Wheeler da...@justatheory.com writes:

On Oct 5, 2012, at 6:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:

Now, having said that, I think it has to be the reponsibility of the FDW
to apply any required check ... which makes this a bug report against
oracle_fdw, not the core system.  (FWIW, contrib/file_fdw depends on the
COPY code, which will check encoding.)

I agree that this is a bug in oracle_fdw (well, potentially; ultimately, it’s 
Oracle that’s lying about the encoding of those text values). But I think that 
it would be much more useful overall -- not to mention more database-like -- 
for PostgreSQL to provide a way to enforce it. That is, to consider foreign 
tables to be an input like COPY or SQL, and to validate values before 
displaying them.

It is the FDW's responsibility to deal with this.  We expect it to hand
back valid tuples; it is not reasonable to disassemble them looking for
mistakes (and we couldn't catch most mistakes, anyway).  If the
interface were defined in terms of text, we could do checking above the
FDW level ... but it isn't.





Exactly.

We've done quite a lot of work over the years tightening the ways that
badly encoded data can enter the database. It's a never-ending game of
whack-a-mole. There aren't any easy answers.


cheers

andrew





Re: [HACKERS] Regarding identifying a foreign scan

2012-10-06 Thread Tom Lane
Atri Sharma atri.j...@gmail.com writes:
 On Sat, Oct 6, 2012 at 3:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Atri Sharma atri.j...@gmail.com writes:
 I am trying to identify foreign scans uniquely.

 What do you mean by identify?  What are you trying to accomplish,
 and in what context?

 I am trying to identify the situation where a query has multiple
 foreign scans. In that case, I need to check whether the current scan is
 the same as a previous scan or not. If not, then I think it means that
 multiple scans are in progress on the same backend.

Well, if you search the plan tree and find more than one ForeignScan
node, it means there's more than one foreign scan.  It doesn't seem to
me to be very complicated.  Now, if you're wondering whether they
reference the same server or not, that's a bit harder.  I guess you
could look at the RTEs, fetch the foreign-table data for each FT
relation OID, and see if the same server is referenced.
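Tom's suggestion — walk the range table and compare each foreign table's server — could be sketched as backend extension code roughly like the following. This is a hypothetical helper (the name count_foreign_servers is ours, and exact headers and RangeTblEntry fields vary by server version), not compilable standalone:

```c
#include "postgres.h"
#include "catalog/pg_class.h"
#include "foreign/foreign.h"
#include "nodes/parsenodes.h"
#include "nodes/plannodes.h"

/* Count how many distinct foreign servers a plan's range table touches. */
static int
count_foreign_servers(PlannedStmt *stmt)
{
    List     *servers = NIL;
    ListCell *lc;

    foreach(lc, stmt->rtable)
    {
        RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc);

        if (rte->rtekind == RTE_RELATION &&
            rte->relkind == RELKIND_FOREIGN_TABLE)
        {
            ForeignTable *ft = GetForeignTable(rte->relid);

            /* collect each server OID once */
            servers = list_append_unique_oid(servers, ft->serverid);
        }
    }
    return list_length(servers);
}
```

On servers where RangeTblEntry has no relkind field, the relation kind would instead have to be looked up from pg_class via the relation OID.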

regards, tom lane




Re: [HACKERS] Regarding identifying a foreign scan

2012-10-06 Thread Atri Sharma
On Sat, Oct 6, 2012 at 4:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Atri Sharma atri.j...@gmail.com writes:
 On Sat, Oct 6, 2012 at 3:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Atri Sharma atri.j...@gmail.com writes:
 I am trying to identify foreign scans uniquely.

 What do you mean by identify?  What are you trying to accomplish,
 and in what context?

 I am trying to identify the situation where a query has multiple
 foreign scans. In that case, I need to check whether the current scan
 is the same as a previous scan or not. If not, then I think it means
 that multiple scans are in progress on the same backend.

 Well, if you search the plan tree and find more than one ForeignScan
 node, it means there's more than one foreign scan.  It doesn't seem to
 me to be very complicated.  Now, if you're wondering whether they
 reference the same server or not, that's a bit harder.  I guess you
 could look at the RTEs, fetch the foreign-table data for each FT
 relation OID, and see if the same server is referenced.

 regards, tom lane

Hi Tom,

Thanks for the extensive reply.

The issue I am trying to resolve is that if two scans are taking place
on the same backend (due to the same query), then the server crashes.

e.g. foreign_table is a foreign table

SELECT * FROM foreign_table UNION SELECT * FROM foreign_table; results
in a crash of the server.

I think it is because I am not saving the state of the scan, so if
multiple scans are running on the same backend, it causes the crash.

Any hints on how I can detect this condition please?

Atri

-- 
Regards,

Atri
l'apprenant




Re: [HACKERS] setting per-database/role parameters checks them against wrong context

2012-10-06 Thread Selena Deckelmann
On Mon, Oct 1, 2012 at 2:28 PM, Selena Deckelmann sel...@chesnok.com wrote:
 On Mon, Oct 1, 2012 at 1:37 PM, Selena Deckelmann sel...@chesnok.com wrote:
 On Mon, Oct 1, 2012 at 1:00 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Selena Deckelmann sel...@chesnok.com writes:
 The check_temp_buffers() problem seems like a regression and blocks us
 from upgrading to 9.2. The use case is functions that set
 temp_buffers and are occasionally called in a series from a parent
 session.  The workaround is... a lot of work.

 Uh ... how is that a regression?  AFAIK it's been that way right along.

 We're running 9.0 - looks like it changed in 9.1, last revision to the
 relevant line was 6/2011. The group decided not to upgrade to 9.1 last
 year, but was going to just go directly to 9.2 in the next few weeks.

 And, I was basing the regression comment on the documentation for
 temp_buffers: The setting can be changed within individual sessions,
 but only before the first use of temporary tables within the session;
 subsequent attempts to change the value will have no effect on that
 session. This statement has been there since at least 8.1.

 A warning, and then not failing seems more appropriate than an error,
 given the documented behavior.

I tried out a few things, and then realized that perhaps just adding
PGC_S_SESSION to the list of sources that are at elevel WARNING
partially fixes this.

This doesn't fix the issue with log_statement_stats, but it makes the
behavior of temp_buffers match the documented behavior (no longer errors
out in a transaction), while still warning the user.

-selena


-- 
http://chesnok.com


session_warning.patch
Description: Binary data



Re: [HACKERS] Add FET to Default and Europe.txt

2012-10-06 Thread Tom Lane
I wrote:
 What the README file actually suggests is that periodically we should
 re-evaluate the set of default abbreviations.

I looked through the diffs between the 2005m Olson time zone files
(which is what we were using at the time the tznames files were created)
and current.  There are several issues that we need to deal with,
I think.

The biggest issue is that all of Russia has apparently (1) abandoned
daylight-savings time changes, and (2) settled on new standard time
zones that match up with their old DST times!  For example we've got

*** Zone Europe/Moscow   2:30:20 -  LMT 1880
*** 1951,1967 
 2:00   -   EET 1930 Jun 21
 3:00   Russia  MSK/MSD 1991 Mar 31 2:00s
 2:00   Russia  EE%sT   1992 Jan 19 2:00s
!3:00   Russia  MSK/MSD
  #
  # From Oscar van Vlijmen (2001-08-25): [This region consists of]
  # Samarskaya oblast', Udmyrtskaya respublika
  Zone Europe/Samara 3:20:36 -  LMT 1919 Jul  1 2:00
!3:00   -   KUYT1930 Jun 21 # Kuybyshev
!4:00   Russia  KUY%sT  1989 Mar 26 2:00s
 3:00   Russia  KUY%sT  1991 Mar 31 2:00s
 2:00   Russia  KUY%sT  1991 Sep 29 2:00s
 3:00   -   KUYT1991 Oct 20 3:00
!4:00   Russia  SAM%sT  # Samara Time
  #
  # From Oscar van Vlijmen (2001-08-25): [This region consists of]
  # Respublika Bashkortostan, Komi-Permyatskij avtonomnyj okrug,
--- 2120,2155 
 2:00   -   EET 1930 Jun 21
 3:00   Russia  MSK/MSD 1991 Mar 31 2:00s
 2:00   Russia  EE%sT   1992 Jan 19 2:00s
!3:00   Russia  MSK/MSD 2011 Mar 27 2:00s
!4:00   -   MSK
! #
! # Astrakhanskaya oblast', Kirovskaya oblast', Saratovskaya oblast',
! # Volgogradskaya oblast'.  Shanks & Pottenger say Kirov is still at +0400
! # but Wikipedia (2006-05-09) says +0300.  Perhaps it switched after the
! # others?  But we have no data.
! Zone Europe/Volgograd  2:57:40 -  LMT 1920 Jan  3
!3:00   -   TSAT1925 Apr  6 # Tsaritsyn Time
!3:00   -   STAT1930 Jun 21 # Stalingrad Time
!4:00   -   STAT1961 Nov 11
!4:00   Russia  VOL%sT  1989 Mar 26 2:00s # Volgograd T
!3:00   Russia  VOL%sT  1991 Mar 31 2:00s
!4:00   -   VOLT1992 Mar 29 2:00s
!3:00   Russia  VOL%sT  2011 Mar 27 2:00s
!4:00   -   VOLT
  #
  # From Oscar van Vlijmen (2001-08-25): [This region consists of]
  # Samarskaya oblast', Udmyrtskaya respublika
  Zone Europe/Samara 3:20:36 -  LMT 1919 Jul  1 2:00
!3:00   -   SAMT1930 Jun 21
!4:00   -   SAMT1935 Jan 27
!4:00   Russia  KUY%sT  1989 Mar 26 2:00s # Kuybyshev
 3:00   Russia  KUY%sT  1991 Mar 31 2:00s
 2:00   Russia  KUY%sT  1991 Sep 29 2:00s
 3:00   -   KUYT1991 Oct 20 3:00
!4:00   Russia  SAM%sT  2010 Mar 28 2:00s # Samara Time
!3:00   Russia  SAM%sT  2011 Mar 27 2:00s
!4:00   -   SAMT
! 
  #
  # From Oscar van Vlijmen (2001-08-25): [This region consists of]
  # Respublika Bashkortostan, Komi-Permyatskij avtonomnyj okrug,

Thus for example MSK apparently now means GMT+4 not GMT+3.  We can
change the tznames entry for that, but should we get rid of MSD
entirely?  Some input from the Russians on this list would be helpful.
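[Editorial note: for concreteness, if MSK is kept and updated to the new reality, the Default entry would presumably change from UTC+3 to UTC+4 — something like the sketch below (offsets in these files are in seconds, so UTC+4 is 14400); whether MSD should be kept or dropped is the open question above.]

```
MSK 14400    # Moscow Time
             # (Europe/Moscow)
```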

Lesser issues:

* The FET changes you noted, which seem to be related to the whole
Russian change.

* Mongolia seems to have moved an hour east too, but they kept summer
time:

*** 1268,1278 
  Zone  Asia/Choibalsan 7:38:00 -   LMT 1905 Aug
7:00-   ULAT1978
8:00-   ULAT1983 Apr
!   9:00Mongol  CHO%sT  # Choibalsan Time
  
  # Nepal
  # ZoneNAMEGMTOFF  RULES   FORMAT  [UNTIL]
--- 1783,1794 
  Zone  Asia/Choibalsan 7:38:00 -   LMT 1905 Aug
7:00-   ULAT1978
8:00-   ULAT1983 Apr
!   9:00Mongol  CHO%sT  2008 Mar 31 # Choibalsan Time
!   8:00Mongol  CHO%sT
  
  # Nepal
  # ZoneNAMEGMTOFF  RULES   FORMAT  [UNTIL]

* MAWT (Antarctica/Mawson) now means GMT+5 not GMT+6, and
Antarctica/Macquarie has adopted its very own zone name MIST.  It looks
from the Olson database as though all of the Australian Antarctic
stations have changed 

Re: [HACKERS] Regarding identifying a foreign scan

2012-10-06 Thread Tom Lane
Atri Sharma atri.j...@gmail.com writes:
 The issue I am trying to resolve is that if two scans are taking place
 on the same backend (due to the same query), then the server crashes.

That sounds like an FDW bug ... which FDW are we talking about?

 I think it is because I am not saving the state of the scan, so if
 multiple scans are running on the same backend, it causes the crash.

The FDW should certainly not assume that only one scan can happen at a
time.  I would not think this would be a problem as long as you're using
reasonable coding techniques, like keeping scan-local state in
scan-local storage and not globally.

 Any hints on how I can detect this condition please?

If you are imagining that you'd do something differently depending on
whether the current query contains multiple ForeignScan nodes for your
FDW, that's still doomed to lose, because of nested queries.  You really
need to manage storage in a way that doesn't make assumptions of this
sort.

regards, tom lane




Re: [HACKERS] 64-bit API for large object

2012-10-06 Thread Tatsuo Ishii
OK, committed with minor editing (fixed header comments in testlo64.c).
Thank you Kaigai-san for the review!
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

 As a committer, I have looked into the patch and it seems it's good to
 commit. However I want to make a small enhancement in the
 documentation part:
 
 1) The lo_open section needs to mention the new 64bit APIs. Also it
should include a description of lo_truncate, but this is not the 64bit
APIs author's fault since it should have been there when lo_truncate
was added.
 
 2) Add a mention that the 64bit APIs are only available in PostgreSQL 9.3
or later, and that if the API is requested against an older server
version it will fail.
 
 If there's no objection, I would like commit attached patches.
 --
 Tatsuo Ishii
 SRA OSS, Inc. Japan
 English: http://www.sraoss.co.jp/index_en.php
 Japanese: http://www.sraoss.co.jp
 
 Hi Anzai-san,
 
 The latest patch is fair enough for me, so let me hand over its review
 to the committers.
 
 Thanks,
 
 2012/10/1 Nozomi Anzai an...@sraoss.co.jp:
 Here is 64-bit API for large object version 3 patch.

 I checked this patch. It looks good, but here are still some points to be
 discussed.

 * I have a question. What is the meaning of INT64_IS_BUSTED?
   It seems to me a marker to indicate a platform without 64bit support.
   However, the commit 901be0fad4034c9cf8a3588fd6cf2ece82e4b8ce
   says as follows:
   | Remove all the special-case code for INT64_IS_BUSTED, per decision that
   | we're not going to support that anymore.

 Removed INT64_IS_BUSTED.


 * At inv_seek(), it seems to me it checks offset correctness in the
   wrong way, as follows:
 |  case SEEK_SET:
 |  if (offset < 0)
 |  elog(ERROR, "invalid seek offset: " INT64_FORMAT, offset);
 |  obj_desc->offset = offset;
 |  break;
   That would be a right assumption if the large object size were
   restricted to 2GB. But the largest positive int64 is larger than the
   expected limitation, so it seems to me it should be compared with
   (INT_MAX * PAGE_SIZE) instead.

 Fixed.


 * At inv_write(), it definitely needs a check to prevent data writes
   beyond 4TB. In case obj_desc->offset is a bit below 4TB, an
   additional 1GB write will break the head of the large object because
   of pageno overflow.

 Added such a check.


 * Please also add checks to inv_read() to prevent LargeObjectDesc->offset
   from unexpectedly overflowing the 4TB boundary.

 Added such a check.


 * At inv_truncate(), variable off is redefined to int64. Is that change
   really needed? All its usage is to store the result of len % LOBLKSIZE.

 Fixed and back to int32.


 Thanks,

 2012/9/24 Nozomi Anzai an...@sraoss.co.jp:
  Here is 64-bit API for large object version 2 patch.
 
  I checked this patch. It can be applied onto the latest master branch
  without any problems. My comments are below.
 
  2012/9/11 Tatsuo Ishii is...@postgresql.org:
   Ok, here is the patch to implement 64-bit API for large object, to
   allow to use up to 4TB large objects(or 16TB if BLCKSZ changed to
   32KB). The patch is based on Jeremy Drake's patch posted on September
   23, 2005
   (http://archives.postgresql.org/pgsql-hackers/2005-09/msg01026.php)
   and reasonably updated/edited to adopt PostgreSQL 9.3 by Nozomi Anzai
   for the backend part and Yugo Nagata for the rest(including
   documentation patch).
  
   Here are changes made in the patch:
  
   1) Frontend lo_* libpq functions(fe-lobj.c)(Yugo Nagata)
  
   lo_initialize() gathers backend 64-bit large object handling
   function's oid, namely lo_lseek64, lo_tell64, lo_truncate64.
  
    If a client calls lo_*64 functions and the backend does not support
    them, the lo_*64 functions return an error to the caller. There might
    be an argument that calls to lo_*64 functions could automatically be
    redirected to the older 32-bit API. I don't know if this is worth the
    trouble, though.
  
  I think it should definitely return an error code when a user tries to
  use lo_*64 functions against a v9.2 or older backend, because
  falling back to the 32-bit API can raise unexpected errors if the
  application intends to seek beyond 2GB.
 
   Currently lo_initialize() throws an error if one of oids are not
   available. I doubt we do the same way for 64-bit functions since this
   will make 9.3 libpq unable to access large objects stored in pre-9.2
   PostgreSQL servers.
  
  It seems to me the way to split the pre-9.3 and 9.3-or-later cases is
  to use a condition on conn->sversion >= 90300.
 
  Fixed so, and tested it by deleting lo_tell64's row from pg_proc.
 
 
   To pass 64-bit integer to PQfn, PQArgBlock is used like this: int *ptr
   is a pointer to 64-bit integer and actual data is placed somewhere
   else. There might be other way: add new member to union u to store
   64-bit integer:
  
   typedef struct
   {
   int len;
   int   

[HACKERS] Re: [COMMITTERS] pgsql: Bump up catalog version due to 64-bit large object API functions

2012-10-06 Thread Tatsuo Ishii
Developers,

Please do re-initdb...

 Bump up catalog version due to 64-bit large object API functions
 addition.
 
 Branch
 --
 master
 
 Details
 ---
 http://git.postgresql.org/pg/commitdiff/b51a65f5bf4b68bc91d33d88108837d13d29df98
 
 Modified Files
 --
 src/include/catalog/catversion.h |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)




Re: [HACKERS] 64-bit API for large object

2012-10-06 Thread Amit kapila

On Sunday, October 07, 2012 5:42 AM Tatsuo Ishii wrote:

OK, committed with minor editing (fixed header comments in testlo64.c).
Thank you Kaigai-san for the review!

Hello Tatsuo Ishii San,

Today when I tried to build the latest code on my Windows machine, I got a few
errors from the check-in of this patch.

lo_hton64 (due to uint32_t):
 .\src\interfaces\libpq\fe-lobj.c(1049) : error C2065: 'uint32_t' : undeclared identifier
inv_seek (due to MAX_LARGE_OBJECT_SIZE):
 \src\backend\storage\large_object\inv_api.c(389) : error C2065: 'LOBLKSIZELL' : undeclared identifier
inv_read (due to MAX_LARGE_OBJECT_SIZE):
 \src\backend\storage\large_object\inv_api.c(441) : error C2065: 'LOBLKSIZELL' : undeclared identifier

It may be a settings problem on my machine if it builds okay on other Windows
machines.


With Regards,
Amit Kapila. 
