[HACKERS] Problem with Bitmap Heap Scan

2008-11-19 Thread Rushabh Lathia
A simple SELECT gives a wrong result when it uses the Bitmap Heap Scan path.


postgres=# CREATE OR REPLACE FUNCTION my_exec_im_test_func(i integer)
RETURNS integer AS $$
BEGIN
RETURN i + 1;
END;
$$ LANGUAGE plpgsql;
CREATE FUNCTION

postgres=# set enable_seqscan=off;
SET
postgres=# set enable_indexscan=off;
SET
postgres=# select proname from pg_proc where proname like 'my_pro1';
       proname
----------------------
 my_exec_im_test_proc
(1 row)

postgres=# explain select proname from pg_proc where proname like 'my_pro1';
                                         QUERY PLAN
------------------------------------------------------------------------------------------
 Bitmap Heap Scan on pg_proc  (cost=4.26..8.27 rows=1 width=64)
   Recheck Cond: (proname ~~ 'my_pro1'::text)
   ->  Bitmap Index Scan on pg_proc_proname_args_nsp_index  (cost=0.00..4.26 rows=1 width=0)
         Index Cond: ((proname >= 'my'::name) AND (proname < 'mz'::name))
(4 rows)




-- 
Rushabh Lathia

www.EnterpriseDB.com


Re: [HACKERS] Problem with Bitmap Heap Scan

2008-11-19 Thread Rushabh Lathia
Analysis:


While debugging the bitmap heap scan (the BitmapHeapNext function) I found
that we first perform the underlying index scan and then iterate over the
bitmap. While iterating, we execute ExecQual only if tbmres->recheck is
true, and for this query tbmres->recheck is false.

But from the query it seems that we should execute ExecQual, since we have
a bitmapqual on the BitmapHeap node (not quite sure). And when I manually
set recheck = true, the query worked fine, as expected.


Regards.
Rushabh
www.EnterpriseDB.com

On Wed, Nov 19, 2008 at 4:26 PM, Rushabh Lathia [EMAIL PROTECTED] wrote:

 A simple SELECT gives a wrong result when it uses the Bitmap Heap Scan path.


 postgres=# CREATE OR REPLACE FUNCTION my_exec_im_test_func(i integer)
 RETURNS integer AS $$
 BEGIN
 RETURN i + 1;
 END;
 $$ LANGUAGE plpgsql;
 CREATE FUNCTION

 postgres=# set enable_seqscan=off;
 SET
 postgres=# set enable_indexscan=off;
 SET
 postgres=# select proname from pg_proc where proname like 'my_pro1';
        proname
 ----------------------
  my_exec_im_test_proc
 (1 row)

 postgres=# explain select proname from pg_proc where proname like
 'my_pro1';
                                          QUERY PLAN
 ------------------------------------------------------------------------------------------
  Bitmap Heap Scan on pg_proc  (cost=4.26..8.27 rows=1 width=64)
    Recheck Cond: (proname ~~ 'my_pro1'::text)
    ->  Bitmap Index Scan on pg_proc_proname_args_nsp_index  (cost=0.00..4.26 rows=1 width=0)
          Index Cond: ((proname >= 'my'::name) AND (proname < 'mz'::name))
 (4 rows)




 --
 Rushabh Lathia

 www.EnterpriseDB.com




-- 
Rushabh Lathia


[HACKERS] Server Crash into contrib module ISN into 64bit OS

2008-11-28 Thread Rushabh Lathia
The following test ends up with a server crash on 8.4 CVS head.

uname -a
Linux localhost.localdomain 2.6.18-53.el5 #1 SMP Wed Oct 10 16:34:19 EDT
2007 x86_64 x86_64 x86_64 GNU/Linux

Testcase with ISN contrib module:
=

CREATE OR REPLACE function isbn_issn_proc() returns void as
$$
declare
v1 isbn;
BEGIN
v1 := isbn_in('0-596-00270-x');
END;
$$ LANGUAGE plpgsql;

select isbn_issn_proc();

Analysis:
===

Found that we get the crash while doing the memcpy in datumCopy().

Datum
datumCopy(Datum value, bool typByVal, int typLen)
{
...
   if (DatumGetPointer(value) == NULL)
   return PointerGetDatum(NULL);

   realSize = datumGetSize(value, typByVal, typLen);

   s = (char *) palloc(realSize);
   memcpy(s, DatumGetPointer(value), realSize);  /* crash */
}

Actually we get the crash while doing DatumGetPointer(); upon further
investigation I found that the isbn_in() function uses
PG_RETURN_EAN13(), which seems to return a wrong address on a
64-bit OS.

I was wondering why this happens in PG 8.4; then I found that the current
version has USE_FLOAT8_BYVAL, because of which we do not get the
correct/expected Datum representation of the int64.

postgres.h

#ifdef USE_FLOAT8_BYVAL
#define Int64GetDatum(X) ((Datum) SET_8_BYTES(X))
#else
extern Datum Int64GetDatum(int64 X);
#endif

When I tried the same case with the --disable-float8-byval option, the test
ran as expected.



Regards,
Rushabh Lathia

www.EnterpriseDB.com


Re: [HACKERS] Server Crash into contrib module ISN into 64bit OS

2008-11-30 Thread Rushabh Lathia
I think we need to create the ISBN type (contrib/isn/isn.sql.in) with the
PASSEDBYVALUE flag when USE_FLOAT8_BYVAL is set.

-Regards,
Rushabh



On Fri, Nov 28, 2008 at 10:29 PM, Tom Lane [EMAIL PROTECTED] wrote:

 Rushabh Lathia [EMAIL PROTECTED] writes:
  Following test end up with the server crash into 8.4 cvs Head.

 Hmm, this'd have been noticed sooner if contrib/isn had even
 minimal regression tests :-(  Anyone feel like writing some?

regards, tom lane




-- 
Rushabh Lathia


[HACKERS] use of dblink_build_sql_insert() induces a server crash

2010-02-03 Thread Rushabh Lathia
Hi All,

Testcase:

create table foo (a int);
postgres=# SELECT dblink_build_sql_insert('foo','1 2',2,'{\"0\",
\"a\"}','{\"99\", \"xyz\"}');
HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

Version: Latest

Description:  The dblink_build_sql_insert()/get_tuple_of_interest() functions
do not take care of the number of attributes in the target relation.

PFA patch to fix the same.

Thanks,
Rushabh Lathia
(www.EnterpriseDB.com)
diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c
index 276c7e1..a067309 100644
--- a/contrib/dblink/dblink.c
+++ b/contrib/dblink/dblink.c
@@ -2083,6 +2083,11 @@ get_tuple_of_interest(Oid relid, int2vector *pkattnums, int16 pknumatts, char **
 		/* internal error */
 		elog(ERROR, "SPI connect failure - returned %d", ret);
 
+	if (pknumatts > tupdesc->natts)
+		ereport(ERROR,
+				(errcode(ERRCODE_SYNTAX_ERROR),
+				 errmsg("statement has more expressions than specified relation")));
+
 	/*
 	 * Build sql statement to look up tuple of interest Use src_pkattvals as
 	 * the criteria.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Why NESTED LOOP Not Allowed for FULL and RIGHT Join.

2007-08-20 Thread Rushabh Lathia
Hi,

can anyone let me know why the Nested Loop path is not allowed if the
jointype is JOIN_RIGHT or JOIN_FULL?


In match_unsorted_outer(), we have a case where nestjoinOK = false if the
jointype is JOIN_RIGHT or JOIN_FULL.

match_unsorted_outer()
{

case JOIN_RIGHT:
case JOIN_FULL:
nestjoinOK = false;

..
}

wondering why ?

Regards,
Rushabh Lathia

[EMAIL PROTECTED]


[HACKERS] Function with default value not replacing old definition of the function

2008-12-10 Thread Rushabh Lathia
Hi,

Testcase: (8.4 CVS head)


CREATE OR REPLACE FUNCTION myfunc(y int)
RETURNS INTEGER AS $$
   select  100;
$$ language sql;

CREATE OR REPLACE FUNCTION myfunc(y int, x integer DEFAULT 100)
RETURNS INTEGER AS $$
   select  200;
$$ language sql;

select myfunc(10);

 myfunc
--
  100
(1 row)

When the same function is created again with one default value added, the
old function gets called when calling it with one argument.

It seems that the function with the defval does not make sense: if we want
to call the new function then we need to pass the defval argument as well.

select myfunc(10,10);

 myfunc
--
  200
(1 row)

I think the second function should replace the old definition of the
function. Inputs?


Thanks,
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Function with default value not replacing old definition of the function

2008-12-10 Thread Rushabh Lathia
On Thu, Dec 11, 2008 at 12:33 PM, Peter Eisentraut [EMAIL PROTECTED] wrote:

 Rushabh Lathia wrote:

 Hi,

 Testcase: (8.4 CVS head)
 

 CREATE OR REPLACE FUNCTION myfunc(y int)
 RETURNS INTEGER AS $$
   select  100;
 $$ language sql;

 CREATE OR REPLACE FUNCTION myfunc(y int, x integer DEFAULT 100)
 RETURNS INTEGER AS $$
   select  200;
 $$ language sql;

 select myfunc(10);

  myfunc
 --
  100
 (1 row)

 When the same function is created again with one default value added, the
 old function gets called when calling it with one argument.

 It seems that the function with the defval does not make sense: if we want
 to call the new function then we need to pass the defval argument as well.


 Hmm, good point, but I'm not sure that replacing the old function is always
 right.  For example, someone recently requested being able to say

 select myfunc(10, DEFAULT);


Hmm, good point.



 so there would be some value to having both variants.

 Do you have any comparisons with other systems (Oracle?) or other
 programming languages?


Yes, Oracle replaces the old definition of the function with the new one.





-- 
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Function with default value not replacing old definition of the function

2008-12-10 Thread Rushabh Lathia
On Thu, Dec 11, 2008 at 12:40 PM, Pavel Stehule [EMAIL PROTECTED] wrote:

 Hello

 2008/12/11 Rushabh Lathia [EMAIL PROTECTED]:
  Hi,
 
  Testcase: (8.4 CVS head)
  
 
  CREATE OR REPLACE FUNCTION myfunc(y int)
  RETURNS INTEGER AS $$
 select  100;
  $$ language sql;
 
  CREATE OR REPLACE FUNCTION myfunc(y int, x integer DEFAULT 100)
  RETURNS INTEGER AS $$
 select  200;
  $$ language sql;
 
  select myfunc(10);
 
   myfunc
  --
100
  (1 row)

 no, it's a little bit different

 Default is only a stored parameter value. You created two functions with
 two different signatures

 myfunc(int)
 myfunc(int, int)

 when you created the function, we could not check defaults, because we
 don't know if anybody uses the default or not. And when you call the
 function, postgres prefers the function with the most similar signature.


Ok, but what if I want to call a second function with the default values.
How can I call that function with default values?



 regards
 Pavel Stehule

 
  When create the same function again by added one default value, while
  calling the function old function getting called.
 
  It seems that, function with defval not making any sense, if we want to
 call
  the new function then we need to pass defval as well.
 
  select myfunc(10,10);
 
   myfunc
  --
200
  (1 row)
 
  I think second function should replace the old definition of the
 function,
  inputs ?
 
 
  Thanks,
  Rushabh Lathia
  www.EnterpriseDB.com
 




-- 
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Function with default value not replacing old definition of the function

2008-12-10 Thread Rushabh Lathia
On Thu, Dec 11, 2008 at 12:59 PM, Pavel Stehule [EMAIL PROTECTED] wrote:

 2008/12/11 Rushabh Lathia [EMAIL PROTECTED]:
 
 
  On Thu, Dec 11, 2008 at 12:40 PM, Pavel Stehule [EMAIL PROTECTED]
 
  wrote:
 
  Hello
 
 
  when you created function, we cannot check defaults, because we don't
  know if anybody use default or not. And when you call function, then
  postgres prefer function with most similar function.
 
  Ok, but what if I want to call a second function with the default values.
  How can I call that function with default values?
 

 it isn't possible yet (without DEFAULT keyword support).


Ohh Ok.

Thanks



 you have to drop myfunc(int) first.

 regards
 Pavel Stehule

 
 
  regards
  Pavel Stehule
 
  
   When create the same function again by added one default value, while
   calling the function old function getting called.
  
   It seems that, function with defval not making any sense, if we want
 to
   call
   the new function then we need to pass defval as well.
  
   select myfunc(10,10);
  
myfunc
   --
 200
   (1 row)
  
   I think second function should replace the old definition of the
   function,
   inputs ?
  
  
   Thanks,
   Rushabh Lathia
   www.EnterpriseDB.com
  
 
 
 
  --
  Rushabh Lathia
  www.EnterpriseDB.com
 




-- 
Rushabh Lathia


[HACKERS] Function with defval returns error

2008-12-15 Thread Rushabh Lathia
Hi All,

The following test returns an error on 8.4 CVS head. It looks like an issue.

Testcase: (8.4 CVS head)

   CREATE OR REPLACE FUNCTION f007( a INTEGER,
b INTEGER DEFAULT 10 ) RETURNS INTEGER
   AS $$
   select 10;
   $$ language sql;

   CREATE OR REPLACE FUNCTION f007( a INTEGER DEFAULT 10,
b INTEGER DEFAULT 10,
c INTEGER DEFAULT 10) RETURNS INTEGER
   AS $$
   select 10;
   $$ language sql;

   CREATE OR REPLACE FUNCTION f007( a TIMESTAMP DEFAULT to_date('01-JUN-06
14:03:50', 'DD-MON-YY HH24:MI:SS') ) RETURNS TIMESTAMP
   AS $$
   select current_date::timestamp;
   $$ language sql;

postgres=# SELECT f007( to_date('01-JUN-06 14:03:50', 'DD-MON-YY
HH24:MI:SS') );
ERROR:  functions with parameter defaults f007(integer, integer, integer)
and f007(integer, integer) are ambiguous


I think this should not return error as the input args here is timestamp...
inputs?

Thanks,
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Function with defval returns error

2008-12-15 Thread Rushabh Lathia
Another issue found on CVS head 

CREATE USER test WITH PASSWORD 'test';
CREATE SCHEMA AUTHORIZATION test;

CREATE OR REPLACE FUNCTION f_test(x in numeric) RETURNS numeric as $$
BEGIN
RETURN x;
END;
$$ language plpgsql;

select f_test(10);

\c postgres test;

select f_test(10);

CREATE OR REPLACE FUNCTION f_test(x in numeric, y in varchar default 'Local
Function with parameters') RETURNs numeric as $$
BEGIN
RETURN x+1;
END;
$$ language plpgsql;

postgres= select f_test(10);
ERROR:  cache lookup failed for type 2139062142




On Tue, Dec 16, 2008 at 2:07 AM, Peter Eisentraut pete...@gmx.net wrote:

 On Monday 15 December 2008 15:43:00 Tom Lane wrote:
  Peter Eisentraut pete...@gmx.net writes:
   Rushabh Lathia wrote:
   I think this should not return error as the input args here is
   timestamp... inputs?
  
   In theory yes, but it's currently not that smart.
 
  This is truly horrid.  Was that patch *really* ready to commit?
  I noticed some comments added to polymorphism.sql that certainly
  look like there's still a lot of half-bakedness in it.

 There is that one case where a call that could be allowed is
 overly-cautiously
 rejected.  That only happens if you have a mix of overloading and default
 parameters.  It's not really half-baked in the sense that it is not
 digestible; it's just not the greatest cake yet.  It's
 improvement-compatible.




-- 
Rushabh Lathia


Re: [HACKERS] Function with defval returns error

2008-12-15 Thread Rushabh Lathia
When we find (pathpos < prevResult->pathpos) in
FuncnameGetCandidates(), we just replace the prevResult with the
newResult.

While replacing the previous result with the new one we do not replace the
args. I think in the case of defaults we need to take care of the args as
well.

Thanks,
Rushabh

On Tue, Dec 16, 2008 at 12:26 PM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 I'll write patch that block creating all ambiguous overloading.

 Regards
 Pavel Stehule

 2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:
 
  Another issue found on CVS head 
 
  CREATE USER test WITH PASSWORD 'test';
  CREATE SCHEMA AUTHORIZATION test;
 
  CREATE OR REPLACE FUNCTION f_test(x in numeric) RETURNS numeric as $$
  BEGIN
  RETURN x;
  END;
  $$ language plpgsql;
 
  select f_test(10);
 
  \c postgres test;
 
  select f_test(10);
 
  CREATE OR REPLACE FUNCTION f_test(x in numeric, y in varchar default
 'Local
  Function with parameters') RETURNs numeric as $$
  BEGIN
  RETURN x+1;
  END;
  $$ language plpgsql;
 
  postgres= select f_test(10);
  ERROR:  cache lookup failed for type 2139062142
 
 
 
 
  On Tue, Dec 16, 2008 at 2:07 AM, Peter Eisentraut pete...@gmx.net
 wrote:
 
  On Monday 15 December 2008 15:43:00 Tom Lane wrote:
   Peter Eisentraut pete...@gmx.net writes:
Rushabh Lathia wrote:
I think this should not return error as the input args here is
timestamp... inputs?
   
In theory yes, but it's currently not that smart.
  
   This is truly horrid.  Was that patch *really* ready to commit?
   I noticed some comments added to polymorphism.sql that certainly
   look like there's still a lot of half-bakedness in it.
 
  There is that one case where a call that could be allowed is
  overly-cautiously
  rejected.  That only happens if you have a mix of overloading and
 default
  parameters.  It's not really half-baked in the sense that it is not
  digestible; it's just not the greatest cake yet.  It's
  improvement-compatible.
 
 
 
  --
  Rushabh Lathia
 




-- 
Rushabh Lathia


Re: [HACKERS] Function with defval returns error

2008-12-16 Thread Rushabh Lathia
On Tue, Dec 16, 2008 at 5:35 PM, Pavel Stehule pavel.steh...@gmail.com wrote:

 2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:
 
  When we find the (pathpos  prevResult-pathpos) into
  FuncnameGetCandidates(), we just replacing the prevResult with the
  newResult.
 
  While replacing the previous with new we do not replace the args. I think
 in
  case of default we need to take care for the args as well.
 

 personally I prefer raise exception, when I find similar function, we
 don't need emulate Oracle behave.


Raise an exception when a similar function is found: do you mean a similar
function with a different pathpos? Or a similar function with a defval?



 Regards
 Pavel Stehule

  Thanks,
  Rushabh
 
  On Tue, Dec 16, 2008 at 12:26 PM, Pavel Stehule pavel.steh...@gmail.com
 
  wrote:
 
  Hello
 
  I'll write patch that block creating all ambiguous overloading.
 
  Regards
  Pavel Stehule
 
  2008/12/16 Rushabh Lathia rushabh.lat...@gmail.com:
  
   Another issue found on CVS head 
  
   CREATE USER test WITH PASSWORD 'test';
   CREATE SCHEMA AUTHORIZATION test;
  
   CREATE OR REPLACE FUNCTION f_test(x in numeric) RETURNS numeric as $$
   BEGIN
   RETURN x;
   END;
   $$ language plpgsql;
  
   select f_test(10);
  
   \c postgres test;
  
   select f_test(10);
  
   CREATE OR REPLACE FUNCTION f_test(x in numeric, y in varchar default
   'Local
   Function with parameters') RETURNs numeric as $$
   BEGIN
   RETURN x+1;
   END;
   $$ language plpgsql;
  
   postgres= select f_test(10);
   ERROR:  cache lookup failed for type 2139062142
  
  
  
  
   On Tue, Dec 16, 2008 at 2:07 AM, Peter Eisentraut pete...@gmx.net
   wrote:
  
   On Monday 15 December 2008 15:43:00 Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
 Rushabh Lathia wrote:
 I think this should not return error as the input args here is
 timestamp... inputs?

 In theory yes, but it's currently not that smart.
   
This is truly horrid.  Was that patch *really* ready to commit?
I noticed some comments added to polymorphism.sql that certainly
look like there's still a lot of half-bakedness in it.
  
   There is that one case where a call that could be allowed is
   overly-cautiously
   rejected.  That only happens if you have a mix of overloading and
   default
   parameters.  It's not really half-baked in the sense that it is not
   digestible; it's just not the greatest cake yet.  It's
   improvement-compatible.
  
  
  
   --
   Rushabh Lathia
  
 
 
 
  --
  Rushabh Lathia
 




-- 
Rushabh Lathia


Re: [HACKERS] INSERT..SELECT with GENERATE_SERIES returns error.

2008-12-18 Thread Rushabh Lathia
On Thu, Dec 18, 2008 at 5:14 PM, Anupama Aherrao 
anupama.aher...@enterprisedb.com wrote:

 Hi All,

 Following INSERT..SELECT with  GENERATE_SERIES for bulk insertion returns
 error on 8.4 cvs head.  It looks like an issue.

 Tested on : *8.4 CVS Head*

 CREATE TABLE t1 ( x int, y char(4));
 INSERT INTO  t1  VALUES ( 1, 'edb');
 INSERT INTO t1 SELECT 10  + GENERATE_SERIES(50,60), y FROM t1 WHERE
 y='edb';
 ERROR:  unrecognized table-function returnMode: 2822132


Debugged a bit and the problem seems to be in ExecMakeFunctionResult().

We prepare a resultinfo node only if the function returns a set; so
returnMode gets set only when the function returns a set.

   if (fcache-func.fn_retset)
   {
  .
   /* note we do not set SFRM_Materialize_Random or _Preferred */
   rsinfo.returnMode = SFRM_ValuePerCall;
  
   }

After calling the function we look at rsinfo.returnMode, which is not set
in this case, and we end up with the error.
It seems we are missing a check for hasSetArg.

I added the following condition and the query worked fine (not sure how
correct it is).
Inputs??


diff --git a/src/backend/executor/execQual.c b/src/backend/executor/execQual.c
index 8df00ed..a23e35e 100644
--- a/src/backend/executor/execQual.c
+++ b/src/backend/executor/execQual.c
@@ -1513,7 +1513,8 @@ restart:
 	}
 
 	/* Which protocol does function want to use? */
-	if (rsinfo.returnMode == SFRM_ValuePerCall)
+	if (rsinfo.returnMode == SFRM_ValuePerCall ||
+		(!fcache->func.fn_retset && hasSetArg))
 	{
 		if (*isDone != ExprEndResult)
 		{


Thanks,
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Function with defval returns error

2008-12-18 Thread Rushabh Lathia
On Thu, Dec 18, 2008 at 11:54 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Rushabh Lathia rushabh.lat...@gmail.com writes:
  Another issue found on CVS head 
  ...
  postgres= select f_test(10);
  ERROR:  cache lookup failed for type 2139062142

 I had some difficulty reproducing this locally.  Would you check it's
 fixed by latest commit?


I did a quick test and now it seems great. Thanks.

Regards,
Rushabh Lathia
www.Enterprisedb.com


[HACKERS] Function with defval returns wrong result

2009-01-05 Thread Rushabh Lathia
Hi All,

Following test returns wrong result ..

Testcase ( on 8.4 cvs head )
===

CREATE OR REPLACE FUNCTION f1(retval VARCHAR DEFAULT 'Argument') RETURNS
VARCHAR as
$$
BEGIN
return retval;
END;
$$ LANGUAGE plpgsql;


CREATE OR REPLACE FUNCTION f2(p1 IN int, p2 IN VARCHAR DEFAULT f1())
RETURNS VARCHAR AS
$$
BEGIN
RETURN p2;
END;
$$ LANGUAGE plpgsql;


postgres=# select f2(10);
 f2


(1 row)


When we run f2() it should return the defvalue of f1() as its output, but
it returns null.


Thanks,
Rushabh Lathia
www.EnterpriseDB.com


[HACKERS] Segmentation fault on PG 8.4 CVS head

2009-01-08 Thread Rushabh Lathia
Hi All,

While running a test with bind variables I am getting a segmentation fault
(CVS head 8.4).

For testcase, please find the crash.c (C test) and test.java ( JDBC test)
attached with the mail.

Had a quick look at the core dump and found the call stack for the
segmentation fault.

(gdb) bt
#0  0x0813768d in analyze_requires_snapshot (parseTree=0x0) at analyze.c:270

#1  0x082e77a8 in exec_bind_message (input_message=0xbfd7d73c) at
postgres.c:1698
#2  0x082ec524 in PostgresMain (argc=4, argv=0x916fc70, username=0x916fb7c
rushabh) at postgres.c:4882
#3  0x082ac10a in BackendRun (port=0x9191b18) at postmaster.c:3309
#4  0x082ab4d4 in BackendStartup (port=0x9191b18) at postmaster.c:2881
#5  0x082a8ae1 in ServerLoop () at postmaster.c:1291

Had a look at the previous version and found that we end up with the
segmentation fault because of the following condition added to
exec_bind_message() in the new PG merge:

exec_bind_message{
...
   /*
* Set a snapshot if we have parameters to fetch (since the input
* functions might need it) or the query isn't a utility command (and
* hence could require redoing parse analysis and planning).
*/
   if (numParams > 0 || analyze_requires_snapshot(psrc->raw_parse_tree))
   {
   PushActiveSnapshot(GetTransactionSnapshot());
   snapshot_set = true;
   }
...
}


The condition was added with the "Fix failure to ensure that a snapshot is
available to datatype input functions" commit (
http://git.postgresql.org/?p=postgresql.git;a=commitdiff;h=d5e7e5dd7c81440bb46f52872906633ee2b388c1
)

Not entirely sure, but for a quick check I just modified the condition by
adding a check for raw_parse_tree, and the test worked fine.

Modified condition:
   /*
* Set a snapshot if we have parameters to fetch (since the input
* functions might need it) or the query isn't a utility command (and
* hence could require redoing parse analysis and planning).
*/
   if (numParams > 0 ||
       (psrc->raw_parse_tree &&
        analyze_requires_snapshot(psrc->raw_parse_tree)))
   {
   PushActiveSnapshot(GetTransactionSnapshot());
   snapshot_set = true;
   }

Another fix would be to add a check for parseTree in
analyze_requires_snapshot().

Thanks ,
Rushabh Lathia
www.EnterpriseDB.com
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.*;

public class Test {

/**
 * @param args
 */
public static void main(String[] args) {
try {
Class.forName("org.postgresql.Driver");
Connection con = DriverManager.getConnection("jdbc:postgresql://localhost:5432/postgres?loglevel=2", "rushabh", "edb");
PreparedStatement st;
ResultSet rs;
st = con.prepareStatement("SELECT ?;--SELECT ?");
st.setString(1, "a");
st.execute();
st.getMoreResults();
st.getMoreResults();
st.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include "libpq-fe.h"

int
main(int argc, char **argv)
{
	const char *conninfo;
	PGconn	   *conn;
	PGresult   *res;

	/*
	 * If the user supplies a parameter on the command line, use it as the
	 * conninfo string; otherwise default to setting dbname=postgres and using
	 * environment variables or defaults for all other connection parameters.
	 */
	if (argc > 1)
		conninfo = argv[1];
	else
		conninfo = "dbname = postgres";

	/* Make a connection to the database */
	conn = PQconnectdb(conninfo);

	/* Check to see that the backend connection was successfully made */
	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "Connection to database failed: %s",
				PQerrorMessage(conn));
		return 1;
	}

	res = PQexecParams(conn,
					   "" /* nothing */,
					   0,		/* no params */
					   NULL,	/* let the backend deduce param type */
					   NULL,
					   NULL,	/* don't need param lengths since text */
					   NULL,	/* default to all text params */
					   0);


	if (PQresultStatus(res) != PGRES_TUPLES_OK)
	{
		fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
	}

	PQclear(res);

	/* close the connection to the database and cleanup */
	PQfinish(conn);

	return 0;
}



[HACKERS] Getting error while trying to insert date with the format 'dd-month-yyyy' , 'day-mm-yyyy' etc..

2009-06-10 Thread Rushabh Lathia
Getting an error while trying to insert a date with the formats
'dd-month-yyyy', 'day-mm-yyyy', etc. (formats which add spaces inside the
date).

Testcase:

postgres=# \d t
 Table public.t
 Column | Type | Modifiers
+--+---
 a  | date |

postgres=# insert into t values ( to_char(current_date+2,
'day-mm-yyyy')::date);
ERROR:  invalid input syntax for type date: "friday   -06-2009"

postgres=# insert into t values ( to_char(current_date+2,
'dd-month-yyyy')::date);
ERROR:  invalid input syntax for type date: "12-june    -2009"


Debugged the issue and found that the error comes from date_in() ->
DecodeDateTime(). The problem is that whenever any space appears in the
date, ParseDateTime() is unable to break the string into tokens based on a
date/time context.


-- 
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Getting error while trying to insert date with the format 'dd-month-yyyy' , 'day-mm-yyyy' etc..

2009-06-10 Thread Rushabh Lathia
On Thu, Jun 11, 2009 at 12:02 AM, Peter Eisentraut pete...@gmx.net wrote:

 On Wednesday 10 June 2009 17:10:42 Tom Lane wrote:
  Rushabh Lathia rushabh.lat...@gmail.com writes:
   Getting error while trying to insert date with the format
 'dd-month-'
   , 'day-mm-' (format which add the space in between the date ) etc..
 
  1. Why are you bothering with the completely pointless to_char call at
 all?

 Random guess for entertainment: Oracle applications do this sort of thing
 all
 the time.


I thought that when to_char provides a given format for datetime, the
standard datetime input converters should also have the capability to read
that format.



  2. It is not a bug that to_char is capable of emitting formats that
  won't be read by the standard datetime input converter.  If you insist
  on making a useless conversion to char and back, it's on your head to
  choose a format setting that will work.

 Of course they then also use to_date all the time.


Yes, we can always use to_date.

-- 
Rushabh Lathia


[HACKERS] Primary Key Constraint on inheritance table not getting route to child tables

2012-08-20 Thread Rushabh Lathia
Hi,

ALTER TABLE ADD CONSTRAINT PRIMARY KEY on an inheritance parent table does
not get routed to the child tables.

But when we do ALTER TABLE DROP CONSTRAINT on the same table, it complains
that the constraint does not exist on the child table.

Consider the following example

psql=# CREATE TABLE measurement (
psql(# city_id int not null,
psql(# logdate date not null,
psql(# peaktemp    int,
psql(# unitsales   int
psql(# );
CREATE TABLE
psql=# CREATE TABLE measurement_y2006m02 (
psql(# CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE
'2006-03-01' )
psql(# ) INHERITS (measurement);
CREATE TABLE
psql=# CREATE TABLE measurement_y2006m03 (
psql(# CHECK ( logdate >= DATE '2006-03-01' AND logdate < DATE
'2006-04-01' )
psql(# ) INHERITS (measurement);
CREATE TABLE
psql=#
psql=#
psql=# ALTER TABLE measurement
ADD CONSTRAINT con1 PRIMARY KEY (city_id);
ALTER TABLE
psql=#
psql=#

-- Don't have primary key on child table
psql=# desc measurement_y2006m02
 Table public.measurement_y2006m02
  Column   |Type | Modifiers
---+-+---
 city_id   | integer | not null
 logdate   | timestamp without time zone | not null
 peaktemp  | integer |
 unitsales | integer |
Check constraints:
    "measurement_y2006m02_logdate_check" CHECK (logdate >= '01-FEB-06
00:00:00'::timestamp without time zone AND logdate < '01-MAR-06
00:00:00'::timestamp without time zone)
Inherits: measurement

-- Primary key on parent table
psql=# desc measurement
 Table public.measurement
  Column   |Type | Modifiers
---+-+---
 city_id   | integer | not null
 logdate   | timestamp without time zone | not null
 peaktemp  | integer |
 unitsales | integer |
Indexes:
    "con1" PRIMARY KEY, btree (city_id)
Number of child tables: 2 (Use \d+ to list them.)

psql=# ALTER TABLE measurement
DROP CONSTRAINT con1;
ERROR:  constraint "con1" of relation "measurement_y2006m02" does not exist

I am not sure whether the PRIMARY KEY not getting routed is expected
behavior or not, but if it is expected behavior then obviously DROP
CONSTRAINT should not complain that the constraint doesn't exist on the
child table.

Inputs/Comments ?

Thanks,
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Primary Key Constraint on inheritance table not getting route to child tables

2012-08-20 Thread Rushabh Lathia
On Mon, Aug 20, 2012 at 9:28 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Rushabh Lathia rushabh.lat...@gmail.com writes:
  ALTER TABLE ADD Constraints PRIMARY KEY on inheritance table not getting
  route to child table.

 Right.

  But when we do ALTER TABLE DROP Constraint on the same, it complains
 about
  constraint does not exists on child table.

 Works for me in HEAD.  What version are you testing?  This seems related
 to some recent bug fixes ...


Oh ok.

Sorry for wrong noise, I was checking this on old version.

Thanks,


 regards, tom lane




-- 
--

Rushabh Lathia
Technical Architect
EnterpriseDB Corporation
The Enterprise Postgres Company

Phone: +91-20-30589494

Website: http://www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb


Re: [HACKERS] assertion failure w/extended query protocol

2012-10-21 Thread Rushabh Lathia
Sorry, it might be late here, but I just wanted to share the patch I came
up with. Actually, when Robert said he had reported the issue to
pgsql-hackers, I thought he might have also submitted a patch.

PFA the patch I came up with, but it seems a fix has already been
committed.

Thanks,


On Sat, Oct 20, 2012 at 9:07 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Andres Freund and...@2ndquadrant.com writes:
  On Saturday, October 20, 2012 12:05:15 AM Tom Lane wrote:
  (such as the current query showing up in pg_cursors --- maybe we should
  prevent that?)

  I don't really see an argument for preventing that.

 Well, the reason it seems peculiar to me is that the current query is in
 no way a cursor --- it's just a SELECT in the cases that showed
 regression test differences.  I didn't go looking in the code yet, but
 I suspect the pg_cursors view is displaying all Portals.  Arguably, it
 should only display those that were created by actual cursor commands.

 regards, tom lane


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




-- 
Rushabh Lathia


assertion_pg.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Server crash while trying to fetch EXPLAIN query results with a cursor

2012-06-26 Thread Rushabh Lathia
Hi All,

Consider the following testcase:

postgres=# select version();

version
-
 PostgreSQL 9.3devel on i686-pc-linux-gnu, compiled by gcc (Ubuntu
4.4.3-4ubuntu5) 4.4.3, 32-bit
(1 row)

postgres=# do $$
declare
 t cursor for explain SELECT n into tbl from generate_series(1,10) n;
begin
 open t;
end
$$;
The connection to the server was lost. Attempting reset: Failed.

The above testcase ends up with a server crash. The crash is due to the
following assert in ScanQueryForLocks():

Assert(parsetree->commandType != CMD_UTILITY);

The testcase works fine with earlier versions.

Description :
==

Here is the stack for the crash:

#0  ScanQueryForLocks (parsetree=0xa304ea4, acquire=1 '\001') at
plancache.c:1324
#1  0x08447e81 in AcquirePlannerLocks (stmt_list=0xa304dcc, acquire=1
'\001') at plancache.c:1306
#2  0x08446c34 in RevalidateCachedQuery (plansource=0xa3041bc) at
plancache.c:460
#3  0x0844743a in GetCachedPlan (plansource=0xa3041bc, boundParams=0x0,
useResOwner=0 '\000') at plancache.c:936
#4  0x0827047a in SPI_cursor_open_internal (name=0xa2fc124 t,
plan=0xa3059dc, paramLI=0x0, read_only=0 '\000') at spi.c:1187
#5  0x0827024e in SPI_cursor_open_with_paramlist (name=0xa2fc124 t,
plan=0xa3059dc, params=0x0, read_only=0 '\000') at spi.c:1112
#6  0x002c59ec in exec_stmt_open (estate=0xbfc3c4d0, stmt=0xa2fd4c0) at
pl_exec.c:3534


Here query->commandType is CMD_UTILITY, and its utilityStmt contains a
CreateTableAsStmt. Looking further, I found that for any SELECT .. INTO
.. clause we create a CreateTableAsStmt to remove the INTO clause from the
SELECT statement.

In the stack above, before calling ScanQueryForLocks(),
AcquirePlannerLocks() checks for CMD_UTILITY and fetches the Query
from the utility statement. To get the Query from a utility command it calls
UtilityContainsQuery().

In UtilityContainsQuery() we need to handle ExplainStmt properly for
SELECT INTO so that it returns the proper contained Query.

PFA patch for the same.

Thanks,


-- 
Rushabh Lathia
www.EnterpriseDB.com
diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c
index 8b73858..9286ce5 100644
--- a/src/backend/tcop/utility.c
+++ b/src/backend/tcop/utility.c
@@ -1362,9 +1362,20 @@ UtilityContainsQuery(Node *parsetree)
 	switch (nodeTag(parsetree))
 	{
 		case T_ExplainStmt:
-			Assert(IsA(((ExplainStmt *) parsetree)->query, Query));
-			return (Query *) ((ExplainStmt *) parsetree)->query;
-
+			{
+				Query *qry;
+				Assert(IsA(((ExplainStmt *) parsetree)->query, Query));
+				qry = (Query *) ((ExplainStmt *) parsetree)->query;
+				/*
+				 * Any SELECT..INTO clause gets transformed to a CreateTableAsStmt,
+				 * and that's a CMD_UTILITY stmt. So if CMD_UTILITY is found for an
+				 * ExplainStmt's Query, then call UtilityContainsQuery() again
+				 * to fetch the proper contained query.
+				 */
+				if (qry->commandType == CMD_UTILITY)
+					qry = UtilityContainsQuery(qry->utilityStmt);
+				return qry;
+			}
 		case T_CreateTableAsStmt:
 			/* might or might not contain a Query ... */
 			if (IsA(((CreateTableAsStmt *) parsetree)->query, Query))

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Windows env returns error while running select pgstatindex

2011-08-24 Thread Rushabh Lathia
Description:
===

The error message "invalid input syntax for type double precision: -1#I" is
displayed while running select pgstatindex().

The issue only reproduces on a Windows environment.

Analysis:
=

Consider the following testcase to reproduce the issue on windows:

create table test (a  int primary key );

Windows Output:
==

psql> select pgstatindex('public.test_pkey');
ERROR:  invalid input syntax for type double precision: -1.#J

Linux output:
==

psql=# select pgstatindex('public.test_pkey');
pgstatindex
---
 (2,0,0,0,0,0,0,0,NaN,NaN)
(1 row)

Here, when we run the select on Linux it returns the proper result, while on
Windows the error comes from float8in() when it tries to handle the NaN
values.

After debugging I noticed that 0/0 returns NaN on Linux but it returns
-1.#JIND on Windows. Now when float8in() gets called for such a value
on Windows, it ends up with the error "invalid input syntax for type double
precision", as strtod() is not able to understand such values.

I added a check into pgstatindex() to avoid the 0/0 situation, and the issue
got fixed.

PFA patch for the same.


Thanks,
Rushabh Lathia

EnterpriseDB Corporation
The Enterprise Postgres Company


Website: http://www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb


win_pgstat_fix.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] ERROR: invalid input syntax for type timestamp with time zone

2013-03-11 Thread Rushabh Lathia
Hi All,

Consider the following test:

postgres=# select version();

version

-
 PostgreSQL 9.3devel on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit
(1 row)

postgres=# create table test ( a timestamptz);
CREATE TABLE
postgres=# insert into test values ( 'Sat Mar 11 23:58:48 1000 IST');
INSERT 0 1
postgres=# select * from test;
  a
--
 1000-03-12 03:52:16+05:53:28
(1 row)

postgres=# insert into test values ( 'Sat Mar 11 23:58:48 1 IST');
ERROR:  invalid input syntax for type timestamp with time zone: Sat Mar 11
23:58:48 1 IST
LINE 1: insert into test values ( 'Sat Mar 11 23:58:48 1 IST');

So, while trying to insert the date 'Sat Mar 11 23:58:48 *1* IST',
it returns with invalid input syntax. The error comes from the stack
(timestamptz_in() -> DecodeDateTime()).

But when I tried to enter the same date in another format, it does work.

postgres=# insert into test values ( '*1*-03-11 23:58:48');
INSERT 0 1
postgres=# select * from test;
  a
---
Sat Mar 11 23:58:48 *1* IST
(1 row)

Looking at the code, it seems that for the Postgres,MDY datestyle, DecodeDateTime()
doesn't handle the date properly if the year is longer than 4 digits (greater
than 9999). Do you see this as a bug or as expected output?


Regards,
Rushabh Lathia
www.EnterpriseDB.com


[HACKERS] elog() error, trying CURENT OF with foreign table

2013-04-19 Thread Rushabh Lathia
While trying out CURRENT OF with a foreign table, I end up with an error.

postgres=# select version();
 version

-
 PostgreSQL 9.3devel on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit
(1 row)

-- Create extension and database
postgres=# CREATE EXTENSION postgres_fdw;
CREATE EXTENSION
postgres=# create database regression;
CREATE DATABASE

-- Create foreign server
postgres=# CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw
postgres-#   OPTIONS (dbname 'regression');
CREATE SERVER
postgres=# CREATE USER MAPPING FOR CURRENT_USER SERVER loopback;
CREATE USER MAPPING

-- Create table into remote server
postgres=# \c regression
You are now connected to database regression as user rushabh.
regression=# create table test ( a int );
CREATE TABLE
regression=# insert into test values ( 1 );
INSERT 0 1

-- Connect to local server and create test function
regression=# \c postgres
-- Create foreign table
postgres=# create foreign table test ( a int ) server loopback;
CREATE FOREIGN TABLE
postgres=# CREATE OR REPLACE FUNCTION taest_func_dblink2()
postgres-# RETURNS numeric
postgres-# AS $$
postgres$# DECLARE c CURSOR FOR SELECT a FROM test FOR UPDATE;
postgres$# v_i numeric;
postgres$# BEGIN
postgres$# OPEN c;
postgres$# FETCH c INTO v_i;
postgres$# UPDATE test SET a=50 WHERE CURRENT OF c;
postgres$# RETURN 0;
postgres$# END; $$ LANGUAGE plpgsql;
CREATE FUNCTION

postgres=# select taest_func_dblink2();
ERROR:  CURRENT OF cannot be executed
CONTEXT:  SQL statement UPDATE test SET a=50 WHERE CURRENT OF c
PL/pgSQL function taest_func_dblink2() line 7 at SQL statement

Here the test ends up hitting the following:

elog(ERROR, "CURRENT OF cannot be executed");

Should we change this to ereport(), or is there some other fix that we
should make?

Regards,
Rushabh Lathia
www.EnterpriseDB.com


[HACKERS] edb-postgres.exe has encountered a problem on windows

2011-04-01 Thread Rushabh Lathia
Problem:


On Windows, when we run edb-postgres.exe without any command line args, it
either crashes or shows an error in the Application logs of the Event Viewer.

Analysis:
==

For anything going to stderr we call write_stderr(), and write_stderr() calls
write_console(). Now, write_console() uses palloc()
internally, which requires CurrentMemoryContext.

At startup CurrentMemoryContext is NULL, so palloc() again calls
write_stderr(). This starts a recursion that ends up in an
exception.

The call stack to palloc() is:

main() -> check_root() -> write_stderr() -> write_console() ->
pgwin32_toUTF16() -> palloc()

Fix:
=

Earlier we used to call vfprintf() for Windows stderr, which has now been
replaced with write_console().
So, to avoid the exception, I added a check on CurrentMemoryContext
in write_stderr().

PFA patch to fix the same.

Regards,
Rushabh Lathia
EnterpriseDB http://www.enterprisedb.com/, The Enterprise
PostgreSQLhttp://www.enterprisedb.com/
 company.
Index: src/backend/utils/error/elog.c
===
RCS file: /repositories/postgreshome/cvs/pgsql/src/backend/utils/error/elog.c,v
retrieving revision 1.226
diff -c -p -r1.226 elog.c
*** src/backend/utils/error/elog.c	19 Aug 2010 22:55:01 -	1.226
--- src/backend/utils/error/elog.c	1 Apr 2011 14:08:38 -
*** write_stderr(const char *fmt,...)
*** 2759,2766 
  	}
  	else
  	{
! 		/* Not running as service, write to stderr */
! 		write_console(errbuf, strlen(errbuf));
  		fflush(stderr);
  	}
  #endif
--- 2759,2773 
  	}
  	else
  	{
! 		/*
! 		 * write_console() needs a memory context since it uses palloc()
! 		 * internally. When we don't have CurrentMemoryContext, write the
! 		 * error directly to stderr.
! 		 */
! 		if (CurrentMemoryContext)
! 			write_console(errbuf, strlen(errbuf));
! 		else
! 			vfprintf(stderr, fmt, ap);
  		fflush(stderr);
  	}
  #endif

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] edb-postgres.exe has encountered a problem on windows

2011-04-01 Thread Rushabh Lathia
On Fri, Apr 1, 2011 at 7:26 PM, Robert Haas robertmh...@gmail.com wrote:

 On Fri, Apr 1, 2011 at 9:14 AM, Rushabh Lathia rushabh.lat...@gmail.com
 wrote:
  On windows when we run edb-postgres.exe ...

 Did you intend to send this to an EDB-internal mailing list?


Oops, sorry; initially we found this bug with EDB.

But after looking into the issue more, I found that it's also present in
PostgreSQL.

The edb-postgres.exe in the report should read postgres.exe ..


Regards,
Rushabh Lathia
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] postgres.exe has encountered a problem on windows

2011-04-01 Thread Rushabh Lathia
On Fri, Apr 1, 2011 at 6:51 PM, Magnus Hagander mag...@hagander.net wrote:

 On Fri, Apr 1, 2011 at 15:14, Rushabh Lathia rushabh.lat...@gmail.com
 wrote:
  Problem:
  
 
  On windows when we run postgres.exe without any command line args, its
  getting crash or its showing error into Application logs of Event Viewer.
 
  Analysis:
  ==
 
  For any stderr we call the write_stderr() and write_stderr() calls the
  write_console() for stderr. Now here write_console() using the palloc()
  internally, which require the CurrentMemoryContext.
 
  At the startup CurrentMemoryContext will be NULL, so palloc again calling
  write_stderr(). So recursion has been started and its ending up with
  exception.
 
  Call stack for palloc() is:
 
  main() - check_root() - write_stderr() - write_console() -
  pgwin32_toUTF16() - palloc()
 
  Fix:
  =
 
  Earlier  we used to call vfprintf() for windows stderr, which is now
  replaced with write_console().
  So to avoid the exception now, I added condition for CurrentMemoryContext
  into write_stderr().
 
  PFA patch to fix the same.

 What about the cases where we directly call write_console()? Do we
 know we are good there, or should the check perhaps be made inside
 write_console() instead of in the caller?


Hmm, yes. It makes more sense to add the check for CurrentMemoryContext in
write_console().

PFA patch for the same.


Regards,
Rushabh Lathia
EnterpriseDB, The Enterprise PostgreSQL company.


Re: [HACKERS] postgres.exe has encountered a problem on windows

2011-04-01 Thread Rushabh Lathia
On Fri, Apr 1, 2011 at 8:23 PM, Rushabh Lathia rushabh.lat...@gmail.comwrote:



 On Fri, Apr 1, 2011 at 6:51 PM, Magnus Hagander mag...@hagander.netwrote:

 On Fri, Apr 1, 2011 at 15:14, Rushabh Lathia rushabh.lat...@gmail.com
 wrote:
  Problem:
  
 
  On windows when we run postgres.exe without any command line args, its
  getting crash or its showing error into Application logs of Event
 Viewer.
 
  Analysis:
  ==
 
  For any stderr we call the write_stderr() and write_stderr() calls the
  write_console() for stderr. Now here write_console() using the palloc()
  internally, which require the CurrentMemoryContext.
 
  At the startup CurrentMemoryContext will be NULL, so palloc again
 calling
  write_stderr(). So recursion has been started and its ending up with
  exception.
 
  Call stack for palloc() is:
 
  main() - check_root() - write_stderr() - write_console() -
  pgwin32_toUTF16() - palloc()
 
  Fix:
  =
 
  Earlier  we used to call vfprintf() for windows stderr, which is now
  replaced with write_console().
  So to avoid the exception now, I added condition for
 CurrentMemoryContext
  into write_stderr().
 
  PFA patch to fix the same.

 What about the cases where we directly call write_console()? Do we
 know we are good there, or should the check perhaps be made inside
 write_console() instead of in the caller?


 Hmm, yes. It make more sense to add check for CurrentMemoryContext in
 write_console().

 PFA patch for the same.


Oops missed the attachment.

Here it is ..



 Regards,
 Rushabh Lathia
 EnterpriseDB, The Enterprise PostgreSQL company.

Index: src/backend/utils/error/elog.c
===
RCS file: /repositories/postgreshome/cvs/pgsql/src/backend/utils/error/elog.c,v
retrieving revision 1.226
diff -c -p -r1.226 elog.c
*** src/backend/utils/error/elog.c	19 Aug 2010 22:55:01 -	1.226
--- src/backend/utils/error/elog.c	1 Apr 2011 15:47:01 -
*** write_console(const char *line, int len)
*** 1661,1667 
  	 */
  	if (GetDatabaseEncoding() != GetPlatformEncoding() &&
  		!in_error_recursion_trouble() &&
! 		!redirection_done)
  	{
  		WCHAR	   *utf16;
  		int			utf16len;
--- 1661,1668 
  	 */
  	if (GetDatabaseEncoding() != GetPlatformEncoding() &&
  		!in_error_recursion_trouble() &&
! 		!redirection_done &&
! 		CurrentMemoryContext)
  	{
  		WCHAR	   *utf16;
  		int			utf16len;

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postgres.exe has encountered a problem on windows

2011-04-01 Thread Rushabh Lathia
On Fri, Apr 1, 2011 at 11:31 PM, Magnus Hagander mag...@hagander.netwrote:

 On Fri, Apr 1, 2011 at 16:56, Rushabh Lathia rushabh.lat...@gmail.com
 wrote:
 
 
  On Fri, Apr 1, 2011 at 8:23 PM, Rushabh Lathia rushabh.lat...@gmail.com
 
  wrote:
 
 
  On Fri, Apr 1, 2011 at 6:51 PM, Magnus Hagander mag...@hagander.net
  wrote:
 
  On Fri, Apr 1, 2011 at 15:14, Rushabh Lathia rushabh.lat...@gmail.com
 
  wrote:
   Problem:
   
  
   On windows when we run postgres.exe without any command line args,
 its
   getting crash or its showing error into Application logs of Event
   Viewer.
  
   Analysis:
   ==
  
   For any stderr we call the write_stderr() and write_stderr() calls
 the
   write_console() for stderr. Now here write_console() using the
 palloc()
   internally, which require the CurrentMemoryContext.
  
   At the startup CurrentMemoryContext will be NULL, so palloc again
   calling
   write_stderr(). So recursion has been started and its ending up with
   exception.
  
   Call stack for palloc() is:
  
   main() - check_root() - write_stderr() - write_console() -
   pgwin32_toUTF16() - palloc()
  
   Fix:
   =
  
   Earlier  we used to call vfprintf() for windows stderr, which is now
   replaced with write_console().
   So to avoid the exception now, I added condition for
   CurrentMemoryContext
   into write_stderr().
  
   PFA patch to fix the same.
 
  What about the cases where we directly call write_console()? Do we
  know we are good there, or should the check perhaps be made inside
  write_console() instead of in the caller?
 
  Hmm, yes. It make more sense to add check for CurrentMemoryContext in
  write_console().
 
  PFA patch for the same.
 
  Oops missed the attachment.
 
  Here it is ..

 Thanks, applied with the addition of a comment.


Thanks Magnus.

regards,
Rushabh Lathia
EnterpriseDB, The Enterprise PostgreSQL company.


Re: [HACKERS] proposal 9.4 plpgsql: allows access to call stack from GET DIAGNOSTICS statement

2013-06-24 Thread Rushabh Lathia
Hi,

The use of this feature is to get the call stack for debugging without raising
an exception. This definitely seems very useful.

Here are my comments about the submitted patch:

The patch applies cleanly (there are a couple of whitespace warnings, but those
are in the test case file). make and make install went smoothly, and make check
was clean too. I did some manual testing of the feature and it went smoothly.

However, I would like to share a couple of points:

1) I noticed that you do the initialization of edata manually; I'm just
wondering, can't we directly use CopyErrorData(), which will do that job?

I found that inside CopyErrorData() there is an Assert:

Assert(CurrentMemoryContext != ErrorContext);

And in the patch we are using ErrorContext as the short-living context; is
that the reason for not using CopyErrorData()?


2) To reset stack to empty, FlushErrorState() can be used.

3) I was thinking about the usability of the feature and wondering about
adding a header to the stack, so the user can differentiate between the STACK
and a normal NOTICE message.

Something like:

postgres=# select outer_outer_func(10);
NOTICE:  - Call Stack -
PL/pgSQL function inner_func(integer) line 4 at GET DIAGNOSTICS
PL/pgSQL function outer_func(integer) line 3 at RETURN
PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
CONTEXT:  PL/pgSQL function outer_func(integer) line 3 at RETURN
PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
 outer_outer_func
--
   20
(1 row)

I worked on points 2) and 3), and PFA a patch for reference.

Thanks,
Rushabh



On Sat, Feb 2, 2013 at 2:53 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 Hello

 I propose enhancing GET DIAGNOSTICS statement about new field
 PG_CONTEXT. It is similar to GET STACKED DIAGNOSTICS'
 PG_EXCEPTION_CONTEXT.

 Motivation for this proposal is possibility to get  call stack for
 debugging without raising exception.

 This code is based on cleaned code from Orafce, where is used four
 years without any error reports.

 CREATE OR REPLACE FUNCTION public.inner(integer)
  RETURNS integer
  LANGUAGE plpgsql
 AS $function$
 declare _context text;
 begin
   get diagnostics _context = pg_context;
   raise notice '***%***', _context;
   return 2 * $1;
 end;
 $function$

 postgres=# select outer_outer(10);
 NOTICE:  ***PL/pgSQL function inner(integer) line 4 at GET DIAGNOSTICS
 PL/pgSQL function outer(integer) line 3 at RETURN
 PL/pgSQL function outer_outer(integer) line 3 at RETURN***
 CONTEXT:  PL/pgSQL function outer(integer) line 3 at RETURN
 PL/pgSQL function outer_outer(integer) line 3 at RETURN
  outer_outer
 ─
   20
 (1 row)

 Ideas, comments?

 Regards

 Pavel Stehule


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




-- 
Rushabh Lathia


get_diagnostics_context_initial_v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal 9.4 plpgsql: allows access to call stack from GET DIAGNOSTICS statement

2013-06-24 Thread Rushabh Lathia
On Tue, Jun 25, 2013 at 1:47 AM, Pavel Stehule pavel.steh...@gmail.comwrote:

 Hello

 This is a fragment of my old code from the orafce package - it is functional,
 but it is written a little bit more generically due to the impossibility of
 accessing static variables (in elog.c)

 static char*
 dbms_utility_format_call_stack(char mode)
 {
MemoryContext oldcontext = CurrentMemoryContext;
ErrorData *edata;
ErrorContextCallback *econtext;
StringInfo sinfo;

 #if PG_VERSION_NUM >= 80400
   errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO, TEXTDOMAIN);
#else
   errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO);
 #endif

MemoryContextSwitchTo(oldcontext);

 for (econtext = error_context_stack;
   econtext != NULL;
   econtext = econtext->previous)
  (*econtext->callback) (econtext->arg);

edata = CopyErrorData();
FlushErrorState();

 https://github.com/orafce/orafce/blob/master/utility.c

 2013/6/24 Rushabh Lathia rushabh.lat...@gmail.com:
  Hi,
 
  Use of this feature is to get  call stack for debugging without raising
  exception. This definitely seems very useful.
 
  Here are my comments about the submitted patch:
 
  Patch gets applied cleanly (there are couple of white space warning but
  that's
  into test case file). make and make install too went smooth. make check
  was smooth too. Did some manual testing about feature and its went
 smooth.
 
  However would like to share couple of points:
 

 My main motivation is to concentrate on the stack info string only. So I didn't
 use the errstart function, and I didn't use CopyErrorData either. These
 routines do a lot of other work.


  1) I noticed that you did the initialization of edata manually, just
  wondering
  can't we directly use CopyErrorData() which will do that job ?
 

 yes, we can, but in this context only the context field is interesting for us.

  I found that inside CopyErrorData() there is Assert:
 
  Assert(CurrentMemoryContext != ErrorContext);
 
  And in the patch we are setting using ErrorContext as short living
 context,
  is
  it the reason of not using CopyErrorData() ?

 it was not a primary reason, but sure - this routine is not designed
 for direct using from elog module. Next, we can save one palloc call.


Hmm ok.



 
 
  2) To reset stack to empty, FlushErrorState() can be used.
 

 yes, it can be. I use explicit rows due different semantics, although
 it does same things. FlushErrorState some global ErrorState and it is
 used from other modules. I did a reset of ErrorContext. I feel a small
 difference there - so FlushErrorState is not the best name for my purpose.
 What about modification:

 static void
 resetErrorStack()
 {
 errordata_stack_depth = -1;
 recursion_depth = 0;
 /* Delete all data in ErrorContext */
 MemoryContextResetAndDeleteChildren(ErrorContext);
 }

 and then call in  InvokeErrorCallbacks -- resetErrorStack()

 and rewrote flushErrorState like

 void FlushErrorState(void)
 {
   /* reset ErrorStack is enough */
   resetErrorStack();
 }

 ???


Nope; rather than that, I would still prefer a direct call of
FlushErrorState().



 but I can live well with direct call of FlushErrorState() - it is only
 minor change.


  3) I was think about the usability of the feature and wondering how
 about if
  we add some header to the stack, so user can differentiate between STACK
 and
  normal NOTICE message ?
 
  Something like:
 
  postgres=# select outer_outer_func(10);
  NOTICE:  - Call Stack -
  PL/pgSQL function inner_func(integer) line 4 at GET DIAGNOSTICS
  PL/pgSQL function outer_func(integer) line 3 at RETURN
  PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
  CONTEXT:  PL/pgSQL function outer_func(integer) line 3 at RETURN
  PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
   outer_outer_func
  --
 20
  (1 row)

 I understand, but I don't think so it is good idea. Sometimes, you
 would to use context for different purposes than debug log - for
 example - you should to identify top most call or near call - and any
 additional lines means some little bit more difficult parsing. You can
 add this line simply (if you want)

 RAISE NOTICE e'--- Call Stack ---\n%', stack

 but there are more use cases, where you would not this information (or
 you can use own header (different language)), and better returns only
 clean data.


I don't know, but I still feel that having the header come from the function
itself would be nice, because it will help the user differentiate between the
STACK and a normal NOTICE context message.




 
  I worked on point 2) and 3) and PFA patch for reference.

 so

 *1 CopyErrorData does one useless palloc more and it is too generic,


Ok


 *2 it is possible - I have not strong opinion


I prefer the direct FlushErrorState() call.


 *3 no, I would to return data in most simply and clean form without any
 sugar


From a user's perspective it would be nice to have a sugar layer around it.

Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-06-25 Thread Rushabh Lathia
Hi Pavel,

I went through the discussion here and found that with this patch we
enable the new error fields in plpgsql. It's a simple patch to expose the new
error fields in plpgsql.

The patch applies cleanly. make and make install went smoothly, and make check
was clean too. The patch also includes test coverage.

I tested the functionality with manual tests and it's working as expected.

BTW, in the patch I found one additional new line in read_raise_options():

@@ -3631,7 +3661,23 @@ read_raise_options(void)
 		else if (tok_is_keyword(tok, &yylval,
 								K_HINT, "hint"))
 			opt->opt_type = PLPGSQL_RAISEOPTION_HINT;
+		else if (tok_is_keyword(tok, &yylval,
+								K_COLUMN_NAME, "column_name"))
+			opt->opt_type = PLPGSQL_RAISEOPTION_COLUMN_NAME;
+		else if (tok_is_keyword(tok, &yylval,
+								K_CONSTRAINT_NAME, "constraint_name"))
+			opt->opt_type = PLPGSQL_RAISEOPTION_CONSTRAINT_NAME;
+		else if (tok_is_keyword(tok, &yylval,
+								K_DATATYPE_NAME, "datatype_name"))
+			opt->opt_type = PLPGSQL_RAISEOPTION_DATATYPE_NAME;
+		else if (tok_is_keyword(tok, &yylval,
+								K_TABLE_NAME, "table_name"))
+			opt->opt_type = PLPGSQL_RAISEOPTION_TABLE_NAME;
+		else if (tok_is_keyword(tok, &yylval,
+								K_SCHEMA_NAME, "schema_name"))
+			opt->opt_type = PLPGSQL_RAISEOPTION_SCHEMA_NAME;
 		else
+
 			yyerror("unrecognized RAISE statement option");

Can you please remove that?

Apart from that patch looks good to me.

Thanks,
Rushabh


On Fri, Feb 1, 2013 at 7:29 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 2013/2/1 Pavel Stehule pavel.steh...@gmail.com:
  2013/2/1 Peter Eisentraut pete...@gmx.net:
  On 2/1/13 8:00 AM, Pavel Stehule wrote:
  2013/2/1 Marko Tiikkaja pgm...@joh.to:
  On 2/1/13 1:47 PM, Pavel Stehule wrote:
 
  now a most hard work is done and I would to enable access to new
  error fields from plpgsql.
 
 
  Is there a compelling reason why we wouldn't provide these already in
 9.3?
 
  a time for assign to last commitfest is out.
 
  this patch is relative simple and really close to enhanced error
  fields feature - but depends if some from commiters will have a time
  for commit to 9.3 - so I am expecting primary target 9.4, but I am not
  be angry if it will be commited early.
 
  If we don't have access to those fields on PL/pgSQL, what was the point
  of the patch to begin with?  Surely, accessing them from C wasn't the
  main use case?
 
 
  These fields are available for application developers now. But is a
  true, so without this patch, GET STACKED DIAGNOSTICS statement will
  not be fully consistent, because some fields are accessible and others
  not

 there is one stronger argument for commit this patch now. With this
 patch, we are able to wrote regression tests for new fields via
 plpgsql.

 Regards

 Pavel

 
  Pavel


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




-- 
Rushabh Lathia


Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-06-25 Thread Rushabh Lathia
On Tue, Jun 25, 2013 at 2:41 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 2013/6/25 Rushabh Lathia rushabh.lat...@gmail.com:
  Hi Pavel,
 
  I gone through the discussion over here and found that with this patch we
  enable the new error fields in plpgsql. Its a simple patch to expose the
 new
  error fields in plpgsql.
 
  Patch gets applied cleanly. make and make install too went smooth. make
  check
  was smooth too. Patch also include test coverage
 
  I tested the functionality with manual test and its woking as expected.
 
  BTW in the patch I found one additional new like in read_raise_options():
 
  @@ -3631,7 +3661,23 @@ read_raise_options(void)
  else if (tok_is_keyword(tok, yylval,
  K_HINT,
  hint))
  opt-opt_type = PLPGSQL_RAISEOPTION_HINT;
  +   else if (tok_is_keyword(tok, yylval,
  +
  K_COLUMN_NAME, column_name))
  +   opt-opt_type = PLPGSQL_RAISEOPTION_COLUMN_NAME;
  +   else if (tok_is_keyword(tok, yylval,
  +
  K_CONSTRAINT_NAME, constraint_name))
  +   opt-opt_type =
 PLPGSQL_RAISEOPTION_CONSTRAINT_NAME;
  +   else if (tok_is_keyword(tok, yylval,
  +
  K_DATATYPE_NAME, datatype_name))
  +   opt-opt_type =
 PLPGSQL_RAISEOPTION_DATATYPE_NAME;
  +   else if (tok_is_keyword(tok, yylval,
  +
  K_TABLE_NAME, table_name))
  +   opt-opt_type = PLPGSQL_RAISEOPTION_TABLE_NAME;
  +   else if (tok_is_keyword(tok, yylval,
  +
  K_SCHEMA_NAME, schema_name))
  +   opt-opt_type = PLPGSQL_RAISEOPTION_SCHEMA_NAME;
  else
  +
  yyerror(unrecognized RAISE statement option);
 
  can you please remove that.

 No, these fields are there as was proposed - it enhance possibilities
 to PLpgSQL developers - they can use these fields for custom
 exceptions. It is same like possibility to specify SQLCODE, MESSAGE,
 HINT in current RAISE statement implementation.

 Why you dislike it?


It seems like there was some confusion.

I noticed an additional new line which has been added in your patch in the
function read_raise_options() in pl_gram.y @ line number 3680. In the earlier
mail, that's what I asked you to remove.




 Regards

 Pavel

 
  Apart from that patch looks good to me.
 
  Thanks,
  Rushabh
 
 
  On Fri, Feb 1, 2013 at 7:29 PM, Pavel Stehule pavel.steh...@gmail.com
  wrote:
 
  2013/2/1 Pavel Stehule pavel.steh...@gmail.com:
   2013/2/1 Peter Eisentraut pete...@gmx.net:
   On 2/1/13 8:00 AM, Pavel Stehule wrote:
   2013/2/1 Marko Tiikkaja pgm...@joh.to:
   On 2/1/13 1:47 PM, Pavel Stehule wrote:
  
   now a most hard work is done and I would to enable access to new
   error fields from plpgsql.
  
  
   Is there a compelling reason why we wouldn't provide these already
 in
   9.3?
  
   a time for assign to last commitfest is out.
  
   this patch is relative simple and really close to enhanced error
   fields feature - but depends if some from commiters will have a time
   for commit to 9.3 - so I am expecting primary target 9.4, but I am
 not
   be angry if it will be commited early.
  
   If we don't have access to those fields on PL/pgSQL, what was the
 point
   of the patch to begin with?  Surely, accessing them from C wasn't the
   main use case?
  
  
    These fields are available for application developers now. But it is
    true that without this patch the GET STACKED DIAGNOSTICS statement will
    not be fully consistent, because some fields are accessible and others
    are not
 
   there is one stronger argument for committing this patch now. With this
   patch, we are able to write regression tests for the new fields via
   plpgsql.
 
  Regards
 
  Pavel
 
  
   Pavel
 
 
  --
  Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-hackers
 
 
 
 
  --
  Rushabh Lathia




-- 
Rushabh Lathia


Re: [HACKERS] proposal 9.4 plpgsql: allows access to call stack from GET DIAGNOSTICS statement

2013-06-26 Thread Rushabh Lathia
On Tue, Jun 25, 2013 at 11:09 AM, Pavel Stehule pavel.steh...@gmail.comwrote:

 2013/6/25 Rushabh Lathia rushabh.lat...@gmail.com:
 
 
 
  On Tue, Jun 25, 2013 at 1:47 AM, Pavel Stehule pavel.steh...@gmail.com
  wrote:
 
  Hello
 
  This is fragment ofmy old code from orafce package - it is functional,
  but it is written little bit more generic due impossible access to
  static variables (in elog.c)
 
  static char *
  dbms_utility_format_call_stack(char mode)
  {
      MemoryContext oldcontext = CurrentMemoryContext;
      ErrorData *edata;
      ErrorContextCallback *econtext;
      StringInfo sinfo;
 
  #if PG_VERSION_NUM >= 80400
      errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO, TEXTDOMAIN);
  #else
      errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO);
  #endif
 
      MemoryContextSwitchTo(oldcontext);
 
      for (econtext = error_context_stack;
           econtext != NULL;
           econtext = econtext->previous)
          (*econtext->callback) (econtext->arg);
 
      edata = CopyErrorData();
      FlushErrorState();
 
  https://github.com/orafce/orafce/blob/master/utility.c
 
  2013/6/24 Rushabh Lathia rushabh.lat...@gmail.com:
   Hi,
  
   The use of this feature is to get a call stack for debugging without raising
   an exception. This definitely seems very useful.
  
   Here are my comments about the submitted patch:
  
   The patch applies cleanly (there are a couple of whitespace warnings, but
   those are in the test case file). make and make install went smoothly, and
   make check was smooth too. I did some manual testing of the feature and it
   went smoothly.
  
   However would like to share couple of points:
  
 
  My main motive is to concentrate on the stack info string only. So I didn't
  use the errstart function, and I didn't use CopyErrorData either. These
  routines do a lot of other work.
 
 
    1) I noticed that you did the initialization of edata manually; just
    wondering, can't we directly use CopyErrorData(), which will do that job?
  
 
   yes, we can, but in this case only the context field is interesting for us.
 
   I found that inside CopyErrorData() there is Assert:
  
   Assert(CurrentMemoryContext != ErrorContext);
  
    And in the patch we are using ErrorContext as a short-living context;
    is that the reason for not using CopyErrorData()?
 
   it was not the primary reason, but sure - this routine is not designed
   for direct use from the elog module. Also, we can save one palloc call.
 
 
  Hmm ok.
 
 
 
  
  
   2) To reset stack to empty, FlushErrorState() can be used.
  
 
   yes, it can be. I used explicit code due to the different semantics,
   although it does the same thing. FlushErrorState flushes some global error
   state and is used from other modules; I only reset ErrorContext. I feel
   there is a small difference there - so FlushErrorState is not the best name
   for my purpose. What about this modification:
 
  static void
  resetErrorStack()
  {
  errordata_stack_depth = -1;
  recursion_depth = 0;
  /* Delete all data in ErrorContext */
  MemoryContextResetAndDeleteChildren(ErrorContext);
  }
 
   and then call resetErrorStack() in InvokeErrorCallbacks
  
   and rewrite FlushErrorState like:
 
  void FlushErrorState(void)
  {
/* reset ErrorStack is enough */
resetErrorStack();
  }
 
  ???
 
 
   Nope, rather than that I would still prefer a direct call of
   FlushErrorState().
 
 
 
  but I can live well with direct call of FlushErrorState() - it is only
  minor change.
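As an aside, the refactoring being debated - FlushErrorState() becoming a thin wrapper over a stack-reset helper - can be sketched in miniature. This is a hedged Python illustration only: the names mirror, but are not, the PostgreSQL C internals.

```python
# Illustrative sketch (assumption: NOT PostgreSQL source) of the idea that
# FlushErrorState() can be expressed in terms of a reset_error_stack() helper
# that clears the stacked errors and frees the short-lived error memory.

class ErrorState:
    def __init__(self):
        self.errordata = []        # stands in for errordata[] / stack depth
        self.recursion_depth = 0
        self.error_context = []    # stands in for ErrorContext allocations

    def reset_error_stack(self):
        """Core reset: drop stacked errors and free ErrorContext memory."""
        self.errordata.clear()
        self.recursion_depth = 0
        self.error_context.clear()  # ~ MemoryContextResetAndDeleteChildren()

    def flush_error_state(self):
        """Public entry point keeps its name; resetting the stack is all it does."""
        self.reset_error_stack()

state = ErrorState()
state.errordata.append("ERROR: boom")
state.recursion_depth = 1
state.flush_error_state()
print(state.errordata, state.recursion_depth)  # prints: [] 0
```

The design question in the thread is only whether the shared core deserves its own name, since both callers do exactly this reset.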
 
 
    3) I was thinking about the usability of the feature and wondering whether
    we could add some header to the stack, so the user can differentiate
    between the STACK and a normal NOTICE message.
  
   Something like:
  
   postgres=# select outer_outer_func(10);
   NOTICE:  - Call Stack -
   PL/pgSQL function inner_func(integer) line 4 at GET DIAGNOSTICS
   PL/pgSQL function outer_func(integer) line 3 at RETURN
   PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
   CONTEXT:  PL/pgSQL function outer_func(integer) line 3 at RETURN
   PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
outer_outer_func
   --
  20
   (1 row)
 
   I understand, but I don't think it is a good idea. Sometimes you
   would use the context for purposes other than a debug log - for
   example, you may need to identify the topmost or a nearby call - and any
   additional lines mean somewhat more difficult parsing. You can
   add this line simply (if you want):
  
   RAISE NOTICE e'--- Call Stack ---\n%', stack
  
   but there are more use cases where you would not want this information (or
   you may want to use your own header (in a different language)), so it is
   better to return only clean data.
 
 
   I don't know, but I still feel a header given by the function itself would
   be a nice thing to have, because it will help the user differentiate
   between the STACK and a normal NOTICE context message.
 
 
 
 
  
   I worked on point 2) and 3) and PFA patch

Re: [HACKERS] proposal 9.4 plpgsql: allows access to call stack from GET DIAGNOSTICS statement

2013-06-26 Thread Rushabh Lathia
On Wed, Jun 26, 2013 at 5:11 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 2013/6/26 Rushabh Lathia rushabh.lat...@gmail.com:
 
 
 
  On Tue, Jun 25, 2013 at 11:09 AM, Pavel Stehule pavel.steh...@gmail.com
 
  wrote:
 
  2013/6/25 Rushabh Lathia rushabh.lat...@gmail.com:
  
  
  
   On Tue, Jun 25, 2013 at 1:47 AM, Pavel Stehule 
 pavel.steh...@gmail.com
   wrote:
  
   Hello
  
   This is fragment ofmy old code from orafce package - it is
 functional,
   but it is written little bit more generic due impossible access to
   static variables (in elog.c)
  
    static char *
    dbms_utility_format_call_stack(char mode)
    {
        MemoryContext oldcontext = CurrentMemoryContext;
        ErrorData *edata;
        ErrorContextCallback *econtext;
        StringInfo sinfo;
   
    #if PG_VERSION_NUM >= 80400
        errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO, TEXTDOMAIN);
    #else
        errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO);
    #endif
   
        MemoryContextSwitchTo(oldcontext);
   
        for (econtext = error_context_stack;
             econtext != NULL;
             econtext = econtext->previous)
            (*econtext->callback) (econtext->arg);
   
        edata = CopyErrorData();
        FlushErrorState();
  
   https://github.com/orafce/orafce/blob/master/utility.c
  
   2013/6/24 Rushabh Lathia rushabh.lat...@gmail.com:
Hi,
   
Use of this feature is to get  call stack for debugging without
raising
exception. This definitely seems very useful.
   
Here are my comments about the submitted patch:
   
Patch gets applied cleanly (there are couple of white space warning
but
that's
into test case file). make and make install too went smooth. make
check
was smooth too. Did some manual testing about feature and its went
smooth.
   
However would like to share couple of points:
   
  
   My main motive is concentration to stack info string only. So I
 didn't
   use err_start function, and I didn't use CopyErrorData too. These
   routines does lot of other work.
  
  
1) I noticed that you did the initialization of edata manually,
 just
wondering
can't we directly use CopyErrorData() which will do that job ?
   
  
   yes, we can, but in this context on context field is interesting
 for
   us.
  
I found that inside CopyErrorData() there is Assert:
   
Assert(CurrentMemoryContext != ErrorContext);
   
And in the patch we are setting using ErrorContext as short living
context,
is
it the reason of not using CopyErrorData() ?
  
   it was not a primary reason, but sure - this routine is not designed
   for direct using from elog module. Next, we can save one palloc call.
  
  
   Hmm ok.
  
  
  
   
   
2) To reset stack to empty, FlushErrorState() can be used.
   
  
   yes, it can be. I use explicit rows due different semantics, although
   it does same things. FlushErrorState some global ErrorState and it is
   used from other modules. I did a reset of ErrorContext. I fill a
 small
   difference there - so FlushErrorState is not best name for my
 purpose.
   What about modification:
  
   static void
   resetErrorStack()
   {
   errordata_stack_depth = -1;
   recursion_depth = 0;
   /* Delete all data in ErrorContext */
   MemoryContextResetAndDeleteChildren(ErrorContext);
   }
  
   and then call in  InvokeErrorCallbacks -- resetErrorStack()
  
   and rewrote flushErrorState like
  
   void FlushErrorState(void)
   {
 /* reset ErrorStack is enough */
 resetErrorStack();
   }
  
   ???
  
  
   Nope. rather then that I would still prefer direct call of
   FlushErrorState().
  
  
  
   but I can live well with direct call of FlushErrorState() - it is
 only
   minor change.
  
  
3) I was think about the usability of the feature and wondering how
about if
we add some header to the stack, so user can differentiate between
STACK
and
normal NOTICE message ?
   
Something like:
   
postgres=# select outer_outer_func(10);
NOTICE:  - Call Stack -
PL/pgSQL function inner_func(integer) line 4 at GET DIAGNOSTICS
PL/pgSQL function outer_func(integer) line 3 at RETURN
PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
CONTEXT:  PL/pgSQL function outer_func(integer) line 3 at RETURN
PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
 outer_outer_func
--
   20
(1 row)
  
   I understand, but I don't think so it is good idea. Sometimes, you
   would to use context for different purposes than debug log - for
   example - you should to identify top most call or near call - and any
   additional lines means some little bit more difficult parsing. You
 can
   add this line simply (if you want)
  
   RAISE NOTICE e'--- Call Stack ---\n%', stack
  
   but there are more use cases, where you would not this information
 (or
   you can use own header

Re: [HACKERS] proposal 9.4 plpgsql: allows access to call stack from GET DIAGNOSTICS statement

2013-06-27 Thread Rushabh Lathia
Latest patch looks good to me.

Regards,
Rushabh


On Wed, Jun 26, 2013 at 11:21 PM, Pavel Stehule pavel.steh...@gmail.comwrote:

 Hello

 updated patch with some basic doc

 Regards

 Pavel


 2013/6/26 Rushabh Lathia rushabh.lat...@gmail.com:
 
 
 
  On Wed, Jun 26, 2013 at 5:11 PM, Pavel Stehule pavel.steh...@gmail.com
  wrote:
 
  2013/6/26 Rushabh Lathia rushabh.lat...@gmail.com:
  
  
  
   On Tue, Jun 25, 2013 at 11:09 AM, Pavel Stehule
   pavel.steh...@gmail.com
   wrote:
  
   2013/6/25 Rushabh Lathia rushabh.lat...@gmail.com:
   
   
   
On Tue, Jun 25, 2013 at 1:47 AM, Pavel Stehule
pavel.steh...@gmail.com
wrote:
   
Hello
   
This is fragment ofmy old code from orafce package - it is
functional,
but it is written little bit more generic due impossible access to
static variables (in elog.c)
   
 static char *
 dbms_utility_format_call_stack(char mode)
 {
     MemoryContext oldcontext = CurrentMemoryContext;
     ErrorData *edata;
     ErrorContextCallback *econtext;
     StringInfo sinfo;

 #if PG_VERSION_NUM >= 80400
     errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO, TEXTDOMAIN);
 #else
     errstart(ERROR, __FILE__, __LINE__, PG_FUNCNAME_MACRO);
 #endif

     MemoryContextSwitchTo(oldcontext);

     for (econtext = error_context_stack;
          econtext != NULL;
          econtext = econtext->previous)
         (*econtext->callback) (econtext->arg);

     edata = CopyErrorData();
     FlushErrorState();
   
https://github.com/orafce/orafce/blob/master/utility.c
   
2013/6/24 Rushabh Lathia rushabh.lat...@gmail.com:
 Hi,

 Use of this feature is to get  call stack for debugging without
 raising
 exception. This definitely seems very useful.

 Here are my comments about the submitted patch:

 Patch gets applied cleanly (there are couple of white space
 warning
 but
 that's
 into test case file). make and make install too went smooth.
 make
 check
 was smooth too. Did some manual testing about feature and its
 went
 smooth.

 However would like to share couple of points:

   
My main motive is concentration to stack info string only. So I
didn't
use err_start function, and I didn't use CopyErrorData too. These
routines does lot of other work.
   
   
 1) I noticed that you did the initialization of edata manually,
 just
 wondering
 can't we directly use CopyErrorData() which will do that job ?

   
yes, we can, but in this context on context field is interesting
for
us.
   
 I found that inside CopyErrorData() there is Assert:

 Assert(CurrentMemoryContext != ErrorContext);

 And in the patch we are setting using ErrorContext as short
 living
 context,
 is
 it the reason of not using CopyErrorData() ?
   
it was not a primary reason, but sure - this routine is not
 designed
for direct using from elog module. Next, we can save one palloc
call.
   
   
Hmm ok.
   
   
   


 2) To reset stack to empty, FlushErrorState() can be used.

   
yes, it can be. I use explicit rows due different semantics,
although
it does same things. FlushErrorState some global ErrorState and it
is
used from other modules. I did a reset of ErrorContext. I fill a
small
difference there - so FlushErrorState is not best name for my
purpose.
What about modification:
   
static void
resetErrorStack()
{
errordata_stack_depth = -1;
recursion_depth = 0;
/* Delete all data in ErrorContext */
MemoryContextResetAndDeleteChildren(ErrorContext);
}
   
and then call in  InvokeErrorCallbacks -- resetErrorStack()
   
and rewrote flushErrorState like
   
void FlushErrorState(void)
{
  /* reset ErrorStack is enough */
  resetErrorStack();
}
   
???
   
   
Nope. rather then that I would still prefer direct call of
FlushErrorState().
   
   
   
but I can live well with direct call of FlushErrorState() - it is
only
minor change.
   
   
 3) I was think about the usability of the feature and wondering
 how
 about if
 we add some header to the stack, so user can differentiate
 between
 STACK
 and
 normal NOTICE message ?

 Something like:

 postgres=# select outer_outer_func(10);
 NOTICE:  - Call Stack -
 PL/pgSQL function inner_func(integer) line 4 at GET DIAGNOSTICS
 PL/pgSQL function outer_func(integer) line 3 at RETURN
 PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
 CONTEXT:  PL/pgSQL function outer_func(integer) line 3 at RETURN
 PL/pgSQL function outer_outer_func(integer) line 3 at RETURN
  outer_outer_func
 --
20
 (1 row)
   
I

Re: [HACKERS] proposal: enable new error fields in plpgsql (9.4)

2013-07-15 Thread Rushabh Lathia
On Wed, Jul 3, 2013 at 5:16 PM, Noah Misch n...@leadboat.com wrote:

 On Wed, Jul 03, 2013 at 06:17:18AM +0200, Pavel Stehule wrote:
  2013/7/2 Noah Misch n...@leadboat.com:
   Here's a revision bearing the discussed name changes and protocol
   documentation tweaks, along with some cosmetic adjustments.  If this
 seems
   good to you, I will commit it.
 
  +1

 Done.

 Rushabh, I neglected to credit you as a reviewer and realized it too late.
 Sorry about that.


Sorry somehow I missed this thread.

Thanks Noah.



 --
 Noah Misch
 EnterpriseDB http://www.enterprisedb.com




-- 
Rushabh Lathia


Re: [HACKERS] Fix pgstattuple/pgstatindex to use regclass-type as the argument

2013-07-15 Thread Rushabh Lathia
Hello Satoshi,

I assigned myself as the reviewer of this patch. The issue status is waiting
on author.

Now, looking at the discussion in the thread, it seems like we are waiting
for a suggestion for the new function name, right?

I am wondering why we actually need a new name. Can't we just overload the
same function and provide two versions of it?

In the last thread Fujii did just that for pg_relpages, and it seems
like a good approach, doesn't it? Am I missing anything here?
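The overloading idea under discussion - one function name resolved by argument type, so pgstattuple('relname') and pgstattuple(oid-as-regclass) coexist - can be sketched outside SQL. A hedged Python analogy using singledispatch; the names and return strings are illustrative, not the extension's C code:

```python
# Sketch of type-based overload resolution, analogous to PostgreSQL picking
# between pgstattuple(text) and pgstattuple(regclass) by the argument's type.

from functools import singledispatch

@singledispatch
def pgstattuple(rel):
    raise TypeError("unsupported argument type")

@pgstattuple.register
def _(rel: int):
    # analog of pgstattuple(regclass): regclass accepts a bare OID value
    return f"stats for relation OID {rel}"

@pgstattuple.register
def _(rel: str):
    # analog of the legacy pgstattuple(text) interface, kept for compatibility
    return f"stats for relation named {rel!r}"

print(pgstattuple(16384))            # illustrative OID
print(pgstattuple("myschema.test"))  # illustrative relation name
```

The ambiguity worry raised later in the thread is about cases where one value is acceptable to both signatures; PostgreSQL's resolution rules decide which overload wins, which is why Fujii's test with pg_relpages is checked explicitly.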

Regards,




On Thu, Jul 4, 2013 at 12:28 AM, Fujii Masao masao.fu...@gmail.com wrote:

 On Wed, Jun 26, 2013 at 12:39 AM, Robert Haas robertmh...@gmail.com
 wrote:
  On Thu, Jun 20, 2013 at 2:32 PM, Fujii Masao masao.fu...@gmail.com
 wrote:
  On Thu, Jun 20, 2013 at 11:43 AM, Satoshi Nagayasu sn...@uptime.jp
 wrote:
  (2013/06/17 4:02), Fujii Masao wrote:
 
  On Sat, Mar 9, 2013 at 3:23 PM, Satoshi Nagayasu sn...@uptime.jp
 wrote:
 
  It is obviously easy to keep two types of function interfaces,
  one with regclass-type and another with text-type, in the next
  release for backward-compatibility like below:
 
  pgstattuple(regclass)  -- safer interface.
   pgstattuple(text)  -- will be deprecated in a future release.
 
 
  So you're thinking to remove pgstattuple(oid) soon?
 
 
  AFAIK, a regclass type argument would accept an OID value,
  which means regclass type has upper-compatibility against
  oid type.
 
  So, even if the declaration is changed, compatibility could
  be kept actually. This test case (in sql/pgstattuple.sql)
  confirms that.
 
  
   select * from pgstatindex('myschema.test_pkey'::regclass::oid);
    version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
   ---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
          2 |          0 |       8192 |             1 |              0 |          1 |           0 |             0 |             0.79 |                  0
   (1 row)
  
 
 
  Having both interfaces for a while would allow users to have enough
  time to rewrite their applications.
 
  Then, we will be able to obsolete (or just drop) old interfaces
  in the future release, maybe 9.4 or 9.5. I think this approach
  would minimize an impact of such interface change.
 
  So, I think we can clean up function arguments in the pgstattuple
  module, and also we can have two interfaces, both regclass and text,
  for the next release.
 
  Any comments?
 
 
  In the document, you should mark old functions as deprecated.
 
 
  I'm still considering changing the function name as Tom pointed
  out. How about pgstatbtindex?
 
  Or just adding pgstatindex(regclass)?
 
  In fact, pgstatindex does support only BTree index.
  So, pgstatbtindex seems to be more appropriate for this function.
 
  Can most ordinary users realize bt means btree?
 
  We can keep having both (old) pgstatindex and (new) pgstatbtindex
  during next 2-3 major releases, and the old one will be deprecated
  after that.
 
  Since pg_relpages(oid) doesn't exist, pg_relpages() is in the same
  situation as pgstatindex(), i.e., we cannot just replace
 pg_relpages(text)
  with pg_relpages(regclass) for the backward-compatibility. How do you
  think we should solve the pg_relpages() problem? Rename? Just
  add pg_relpages(regclass)?
 
  Adding a function with a new name seems likely to be smoother, since
  that way you don't have to worry about problems with function calls
  being thought ambiguous.

 Could you let me know the example that this problem happens?

 For the test, I just implemented the regclass-version of pg_relpages()
 (patch attached) and tested some cases. But I could not get that problem.

 SELECT pg_relpages('hoge');                                        -- OK
 SELECT pg_relpages(oid) FROM pg_class WHERE relname = 'hoge';      -- OK
 SELECT pg_relpages(relname) FROM pg_class WHERE relname = 'hoge';  -- OK

 Regards,

 --
 Fujii Masao


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




-- 
Rushabh Lathia


Re: [HACKERS] Fix pgstattuple/pgstatindex to use regclass-type as the argument

2013-07-16 Thread Rushabh Lathia
Hi Satoshi,

I spent some time on the revised version of the patch
(pgstattuple_regclass_v2.diff) and here are my comments.

.) Patch applies cleanly on the PG master branch
.) Successful build and database creation
.) Basic test coverage included in the patch
.) make check runs cleanly

Basically, the goal of the patch is to allow specifying a relation/index with
several expressions - 'relname', 'schemaname.relname' and oid - in all
pgstattuple functions. To achieve that, the patch introduces another version
of the pgstattuple functions which takes regclass as the input argument. To
keep it backward compatible, we kept the pgstattuple functions with the TEXT
input argument.

In the mail thread we decided that pgstattuple(text) will be deprecated in
a future release and we need to document that, which is missing in the
patch.

Apart from that, a few comments in the C code explaining why there are
multiple versions of the pgstattuple functions would be really helpful for
future understanding.

Thanks,



On Tue, Jul 16, 2013 at 11:42 AM, Satoshi Nagayasu sn...@uptime.jp wrote:

 Hi Rushabh,


 (2013/07/16 14:58), Rushabh Lathia wrote:

 Hello Satoshi,

 I assigned myself for the reviewer of this patch. Issue status is waiting
 on
 author.


 Thank you for picking it up.


  Now looking at the discussion under the thread it seems like we are
 waiting
 for the suggestion for the new function name, right ?


 Yes.


  I am wondering why actually we need new name ? Can't we just overload the
 same function and provide two version of the functions ?


 I think the major reason is to avoid some confusion with old and new
 function arguments.

 My thought here is that having both argument types (text and regclass)
 for each function is a good choice to clean up the interfaces while keeping
 backward compatibility.


  In the last thread Fujii just did the same for pg_relpages and it seems
 like an
 good to go approach, isn't it ? Am I missing anything here ?


 I just posted a revised patch to handle the issue in three functions
 of the pgstattuple module. Please take a look.


 Regards,
 --
 Satoshi Nagayasu sn...@uptime.jp
 Uptime Technologies, LLC. http://www.uptime.jp




-- 
Rushabh Lathia


Re: [HACKERS] Fix pgstattuple/pgstatindex to use regclass-type as the argument

2013-07-17 Thread Rushabh Lathia
On Thu, Jul 18, 2013 at 9:40 AM, Satoshi Nagayasu sn...@uptime.jp wrote:

 (2013/07/18 2:31), Fujii Masao wrote:

 On Tue, Jul 16, 2013 at 3:00 PM, Satoshi Nagayasu sn...@uptime.jp
 wrote:

 (2013/07/04 3:58), Fujii Masao wrote:

 For the test, I just implemented the regclass-version of pg_relpages()
 (patch attached) and tested some cases. But I could not get that
 problem.

   SELECT pg_relpages('hoge');-- OK
   SELECT pg_relpages(oid) FROM pg_class WHERE relname = 'hoge';
  -- OK
   SELECT pg_relpages(relname) FROM pg_class WHERE relname = 'hoge';
-- OK


 In the attached patch, I cleaned up three functions to have
 two types of arguments for each, text and regclass.

pgstattuple(text)
pgstattuple(regclass)
pgstatindex(text)
pgstatindex(regclass)
pg_relpages(text)
pg_relpages(regclass)

 I still think a regclass argument is more appropriate for passing
 relation/index name to a function than text-type, but having both
 arguments in each function seems to be a good choice at this moment,
 in terms of backward-compatibility.

 Docs needs to be updated if this change going to be applied.


 Yes, please.


 Updated docs and code comments, etc. PFA.


Looks good.




  Any comments?


  'make installcheck' failed on my machine.


 Hmm, it works on my box...


Works for me too.

Overall looks good to me.





  Do we need to remove pgstattuple--1.1.sql and create
 pgstattuple--1.1--1.2.sql?

 +/* contrib/pgstattuple/pgstattuple--1.1.sql */

 Typo: s/1.1/1.2


 Done.


  You seem to have forgotten to update pgstattuple.c.


 Should I change something in pgstattuple.c?

 I just changed CREATE FUNCTION statement for pgstattuple
 to replace oid input arg with the regclass.

 Regards,

 --
 Satoshi Nagayasu sn...@uptime.jp
 Uptime Technologies, LLC. http://www.uptime.jp


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




Thanks,
Rushabh Lathia


[HACKERS] insert throws error when year field len > 4 for timestamptz datatype

2013-08-14 Thread Rushabh Lathia
Hi,

While working on something I come across this issue. Consider following
test:

postgres=# select version();
 version

-
 PostgreSQL 9.4devel on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit
(1 row)


postgres=# create table test ( a timestamptz);
CREATE TABLE

-- Date with year 1000
postgres=#  insert into test values ( 'Sat Mar 11 23:58:48 1000 IST');
INSERT 0 1

-- Now try with year 10000; it will return an error
postgres=#  insert into test values ( 'Sat Mar 11 23:58:48 10000 IST');
ERROR:  invalid input syntax for type timestamp with time zone: "Sat Mar 11
23:58:48 10000 IST"
LINE 1: insert into test values ( 'Sat Mar 11 23:58:48 10000 IST');

here the error comes from the timestamptz_in() -> datefields_to_timestamp() ->
DecodeDateTime() stack.

Looking more at the DecodeDateTime() function, the error comes while trying
to decode the year field, which is 10000 in our test. For the year field,
ftype is DTK_NUMBER, and under DTK_NUMBER for this case it drops into the
following condition:

else if (flen > 4)
{
    dterr = DecodeNumberField(flen, field[i], fmask,
                              &tmask, tm,
                              &fsec, &is2digits);
    if (dterr < 0)
        return dterr;
}

because flen in our case is 5 (10000).

As per the comment above DecodeNumberField(), it interprets a numeric string
as a concatenated date or time field. So ideally we should get into the
DecodeNumberField function only when (fmask & DTK_DATE_M) == 0 or
(fmask & DTK_TIME_M) == 0, right?

So, I tried the same and after that the test works fine.

Another fix could be to modify DecodeNumberField() to check for the date and
time only when (fmask & DTK_DATE_M) == 0 and (fmask & DTK_TIME_M) == 0, and
if DecodeNumberField() returns an error, then call DecodeNumber() to check
the year possibility. But I didn't

Results after fix:

postgres=# select * from test;
               a
------------------------------
 1000-03-12 03:52:16+05:53:28
 10000-03-12 03:28:48+05:30
(2 rows)
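The gating condition proposed above can be illustrated with a small sketch. This is Python and purely illustrative - the real logic lives in C in PostgreSQL's datetime decoding code, and the mask values here are made up; only the decision shape matches the proposal: a numeric field longer than 4 digits should be handed to the concatenated date/time decoder only when a date or a time is still missing, otherwise it may simply be a wide year like 10000.

```python
# Sketch (assumption: not the PostgreSQL C code) of routing a numeric field
# either to DecodeNumberField (concatenated date/time like "10001010") or to
# plain number decoding, which can accept a 5-digit year.

DTK_DATE_M = 0x1  # hypothetical "date fields already decoded" mask bit
DTK_TIME_M = 0x2  # hypothetical "time fields already decoded" mask bit

def classify_numeric_field(flen, fmask):
    """Return which decoder a numeric field of length flen should take."""
    if flen > 4 and ((fmask & DTK_DATE_M) == 0 or (fmask & DTK_TIME_M) == 0):
        return "DecodeNumberField"   # may still be a concatenated date/time
    return "DecodeNumber"            # plain field; can hold a wide year

# "10000" with date and time already decoded -> treated as a plain number.
print(classify_numeric_field(5, DTK_DATE_M | DTK_TIME_M))  # prints: DecodeNumber
```

Under this routing, '10001010 10:10:10' (8 digits, nothing decoded yet) still goes through the concatenated-field path, while a trailing 5-digit year falls through to plain number decoding, matching the behavior the patch aims for.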

PFA patch; please share your input/suggestions.
(With the patch, make check runs fine without additional failures.)

Regards,
Rushabh Lathia
www.EnterpriseDB.com


timestamptz_fix.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] insert throws error when year field len > 4 for timestamptz datatype

2013-08-16 Thread Rushabh Lathia
On Thu, Aug 15, 2013 at 1:08 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Rushabh Lathia rushabh.lat...@gmail.com writes:
  PFA patch and share your input/suggestions.

 I think this needs review.  Please add it to the next commitfest.


Done.

Here is latest patch with testcase added to regression.



 regards, tom lane




Regards,
Rushabh Lathia
www.EnterpriseDB.com


timestamptz_fix_with_testcase.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] insert throws error when year field len > 4 for timestamptz datatype

2013-09-17 Thread Rushabh Lathia
On Mon, Sep 16, 2013 at 7:22 PM, Haribabu kommi
haribabu.ko...@huawei.comwrote:

  On 14 August 2013 Rushabh Lathia wrote:

  postgres=# create table test ( a timestamptz);
  CREATE TABLE

  -- Date with year 1000
  postgres=#  insert into test values ( 'Sat Mar 11 23:58:48 1000 IST');
  INSERT 0 1

  -- Now try with year 10000; it will return an error
  postgres=#  insert into test values ( 'Sat Mar 11 23:58:48 10000 IST');
  ERROR:  invalid input syntax for type timestamp with time zone: "Sat Mar 11 23:58:48 10000 IST"
  LINE 1: insert into test values ( 'Sat Mar 11 23:58:48 10000 IST');

  here the error comes from the timestamptz_in() -> datefields_to_timestamp() ->
  DecodeDateTime() stack.

  Looking more at the DecodeDateTime() function, the error comes while trying
  to decode the year field, which is 10000 in our test. For the year field,
  ftype is DTK_NUMBER, and under DTK_NUMBER for this case it drops into the
  following condition:

  else if (flen > 4)
  {
      dterr = DecodeNumberField(flen, field[i], fmask,
                                &tmask, tm,
                                &fsec, &is2digits);
      if (dterr < 0)
          return dterr;
  }

  because flen in our case is 5 (10000).

  As per the comment above DecodeNumberField(), it interprets a numeric string
  as a concatenated date or time field. So ideally we should get into the
  DecodeNumberField function only when (fmask & DTK_DATE_M) == 0 or
  (fmask & DTK_TIME_M) == 0, right?

  So, I tried the same and after that the test works fine.

  PFA patch; please share your input/suggestions.

 Patch applies cleanly to HEAD. This patch tries to improve the insertion of
 dates whose year value is more than 4 digits in length. But it didn't cover
 all the ways to insert a year field more than 4 digits long. Please check the
 following test:

 postgres=# insert into test values ('10001010 10:10:10 IST');
 INSERT 0 1
 postgres=# insert into test values ('100011010 10:10:10 IST');
 ERROR:  invalid input syntax for type timestamp with time zone: "100011010 10:10:10 IST" at character 26
 STATEMENT:  insert into test values ('100011010 10:10:10 IST');
 ERROR:  invalid input syntax for type timestamp with time zone: "100011010 10:10:10 IST"
 LINE 1: insert into test values ('100011010 10:10:10 IST');
                                  ^

 I feel it is better to provide the functionality of inserting a year field
 more than 4 digits in length in all flows.


+1. Nice catch.

Here is the latest version of the patch, which handles the functionality in
all flows. Could you test it and share your comments?

Thanks,
Rushabh Lathia
www.EnterpriseDB.com


timestamptz_fix_with_testcase_v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Minor inheritance/check bug: Inconsistent behavior

2013-09-17 Thread Rushabh Lathia
Hi Amit.

I went through the mail thread discussion regarding this issue and reviewed
your patch.

-- Patch applies cleanly on the master branch
-- Make and Make Install are fine
-- make check also runs cleanly

The code changes in the patch look good to me.

This patch has two parts:

1) Allow the TableOid system column in a CHECK constraint
2) Throw an error if a system column other than TableOid is used in a CHECK
constraint.

I noticed that you added test coverage for 1) but the test coverage for 2)
is missing.

I added the test coverage for 2) in the attached patch.

Marking this as Ready for committer.

Regards,
Rushabh





On Sun, Sep 15, 2013 at 2:31 PM, Amit Kapila amit.kapil...@gmail.comwrote:

 Bruce Momjian wrote:
 On Sun, Jun 30, 2013 at 06:57:10AM +, Amit kapila wrote:
   I have done the initial analysis and prepared a patch, don't know if
   anything more I can do until
   someone can give any suggestions to further proceed on this bug.
 
   So, I guess we never figured this out.
 
  I can submit this bug-fix for next commitfest if there is no objection
 for doing so.
  What is your opinion?

  Yes, good idea.

 I had rebased the patch against head and added the test case to validate
 it.
 I will upload this patch to commit fest.

 With Regards,
 Amit Kapila.
 EnterpriseDB: http://www.enterprisedb.com


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




-- 
Rushabh Lathia
www.EnterpriseDB.com


sys_col_constr_v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: lob conversion functionality

2013-09-19 Thread Rushabh Lathia
Hi Pavel,

I have reviewed your patch.

-- Patch got applied cleanly (using patch -p1)
-- Make & Make install work fine
-- make check looks good

I did a code walk-through and it looks good. I also did some manual testing
and haven't found any issues with the implementation.

The patch introduces two new APIs, load_lo() and make_lo(), for loading and
saving from/to large objects. When it comes to dropping an lo object created
using make_lo(), this still depends on the older API lo_unlink(). I think we
should add that to the documentation for clarification.

As a user of the lo functions, when I started testing this new API the first
question that came to mind was why a delete_lo() or destroy_lo() API is
missing. Later I realized that the older lo_unlink() API is needed for that
functionality. So I feel it's good to document that. Do let me know what you
think?

Otherwise patch looks nice and clean.
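For example, a full round trip with the new functions plus the pre-existing
lo_unlink() might look like this (a sketch based on the signatures shown in
the patch; the oid value is the one from Pavel's example and is illustrative):

```sql
SELECT make_lo(decode('ff00', 'hex'));  -- creates a large object, returns its oid
SELECT load_lo(24629);                  -- reads the large object back as bytea
SELECT lo_unlink(24629);                -- dropping still uses the older API
```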

Regards,
Rushabh Lathia
www.EnterpriseDB.com



On Sun, Aug 25, 2013 at 8:31 PM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 here is a patch

 it introduce a load_lo and make_lo functions

 postgres=# select make_lo(decode('ff00','hex'));
  make_lo
 ─
24629
 (1 row)

 Time: 40.724 ms
 postgres=# select load_lo(24628);
   load_lo
 
  \xff00
 (1 row)

 postgres=# \lo_import ~/avatar.png
 lo_import 24630

 postgres=# select md5(load_lo(24630));
md5
 ──
  513f60836f3b625713acaf1c19b6ea78
 (1 row)

 postgres=# \q
 bash-4.1$ md5sum ~/avatar.png
 513f60836f3b625713acaf1c19b6ea78  /home/pavel/avatar.png

 Regards

 Pavel Stehule



 2013/8/22 Jov am...@amutu.com

 +1
 badly need the large object and bytea convert function.

 Once I have to use the ugly pg_read_file() to put some text to pg,I tried
 to use large object but find it is useless without function to convert
 large object to bytea.

 Jov
 blog: http://amutu.com/blog


 2013/8/10 Pavel Stehule pavel.steh...@gmail.com

  Hello

 I had to enhance my older project, where XML documents are parsed and
 created on server side - in PLpgSQL and PLPerl procedures. We would to
 use a LO API for client server communication, but we have to
 parse/serialize LO on server side.

 I found so there are no simple API for working with LO from PL without
 access to file system. I had to use a ugly hacks:

 CREATE OR REPLACE FUNCTION parser.save_as_lob(bytea)
 RETURNS oid AS $$
 DECLARE
   _loid oid;
   _substr bytea;
 BEGIN
   _loid := lo_creat(-1);
   FOR i IN 0..length($1)/2048
   LOOP
 _substr := substring($1 FROM i * 2048 + 1 FOR 2048);
 IF _substr <> '' THEN
   INSERT INTO pg_largeobject(loid, pageno, data)
 VALUES(_loid, i, _substr);
 END IF;
   END LOOP;

   EXECUTE format('GRANT SELECT ON LARGE OBJECT %s TO ohs', _loid);
   RETURN _loid;
 END;
 $$ LANGUAGE plpgsql SECURITY DEFINER STRICT SET search_path =
 'pg_catalog';

 and

 CREATE OR REPLACE FUNCTION fbuilder.attachment_to_xml(attachment oid)
 RETURNS xml AS $$
 DECLARE
   b_cum bytea = '';
   b bytea;
 BEGIN
   FOR b IN SELECT l.data
   FROM pg_largeobject l
  WHERE l.loid = attachment_to_xml.attachment
  ORDER BY l.pageno
   LOOP
 b_cum := b_cum || b;
   END LOOP;
   IF NOT FOUND THEN
 RETURN NULL;
   ELSE
 RETURN xmlelement(NAME attachment,
encode(b_cum, 'base64'));
   END IF;
 END;
 $$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path =
 'pg_catalog';

 These functions can be simplified if we supports some functions like
 encode, decode for LO

 So my proposal is creating functions:

 * lo_encode(loid oid) .. returns bytea
 * lo_encode(loid oid, encoding text) .. returns text
 * lo_make(loid oid, data bytea)
 * lo_make(loid oid, data text, encoding text)

 This can simplify all transformation between LO and VARLENA. Known
 limit is 1G for varlena, but it is still relative enough high.

 Notes. comments?

 Regards

 Pavel











-- 
Rushabh Lathia


Re: [HACKERS] proposal: lob conversion functionality

2013-09-19 Thread Rushabh Lathia
On Thu, Sep 19, 2013 at 10:19 PM, Pavel Stehule pavel.steh...@gmail.com wrote:

 Hello

 here is patch


Looks good.

Marking it as Ready for Committer.




 Regards

 Pavel



 2013/9/19 Pavel Stehule pavel.steh...@gmail.com




 2013/9/19 Rushabh Lathia rushabh.lat...@gmail.com

 Hi Pavel,

 I have reviewed your patch.

 -- Patch got applied cleanly (using patch -p1)
 -- Make & Make install work fine
 -- make check looks good

 I did a code walk-through and it looks good. I also did some manual
 testing and haven't found any issues with the implementation.

 The patch introduces two new APIs, load_lo() and make_lo(), for loading
 and saving from/to large objects. When it comes to dropping an lo object
 created using make_lo(), this still depends on the older API lo_unlink().
 I think we should add that to the documentation for clarification.

 As a user of the lo functions, when I started testing this new API the
 first question that came to mind was why a delete_lo() or destroy_lo()
 API is missing. Later I realized that the older lo_unlink() API is needed
 for that functionality. So I feel it's good to document that. Do let me
 know what you think?


 good idea

 I'll send a updated patch evening



 Otherwise patch looks nice and clean.


 Thank you :)

 Regards

 Pavel


 Regards,
 Rushabh Lathia
 www.EnterpriseDB.com



 On Sun, Aug 25, 2013 at 8:31 PM, Pavel Stehule 
 pavel.steh...@gmail.com wrote:

 Hello

 here is a patch

 it introduce a load_lo and make_lo functions

 postgres=# select make_lo(decode('ff00','hex'));
  make_lo
 ─
24629
 (1 row)

 Time: 40.724 ms
 postgres=# select load_lo(24628);
   load_lo
 
  \xff00
 (1 row)

 postgres=# \lo_import ~/avatar.png
 lo_import 24630

 postgres=# select md5(load_lo(24630));
md5
 ──
  513f60836f3b625713acaf1c19b6ea78
 (1 row)

 postgres=# \q
 bash-4.1$ md5sum ~/avatar.png
 513f60836f3b625713acaf1c19b6ea78  /home/pavel/avatar.png

 Regards

 Pavel Stehule



 2013/8/22 Jov am...@amutu.com

 +1
 badly need the large object and bytea convert function.

 Once I have to use the ugly pg_read_file() to put some text to pg,I
 tried to use large object but find it is useless without function to
 convert large object to bytea.

 Jov
 blog: http://amutu.com/blog


 2013/8/10 Pavel Stehule pavel.steh...@gmail.com

  Hello

 I had to enhance my older project, where XML documents are parsed and
 created on server side - in PLpgSQL and PLPerl procedures. We would to
 use a LO API for client server communication, but we have to
 parse/serialize LO on server side.

 I found so there are no simple API for working with LO from PL without
 access to file system. I had to use a ugly hacks:

 CREATE OR REPLACE FUNCTION parser.save_as_lob(bytea)
 RETURNS oid AS $$
 DECLARE
   _loid oid;
   _substr bytea;
 BEGIN
   _loid := lo_creat(-1);
   FOR i IN 0..length($1)/2048
   LOOP
 _substr := substring($1 FROM i * 2048 + 1 FOR 2048);
 IF _substr <> '' THEN
   INSERT INTO pg_largeobject(loid, pageno, data)
 VALUES(_loid, i, _substr);
 END IF;
   END LOOP;

   EXECUTE format('GRANT SELECT ON LARGE OBJECT %s TO ohs', _loid);
   RETURN _loid;
 END;
 $$ LANGUAGE plpgsql SECURITY DEFINER STRICT SET search_path =
 'pg_catalog';

 and

 CREATE OR REPLACE FUNCTION fbuilder.attachment_to_xml(attachment oid)
 RETURNS xml AS $$
 DECLARE
   b_cum bytea = '';
   b bytea;
 BEGIN
   FOR b IN SELECT l.data
   FROM pg_largeobject l
  WHERE l.loid = attachment_to_xml.attachment
  ORDER BY l.pageno
   LOOP
 b_cum := b_cum || b;
   END LOOP;
   IF NOT FOUND THEN
 RETURN NULL;
   ELSE
 RETURN xmlelement(NAME attachment,
encode(b_cum, 'base64'));
   END IF;
 END;
 $$ LANGUAGE plpgsql STRICT SECURITY DEFINER SET search_path =
 'pg_catalog';

 These functions can be simplified if we supports some functions like
 encode, decode for LO

 So my proposal is creating functions:

 * lo_encode(loid oid) .. returns bytea
 * lo_encode(loid oid, encoding text) .. returns text
 * lo_make(loid oid, data bytea)
 * lo_make(loid oid, data text, encoding text)

 This can simplify all transformation between LO and VARLENA. Known
 limit is 1G for varlena, but it is still relative enough high.

 Notes. comments?

 Regards

 Pavel











 --
 Rushabh Lathia






-- 
Rushabh Lathia


Re: [HACKERS] insert throw error when year field len 4 for timestamptz datatype

2013-09-27 Thread Rushabh Lathia
Sorry for the delay in replying.



On Tue, Sep 17, 2013 at 6:23 PM, Haribabu kommi
haribabu.ko...@huawei.com wrote:

  On Tue, 17 September 2013 14:33 Rushabh Lathia wrote:

 On Mon, Sep 16, 2013 at 7:22 PM, Haribabu kommi 
 haribabu.ko...@huawei.com wrote:

 On 14 August 2013 Rushabh Lathia wrote:

  postgres=# create table test ( a timestamptz);

 CREATE TABLE

  -- Date with year 1000

 postgres=#  insert into test values ( 'Sat Mar 11 23:58:48 1000 IST');

 INSERT 0 1

  -- Now try with year 1 it will return error

 postgres=#  insert into test values ( 'Sat Mar 11 23:58:48 1 IST');
 

 ERROR:  invalid input syntax for type timestamp with time zone: Sat
 Mar 11 23:58:48 1 IST 

 LINE 1: insert into test values ( 'Sat Mar 11 23:58:48 1 IST');

 Patch applies cleanly to HEAD. As this patch tries to improve in
 inserting the date of the year value to be more than 4 in length.

 But it didn’t solve all the ways to insert the year field more than 4
 in length. Please check the following test.


  postgres=# insert into test values ('10001010 10:10:10 IST');

 INSERT 0 1

 postgres=# insert into test values ('100011010 10:10:10 IST');

 ERROR:  invalid input syntax for type timestamp with time zone:
 100011010 10:10:10 IST at character 26

 STATEMENT:  insert into test values ('100011010 10:10:10 IST');

 ERROR:  invalid input syntax for type timestamp with time zone:
 100011010 10:10:10 IST

 LINE 1: insert into test values ('100011010 10:10:10 IST');

 ^

  I feel it is better to provide the functionality of inserting year
 field more than 4 in length in all flows.


 +1. Nice catch.


 Here is the latest version of patch which handles the functionality in
 all flows. 

 Could you test it and share you comments.


 I am getting some other failures with the updated patch also, please check
 the following tests.


 select date 'January 8, 19990';

 select timestamptz 'January 8, 199910 01:01:01 IST';

 INSERT INTO TIMESTAMPTZ_TST VALUES(4, '10001 SAT 8 MAR 10:10:10 IST');


 you can get the test scripts from regress test files of date.sql,
 timetz.sql, timestamp.sql and timestamptz.sql

 and modify according to the patch for verification.


 I feel changing the year value to accept the length (>4) is not simple.

 So many places the year length crossing more than length 4 is not
 considered.

 Search in the code with “” and correct all related paths.


Right, changing the year parsing to accept lengths greater than 4 is not
simple, because the year length comes into play in many places, and most of
them assume that it will always be 4 digits.

I tried to fix the issue in a couple more places, but I don't feel it is
safe to assume that we have covered all paths.

I am still looking, and wondering whether we can make the change in one
simple place, or whether we can find a smarter way to fix the issue.



 


 Regards,

 Hari babu.





-- 
Rushabh Lathia


Re: [HACKERS] insert throw error when year field len 4 for timestamptz datatype

2013-10-03 Thread Rushabh Lathia
Thanks Bruce.

Yes, for me the main problem was making the assumption that a 5-digit number
is a year, as I was a bit worried about the side effects of that assumption
in the date/time module. I tested the patch you shared with various tests,
and so far it looks good to me.

I would like the reviewer to review/test the patch and share comments.

Attaching the git patch again with this mail.

Assigning to Reviewer.

Regards,
Rushabh


On Wed, Oct 2, 2013 at 9:34 PM, Bruce Momjian br...@momjian.us wrote:

 On Wed, Oct  2, 2013 at 11:00:30AM -0400, Robert Haas wrote:
  On Tue, Oct 1, 2013 at 7:52 PM, Bruce Momjian br...@momjian.us wrote:
   On Fri, Sep 27, 2013 at 10:42:17AM +, Haribabu kommi wrote:
   If the changes are very high to deal all scenarios,
  
   I feel it is better do it only in scenarios where the use cases needs
 it, until
   it is not confusing users.
  
   The rest can be documented.
  
   Any other opinions/suggestions welcome.
  
   I have reviewed this patch and it is good.  The problem is guessing if
 a
   number with 5+ digits is YMD, HMS, or a year.  I have created a
 modified
   patch, attached, assumes a 5-digit number is a year, because YMD and
 HMS
   require at least six digits, and used your date/time test to control
 the
   other cases.  I also added a few more regression tests.
 
  In an ideal world the interpretation of the tokens wouldn't depend on
  the order in which they appear.  But we don't live in an ideal world,
  so maybe this is fine.

 Yes, earlier in the thread the original patch poster questioned whether
 he was going in the right direction, given the unusual hacks needed, but
 such hacks are standard operating procedure for date/time stuff.

 --
   Bruce Momjian  br...@momjian.us    http://momjian.us
   EnterpriseDB http://enterprisedb.com

   + It's impossible for everything to be true. +




-- 
Rushabh Lathia


timestamptz_fix_with_testcase_v3.patch
Description: Binary data



Re: [HACKERS] insert throw error when year field len 4 for timestamptz datatype

2013-10-06 Thread Rushabh Lathia
On Fri, Oct 4, 2013 at 11:35 PM, Bruce Momjian br...@momjian.us wrote:

 On Fri, Oct  4, 2013 at 10:19:38AM +, Haribabu kommi wrote:
 
  On 03 October 2013 19:30 Bruce Momjian wrote:
  On Thu, Oct  3, 2013 at 11:54:14AM +0530, Rushabh Lathia wrote:
   Thanks Bruce.
  
   Yes for me main problem was to make assumption that a 5-digit number
   is a year, as was bit worried about side effect of that assumption in
   the date/time module. I did tested patch shared by you with various
   test and so far it looks good to me.
  
   I would like reviewer to review/test the patch and share his comments.
  
   Attaching the git patch again with this mail.
  
   Assigning to Reviewer.
 
  Oh, great.  If everyone likes it I can apply it.
 
  With Year length of 6 digits has some inconsistency problem,
  The tests are carried out on a default configuration.

 The general limitation we have is that while we know 5-digit numbers
 can't be YMD or HMS, we don't know that for 6-digit values, so we
 require that the string contain _a_ date and _a_ time specification
 before we consider a six-digit number as a year.  I don't see how we can
 do any better than that.  Your results below show that behavior.  Do you
 have a suggestion for improvement?


Hmm, right, it has some inconsistency when the year length is 6. But the
patch is based on the assumption that a 5-digit number is a year, because
YMD and HMS require at least six digits. A 6-digit year therefore conflicts
with YMD and HMS, which is why it ends up in an error. So with the patch's
approach that is the expected behaviour, to me.

I spent a good amount of time thinking about how we can improve the
behaviour, or how we can change the assumptions about the year field, YMD
and HMS. At this point it seems difficult, because the postgres date module
is tightly built around a few assumptions, and changing them could turn into
a big project. Personally, I feel the patch that was submitted earlier is
definitely a good improvement.

Any other suggestions or thoughts for improvement?
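To restate the rule the patch implements (my paraphrase, with statements
taken from Haribabu's tests earlier in the thread):

```sql
-- A bare 5-digit number cannot be a packed YMD or HMS value (those need
-- at least six digits), so the patch safely treats it as a year:
select timestamptz '19991 01 08 01:01:01 IST';   -- works
-- A 6-digit number is treated as a year only when the string already
-- contains both a date and a time; otherwise it could be YYMMDD or
-- HHMMSS, so it is rejected as ambiguous:
select timestamptz '199910 01 08 01:01:01 IST';  -- fails
```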



 ---

   select timestamptz '199910108 01:01:01 IST'; -- works
   select timestamptz '19991 01 08 01:01:01 IST'; -- works
   select timestamptz '1999100108 01:01:01 IST'; -- works
   select timestamptz '199910 01 08 01:01:01 IST'; -- Not working
 
   select timestamptz 'January 8, 19991 01:01:01 IST'; -- works
   select timestamptz 'January 8, 199910 01:01:01 IST'; -- Not working
 
   CREATE TABLE TIMESTAMPTZ_TST (a int , b timestamptz);
   INSERT INTO TIMESTAMPTZ_TST VALUES(1, '10312 23:58:48 IST'); --
 works
   INSERT INTO TIMESTAMPTZ_TST VALUES(2, '1 03 12 23:58:48 IST'); --
 works
   INSERT INTO TIMESTAMPTZ_TST VALUES(3, '100312 23:58:48 IST'); --
 works
   INSERT INTO TIMESTAMPTZ_TST VALUES(4, '10 03 12 23:58:48 IST'); --
 Not working
 
  please correct me if anything wrong in the tests.
 
  Regards,
  Hari babu.

 --
   Bruce Momjian  br...@momjian.us    http://momjian.us
   EnterpriseDB http://enterprisedb.com

   + It's impossible for everything to be true. +




-- 
Rushabh Lathia


Re: [HACKERS] insert throw error when year field len 4 for timestamptz datatype

2013-10-08 Thread Rushabh Lathia
On Tue, Oct 8, 2013 at 1:34 AM, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Oct 7, 2013 at 12:41 AM, Rushabh Lathia
 rushabh.lat...@gmail.com wrote:
  Hmm right it has some inconsistency when year length is 6. But the patch
  is based on assumption that 5-digit number is a year, because YMD and HMS
  require at least six digits. Now Year with 6-digit number its getting
  conflict with
  YMD and HMS, that the reason its ending up with error. So with patch
  approach
  that's an expected behaviour for me.
 
  I spent good amount of time on thinking how we can improve the
 behaviour, or
  how can be change the assumption about the year field, YMD and HMS. At
  current point of time it seems difficult to me because postgres date
 module
  is tightly build with few assumption and changing that may lead to big
  project.
  Not sure but personally I feel that patch which was submitted earlier was
  definitely good improvement.
 
  Any other suggestion or thought for improvement ?

 I'm not entirely convinced that this patch is heading in the right
 direction.  The thing is, it lets you use 5-digit years always and
 longer years only in some contexts.  So I'm not sure this is really
 good enough for unambiguous date input.  If you want that, you should
 probably be using trusty YYY-MM-DD format.  But if you don't
 need that, then isn't a five-digit year most likely a typo?


I do agree with you to a certain extent.

But there is already ambiguity in the postgres date module:

For example:
-- Doing select with year field > 4
edb=# select '10-10-2'::timestamp;
 timestamp
---
 Thu Oct 10 00:00:00 2
(1 row)

edb=# create table test ( a timestamp );
CREATE TABLE
-- When try to insert it throw an error
edb=# insert into test values ('Thu Oct 10 00:00:00 2');
ERROR:  invalid input syntax for type timestamp: Thu Oct 10 00:00:00 2
LINE 1: insert into test values ('Thu Oct 10 00:00:00 2');
 ^
Of course, the user can use a specific date format, and then this kind of
date can be used.


This
 might be a case where throwing an error is actually better than trying
 to make sense of the input.

 I don't feel super-strongly about this, but I offer it as a question
 for reflection.



At the same time, I do agree that fixing this kind of issue in the postgres
datetime module is a bit difficult without some assumptions. Personally, I
feel the patch does add some value, but it is not fully compatible with
every kind of year-field format.

Bruce,

Do you have any thought/suggestion ?




 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company




-- 
Rushabh Lathia


Re: [HACKERS] ToDo: fast update of arrays with fixed length fields for PL/pgSQL

2013-11-21 Thread Rushabh Lathia
Hi,

I started reviewing this patch. I went through the mail-thread discussion to
understand the need for the patch. With the patch there is a significant
performance improvement when updating array scalar variables.

- Patch applies cleanly on the master branch
- make, make install and make check work fine

I did a code walk-through, and the code changes look good to me. I did some
testing in the area of scalar array variables and domain types. For big
array sizes I noticed a significant performance improvement. So far I
haven't found any issues with the code.

Patch looks good to me.

Just FYI: the array_fast_update GUC needs to be removed in the final version
of the commit.

Regards,
Rushabh


On Mon, Oct 7, 2013 at 7:52 PM, Pavel Stehule pavel.steh...@gmail.com wrote:




 2013/10/7 Andres Freund and...@2ndquadrant.com

 On 2013-10-07 16:00:54 +0200, Pavel Stehule wrote:
/*
 * We need to do subscript evaluation,
 which might require
  @@ -4321,6 +4322,14 @@ exec_assign_value(PLpgSQL_execstate *estate,
oldarrayval = (ArrayType *)
 DatumGetPointer(oldarraydatum);
 
/*
  +  * support fast update for array scalar
 variable is enabled only
  +  * when target is a scalar variable and
 variable holds a local
  +  * copy of some array.
  +  */
  + inplace_update = (((PLpgSQL_datum *) target)->dtype == PLPGSQL_DTYPE_VAR
  +   && ((PLpgSQL_var *) target)->freeval);
  +
  + /*
 * Build the modified array value.
 */

 Will this recognize if the local Datum is just a reference to an
 external toast Datum (i.e. varattrib_1b_e)?


 this works on plpgsql local copies only (when cleaning is controlled by
 plpgsql) - so it should be untoasted every time.

 btw when I tested this patch I found a second plpgsql array issue -
 repeated untoasting :( and we should not forget it.




 I don't know much about plpgsql's implementation, so please excuse if
 the question is stupid.

 Greetings,

 Andres Freund

 --
  Andres Freund http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training  Services





-- 
Rushabh Lathia


[HACKERS] ERROR: missing chunk number 0 for toast value

2014-01-02 Thread Rushabh Lathia
Hi All,

Test case:

drop table if exists t;
create table t(c text);
insert into t values ('x'), (repeat(md5('abcdefghijklmnop'), 1));
select pg_column_size(c), pg_column_size(c || '') FROM t;

CREATE OR REPLACE FUNCTION copy_toast_out() RETURNS VOID AS $$
declare
v text;
BEGIN
SELECT c INTO v FROM t WHERE c <> 'x';
Select 1/0;
Exception
When Others Then
PERFORM pg_sleep(30); -- go run TRUNCATE t in a 2nd session


raise notice 'length :%', length(v || ''); -- force detoast


END;
$$ language plpgsql;

postgres=# select copy_toast_out();
ERROR:  missing chunk number 0 for toast value 16390 in pg_toast_16384
CONTEXT:  PL/pgSQL function copy_toast_out() line 10 at RAISE

Analysis:

The basic problem here is that if the lock on the table is released before
the toasted value is extracted, and in the meantime someone truncates the
table, this error can occur. Here the error comes up because the PL block
contains an Exception block (when there is an exception block, it calls
RollbackAndReleaseCurrentSubTransaction).

Do you think we should detoast the local variable before
RollbackAndReleaseCurrentSubTransaction? Or are there any other options?
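One PL/pgSQL-level mitigation (a sketch of my own, not a proposed server
fix) is to force detoasting when the value is assigned, before the exception
machinery can release the subtransaction's resources — the same trick the
reproducer uses later with length(v || ''):

```sql
-- Appending the empty string forces the value to be detoasted into the
-- local variable immediately, while the table lock is still held.
SELECT c || '' INTO v FROM t WHERE c <> 'x';
```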

Regards,
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Display oprcode and its volatility in \do+

2014-01-16 Thread Rushabh Lathia
Hi,

I have reviewed your patch.

-- Patch got applied cleanly (using patch -p1)
-- Make & Make install work fine
-- make check looks good

I did a code walk-through and it looks good. I also did some manual testing
and haven't found any issues with the implementation.

I personally feel the Function and Volatility columns are nice information
to have in \do+.

Marking it as ready for committer.

Regards,
Rushabh Lathia
www.EnterpriseDB.com



On Fri, Jan 10, 2014 at 7:12 AM, Marko Tiikkaja ma...@joh.to wrote:

 Hi,

 Here's a patch for $SUBJECT, displaying information which I find quite
 tedious to locate using alternative methods.  Hopefully someone else does,
 too.  Or doesn't.  Not sure.


 Regards,
 Marko Tiikkaja


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




-- 
Rushabh Lathia


[HACKERS] NOT Null constraint on foreign table not working

2014-01-19 Thread Rushabh Lathia
Hello,

Please consider the following test:

create database foo;
\c foo

create table foo_test ( a int );

\c postgres

create extension if not exists postgres_fdw;
create server foo_server foreign data wrapper postgres_fdw options ( dbname
 'foo' );
create user mapping for current_user server foo_server;

create foreign table foo_test ( a int not null) server foo_server;

-- insert should return an error because of the NOT NULL constraint on column a
postgres=# insert into foo_test values ( null );
INSERT 0 1

postgres=# select * from foo_test;
 a
---

(1 row)

-- clean up
drop foreign table foo_test;
drop server foo_server cascade;
\c postgres
drop database foo;


Analysis:

As per the PG documentation, foreign tables do support NOT NULL, NULL and
DEFAULT constraints.

http://www.postgresql.org/docs/devel/static/sql-createforeigntable.html

But when I tried it, the NOT NULL constraint is not enforced for foreign
tables. Looking at the code in ExecInsert(), the foreign-table path misses
the call to ExecConstraints(). I am not sure whether skipping
ExecConstraints() in the foreign-table case is intentional or an oversight.
Do share your thoughts on this.

I quickly fixed the issue by adding an ExecConstraints() call for foreign
tables, and now the test behaves as expected. PFA a patch for the same.

Regards,
Rushabh Lathia
www.EnterpriseDB.com
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 6f0f47e..240126c 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -229,6 +229,12 @@ ExecInsert(TupleTableSlot *slot,
 	else if (resultRelInfo->ri_FdwRoutine)
 	{
 		/*
+		 * Check the constraints of the tuple
+		 */
+		if (resultRelationDesc->rd_att->constr)
+			ExecConstraints(resultRelInfo, slot, estate);
+
+		/*
 		 * insert into foreign table: let the FDW do it
 		 */
 		slot = resultRelInfo->ri_FdwRoutine->ExecForeignInsert(estate,



Re: [HACKERS] NOT Null constraint on foreign table not working

2014-01-19 Thread Rushabh Lathia
Please consider the patch attached here, as the earlier mail attached the
wrong patch.

Sorry for the inconvenience.


On Mon, Jan 20, 2014 at 1:21 PM, Rushabh Lathia rushabh.lat...@gmail.com wrote:

 Hello,

 Please consider the following test:

 create database foo;
 \c foo

 create table foo_test ( a int );

 \c postgres

 create extension if not exists postgres_fdw;
 create server foo_server foreign data wrapper postgres_fdw options (
 dbname  'foo' );
 create user mapping for current_user server foo_server;

 create foreign table foo_test ( a int not null) server foo_server;

 -- insert should return error for because NOT NULL constraint on column a
 postgres=# insert into foo_test values ( null );
 INSERT 0 1

 postgres=# select * from foo_test;
  a
 ---

 (1 row)

 -- clean up
 drop foreign table foo_test;
 drop server foo_server cascade;
 \c postgres
 drop database foo;


 Analysis:

 As per the PG documentation it says that foreign table do support the
 NOT NULL, NULL and DEFAULT.

 http://www.postgresql.org/docs/devel/static/sql-createforeigntable.html

 But when I tried the NOT NULL constraint, its not working for the foreign
 tables.
 Looking at the code into ExecInsert(), for the foreign table missed to call
 ExecConstraints(). I am not sure whether it is intentional that  we not
 calling
 ExecConstraints() in case of foreign server or its missed.
 Do share your thought on this.

 I quickly fix the issue by adding ExecConstraints() call for foreign table
 and
 now test behaving as expected. PFA patch for the same.

 Regards,
 Rushabh Lathia
 www.EnterpriseDB.com




-- 
Rushabh Lathia
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 6f0f47e..f745f5e 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -229,6 +229,12 @@ ExecInsert(TupleTableSlot *slot,
 	else if (resultRelInfo->ri_FdwRoutine)
 	{
 		/*
+		 * Check the constraints of the tuple
+		 */
+		if (resultRelationDesc->rd_att->constr)
+			ExecConstraints(resultRelInfo, slot, estate);
+
+		/*
 		 * insert into foreign table: let the FDW do it
 		 */
 		slot = resultRelInfo->ri_FdwRoutine->ExecForeignInsert(estate,
@@ -641,6 +647,9 @@ ExecUpdate(ItemPointer tupleid,
 	}
 	else if (resultRelInfo->ri_FdwRoutine)
 	{
+		if (resultRelationDesc->rd_att->constr)
+			ExecConstraints(resultRelInfo, slot, estate);
+
 		/*
 		 * update in foreign table: let the FDW do it
 		 */



Re: [HACKERS] NOT Null constraint on foreign table not working

2014-01-20 Thread Rushabh Lathia
On Mon, Jan 20, 2014 at 8:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Rushabh Lathia rushabh.lat...@gmail.com writes:
  As per the PG documentation it says that foreign table do support the
  NOT NULL, NULL and DEFAULT.

 There has been a great deal of debate about what constraints on foreign
 tables ought to mean.  Right now, at least for postgres_fdw, they're just
 taken as documentation of constraints that are supposed to exist on the
 far side.  It's not clear what's the point of trying to enforce them
 against insertions done locally if the remote table lacks them --- any
 table update done on the far side could still violate the constraint.

 We might change this in response to a well-reasoned argument, but it won't
 happen just because somebody lobs a quick-and-dirty patch over the fence.

 If we were going to enforce them locally, I'd opine it should be the FDW's
 task to do it, anyway.  It might have more knowledge about the best way
 to do it than nodeModifyTable.c can, and if not it could still call
 ExecConstraints for itself.


The submitted patch was never intended to be checked in without proper
discussion and a decision about what the behaviour of foreign-table
constraints should be. Sorry if I sent the wrong message; that was never the
intention.

I find constraints on foreign tables very useful for applications where
multiple users access the same remote table through the FDW, and each user
wants to enforce a different constraint, or a different DEFAULT expression,
on the same table column.

I agree with you that if we want to enforce constraints locally, it should
be the FDW's task to do it rather than nodeModifyTable.c's.

Regards
Rushabh Lathia


Re: [HACKERS] pg_upgrade: make the locale comparison more tolerating

2014-01-24 Thread Rushabh Lathia
Hello,

I started reviewing the patch. I went through the original mail thread to
understand the need for the fix and the actual problem.

http://www.postgresql.org/message-id/20121002155857.ge30...@momjian.us

The patch uses pg_valid_server_encoding() to compare the encodings, which
looks more correct. I did a code walk-through and it looks good to me. I
tried the patch on CentOS and it is working fine. I am not very
knowledgeable about other OSes, so from that perspective it would be good
to have others' views.

Here are my comments on the patch:

.) Patch applies cleanly on master (using patch -p1)
.) Compilation completes successfully
.) Code walk-through done; the logic looks good
.) Manual testing worked well for me

To test the issue, I set the old database's locale to en_US.utf8 and the new
database's locale to en_US.UTF-8. With this, pg_upgrade failed with
"lc_collate cluster values do not match: old en_US.utf8, new en_US.UTF-8".
With the patch, pg_upgrade runs fine.

Regards,


Re: [HACKERS] pg_upgrade: make the locale comparison more tolerating

2014-01-30 Thread Rushabh Lathia
Hi Pavel,

I looked at your latest cleanup patch. Yes, it looks cleaner with
equivalent_locale and equivalent_encoding as separate functions.

I ran the tests again on the latest patch and all looks good.

Marking it as ready for committer.

Regards,



On Fri, Jan 24, 2014 at 9:49 PM, Pavel Raiskup prais...@redhat.com wrote:

 Rushabh, really sorry I have to re-create the patch and thanks a
 lot for looking at it!

 Looking at the patch once again, I see that there were at least two
 problems.  Firstly, I used the equivalent_locale function also on the
 encoding values.  Even if that should not cause bugs (as it should result
 in strncasecmp anyway), it was not pretty..

 The second problem was assuming that the locale specifier A is not
 longer than locale specifier B.  Comparisons like 'en_US.utf8' with
 'en_US_.utf8' would result in success.  The bug resulting from this mistake
 is probably not real, but it is not nice anyway..

 Rather cleaning the patch once more, attached,
 Pavel




-- 
Rushabh Lathia


Re: [HACKERS] RETURNING PRIMARY KEY syntax extension

2014-06-25 Thread Rushabh Lathia
Hello All,

I have assigned myself as reviewer of the patch. I went through the
mail thread discussion; questions have been raised there about
the feature and its implementation, so I would like to know
the current status of this project/patch.

Regards,


On Thu, Jun 12, 2014 at 5:53 PM, Ian Barwick i...@2ndquadrant.com wrote:

 On 14/06/12 20:58, Jochem van Dieten wrote:
  On Thu, Jun 12, 2014 at 12:25 PM, Ian Barwick wrote:
 
  On 14/06/12 18:46, Jochem van Dieten wrote:
   I haven't checked the code, but I am hoping it will help with the
 problem
   where a RETURNING * is added to a statement that is not an insert
 or update
   by the JDBC driver. That has been reported on the JDBC list at
 least twice,
   and the proposed workaround is neither very elegant nor very
 robust:
  
 https://groups.google.com/forum/#!msg/pgsql.interfaces.jdbc/7WY60JX3qyo/-v1fqDqLQKwJ
 
  Unfortunately that seems to be a JDBC-specific issue, which is
 outside
  of the scope of this particular patch (which proposes additional
 server-side
  syntax intended to make RETURNING * operations more efficient for
  certain use cases, but which is in itself not a JDBC change).
 
 
  But the obvious way to fix the JDBC issue is not to fix it by adding a
 'mini parser' on
  the JDBC side, but to make SELECT ... RETURNING PRIMARY KEY a regular
 select that silently
  ignores the returning clause and doesn't throw an error on the
 server-side.
 
  That might still be outside the scope of this particular patch, but it
 would provide
  (additional) justification if it were supported.

 That would be adding superfluous, unused and unusable syntax of no
 potential value
 (there is no SELECT ... RETURNING and it wouldn't make any sense if there
 was) as a
 workaround for a driver issue - not going to happen.

 Regards

 Ian Barwick


 --
  Ian Barwick   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services


 --
 Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-hackers




-- 
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] RETURNING PRIMARY KEY syntax extension

2014-06-26 Thread Rushabh Lathia
Thanks for sharing the latest patch.

For me this syntax limits the current RETURNING clause implementation,
because it does not allow returning the PRIMARY KEY together with other
columns; if a user or application needs that, they have to fall back to
RETURNING *. The following syntax does not work, which I think would be
very useful for anyone who wants a RETURNING clause with the PRIMARY KEY as
well as some other table columns:

INSERT INTO dept VALUES (10,'ACCOUNTING','NEW YORK') returning primary key,
dname;

I think allowing other columns along with PRIMARY KEY would be a more useful
syntax. Also, if in later versions we want to extend this syntax to return
UNIQUE KEY, SEQUENCE VALUES, etc., a comma-separated syntax will be more
handy.

Others, please share your inputs/thoughts.



On Thu, Jun 26, 2014 at 8:11 AM, Ian Barwick i...@2ndquadrant.com wrote:

 On 25/06/14 16:04, Ian Barwick wrote:

 Hi

 On 14/06/25 15:13, Rushabh Lathia wrote:

 Hello All,

 I have assigned myself as reviewer of the patch. I went through the
 mail thread discussion; questions have been raised there about
 the feature and its implementation, so I would like to know
 the current status of this project/patch.

 Regards,


 I'll be submitting a revised version of this patch very shortly.


 Revised version of the patch attached, which implements the expansion
 of primary key in the rewrite phase per Tom Lane's suggestion upthread
 [*]

 [*] http://www.postgresql.org/message-id/28583.1402325...@sss.pgh.pa.us



 Regards

 Ian Barwick

 --
  Ian Barwick   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




-- 
Rushabh Lathia


Re: [HACKERS] RETURNING PRIMARY KEY syntax extension

2014-07-01 Thread Rushabh Lathia
I spent some more time on the patch; here are my review comments.

.) The patch applies with patch -p1 (git apply fails)

.) Trailing whitespace in the patch at various places

.) Unnecessary new-line + and - hunks in the patch
(src/backend/rewrite/rewriteManip.c::getInsertSelectQuery())
(src/include/rewrite/rewriteManip.h)

.) The patch has proper test coverage and the regression tests run cleanly.

.) In map_primary_key_to_list() the patch uses the
INDEX_ATTR_BITMAP_IDENTITY_KEY bitmap to get the key columns. In
IndexAttrBitmapKind there is also INDEX_ATTR_BITMAP_KEY, so I was confused
between INDEX_ATTR_BITMAP_KEY and INDEX_ATTR_BITMAP_IDENTITY_KEY, and I did
not find any related comments in the code. Later, with a test case and some
debugging, I confirmed that INDEX_ATTR_BITMAP_IDENTITY_KEY returns only the
primary key.

.) At present the patch throws an error when RETURNING PRIMARY KEY is used
on a table that has no primary key. If I am not wrong, JDBC will be using
this in getGeneratedKeys(). Currently that feature is implemented in the
JDBC driver by appending RETURNING * to the supplied statement; with this
implementation it will replace RETURNING * with RETURNING PRIMARY KEY,
right? So I am wondering what JDBC expects getGeneratedKeys() to return
when the table has no primary key and the user called executeUpdate() with
Statement.RETURN_GENERATED_KEYS. I looked at the JDBC specification, but it
is not clear what should be returned when the table has no keys. Can you
please let us know your findings on this?

.) I have already raised my concern about the syntax limiting the current
RETURNING clause implementation.



On Fri, Jun 27, 2014 at 6:22 AM, Ian Barwick i...@2ndquadrant.com wrote:



 On 27/06/14 09:09, Tom Dunstan wrote:

  On 27 June 2014 06:14, Gavin Flower gavinflo...@archidevsys.co.nz wrote:

 On 27/06/14 00:12, Rushabh Lathia wrote:

 INSERT INTO dept VALUES (10,'ACCOUNTING','NEW YORK') returning
 primary key, dname;

 I think allowing other columns with PRIMARY KEY would be more
 useful syntax.
 Even in later versions if we want to extend this syntax to return
 UNIQUE KEY,
 SEQUENCE VALUES, etc.. comma separation syntax will be more handy.


 I agree 100%.


 If the query is being hand-crafted, what's to stop the query writer from
 just listing the

  id columns in the returning clause? And someone specifying RETURNING *
 is getting all the
  columns anyway.


 The target use-case for this feature is a database driver that has just
 done an insert and

  doesn't know what the primary key columns are - in that case mixing them
 with any other
  columns is actually counter-productive as the driver won't know which
 columns are which.
  What use cases are there where the writer of the query knows enough to
 write specific columns
  in the RETURNING clause but not enough to know which column is the id
 column?


 Consistency is nice, and I can understand wanting to treat the PRIMARY
 KEY bit as just
 another set of columns in the list to return, but I'd hate to see this
 feature put on

  the back-burner to support use-cases that are already handled by the
 current RETURNING
  feature. Maybe it's easy to do, though.. I haven't looked into the
 implementation at all.

 Normal columns are injected into the query's returning list at parse time,
 whereas
 this version of the patch handles expansion of PRIMARY KEY at the rewrite
 stage, which
 would make handling a mix of PRIMARY KEY and normal output expressions
 somewhat tricky
 to handle. (In order to maintain the columns in their expected position
 you'd
 have to add some sort of placeholder/dummy TargetEntry to the returning
 list at parse
 time, then rewrite it later with the expanded primary key columns, or
 something
 equally messy).

 On the other hand, it should be fairly straightforward to handle a list of
 keywords
 for expansion (e.g. RETURNING PRIMARY KEY, UNIQUE KEYS, SEQUENCE VALUES)
 should
 the need arise.



 Regards

 Ian Barwick

 --
  Ian Barwick   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




-- 
Rushabh Lathia


Re: [HACKERS] RETURNING PRIMARY KEY syntax extension

2014-07-03 Thread Rushabh Lathia
On Thu, Jul 3, 2014 at 6:56 AM, Ian Barwick i...@2ndquadrant.com wrote:

 On 01/07/14 21:00, Rushabh Lathia wrote:


 I spent some more time on the patch and here are my review comments.

 .) Patch gets applied through patch -p1 (git apply fails)

 .) trailing whitespace in the patch at various places


 Not sure where you see this, unless it's in the tests, where it's
 required.


Please ignore the whitespace comment; somehow that was due to my local
changes in the branch.



  .) Unnecessary new line + and - in the patch.
 (src/backend/rewrite/rewriteManip.c::getInsertSelectQuery())
 (src/include/rewrite/rewriteManip.h)


 Fixed.


  .) patch has proper test coverage and regression running cleanly.

 .) In map_primary_key_to_list() patch using INDEX_ATTR_BITMAP_IDENTITY_KEY
 bitmap to get the keycols. In IndexAttrBitmapKind there is also
 INDEX_ATTR_BITMAP_KEY, so was confuse between INDEX_ATTR_BITMAP_KEY and
 INDEX_ATTR_BITMAP_IDENTITY_KEY and also haven't found related comments in
 the code. Later with use of testcase and debugging found confirmed that
 INDEX_ATTR_BITMAP_IDENTITY_KEY only returns the Primary key.


 Revised patch version (see other mail) fixes this by introducing
 INDEX_ATTR_BITMAP_PRIMARY_KEY.


Looks good.



  .) At present in patch when RETURNING PRIMARY KEY is used on table having
 no
 primary key it throw an error. If I am not wrong JDBC will be using that
 into
 getGeneratedKeys(). Currently this feature is implemented in the JDBC
 driver by
 appending RETURNING * to the supplied statement. With this
 implementation
 it will replace RETURNING * with RETURNING PRIMARY KEY, right ? So
 just
 wondering what JDBC expect getGeneratedKeys() to return when query don't
 have primary key and user called executeUpdate() with
 Statement.RETURN_GENERATED_KEYS? I looked at JDBC specification but its
 not
 clear what it will return when table don't have keys. Can you please let
 us
 know your finding on this ?


 The spec [*] is indeed frustratingly vague:

 The method Statement.getGeneratedKeys, which can be called to retrieve
 the generated
 value, returns a ResultSet object with a column for each automatically
 generated value.
 The methods execute, executeUpdate or Connection.prepareStatement
 accept an optional
 parameter, which can be used to indicate that any auto generated
 values should be
 returned when the statement is executed or prepared.

 [*] http://download.oracle.com/otn-pub/jcp/jdbc-4_1-mrel-
 spec/jdbc4.1-fr-spec.pdf

 I understand this to mean that no rows will be returned if no
 auto-generated values are present.

 As-is, the patch will raise an error if the target table does not have a
 primary key,
 which makes sense from the point of view of the proposed syntax, but which
 will
 make it impossible for the JDBC driver to implement the above
 understanding of the spec
 (i.e. return nothing if no primary key exists).

The basic reason for this patch is to allow JDBC to use this syntax for
getGeneratedKeys(). Since this patch throws an error when the table does
not have a primary key, it won't be usable for the getGeneratedKeys()
implementation.


 It would be simple enough not to raise an error in this case, but that
 means the
 query would be effectively failing silently and I don't think that's
 desirable
 behaviour.


Yes, silently not raising an error would not be desirable behaviour.


 A better solution would be to have an optional IF EXISTS clause:

   RETURNING PRIMARY KEY [ IF EXISTS ]

 which would be easy enough to implement.


 Thoughts?


Hmm, I like the idea of adding [ IF EXISTS ]. Looking at the grammar, so
far we have the [ IF EXISTS ] option only for DDL commands (mainly DROP);
I am not sure whether it is OK to use such syntax for DML commands.

Others, please share your thoughts/comments.



 Ian Barwick

 --
  Ian Barwick   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services




-- 
Rushabh Lathia


Re: [HACKERS] RETURNING PRIMARY KEY syntax extension

2014-07-03 Thread Rushabh Lathia
On Fri, Jul 4, 2014 at 7:37 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Tom Dunstan pg...@tomd.cc writes:
  On 4 July 2014 00:07, Tom Lane t...@sss.pgh.pa.us wrote:
  It sounds to me like designing this for JDBC's getGeneratedKeys method
  is a mistake.  There isn't going to be any way that the driver can
 support
  that without having looked at the table's metadata for itself, and if
  it's going to do that then it doesn't need a crutch that only partially
  solves the problem anyhow.

  Sure it can - it append RETURNING PRIMARY KEY and hand back a ResultSet
  from whatever was returned. It's CURRENTLY doing that, but it's appending
  RETURNING * and leaving it up to the caller of getGeneratedKeys() to work
  out which columns the caller is interested in.

 I might be mistaken, but as I read the spec for getGeneratedKeys, whether
 or not a column is in a primary key has got nothing to do with whether
 that column should be returned.  It looks to me like what you're supposed
 to return is columns with computed default values, eg, serial columns.
 (Whether we should define it as being *exactly* columns created by the
 SERIAL macro is an interesting question, but for sure those ought to be
 included.)  Now, the pkey might be a serial column ... but it equally
 well might not be; it might not have any default expression at all.
 And there certainly could be serial columns that weren't in the pkey.


100% agree with Tom.


 The fact that the spec is kinda fuzzy doesn't entitle us to ignore the
 plain meaning of the term generated key.

 regards, tom lane




-- 
Rushabh Lathia


Re: [HACKERS] RETURNING PRIMARY KEY syntax extension

2014-07-07 Thread Rushabh Lathia
On Mon, Jul 7, 2014 at 5:50 PM, Robert Haas robertmh...@gmail.com wrote:

 On Fri, Jul 4, 2014 at 12:36 AM, Rushabh Lathia
 rushabh.lat...@gmail.com wrote:
  On Fri, Jul 4, 2014 at 7:37 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  Tom Dunstan pg...@tomd.cc writes:
   On 4 July 2014 00:07, Tom Lane t...@sss.pgh.pa.us wrote:
   It sounds to me like designing this for JDBC's getGeneratedKeys
 method
   is a mistake.  There isn't going to be any way that the driver can
   support
   that without having looked at the table's metadata for itself, and if
   it's going to do that then it doesn't need a crutch that only
 partially
   solves the problem anyhow.
 
   Sure it can - it append RETURNING PRIMARY KEY and hand back a
 ResultSet
   from whatever was returned. It's CURRENTLY doing that, but it's
   appending
   RETURNING * and leaving it up to the caller of getGeneratedKeys() to
   work
   out which columns the caller is interested in.
 
  I might be mistaken, but as I read the spec for getGeneratedKeys,
 whether
  or not a column is in a primary key has got nothing to do with whether
  that column should be returned.  It looks to me like what you're
 supposed
  to return is columns with computed default values, eg, serial columns.
  (Whether we should define it as being *exactly* columns created by the
  SERIAL macro is an interesting question, but for sure those ought to be
  included.)  Now, the pkey might be a serial column ... but it equally
  well might not be; it might not have any default expression at all.
  And there certainly could be serial columns that weren't in the pkey.
 
  100% agree with Tom.

 Well, we could have a RETURNING GENERATED COLUMNS construct, or
 something like that.  I can certainly see the usefulness of such a
 thing.

 As a general note not necessarily related to this specific issue, I
 think we should be careful not to lose sight of the point of this
 feature, which is to allow pgsql-jdbc to more closely adhere to the
 JDBC specification.  While it's great if this feature has general
 utility, if it doesn't help pgsql-jdbc, then it fails to achieve the
 author's original intention.  We need to make sure we're not throwing
 up too many obstacles in the way of better driver compliance if we
 want to have good drivers.

 As a code-level comment, I am not too fond of adding yet another value
 for IndexAttrBitmapKind; every value we compute in
 RelationGetIndexAttrBitmap adds overhead for everyone, even people not
 using the feature.  Admittedly, it's only paid once per relcache load,
 but does map_primary_key_to_list() really need this information cached,
 or can it just look through the index list for the primary key, and then
 work directly from that?

 Also, map_primary_key_to_list() tacitly assumes that an auto-updatable
 view has only 1 from-list item.  That seems like a good thing to check
 with an Assert(), just in case we should support some other case in
 the future.


Another code-level comment: why do we need RowExclusiveLock before calling
map_primary_key_to_list()? I think we should have an explanation for that.

+    /* Handle RETURNING PRIMARY KEY */
+    if (query->returningPK)
+    {
+        RangeTblEntry *rte = (RangeTblEntry *)
+            list_nth(query->rtable, query->resultRelation - 1);
+        Relation rel = heap_open(rte->relid, RowExclusiveLock);
+
+        query->returningList = map_primary_key_to_list(rel, query);
+        heap_close(rel, NoLock);
+    }


--
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company




-- 
Rushabh Lathia


[HACKERS] [Segmentation fault] pg_dump binary-upgrade fail for type without element

2014-10-15 Thread Rushabh Lathia
Hi All,

pg_dump --binary-upgrade fails with a segmentation fault for a type without
elements.

Consider the following testcase:

rushabh@postgresql$ ./db/bin/psql postgres
psql (9.5devel)
Type help for help.

postgres=# drop type typ;
DROP TYPE
postgres=# create type typ as ();
CREATE TYPE
postgres=# \q
rushabh@postgresql$ ./db/bin/pg_dump postgres --binary-upgrade
pg_dump: row number 0 is out of range 0..-1
Segmentation fault (core dumped)

Stack trace:

(gdb) bt
#0  0x003a2cc375f2 in strtoull_l_internal () from /lib64/libc.so.6
#1  0x00417a08 in dumpCompositeType (fout=0x1365200,
tyinfo=0x13b1340) at pg_dump.c:9356
#2  0x004156a2 in dumpType (fout=0x1365200, tyinfo=0x13b1340) at
pg_dump.c:8449
#3  0x00414b08 in dumpDumpableObject (fout=0x1365200,
dobj=0x13b1340) at pg_dump.c:8135
#4  0x004041f8 in main (argc=3, argv=0x7fff838ff6e8) at
pg_dump.c:812

In dumpCompositeType(), the query fetches the elements of the composite
type, but when the type has no elements it returns zero rows. In the
following code block:

if (binary_upgrade)
{
    Oid         typrelid = atooid(PQgetvalue(res, 0, i_typrelid));

    binary_upgrade_set_type_oids_by_type_oid(fout, q,
                                             tyinfo->dobj.catId.oid);
    binary_upgrade_set_pg_class_oids(fout, q, typrelid, false);
}

it fetches the typrelid, which is required for binary upgrade. But when the
query returns zero rows (for a composite type without elements) this fails.

Looking further into the code, I found that rather than fetching typrelid
from the result, we can use the typrelid already stored in the TypeInfo
structure.

Following commit added code related to binary_upgrade:

commit 28f6cab61ab8958b1a7dfb019724687d92722538
Author: Bruce Momjian br...@momjian.us
Date:   Wed Jan 6 05:18:18 2010 +

binary upgrade:

Preserve relfilenodes for views and composite types --- even though we
don't store data in, them, they do consume relfilenodes.

Bump catalog version.

PFA a patch to fix the issue. I think this needs to be fixed in the back
branches as well, because it affects pg_upgrade. The fix should be
backported as far as PG 9.1; on PG 9.0 creating a type without elements was
not allowed:

postgres=# select version();

version
---
 PostgreSQL 9.0.18 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit
(1 row)
postgres=# create type typ as ();
ERROR:  syntax error at or near ")"
LINE 1: create type typ as ();
                            ^

Regards,
Rushabh Lathia
www.EnterpriseDB.com
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 2915329..64d3856 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -9290,7 +9290,6 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo)
 	int			i_attalign;
 	int			i_attisdropped;
 	int			i_attcollation;
-	int			i_typrelid;
 	int			i;
 	int			actual_atts;
 
@@ -9311,8 +9310,7 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo)
 			pg_catalog.format_type(a.atttypid, a.atttypmod) AS atttypdefn, 
 		  a.attlen, a.attalign, a.attisdropped, 
 		  CASE WHEN a.attcollation <> at.typcollation 
-		  THEN a.attcollation ELSE 0 END AS attcollation, 
-		  ct.typrelid 
+		  THEN a.attcollation ELSE 0 END AS attcollation 
 		  FROM pg_catalog.pg_type ct 
 JOIN pg_catalog.pg_attribute a ON a.attrelid = ct.typrelid 
 	LEFT JOIN pg_catalog.pg_type at ON at.oid = a.atttypid 
@@ -9330,8 +9328,7 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo)
 		appendPQExpBuffer(query, SELECT a.attname, 
 			pg_catalog.format_type(a.atttypid, a.atttypmod) AS atttypdefn, 
 		  a.attlen, a.attalign, a.attisdropped, 
-		  0 AS attcollation, 
-		  ct.typrelid 
+		  0 AS attcollation 
 	 FROM pg_catalog.pg_type ct, pg_catalog.pg_attribute a 
 		  WHERE ct.oid = '%u'::pg_catalog.oid 
 		  AND a.attrelid = ct.typrelid 
@@ -9349,15 +9346,12 @@ dumpCompositeType(Archive *fout, TypeInfo *tyinfo)
 	i_attalign = PQfnumber(res, attalign);
 	i_attisdropped = PQfnumber(res, attisdropped);
 	i_attcollation = PQfnumber(res, attcollation);
-	i_typrelid = PQfnumber(res, typrelid);
 
 	if (binary_upgrade)
 	{
-		Oid			typrelid = atooid(PQgetvalue(res, 0, i_typrelid));
-
 		binary_upgrade_set_type_oids_by_type_oid(fout, q,
 												 tyinfo->dobj.catId.oid);
-		binary_upgrade_set_pg_class_oids(fout, q, typrelid, false);
+		binary_upgrade_set_pg_class_oids(fout, q, tyinfo->typrelid, false);
 	}
 
 	qtypname = pg_strdup(fmtId(tyinfo-dobj.name));



Re: [HACKERS] [Segmentation fault] pg_dump binary-upgrade fail for type without element

2014-10-15 Thread Rushabh Lathia
PFA the patch for the master branch.

On Thu, Oct 16, 2014 at 11:09 AM, Rushabh Lathia rushabh.lat...@gmail.com
wrote:

 Hi All,

 pg_dump binary-upgrade fail with segmentation fault for type without
 element.

 Consider the following testcase:

 rushabh@postgresql$ ./db/bin/psql postgres
 psql (9.5devel)
 Type help for help.

 postgres=# drop type typ;
 DROP TYPE
 postgres=# create type typ as ();
 CREATE TYPE
 postgres=# \q
 rushabh@postgresql$ ./db/bin/pg_dump postgres --binary-upgrade
 pg_dump: row number 0 is out of range 0..-1
 Segmentation fault (core dumped)

 Stack trace:

 (gdb) bt
 #0  0x003a2cc375f2 in strtoull_l_internal () from /lib64/libc.so.6
 #1  0x00417a08 in dumpCompositeType (fout=0x1365200,
 tyinfo=0x13b1340) at pg_dump.c:9356
 #2  0x004156a2 in dumpType (fout=0x1365200, tyinfo=0x13b1340) at
 pg_dump.c:8449
 #3  0x00414b08 in dumpDumpableObject (fout=0x1365200,
 dobj=0x13b1340) at pg_dump.c:8135
 #4  0x004041f8 in main (argc=3, argv=0x7fff838ff6e8) at
 pg_dump.c:812

 Into dumpCompositeType(), query fetch the elements for the composite type,
 but in case there are no elements for the type then it returns zero rows.
 In
 the following code block:

 if (binary_upgrade)
 {
     Oid         typrelid = atooid(PQgetvalue(res, 0, i_typrelid));

     binary_upgrade_set_type_oids_by_type_oid(fout, q,
                                              tyinfo->dobj.catId.oid);
     binary_upgrade_set_pg_class_oids(fout, q, typrelid, false);
 }

 it fetching the typrelid which require for binary_upgrade. But in case
 query
 is returning zero rows (for the composite type without element) is failing.

 Looking further into code I found that rather then fetching typrelid, we
 can
 use the already stored typrelid from TypeInfo structure.

 Following commit added code related to binary_upgrade:

 commit 28f6cab61ab8958b1a7dfb019724687d92722538
 Author: Bruce Momjian br...@momjian.us
 Date:   Wed Jan 6 05:18:18 2010 +

 binary upgrade:

 Preserve relfilenodes for views and composite types --- even though we
 don't store data in, them, they do consume relfilenodes.

 Bump catalog version.

 PFA patch to fix the issue. I think this need to fix in the back branch as
 well
 because its effecting pg_upgrade. Fix should backport till PG91, as on PG90
 it was not allowed to create type without element.

 postgres=# select version();

 version

 ---
  PostgreSQL 9.0.18 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit
 (1 row)
 postgres=# create type typ as ();
 ERROR:  syntax error at or near )
 LINE 1: create type typ as ();
 ^

 Regards,
 Rushabh Lathia
 www.EnterpriseDB.com




-- 
Rushabh Lathia
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index c56a4cb..c528577 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -9269,7 +9269,6 @@ dumpCompositeType(Archive *fout, DumpOptions *dopt, TypeInfo *tyinfo)
 	int			i_attalign;
 	int			i_attisdropped;
 	int			i_attcollation;
-	int			i_typrelid;
 	int			i;
 	int			actual_atts;
 
@@ -9290,8 +9289,7 @@ dumpCompositeType(Archive *fout, DumpOptions *dopt, TypeInfo *tyinfo)
 			pg_catalog.format_type(a.atttypid, a.atttypmod) AS atttypdefn, 
 		  a.attlen, a.attalign, a.attisdropped, 
 		  CASE WHEN a.attcollation <> at.typcollation 
-		  THEN a.attcollation ELSE 0 END AS attcollation, 
-		  ct.typrelid 
+		  THEN a.attcollation ELSE 0 END AS attcollation 
 		  FROM pg_catalog.pg_type ct 
 JOIN pg_catalog.pg_attribute a ON a.attrelid = ct.typrelid 
 	LEFT JOIN pg_catalog.pg_type at ON at.oid = a.atttypid 
@@ -9309,8 +9307,7 @@ dumpCompositeType(Archive *fout, DumpOptions *dopt, TypeInfo *tyinfo)
 		appendPQExpBuffer(query, SELECT a.attname, 
 			pg_catalog.format_type(a.atttypid, a.atttypmod) AS atttypdefn, 
 		  a.attlen, a.attalign, a.attisdropped, 
-		  0 AS attcollation, 
-		  ct.typrelid 
+		  0 AS attcollation 
 	 FROM pg_catalog.pg_type ct, pg_catalog.pg_attribute a 
 		  WHERE ct.oid = '%u'::pg_catalog.oid 
 		  AND a.attrelid = ct.typrelid 
@@ -9328,15 +9325,12 @@ dumpCompositeType(Archive *fout, DumpOptions *dopt, TypeInfo *tyinfo)
 	i_attalign = PQfnumber(res, attalign);
 	i_attisdropped = PQfnumber(res, attisdropped);
 	i_attcollation = PQfnumber(res, attcollation);
-	i_typrelid = PQfnumber(res, typrelid);
 
 	if (dopt->binary_upgrade)
 	{
-		Oid			typrelid = atooid(PQgetvalue(res, 0, i_typrelid));
-
 		binary_upgrade_set_type_oids_by_type_oid(fout, q,
 												 tyinfo->dobj.catId.oid);
-		binary_upgrade_set_pg_class_oids(fout, q, typrelid, false);
+		binary_upgrade_set_pg_class_oids(fout, q, tyinfo->typrelid, false);
 	}
 
 	qtypname = pg_strdup(fmtId(tyinfo-dobj.name));


Re: [HACKERS] alter user/role CURRENT_USER

2014-10-20 Thread Rushabh Lathia
I went through the patch; here is my review:


.) The patch applies on the master branch with the patch -p1 command
   (git apply failed)
.) Regression make check runs fine
.) Test-case coverage is missing from the patch
.) Overall the coding/patch looks good.

A few comments:

1) Any particular reason for not also making SESSION_USER/USER usable for
this command?

2) I think RoleId_or_curruser can be used for the following rule as well:

/* ALTER TABLE <name> OWNER TO RoleId */
| OWNER TO RoleId

3) In the documentation patch, CURRENT_USER was added but CURRENT_ROLE is
missing.

On Fri, Oct 10, 2014 at 1:57 PM, Kyotaro HORIGUCHI 
horiguchi.kyot...@lab.ntt.co.jp wrote:

 Hello, on the way considering alter role set .., I found that
 ALTER ROLE/USER cannot take CURRENT_USER as the role name.

 In addition to that, documents of ALTER ROLE / USER are
 inconsistent with each other in spite of ALTER USER is said to be
 an alias for ALTER ROLE. Plus, ALTER USER cannot take ALL as user
 name although ALTER ROLE can.

 This patch does following things,

  - ALTER USER/ROLE now takes CURRENT_USER as user name.

  - Rewrite the synopses of the documents for ALTER USER and ALTER
    ROLE so that they have exactly the same syntax.

  - Modify syntax of ALTER USER so as to be an alias of ALTER ROLE.

- Added CURRENT_USER/CURRENT_ROLE syntax to both.
- Added ALL syntax as user name to ALTER USER.
- Added IN DATABASE syntax to ALTER USER.

x Integrating ALTER ROLE/USER syntax could not be done because of
  shift/reduce error of bison.

  x This patch contains no additional regressions. Is it needed?

 SESSION_USER/USER also can be made usable for this command, but
 this patch doesn't so (yet).

 regards,

 --
 Kyotaro Horiguchi
 NTT Open Source Software Center






-- 
Rushabh Lathia


Re: [HACKERS] alter user/role CURRENT_USER

2014-10-26 Thread Rushabh Lathia
Thanks Kyotaro,

I just quickly looked at the patch, and it does cover more syntax than the
earlier patch. But it still doesn't cover all the parts of the syntax where
we can use CURRENT_USER/CURRENT_ROLE/USER/SESSION_USER. For example:

-- Not working
alter default privileges for role current_user grant SELECT ON TABLES TO
current_user;

-- Not working
grant user1 TO current_user;

There might be a few more syntaxes like this.

I understand that the patch is getting slightly bigger and more complex
than what was originally proposed. Before going back to more review of the
latest patch, I would like to understand the requirements of this new
feature. I would also like others to comment on where/how we should
restrict this feature.

On Fri, Oct 24, 2014 at 1:59 PM, Kyotaro HORIGUCHI 
horiguchi.kyot...@lab.ntt.co.jp wrote:

 Hi, here is the revised patch.

 Attached files are the followings

  - 0001-ALTER-ROLE-CURRENT_USER_v2.patch  - the patch.

  - testset.tar.bz2 - test set. Run by typing 'make check' as a
superuser of the running postgreSQL server. It creates testdb
and some roles.

 Documents are not edited this time.

 

 Considering your comments, I found more points to modify.

  - CREATE SCHEMA (IF NOT EXISTS) .. AUTHORIZATION role ...

  - ALTER AGGREAGE/COLLATION/etc... OWNER TO role

  - CREATE/ALTER/DROP USER MAPPING FOR role SERVER ..

 GRANT/REVOKE also take a role as an argument, but CURRENT_USER and
 the similar keywords seem to be useless for them.

 Finally, the new patch modifies the following points.

 In gram.y,

  - RoleId's are replaced with RoleId_or_curruser in more places.
It accepts CURRENT_USER/USER/CURRENT_ROLE/SESSION_USER.

  - ALTER USER ALL syntax is added. (not changed from the previous patch)

  - The non-terminal auth_ident now uses RoleId_or_curruser
instead of RoleId. This affects CREATE/ALTER/DROP USER MAPPING

 In user.c, a new function ResolveRoleId() is added and used for all
 role ID resolutions that correspond to the syntax changes in the
 parser; one such caller is AlterRole() in user.c.

 In foreigncmds.c, GetUserOidFromMapping() is removed and
 ResolveRoleId is used instead.

 In alter.c and tablecmds.c, all calls to get_role_oid() that
 correspond to the grammar change were replaced with
 ResolveRoleId().

 The modifications are a bit complicated so I provided a
 comprehensive test set.


 regards,

 --
 Kyotaro Horiguchi
 NTT Open Source Software Center
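The centralized resolution described above can be sketched as a standalone function. Everything below is a hypothetical simplification for illustration (toy `Oid`, `RoleSpecKind`, and catalog lookup); the real ResolveRoleId() in the patch works on parser output and the system catalogs.

```c
#include <string.h>

typedef unsigned int Oid;

typedef enum
{
    ROLESPEC_NAME,          /* an explicit role name */
    ROLESPEC_CURRENT_USER,  /* CURRENT_USER, CURRENT_ROLE or USER */
    ROLESPEC_SESSION_USER   /* SESSION_USER */
} RoleSpecKind;

/* Pretend catalog: only "alice" (OID 10) and "bob" (OID 11) exist. */
static Oid
toy_lookup_role(const char *name)
{
    if (strcmp(name, "alice") == 0)
        return 10;
    if (strcmp(name, "bob") == 0)
        return 11;
    return 0;                   /* InvalidOid */
}

/*
 * One helper that maps either a literal role name or one of the
 * special keywords to a role OID, so every command path can share it.
 */
static Oid
resolve_role_id(RoleSpecKind kind, const char *name,
                Oid current_user_id, Oid session_user_id)
{
    switch (kind)
    {
        case ROLESPEC_CURRENT_USER:
            return current_user_id;
        case ROLESPEC_SESSION_USER:
            return session_user_id;
        default:
            return toy_lookup_role(name);
    }
}
```

The point of centralizing this is that GRANT, ALTER ... OWNER TO, USER MAPPING and friends all resolve the role the same way instead of each open-coding the keyword handling.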




-- 
Rushabh Lathia


Re: [HACKERS] CINE in CREATE TABLE AS ... and CREATE MATERIALIZED VIEW ...

2014-10-27 Thread Rushabh Lathia
Hi All,

- Patch got applied cleanly.
- Regression make check run fine.
- Patch covered the documentation changes

Here are few comments:

1) What is the need for the following change?

diff --git a/src/backend/storage/lmgr/lwlock.c
b/src/backend/storage/lmgr/lwlock.c
index bcec173..9fe6855 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -1005,12 +1005,6 @@ LWLockWaitForVar(LWLock *lock, uint64 *valptr, uint64 oldval, uint64 *newval)
 		lock->tail = proc;
 		lock->head = proc;
 
-		/*
-		 * Set releaseOK, to make sure we get woken up as soon as the lock is
-		 * released.
-		 */
-		lock->releaseOK = true;
-
 		/* Can release the mutex now */
 		SpinLockRelease(&lock->mutex);


It doesn't look related to this patch.

2)

diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
index ae5fe88..4d11952 100644
--- a/src/bin/psql/help.c
+++ b/src/bin/psql/help.c
@@ -247,7 +247,7 @@ slashUsage(unsigned short int pager)
 	fprintf(output, _("  \\f [STRING]            show or set field separator for unaligned query output\n"));
 	fprintf(output, _("  \\H                     toggle HTML output mode (currently %s)\n"),
 			ON(pset.popt.topt.format == PRINT_HTML));
-	fprintf(output, _("  \\pset [NAME [VALUE]]   set table output option\n"
+	fprintf(output, _("  \\pset [NAME [VALUE]] set table output option\n"
 					  "                         (NAME := {format|border|expanded|fieldsep|fieldsep_zero|footer|null|\n"
 					  "                         numericlocale|recordsep|recordsep_zero|tuples_only|title|tableattr|pager|\n"
 					  "                         unicode_border_linestyle|unicode_column_linestyle|unicode_header_linestyle})\n"));


Why are the above changes required?

3) This patch adds IF NOT EXISTS for CREATE TABLE AS and CREATE MATERIALIZED
VIEW, but test case coverage for CREATE MATERIALIZED VIEW is missing.

Apart from this, the changes look good to me.


On Tue, Oct 14, 2014 at 10:28 PM, Fabrízio de Royes Mello 
fabriziome...@gmail.com wrote:

 On Wed, Oct 1, 2014 at 9:17 AM, Fabrízio de Royes Mello 
 fabriziome...@gmail.com wrote:
 
  Hi all,
 
  We already have IF NOT EXISTS for CREATE TABLE. Is there some reason not
  to have it for CREATE TABLE AS and CREATE MATERIALIZED VIEW?
 

 Patch attached to add CINE support to:

 - CREATE TABLE AS
 - CREATE MATERIALIZED VIEW

 Regards,

 --
 Fabrízio de Royes Mello
 Consultoria/Coaching PostgreSQL
  Timbira: http://www.timbira.com.br
  Blog: http://fabriziomello.github.io
  Linkedin: http://br.linkedin.com/in/fabriziomello
  Twitter: http://twitter.com/fabriziomello
  Github: http://github.com/fabriziomello






-- 
Rushabh Lathia


Re: [HACKERS] CINE in CREATE TABLE AS ... and CREATE MATERIALIZED VIEW ...

2014-11-02 Thread Rushabh Lathia
Patch looks good, assigning to committer.

On Fri, Oct 31, 2014 at 8:36 AM, Fabrízio de Royes Mello 
fabriziome...@gmail.com wrote:



 On Mon, Oct 27, 2014 at 4:15 AM, Rushabh Lathia rushabh.lat...@gmail.com
 wrote:
 
  Hi All,
 
  - Patch got applied cleanly.
  - Regression make check run fine.
  - Patch covered the documentation changes
 
  Here are few comments:
 
  1) What is the need for the following change?
 
  diff --git a/src/backend/storage/lmgr/lwlock.c
 b/src/backend/storage/lmgr/lwlock.c
  index bcec173..9fe6855 100644
  --- a/src/backend/storage/lmgr/lwlock.c
  +++ b/src/backend/storage/lmgr/lwlock.c
  @@ -1005,12 +1005,6 @@ LWLockWaitForVar(LWLock *lock, uint64 *valptr, uint64 oldval, uint64 *newval)
   		lock->tail = proc;
   		lock->head = proc;
  
  -		/*
  -		 * Set releaseOK, to make sure we get woken up as soon as the lock is
  -		 * released.
  -		 */
  -		lock->releaseOK = true;
  -
   		/* Can release the mutex now */
   		SpinLockRelease(&lock->mutex);
 
 
  It doesn't look related to this patch.
 

 Sorry... my mistake when diffing against master (which is more up to date than my branch).

 Fixed.


  2)
 
  diff --git a/src/bin/psql/help.c b/src/bin/psql/help.c
  index ae5fe88..4d11952 100644
  --- a/src/bin/psql/help.c
  +++ b/src/bin/psql/help.c
  @@ -247,7 +247,7 @@ slashUsage(unsigned short int pager)
   	fprintf(output, _("  \\f [STRING]            show or set field separator for unaligned query output\n"));
   	fprintf(output, _("  \\H                     toggle HTML output mode (currently %s)\n"),
   			ON(pset.popt.topt.format == PRINT_HTML));
  -	fprintf(output, _("  \\pset [NAME [VALUE]]   set table output option\n"
  +	fprintf(output, _("  \\pset [NAME [VALUE]] set table output option\n"
   					  "                         (NAME := {format|border|expanded|fieldsep|fieldsep_zero|footer|null|\n"
   					  "                         numericlocale|recordsep|recordsep_zero|tuples_only|title|tableattr|pager|\n"
   					  "                         unicode_border_linestyle|unicode_column_linestyle|unicode_header_linestyle})\n"));
 
 
  Why are the above changes required?
 

  Same as the previous mistake.

 Fixed.


  3) This patch adds IF NOT EXISTS for CREATE TABLE AS and CREATE
  MATERIALIZED VIEW, but test case coverage for CREATE MATERIALIZED VIEW
  is missing.
 
  Apart from this, the changes look good to me.
 

 Fixed.

 Thanks for your review.

 Regards,

 --
 Fabrízio de Royes Mello
 Consultoria/Coaching PostgreSQL
  Timbira: http://www.timbira.com.br
  Blog: http://fabriziomello.github.io
  Linkedin: http://br.linkedin.com/in/fabriziomello
  Twitter: http://twitter.com/fabriziomello
  Github: http://github.com/fabriziomello




-- 
Rushabh Lathia


Re: [HACKERS] pg_dump gets attributes from tables in extensions

2015-02-23 Thread Rushabh Lathia
Thanks to the handy test case, I was able to replicate the scenario
in my local environment. And yes, the tbinfo->dobj.ext_member check in
getTableAttrs() does fix the issue.

Looking more into the pg_dump code, I found that generally PG doesn't use a
tbinfo->dobj.ext_member check to ignore an object. Mostly we do this kind
of check using tbinfo->dobj.dump (look at dumpTable() for reference). Do you
have any particular reason for choosing dobj.ext_member over dobj.dump?

On Fri, Feb 20, 2015 at 12:20 PM, Michael Paquier michael.paqu...@gmail.com
 wrote:

 On Fri, Feb 20, 2015 at 5:33 AM, Peter Eisentraut pete...@gmx.net wrote:
  On 2/16/15 2:45 AM, Michael Paquier wrote:
  While looking at the patch to fix pg_dump with extensions containing
  tables referencing each other, I got surprised by the fact that
  getTableAttrs tries to dump table attributes even for tables that are
  part of an extension. Is that normal?
  Attached is a patch that I think makes things right, but not dumping any
  tables that are part of ext_member.
 
  Can you provide an example/test case?  (e.g., which publicly available
  extension contains tables with attributes?)

 Sure. Attached is a simplified version of the extension I used for the
 other patch on pg_dump.
 $ psql -c 'create extension dump_test'
 CREATE EXTENSION
 $ psql -At -c '\dx+ dump_test'
 table aa_tab_fkey
 table bb_tab_fkey
 $ pg_dump -v 21 | grep columns and types
 pg_dump: finding the columns and types of table public.bb_tab_fkey
 pg_dump: finding the columns and types of table public.aa_tab_fkey
 --
 Michael






-- 
Rushabh Lathia


Re: [HACKERS] pg_dump gets attributes from tables in extensions

2015-02-23 Thread Rushabh Lathia
On Mon, Feb 23, 2015 at 7:45 PM, Michael Paquier michael.paqu...@gmail.com
wrote:



 On Mon, Feb 23, 2015 at 5:57 PM, Rushabh Lathia rushabh.lat...@gmail.com
 wrote:
  Thanks to the handy test case, I was able to replicate the scenario
  in my local environment. And yes, the tbinfo->dobj.ext_member check in
  getTableAttrs() does fix the issue.
 
  Looking more into the pg_dump code, I found that generally PG doesn't
  use a tbinfo->dobj.ext_member check to ignore an object. Mostly we do
  this kind of check using tbinfo->dobj.dump (look at dumpTable() for
  reference). Do you have any particular reason for choosing
  dobj.ext_member over dobj.dump?

 Hm. I have been pondering about this code more and I am dropping the patch
 as either adding a check based on the flag to track dumpable objects or
 ext_member visibly breaks the logic that binary upgrade and tables in
 extensions rely on. Particularly this portion makes me think so in
 getExtensionMembership():
 		/*
 		 * Normally, mark the member object as not to be dumped.  But in
 		 * binary upgrades, we still dump the members individually, since the
 		 * idea is to exactly reproduce the database contents rather than
 		 * replace the extension contents with something different.
 		 */
 		if (!dopt->binary_upgrade)
 			dobj->dump = false;
 		else
 			dobj->dump = refdobj->dump;


OK. Looking at the above code in getExtensionMembership(), it seems like the
fix you suggested is not correct. I added a new table with a DEFAULT
attribute to the dump_test extension, and pg_dump with --binary-upgrade
fails with: pg_dump: invalid column number 1 for table "bb_tab_fkey".

rushabh@rushabh-centos-vm:dump_test$ cat dump_test--1.0.sql
/* dump_test/dump_test--1.0.sql */

-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use CREATE EXTENSION dump_test to load this file. \quit

CREATE TABLE IF NOT EXISTS bb_tab_fkey (
id int PRIMARY KEY
);

CREATE TABLE IF NOT EXISTS aa_tab_fkey (
id int REFERENCES bb_tab_fkey(id)
);

CREATE TABLE IF NOT EXISTS foo ( a int default 100 );

This gives another strong reason to add an if (!tbinfo->dobj.dump) check
rather than the ext_member check. What do you say?
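The interplay quoted above can be condensed into a standalone sketch (hypothetical trimmed-down types, not the actual pg_dump structures), showing why a dobj.dump test keeps working in binary-upgrade mode while a bare ext_member test would not:

```c
#include <stdbool.h>

/* Toy stand-in for pg_dump's DumpableObject. */
typedef struct
{
    bool dump;        /* should this object be dumped? */
    bool ext_member;  /* does it belong to an extension? */
} toy_dobj;

/*
 * Mimics the getExtensionMembership() logic quoted above: normally an
 * extension member is not dumped, but in binary-upgrade mode it inherits
 * the dump flag from the owning extension so the contents are reproduced
 * exactly.
 */
static void
mark_extension_member(toy_dobj *member, const toy_dobj *ext,
                      bool binary_upgrade)
{
    member->ext_member = true;
    if (!binary_upgrade)
        member->dump = false;
    else
        member->dump = ext->dump;
}

/* Candidate check for getTableAttrs(): skip objects we will not dump. */
static bool
should_collect_attrs(const toy_dobj *tbl)
{
    return tbl->dump;
}
```

With this shape, an ext_member-based skip in getTableAttrs() would also skip tables during binary upgrade (where they must be dumped), while the dump-flag test does the right thing in both modes.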

Regards,


[HACKERS] Why can't we used CAPITAL LETTERS into replication slot_name?

2015-09-25 Thread Rushabh Lathia
Hi,

I am just wondering why pg_create_physical_replication_slot() can't accept
CAPITAL LETTERS in slot_name?

The comment on ReplicationSlotValidateName() says that slot names may
consist out of [a-z0-9_]{1,NAMEDATALEN-1}, which should allow the name to be
used as a directory name on every supported OS.

If it's by design then we should at least change the hint in
ReplicationSlotValidateName(), which says: "Replication slot names may only
contain letters, numbers, and the underscore character."

Comments?
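The rule stated in that comment can be sketched as a standalone check (a hypothetical simplified version for illustration; the real ReplicationSlotValidateName() lives in the backend and reports errors via ereport):

```c
#include <stdbool.h>
#include <stddef.h>

#define NAMEDATALEN 64          /* PostgreSQL's default */

/*
 * Returns true iff name matches [a-z0-9_]{1,NAMEDATALEN-1}, i.e. the
 * documented rule: only lower-case letters, digits and underscore, so
 * the name is safe as a directory name on every supported OS
 * (including case-insensitive filesystems on Windows and OS X).
 */
static bool
slot_name_is_valid(const char *name)
{
    size_t len = 0;

    if (name == NULL || name[0] == '\0')
        return false;
    for (const char *p = name; *p; p++, len++)
    {
        if (!((*p >= 'a' && *p <= 'z') ||
              (*p >= '0' && *p <= '9') ||
              *p == '_'))
            return false;
    }
    return len < NAMEDATALEN;
}
```

This makes it visible why "MySlot" is rejected: capital letters would collide with "myslot" on a case-insensitive filesystem, which is exactly the design reason Andres gives below.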


Regards,
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Why can't we used CAPITAL LETTERS into replication slot_name?

2015-09-25 Thread Rushabh Lathia
Thanks Andres.

PFA patch to fix the hint message.

On Fri, Sep 25, 2015 at 5:34 PM, Andres Freund <and...@anarazel.de> wrote:

> Hi,
>
> On 2015-09-25 17:32:39 +0530, Rushabh Lathia wrote:
> > I am just wondering why pg_create_physical_replication_slot() can't take
> > CAPITAL LETTERS into slot_name ?
>
> Windows. And OSX. Specifically case-insensitive filenames.
>
> > If its by design then was should atleast change the hint into
> > ReplicationSlotValidateName() which says: "Replication slot names may
> only
> > contain letters, numbers, and the underscore character."
>
> We could add a 'lower case' in there.
>
> Greetings,
>
> Andres Freund
>



Regards,
Rushabh Lathia
www.EnterpriseDB.com


slot_name_hint_message_change.patch
Description: application/download



Re: [HACKERS] Getting sorted data from foreign server for merge join

2015-12-02 Thread Rushabh Lathia
Thanks Ashutosh.

I re-reviewed and re-verified the patch; pg_sort_all_pd_v5.patch
looks good to me.


On Fri, Nov 27, 2015 at 3:02 PM, Ashutosh Bapat <
ashutosh.ba...@enterprisedb.com> wrote:

> Thanks Rushabh for your review and comments.
>
> On Thu, Nov 26, 2015 at 5:39 PM, Rushabh Lathia <rushabh.lat...@gmail.com>
> wrote:
>
>> Hi Ashutosh,
>>
>> I reviewed your latest version of patch and over all the implementation
>> and other details look good to me.
>>
>> Here are few cosmetic issues which I found:
>>
>> 1) Patch not getting applied cleanly - white space warning
>>
>>
> Done.
>
>
>> 2)
>>
>> -List   *usable_pathkeys = NIL;
>> +List*useful_pathkeys_list = NIL;/* List of all pathkeys
>> */
>>
>> Code alignment is not correct with other declared variables.
>>
>>
> Incorporated the change in the patch.
>
> 3)
>>
>> +{
>> +PathKey*pathkey;
>> +List*pathkeys;
>> +
>> +pathkey = make_canonical_pathkey(root, cur_ec,
>> +
>> linitial_oid(cur_ec->ec_opfamilies),
>> +BTLessStrategyNumber,
>> +false);
>> +pathkeys = list_make1(pathkey);
>> +useful_pathkeys_list = lappend(useful_pathkeys_list,
>> pathkeys);
>> +}
>>
>> Code alignment need to fix at make_canonical_pathkey().
>>
>
> Incorporated the change in the patch.
>
> I have also removed the TODO item in the prologue of this function, since
> none has objected to externalization of make_canonical_pathkeys till now
> and it's not expected to be part of the final commit.
>
>
>>
>> 4)
>>
>> I don't understand the meaning of the following test case added to
>> postgres_fdw.
>>
>> +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql
>> @@ -171,20 +171,35 @@ SELECT COUNT(*) FROM ft1 t1;
>>  -- join two tables
>>  SELECT t1.c1 FROM ft1 t1 JOIN ft2 t2 ON (t1.c1 = t2.c1) ORDER BY t1.c3,
>> t1.c1 OFFSET 100 LIMIT 10;
>>  -- subquery
>>  SELECT * FROM ft1 t1 WHERE t1.c3 IN (SELECT c3 FROM ft2 t2 WHERE c1 <=
>> 10) ORDER BY c1;
>>  -- subquery+MAX
>>  SELECT * FROM ft1 t1 WHERE t1.c3 = (SELECT MAX(c3) FROM ft2 t2) ORDER BY
>> c1;
>>  -- used in CTE
>>  WITH t1 AS (SELECT * FROM ft1 WHERE c1 <= 10) SELECT t2.c1, t2.c2,
>> t2.c3, t2.c4 FROM t1, ft2 t2 WHERE t1.c1 = t2.c1 ORDER BY t1.c1;
>>  -- fixed values
>>  SELECT 'fixed', NULL FROM ft1 t1 WHERE c1 = 1;
>> +-- getting data sorted from the foreign table for merge join
>> +-- Since we are interested in merge join, disable other joins
>> +SET enable_hashjoin TO false;
>> +SET enable_nestloop TO false;
>> +-- inner join, expressions in the clauses appear in the equivalence
>> class list
>> +EXPLAIN (VERBOSE, COSTS false)
>> +SELECT t1.c1, t2."C 1" FROM ft2 t1 JOIN "S 1"."T 1" t2 ON (t1.c1 =
>> t2."C 1") ORDER BY t1.c1 OFFSET 100 LIMIT 10;
>> +SELECT t1.c1, t2."C 1" FROM ft2 t1 JOIN "S 1"."T 1" t2 ON (t1.c1 = t2."C
>> 1") ORDER BY t1.c1 OFFSET 100 LIMIT 10;
>> +-- outer join, expression in the clauses do not appear in equivalence
>> class list
>> +-- but no output change as compared to the previous query
>> +EXPLAIN (VERBOSE, COSTS false)
>> +SELECT t1.c1, t2."C 1" FROM ft2 t1 LEFT JOIN "S 1"."T 1" t2 ON
>> (t1.c1 = t2."C 1") ORDER BY t1.c1 OFFSET 100 LIMIT 10;
>> +SELECT t1.c1, t2."C 1" FROM ft2 t1 LEFT JOIN "S 1"."T 1" t2 ON (t1.c1 =
>> t2."C 1") ORDER BY t1.c1 OFFSET 100 LIMIT 10;
>> +SET enable_hashjoin TO true;
>> +SET enable_nestloop TO true;
>>
>> Because when I removed the code changes of the patch and then ran the
>> tests, it seemed like they have nothing to do with the code changes. The
>> above set of tests gives the same result with/without the patch.
>>
>> Am I missing something ?
>>
>
> Actually, the output of merge join is always ordered by the pathkeys used
> for merge join. That routed through LIMIT node remains ordered. So, we
> actually do not need ORDER BY t1.c1 clause in the above queries. Without
> that clause, the tests will show difference output with and without patch.
> I have changed the attached patch accordingly.
>
>
>>
>> Apart from this I debugged the patch for each scenario (query pathkeys and
>> pathkeys arising out of the equivalence classes) and so far patch looks
>> good
>> to me.
>>
>>
> Thanks.
>
>
>> Attaching update version of patch by fixing the cosmetic changes.
>>
>>
> Attached version of patch contains your changes.
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
>
>
>
>


-- 
Rushabh Lathia


[HACKERS] Bug in TupleQueueReaderNext() ?

2015-12-15 Thread Rushabh Lathia
Hi All,

TupleQueueReaderNext() always passes true for nowait into the
shm_mq_receive() call. I think it needs to pass the nowait value
given by the caller of TupleQueueReaderNext().

This is useful if the caller wants TupleQueueReaderNext() to wait
until it gets a tuple from the particular queue.

PFA a patch to fix the same.

Regards,
Rushabh Lathia
www.EnterpriseDB.com
diff --git a/src/backend/executor/tqueue.c b/src/backend/executor/tqueue.c
index d625b0d..276956e 100644
--- a/src/backend/executor/tqueue.c
+++ b/src/backend/executor/tqueue.c
@@ -535,7 +535,7 @@ TupleQueueReaderNext(TupleQueueReader *reader, bool nowait, bool *done)
 		void	   *data;
 
 		/* Attempt to read a message. */
-		result = shm_mq_receive(reader->queue, &nbytes, &data, true);
+		result = shm_mq_receive(reader->queue, &nbytes, &data, nowait);
 
 		/* If queue is detached, set *done and return NULL. */
 		if (result == SHM_MQ_DETACHED)
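The intent of the one-line change can be illustrated with a standalone sketch. The types below are toy stand-ins for shm_mq_receive() and TupleQueueReaderNext(), not the real executor code:

```c
#include <stdbool.h>

typedef enum { MQ_SUCCESS, MQ_WOULD_BLOCK, MQ_DETACHED } mq_result;

typedef struct
{
    int  pending;    /* number of messages currently readable */
    bool detached;
} toy_queue;

/*
 * Toy receive: with nowait, an empty queue returns MQ_WOULD_BLOCK;
 * without it, pretend we blocked until a message arrived.
 */
static mq_result
toy_receive(toy_queue *q, bool nowait)
{
    if (q->detached)
        return MQ_DETACHED;
    if (q->pending == 0)
        return nowait ? MQ_WOULD_BLOCK : MQ_SUCCESS;
    q->pending--;
    return MQ_SUCCESS;
}

/* Buggy shape: always non-blocking, regardless of the caller's wish. */
static mq_result
reader_next_buggy(toy_queue *q, bool nowait)
{
    (void) nowait;              /* ignored -- the bug */
    return toy_receive(q, true);
}

/* Fixed shape: forward the caller's nowait flag. */
static mq_result
reader_next_fixed(toy_queue *q, bool nowait)
{
    return toy_receive(q, nowait);
}
```

With the buggy version, a caller asking to wait (nowait = false) still gets a would-block result on an empty queue; the fixed version honors the request.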



Re: [HACKERS] Bug in MergeAttributesIntoExisting() function.

2016-01-04 Thread Rushabh Lathia
On Mon, Jan 4, 2016 at 4:41 PM, amul sul <sul_a...@yahoo.co.in> wrote:

> Hi,
>
> In inheritance, a child column's pg_attribute.attislocal flag does not get
> updated if the table is inherited using ALTER TABLE ... INHERIT.
>
> Due to this, if we try to drop column(s) from the parent table, they do
> not get dropped from the child.
> Attached herewith is quick patch fixing this issue.
>
>
> --Demonstration:
> --
> CREATE TABLE p1 (a int , b int, c int, d int);
>
> CREATE TABLE c1 () inherits (p1);
> CREATE TABLE c2 (a int, b int, c int, d int);
>
>
> --Drop parent's column
> ALTER TABLE p1 DROP COLUMN b;
> ALTER TABLE p1 DROP COLUMN c;
> ALTER TABLE p1 DROP COLUMN d;
>
>
> postgres=# \d p1
>   Table "public.p1"
>  Column |  Type   | Modifiers
> +-+---
>  a  | integer |
> Number of child tables: 2 (Use \d+ to list them.)
>
> postgres=# \d c1
>   Table "public.c1"
>  Column |  Type   | Modifiers
> +-+---
>  a  | integer |
> Inherits: p1
>
> postgres=# \d c2
>   Table "public.c2"
>  Column |  Type   | Modifiers
> +-+---
>  a  | integer |
>  b  | integer |
>  c  | integer |
>  d  | integer |
> Inherits: p1
>
>
> --
>

Seems like you missed the following command in the demonstration test:

ALTER TABLE c2 INHERIT p1;


> You can see the columns are not dropped from the child table c2, which we
> made inherit using the ALTER command.
>


I took a quick look at this and did some testing. The patch looks good to me.
ALTER TABLE ... INHERIT is missing the attislocal = false handling.
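The missing piece can be sketched like this (a hypothetical simplified pg_attribute row; the real fix updates the catalog tuple inside MergeAttributesIntoExisting()):

```c
#include <stdbool.h>

/* Toy stand-in for the relevant pg_attribute columns. */
typedef struct
{
    bool attislocal;   /* defined locally, i.e. survives a parent DROP COLUMN */
    int  attinhcount;  /* number of direct parents supplying this column */
} toy_attribute;

/*
 * Called for each child column matched against a parent column during
 * ALTER TABLE ... INHERIT.  Bumping attinhcount already happens; the
 * proposed fix additionally clears attislocal so a later DROP COLUMN on
 * the parent cascades to the child, matching CREATE TABLE ... INHERITS.
 */
static void
merge_attribute_into_existing(toy_attribute *att)
{
    att->attinhcount++;
    att->attislocal = false;
}
```

This is only a sketch of the patch's behavior as described in the thread; whether clearing attislocal here is the right semantics is exactly what the review is about.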


> Regards,
> Amul Sul
>
>
>


-- 
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-06 Thread Rushabh Lathia
ed getting the core code involved,
I thought that we could do the mandatory checks in the core itself, keeping
them completely out of dml_is_pushdown_safe(). Please correct me if I am
wrong.

.) Documentation for the new API is missing (fdw-callbacks).



Regards,


On Fri, Dec 25, 2015 at 3:30 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2015/12/24 4:34, Robert Haas wrote:
>
>> On Wed, Dec 23, 2015 at 5:50 AM, Rushabh Lathia
>> <rushabh.lat...@gmail.com> wrote:
>>
>>> +1.
>>>
>>> I like the idea of a separate FDW API for the DML pushdown. I was
>>> wondering: can't we re-use IterateForeignScan(ForeignScanState *node)
>>> rather than introducing the new IterateDMLPushdown(ForeignScanState
>>> *node) API?
>>>
>>
> Yeah, I think we need to ask ourselves what advantage we're getting
>> out of adding any new core APIs.  Marking the scan as a pushed-down
>> update or delete has some benefit in terms of making the information
>> visible via EXPLAIN, but even that's a pretty thin benefit.  The
>> iterate method seems to just complicate the core code without any
>> benefit at all.  More generally, there is very, very little code in
>> this patch that accomplishes anything that could not be done just as
>> well with the existing methods.  So why are we even doing these core
>> changes?
>>
>
> From the FDWs' point of view, ISTM that what FDWs have to do for
> IterateDMLPushdown is quite different from what FDWs have to do for
> IterateForeignScan; eg, IterateDMLPushdown must work in accordance with
> presence/absence of a RETURNING list.  (In addition to that,
> IterateDMLPushdown has been designed so that it must make the scan tuple
> available to later RETURNING projection in nodeModifyTable.c.)  So, I think
> that it's better to FDWs to add separate APIs for the DML pushdown, making
> the FDW code much simpler.  So based on that idea, I added the postgres_fdw
> changes to the patch.  Attached is an updated version of the patch, which
> is still WIP, but I'd be happy if I could get any feedback.
>
> Tom seemed to think that we could centralize some checks in the core
>> code, say, related to triggers, or to LIMIT.  But there's nothing like
>> that in this patch, so I'm not really understanding the point.
>>
>
> For the trigger check, I added relation_has_row_level_triggers.  I placed
> that function in postgres_fdw.c in the updated patch, but I think that by
> placing that function in the core, FDWs can share that function.  As for
> the LIMIT, I'm not sure we can do something about that.
>
> I think the current design allows us to handle a pushed-down update on a
> join, ie, "UPDATE foo ... FROM bar ..." where both foo and bar are remote,
> which was Tom's concern, but I'll leave that for another patch.  Also, I
> think the current design could also extend to push down INSERT .. RETURNING
> .., but I'd like to leave that for future work.
>
> I'll add this to the next CF.
>
> Best regards,
> Etsuro Fujita
>




Rushabh Lathia


Re: [HACKERS] Getting sorted data from foreign server for merge join

2015-12-19 Thread Rushabh Lathia
On Sat, Dec 19, 2015 at 2:17 AM, Robert Haas <robertmh...@gmail.com> wrote:

> On Thu, Dec 17, 2015 at 3:32 AM, Ashutosh Bapat
> <ashutosh.ba...@enterprisedb.com> wrote:
> > On Wed, Dec 9, 2015 at 12:14 AM, Robert Haas <robertmh...@gmail.com>
> wrote:
> >> On Wed, Dec 2, 2015 at 6:45 AM, Rushabh Lathia <
> rushabh.lat...@gmail.com>
> >> wrote:
> >> > Thanks Ashutosh.
> >> >
> >> > Re-reviewed and Re-verified the patch, pg_sort_all_pd_v5.patch
> >> > looks good to me.
> >>
> >> This patch needs a rebase.
> >
> > Done.
>
> Thanks.
>
> >> It's not going to work to say this is a patch proposed for commit when
> >> it's still got a TODO comment in it that obviously needs to be
> >> changed.   And the formatting of that long comment is pretty weird,
> >> too, and not consistent with other functions in that same file (e.g.
> >> get_remote_estimate, ec_member_matches_foreign, create_cursor).
> >>
> >
> > The TODO was present in v4 but not in v5 and is not present in v6
> attached
> > here.. Formatted comment according estimate_path_cost_size(),
> > convert_prep_stmt_params().
>
> Hrm, I must have been looking at the wrong version somehow.  Sorry about
> that.
>
> >> Aside from that, I think before we commit this, somebody should do
> >> some testing that demonstrates that this is actually a good idea.  Not
> >> as part of the test case set for this patch, but just in general.
> >> Merge joins are typically going to be relevant for large tables, but
> >> the examples in the regression tests are necessarily tiny.  I'd like
> >> to see some sample data and some sample queries that get appreciably
> >> faster with this code.  If we can't find any, we don't need the code.
> >>
> >
> > I tested the patch on my laptop with two types of queries, a join between
> > two foreign tables on different foreign servers (pointing to the same
> self
> > server) and a join between one foreign and one local table. The foreign
> > tables and servers are created using sort_pd_setup.sql attached. Foreign
> > tables pointed to table with index useful for join clause. Both the
> joining
> > tables had 10M rows. The execution time of query was measured for 100
> runs
> > and average and standard deviation were calculated (using function
> > query_execution_stats() in script sort_pd.sql) and are presented below.
>
> OK, cool.
>
> I went over this patch in some detail today and did a lot of cosmetic
> cleanup.  The results are attached.  I'm fairly happy with this
> version, but let me know what you think.  Of course, feedback from
> others is more than welcome also.
>
>
I had a quick look at the patch changes and also performed a basic
sanity check. The patch looks good to me.

Thanks.

--
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>



-- 
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2015-12-23 Thread Rushabh Lathia
On Mon, Dec 21, 2015 at 6:20 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2015/11/26 18:00, Etsuro Fujita wrote:
>
>> On 2015/11/25 20:36, Thom Brown wrote:
>>
>>> On 13 May 2015 at 04:10, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
>>> wrote:
>>>
>>>> On 2015/05/13 0:55, Stephen Frost wrote:
>>>>
>>>>> While the EXPLAIN output changed, the structure hasn't really changed
>>>>> from what was discussed previously and there's not been any real
>>>>> involvment from the core code in what's happening here.
>>>>>
>>>>> Clearly, the documentation around how to use the FDW API hasn't changed
>>>>> at all and there's been no additions to it for handling bulk work.
>>>>> Everything here continues to be done inside of postgres_fdw, which
>>>>> essentially ignores the prescribed "Update/Delete one tuple" interface
>>>>> for ExecForeignUpdate/ExecForeignDelete.
>>>>>
>>>>> I've spent the better part of the past two days trying to reason my way
>>>>> around that while reviewing this patch and I haven't come out the other
>>>>> side any happier with this approach than I was back in
>>>>> 20140911153049.gc16...@tamriel.snowman.net.
>>>>>
>>>>> There are other things that don't look right to me, such as what's
>>>>> going
>>>>> on at the bottom of push_update_down(), but I don't think there's much
>>>>> point going into it until we figure out what the core FDW API here
>>>>> should look like.  It might not be all that far from what we have now,
>>>>> but I don't think we can just ignore the existing, documented, API.
>>>>>
>>>>
> OK, I'll try to introduce the core FDW API for this (and make changes
>>>> to the
>>>> core code) to address your previous comments.
>>>>
>>>
> I'm a bit behind in reading up on this, so maybe it's been covered
>>> since, but is there a discussion of this API on another thread, or a
>>> newer patch available?
>>>
>>
> To address Stephen's comments, I'd like to propose the following FDW APIs:
>
> bool
> PlanDMLPushdown (PlannerInfo *root,
>  ModifyTable *plan,
>  Index resultRelation,
>  int subplan_index);
>
> This is called in make_modifytable, before calling PlanForeignModify. This
> checks to see whether a given UPDATE/DELETE .. RETURNING .. is
> pushdown-safe and if so, performs planning actions needed for the DML
> pushdown.  The idea is to modify a ForeignScan subplan accordingly as in
> the previous patch.  If the DML is pushdown-safe, this returns true, and we
> don't call PlanForeignModify anymore.  (Else returns false and call
> PlanForeignModify as before.)  When the DML is pushdown-safe, we hook the
> following FDW APIs located in nodeForeignscan.c, instead of
> BeginForeignModify, ExecForeignUpdate/ExecForeignDelete and
> EndForeignModify:
>
> void
> BeginDMLPushdown (ForeignScanState *node,
>   int eflags);
>
> This initializes the DML pushdown, like BeginForeignScan.
>
> TupleTableSlot *
> IterateDMLPushdown (ForeignScanState *node);
>
> This fetches one returning result from the foreign server, like
> IterateForeignScan, if having a RETURNING clause.  If not, just return an
> empty slot.  (I'm thinking that it's required that the FDW replaces the
> targetlist of the ForeignScan subplan to the RETURNING clause during
> PlanDMLPushdown, if having the clause, so that we do nothing at
> ModifyTable.)
>
> void
> EndDMLPushdown (ForeignScanState *node);
>
> This finishes the DML pushdown, like EndForeignScan.
>

+1.

I like the idea of a separate FDW API for the DML pushdown. I was wondering:
can't we re-use IterateForeignScan(ForeignScanState *node) rather than
introducing the new IterateDMLPushdown(ForeignScanState *node) API?
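The dispatch implied by the proposal above could look roughly like this. These are toy types for illustration only; the real hooks would live in FdwRoutine and be driven from nodeForeignscan.c:

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct ToyScanState ToyScanState;

/* Toy stand-in for the relevant FdwRoutine callbacks. */
typedef struct ToyFdwRoutine
{
    void *(*IterateForeignScan) (ToyScanState *node);
    void *(*IterateDMLPushdown) (ToyScanState *node);
} ToyFdwRoutine;

struct ToyScanState
{
    const ToyFdwRoutine *fdw;
    bool dml_pushed_down;   /* set when PlanDMLPushdown() returned true */
};

static int scan_calls = 0;
static int dml_calls = 0;

/* Demonstration stubs standing in for an FDW's callbacks. */
static void *
scan_stub(ToyScanState *node)
{
    (void) node;
    scan_calls++;
    return NULL;
}

static void *
dml_stub(ToyScanState *node)
{
    (void) node;
    dml_calls++;
    return NULL;
}

/* The executor picks the callback based on the plan-time decision. */
static void *
toy_exec_foreign_scan(ToyScanState *node)
{
    if (node->dml_pushed_down)
        return node->fdw->IterateDMLPushdown(node);
    return node->fdw->IterateForeignScan(node);
}
```

Re-using IterateForeignScan instead would collapse the two branches into one, at the cost of the FDW having to detect the pushed-down-DML case itself, which is the trade-off being discussed here.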


> I'm attaching a WIP patch, which only includes changes to the core.  I'm
> now working on the postgres_fdw patch to demonstrate that these APIs work
> well, but I'd be happy if I could get any feedback earlier.
>
> Best regards,
> Etsuro Fujita
>
>
>
>


-- 
Rushabh Lathia


Re: [HACKERS] Getting sorted data from foreign server for merge join

2015-11-26 Thread Rushabh Lathia
 speculatively and then test which ones survive
>> truncate_useless_pathkeys(), or something like that.  This isn't an
>> enormously complicated patch, but it duplicates more of the logic in
>> pathkeys.c than I'd like.
>>
>
>
> pathkeys_useful_for_merging(), truncate_useless_pathkeys() and host of
> functions in that area are crafted to assess the usefulness of given
> pathkeys which for regular tables are "speculated" from indexes on that
> table. Foreign tables do not have indexes and neither we have information
> about the indexes (if any) on foreign server, thus pathkeys to be assessed
> are not readily available. Hence we need some other way of "speculating"
> pathkeys for foreign tables. We can not just generate pathkeys because
> there are infinite possible expressions on which pathkeys can be built. So,
> we hunt for the expressions at various places like root_pathkeys, merge
> joinable clauses etc. The seeming duplication of code is because the places
> where we are hunting for useful pathkeys in case of foreign table are same
> as the places used for assessing usefulness for pathkeys in case of regular
> table. Thus in case of foreign tables, we always generate useful pathkeys,
> which do not need any further assessment. For regular tables we have set of
> pathkeys which need to be assessed. Because of this difference the code
> differs in details.
>
> right_merge_direction() compares whether a given pathkey (readily
> available or derived from an index) has the same direction as required by
> root->query_pathkeys or ascending direction. For foreign tables, the
> pathkeys are themselves created from root->query_pathkeys, thus doesn't
> require this assessment. The patch creates pathkeys other than those from
> root->query_pathkeys in the ascending order (like other places in the code
> eg. select_outer_pathkeys_for_merge()), which is what
> right_merge_direction() assesses. So again we don't need to call
> right_merge_direction().
>
> non-EC-derivable join is interesting. Thanks for bringing it out. These
> are mergejoinable clauses in an OUTER join, where columns from inner side
> can be NULL, and thus not exactly equal. I will incorporate that change in
> the patch. Again while doing so, we have to just pick up the right or left
> equivalence class from a given RestrictInfo and don't need to assess it
> further like pathkeys_useful_for_merging().
>
> Added this change in the attached patch.
>
>>
>> I'm inclined to think that we shouldn't consider pathkeys that might
>> be useful for merge-joining unless we're using remote estimates.  It
>> seems more speculative than pushing down a toplevel sort.
>>
>
> I agree with you, but let me elaborate why. The pathkeys we are
> generating are heuristic in nature and thus may not be useful in the same
> sense as pathkeys for regular tables. If use_remote_estimate is ON, the
> planner will spend time EXPLAINing multiple queries on the remote server,
> but it will be in a better position to cost their use. If
> use_remote_estimate is OFF, we would altogether lose the chance of a
> merge join, which doesn't look good to me. But speculative costing in
> this case can result in choosing a wrong plan, just as in the toplevel
> sort case - and choosing a wrong join has more serious implications than
> estimating a wrong cost for a toplevel sort.
>
>
>>
>> This patch seems rather light on test cases.
>>
>>
> Added tests. Let me know if you have any specific scenario in mind.
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>


-- 
Rushabh Lathia
www.EnterpriseDB.com


pg_sort_all_pd_v4.patch
Description: application/download

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-12 Thread Rushabh Lathia
On Thu, Jan 7, 2016 at 6:15 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2016/01/06 18:58, Rushabh Lathia wrote:
>
>> I started looking at the updated patch and definitely liked the new
>> approach.
>>
>
> Thanks for the review!
>
> With an initial look and test, overall things are looking great; I am still
>> reviewing the code changes, but here are a few early doubts/questions:
>>
>
>> .) What's the need for the following change?
>>
>> @@ -833,9 +833,6 @@ appendWhereClause(StringInfo buf,
>>  int nestlevel;
>>  ListCell   *lc;
>>
>> -   if (params)
>> -   *params = NIL;  /* initialize result list to empty */
>> -
>>  /* Set up context struct for recursion */
>>  context.root = root;
>>  context.foreignrel = baserel;
>> @@ -971,6 +968,63 @@ deparseUpdateSql(StringInfo buf, PlannerInfo *root,
>>   }
>>
>
> It is needed for deparsePushedDownUpdateSql to store params in both WHERE
> clauses and expressions to assign to the target columns
> into one params_list list.
>

Hmm, sorry, but I am still not getting the point; can you provide an
example to explain this?

.) When Tom Lane and Stephen Frost suggested getting the core code involved,
>> I thought that we could do the mandatory checks in core itself, moving them
>> completely out of dml_is_pushdown_safe(). Please correct me if I'm wrong.
>>
>
> The reason why I put that function in postgres_fdw.c is Check point 4:
>
> +  * 4. We can't push an UPDATE down, if any expressions to assign to the
> target
> +  * columns are unsafe to evaluate on the remote server.
>
>
Here I was talking about checks related to triggers, or to LIMIT. I think
the earlier thread talked about moving those mandatory checks into the core.
So maybe we can move those checks into make_modifytable() before calling
PlanDMLPushdown.

I think this depends on the capabilities of the FDW.
>
> .) Documentation for the new API is missing (fdw-callbacks).
>>
>
> Will add the docs.
>





-- 
Rushabh Lathia


[HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference

2016-06-21 Thread Rushabh Lathia
 | f  | f
 (7654,MARTIN,SALESMAN,7698,1981-09-28,1250.00,1400.00,30) | f  | t
 (7698,BLAKE,MANAGER,7839,1981-05-01,2850.00,,30)  | f  | f
 (7782,CLARK,MANAGER,7839,1981-06-09,2450.00,,10)  | f  | f
 (7788,SCOTT,ANALYST,7566,1987-04-19,3000.00,,20)  | f  | f
 (7839,KING,PRESIDENT,,1981-11-17,5000.00,,10) | f  | f
 (7844,TURNER,SALESMAN,7698,1981-09-08,1500.00,0.00,30)| f  | t
 (7876,ADAMS,CLERK,7788,1987-05-23,1100.00,,20)| f  | f
 (7900,JAMES,CLERK,7698,1981-12-03,950.00,,30) | f  | f
 (7902,FORD,ANALYST,7566,1981-12-03,3000.00,,20)   | f  | f
 (7934,MILLER,CLERK,7782,1982-01-23,1300.00,,10)   | f  | f
(14 rows)

Now I was under the impression that IS NOT NULL should always be the
inverse of IS NULL, but clearly that's not the case with a whole-row
reference. Looking further at the documentation, it says something
different for whole-row values:

https://www.postgresql.org/docs/9.5/static/functions-comparison.html

Note: If the expression is row-valued, then IS NULL is true when the row
expression itself is null or when all the row's fields are null, while
IS NOT NULL is true when the row expression itself is non-null and all the
row's fields are non-null. Because of this behavior, IS NULL and
IS NOT NULL do not always return inverse results for row-valued
expressions, i.e., a row-valued expression that contains both NULL and
non-null values will return false for both tests. This definition conforms
to the SQL standard, and is a change from the inconsistent behavior
exhibited by PostgreSQL versions prior to 8.2.


And as the above documentation clearly says, IS NULL and IS NOT NULL do
not always return inverse results for row-valued expressions. So we need
to change the deparse logic in postgres_fdw - but how? Maybe by using
IS NULL rather than IS NOT NULL?

Input/thoughts?
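The asymmetry described above is easy to reproduce directly with row
constructors, without any foreign tables involved (a quick sanity check,
assuming PostgreSQL 8.2 or later):

```sql
-- A row with a mix of NULL and non-NULL fields fails BOTH tests:
SELECT ROW(1, NULL) IS NULL     AS is_null,       -- f
       ROW(1, NULL) IS NOT NULL AS is_not_null;   -- f

-- Only an all-NULL row is IS NULL, and only an all-non-NULL row
-- is IS NOT NULL:
SELECT ROW(NULL, NULL) IS NULL  AS all_null,      -- t
       ROW(1, 2) IS NOT NULL    AS all_non_null;  -- t
```

So a deparsed `wholerow IS NOT NULL` on the remote side is not equivalent
to `NOT (wholerow IS NULL)`, which is the crux of the wrong-results issue.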

Regards
Rushabh Lathia
www.EnterpriseDB.com
CREATE TABLE dept (
deptno  numeric(2) NOT NULL CONSTRAINT dept_pk PRIMARY KEY,
dname   varchar(14) CONSTRAINT dept_dname_uq UNIQUE,
loc varchar(13)
);

CREATE TABLE emp (
empno   numeric(4) NOT NULL CONSTRAINT emp_pk PRIMARY KEY,
ename   varchar(10),
job varchar(9),
mgr numeric(4),
hiredateDATE,
sal numeric(7,2) CONSTRAINT emp_sal_ck CHECK (sal > 0),
commnumeric(7,2),
deptno  numeric(2) CONSTRAINT emp_ref_dept_fk
REFERENCES dept(deptno)
);


INSERT INTO dept VALUES (10,'ACCOUNTING','NEW YORK');
INSERT INTO dept VALUES (20,'RESEARCH','DALLAS');
INSERT INTO dept VALUES (30,'SALES','CHICAGO');
INSERT INTO dept VALUES (40,'OPERATIONS','BOSTON');

INSERT INTO emp VALUES (7369,'SMITH','CLERK',7902,'17-DEC-80',800,NULL,20);
INSERT INTO emp VALUES (7499,'ALLEN','SALESMAN',7698,'20-FEB-81',1600,300,30);
INSERT INTO emp VALUES (7521,'WARD','SALESMAN',7698,'22-FEB-81',1250,500,30);
INSERT INTO emp VALUES (7566,'JONES','MANAGER',7839,'02-APR-81',2975,NULL,20);
INSERT INTO emp VALUES (7654,'MARTIN','SALESMAN',7698,'28-SEP-81',1250,1400,30);
INSERT INTO emp VALUES (7698,'BLAKE','MANAGER',7839,'01-MAY-81',2850,NULL,30);
INSERT INTO emp VALUES (7782,'CLARK','MANAGER',7839,'09-JUN-81',2450,NULL,10);
INSERT INTO emp VALUES (7788,'SCOTT','ANALYST',7566,'19-APR-87',3000,NULL,20);
INSERT INTO emp VALUES (7839,'KING','PRESIDENT',NULL,'17-NOV-81',5000,NULL,10);
INSERT INTO emp VALUES (7844,'TURNER','SALESMAN',7698,'08-SEP-81',1500,0,30);
INSERT INTO emp VALUES (7876,'ADAMS','CLERK',7788,'23-MAY-87',1100,NULL,20);
INSERT INTO emp VALUES (7900,'JAMES','CLERK',7698,'03-DEC-81',950,NULL,30);
INSERT INTO emp VALUES (7902,'FORD','ANALYST',7566,'03-DEC-81',3000,NULL,20);
INSERT INTO emp VALUES (7934,'MILLER','CLERK',7782,'23-JAN-82',1300,NULL,10);

CREATE EXTENSION postgres_fdw;
CREATE SERVER s1 FOREIGN DATA WRAPPER postgres_fdw options (dbname 'postgres');
CREATE USER MAPPING FOR CURRENT_USER SERVER s1;

CREATE foreign TABLE f_dept (
deptno  numeric(2),
dname   varchar(14),
loc varchar(13)
) SERVER s1 options (table_name 'dept');

CREATE foreign TABLE f_emp (
empno   numeric(4),
ename   varchar(10),
job varchar(9),
mgr numeric(4),
hiredateDATE,
sal numeric(7,2),
commnumeric(7,2),
deptno  numeric(2) )
SERVER s1 options (table_name 'emp');
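(As an aside, on PostgreSQL 9.5 and later the same foreign-table
definitions could be pulled in automatically rather than declared by hand.
A sketch, assuming the loopback server `s1` defined above; note the
imported tables keep their remote names, and since the server points back
at the same database where local `emp`/`dept` already live in `public`,
they are imported into a separate schema here:)

```sql
-- Derive foreign table definitions from the remote tables themselves.
CREATE SCHEMA remote;
IMPORT FOREIGN SCHEMA public
    LIMIT TO (emp, dept)
    FROM SERVER s1 INTO remote;
-- remote.emp and remote.dept now behave like f_emp and f_dept above.
```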




Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-14 Thread Rushabh Lathia
On Thu, Jan 14, 2016 at 2:00 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2016/01/12 20:31, Rushabh Lathia wrote:
>
>> On Thu, Jan 7, 2016 at 6:15 PM, Etsuro Fujita
>> <fujita.ets...@lab.ntt.co.jp <mailto:fujita.ets...@lab.ntt.co.jp>> wrote:
>> On 2016/01/06 18:58, Rushabh Lathia wrote:
>> .) What the need of following change ?
>>
>> @@ -833,9 +833,6 @@ appendWhereClause(StringInfo buf,
>>   int nestlevel;
>>   ListCell   *lc;
>>
>> -   if (params)
>> -   *params = NIL;  /* initialize result list to
>> empty */
>> -
>>   /* Set up context struct for recursion */
>>   context.root = root;
>>   context.foreignrel = baserel;
>> @@ -971,6 +968,63 @@ deparseUpdateSql(StringInfo buf,
>> PlannerInfo *root,
>>}
>>
>
> It is needed for deparsePushedDownUpdateSql to store params in both
>> WHERE clauses and expressions to assign to the target columns
>> into one params_list list.
>>
>
> Hmm sorry but I am still not getting the point, can you provide some
>> example to explain this ?
>>
>
> Sorry, maybe my explanation was not enough.  Consider:
>
> postgres=# create foreign table ft1 (a int, b int) server myserver options
> (table_name 't1');
> postgres=# insert into ft1 values (0, 0);
> postgres=# prepare mt(int, int) as update ft1 set a = $1 where b = $2;
> postgres=# explain verbose execute mt(1, 0);
> postgres=# explain verbose execute mt(1, 0);
> postgres=# explain verbose execute mt(1, 0);
> postgres=# explain verbose execute mt(1, 0);
> postgres=# explain verbose execute mt(1, 0);
>
> After the 5 executions of mt we have
>
> postgres=# explain verbose execute mt(1, 0);
>  QUERY PLAN
>
> 
>  Update on public.ft1  (cost=100.00..140.35 rows=12 width=10)
>->  Foreign Update on public.ft1  (cost=100.00..140.35 rows=12 width=10)
>  Remote SQL: UPDATE public.t1 SET a = $1::integer WHERE ((b =
> $2::integer))
> (3 rows)
>
> If we do that initialization in appendWhereClause, we would get a wrong
> params_list list and a wrong remote pushed-down query for the last mt() in
> deparsePushedDownUpdateSql.
>

Strange - I am seeing the same behaviour with or without that
initialization in appendWhereClause. After the 5 executions of mt, with or
without it, I am getting the following output:

postgres=# explain verbose execute mt(1, 0);
 QUERY
PLAN

 Update on public.ft2  (cost=100.00..140.35 rows=12 width=10)
   ->  Foreign Update on public.ft2  (cost=100.00..140.35 rows=12 width=10)
 Remote SQL: UPDATE public.t2 SET a = $1::integer WHERE ((b =
$2::integer))
(3 rows)



>
> .) When Tom Lane and Stephen Frost suggested getting the core
>> code involved,
>> I thought that we can do the mandatory checks into core it self
>> and making
>> completely out of dml_is_pushdown_safe(). Please correct me
>>
>
> The reason why I put that function in postgres_fdw.c is Check point 4:
>>
>> +  * 4. We can't push an UPDATE down, if any expressions to assign
>> to the target
>> +  * columns are unsafe to evaluate on the remote server.
>>
>
> Here I was talking about checks related to triggers, or to LIMIT. I think
>> earlier thread talked about those mandatory check to the core. So may
>> be we can move those checks into make_modifytable() before calling
>> the PlanDMLPushdown.
>>
>
> Noticed that.  Will do.
>
> BTW, I keep a ForeignScan node pushing down an update to the remote
> server, in the updated patches.  I have to admit that that seems like
> rather a misnomer.  So, it might be worth adding a new ForeignUpdate node,
> but my concern about that is that if doing so, we would have a lot of
> duplicate code in ForeignUpdate and ForeignScan.  What do you think about
> that?
>
>
Yes, I noticed that in the patch and was about to point it out in my
final review. In the first review I mainly focused on functionality
testing and other overview things. Another reason I hadn't raised it in
my first review round is that I was not quite sure whether we need
separate new ForeignUpdate and ForeignDelete nodes and want to duplicate
the code, and I was also not quite sure what we would achieve by doing
that.

So I thought I would leave this as an open question in my final review
comment and take a committer's opinion on it. Since you already raised
the question, let's get others' opinions on it.

Regards,



-- 
Rushabh Lathia
www.EnterpriseDB.com


[HACKERS] ERROR: ORDER/GROUP BY expression not found in targetlist

2016-06-27 Thread Rushabh Lathia
Hi,

Consider the below testcase:

CREATE TABLE tab(
  c1 INT NOT NULL,
  c2 INT NOT NULL
);
INSERT INTO tab VALUES (1, 2);
INSERT INTO tab VALUES (2, 1);
INSERT INTO tab VALUES (1, 2);


case 1:

SELECT c.c1, c.c2 from tab C WHERE c.c2 = ANY (
SELECT 1 FROM tab A WHERE a.c2 IN (
  SELECT 1 FROM tab B WHERE a.c1 = c.c1
  GROUP BY rollup(a.c1)
)
GROUP BY cube(c.c2)
  )
  GROUP BY grouping sets(c.c1, c.c2)
  ORDER BY 1, 2 DESC;
ERROR:  ORDER/GROUP BY expression not found in targetlist

case 2:

create sequence s;
SELECT setval('s', max(100)) from tab;
ERROR:  ORDER/GROUP BY expression not found in targetlist

Looking further I found that error started coming with below commit:

commit aeb9ae6457865c8949641d71a9523374d843a418
Author: Tom Lane <t...@sss.pgh.pa.us>
Date:   Thu May 26 14:52:24 2016 -0400

Disable physical tlist if any Var would need multiple sortgroupref
labels.

If we revert the above commit, the given tests run as expected.


Regards,
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-02-04 Thread Rushabh Lathia
On Wed, Feb 3, 2016 at 3:31 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2016/01/28 15:20, Rushabh Lathia wrote:
>
>> On Thu, Jan 28, 2016 at 11:33 AM, Etsuro Fujita
>> <fujita.ets...@lab.ntt.co.jp <mailto:fujita.ets...@lab.ntt.co.jp>> wrote:
>>
>> On 2016/01/27 21:23, Rushabh Lathia wrote:
>>
>> If I understood correctly, above documentation means, that if
>> FDW have
>> DMLPushdown APIs that is enough. But in reality thats not the
>> case, we
>> need  ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete
>> in case
>> DML is not pushable.
>>
>> And here fact is DMLPushdown APIs are optional for FDW, so that
>> if FDW
>> don't have DMLPushdown APIs they can still very well perform the
>> DML
>> operations using ExecForeignInsert, ExecForeignUpdate, or
>> ExecForeignDelete.
>>
>
> So documentation should be like:
>>
>> If the IsForeignRelUpdatable pointer is set to NULL, foreign
>> tables are
>> assumed to be insertable, updatable, or deletable if the FDW
>> provides
>> ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete
>> respectively,
>>
>> If FDW provides DMLPushdown APIs and the DML are pushable to the
>> foreign
>> server, then FDW still needs ExecForeignInsert,
>> ExecForeignUpdate, or
>> ExecForeignDelete for the non-pushable DML operation.
>>
>> What's your opinion ?
>>
>
> I agree that we should add this to the documentation, too.
>>
>
> I added docs to the IsForeignRelUpdatable documentation.  Also, a brief
> introductory remark has been added at the beginning of the DML pushdown
> APIs' documentation.
>
> BTW, if I understand correctly, I think we should also modify
>> relation_is_updatabale() accordingly.  Am I right?
>>
>
> Yep, we need to modify relation_is_updatable().
>>
>
> I thought I'd modify that function in the same way as
> CheckValidResultRel(), but I noticed that we cannot do that, because we
> don't have any information on whether each update is pushed down to the
> remote server by PlanDMLPushdown, during relation_is_updatabale().  So, I
> left that function as-is.  relation_is_updatabale() is just used for
> display in the information_schema views, so ISTM that that function is fine
> as-is.  (As for CheckValidResultRel(), I revised it so as to check the
> presence of DML pushdown APIs after checking the existing APIs if the given
> command will be pushed down.  The reason is because we assume the presence
> of the existing APIs, anyway.)
>
> I revised other docs and some comments, mostly for consistency.
>
>
I just started reviewing this and realized that the patch does not apply
cleanly on the latest source; it has some conflicts. Can you please upload
a rebased version of the patch?


Attached is an updated version of the patch, which has been created on top
> of the updated version of the bugfix patch posted by Robert in [1]
> (attached).
>
>
> Best regards,
> Etsuro Fujita
>
> [1]
> http://www.postgresql.org/message-id/ca+tgmoz40j2uc5ac1nxu03oj4crvolks15xx+ptfp-1u-8z...@mail.gmail.com
>



-- 
Rushabh Lathia


Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-02-10 Thread Rushabh Lathia
Fujita-san, I am attaching an updated version of the patch, which adds
the documentation update.

Once we finalize this, I feel good about the patch and think that we
could mark it as ready for committer.

On Fri, Feb 5, 2016 at 5:33 PM, Rushabh Lathia <rushabh.lat...@gmail.com>
wrote:

>
>
> On Fri, Feb 5, 2016 at 4:46 PM, Rushabh Lathia <rushabh.lat...@gmail.com>
> wrote:
>
>>
>>
>> On Wed, Feb 3, 2016 at 3:31 PM, Etsuro Fujita <
>> fujita.ets...@lab.ntt.co.jp> wrote:
>>
>>> On 2016/01/28 15:20, Rushabh Lathia wrote:
>>>
>>>> On Thu, Jan 28, 2016 at 11:33 AM, Etsuro Fujita
>>>> <fujita.ets...@lab.ntt.co.jp <mailto:fujita.ets...@lab.ntt.co.jp>>
>>>> wrote:
>>>>
>>>> On 2016/01/27 21:23, Rushabh Lathia wrote:
>>>>
>>>> If I understood correctly, above documentation means, that if
>>>> FDW have
>>>> DMLPushdown APIs that is enough. But in reality thats not the
>>>> case, we
>>>> need  ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete
>>>> in case
>>>> DML is not pushable.
>>>>
>>>> And here fact is DMLPushdown APIs are optional for FDW, so that
>>>> if FDW
>>>> don't have DMLPushdown APIs they can still very well perform
>>>> the DML
>>>> operations using ExecForeignInsert, ExecForeignUpdate, or
>>>> ExecForeignDelete.
>>>>
>>>
>>> So documentation should be like:
>>>>
>>>> If the IsForeignRelUpdatable pointer is set to NULL, foreign
>>>> tables are
>>>> assumed to be insertable, updatable, or deletable if the FDW
>>>> provides
>>>> ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete
>>>> respectively,
>>>>
>>>> If FDW provides DMLPushdown APIs and the DML are pushable to the
>>>> foreign
>>>> server, then FDW still needs ExecForeignInsert,
>>>> ExecForeignUpdate, or
>>>> ExecForeignDelete for the non-pushable DML operation.
>>>>
>>>> What's your opinion ?
>>>>
>>>
>>> I agree that we should add this to the documentation, too.
>>>>
>>>
>>> I added docs to the IsForeignRelUpdatable documentation.  Also, a brief
>>> introductory remark has been added at the beginning of the DML pushdown
>>> APIs' documentation.
>>>
>>> BTW, if I understand correctly, I think we should also modify
>>>> relation_is_updatabale() accordingly.  Am I right?
>>>>
>>>
>>> Yep, we need to modify relation_is_updatable().
>>>>
>>>
>>> I thought I'd modify that function in the same way as
>>> CheckValidResultRel(), but I noticed that we cannot do that, because we
>>> don't have any information on whether each update is pushed down to the
>>> remote server by PlanDMLPushdown, during relation_is_updatabale().
>>
>>
>> Sorry I didn't get you here. Can't resultRelInfo->ri_FdwPushdown gives
>> information update whether update is pushed down safe or not ? What my
>> concern here is, lets say resultRelInfo->ri_FdwPushdown marked as true
>> (PlanDMLPushdown return true), but later into CheckValidResultRel() it
>> found out that missing BeginDMLPushdown, IterateDMLPushdown and
>> EndDMLPushdown APIs and it will end up with error.
>>
>> What I think CheckValidResultRel() should do is, if
>> resultRelInfo->ri_FdwPushdown is true then check for the DMLPushdown API
>> and if it doesn't find those API then check for traditional APIs
>> (ExecForeignInsert, ExecForeignUpdate or ExecForeignDelete). And when it
>> doesn't find both then it should return an error.
>>
>> I changed CheckValidResultRel(), where
>>
>> 1) Don't throw an error if resultRelInfo->ri_FdwPushdown is true and
>> DMLPushdown APIs are missing as query can still perform operation with
>> traditional ExecForeign APIs. So in this situation just marked
>> resultRelInfo->ri_FdwPushdown to false.
>>
>> (Wondering can we add the checks for DMLPushdown APIs into
>> PlanDMLPushdown as additional check? Means PlanDMLPushdown should return
>> true only if FDW provides the BeginDMLPushdown & IterateDMLPushdown &
>> EndDMLPushdown APIs ? What you say ?)
>>
>>

Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-02-05 Thread Rushabh Lathia
On Wed, Feb 3, 2016 at 3:31 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2016/01/28 15:20, Rushabh Lathia wrote:
>
>> On Thu, Jan 28, 2016 at 11:33 AM, Etsuro Fujita
>> <fujita.ets...@lab.ntt.co.jp <mailto:fujita.ets...@lab.ntt.co.jp>> wrote:
>>
>> On 2016/01/27 21:23, Rushabh Lathia wrote:
>>
>> If I understood correctly, above documentation means, that if
>> FDW have
>> DMLPushdown APIs that is enough. But in reality thats not the
>> case, we
>> need  ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete
>> in case
>> DML is not pushable.
>>
>> And here fact is DMLPushdown APIs are optional for FDW, so that
>> if FDW
>> don't have DMLPushdown APIs they can still very well perform the
>> DML
>> operations using ExecForeignInsert, ExecForeignUpdate, or
>> ExecForeignDelete.
>>
>
> So documentation should be like:
>>
>> If the IsForeignRelUpdatable pointer is set to NULL, foreign
>> tables are
>> assumed to be insertable, updatable, or deletable if the FDW
>> provides
>> ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete
>> respectively,
>>
>> If FDW provides DMLPushdown APIs and the DML are pushable to the
>> foreign
>> server, then FDW still needs ExecForeignInsert,
>> ExecForeignUpdate, or
>> ExecForeignDelete for the non-pushable DML operation.
>>
>> What's your opinion ?
>>
>
> I agree that we should add this to the documentation, too.
>>
>
> I added docs to the IsForeignRelUpdatable documentation.  Also, a brief
> introductory remark has been added at the beginning of the DML pushdown
> APIs' documentation.
>
> BTW, if I understand correctly, I think we should also modify
>> relation_is_updatabale() accordingly.  Am I right?
>>
>
> Yep, we need to modify relation_is_updatable().
>>
>
> I thought I'd modify that function in the same way as
> CheckValidResultRel(), but I noticed that we cannot do that, because we
> don't have any information on whether each update is pushed down to the
> remote server by PlanDMLPushdown, during relation_is_updatabale().


Sorry, I didn't get you here. Can't resultRelInfo->ri_FdwPushdown tell us
whether the update is safe to push down or not? My concern here is: let's
say resultRelInfo->ri_FdwPushdown is marked true (PlanDMLPushdown returned
true), but later CheckValidResultRel() finds that the BeginDMLPushdown,
IterateDMLPushdown and EndDMLPushdown APIs are missing, and it ends up
throwing an error.

What I think CheckValidResultRel() should do is: if
resultRelInfo->ri_FdwPushdown is true, check for the DMLPushdown APIs, and
if it doesn't find those APIs, check for the traditional APIs
(ExecForeignInsert, ExecForeignUpdate or ExecForeignDelete). Only when it
finds neither should it throw an error.

I changed CheckValidResultRel() so that:

1) It doesn't throw an error if resultRelInfo->ri_FdwPushdown is true and
the DMLPushdown APIs are missing, since the query can still perform the
operation with the traditional ExecForeign APIs. In this situation it just
marks resultRelInfo->ri_FdwPushdown as false.

(I wonder whether we could add the checks for the DMLPushdown APIs into
PlanDMLPushdown as an additional check - that is, PlanDMLPushdown should
return true only if the FDW provides the BeginDMLPushdown,
IterateDMLPushdown and EndDMLPushdown APIs. What do you say?)

2) It doesn't throw an error if resultRelInfo->ri_FdwPushdown is true and
the DMLPushdown APIs are present but the ExecForeign APIs are missing.

3) It throws an error if resultRelInfo->ri_FdwPushdown is false and the
ExecForeign APIs are missing.

Attached is the WIP patch; do share your thoughts.
(It needs to be applied on top of the V6 patch.)


So, I left that function as-is.  relation_is_updatabale() is just used for
> display in the information_schema views, so ISTM that that function is fine
> as-is.  (As for CheckValidResultRel(), I revised it so as to check the
> presence of DML pushdown APIs after checking the existing APIs if the given
> command will be pushed down.  The reason is because we assume the presence
> of the existing APIs, anyway.)
>
>
I revised other docs and some comments, mostly for consistency.
>
> Attached is an updated version of the patch, which has been created on top
> of the updated version of the bugfix patch posted by Robert in [1]
> (attached).
>
>
> Best regards,
> Etsuro Fujita
>
> [1]
> http://www.postgresql.org/message-id/ca+tgmoz40j2uc5ac1nxu03oj4crvolks15xx+ptfp-1u-8z...@mail.gmail.com
>

Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-02-05 Thread Rushabh Lathia
On Fri, Feb 5, 2016 at 4:46 PM, Rushabh Lathia <rushabh.lat...@gmail.com>
wrote:

>
>
> On Wed, Feb 3, 2016 at 3:31 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp
> > wrote:
>
>> On 2016/01/28 15:20, Rushabh Lathia wrote:
>>
>>> On Thu, Jan 28, 2016 at 11:33 AM, Etsuro Fujita
>>> <fujita.ets...@lab.ntt.co.jp <mailto:fujita.ets...@lab.ntt.co.jp>>
>>> wrote:
>>>
>>> On 2016/01/27 21:23, Rushabh Lathia wrote:
>>>
>>> If I understood correctly, above documentation means, that if
>>> FDW have
>>> DMLPushdown APIs that is enough. But in reality thats not the
>>> case, we
>>> need  ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete
>>> in case
>>> DML is not pushable.
>>>
>>> And here fact is DMLPushdown APIs are optional for FDW, so that
>>> if FDW
>>> don't have DMLPushdown APIs they can still very well perform the
>>> DML
>>> operations using ExecForeignInsert, ExecForeignUpdate, or
>>> ExecForeignDelete.
>>>
>>
>> So documentation should be like:
>>>
>>> If the IsForeignRelUpdatable pointer is set to NULL, foreign
>>> tables are
>>> assumed to be insertable, updatable, or deletable if the FDW
>>> provides
>>> ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete
>>> respectively,
>>>
>>> If FDW provides DMLPushdown APIs and the DML are pushable to the
>>> foreign
>>> server, then FDW still needs ExecForeignInsert,
>>> ExecForeignUpdate, or
>>> ExecForeignDelete for the non-pushable DML operation.
>>>
>>> What's your opinion ?
>>>
>>
>> I agree that we should add this to the documentation, too.
>>>
>>
>> I added docs to the IsForeignRelUpdatable documentation.  Also, a brief
>> introductory remark has been added at the beginning of the DML pushdown
>> APIs' documentation.
>>
>> BTW, if I understand correctly, I think we should also modify
>>> relation_is_updatabale() accordingly.  Am I right?
>>>
>>
>> Yep, we need to modify relation_is_updatable().
>>>
>>
>> I thought I'd modify that function in the same way as
>> CheckValidResultRel(), but I noticed that we cannot do that, because we
>> don't have any information on whether each update is pushed down to the
>> remote server by PlanDMLPushdown, during relation_is_updatabale().
>
>
> Sorry I didn't get you here. Can't resultRelInfo->ri_FdwPushdown gives
> information update whether update is pushed down safe or not ? What my
> concern here is, lets say resultRelInfo->ri_FdwPushdown marked as true
> (PlanDMLPushdown return true), but later into CheckValidResultRel() it
> found out that missing BeginDMLPushdown, IterateDMLPushdown and
> EndDMLPushdown APIs and it will end up with error.
>
> What I think CheckValidResultRel() should do is, if
> resultRelInfo->ri_FdwPushdown is true then check for the DMLPushdown API
> and if it doesn't find those API then check for traditional APIs
> (ExecForeignInsert, ExecForeignUpdate or ExecForeignDelete). And when it
> doesn't find both then it should return an error.
>
> I changed CheckValidResultRel(), where
>
> 1) Don't throw an error if resultRelInfo->ri_FdwPushdown is true and
> DMLPushdown APIs are missing as query can still perform operation with
> traditional ExecForeign APIs. So in this situation just marked
> resultRelInfo->ri_FdwPushdown to false.
>
> (Wondering can we add the checks for DMLPushdown APIs into PlanDMLPushdown
> as additional check? Means PlanDMLPushdown should return true only if FDW
> provides the BeginDMLPushdown & IterateDMLPushdown & EndDMLPushdown APIs ?
> What you say ?)
>
>
On second thought, we should not give the FDW the responsibility of
checking for the APIs. So maybe we should call PlanDMLPushdown only if the
BeginDMLPushdown, IterateDMLPushdown and EndDMLPushdown APIs are all
present in the FDW. That means preparing a DMLPushdown plan only when all
the required APIs are available in the FDW. This will also reduce the
changes to CheckValidResultRel().

Thanks, Ashutosh Bapat, for the healthy discussion.

PFA the patch.



> 2) Don't throw an error if resultRelInfo->ri_FdwPushdown is true and
> DMLPushdown APIs is present but ExecForeign APIs are missing.
> 3) Throw an error if resultRelInfo->ri_FdwPushdown is false and
> ExecForeign APIs a

Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-27 Thread Rushabh Lathia
On Thu, Jan 28, 2016 at 11:33 AM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp
> wrote:

> On 2016/01/27 21:23, Rushabh Lathia wrote:
>
>> If I understood correctly, above documentation means, that if FDW have
>> DMLPushdown APIs that is enough. But in reality thats not the case, we
>> need  ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete in case
>> DML is not pushable.
>>
>> And here fact is DMLPushdown APIs are optional for FDW, so that if FDW
>> don't have DMLPushdown APIs they can still very well perform the DML
>> operations using ExecForeignInsert, ExecForeignUpdate, or
>> ExecForeignDelete.
>>
>
> I agree with you.  I guess I was wrong. sorry.
>
> So documentation should be like:
>>
>> If the IsForeignRelUpdatable pointer is set to NULL, foreign tables are
>> assumed to be insertable, updatable, or deletable if the FDW provides
>> ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete respectively,
>>
>> If FDW provides DMLPushdown APIs and the DML are pushable to the foreign
>> server, then FDW still needs ExecForeignInsert, ExecForeignUpdate, or
>> ExecForeignDelete for the non-pushable DML operation.
>>
>> What's your opinion ?
>>
>
> I agree that we should add this to the documentation, too.
>
> BTW, if I understand correctly, I think we should also modify
> relation_is_updatabale() accordingly.  Am I right?
>

Yep, we need to modify relation_is_updatable().


>
> Best regards,
> Etsuro Fujita
>
>
>


-- 
Rushabh Lathia


Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-02-22 Thread Rushabh Lathia
I did another round of review of the latest patch as well as performing
sanity tests, and haven't found any functional issues. I found a couple of
issues; see the in-line comments for them.

On Thu, Feb 18, 2016 at 3:15 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2016/02/12 21:19, Etsuro Fujita wrote:
>
>> Comments on specific points follow.
>>>
>>> This seems to need minor rebasing in the wake of the join pushdown patch.
>>>
>>
> Will do.
>>
>
> Done.
>
> +   /* Likewise for ForeignScan that has pushed down
>>> INSERT/UPDATE/DELETE */
>>> +   if (IsA(plan, ForeignScan) &&
>>> +   (((ForeignScan *) plan)->operation == CMD_INSERT ||
>>> +((ForeignScan *) plan)->operation == CMD_UPDATE ||
>>> +((ForeignScan *) plan)->operation == CMD_DELETE))
>>> +   return;
>>>
>>> I don't really understand why this is a good idea.  Since target lists
>>> are only displayed with EXPLAIN (VERBOSE), I don't really understand
>>> what is to be gained by suppressing them in any case at all, and
>>> therefore I don't understand why it's a good idea to do it here,
>>> either.  If there is a good reason, maybe the comment should explain
>>> what it is.
>>>
>>
> I think that displaying target lists would be confusing for users.
>>
>
> There seems no objection from you (or anyone), I left that as proposed,
> and added more comments.
>
> +   /* Check point 1 */
>>> +   if (operation == CMD_INSERT)
>>> +   return false;
>>> +
>>> +   /* Check point 2 */
>>> +   if (nodeTag(subplan) != T_ForeignScan)
>>> +   return false;
>>> +
>>> +   /* Check point 3 */
>>> +   if (subplan->qual != NIL)
>>> +   return false;
>>> +
>>> +   /* Check point 4 */
>>> +   if (operation == CMD_UPDATE)
>>>
>>> These comments are referring to something in the function header
>>> further up, but you could instead just delete the stuff from the
>>> header and mention the actual conditions here.
>>>
>>
> Will fix.
>>
>
> Done.
>
> The patch doesn't allow the postgres_fdw to push down an UPDATE/DELETE on
> a foreign join, so I added one more condition here not to handle such
> cases.  (I'm planning to propose a patch to handle such cases, in the next
> CF.)
>

I think we should place the foreign-join check before the target-column
check, as checking the foreign-join condition is less costly than checking
the target columns.


>
> - If the first condition is supposed accept only CMD_UPDATE or
>>> CMD_DELETE, say if (operation != CMD_UPDATE || operation !=
>>> CMD_DELETE) rather than doing it this way.  Is-not-insert may in this
>>> context be functionally equivalent to is-update-or-delete, but it's
>>> better to write the comment and the code so that they exactly match
>>> rather than kinda-match.
>>>
>>> - For point 2, use IsA(subplan, ForiegnScan).
>>>
>>
> Will fix.
>>
>
> Done.
>
> +   /*
>>> +* ignore subplan if the FDW pushes down the
>>> command to the remote
>>> +* server
>>> +*/
>>>
>>> This comment states what the code does, instead of explaining why it
>>> does it.  Please update it so that it explains why it does it.
>>>
>>
> Will update.
>>
>
> Done.
>
> +   List   *fdwPushdowns;   /* per-target-table FDW
>>> pushdown flags */
>>>
>>> Isn't a list of booleans an awfully inefficient representation?  How
>>> about a Bitmapset?
>>>
>>
> OK, will do.
>>
>
> Done.
>
> +   /*
>>> +* Prepare remote-parameter expressions for evaluation.
>>> (Note: in
>>> +* practice, we expect that all these expressions will be just
>>> Params, so
>>> +* we could possibly do something more efficient than using
>>> the full
>>> +* expression-eval machinery for this.  But probably there
>>> would be little
>>> +* benefit, and it'd require postgres_fdw to know more than is
>>> desirable
>>> +* about Param evaluation.)
>>> +*/
>>> +   dpstate->param_exprs = (List *)
>>> +   ExecInitExpr((Expr *) fsplan->fdw_exprs,
>>> + 

Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-25 Thread Rushabh Lathia
On Thu, Jan 21, 2016 at 3:20 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2016/01/20 19:57, Rushabh Lathia wrote:
>
>> Overall I am quite done with the review of this patch. Patch is in good
>> shape and covered most of the things which been discussed earlier
>> or been mentioned during review process. Patch pass through the
>> make check and also includes good test coverage.
>>
>
> Thanks for the review!
>
> Here are couple of things which is still open for discussion:
>>
>
> 1)
>> .) When Tom Lane and Stephen Frost suggested getting the core
>> code involved,
>> I thought that we can do the mandatory checks into core it self
>> and making
>> completely out of dml_is_pushdown_safe(). Please correct me
>>
>
> The reason why I put that function in postgres_fdw.c is Check point 4:
>>
>> +  * 4. We can't push an UPDATE down, if any expressions to assign
>> to the target
>> +  * columns are unsafe to evaluate on the remote server.
>>
>
> Here I was talking about checks related to triggers, or to LIMIT. I think
>> earlier thread talked about those mandatory check to the core. So may
>> be we can move those checks into make_modifytable() before calling
>> the PlanDMLPushdown.
>>
>> This need to handle by the Owner.
>>
>
> Done.  For that, I modified relation_has_row_triggers a bit, renamed it to
> has_row_triggers (more shortly), and moved it to plancat.c.  And I merged
> dml_is_pushdown_safe with postgresPlanDMLPushdown, and revised that
> callback routine a bit.  Attached is an updated version of the patch
> created on top of Robert's version of the patch [1], which fixes handling
> of RETURNING tableoid in updating foreign tables.
>


This looks great.


>
> 2) Decision on whether we need the separate new node ForeignUpdate,
>> ForeignDelete. In my opinion I really don't see the need for them, as
>> that would add a lot of duplication. Having said that, if a committer or
>> someone else feels that would make the code cleaner, that is fair too,
>>
>> This need more comments from the committer.
>>
>
> I agree with you.
>
> Other changes:
>
> * In previous version, I assumed that PlanDMLPushdown sets fsSystemCol to
> true when rewriting the ForeignScan plan node so as to push down an
> UPDATE/DELETE to the remote server, in order to initialize t_tableOid for
> the scan tuple in ForeignNext.  The reason is that I created the patch so
> that the scan tuple is provided to the local query's RETURNING computation,
> which might see the tableoid column.  In this version, however, I modified
> the patch so that the tableoid value is inserted by ModifyTable.  This
> eliminates the need for postgres_fdw (or any other FDW) to set fsSystemCol
> to true in PlanDMLPushdown.
>
> * Add set_transmission_modes/reset_transmission_modes to
> deparsePushedDownUpdateSql.
>
> * Revise comments a bit further.
>
> * Revise docs, including a fix for a wrong copy-and-paste.
>
> Best regards,
> Etsuro Fujita
>
> [1]
> http://www.postgresql.org/message-id/ca+tgmoz40j2uc5ac1nxu03oj4crvolks15xx+ptfp-1u-8z...@mail.gmail.com
>

Here are a couple of comments:

1)

int
IsForeignRelUpdatable (Relation rel);


Documentation for IsForeignRelUpdatable() needs to change, as it currently
says:

If the IsForeignRelUpdatable pointer is set to NULL, foreign tables are
assumed to be insertable, updatable, or deletable if the FDW provides
ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete respectively.

With the introduction of the DMLPushdown APIs this is no longer correct:
even if the FDW doesn't provide ExecForeignInsert, ExecForeignUpdate, or
ExecForeignDelete, foreign tables can still be assumed to be updatable or
deletable through the DMLPushdown APIs, right?


2)

+ /* SQL statement to execute remotely (as a String node) */
+ FdwDmlPushdownPrivateUpdateSql,

FdwDmlPushdownPrivateUpdateSql holds the UPDATE/DELETE query, so the name
should be something like FdwDmlPushdownPrivateQuery or
FdwDmlPushdownPrivateSql?


Later I realized that for FdwModifyPrivateIndex, too, the member name is
FdwModifyPrivateUpdateSql even though it can hold any DML query. Not sure
whether we should consider changing this or not?

Apart from this, I performed sanity testing on the new patch, and things are
working as expected.


-- 
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-26 Thread Rushabh Lathia
On Tue, Jan 26, 2016 at 4:15 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2016/01/25 17:03, Rushabh Lathia wrote:
>
>> Here are couple of comments:
>>
>
> 1)
>>
>> int
>> IsForeignRelUpdatable (Relation rel);
>>
>
> Documentation for IsForeignUpdatable() need to change as it says:
>>
>> If the IsForeignRelUpdatable pointer is set to NULL, foreign tables are
>> assumed
>> to be insertable, updatable, or deletable if the FDW provides
>> ExecForeignInsert,
>> ExecForeignUpdate or ExecForeignDelete respectively.
>>
>> With introduce of DMLPushdown API now this is no more correct, as even if
>> FDW don't provide ExecForeignInsert, ExecForeignUpdate or
>> ExecForeignDelete API
>> still foreign tables are assumed to be updatable or deletable with
>> DMLPushdown
>> API's, right ?
>>
>
> That's what I'd like to discuss.
>
> I intentionally leave that as-is, because I think we should determine the
> updatability of a foreign table in the current manner.  As you pointed out,
> even if the FDW doesn't provide eg, ExecForeignUpdate, an UPDATE on a
> foreign table could be done using the DML pushdown APIs if the UPDATE is
> *pushdown-safe*.  However, since all UPDATEs on the foreign table are not
> necessarily pushdown-safe, I'm not sure it's a good idea to assume the
> table-level updatability if the FDW provides the DML pushdown callback
> routines.  To keep the existing updatability decision, I think the FDW
> should provide the DML pushdown callback routines together with
> ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete.  What do you
> think about that?
>
>
Sorry, but I am not in favour of making it compulsory for an FDW to provide
the DML pushdown callback routines together with the existing
ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete APIs.

Maybe we should change the documentation in such a way that it explains:

a) If the FDW's PlanDMLPushdown is NULL, check for the ExecForeignInsert,
ExecForeignUpdate, or ExecForeignDelete APIs.
b) If PlanDMLPushdown is non-NULL and the plan is not pushable,
check for the ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete APIs.
c) If PlanDMLPushdown is non-NULL and the plan is pushable,
check for the DMLPushdown APIs.

Does this sound weird?



> 2)
>>
>> + /* SQL statement to execute remotely (as a String node) */
>> + FdwDmlPushdownPrivateUpdateSql,
>>
>> FdwDmlPushdownPrivateUpdateSql holds the UPDATE/DELETE query, so name
>> should
>> be something like FdwDmlPushdownPrivateQuery os FdwDmlPushdownPrivateSql ?
>>
>
> Later I realized that for FdwModifyPrivateIndex too the index name is
>> FdwModifyPrivateUpdateSql even though its holding any DML query. Not sure
>> whether we should consider to change this or not ?
>>
>
> To tell the truth, I imitated FdwModifyPrivateIndex when adding
> FdwDmlPushdownPrivateIndex, because I think "UpdateSql" means INSERT,
> UPDATE, or DELETE, not just UPDATE.  (IsForeignRelUpdatable discussed above
> reports not only the updatability but the insertability and deletability of
> a foreign table!).  So, +1 for leaving that as-is.
>
>
Makes sense for now.


> Apart from this perform sanity testing on the new patch and things working
>> as expected.
>>
>
> Thanks for the review!
>
> Best regards,
> Etsuro Fujita
>
>
>


-- 
Rushabh Lathia


Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-20 Thread Rushabh Lathia
On Fri, Jan 15, 2016 at 9:06 AM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2016/01/14 21:36, Rushabh Lathia wrote:
>
>> On Thu, Jan 14, 2016 at 2:00 PM, Etsuro Fujita
>> <fujita.ets...@lab.ntt.co.jp <mailto:fujita.ets...@lab.ntt.co.jp>> wrote:
>>
>
> On 2016/01/12 20:31, Rushabh Lathia wrote:
>>
>
> On Thu, Jan 7, 2016 at 6:15 PM, Etsuro Fujita
>> <fujita.ets...@lab.ntt.co.jp
>> <mailto:fujita.ets...@lab.ntt.co.jp>
>> <mailto:fujita.ets...@lab.ntt.co.jp
>>
>> <mailto:fujita.ets...@lab.ntt.co.jp>>> wrote:
>>  On 2016/01/06 18:58, Rushabh Lathia wrote:
>>  .) What the need of following change ?
>>
>>  @@ -833,9 +833,6 @@ appendWhereClause(StringInfo buf,
>>int nestlevel;
>>ListCell   *lc;
>>
>>  -   if (params)
>>  -   *params = NIL;  /* initialize result
>> list to
>>  empty */
>>  -
>>/* Set up context struct for recursion */
>>context.root = root;
>>context.foreignrel = baserel;
>>  @@ -971,6 +968,63 @@ deparseUpdateSql(StringInfo buf,
>>  PlannerInfo *root,
>> }
>>
>
>  It is needed for deparsePushedDownUpdateSql to store params
>> in both
>>  WHERE clauses and expressions to assign to the target columns
>>  into one params_list list.
>>
>
> Hmm sorry but I am still not getting the point, can you provide
>> some
>> example to explain this ?
>>
>
> Sorry, maybe my explanation was not enough.  Consider:
>>
>> postgres=# create foreign table ft1 (a int, b int) server myserver
>> options (table_name 't1');
>> postgres=# insert into ft1 values (0, 0);
>> postgres=# prepare mt(int, int) as update ft1 set a = $1 where b = $2;
>> postgres=# explain verbose execute mt(1, 0);
>> postgres=# explain verbose execute mt(1, 0);
>> postgres=# explain verbose execute mt(1, 0);
>> postgres=# explain verbose execute mt(1, 0);
>> postgres=# explain verbose execute mt(1, 0);
>>
>> After the 5 executions of mt we have
>>
>> postgres=# explain verbose execute mt(1, 0);
>>   QUERY PLAN
>>
>> 
>>   Update on public.ft1  (cost=100.00..140.35 rows=12 width=10)
>> ->  Foreign Update on public.ft1  (cost=100.00..140.35 rows=12
>> width=10)
>>   Remote SQL: UPDATE public.t1 SET a = $1::integer WHERE ((b
>> = $2::integer))
>> (3 rows)
>>
>> If we do that initialization in appendWhereClause, we would get a
>> wrong params_list list and a wrong remote pushed-down query for the
>> last mt() in deparsePushedDownUpdateSql.
>>
>
> Strange, I am seeing same behaviour with or without that initialization in
>> appendWhereClause. After the 5 executions of mt I with or without I am
>> getting following output:
>>
>> postgres=# explain verbose execute mt(1, 0);
>>   QUERY PLAN
>>
>> 
>>   Update on public.ft2  (cost=100.00..140.35 rows=12 width=10)
>> ->  Foreign Update on public.ft2  (cost=100.00..140.35 rows=12
>> width=10)
>>   Remote SQL: UPDATE public.t2 SET a = $1::integer WHERE ((b =
>> $2::integer))
>> (3 rows)
>>
>
> Really?  With that initialization in appendWhereClause, I got the
> following wrong result (note that both parameter numbers are $1):
>
> postgres=# explain verbose execute mt(1, 0);
>  QUERY PLAN
>
> 
>  Update on public.ft1  (cost=100.00..140.35 rows=12 width=10)
>->  Foreign Update on public.ft1  (cost=100.00..140.35 rows=12 width=10)
>  Remote SQL: UPDATE public.t1 SET a = $1::integer WHERE ((b =
> $1::integer))
> (3 rows)
>
>
Oops, sorry; I get the point now.


> BTW, I keep a ForeignScan node pushing down an update to the remote
>> server, in the updated patches.  I have to admit that that seems
>> l

Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-01-27 Thread Rushabh Lathia
On Wed, Jan 27, 2016 at 2:50 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> On 2016/01/27 12:20, Etsuro Fujita wrote:
>
>> On 2016/01/26 22:57, Rushabh Lathia wrote:
>>
>>> On Tue, Jan 26, 2016 at 4:15 PM, Etsuro Fujita
>>> <fujita.ets...@lab.ntt.co.jp <mailto:fujita.ets...@lab.ntt.co.jp>>
>>> wrote:
>>>
>>> On 2016/01/25 17:03, Rushabh Lathia wrote:
>>>
>>
> int
>>> IsForeignRelUpdatable (Relation rel);
>>>
>>
> Documentation for IsForeignUpdatable() need to change as it says:
>>>
>>> If the IsForeignRelUpdatable pointer is set to NULL, foreign
>>> tables are
>>> assumed
>>> to be insertable, updatable, or deletable if the FDW provides
>>> ExecForeignInsert,
>>> ExecForeignUpdate or ExecForeignDelete respectively.
>>>
>>> With introduce of DMLPushdown API now this is no more correct,
>>> as even if
>>> FDW don't provide ExecForeignInsert, ExecForeignUpdate or
>>> ExecForeignDelete API
>>> still foreign tables are assumed to be updatable or deletable
>>> with
>>> DMLPushdown
>>> API's, right ?
>>>
>>
> That's what I'd like to discuss.
>>>
>>> I intentionally leave that as-is, because I think we should
>>> determine the updatability of a foreign table in the current
>>> manner.  As you pointed out, even if the FDW doesn't provide eg,
>>> ExecForeignUpdate, an UPDATE on a foreign table could be done using
>>> the DML pushdown APIs if the UPDATE is *pushdown-safe*.  However,
>>> since all UPDATEs on the foreign table are not necessarily
>>> pushdown-safe, I'm not sure it's a good idea to assume the
>>> table-level updatability if the FDW provides the DML pushdown
>>> callback routines.  To keep the existing updatability decision, I
>>> think the FDW should provide the DML pushdown callback routines
>>> together with ExecForeignInsert, ExecForeignUpdate, or
>>> ExecForeignDelete.  What do you think about that?
>>>
>>
> Sorry but I am not in favour of adding compulsion that FDW should provide
>>> the DML pushdown callback routines together with existing
>>> ExecForeignInsert,
>>> ExecForeignUpdate or ExecForeignDelete APIs.
>>>
>>> May be we should change the documentation in such way, that explains
>>>
>>> a) If FDW PlanDMLPushdown is NULL, then check for ExecForeignInsert,
>>> ExecForeignUpdate or ExecForeignDelete APIs
>>> b) If FDW PlanDMLPushdown is non-NULL and plan is not pushable
>>> check for ExecForeignInsert, ExecForeignUpdate or ExecForeignDelete APIs
>>> c) If FDW PlanDMLPushdown is non-NULL and plan is pushable
>>> check for DMLPushdown APIs.
>>>
>>> Does this sounds wired ?
>>>
>>
> Yeah, but I think that that would be what is done during executor
>> startup (see CheckValidResultRel()), while what the documentation is
>> saying is about relation_is_updatable(); that is, how to decide the
>> updatability of a given foreign table, not how the executor processes an
>> individual INSERT/UPDATE/DELETE on a updatable foreign table.  So, I'm
>> not sure it's a good idea to modify the documentation in such a way.
>>
>
> However, I agree that we should add a documentation note about the
>> compulsion somewhere.  Maybe something like this:
>>
>> The FDW should provide DML pushdown callback routines together with
>> table-updating callback routines described above.  Even if the callback
>> routines are provided, the updatability of a foreign table is determined
>> based on the presence of ExecForeignInsert, ExecForeignUpdate or
>> ExecForeignDelete if the IsForeignRelUpdatable pointer is set to NULL.
>>
>
> On second thought, I think it might be okay to assume the presence of
> PlanDMLPushdown, BeginDMLPushdown, IterateDMLPushdown, and EndDMLPushdown
> is also sufficient for the insertablity, updatability, and deletability of
> a foreign table, if the IsForeignRelUpdatable pointer is set to NULL.  How
> about modifying the documentation like this:
>
> If the IsForeignRelUpdatable pointer is set to NULL, foreign tables are
> assumed to be insertable, updatable, or deletable if the FDW provides
> ExecForeignInsert, ExecForeignUpdate, or ExecForeignDelete respectively, or
> if the FDW provides PlanDMLPushdown, BeginDMLPushdown, IterateDMLPus

Re: [HACKERS] Optimization for updating foreign tables in Postgres FDW

2016-02-14 Thread Rushabh Lathia
On Fri, Feb 12, 2016 at 5:40 PM, Etsuro Fujita <fujita.ets...@lab.ntt.co.jp>
wrote:

> Hi Rushabh and Thom,
>
> Thanks for the review!
>
> On 2016/02/10 22:37, Thom Brown wrote:
>
>> On 10 February 2016 at 08:00, Rushabh Lathia <rushabh.lat...@gmail.com>
>> wrote:
>>
>>> Fujita-san, I am attaching update version of the patch, which added
>>> the documentation update.
>>>
>>
> Thanks for updating the patch!
>
> Once we finalize this, I feel good with the patch and think that we
>>> could mark this as ready for committer.
>>>
>>
> I find this wording a bit confusing:
>>
>> "If the IsForeignRelUpdatable pointer is set to NULL, foreign tables
>> are assumed to be insertable, updatable, or deletable either the FDW
>> provides ExecForeignInsert,ExecForeignUpdate, or ExecForeignDelete
>> respectively or if the FDW optimizes a foreign table update on a
>> foreign table using PlanDMLPushdown (PlanDMLPushdown still needs to
>> provide BeginDMLPushdown, IterateDMLPushdown and EndDMLPushdown to
>> execute the optimized update.)."
>>
>> This is a very long sentence, and the word "either" doesn't work here.
>>
>
> Agreed.
>
> As a result of our discussions, we reached a conclusion that the DML
> pushdown APIs should be provided together with existing APIs such as
> ExecForeignInsert, ExecForeignUpdate or ExecForeignDelete, IIUC.  So, how
> about (1) leaving the description for the existing APIs as-is and (2)
> adding a new description for the DML pushdown APIs in parenthesis, like
> this?:
>
>  If the IsForeignRelUpdatable pointer is set to
>  NULL, foreign tables are assumed to be insertable,
> updatable,
>  or deletable if the FDW provides ExecForeignInsert,
>  ExecForeignUpdate, or ExecForeignDelete
>  respectively.
>  (If the FDW attempts to optimize a foreign table update, it still
>  needs to provide PlanDMLPushdown, BeginDMLPushdown,
>  IterateDMLPushdown and EndDMLPushdown.)
>
> Actually, if the FDW provides the DML pushdown APIs, (pushdown-able)
> foreign table updates can be done without ExecForeignInsert,
> ExecForeignUpdate or ExecForeignDelete.  So, the above docs are not
> necessarily correct.  But we don't recommend to do that without the
> existing APIs, so I'm not sure it's worth complicating the docs.


Adding a new description for the DML pushdown APIs seems like a good idea. I
would suggest adding it as a separate paragraph rather than in parentheses.


>


>
> Also:
>>
>> "When the query doesn't has the clause, the FDW must also increment
>> the row count for the ForeignScanState node in the EXPLAIN ANALYZE
>> case."
>>
>> Should read "doesn't have"
>>
>
> Will fix.
>
> Best regards,
> Etsuro Fujita
>
>
>


-- 
Rushabh Lathia


[HACKERS] Should PostgresFDW ImportForeignSchema should import the remote table default expression?

2016-02-17 Thread Rushabh Lathia
Here is the test:

-- create database
postgres=# create database foo;
CREATE DATABASE
postgres=# \c foo
You are now connected to database "foo" as user "rushabh".
-- Create remote table with default expression
foo=# create table test ( a int , b int default 200 );
CREATE TABLE
foo=# \c postgres
You are now connected to database "postgres" as user "rushabh".
postgres=#
postgres=# create extension postgres_fdw ;
CREATE EXTENSION
-- Create server and user mapping
postgres=# create server myserver FOREIGN DATA WRAPPER postgres_fdw options
(dbname 'foo', port '');
CREATE SERVER
postgres=# CREATE USER MAPPING FOR CURRENT_USER SERVER myserver;
CREATE USER MAPPING
-- Import foreign schema
postgres=# import foreign schema public from server myserver into public;
IMPORT FOREIGN SCHEMA

-- Foreign table got imported
postgres=# \d test
   Foreign table "public.test"
 Column |  Type   | Modifiers |    FDW Options
--------+---------+-----------+-------------------
 a      | integer |           | (column_name 'a')
 b      | integer |           | (column_name 'b')
Server: myserver
FDW Options: (schema_name 'public', table_name 'test')

-- Try to insert a row, assuming the default value will be applied for 'b'
postgres=# insert into test (a) values ( 10 );
INSERT 0 1

-- But guess what, I was wrong ???
postgres=# select * from test;
 a  | b
----+---
 10 |
(1 row)

Looking at the code of postgresImportForeignSchema, it is clear that it does
not import the default expressions of the remote table. But the question is
whether it should?

Inputs/thoughts?

Regards,
Rushabh Lathia
www.EnterpriseDB.com

