Re: [HACKERS] xlog filename formatting functions in recovery

2012-09-21 Thread Heikki Linnakangas

On 03.07.2012 15:13, Robert Haas wrote:

On the substance of the patch, I believe the reason why this is
currently disallowed is because the TLI is implicitly taken from the
running system, and on the standby that might be the wrong value.


Yeah, I believe that's the reason. So the question is, what timeline 
should the functions use on a standby? With the patch as it is, they use 0:


postgres=# select pg_xlogfile_name_offset('3/FF020000');
      pg_xlogfile_name_offset
------------------------------------
 (0000000000000003000000FF,131072)
(1 row)
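
To see why the first eight digits come out as zeros, here is a minimal
sketch of how the file name is composed (modeled on the XLogFileName
macro in xlog_internal.h), with tli/log/seg taken from the output above:

    #include <stdio.h>

    int main(void)
    {
        unsigned int tli = 0;       /* never initialized during recovery */
        unsigned int log = 0x3;     /* from LSN 3/FF020000 */
        unsigned int seg = 0xFF;
        char         fname[25];

        /* cf. XLogFileName: "%08X%08X%08X" = TLI, log id, segment */
        snprintf(fname, sizeof(fname), "%08X%08X%08X", tli, log, seg);
        printf("%s\n", fname);      /* prints 0000000000000003000000FF */
        return 0;
    }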

There's a few different options:

1. current recovery_target_timeline (XLogCtl->recoveryTargetTLI)
2. current ThisTimeLineID, which is bumped every time a timeline-bumping 
checkpoint record is replayed. (this is not currently visible to 
backends, but we could easily add a shared memory variable for it)
3. curFileTLI. That is, the TLI of the current file that we're 
replaying. This is usually the same as ThisTimeLineID, except when 
replaying a WAL segment where the timeline changes

4. Something else?

What do you use these functions for? Which option would make the most sense?

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Kohei KaiGai
> I think Tom's point is, there are tons of applications which define
> their own "int64_t" (at least in 2005).
> Also pg_config.h has:
>
> #define HAVE_STDINT_H   1
>
> and this suggests that PostgreSQL adapts to platforms which do not
> have stdint.h. If so, we need to take care of such platforms anyway.
>
OK, that makes it clear to me. It might be helpful to add a source code
comment explaining why we use a self-defined datatype here.

2012/9/21 Tom Lane :
> Tatsuo Ishii  writes:
>> To pass 64-bit integer to PQfn, PQArgBlock is used like this: int *ptr
>> is a pointer to 64-bit integer and actual data is placed somewhere
>> else.
>
> Yeah, I think we have to do it like that.  Changing the size of
> PQArgBlock would be a libpq ABI break, which IMO is sufficiently painful
> to kill this whole proposal.  Much better a little localized ugliness
> in fe-lobj.c.
>
Hmm, I see. Please deliver the 64-bit integer argument by reference,
and don't forget the endianness translation here.

Thanks,
-- 
KaiGai Kohei 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Yugo Nagata

> > Currently lo_initialize() throws an error if one of the oids is not
> > available. I doubt we should do the same for the 64-bit functions,
> > since this will make a 9.3 libpq unable to access large objects
> > stored in pre-9.2 PostgreSQL servers.
> >
> It seems to me we should split the pre-9.2 and post-9.3 cases using a
> condition of "conn->sversion >= 90300".
> 

Agreed. I'll fix it like that.

> > 4) src/test/examples/testlo64.c added as a 64-bit API example (Yugo Nagata)
> >
> > Comments and suggestions are welcome.
> >
> Miscellaneous comments are below.
> 
> A regression test would be helpful. Even though there is no need to
> create an actual 4TB large object, it is helpful to write some chunks
> around the design boundary. Could you add some test cases that write
> chunks around the 4TB offset?

Agreed. I'll do that.
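
For illustration, such a boundary test might look like the sketch below.
It assumes the patch's lo_lseek64(conn, fd, offset, whence) and the
pg_int64 type it introduces, so names may differ from the final patch:

    #include <stdio.h>            /* SEEK_SET */
    #include <string.h>
    #include "libpq-fe.h"
    #include "libpq/libpq-fs.h"   /* INV_READ, INV_WRITE */

    /* Write an 8kB chunk straddling the 4TB design boundary. */
    static void write_across_4tb(PGconn *conn, Oid loid)
    {
        char     buf[8192];
        int      fd = lo_open(conn, loid, INV_READ | INV_WRITE);
        pg_int64 target = ((pg_int64) 1 << 42) - 4096;  /* 4TB - 4kB */

        memset(buf, 'x', sizeof(buf));
        lo_lseek64(conn, fd, target, SEEK_SET);  /* new 64-bit API */
        lo_write(conn, fd, buf, sizeof(buf));    /* crosses the boundary */
        lo_close(conn, fd);
    }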

> 
> Thanks,
> -- 
> KaiGai Kohei 
> 
> 
> -- 
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers


-- 
Yugo Nagata 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [ADMIN] pg_upgrade from 9.1.3 to 9.2 failed

2012-09-21 Thread Rural Hunter

On 2012/9/19 7:22, Bruce Momjian wrote:

On Mon, Sep 17, 2012 at 05:07:23PM -0400, Bruce Momjian wrote:

# select * from pg_tables where tablename='sql_features';
     schemaname     |  tablename   | tableowner | tablespace | hasindexes | hasrules | hastriggers
--------------------+--------------+------------+------------+------------+----------+-------------
 information_schema | sql_features | postgres   |            | f          | f        | f
(1 row)

OK, good to know.  This is the query pg_upgrade 9.2 uses to pull
information from 9.1 and 9.2:

SELECT c.oid, n.nspname, c.relname, c.relfilenode, c.reltablespace,
       t.spclocation
FROM pg_catalog.pg_class c
     JOIN pg_catalog.pg_namespace n ON c.relnamespace = n.oid
     LEFT OUTER JOIN pg_catalog.pg_tablespace t ON c.reltablespace = t.oid
WHERE relkind IN ('r', 't', 'i', 'S') AND
      ((n.nspname !~ '^pg_temp_' AND
        n.nspname !~ '^pg_toast_temp_' AND
        n.nspname NOT IN ('pg_catalog', 'information_schema',
                          'binary_upgrade') AND
        c.oid >= 16384)
       OR
       (n.nspname = 'pg_catalog' AND
        relname IN ('pg_largeobject', 'pg_largeobject_loid_pn_index',
                    'pg_largeobject_metadata',
                    'pg_largeobject_metadata_oid_index')))
ORDER BY 1;

Based on the fact that sql_features exists in the information_schema
schema, I don't think 'sql_features' table is actually being processed
by pg_upgrade, but I think its TOAST table, because it has a high oid,
is being processed because it is in the pg_toast schema.  This is
causing the mismatch between the old and new clusters.

I am thinking this query needs to be split apart into a UNION where the
second part handles TOAST tables and looks at the schema of the _owner_
of the TOAST table.  Needs to be backpatched too.

OK, I am at a conference now so will not be able to write-up a patch
until perhaps next week.  You can drop the information schema in the old
database and pg_upgrade should run fine.  I will test your failure once
I create a patch.

OK. I will try. I also found some problems with initdb when re-initializing
my pg9.2 db.
1. initdb doesn't create the pg_log dir, so pg cannot be started after
initdb until I create the dir manually.
2. A case issue with the db charset name. I installed pg9.1 and pg9.2 with
zh_CN.UTF8. But somehow the actual charset name is stored as lowercase
'zh_CN.utf8' during the install. In this case, I can run pg_upgrade
without problems since they are both lowercase. But when I re-init pg9.2
with the option '-E zh_CN.UTF8', pg_upgrade will fail and report an
encoding/charset mismatch: one is uppercase and the other is lowercase.
If I run initdb with '-E zh_CN.utf8', it tells me there is no such
charset on the system. I found a workaround: run initdb with
'--lc-collate=zh_CN.utf8 --lc-ctype=zh_CN.utf8
--lc-messages=zh_CN.utf8 --lc-monetary=zh_CN.utf8
--lc-numeric=zh_CN.utf8 --lc-time=zh_CN.utf8'. But the case problem is
really confusing.



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Tatsuo Ishii
>> I think Tom's point is, there are tons of applications which define
>> their own "int64_t" (at least in 2005).
>> Also pg_config.h has:
>>
>> #define HAVE_STDINT_H   1
>>
>> and this suggests that PostgreSQL adapts to platforms which do not
>> have stdint.h. If so, we need to take care of such platforms anyway.
>>
> OK, that makes it clear to me. It might be helpful to add a source code
> comment explaining why we use a self-defined datatype here.

Ok.

> 2012/9/21 Tom Lane :
>> Tatsuo Ishii  writes:
>>> To pass 64-bit integer to PQfn, PQArgBlock is used like this: int *ptr
>>> is a pointer to 64-bit integer and actual data is placed somewhere
>>> else.
>>
>> Yeah, I think we have to do it like that.  Changing the size of
>> PQArgBlock would be a libpq ABI break, which IMO is sufficiently painful
>> to kill this whole proposal.  Much better a little localized ugliness
>> in fe-lobj.c.
>>
> Hmm, I see. Please deliver the 64-bit integer argument by reference,
> and don't forget the endianness translation here.

I thought pqPutInt64() takes care of endianness. No?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Kohei KaiGai
2012/9/21 Tatsuo Ishii :
>>> I think Tom's point is, there are tons of applications which define
>>> their own "int64_t" (at least in 2005).
>>> Also pg_config.h has:
>>>
>>> #define HAVE_STDINT_H   1
>>>
>>> and this suggests that PostgreSQL adapts to platforms which do not
>>> have stdint.h. If so, we need to take care of such platforms anyway.
>>>
>> OK, that makes it clear to me. It might be helpful to add a source code
>> comment explaining why we use a self-defined datatype here.
>
> Ok.
>
>> 2012/9/21 Tom Lane :
>>> Tatsuo Ishii  writes:
 To pass 64-bit integer to PQfn, PQArgBlock is used like this: int *ptr
 is a pointer to 64-bit integer and actual data is placed somewhere
 else.
>>>
>>> Yeah, I think we have to do it like that.  Changing the size of
>>> PQArgBlock would be a libpq ABI break, which IMO is sufficiently painful
>>> to kill this whole proposal.  Much better a little localized ugliness
>>> in fe-lobj.c.
>>>
>> Hmm, I see. Please deliver the 64-bit integer argument by reference,
>> and don't forget the endianness translation here.
>
> I thought pqPutInt64() takes care of endianness. No?
>
It works inside of PQfn(), when isint = 1 with a pointer data type.
To my mind, that is a rather problem-specific solution.

So, I'd like to hear other people's opinions here.

Thanks,
-- 
KaiGai Kohei 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ToDo: allow to get a number of processed rows by COPY statement [Review of Patch]

2012-09-21 Thread Amit Kapila
On Friday, September 21, 2012 1:23 AM Pavel Stehule wrote:

>> Basic stuff: 
>>  
>> - Patch applies OK, but with offset differences in line numbers. 
>> - Compiles with errors in contrib [pg_stat_statements, sepgsql] modules 
>> - Regression failed: one test case in COPY due to an incomplete test case 
>> in the attached patch; same as reported by Heikki 
> 
>the fixed patch is attached

After modifications: 
--- 
- Patch applies OK 
- Compiles cleanly without any errors/warnings 
- Regression tests pass. 

>> 
>> What it does: 
>> -- 
>> A modification to get the number of processed rows evaluated via SPI. The 
>> changes add an extra parameter to ProcessUtility to get the number of 
>> rows processed by the COPY command. 
>> 
>> Code Review Comments: 
>> - 
>> 1. A new parameter is added to the ProcessUtility_hook_type function, 
>>    but the functions that get assigned to this hook, like 
>>    sepgsql_utility_command and pgss_ProcessUtility, are not modified in 
>>    prototype or definition. 

The functionality is not fixed correctly for the hook functions. In the
function pgss_ProcessUtility, in the snippet of code below, the processed
parameter is passed as NULL and is never initialized. Because of this,
when the "pg_stat_statements" extension is used, the COPY command gives
garbage values.
if (prev_ProcessUtility) 
    prev_ProcessUtility(parsetree, queryString, params, 
                        dest, completionTag, 
                        context, NULL); 
else 
    standard_ProcessUtility(parsetree, queryString, params, 
                            dest, completionTag, 
                            context, NULL); 
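
A sketch of the obvious fix (not the actual patch): pgss_ProcessUtility
should forward the "processed" pointer it received from its own caller
instead of passing NULL, so COPY's row count reaches the caller:

    /* "processed" is the out parameter the patch adds to the hook */
    if (prev_ProcessUtility)
        prev_ProcessUtility(parsetree, queryString, params,
                            dest, completionTag,
                            context, processed);
    else
        standard_ProcessUtility(parsetree, queryString, params,
                                dest, completionTag,
                                context, processed);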

A test case is attached. 
In this test case the table has only 1000 records, but it shows a garbage
value. 
postgres=# show shared_preload_libraries;
 shared_preload_libraries 
--------------------------
 pg_stat_statements 
(1 row) 
postgres=# CREATE TABLE tbl (a int); 
CREATE TABLE 
postgres=# INSERT INTO tbl VALUES(generate_series(1,1000)); 
INSERT 0 1000 
postgres=# do $$ 
declare r int; 
begin 
  copy tbl to '/home/kiran/copytest.csv' csv; 
  get diagnostics r = row_count; 
  raise notice 'exported % rows', r; 
  truncate tbl; 
  copy tbl from '/home/kiran/copytest.csv' csv; 
  get diagnostics r = row_count; 
  raise notice 'imported % rows', r; 
end; 
$$ language plpgsql; 
NOTICE:  exported 13281616 rows 
NOTICE:  imported 13281616 rows 
DO 

>>   
>> 2. Why add the new parameter if completionTag holds the number of 
>> processed tuples? It can be extracted 
>> from it as follows: 
>> _SPI_current->processed = strtoul(completionTag + 7, 
>>                                   NULL, 10); 
>   
>this is a fair question. I prefer a natural type for the counter - uint64 
>instead of text. And there is no simple way to get the offset (7 in this 
>case)   

I agree with your point, but currently in a few other places we parse 
the completion tag to get the number of tuples processed, so maybe in the
future we can change those places as well. For example:

pgss_ProcessUtility 
{
    ..
    /* parse command tag to retrieve the number of affected rows. */ 
    if (completionTag && 
        sscanf(completionTag, "COPY " UINT64_FORMAT, &rows) != 1) 
        rows = 0;
}

_SPI_execute_plan
{
    ..
    if (IsA(stmt, CreateTableAsStmt)) 
    { 
        Assert(strncmp(completionTag, "SELECT ", 7) == 0); 
        _SPI_current->processed = strtoul(completionTag + 7, 
                                          NULL, 10);
        ..
    }
}


With Regards,
Amit Kapila.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Tatsuo Ishii
>>> Hmm, I see. Please deliver the 64-bit integer argument by reference,
>>> and don't forget the endianness translation here.
>>
>> I thought pqPutInt64() takes care of endianness. No?
>>
> It works inside of PQfn(), when isint = 1 with a pointer data type.
> To my mind, that is a rather problem-specific solution.
> 
> So, I'd like to hear other people's opinions here.

I think we cannot change this because we want to keep the counterpart
backend-side function pq_getmsgint64() as it is (the function is not
part of the patch).
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Kohei KaiGai
2012/9/21 Tatsuo Ishii :
 Hmm, I see. Please deliver the 64-bit integer argument by reference,
 and don't forget the endianness translation here.
>>>
>>> I thought pqPutInt64() takes care of endianness. No?
>>>
>> It works inside of PQfn(), when isint = 1 with a pointer data type.
>> To my mind, that is a rather problem-specific solution.
>>
>> So, I'd like to hear other people's opinions here.
>
> I think we cannot change this because we want to keep the counterpart
> backend-side function pq_getmsgint64() as it is (the function is not
> part of the patch).
>
My opinion is that lo_lseek64() and lo_tell64() should handle the
endianness translation before and after the PQfn() invocation, to avoid
int64-specific case handling inside PQfn(), which can be called by other
applications.

Am I missing something?
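
For what it's worth, the caller-side translation could be as small as
this sketch; the union trick keeps it correct on both little- and
big-endian hosts (the helper name is illustrative, not from the patch):

    #include <arpa/inet.h>   /* htonl */
    #include <stdint.h>

    /* Produce the on-the-wire (big-endian) representation of an int64
     * regardless of host byte order. */
    static int64_t int64_hton(int64_t host64)
    {
        union
        {
            int64_t  i64;
            uint32_t i32[2];
        } swap;

        swap.i32[0] = htonl((uint32_t) (host64 >> 32));  /* high half */
        swap.i32[1] = htonl((uint32_t) host64);          /* low half */
        return swap.i64;
    }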

Thanks,
-- 
KaiGai Kohei 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Tatsuo Ishii
 I thought pqPutInt64() takes care of endianness. No?

>>> It works inside of PQfn(), when isint = 1 with a pointer data type.
>>> To my mind, that is a rather problem-specific solution.
>>>
>>> So, I'd like to hear other people's opinions here.
>>
>> I think we cannot change this because we want to keep the counterpart
>> backend-side function pq_getmsgint64() as it is (the function is not
>> part of the patch).
>>
> My opinion is that lo_lseek64() and lo_tell64() should handle the
> endianness translation before and after the PQfn() invocation, to avoid
> int64-specific case handling inside PQfn(), which can be called by
> other applications.
> 
> Am I missing something?

So what do you want to do with pq_getmsgint64()? It does exactly the
same thing as pqPutInt64(), just in the opposite direction. Do you want
to change pq_getmsgint64()? Or add a new function in the backend?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Kohei KaiGai
2012/9/21 Tatsuo Ishii :
> I thought pqPutInt64() takes care of endianness. No?
>
 It works inside of PQfn(), when isint = 1 with a pointer data type.
 To my mind, that is a rather problem-specific solution.

 So, I'd like to hear other people's opinions here.
>>>
>>> I think we cannot change this because we want to keep the counterpart
>>> backend-side function pq_getmsgint64() as it is (the function is not
>>> part of the patch).
>>>
>> My opinion is that lo_lseek64() and lo_tell64() should handle the
>> endianness translation before and after the PQfn() invocation, to
>> avoid int64-specific case handling inside PQfn(), which can be called
>> by other applications.
>>
>> Am I missing something?
>
>> So what do you want to do with pq_getmsgint64()? It does exactly the
>> same thing as pqPutInt64(), just in the opposite direction. Do you
>> want to change pq_getmsgint64()? Or add a new function in the backend?
>
My preference is that nothing is changed in either pq_getmsgint64() on
the backend or the routines under PQfn() in libpq. Isn't it possible to
deliver the int64 value "after" the endianness translation on the
caller side?

Thanks,
-- 
KaiGai Kohei 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Tatsuo Ishii
> 2012/9/21 Tatsuo Ishii :
>> I thought pqPutInt64() takes care of endianness. No?
>>
> It works inside of PQfn(), when isint = 1 with a pointer data type.
> To my mind, that is a rather problem-specific solution.
>
> So, I'd like to hear other people's opinions here.

 I think we cannot change this because we want to keep the counterpart
 backend-side function pq_getmsgint64() as it is (the function is not
 part of the patch).

>>> My opinion is that lo_lseek64() and lo_tell64() should handle the
>>> endianness translation before and after the PQfn() invocation, to
>>> avoid int64-specific case handling inside PQfn(), which can be
>>> called by other applications.
>>>
>>> Am I missing something?
>>
>> So what do you want to do with pq_getmsgint64()? It does exactly the
>> same thing as pqPutInt64(), just in the opposite direction. Do you
>> want to change pq_getmsgint64()? Or add a new function in the backend?
>>
> My preference is that nothing is changed in either pq_getmsgint64() on
> the backend or the routines under PQfn() in libpq. Isn't it possible
> to deliver the int64 value "after" the endianness translation on the
> caller side?

I am confused.

>>> My opinion is that lo_lseek64() and lo_tell64() should handle the
>>> endianness translation before and after the PQfn() invocation, to
>>> avoid int64-specific case handling inside PQfn(), which can be
>>> called by other applications.

Why do we need this? If PQArgBlock.isint != 0, it treats the input data as
an integer anyway. So I don't see any use case other than "int64-specific
case handling" if isint != 0 and len == 8. If you have another use case
for isint != 0 and len == 8, please show it.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [BUGS] BUG #7534: walreceiver takes long time to detect n/w breakdown

2012-09-21 Thread Amit kapila
On Tuesday, September 18, 2012 6:02 PM Fujii Masao wrote:
On Mon, Sep 17, 2012 at 4:03 PM, Amit Kapila  wrote:

>> Approach-2 :
>> Provide a variable wal_send_status_interval, such that if this is 0, then
>> the current behavior would prevail, and if it is non-zero then a keepalive
>> message would be sent after at most that time.
>> The modified code of WALSendLoop will be as follows:


>> Which way do you think is better, or do you have any other idea for handling this?

>I think #2 is better because it's more intuitive to a user.

Please find attached a patch implementing Approach-2.
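
Since the quoted snippet did not survive the digest, here is a rough
sketch of what Approach-2 could look like inside the walsender loop.
wal_send_status_interval is the proposed GUC (in seconds), and
WalSndKeepalive() stands in for whatever routine actually emits the
keepalive message:

    /* Inside the walsender's main loop (sketch only): if the GUC is
     * set and nothing has been sent for that long, emit a keepalive. */
    if (wal_send_status_interval > 0 &&
        TimestampDifferenceExceeds(last_send_time,
                                   GetCurrentTimestamp(),
                                   wal_send_status_interval * 1000))
    {
        WalSndKeepalive();          /* hypothetical keepalive routine */
        last_send_time = GetCurrentTimestamp();
    }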


With Regards,
Amit Kapila.

replication_timeout_patch_v2.patch
Description: replication_timeout_patch_v2.patch

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_reorg in core?

2012-09-21 Thread Daniele Varrazzo
On Fri, Sep 21, 2012 at 5:17 AM, Josh Kupershmidt  wrote:

> If the argument for moving pg_reorg into core is "faster and easier"
> development, well I don't really buy that.

I don't see any problem in having pg_reorg in PGXN instead.

I've tried adding a META.json to the project and it seems to work fine
with the pgxn client. It is together with other patches in my own
github fork.

https://github.com/dvarrazzo/pg_reorg/

I haven't submitted it to PGXN as I prefer the original author to keep
the ownership.

-- Daniele


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-21 Thread Alvaro Herrera
Excerpts from Amit Kapila's message of vie sep 21 02:26:49 -0300 2012:
> On Thursday, September 20, 2012 7:13 PM Alvaro Herrera wrote:

> > Well, there is a difficulty here which is that the number of processes
> > connected to databases must be configured during postmaster start
> > (because it determines the size of certain shared memory structs).  So
> > you cannot just spawn more tasks if all max_worker_tasks are busy.
> > (This is a problem only for those workers that want to be connected as
> > backends.  Those that want libpq connections do not need this and are
> > easier to handle.)
> 
> Are you talking about shared memory structs that need to be allocated for 
> each worker task?
> I am not sure if they can be shared across multiple slaves or will be 
> required for each slave.
> However, even if that is not possible, other mechanisms can be used to get 
> the work done by the existing slaves.

I mean stuff like PGPROC entries and such.  Currently, they are
allocated based on max_autovacuum_workers + max_connections +
max_prepared_transactions IIRC.  So by following identical reasoning we
would just have to add a hypothetical new max_bgworkers to the mix;
however as I said above, we don't really need that because we can count
the number of registered workers at postmaster start time and use that
to size PGPROC.
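
Illustratively (not actual code), the sizing would just grow by one term
counted at postmaster start; GetNumRegisteredWorkers() is a hypothetical
function returning how many workers registered themselves:

    /* Illustration only, following the description above. */
    int nprocs = MaxConnections
               + autovacuum_max_workers
               + max_prepared_xacts
               + GetNumRegisteredWorkers();   /* hypothetical */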

Shared memory used by each worker (or by a group of workers) that's not
part of core structs should be allocated by the worker itself via
RequestAddinShmemSpace().

> If not the above, then where is there a need for dynamic worker tasks, as 
> mentioned by Simon?

Well, I think there are many uses for dynamic workers, or short-lived
workers (start, do one thing, stop and not be restarted).

In my design, a worker is always restarted if it stops; otherwise there
is no principled way to know whether it should be running or not (after
a crash, should we restart a registered worker?  We don't know whether
it stopped before the crash.)  So it seems to me that at least for this
first shot we should consider workers as processes that are going to be
always running as long as postmaster is alive.  On a crash, if they have
a backend connection, they are stopped and then restarted.

> > One thing I am not going to look into is how is this new capability be
> > used for parallel query.  I feel we have enough use cases without it,
> > that we can develop a fairly powerful feature.  After that is done and
> > proven (and committed) we can look into how we can use this to implement
> > these short-lived workers for stuff such as parallel query.
> 
>   Agreed and I also meant to say the same as you are saying.

Great.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Properly set relpersistence for fake relcache entries.

2012-09-21 Thread Marko Tiikkaja

On 9/20/12 11:55 PM, Andres Freund wrote:

On Monday, September 17, 2012 03:58:37 PM Tom Lane wrote:

OK, that explains why we've not seen a blizzard of trouble reports.
Still seems like a good idea to fix it ASAP, though.

Btw, I think RhodiumToad/Andrew Gierth and I some time ago helped a user in the
IRC Channel that had symptoms matching this bug.


Another such user reporting in. :-(

Our slave started accumulating WAL files and ran out of disk space 
yesterday.  After investigation from Andres and Andrew, it turns out 
that we were most likely hit by this very same bug.


Here's what they have to say:
"If the db crashes between logging the split and the parent-node insert, 
then in recovery, since relpersistence is not initialized correctly, 
when the recovery process tries to complete the operation, no xlog 
record is written for the insert.  If there's a slave server, then the 
missing xlog record for the insert means that the slave's 
incomplete_actions queue never becomes empty, therefore the slave can no 
longer do recovery restartpoints."


Some relevant information:

[cur:92/314BC870, xid:76872047, rmid:10(Heap), len/tot_len:91/123, 
info:0, prev:92/314BB890] insert: s/d/r:1663/408841/415746 
blk/off:13904/65 header: t_infomask2 8 t_infomask 2050 t_hoff 24
[cur:92/314BC8F0, xid:76872047, rmid:11(Btree), len/tot_len:702/734, 
info:64, prev:92/314BC870] split_r: s/d/r:1663/408841/475676 leftsib 2896
[cur:92/314BCBD0, xid:0, rmid:0(XLOG), len/tot_len:56/88, info:0, 
prev:92/314BC8F0] checkpoint: redo 146/314BCBD0; tli 1; nextxid 
76872048;  nextoid 764990; nextmulti 62062; nextoffset 132044; shutdown 
at 2012-09-11 14:26:26 CEST


2012-09-11 14:26:26.719 CEST,,,44620,,504f2df2.ae4c,5,,2012-09-11 
14:26:26 CEST,,0,LOG,0,"redo done at 
92/314BC8F0""StartupXLOG, xlog.c:6641",""


And apparently the relpersistence check in RelationNeedsWAL() call in 
_bt_insertonpg had a role in this as well.




Regards,
Marko Tiikkaja


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_reorg in core?

2012-09-21 Thread Michael Paquier
On Fri, Sep 21, 2012 at 9:33 PM, Daniele Varrazzo wrote:

> On Fri, Sep 21, 2012 at 5:17 AM, Josh Kupershmidt 
> wrote:
>
> > If the argument for moving pg_reorg into core is "faster and easier"
> > development, well I don't really buy that.
>
> I don't see any problem in having pg_reorg in PGXN instead.
>
> I've tried adding a META.json to the project and it seems working fine
> with the pgxn client. It is together with other patches in my own
> github fork.
>
> https://github.com/dvarrazzo/pg_reorg/
>
> I haven't submitted it to PGXN as I prefer the original author to keep
> the ownership.
>
Thanks, I merged your patches with the dev branch for the time being.
It would be great to have some input from the maintainers of pg_reorg in
pgfoundry to see if they agree about putting it in pgxn.
-- 
Michael Paquier
http://michael.otacoo.com


Re: [HACKERS] pg_reorg in core?

2012-09-21 Thread Michael Paquier
On Fri, Sep 21, 2012 at 1:00 PM, Hitoshi Harada wrote:

> I'm not familiar with pg_reorg, but I wonder why we need a separate
> program for this task.  I know pg_reorg is ok as an external program
> per se, but if we could optimize CLUSTER (or VACUUM which I'm a little
> pessimistic about) in the same way, it's much nicer than having
> additional binary + extension.  Isn't it possible to do the same thing
> above within the CLUSTER command?  Maybe CLUSTER .. CONCURRENTLY?
>
CLUSTER might be better adapted in this case, as the purpose is to reorder
the table.
The same technique used by pg_reorg (i.e., a table coupled with triggers)
could lower the lock level taken on the table.
Also, it could be possible to control each sub-operation in the same
fashion as CREATE INDEX CONCURRENTLY.
By the way, whatever the operation used, VACUUM or CLUSTER, I have a couple
of doubts:
1) Wouldn't it be too costly for a core operation, as pg_reorg really needs
many temporary objects? It could be possible to reduce the number of
objects created if added to core, though...
2) Do you think the current CLUSTER is enough, and is there a wish to
implement such an optimization directly in core?
-- 
Michael Paquier
http://michael.otacoo.com


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Tom Lane
Kohei KaiGai  writes:
> My preference is that nothing is changed in either pq_getmsgint64() on
> the backend or the routines under PQfn() in libpq. Isn't it possible
> to deliver the int64 value "after" the endianness translation on the
> caller side?

Right.  If we had to change anything on the backend side, it would mean
we had a wire protocol change, which is even less acceptable than a
libpq ABI change.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Properly set relpersistence for fake relcache entries.

2012-09-21 Thread Andres Freund
On Friday, September 21, 2012 03:30:31 PM Marko Tiikkaja wrote:
> On 9/20/12 11:55 PM, Andres Freund wrote:
> > On Monday, September 17, 2012 03:58:37 PM Tom Lane wrote:
> >> OK, that explains why we've not seen a blizzard of trouble reports.
> >> Still seems like a good idea to fix it ASAP, though.
> > 
> > Btw, I think RhodiumToad/Andrew Gierth and I some time ago helped a user
> > in the IRC Channel that had symptoms matching this bug.
> 
> Another such user reporting in. :-(
> 
> Our slave started accumulating WAL files and ran out of disk space
> yesterday.  After investigation from Andres and Andrew, it turns out
> that we were most likely hit by this very same bug.
> 
> Here's what they have to say:
> "If the db crashes between logging the split and the parent-node insert,
> then in recovery, since relpersistence is not initialized correctly,
> when the recovery process tries to complete the operation, no xlog
> record is written for the insert.  If there's a slave server, then the
> missing xlog record for the insert means that the slave's
> incomplete_actions queue never becomes empty, therefore the slave can no
> longer do recovery restartpoints."
> 
> Some relevant information:
> 
> [cur:92/314BC870, xid:76872047, rmid:10(Heap), ... insert: ...
> [cur:92/314BC8F0, xid:76872047, rmid:11(Btree), ... split_r: ...
> [cur:92/314BCBD0, xid:0, rmid:0(XLOG), len/tot_len:56/88, info:0,
> prev:92/314BC8F0] checkpoint: redo 146/314BCBD0; ... shutdown
> ... "redo done at 92/314BC8F0""StartupXLOG, xlog.c:6641",""
Which means that an insert into the heap triggered a btree split. At that 
point the database crashed. During recovery the split was supposed to be 
finished by the btree cleanup code.

> And apparently the relpersistence check in RelationNeedsWAL() call in
> _bt_insertonpg had a role in this as well.
When detecting an incomplete split, the nbtree cleanup code calls 
_bt_insert_parent, which calls _bt_insertonpg, which finishes the split. BUT: 
it doesn't log that it finished, because RelationNeedsWAL() says it doesn't 
need to.
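
For reference, this is the check in question (from rel.h); with a fake
relcache entry whose relpersistence was never set, the comparison below
is false and the completing insert is silently not WAL-logged:

    #define RelationNeedsWAL(relation) \
        ((relation)->rd_rel->relpersistence == RELPERSISTENCE_PERMANENT)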

That means:
* indexes on standbys will *definitely* be corrupted
* a standby won't perform any restartpoints anymore till restarted
* if the primary crashes, corruption is likely.

Hrm. I retract my earlier statement about the low likelihood of corruption due 
to this.

Greetings,

Andres
-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] DB & Schema

2012-09-21 Thread m...@rpzdesign.com
During the execution of the ProcessUtility() function in 
/src/backend/xxx/utility.c,
the CreateStmt node type is processed to create a table.

Is there a global function in the context of the backend process that 
will return the current database and schema names?

The query string cannot be relied upon for discerning this information.

marco


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [WIP] Patch : Change pg_ident.conf parsing to be the same as pg_hba.conf

2012-09-21 Thread Heikki Linnakangas

On 02.07.2012 15:08, Amit Kapila wrote:

Attached is a Patch to change the parsing of pg_ident.conf to make it
similar to pg_hba.conf.
This is based on Todo Item:
http://archives.postgresql.org/pgsql-hackers/2011-06/msg02204.php

Purpose - This will allow syntax errors in pg_ident.conf to be caught at
startup or reload time.


Changes are described as follows:
a.  Make the load_ident() functionality the same as load_hba, such that it
cleans the previous context after successful parsing.
b.  Change load_ident() so that parsing is done at load
time and the parsed lines are saved.
c.  Change the functionality of parse_ident_usermap() so that parsing is
not done during authentication.
d.  If load_ident() fails to parse, it returns false and an error is
issued.


Looks good to me, committed with some small cleanup.


 On this point I am not sure, as for a pg_hba.conf failure it issues FATAL at
startup. Currently I have kept the error handling for loading pg_ident.conf
the same as for pg_hba.conf.


I think we should be more lenient with pg_ident.conf, and behave as if 
the file was empty. That is the old behavior, and it seems sensible. You 
can still connect using an authentication method that doesn't use 
pg_ident.conf, but if pg_hba.conf is missing, you cannot log in at all.


Thanks!

- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Tatsuo Ishii
> Kohei KaiGai  writes:
>> My preference is that nothing is changed in either pq_getmsgint64() on
>> the backend or the routines under PQfn() in libpq. Isn't it possible
>> to deliver the int64 value "after" the endianness translation on the
>> caller side?
> 
> Right.  If we had to change anything on the backend side, it would mean
> we had a wire protocol change, which is even less acceptable than a
> libpq ABI change.

The patch does not touch pq_getmsgint64(), and I don't think we are
going to have a wire protocol change.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Kohei KaiGai
2012/9/21 Tatsuo Ishii :
>> Kohei KaiGai  writes:
>>> My preference is that nothing is changed in either pq_getmsgint64() on
>>> the backend or the routines under PQfn() in libpq. Isn't it possible
>>> to deliver the int64 value "after" the endianness translation on the
>>> caller side?
>>
>> Right.  If we had to change anything on the backend side, it would mean
>> we had a wire protocol change, which is even less acceptable than a
>> libpq ABI change.
>
> The patch does not touch pq_getmsgint64(), and I don't think we are
> going to have a wire protocol change.
>
It's also unclear which part Tom said "right" to...

What I pointed out is that this patch adds special-case handling to
pqFunctionCall3 of libpq to fetch a 64-bit integer from PQArgBlock->u.ptr
and adjust its byte order. It was never a topic on the backend side.

It is not a technical problem, but the coding style feels a bit strange
to me, so I don't want to push against it too hard.

Tom, could you suggest which approach is better: should PQfn() be
responsible for the endianness translation of 64-bit integers, or should
the callers (lo_tell64 or lo_lseek64) be?

Thanks,
-- 
KaiGai Kohei 


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Suggestion for --truncate-tables to pg_restore

2012-09-21 Thread Karl O. Pinc
On 09/20/2012 12:24:49 PM, Karl O. Pinc wrote:

> I've had problems using pg_restore --data-only when
> restoring individual schemas (which contain data which
> has had bad things done to it).  --clean does not work
> well because of dependent objects in other schemas.

Before doing any more work I want to report on the
discussions that took place at the code sprint at
Postgres Open in Chicago.  Because I'm going to add
in additional thoughts I've had and to avoid mis-representing
anybody's opinion I'll not mention who said what.
Feel free to step forward and claim Ingenious Ideas
as your own.  Likewise I apologize if lack of attribution
makes it more difficult to discern (my) uninformed drivel
from intelligent insight.



First, the problem:

Begin with the following structure:

CREATE TABLE schemaA.foo (id INT PRIMARY KEY, data INT);

CREATE VIEW schemaB.bar AS SELECT * FROM schemaA.foo;

Then, by accident, somebody does:

UPDATE schemaA.foo SET data = data + (RANDOM() * 1000)::INT;

So, you want to restore the data into schemaA.foo.
But schemaA.foo has (bad) data in it that must first
be removed.  It would seem that using 

  pg_restore --clean -n schemaA -t foo my_pg_dump_backup

would solve the problem, it would drop schemaA.foo,
recreate it, and then restore the data.  But this does
not work.  schemaA.foo does not drop because it's
got a dependent database object, schemaB.bar.

Of course there are manual work-arounds.  One of these
is truncating schemaA.foo and then doing a pg_restore
with --data-only.  The manual work-arounds become
increasingly burdensome as you need to restore more
tables.  The case that motivated me was an attempt
to restore the data in an entire schema, one which
contained a significant number of tables.

So, the idea here is to be able to do a data-only
restore, first truncating the data in the tables
being restored to remove the existing corrupted data.

The proposal is to add a --truncate-tables option
to pg_restore.



There were some comments on syntax.

I proposed to use -u as a short option.  This was
thought confusing, given its use in other
Unix command line programs (mysql).   Since there's
no obvious short option, forget it.  Just have
a long option.

Another choice is to avoid introducing yet another
option and instead overload --clean so that when
doing a --data-only restore --clean truncates tables
and otherwise --clean retains the existing behavior of
dropping and re-creating the restored objects.

(I tested pg_restore with 9.1: when --data-only is
used, --clean is ignored; it does not even produce a warning.
This is arguably a bug.)



More serious objections were raised regarding semantics.

What if, instead, the initial structure looked like:

CREATE TABLE schemaA.foo
  (id INT PRIMARY KEY, data INT);

CREATE TABLE schemaB.bar
  (id INT CONSTRAINT "bar_on_foo" REFERENCES schemaA.foo
 , moredata INT);

With a case like this, in most real-world situations, you'd
have to use pg_restore with --disable-triggers if you wanted
to use --data-only and --truncate-tables.  The possibility of
foreign key referential integrity corruption is obvious.

Aside:  Unless you're restoring databases in their entirety
the pg_restore --disable-triggers option makes it easy to
introduce foreign key referential integrity corruption.
In fact, since pg_restore does not wrap its operations
in one big transaction, it's easy to attempt restoration
of a portion of a database, have part of the process
succeed and part of it fail (due to either schema
or data dependencies), and be left worse off
than before you started.  The pg_restore docs might
benefit from a big fat warning regarding
attempts to restore less than an entire database.

So, the discussion went, pg_restore is just another
application and introducing more options
which could lead to corruption of referential integrity is
a bad idea.

But pg_restore should not be thought of as just another
front-end.  It should be thought of as a data recovery
tool.  Recovering some data and being left with referential
integrity problems is better than having no data.  This
is true even if, due to different users owning different
schemas and so forth, nobody knows exactly what
might be broken.

Yes, but we can do better.  (The unstated sub-text being that
we don't want to introduce an inferior feature which
will then need to be supported forever.)

How could we do better:

Here I will record only the ideas related to restore,
although there was some mention of dump as well.

There has apparently been some discussion of writing
a foreign data wrapper which would operate on a database
dump.  This might (in ways that are not immediately
obvious to me) address this issue.

The restore process could, based on what table data needs
restoration, look at foreign key dependencies and produce a
list of the tables which all must be restored in order to
ensure foreign key referential integrity.  In the case of
restoration into an empty database the foreign 

Re: [HACKERS] DB & Schema

2012-09-21 Thread Heikki Linnakangas

On 21.09.2012 17:58, m...@rpzdesign.com wrote:

During the execution of the ProcessUtility() function in
/src/backend/xxx/utility.c,
the CreateStmt node type is processed to create a table.

Is there a global function in the context of the backend process that
will return the current database and schema names?


There's a global variable MyDatabaseId for the database the backend is 
connected to. It doesn't change after login.


There's no such thing as a "current schema", but I think you'll want to 
take a look at src/backend/catalog/namespace.c, which handles the 
search_path. There's a variable activeCreationNamespace there; look at 
the functions in namespace.c to see how it works.
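
For the CREATE TABLE case specifically, here is a minimal sketch of one
way to resolve both names from inside ProcessUtility using existing
backend helpers; the function name and log message are just illustrative:

    #include "postgres.h"
    #include "miscadmin.h"            /* MyDatabaseId */
    #include "nodes/parsenodes.h"     /* CreateStmt */
    #include "catalog/namespace.h"    /* RangeVarGetCreationNamespace */
    #include "commands/dbcommands.h"  /* get_database_name */
    #include "utils/lsyscache.h"      /* get_namespace_name */

    static void
    report_create_target(CreateStmt *stmt)
    {
        char *dbname = get_database_name(MyDatabaseId);
        Oid   nspoid = RangeVarGetCreationNamespace(stmt->relation);
        char *nspname = get_namespace_name(nspoid);

        elog(LOG, "creating \"%s\" in database \"%s\", schema \"%s\"",
             stmt->relation->relname, dbname, nspname);
    }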


- Heikki


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Tom Lane
Kohei KaiGai  writes:
> Tom, could you suggest which approach is better: should PQfn() be
> responsible for the endianness translation of 64-bit integers, or
> should the callers (lo_tell64 or lo_lseek64) be?

Adding anything inside pqFunctionCall is useless, unless we were to add
an int64 variant to PQArgBlock, which isn't a good idea because it will
be an ABI break.  The functions in fe-lobj.c have to set up the int64
value as if it were pass-by-reference, which means dealing with
endianness concerns there.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-09-21 Thread David E. Wheeler
On Sep 21, 2012, at 10:59 AM, Tom Lane  wrote:

> I don't believe this has been thought through nearly carefully enough.
> If CREATE SCHEMA created a schema and nothing more, then the proposed
> implementation would probably be fine.  But per spec, CREATE SCHEMA
> can specify not only creating the schema but a whole bunch of objects
> within the schema.  As coded, if the schema exists then creation of
> the specified sub-objects is just skipped, regardless of whether they
> exist or not.  I doubt that this is really sane behavior.  Would the
> principle of least astonishment dictate that the IF NOT EXISTS option
> apply implicitly to each sub-object as well?  (If so, we'd have to
> extend everything that can appear in OptSchemaEltList; most of those
> commands don't have IF NOT EXISTS options today.)

I had no idea about that functionality. Seems very strange.

> A possible compromise is to allow the IF NOT EXISTS option only without
> a schema-element list, which I suspect is the only use-case David had in
> mind to start with anyway.

Yes, true.

> The existing patch added the check in a pretty randomly chosen spot too,
> with one bad consequence being that if the schema already exists then it
> will fall out with the wrong user ID in effect, creating a security bug.
> But I'm not entirely sure where to put the check instead.  Should we put
> it before or after the permissions checks --- that is, should IF NOT
> EXISTS require that you would have had permission to create the schema?
> Or, if the schema does exist, should we just call it good anyway?  I'm
> too lazy to look at how other INE options resolved this question, but it
> seems like we ought to be consistent.

Agreed. But if it already exists, where does it currently die? ISTM that would 
be the point to check, if possible.

> Also, the AUTHORIZATION clause of CREATE SCHEMA creates an aspect of all
> this that doesn't exist for any other kind of CREATE command, namely
> that the object might have been requested to be created under some other
> user id.  For instance, supposing that we were to go forward with trying
> to create sub-objects, but the ownership of the existing schema is
> different from what's implied or specified by CREATE SCHEMA, should the
> sub-objects be (attempted to be) created as owned by that user instead?
> Perhaps not, but I'm not at all sure.

I tend to think that if the schema exists, there should be no attempt to create 
the sub-objects. Seems the least astonishing to me.

Best,

David




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-09-21 Thread Tom Lane
"Dickson S. Guedes"  writes:
> I reviewed this v5 of patch:

> - https://commitfest.postgresql.org/action/patch_view?id=907

> The patch is small and implements a new syntax to CREATE SCHEMA
> that allow the creation of a schema be skipped when IF NOT EXISTS is
> used.

I don't believe this has been thought through nearly carefully enough.
If CREATE SCHEMA created a schema and nothing more, then the proposed
implementation would probably be fine.  But per spec, CREATE SCHEMA
can specify not only creating the schema but a whole bunch of objects
within the schema.  As coded, if the schema exists then creation of
the specified sub-objects is just skipped, regardless of whether they
exist or not.  I doubt that this is really sane behavior.  Would the
principle of least astonishment dictate that the IF NOT EXISTS option
apply implicitly to each sub-object as well?  (If so, we'd have to
extend everything that can appear in OptSchemaEltList; most of those
commands don't have IF NOT EXISTS options today.)

This is somewhat connected to our previous arguments about whether CINE
ought to try to make any guarantees about whether the existing object is
at all like the object that would have been created if the statement
executed successfully.  I realize that the existing statements that have
an INE option have entirely blown off this concern, but I still think
that it's a question that ought to be considered, particularly for
objects that have separately creatable sub-structure.

A possible compromise is to allow the IF NOT EXISTS option only without
a schema-element list, which I suspect is the only use-case David had in
mind to start with anyway.

The existing patch added the check in a pretty randomly chosen spot too,
with one bad consequence being that if the schema already exists then it
will fall out with the wrong user ID in effect, creating a security bug.
But I'm not entirely sure where to put the check instead.  Should we put
it before or after the permissions checks --- that is, should IF NOT
EXISTS require that you would have had permission to create the schema?
Or, if the schema does exist, should we just call it good anyway?  I'm
too lazy to look at how other INE options resolved this question, but it
seems like we ought to be consistent.

Also, the AUTHORIZATION clause of CREATE SCHEMA creates an aspect of all
this that doesn't exist for any other kind of CREATE command, namely
that the object might have been requested to be created under some other
user id.  For instance, supposing that we were to go forward with trying
to create sub-objects, but the ownership of the existing schema is
different from what's implied or specified by CREATE SCHEMA, should the
sub-objects be (attempted to be) created as owned by that user instead?
Perhaps not, but I'm not at all sure.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] External Replication

2012-09-21 Thread m...@rpzdesign.com

Heikki:

Thanks for the response.  I am writing an external replication engine
and putting hooks into postgres to send "signals" via a unix socket to 
the outside world.


All of the logic and implementation will occur OUTSIDE the postgres 
codebase and will not interfere with any WAL-based replication schemes.

The usual "Trigger" level asynch replication does on not suffice since 
it does not handle
new databases, new schemas, new tables, new indexes, alter everything, 
new functions, etc.


So I started putting hooks into utility.c->xxx_ProcessUtility(..., stmt*
parsetree, ...) so that discrete statements can be signaled.


Does anybody have other ideas where to better locate the "Hooks" for 
external replication/signaling

than utility.c?

One drawback is that I have to patch directly into those areas where new 
relation IDs are created
so I can pass them outside of the process to the replication engine.  
ProcessUtility does a really bad job of retaining
its work as it processes the statements, so I have to patch the code where 
the relation IDs are created.  Those new IDs are never retained;
when ProcessUtility returns, its work is done.

Is there a way to put a "Trigger" on pg_class, pg_database, pg_namespace 
instead of patching the statically

compiled binary to simulate the triggers?

Cheers,

marco


On 9/21/2012 10:15 AM, Heikki Linnakangas wrote:

On 21.09.2012 17:58, m...@rpzdesign.com wrote:

During the execution of the ProcessUtility() function in
/src/backend/xxx/utility.c,
the CreateStmt node type is processed to create a table.

Is there a global function in the context of the backend process that
will return the current database and schema names?


There's a global variable MyDatabaseId for the database the backend is 
connected to. It doesn't change after login.


There's no such thing as a "current schema", but I think you'll want 
to take a look at src/backend/catalog/namespace.c, which handles the 
search_path. There's a variable activeCreationNamespace there; look at 
the functions in namespace.c to see how it works.


- Heikki






--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] External Replication

2012-09-21 Thread Andres Freund
On Friday, September 21, 2012 08:12:26 PM m...@rpzdesign.com wrote:
> Heikki:
> 
> Thanks for the response.  I am writing an external replication engine
> and putting hooks into postgres to send "signals" via a unix socket to
> the outside world.
> 
> All of the logic and implementation will occur OUTSIDE the postgres
> codebase and
> will not interfere with any WAL based replication schemes.
> 
> The usual "Trigger" level asynch replication does on not suffice since
> it does not handle
> new databases, new schemas, new tables, new indexes, alter everything,
> new functions, etc.
> 
> So I started putting into utility.c->xxx_ProcessUtility(..., stmt*
> parsetree,) so that discrete
> 
> Does anybody have other ideas where to better locate the "Hooks" for
> external replication/signaling
> than utility.c?
Look into the new event triggers started recently. 

Commits
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3855968f328918b6cd1401dd11d109d471a54d40
and
3a0e4d36ebd7f477822d5bae41ba121a40d22ccc

Look into the earlier discussions around event/command triggers for why
putting stuff plainly into ProcessUtility is not all that helpful...

Greetings,

Andres

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] External Replication

2012-09-21 Thread m...@rpzdesign.com

Andres:

Thanks for the link on the GIT patch code.

I did a big majordomo search and found some stuff related to command 
triggers:


http://archives.postgresql.org/pgsql-hackers/2012-03/msg00169.php

"Look into the new event triggers started recently." -

Are these command triggers currently in the 9.2.0 code base, or are they in 
an alpha 9.2.1xxx?


I searched the 9.2.0 code base and found nothing on CREATE TRIGGER that 
applies to anything other than TABLES and VIEWS.


I cannot wait for stable code to arrive; I need to add it today.

Since the hackers group is already working on this, I will not even try 
to contribute this work, unless someone wants it.


marco


On 9/21/2012 1:19 PM, Andres Freund wrote:

On Friday, September 21, 2012 08:12:26 PM m...@rpzdesign.com wrote:

Heikki:

Thanks for the response.  I am writing an external replication engine
and putting hooks into postgres to send "signals" via a unix socket to
the outside world.

All of the logic and implementation will occur OUTSIDE the postgres
codebase and
will not interfere with any WAL based replication schemes.

The usual "Trigger" level asynch replication does on not suffice since
it does not handle
new databases, new schemas, new tables, new indexes, alter everything,
new functions, etc.

So I started putting into utility.c->xxx_ProcessUtility(..., stmt*
parsetree,) so that discrete

Does anybody have other ideas where to better locate the "Hooks" for
external replication/signaling
than utility.c?

Look into the new event triggers started recently.

Commits
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=3855968f328918b6cd1401dd11d109d471a54d40
and
3a0e4d36ebd7f477822d5bae41ba121a40d22ccc

Look into the earlier discussions around event/command triggers for why
putting stuff plainly into ProcessUtility is not all that helpful...

Greetings,

Andres





--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] External Replication

2012-09-21 Thread Andres Freund
Hi,

On Friday, September 21, 2012 11:06:46 PM m...@rpzdesign.com wrote:
> Thanks for the link on the GIT patch code.
> 
> I did a big major domo search and found some stuff related to command
> triggers:
> 
> http://archives.postgresql.org/pgsql-hackers/2012-03/msg00169.php
> 
> "Look into the new event triggers started recently." -
> 
> Are these command triggers currently in the 9.2.0 code base, or are they
> in an alpha 9.2.1xxx?
It's not in 9.2; it will only be in 9.3.

Andres
-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Tatsuo Ishii
Tom, Kaigai,

> Kohei KaiGai  writes:
>> Tom, could you suggest which approach is better: should PQfn() be
>> responsible for the endianness translation of 64-bit integers, or
>> should the callers (lo_tell64 or lo_lseek64) be?
> 
> Adding anything inside pqFunctionCall is useless, unless we were to add
> an int64 variant to PQArgBlock, which isn't a good idea because it will
> be an ABI break.  The functions in fe-lobj.c have to set up the int64
> value as if it were pass-by-reference, which means dealing with
> endianness concerns there.

I just want to make sure I understand you guys' point.

We do not modify pqFunctionCall. That means PQfn does not accept the
PQArgBlock.isint != 0 && PQArgBlock.len == 8 case. If a PQfn caller
wants to send a 64-bit integer, it should set PQArgBlock.isint = 0 and
PQArgBlock.len = 8 and pass the data by reference. Endianness should
be taken care of by the PQfn caller. Also, we do not modify fe-misc.c
because there's no point in adding pqPutint64/pqGetint64 (they were
called from pqFunctionCall in the patch).
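
To illustrate the convention, a caller such as lo_lseek64 could look like
the sketch below. This is only an illustration of the agreed approach, not
the actual patch code; lo_swap64() and the function-OID argument are made
up for the example, and error checking is elided.

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include "libpq-fe.h"

/*
 * Conditional byte reversal built from htonl(); because it is its own
 * inverse, the same helper serves for host-to-network and back.
 */
static uint64_t
lo_swap64(uint64_t v)
{
    uint32_t    hi = htonl((uint32_t) (v >> 32));
    uint32_t    lo = htonl((uint32_t) v);
    uint64_t    out;

    memcpy(&out, &hi, 4);
    memcpy((char *) &out + 4, &lo, 4);
    return out;
}

static int64_t
my_lo_lseek64(PGconn *conn, Oid lseek64_oid,
              int fd, int64_t offset, int whence)
{
    PQArgBlock  argv[3];
    PGresult   *res;
    int64_t     netoffset = (int64_t) lo_swap64((uint64_t) offset);
    int64_t     netresult = 0;
    int         result_len;

    argv[0].isint = 1;
    argv[0].len = 4;
    argv[0].u.integer = fd;

    argv[1].isint = 0;                  /* pass-by-reference, as agreed */
    argv[1].len = 8;                    /* 8 bytes, already network order */
    argv[1].u.ptr = (int *) &netoffset;

    argv[2].isint = 1;
    argv[2].len = 4;
    argv[2].u.integer = whence;

    /* the int64 result also travels by reference; swap it back */
    res = PQfn(conn, lseek64_oid, (int *) &netresult, &result_len,
               0, argv, 3);
    PQclear(res);
    return (int64_t) lo_swap64((uint64_t) netresult);
}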
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-09-21 Thread Michael Paquier
On Sat, Sep 22, 2012 at 3:06 AM, David E. Wheeler wrote:

> On Sep 21, 2012, at 10:59 AM, Tom Lane  wrote:
>
> > I don't believe this has been thought through nearly carefully enough.
> > If CREATE SCHEMA created a schema and nothing more, then the proposed
> > implementation would probably be fine.  But per spec, CREATE SCHEMA
> > can specify not only creating the schema but a whole bunch of objects
> > within the schema.  As coded, if the schema exists then creation of
> > the specified sub-objects is just skipped, regardless of whether they
> > exist or not.  I doubt that this is really sane behavior.  Would the
> > principle of least astonishment dictate that the IF NOT EXISTS option
> > apply implicitly to each sub-object as well?  (If so, we'd have to
> > extend everything that can appear in OptSchemaEltList; most of those
> > commands don't have IF NOT EXISTS options today.)
>
> I had no idea about that functionality. Seems very strange.
>
Isn't it this SQL?
CREATE SCHEMA foo CREATE TABLE aa (a int) CREATE TABLE bb (b int);
-- 
Michael Paquier
http://michael.otacoo.com


Re: [HACKERS] pg_reorg in core?

2012-09-21 Thread sakamoto

(2012/09/21 22:32), Michael Paquier wrote:
On Fri, Sep 21, 2012 at 9:33 PM, Daniele Varrazzo wrote:


On Fri, Sep 21, 2012 at 5:17 AM, Josh Kupershmidt wrote:

I haven't submitted it to PGXN as I prefer the original author to keep
the ownership.

Thanks, I merged your patches with the dev branch for the time being.
It would be great to have some input from the maintainers of pg_reorg 
in pgfoundry to see if they agree about putting it in pgxn.



Hi, I'm Sakamoto, reorg maintainer.
I'm very happy Josh, Michael and Daniele are interested in reorg.

I'm working on the next version of reorg, 1.1.8, which will be released
in a couple of days.
And I have come to think this is a good point at which to reconsider how
we develop and maintain it.
To be honest, we have little development resource available, so no
additional features have been added recently. But there are features and
fixes to be done (as Josh sums up; thanks).


I think it is a good idea to develop on github. Michael's repo is the root?
After the release of 1.1.8, I will freeze the CVS repository and create a
mirror on github.

# Or Michael's repo will do :)

I have received some patches from Josh and Daniele. They should go into
the next major version, 1.2, so some of them may not be included in 1.1.8
(since it's a minor version bump), but I appreciate them very much.

I think we can discuss further at reorg list.

Sakamoto


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_reorg in core?

2012-09-21 Thread Michael Paquier
On Sat, Sep 22, 2012 at 9:08 AM, sakamoto  wrote:

> (2012/09/21 22:32), Michael Paquier wrote:
>
>> On Fri, Sep 21, 2012 at 9:33 PM, Daniele Varrazzo wrote:
>>
>> On Fri, Sep 21, 2012 at 5:17 AM, Josh Kupershmidt wrote:
>>
>> I haven't submitted it to PGXN as I prefer the original author to keep
>> the ownership.
>>
>> Thanks, I merged your patches with the dev branch for the time being.
>> It would be great to have some input from the maintainers of pg_reorg in
>> pgfoundry to see if they agree about putting it in pgxn.
>>
> Hi, I'm Sakamoto, reorg maintainer.
> I'm very happy Josh, Michael and Daniele are interested in reorg.
>
> I'm working on the next version of reorg, 1.1.8, which will be released
> in a couple of days.
> And I have come to think this is a good point at which to reconsider how
> we develop and maintain it.
> To be honest, we have little development resource available, so no
> additional features have been added recently. But there are features and
> fixes to be done (as Josh sums up; thanks).
>
> I think it is a good idea to develop on github. Michael's repo is the root?
> After the release of 1.1.8, I will freeze the CVS repository and create a
> mirror on github.
> # Or Michael's repo will do :)
>
As you wish. You could create a root folder based on a new organization,
use your own account, or use my repo.
The result will be the same; I leave it to your discretion.

> I have received some patches from Josh and Daniele. They should go into
> the next major version, 1.2, so some of them may not be included in 1.1.8
> (since it's a minor version bump), but I appreciate them very much.
>
Great!
-- 
Michael Paquier
http://michael.otacoo.com


Re: [HACKERS] 64-bit API for large object

2012-09-21 Thread Tatsuo Ishii
> Tom, Kaigai,
> 
>> Kohei KaiGai  writes:
>>> Tom, could you give us a suggestion on which approach is better: whether
>>> PQfn should have responsibility for endian translation of the 64-bit
>>> integer, or the callers (lo_tell64 or lo_seek64)?
>> 
>> Adding anything inside pqFunctionCall is useless, unless we were to add
>> an int64 variant to PQArgBlock, which isn't a good idea because it will
>> be an ABI break.  The functions in fe-lobj.c have to set up the int64
>> value as if it were pass-by-reference, which means dealing with
>> endianness concerns there.
> 
> I just want to make sure I understand you guys' point.
> 
> We do not modify pqFunctionCall. That means PQfn does not accept the
> PQArgBlock.isint != 0 && PQArgBlock.len == 8 case. If a PQfn caller
> wants to send a 64-bit integer, it should set PQArgBlock.isint = 0 and
> PQArgBlock.len = 8 and pass the data by reference. Endianness should
> be taken care of by the PQfn caller. Also, we do not modify fe-misc.c
> because there's no point in adding pqPutint64/pqGetint64 (they were
> called from pqFunctionCall in the patch).

Oops. There's no such function as pqGetint64 in the patch. The 64-bit int
case is taken care of inside pqGetint.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_reorg in core?

2012-09-21 Thread Christopher Browne
If the present project is having a tough time doing enhancements, I should
think it mighty questionable to try to draw it into core, as that presses it
onto a group of already very busy developers.

On the other hand, if the present development efforts can be made more
public, by having them take place in a more public repository, that at
least has potential to let others in the community see and participate.
There are no guarantees, but privacy is liable to hurt.

I wouldn't expect any sudden huge influx of developers, but a steady
visible stream of development effort would be mighty useful to a "merge
into core" argument.

A *lot* of projects are a lot like this.  On the Slony project, we have
tried hard to maintain this sort of visibility.  Steve Singer, Jan Wieck
and I do our individual efforts on git repos visible at GitHub to ensure
ongoing efforts aren't invisible inside a corporate repo.  It hasn't led to
any massive of extra developers, but I am always grateful to see Peter
Eisentraut's bug reports.


Re: [HACKERS] pg_reorg in core?

2012-09-21 Thread sakamoto

(2012/09/22 10:02), Christopher Browne wrote:


If the present project is having a tough time doing enhancements, I
should think it mighty questionable to try to draw it into core, as that
presses it onto a group of already very busy developers.


On the other hand, if the present development efforts can be made more 
public, by having them take place in a more public repository, that at 
least has potential to let others in the community see and 
participate.  There are no guarantees, but privacy is liable to hurt.


I wouldn't expect any sudden huge influx of developers, but a steady 
visible stream of development effort would be mighty useful to a 
"merge into core" argument.


A *lot* of projects are a lot like this.  On the Slony project, we 
have tried hard to maintain this sort of visibility.  Steve Singer, 
Jan Wieck and I do our individual efforts on git repos visible at 
GitHub to ensure ongoing efforts aren't invisible inside a corporate 
repo.  It hasn't led to any massive influx of extra developers, but I am 
always grateful to see Peter Eisentraut's bug reports.




Agreed.  What the reorg project needs first is transparency: issue
tracking, bug lists, a list of TODO items, clarified release schedules,
quality assurance and so forth.
Only after all that is done can the discussion about putting it into core
be started.

Until now, reorg has been developed and maintained behind a corporate
repository. But now that its activity has slowed, what I should do as a
maintainer is make the development process more public and find someone to
cooperate with :)

Sakamoto


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: [WIP] Patch : Change pg_ident.conf parsing to be the same as pg_hba.conf

2012-09-21 Thread Amit kapila
On Friday, September 21, 2012 8:28 PM Heikki Linnakangas wrote:
> On 02.07.2012 15:08, Amit Kapila wrote:
>> Attached is a Patch to change the parsing of pg_ident.conf to make it
>> similar to pg_hba.conf.
>> This is based on Todo Item:
>> http://archives.postgresql.org/pgsql-hackers/2011-06/msg02204.php
>
>> Purpose - This will allow to catch syntax errors in pg_ident at the startup
>> or reload time.
>

> Looks good to me, committed with some small cleanup.

Thank you.

With Regards,
Amit Kapila.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [ADMIN] pg_upgrade from 9.1.3 to 9.2 failed

2012-09-21 Thread Bruce Momjian
On Fri, Sep 21, 2012 at 05:16:46PM +0800, Rural Hunter wrote:
> >>I am thinking this query needs to be split apart into a UNION where the
> >>second part handles TOAST tables and looks at the schema of the _owner_
> >>of the TOAST table.  Needs to be backpatched too.
> >OK, I am at a conference now so will not be able to write-up a patch
> >until perhaps next week.  You can drop the information schema in the old
> >database and pg_upgrade should run fine.  I will test your failure once
> >I create a patch.
> >
> OK. I will try. I also found some problems with initdb when re-initializing
> my pg9.2 db.
> 1. initdb doesn't create the pg_log dir, so pg cannot be started
> after initdb until I create the dir manually.
> 2. The case issue of the db charset name. I installed pg9.1 and pg9.2
> with zh_CN.UTF8. But somehow it seems the actual charset name is
> stored as lowercase 'zh_CN.utf8' during the install. In this case,
> I can run pg_upgrade without problem since they are both
> lowercase. But when I re-init pg9.2 with option '-E zh_CN.UTF8',
> pg_upgrade will fail and report an encoding/charset mismatch: one
> is uppercase and the other is lowercase. If I run initdb with '-E
> zh_CN.utf8', it will tell me there is no such charset in the system.
> I found a workaround: run initdb with '--lc-collate=zh_CN.utf8
> --lc-ctype=zh_CN.utf8 --lc-messages=zh_CN.utf8
> --lc-monetary=zh_CN.utf8 --lc-numeric=zh_CN.utf8
> --lc-time=zh_CN.utf8'. But the case problem is really confusing.

Yes, it sounds very confusing.  I wonder if pg_upgrade should do a
case-insensitive comparison of encodings?  Comments?
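
Something like the sketch below would be enough; the function and message
are illustrative rather than actual pg_upgrade code, but pg_strcasecmp is
the existing portable helper:

/* illustrative only: treat zh_CN.UTF8 and zh_CN.utf8 as equal */
static void
check_locale_name(const char *old_value, const char *new_value,
                  const char *what)
{
    if (pg_strcasecmp(old_value, new_value) != 0)
        pg_log(PG_FATAL,
               "old and new cluster %s values do not match: \"%s\" vs. \"%s\"\n",
               what, old_value, new_value);
}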

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)

2012-09-21 Thread Amit kapila
On Friday, September 21, 2012 6:50 PM Alvaro Herrera wrote:
> Excerpts from Amit Kapila's message of vie sep 21 02:26:49 -0300 2012:
>> On Thursday, September 20, 2012 7:13 PM Alvaro Herrera wrote:
>>
>>> Well, there is a difficulty here which is that the number of processes
>>> connected to databases must be configured during postmaster start
>>> (because it determines the size of certain shared memory structs).  So
>>> you cannot just spawn more tasks if all max_worker_tasks are busy.
>>> (This is a problem only for those workers that want to be connected as
>>> backends.  Those that want libpq connections do not need this and are
>>> easier to handle.)
>>
>> If not the above, then where is the need for dynamic worker tasks, as
>> mentioned by Simon?

> Well, I think there are many uses for dynamic workers, or short-lived
> workers (start, do one thing, stop and not be restarted).

> In my design, a worker is always restarted if it stops; otherwise there
> is no principled way to know whether it should be running or not (after
> a crash, should we restart a registered worker?  We don't know whether
> it stopped before the crash.)  So it seems to me that at least for this
> first shot we should consider workers as processes that are going to be
> always running as long as postmaster is alive.  On a crash, if they have
> a backend connection, they are stopped and then restarted.

a. Is there a chance that a crashed worker could have left shared memory
inconsistent, e.g. by holding a lock on some structure and crashing before
releasing it? If so, do we need to reinitialize shared memory as well when
the worker restarts?

b. Will these worker tasks be able to take on new jobs, or will they only
do the jobs they were started with?


With Regards,
Amit Kapila.



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CREATE SCHEMA IF NOT EXISTS

2012-09-21 Thread Fabrízio de Royes Mello
2012/9/21 David E. Wheeler 

> On Sep 21, 2012, at 10:59 AM, Tom Lane  wrote:
>
> > I don't believe this has been thought through nearly carefully enough.
> > If CREATE SCHEMA created a schema and nothing more, then the proposed
> > implementation would probably be fine.  But per spec, CREATE SCHEMA
> > can specify not only creating the schema but a whole bunch of objects
> > within the schema.  As coded, if the schema exists then creation of
> > the specified sub-objects is just skipped, regardless of whether they
> > exist or not.  I doubt that this is really sane behavior.  Would the
> > principle of least astonishment dictate that the IF NOT EXISTS option
> > apply implicitly to each sub-object as well?  (If so, we'd have to
> > extend everything that can appear in OptSchemaEltList; most of those
> > commands don't have IF NOT EXISTS options today.)
>
> I had no idea about that functionality. Seems very strange.
>
>
I completely forgot this functionality. The example below is from our docs
[1]:

CREATE SCHEMA hollywood
CREATE TABLE films (title text, release date, awards text[])
CREATE VIEW winners AS
SELECT title, release FROM films WHERE awards IS NOT NULL;

The "CREATE SCHEMA" statement accepts another "CREATE" commands (CREATE
{TABLE | VIEW | INDEX | SEQUENCE | TRIGGER}), and the current patch do not
consider this options.


> > A possible compromise is to allow the IF NOT EXISTS option only without
> > a schema-element list, which I suspect is the only use-case David had in
> > mind to start with anyway.
>
> Yes, true.
>
>
Ok.



> > The existing patch added the check in a pretty randomly chosen spot too,
> > with one bad consequence being that if the schema already exists then it
> > will fall out with the wrong user ID in effect, creating a security bug.
> > But I'm not entirely sure where to put the check instead.  Should we put
> > it before or after the permissions checks --- that is, should IF NOT
> > EXISTS require that you would have had permission to create the schema?
> > Or, if the schema does exist, should we just call it good anyway?  I'm
> > too lazy to look at how other INE options resolved this question, but it
> > seems like we ought to be consistent.
>
> Agreed. But if it already exists, where does it currently die? ISTM that
> would be the point to check, if possible.
>
>
I changed the patch (attached) to skip only the schema creation and execute
the other statements...



> > Also, the AUTHORIZATION clause of CREATE SCHEMA creates an aspect of all
> > this that doesn't exist for any other kind of CREATE command, namely
> > that the object might have been requested to be created under some other
> > user id.  For instance, supposing that we were to go forward with trying
> > to create sub-objects, but the ownership of the existing schema is
> > different from what's implied or specified by CREATE SCHEMA, should the
> > sub-objects be (attempted to be) created as owned by that user instead?
> > Perhaps not, but I'm not at all sure.
>
> I tend to think that if the schema exists, there should be no attempt to
> create the sub-objects. Seems the least astonishing to me.
>
>
Why not create the sub-objects? I think the INE clause should affect only
the CREATE SCHEMA statement itself; the others should be executed normally.
We can discuss this further...
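
To make the intended behavior of the attached patch concrete:

-- If hollywood already exists, only the CREATE SCHEMA step is skipped;
-- the sub-command still runs, so it attempts to create hollywood.films
-- (and errors if that table already exists).
CREATE SCHEMA IF NOT EXISTS hollywood
    CREATE TABLE films (title text, release date, awards text[]);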


[1] http://www.postgresql.org/docs/9.2/interactive/sql-createschema.html

-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
>> Blog sobre TI: http://fabriziomello.blogspot.com
>> Perfil Linkedin: http://br.linkedin.com/in/fabriziomello
>> Twitter: http://twitter.com/fabriziomello


create_schema_if_not_exists_v6.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PLV8JS

2012-09-21 Thread Daniel Farina
On Fri, Sep 21, 2012 at 3:14 PM, Milton Labanda <1000ton@gmail.com> wrote:
> Hi friends, which context is appropriate for installing this plugin?
> I have
>
> Ubuntu 11
> x64 architecture
> postgresql 9.2
> libv8-3.1  (system package)
>
> but cannot get it installed. Any ideas?

Well, I'm not sure if pgsql-hackers is quite the right mailing list
for this (there's a plv8 mailing list), but because plv8 is probably
going to be so instrumental to many profitable uses, I can help you
address it quickly; and maybe some other denizens of -hackers will
have some opinions, especially because this general problem is going
to surface more frequently as the number of extensions mushrooms.

Until a distribution/operating system integrator commits to packaging
some version of plv8 and/or v8, I recommend employing static linking
of plv8 against a self-downloaded libv8.  This lets you handle multiple
versions of plv8 and libv8 on the same system (in different database
installs) and lets you get the latest and greatest v8s at will.

Alternatively, you may be determined to get the plv8 build to work
with your OS of choice, even if they do not have packages.  I think
that's a longer discussion that'd need to happen at a plv8 mailing
list.

In general, I think if there is to be a guideline for Postgres
extensions it should be to enable static linking of dependencies or
perhaps a more detailed guideline involving enabling self-contained
concurrent installs of dynamically linked extension dependencies.  I
am currently in the depths of dependency hell involving some of the
GIS libraries and PostGIS and really, really wish I could do either of
these.

There is a relevant discussion here:

https://code.google.com/p/plv8js/issues/detail?id=36#makechanges

-- 
fdr


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] 9.2 Cascading replication after slave promotion

2012-09-21 Thread Gibheer
On Tue, 14 Aug 2012 10:50:07 -0700
Josh Berkus  wrote:

> 
> > Yeah, I think there's more people that agree with this use-case
> > than you seem to think..  That said, I appreciate that it's not a
> > trivial thing to support cleanly.
> 
> Not trivial, no, but not major either.  Really what needs to happen is
> for the timeline change record to get transmitted over the WAL stream.
> 
> Hmmm.  You know, I bet I could get stream-only remastering working in
> an unsafe way just by disabling the timeline checks.  Time to test ...
> 

Isn't that what recovery_target_timeline in recovery.conf already
does? It switches to the next timeline after a master migration. See
http://www.postgresql.org/docs/current/static/recovery-target-settings.html
for further information.
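
For reference, the standby-side setup being referred to is just something
like this (the conninfo values are placeholders):

# recovery.conf on a standby that should follow a promoted master
standby_mode = 'on'
primary_conninfo = 'host=newmaster port=5432 user=replication'
# follow the newest timeline found, rather than the one that was
# current when the base backup was taken
recovery_target_timeline = 'latest'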


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal - assign result of query to psql variable

2012-09-21 Thread Pavel Stehule
Hello

2012/9/19 Shigeru HANADA :
> On Fri, Aug 10, 2012 at 3:21 AM, Pavel Stehule wrote:
>> there is new version of this patch
>>
>> * cleaned var list parser
>> * new regress tests
>> * support FETCH_COUNT > 0
>
> Here are my review comments.
>
> Submission
> ==
> The patch is formatted in context diff style, and it could be applied
> cleanly against the latest master.  This patch includes documentation and
> tests, but IMO they need some enhancement.
>
> Usability
> =
> This patch provides a new psql command, \gset, which sends the content of
> the query buffer to the server and stores the result of the query into
> psql variables.
> The name "\gset" is a mixture of \g, which sends the result to a file or
> pipe, and \set, which sets a variable to some value, so it should sound
> natural to psql users.
>
> Feature test
> =
> Compilation completed without warnings.  Regression tests for \gset passed,
> but I have some comments on them.
>
> - Other regression tests have comment "-- ERROR" just after queries
> which should fail.  It would be nice to follow this manner.
> - Typo "to few" in expected file and source file.
> - How about adding testing "\gset" (no variable list) to "should fail"?
> - Is it intentional that \gset can set special variables such as
> AUTOCOMMIT and HOST?  I don't see any downside for this behavior,
> because \set also can do that, but it is not documented nor tested at all.
>
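
For reference, usage per the patch's proposed documentation looks like
this:

foo=> SELECT 'hello', 'wonderful', 'world!' \gset var1,,var3
foo=> \echo :var1 :var3
hello world!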

I use a same "SetVariable" function, so a behave should be same

> Document
> 
> - Adding some description of \gset command, especially about limitation
> of variable list, seems necessary.
> - In addition to the meta-command section, "Advanced features" section
> mentions how to set psql's variables, so we would need some mention
> there too.
> - The term "target list" might not be familiar to users, since it
> appears only in sections dealing with PG internals.  I think
> that the feature described in the section "Retrieving Query Results" in
> the ECPG document is similar to this feature.
> http://www.postgresql.org/docs/devel/static/ecpg-variables.html

I invite any proposals for enhancing the documentation. Personally, I am
a PostgreSQL developer, so I don't know any term other than
"target list", but any user-friendly description is welcome.

>
> Coding
> ==
> The code follows our coding conventions.  Here are comments for coding.
>
> - Some typo found in comments, please see attached patch.
> - There is a code path which doesn't print error message even if libpq
> reported error (PGRES_BAD_RESPONSE, PGRES_NONFATAL_ERROR,
> PGRES_FATAL_ERROR) in StoreQueryResult.  Is this intentional?  FYI, ecpg
> prints "bad response" message for those errors.

Yes, that is an open question. I used the same pattern as PrintQueryResult,
but the "bad response" message should be used.

I am sending updated patch

>
> Although I'll look the code more closely later, but anyway I marked the
> patch "Waiting on Author" for comments above.
>
> Regards,
> --
> Shigeru HANADA


gset_04.diff
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal - assign result of query to psql variable

2012-09-21 Thread Shigeru HANADA
Hi Pavel,

(2012/09/21 2:01), Pavel Stehule wrote:
>> - Is it intentional that \gset can set special variables such as
>> AUTOCOMMIT and HOST?  I don't see any downside for this behavior,
>> because \set also can do that, but it is not documented nor tested at all.
>>
> 
> I use a same "SetVariable" function, so a behave should be same

It seems reasonable.

>> Document
>> 
>> - Adding some description of \gset command, especially about limitation
>> of variable list, seems necessary.
>> - In addition to the meta-command section, "Advanced features" section
>> mentions how to set psql's variables, so we would need some mention
>> there too.
>> - The term "target list" might not be familiar to users, since it
>> appears only in sections dealing with PG internals.  I think
>> that the feature described in the section "Retrieving Query Results" in
>> the ECPG document is similar to this feature.
>> http://www.postgresql.org/docs/devel/static/ecpg-variables.html
> 
> I invite any proposals for enhancing the documentation. Personally, I am
> a PostgreSQL developer, so I don't know any term other than
> "target list", but any user-friendly description is welcome.

How about saying "stores the query's output into the corresponding
variables"? Please see the attached file for my proposal.  I also mentioned
the one-row limit and the ability to omit variables.

>> Coding
>> ==
>> The code follows our coding conventions.  Here are comments for coding.
>>
>> - Some typo found in comments, please see attached patch.
>> - There is a code path which doesn't print error message even if libpq
>> reported error (PGRES_BAD_RESPONSE, PGRES_NONFATAL_ERROR,
>> PGRES_FATAL_ERROR) in StoreQueryResult.  Is this intentional?  FYI, ecpg
>> prints "bad response" message for those errors.
> 
> yes - it is question. I use same pattern like PrintQueryResult, but
> "bad response" message should be used.
> 
> I am sending updated patch

It seems ok.

BTW, as far as I can see, no psql backslash command, including \setenv (it
was added in 9.2), has a regression test in core (I mean src/test/regress).
Is there any convention about this issue?  If psql backslash commands
(or any other psql feature) don't need regression tests, we can remove
psql.(sql|out).
# Of course we need to test new features by hand.

Anyway, IMO the name "psql" suggests a larger area than the patch
covers.  How about renaming the regression test from psql to psql_cmd or
backslash_cmd?

-- 
Shigeru HANADA
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 3693a5a..c4ac674 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1483,8 +1483,8 @@ testdb=>
 way. Use \i for that.) This means that
 if the query ends with (or contains) a semicolon, it is
 immediately executed. Otherwise it will merely wait in the
-query buffer; type semicolon, \g or \gset to send it, or
-\r to cancel.
+query buffer; type semicolon, \g or
+\gset to send it, or \r to cancel.
 
 
 
@@ -1621,9 +1621,19 @@ Tue Oct 26 21:40:57 CEST 1999
 
 
 
-Sends the current query input buffer to the server and stores
-the query's target list a corresponding list of psql
-variables.
+ Sends the current query input buffer to the server and stores the
+ query's output into corresponding variable.  The preceding query must
+ return only one row, and the number of variables must be same as the
+ number of elements in SELECT list.  If you don't
+ need any of items in SELECT list, you can omit
+ corresponding variable.
+ Example:
+
+foo=> SELECT 'hello', 'wonderful', 'world!' \gset var1,,var3 
+foo=> \echo :var1 :var3
+hello world!
+
 
 
   

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_reorg in core?

2012-09-21 Thread M.Sakamoto
Hi,
I'm sakamoto, maintainer of reorg.

>> What would also be great is to move the project directly into github to
>> facilitate its maintenance and development.
> No argument from me there, especially as I have my own fork in github,
> but that's up to the current maintainers.
Yup, I think development on CVS (on pgFoundry) is a bit awkward for
me, and github would be a suitable place.

To be honest, we have little development resource available, so
no additional features have been added recently. But features and fixes to
be done have piled up, which Josh sums up.

In the short term, within this month I'll release a minor version update
of reorg to support PostgreSQL 9.2. And I think it's time to
reconsider the way we maintain pg_reorg.
I'm happy that Josh and Michael are interested in reorg,
and I would welcome you as maintainers :)

I think we can discuss at reorg list.

M.Sakamoto NTT OSS Center


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers