Re: [GENERAL] could not migrate 8.0.13 database with large object data to 9.5.1

2016-03-01 Thread John R Pierce

(thread moved from pg_bugs)
(upgrading an 8.0.13 database on Windows XP 32-bit to 9.5.1 on Windows 8 
64-bit.)


On 3/1/2016 8:05 PM, Premsun Choltanwanich wrote:

Modified the command by removing the -Ft flag, as per your suggestion:
pg_dump -v -h 192.168.200.75 -U clubadmin -d clubprogram | psql -U 
clubadmin -d clubprogram


Result (got the same message whether or not the -b parameter was given):
pg_dump: reading rewrite rules
pg_dump: reading policies
pg_dump: reading large objects
...
pg_dump: creating FUNCTION "public.plpgsql_call_handler()"
pg_dump: creating FUNCTION "public.plpgsql_validator(oid)"
...
pg_dump: creating FUNCTION "public.pg_file_write(text, text, boolean)"


those all sound like standard postgres functions; it's not clear to me 
why pg_dump is generating the CREATE FUNCTION code for them.



pg_dump: [archiver] could not write to output file: Invalid argument



ok, presumably your new box has plenty of disk space?  try this on the 
new 9.5 system...


"C:\Program Files\PostgreSQL\9.5\bin\pg_dump" -v -h 192.168.200.75 
-U clubadmin -d clubprogram -f "D:\Backups\clubprogram-2016-03-01.sql"


(replace D:\Backups\  with wherever you want to put this potentially 
rather large backup file)


if that works, then ...

"C:\Program Files\PostgreSQL\9.5\bin\psql" -U clubadmin -d 
clubprogram -f "D:\Backups\clubprogram-2016-03-01.sql"


should load the dumped data onto the new system.


--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] could not migrate 8.0.13 database with large object data to 9.5.1

2016-03-01 Thread Premsun Choltanwanich
Dear All,

I have a very old project database, which also contains lo data (large 
object data managed by the database functions lo(oid), lo_in(cstring), 
lo_oid(lo), lo_out(lo) and oid(lo)), running on PostgreSQL 8.0.13, and I 
need to migrate it to the most recent version, PostgreSQL 9.5.1.

After installing PostgreSQL 9.5.1, I ran a simple pg_dump command against 
the 8.0.13 server with the parameters required to generate a dump file with 
blobs included, then ran psql with all the required parameters to load that 
dump into the 9.5.1 server. However, after the process completed, almost all 
tables had been transferred to the new server, except the tables that 
contain my lo data, which are all missing from the database.

Please advise: how can I migrate an 8.0.13 database with large object data 
so that it works on 9.5.1?

NETsolutions Asia Limited
http://www.nsasia.co.th




Re: [GENERAL] multicolumn index and setting effective_cache_size using human-readable-numbers

2016-03-01 Thread Jeff Janes
On Mon, Feb 29, 2016 at 3:02 PM, Geoff Winkless  wrote:
>
> Perhaps I'm not being clear. Index 1 has field a and is used in the join no
> matter how small I set effective_cache_size (even 32mb). Index 2 has fields
> a,b but will not be used at ecs of 3gb, 6gb, whatever up til 8gb, when it's
> suddenly used.
>
> For the variation in size of the two indexes (say 20%?) that just doesn't
> add up.

What version of PostgreSQL are you using?  Any chance you can share a
data-generator which reproduces this problem?

Cheers,

Jeff




Re: [GENERAL] Looking for pure C function APIs for server extension: language handler and SPI

2016-03-01 Thread John R Pierce

On 2/29/2016 3:55 PM, da...@andl.org wrote:


What I need (to find or create) is a ‘pure’ C language API to support 
a Postgres server extension. By ‘pure’ I mean one that has no 
knowledge of Postgres internals and that could be called by a generic 
interface provided by some other tool that can support C language APIs.


The reason is that I’m looking to integrate a new language (*Andl*) 
into Postgres. To do that I need to find or create a ‘pure’ C-language 
API to interface with:


1. The generic language call interface (pl_language_handler), which 
must handle conversion of incoming language call parameters and the 
outgoing language call return value.


2. The backend query execution interface (SPI), which must handle 
conversion of outgoing query parameters and incoming query result values.





by "server extension" do you mean you want to use Andl as a "PL/Andl", 
so you can write stored procedures in Andl callable within SQL queries?


or do you mean you want your Andl programs to be able to execute normal 
postgresql queries as a regular client?



for the first, see
http://www.postgresql.org/docs/9.5/static/plhandler.html

for the latter, see
http://www.postgresql.org/docs/9.5/static/libpq.html



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Looking for pure C function APIs for server extension: language handler and SPI

2016-03-01 Thread John McKown
On Mon, Feb 29, 2016 at 5:55 PM,  wrote:

> What I need (to find or create) is a ‘pure’ C language API to support a
> Postgres server extension. By ‘pure’ I mean one that has no knowledge of
> Postgres internals and that could be called by a generic interface provided
> by some other tool that can support C language APIs.
>

​Well, since nobody else has replied yet, have you read:
http://www.postgresql.org/docs/9.5/interactive/libpq.html

libpq is the C callable API which communicates with the PostgreSQL server.
There is a "shared object" and a normal library which can be "statically
linked". But that's really all that _I_ know about it.
​

>
>
> The reason is that I’m looking to integrate a new language (*Andl*) into
> Postgres. To do that I need to find or create a ‘pure’ C-language API to
> interface with:
>
> 1.   The generic language call interface (pl_language_handler), which
> must handle conversion of incoming language call parameters and the
> outgoing language call return value.
>
> 2.   The backend query execution interface (SPI), which must handle
> conversion of outgoing query parameters and incoming query result values.
>
>
>
> There are 5 generic data types: boolean, binary (blob), number
> (decimal/real/integer), text (string/varchar), time (date/datetime). Each
> data type needs to be converted between the Postgres internal data types (
> Datum) and some intermediate data type that can be passed through a pure
> C API. In my case that will be C# (MS or Mono), but others might want to
> use Java or whatever.
>
>
>
> These conversion tasks are identical to those needed to implement an
> ODBC/JDBC or similar interface, but one bound directly to the server and
> not going through a client (socket) connection. I have already done this
> successfully with Sqlite, which already provides a pure C server API and
> excellent documentation to go with it, so I know what it might look like.
>
>
>
> FYI apart from these two APIs (and their 4x5 conversions), the only other
> thing needed is some SQL code generation and I expect to have a working
> language of considerable power.
>
>
>
> Any help, suggestions, pointers much appreciated.
>
>
>
> Regards
>
> David M Bennett FACS
> *--*
>
> *Andl - A New Database Language - andl.org *
>
>
>

-- 
The man has the intellect of a lobotomized turtle.

Maranatha! <><
John McKown


Re: [GENERAL] Schema Size

2016-03-01 Thread drum.lu...@gmail.com
On 2 March 2016 at 12:23, Scott Mead  wrote:

>
> On Tue, Mar 1, 2016 at 6:07 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> You should read the definitions for the functions you are using to
>> retrieve the sizes.
>>
>> ​http://www.postgresql.org/docs/current/static/functions-admin.html​
>>
>> +1, you've gotta be careful with each of these, they all tend to hide
> different, yet critical components of size that you may be having trouble
> resolving.
>
>  The other thing to consider is that this isn't including any on-disk
> space required for your change traffic in the WAL.  Your $PGDATA will
> always be larger than the sum of all your databases sizes...
>
>
>> On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com <
>> drum.lu...@gmail.com> wrote:
>>
>>> Hi there
>>>
>>> Wanna see how size a schema is in my PostgreSQL 9.2
>>>
>>> Got two queries - they return different values... can u please check?
>>>
>>> cheers;
>>>
>>> Query 1:
>>> SELECT schema_name,
>>> pg_size_pretty(sum(table_size)::bigint) as "disk space",
>>> (sum(table_size) / pg_database_size(current_database())) * 100
>>> as "percent"
>>> FROM (
>>>  SELECT pg_catalog.pg_namespace.nspname as schema_name,
>>>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>>>  FROM   pg_catalog.pg_class
>>>  JOIN pg_catalog.pg_namespace
>>>  ON relnamespace = pg_catalog.pg_namespace.oid
>>> ) t
>>> GROUP BY schema_name
>>> ORDER BY schema_name
>>>
>>>
>> ​​pg_relation_size: "Disk space used by the specified fork ('main',
>> 'fsm', 'vm', or 'init') of the specified table or index"
>>
>> The 'init' fork is (I think) non-zero but extremely small.
>> TOAST for a given relation is considered its own table
>>
>>
>>> Query 2:
>>> select schemaname,
>>> pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
>>> from pg_stat_user_tables
>>> group by schemaname
>>>
>>
>> pg_table_size: "Disk space used by the specified table, excluding indexes
>> (but including TOAST, free space map, and visibility map)"
>>
>
> Personally, I'm a huge fan of 'pg_total_relation_size' which is all of
> pg_table_size + indexes.  It really depends on specifically what you're
> trying to count.  If you're looking for the total disk space required by
> your tables in a schema, I always [personally] want to include indexes in
> this count to make sure I understand the total impact on disk of accessing
> my relations.
>
>
>>
>> David J.​
>>
>>
>

So... I'm doing it this way:

CREATE OR REPLACE FUNCTION pg_schema_size(text) RETURNS BIGINT AS
$$
  SELECT SUM(pg_total_relation_size(quote_ident(schemaname) || '.' ||
             quote_ident(tablename)))::BIGINT
  FROM pg_tables
  WHERE schemaname = $1
$$ LANGUAGE SQL;

Link: https://wiki.postgresql.org/wiki/Schema_Size


That's working - but I'd like to test it, comparing the results with
another, trustworthy one. Do you have one?
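One independent cross-check (PostgreSQL-specific, sketched here under the assumption of a schema named public): compute the same total straight from the catalogs. pg_total_relation_size over a schema's ordinary tables already includes their indexes and TOAST, so the two numbers should match:

```sql
-- result of the helper function above
SELECT pg_schema_size('public');

-- independent total computed straight from pg_class
SELECT sum(pg_total_relation_size(c.oid))
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r';  -- ordinary tables only
```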


Re: [GENERAL] Function fixing - PostgreSQL 9.2

2016-03-01 Thread drum.lu...@gmail.com
>
>
>>> *Question:*

 Payments in a Pending state cannot be invoiced and are excluded from
 the Invoice Runs section, but they are showing in the count mechanic.

 How can I solve this?

>>>
>>> ​In 9.2 you probably need to convert the count into a conditional sum:
>>>
>>> SELECT sum(CASE WHEN  THEN 1 ELSE 0 END) FROM data;
>>>
>>> You can probably do the same with count since it excludes nulls.
>>>
>>> SELECT count(NULLIF(invoice_status, 'Pending')) FROM invoices;
>>>
>>> ​9.4 introduced a FILTER clause for Aggregate Expressions that can do
>>> this much more cleanly and efficiently.​
>>>
>>> ​http://www.postgresql.org/docs/9.4/static/sql-expressions.html​
>>>
>>> David J.
>>>
>>>
>>
> ​I barely scanned your original query - just read the description.  I
> don't have the inclination - especially without a self-contained example -
> to make changes to it.
>
> David J.
> ​
>
>
>
Just an update - hope this helps you get more data:


There is a number beside each color flag, and a number in the "sub status"
drop down. Instead of "the number of jobs that have billable items that can
be invoiced", it shows "the number of jobs that have something billable but
not yet invoiced".

And since those payments cannot be invoiced alone, they won't show up in
the list.

This is the Query:

> public function search($type=self::SEARCH_TYPE_STATUS_TYPE, &$rawData =
> []){
> if($type === self::SEARCH_TYPE_STATUS_TYPE) {
> $fields = "
> stj.status_type_id,
> stj.jobs_count,
> stj.job_ids,
> (
> SELECT
> array_to_json(array_agg(srcs))
> FROM
> (
> -- property names in the json match column names in g_statuses_jobs()
> SELECT
> (srs.sr[1]::BIGINT) AS status_id,
> (srs.sr[2]::TEXT) AS status_label,
> (srs.sr[3]::BOOLEAN) AS status_is_default,
> (srs.sr[4]::BIGINT) AS jobs_count,
> (srs.sr[5]::JSON) AS job_ids
> FROM
> (SELECT regexp_split_to_array(regexp_split_to_table(stj.status_type_data,
> E'\n'), E'\t')) AS srs(sr)
> ) AS srcs
> ) AS status_type_json
> ";
> $searchFunction =
>  
> 'g_status_types_jobs($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25)
> AS stj';
> $factory = new JobSearchStatusSummaryFactory();
> }else{
> $fields = '*';
> $searchFunction =
>  
> "g_statuses_jobs($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25)";
> $factory = new JobSearchResultFactory();
> }
> $query = "SELECT
> $fields
> FROM $searchFunction";
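The two 9.2-compatible counting techniques quoted above (a conditional sum, and count over NULLIF) can be checked in a self-contained way. A sketch using Python's sqlite3 - the table name and statuses are made up for the demo; the same SQL expressions work in PostgreSQL 9.2:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (invoice_status TEXT)")
conn.executemany("INSERT INTO invoices VALUES (?)",
                 [("Paid",), ("Pending",), ("Sent",)])

# Conditional sum: Pending rows contribute 0, everything else contributes 1.
(by_sum,) = conn.execute(
    "SELECT sum(CASE WHEN invoice_status <> 'Pending' THEN 1 ELSE 0 END)"
    " FROM invoices").fetchone()

# count() skips NULLs, so NULLIF turns Pending rows into non-counted rows.
(by_count,) = conn.execute(
    "SELECT count(NULLIF(invoice_status, 'Pending')) FROM invoices").fetchone()

print(by_sum, by_count)  # 2 2
```

Both expressions exclude the Pending row, which is exactly the behaviour wanted for the invoice count.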


Re: [GENERAL] substring on bit(n) and bytea types is slow

2016-03-01 Thread Arjen Nienhuis
On Feb 29, 2016 22:26, "Evgeny Morozov" <
evgeny.morozov+list+pg...@shift-technology.com> wrote
> SELECT substring(bitarray from (32 * (n - 1) + 1) for 32) -- bitarray is
a column of type bit(6400)
> FROM array_test_bit
> JOIN generate_series(1, 1) n ON true;

Substring on a bit string is not optimized for long TOASTed values.
Substring on text is optimized for that. The current code fetches the whole
8MB from the table every time.


Re: [GENERAL] Schema Size

2016-03-01 Thread Scott Mead
On Tue, Mar 1, 2016 at 6:07 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> You should read the definitions for the functions you are using to
> retrieve the sizes.
>
> ​http://www.postgresql.org/docs/current/static/functions-admin.html​
>
> +1, you've gotta be careful with each of these, they all tend to hide
different, yet critical components of size that you may be having trouble
resolving.

 The other thing to consider is that this isn't including any on-disk space
required for your change traffic in the WAL.  Your $PGDATA will always be
larger than the sum of all your databases sizes...


> On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com  > wrote:
>
>> Hi there
>>
>> Wanna see how size a schema is in my PostgreSQL 9.2
>>
>> Got two queries - they return different values... can u please check?
>>
>> cheers;
>>
>> Query 1:
>> SELECT schema_name,
>> pg_size_pretty(sum(table_size)::bigint) as "disk space",
>> (sum(table_size) / pg_database_size(current_database())) * 100
>> as "percent"
>> FROM (
>>  SELECT pg_catalog.pg_namespace.nspname as schema_name,
>>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>>  FROM   pg_catalog.pg_class
>>  JOIN pg_catalog.pg_namespace
>>  ON relnamespace = pg_catalog.pg_namespace.oid
>> ) t
>> GROUP BY schema_name
>> ORDER BY schema_name
>>
>>
> ​​pg_relation_size: "Disk space used by the specified fork ('main', 'fsm',
> 'vm', or 'init') of the specified table or index"
>
> The 'init' fork is (I think) non-zero but extremely small.
> TOAST for a given relation is considered its own table
>
>
>> Query 2:
>> select schemaname,
>> pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
>> from pg_stat_user_tables
>> group by schemaname
>>
>
> pg_table_size: "Disk space used by the specified table, excluding indexes
> (but including TOAST, free space map, and visibility map)"
>

Personally, I'm a huge fan of 'pg_total_relation_size' which is all of
pg_table_size + indexes.  It really depends on specifically what you're
trying to count.  If you're looking for the total disk space required by
your tables in a schema, I always [personally] want to include indexes in
this count to make sure I understand the total impact on disk of accessing
my relations.


>
> David J.​
>
>


Re: [GENERAL] Schema Size

2016-03-01 Thread David G. Johnston
You should read the definitions for the functions you are using to retrieve
the sizes.

​http://www.postgresql.org/docs/current/static/functions-admin.html​

On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com 
wrote:

> Hi there
>
> Wanna see how size a schema is in my PostgreSQL 9.2
>
> Got two queries - they return different values... can u please check?
>
> cheers;
>
> Query 1:
> SELECT schema_name,
> pg_size_pretty(sum(table_size)::bigint) as "disk space",
> (sum(table_size) / pg_database_size(current_database())) * 100
> as "percent"
> FROM (
>  SELECT pg_catalog.pg_namespace.nspname as schema_name,
>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>  FROM   pg_catalog.pg_class
>  JOIN pg_catalog.pg_namespace
>  ON relnamespace = pg_catalog.pg_namespace.oid
> ) t
> GROUP BY schema_name
> ORDER BY schema_name
>
>
​​pg_relation_size: "Disk space used by the specified fork ('main', 'fsm',
'vm', or 'init') of the specified table or index"

The 'init' fork is (I think) non-zero but extremely small.
TOAST for a given relation is considered its own table


> Query 2:
> select schemaname,
> pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
> from pg_stat_user_tables
> group by schemaname
>

pg_table_size: "Disk space used by the specified table, excluding indexes
(but including TOAST, free space map, and visibility map)"

David J.​


[GENERAL] Schema Size

2016-03-01 Thread drum.lu...@gmail.com
Hi there

I want to see how large a schema is in my PostgreSQL 9.2.

I've got two queries, but they return different values... can you please check?

Cheers,

Query 1:
SELECT schema_name,
pg_size_pretty(sum(table_size)::bigint) as "disk space",
(sum(table_size) / pg_database_size(current_database())) * 100
as "percent"
FROM (
 SELECT pg_catalog.pg_namespace.nspname as schema_name,
 pg_relation_size(pg_catalog.pg_class.oid) as table_size
 FROM   pg_catalog.pg_class
 JOIN pg_catalog.pg_namespace
 ON relnamespace = pg_catalog.pg_namespace.oid
) t
GROUP BY schema_name
ORDER BY schema_name


Query 2:
select schemaname,
pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
from pg_stat_user_tables
group by schemaname


Re: [GENERAL] multiple UNIQUE indices for FK

2016-03-01 Thread Rafal Pietrak


W dniu 01.03.2016 o 20:02, Igor Neyman pisze:
[]
> 
> 
> It just occurred to me: how do I make sure (e.g. force within the database), with 
> the above structure, that a message can have *only one* sender,
> but allow for multiple recipients?
> 
> -R
> 
> 
> 
> If I'm not too late, unique partial index:
> 
> CREATE UNIQUE INDEX unique_message_sender ON message_person (message_id) WHERE 
> relationship_type = 'sender';
> 

Of course.

Apparently I'm so twisted after the numerous rewrites of my schema for
this project that I forgot the obvious.


Thanks,

-R
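One note on the partial-index idea: to guarantee a *single* sender, the unique index should cover message_id alone - including person_id would still allow two different senders for the same message. A runnable sketch of the fixed version with Python's sqlite3 (table and column names follow the thread; SQLite supports partial indexes since 3.8):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE message_person (
    message_id INTEGER,
    person_id  INTEGER,
    relationship_type TEXT
);
-- at most one sender row per message; receiver rows stay unrestricted
CREATE UNIQUE INDEX unique_message_sender
    ON message_person (message_id)
    WHERE relationship_type = 'sender';
""")

conn.execute("INSERT INTO message_person VALUES (1, 10, 'sender')")
conn.execute("INSERT INTO message_person VALUES (1, 11, 'receiver')")
conn.execute("INSERT INTO message_person VALUES (1, 12, 'receiver')")  # many receivers: fine

try:
    conn.execute("INSERT INTO message_person VALUES (1, 13, 'sender')")
    second_sender_rejected = False
except sqlite3.IntegrityError:  # the partial unique index fires
    second_sender_rejected = True

print(second_sender_rejected)  # True
```

The same CREATE UNIQUE INDEX ... WHERE statement works unchanged in PostgreSQL.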




Re: [GENERAL] commit time in logical decoding

2016-03-01 Thread Alvaro Herrera
Weiping Qu wrote:
> Hello Artur,
> 
> Thank you for your reply.
> Should it work in a stable version like PostgreSQL 9.4? That would be enough
> for me; I don't care whether it's 9.6 or 9.5.
> Nevertheless I will try it using 9.4.

Yes, it was introduced by a commit that's in 9.5 and up only, so 9.4
should behave as you expect.
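As an aside, the 1999-12-31 16:00 timestamp reported in this thread is exactly what a zeroed commit_time looks like: PostgreSQL timestamps count from an epoch of 2000-01-01 00:00 UTC, and that instant renders as 1999-12-31 16:00 in a UTC-8 session timezone (the poster's timezone is an assumption here). A quick check in plain Python:

```python
from datetime import datetime, timedelta, timezone

# PostgreSQL stores timestamps as an offset from its epoch, 2000-01-01 00:00 UTC,
# so an uninitialized (zero) commit_time decodes to exactly that instant.
pg_epoch = datetime(2000, 1, 1, tzinfo=timezone.utc)

# Rendered in a UTC-8 session timezone (assumed from the poster's output):
local = pg_epoch.astimezone(timezone(timedelta(hours=-8)))
print(local.strftime("%Y-%m-%d %H:%M"))  # 1999-12-31 16:00
```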

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] multiple UNIQUE indices for FK

2016-03-01 Thread David G. Johnston
Sorry for the delay - I'm used to getting reply-to-all on messages I send,
but you didn't, and I didn't notice the response until now.

On Mon, Feb 29, 2016 at 4:03 AM, Rafal Pietrak  wrote:

> >
> > Partitioning and partial indexes both have considerable limitations that
> > you might need to work around.  That said normalization exists for a
> > reason and having multiple "person" columns in a table is a form of
> > duplication that if left presents just the problems you are seeing.
> >
> > I suspect your SSN should fit onto the message-person table.
> >
> > The following doesn't make sense - if the SSN is sender unique then
> > there is no expectation that a receiver would not receive two messages
> > with the same SSN from different senders.
>
> I don't get it.
>
> Of course it's possible to receive two messages with the same SSN.
>
> By "sender unique" I mean, that every sender has full control of
> whatever he/she wishes to use for SSN, provided that he/she does not
> assign duplicates. It also means, that there is no relation between SSN
> assigned by different senders and collisions *should* be expected unless
> UNIQUE covers both THEM/SENDR and SSN.
>
> Thus:
> > ALTER ... msgs_to_me ADD CONSTRINT them_uniq UNIQUE (THEM,SSN);
>
>
​Yeah, mental lapse on my part.​

​​David J.


Re: [GENERAL] commit time in logical decoding

2016-03-01 Thread Weiping Qu

Hello Artur,

Thank you for your reply.
Should it work in a stable version like PostgreSQL 9.4? That would be 
enough for me; I don't care whether it's 9.6 or 9.5.

Nevertheless I will try it using 9.4.

Regards,
Weiping

On 01.03.2016 22:04, Artur Zakirov wrote:

Hello, Weiping

It seems that it is a bug. Thank you for the report. I guess it will be 
fixed soon.


On 01.03.2016 17:36, Weiping Qu wrote:

Dear postgresql general mailing list,

I am currently using the logical decoding feature (version 9.6 I think
as far as I found in the source, wal_level: logical,
max_replication_slots: > 1, track_commit_timestamp: on, I am not sure
whether this will help or not).
Following the online documentation, everything works fine until I input

SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL,
NULL, 'include-timestamp', 'on');


I always got 1999-12-31 16:00 as the commit time for arbitrary
transactions with DML statements.
After several tries, I realize that the txn->commit_time returned was
always 0.
Could you help me by indicating me what could be wrong in my case? Any
missing parameters set?

Thank you in advance,
Kind Regards,
Weiping











Re: [GENERAL] multiple UNIQUE indices for FK

2016-03-01 Thread Igor Neyman


W dniu 28.02.2016 o 03:35, David G. Johnston pisze:
> W dniu 23.02.2016 o 09:39, Rafal Pietrak pisze:
> > Can anybody suggest any other way out of this mass?
> 
> 
> ​The only thought that sticks while reading your prose is:​
> 
> ​message > message-person < person​
>  
> 
> ​message-person (message_id, person_id, relationship_type[sender, 
> receiver])


It just occurred to me: how do I make sure (e.g. force within the database), with 
the above structure, that a message can have *only one* sender,
but allow for multiple recipients?

-R



If I'm not too late, unique partial index:

CREATE UNIQUE INDEX unique_message_sender ON message_person (message_id) WHERE 
relationship_type = 'sender';



Regards,
Igor



Re: [GENERAL] commit time in logical decoding

2016-03-01 Thread Artur Zakirov

Hello, Weiping

It seems that it is a bug. Thank you for the report. I guess it will be 
fixed soon.


On 01.03.2016 17:36, Weiping Qu wrote:

Dear postgresql general mailing list,

I am currently using the logical decoding feature (version 9.6 I think
as far as I found in the source, wal_level: logical,
max_replication_slots: > 1, track_commit_timestamp: on, I am not sure
whether this will help or not).
Following the online documentation, everything works fine until I input

SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL,
NULL, 'include-timestamp', 'on');


I always got 1999-12-31 16:00 as the commit time for arbitrary
transactions with DML statements.
After several tries, I realize that the txn->commit_time returned was
always 0.
Could you help me by indicating me what could be wrong in my case? Any
missing parameters set?

Thank you in advance,
Kind Regards,
Weiping





--
Artur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




[GENERAL] Custom column ordering

2016-03-01 Thread Steven Xu
Hi all,

I'm trying to order some rows based on port names, a text column, using some
domain-specific knowledge for Netdisco, an open-source application. In
particular, I'm trying to do this without having to redo the entire design
of the database. Note that in this database there are no foreign key
constraints, mostly because they weren't considered in the original
inception of the software.

The rough idea of the solution I've come up with is to create a new
composite data type called "port" with a single text column. Then I created
the relevant comparison functions, operators and operator classes under the
"port_ops" operator family. Then I created a function "cast_to_port(text)"
that casts "text" values to "port" (which simply creates a tuple with the
single text value, see the code for details). Finally, I created an index on
the "device_port" table with "cast_to_port(port)" as the indexed column
using "port_ops".

However, when I run "select port from device_port order by
cast_to_port(port)", it doesn't use the index I created and doesn't even
order using the operators I created. Instead, it orders by the
lexicographical ordering of the original text column.

Questions:
- Why is PostgreSQL not using the functional index I created, and why is it
  not being ordered correctly?
- Is creating a separate data type and using a functional index on the casts
  to this data type the right approach to a custom ordering?

Steven

Details:

Creating the "port" type:

create type port as (f1 text);

Creating the comparator function, substituted with much shorter code as an
example (I can include the original code, but it's much shorter to provide
this). It returns -1 if the first argument is "less than" the second
argument, 0 if they are equal and 1 otherwise. In this example, "less than"
means lexicographically greater, because of the negation.

create or replace function port_cmp(port, port)
    RETURNS integer as
$$
    my ($a, $b) = @_;
    return -( $a cmp $b );  # comparison function does the opposite of cmp
$$ language plperl;

Creating the "port_ops" operators and operator classes for the "port" type:

CREATE OR REPLACE FUNCTION port_lt(port, port)
    RETURNS boolean AS
$$
BEGIN
    RETURN port_cmp($1, $2) < 0;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION port_gt(port, port)
    RETURNS boolean AS
$$
BEGIN
    RETURN port_cmp($1, $2) > 0;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION port_lte(port, port)
    RETURNS boolean AS
$$
BEGIN
    RETURN port_cmp($1, $2) <= 0;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION port_gte(port, port)
    RETURNS boolean AS
$$
BEGIN
    RETURN port_cmp($1, $2) >= 0;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION port_eq(port, port)
    RETURNS boolean AS
$$
BEGIN
    RETURN port_cmp($1, $2) = 0;
END;
$$ LANGUAGE plpgsql;

CREATE OPERATOR <  ( PROCEDURE=port_lt,  LEFTARG=port, RIGHTARG=port);
CREATE OPERATOR <= ( PROCEDURE=port_lte, LEFTARG=port, RIGHTARG=port);
CREATE OPERATOR >= ( PROCEDURE=port_gte, LEFTARG=port, RIGHTARG=port);
CREATE OPERATOR >  ( PROCEDURE=port_gte, LEFTARG=port, RIGHTARG=port);
CREATE OPERATOR =  ( PROCEDURE=port_eq,  LEFTARG=port, RIGHTARG=port);

CREATE OPERATOR CLASS port_ops DEFAULT
FOR TYPE port USING btree AS
    OPERATOR    1   <  ,
    OPERATOR    2   <= ,
    OPERATOR    3   =  ,
    OPERATOR    4   >= ,
    OPERATOR    5   >  ,
    FUNCTION    1   port_cmp(port, port);

Creating the function to cast "text" data to "port" data:

CREATE OR REPLACE FUNCTION cast_to_port(text) RETURNS port LANGUAGE sql
AS $function$
    SELECT ($1);
$function$ IMMUTABLE;

Creating the index:

CREATE INDEX idx_device_port_port_proper
    ON device_port (cast_to_port(port) port_ops desc);

Sample table structure, shortened for brevity:

      Table "public.device_port"
 Column | Type | Modifiers
--------+------+-----------
 ip     | inet | not null
 port   | text | not null



Re: [GENERAL] Does RAISE EXCEPTION rollback previous commands in a stored function?

2016-03-01 Thread Pavel Stehule
Hi

2016-03-01 19:41 GMT+01:00 Alexander Farber :

> Good evening,
>
> in PostgreSQL 9.5 does RAISE EXCEPTION reliably rollback all previous
> commands in a stored function?
>
> I have a stored function (the code is at the bottom), which takes a JSON
> array of objects as arguments.
>
> First it prepares some data and then loops through the JSON array and
> upserts the objects into a table.
>
> However if any of the objects fails an authenticity check (using md5 +
> some secret string) - I would like to rollback everything.
>
> Since I can not use START TRANSACTION in a stored function, I wonder if
> another loop should be added at the very beginning - or if I can just use
> the one I already have at the end.
>

A transaction is started implicitly when you run an SQL statement.

Pavel
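The all-or-nothing behaviour Pavel describes can be illustrated outside PostgreSQL too. A sketch with Python's sqlite3, where an exception raised mid-transaction discards the earlier insert, analogous to RAISE EXCEPTION aborting everything the function did inside the calling statement's transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE words_social (sid TEXT)")
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on exception
        conn.execute("INSERT INTO words_social VALUES ('abc')")
        raise RuntimeError("authenticity check failed")  # stand-in for RAISE EXCEPTION
except RuntimeError:
    pass

(remaining,) = conn.execute("SELECT count(*) FROM words_social").fetchone()
print(remaining)  # 0 - the insert was undone along with everything else
```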


Re: [GENERAL] Does RAISE EXCEPTION rollback previous commands in a stored function?

2016-03-01 Thread Andreas Kretschmer


> Alexander Farber  hat am 1. März 2016 um 19:41
> geschrieben:
> 
> 
> Good evening,
> 
> in PostgreSQL 9.5 does RAISE EXCEPTION reliably rollback all previous
> commands in a stored function?


Yes.




[GENERAL] Does RAISE EXCEPTION rollback previous commands in a stored function?

2016-03-01 Thread Alexander Farber
Good evening,

in PostgreSQL 9.5 does RAISE EXCEPTION reliably rollback all previous
commands in a stored function?

I have a stored function (the code is at the bottom), which takes a JSON
array of objects as arguments.

First it prepares some data and then loops through the JSON array and
upserts the objects into a table.

However if any of the objects fails an authenticity check (using md5 + some
secret string) - I would like to rollback everything.

Since I can not use START TRANSACTION in a stored function, I wonder if
another loop should be added at the very beginning - or if I can just use
the one I already have at the end.

Thank you
Alex

CREATE OR REPLACE FUNCTION words_merge_users(
IN in_users jsonb,
IN in_ip inet,
OUT out_uid integer)
RETURNS integer AS
$func$
DECLARE
j jsonb;
uids integer[];
new_vip timestamptz;
new_grand timestamptz;
new_banned timestamptz;
new_reason varchar(255);
BEGIN
uids := (
SELECT ARRAY_AGG(uid)
FROM words_social
JOIN JSONB_ARRAY_ELEMENTS(in_users) x
ON sid = x->>'sid'
AND social = (x->>'social')::int
);

RAISE NOTICE 'uids = %', uids;

SELECT
MIN(uid),
CURRENT_TIMESTAMP + SUM(vip_until - CURRENT_TIMESTAMP),
CURRENT_TIMESTAMP + SUM(grand_until - CURRENT_TIMESTAMP),
MAX(banned_until)
INTO
out_uid,
new_vip,
new_grand,
new_banned
FROM words_users
WHERE uid = ANY(uids);

RAISE NOTICE 'out_uid = %', out_uid;
RAISE NOTICE 'new_vip = %', new_vip;
RAISE NOTICE 'new_grand = %', new_grand;
RAISE NOTICE 'new_banned = %', new_banned;

IF out_uid IS NULL THEN
INSERT INTO words_users (
created,
visited,
ip,
medals,
green,
red
) VALUES (
CURRENT_TIMESTAMP,
CURRENT_TIMESTAMP,
in_ip,
0,
0,
0
) RETURNING uid INTO out_uid;
ELSE
SELECT banned_reason
INTO new_reason
FROM words_users
WHERE banned_until = new_banned
LIMIT 1;

RAISE NOTICE 'new_reason = %', new_reason;

UPDATE words_social
SET uid = out_uid
WHERE uid = ANY(uids);

DELETE FROM words_users
WHERE uid <> out_uid
AND uid = ANY(uids);

UPDATE words_users SET
visited = CURRENT_TIMESTAMP,
ip = in_ip,
vip_until = new_vip,
grand_until = new_grand,
banned_until = new_banned,
banned_reason = new_reason
WHERE uid = out_uid;

END IF;

FOR j IN SELECT * FROM JSONB_ARRAY_ELEMENTS(in_users)
LOOP

  -- XXX will RAISE EXCEPTION here reliably rollback everything? XXX

  UPDATE words_social SET
social = (j->>'social')::int,
female = (j->>'female')::int,
given  = j->>'given',
family = j->>'family',
photo  = j->>'photo',
place  = j->>'place',
stamp  = (j->>'stamp')::int,
uid    = out_uid

WHERE sid = j->>'sid' AND social = (j->>'social')::int;

IF NOT FOUND THEN
INSERT INTO words_social (
sid,
social,
female,
given,
family,
photo,
place,
stamp,
uid
) VALUES (
j->>'sid',
(j->>'social')::int,
(j->>'female')::int,
j->>'given',
j->>'family',
j->>'photo',
j->>'place',
(j->>'stamp')::int,
out_uid
);
END IF;
END LOOP;
END
$func$ LANGUAGE plpgsql;
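Regarding the XXX question in the loop: a PL/pgSQL function always runs inside the caller's transaction, so an unhandled RAISE EXCEPTION aborts the whole statement and undoes everything the function has done so far. A minimal sketch (hypothetical table `t` and function name, just to illustrate the behavior):

```sql
-- An unhandled exception inside a PL/pgSQL function rolls back every
-- change the function made, because the function runs within the
-- caller's transaction.
CREATE TABLE t (n integer);

CREATE OR REPLACE FUNCTION demo_rollback() RETURNS void AS
$$
BEGIN
    INSERT INTO t VALUES (1);
    INSERT INTO t VALUES (2);
    RAISE EXCEPTION 'authenticity check failed';
END
$$ LANGUAGE plpgsql;

SELECT demo_rollback();   -- ERROR:  authenticity check failed
SELECT count(*) FROM t;   -- 0: both inserts were rolled back
```

So a plain RAISE EXCEPTION inside the existing loop should be sufficient, as long as no caller traps the exception and swallows it.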


Re: [GENERAL] Rules on View

2016-03-01 Thread Tom Lane
Sridhar N Bamandlapally  writes:
> Is there a way to avoid creating a rule when creating a view?

If you mean the ON SELECT rule, no.  A view basically *is* an ON SELECT
rule; there's not very much else to it.  What usefulness do you imagine
you'd get from a view without ON SELECT?
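This is visible directly in the catalogs: a view's SELECT behavior is stored as a rewrite rule named `_RETURN`. A quick sketch against a scratch database (table and view names are made up):

```sql
CREATE TABLE base (a integer);
CREATE VIEW v AS SELECT a FROM base;

-- Every view has exactly one ON SELECT rule, always named _RETURN
-- (ev_type '1' means SELECT).  The pg_rules system view deliberately
-- hides _RETURN rules, so query pg_rewrite directly.
SELECT rulename, ev_type
FROM pg_rewrite
WHERE ev_class = 'v'::regclass;
-- rulename | ev_type
-- ---------+---------
-- _RETURN  | 1
```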

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Anyone compare PG 9.5 and MongoDB 3.2?

2016-03-01 Thread pbj
Very helpful!!  Thanks!!
 

On Tuesday, March 1, 2016 9:32 AM, Peter Devoy  wrote:
 

> > MongoDB has released 3.2 with their WiredTiger storage.  Has anyone
> > benchmarked 9.5 against it, and for JSONB elements several MB in size?
> >
> > PJ
>
> Hi Paul
>
> I do not have an answer for you but there is a great talk here in
> which someone explains why they moved from a NoSQL stack to Postgres:
> https://www.youtube.com/watch?v=UgcC_bY4rPg
>
> If I recall correctly JSON functionality was touched upon but, if you
> have not seen it, the whole talk is worth a watch.
>
> Hope this helps in some way.
>
> Kind regards
>
>
> Peter Devoy




  

[GENERAL] commit time in logical decoding

2016-03-01 Thread Weiping Qu

Dear postgresql general mailing list,

I am currently using the logical decoding feature (version 9.6, I think, as
far as I can tell from the source; wal_level: logical,
max_replication_slots: > 1, track_commit_timestamp: on, though I am not
sure whether that last setting helps here).

Following the online documentation, everything works fine until I input

SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, 
NULL, 'include-timestamp', 'on');



I always got 1999-12-31 16:00 as the commit time for arbitrary
transactions with DML statements.
After several tries, I realized that the txn->commit_time returned was
always 0.
Could you tell me what could be wrong in my case? Any missing parameter
settings?
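For reference, the sequence follows the test_decoding example in the documentation (slot and table names are taken from that example):

```sql
-- Set-up per the docs' test_decoding example (assumes wal_level = logical
-- and max_replication_slots > 0 are already configured):
SELECT * FROM pg_create_logical_replication_slot('regression_slot',
                                                 'test_decoding');

CREATE TABLE data (id serial PRIMARY KEY, data text);
INSERT INTO data (data) VALUES ('1');

-- With 'include-timestamp' the COMMIT line should carry the real commit
-- time, e.g. "COMMIT (at 2016-03-01 ...)".  A zero txn->commit_time shows
-- up as the timestamp epoch, 2000-01-01 UTC (1999-12-31 16:00 in US
-- Pacific time), which matches what I am seeing.
SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL,
                                           'include-timestamp', 'on');
```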


Thank you in advance,
Kind Regards,
Weiping




Re: [GENERAL] Anyone compare PG 9.5 and MongoDB 3.2?

2016-03-01 Thread Peter Devoy
> MongoDB has released 3.2 with their WiredTiger storage.  Has anyone
> benchmarked 9.5 against it, and for JSONB elements several MB in size?
>
> PJ

Hi Paul

I do not have an answer for you but there is a great talk here in
which someone explains why they moved from a NoSQL stack to Postgres:
https://www.youtube.com/watch?v=UgcC_bY4rPg

If I recall correctly JSON functionality was touched upon but, if you
have not seen it, the whole talk is worth a watch.

Hope this helps in some way.

Kind regards


Peter Devoy




[GENERAL] Anyone compare PG 9.5 and MongoDB 3.2?

2016-03-01 Thread Paul Jones
MongoDB has released 3.2 with their WiredTiger storage.  Has anyone
benchmarked 9.5 against it, and for JSONB elements several MB in size?

PJ




[GENERAL] Rules on View

2016-03-01 Thread Sridhar N Bamandlapally
Hi

Is there a way to avoid creating a rule when creating a view?

please let me know

Thanks
Sridhar