[GENERAL] Streaming replication, master recycling

2016-05-10 Thread fredrik
Hi All,

we are currently using streaming replication on multiple node pairs. We are
seeing some issues, but I am mainly interested in clarification.

When a failover occurs, we touch the trigger file, promoting the previous slave 
to master. That works perfectly.

For recycling the previous master, we create a recovery.conf (with 
recovery_target_timeline = 'latest') and *try* to start up. If postgresql 
starts up, we accept it as a new slave. If it does not, we proceed with a full 
basebackup.
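For concreteness, the recovery.conf we create looks roughly like this (a 9.x-style sketch; the host, port, and user are placeholders, not from our actual setup):

```
# recovery.conf placed on the old master before attempting startup
standby_mode = 'on'
primary_conninfo = 'host=new-master port=5432 user=replicator'
recovery_target_timeline = 'latest'
```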

This approach seems to work, but I have found indications that it can lead to 
database corruption: 


I am mainly seeking to understand whether, and why, this approach is a bad idea.

Thanks,
Fredrik Huitfeldt



Re: [GENERAL] Increased I/O / Writes

2016-05-10 Thread Sergey Konoplev
On Sun, May 8, 2016 at 7:07 PM, drum.lu...@gmail.com
 wrote:
> Is there a way to detect the queries that are causing that?
>
> I can use pg_stat_statements to get the most-used queries, but I was 
> wondering how I can find the queries that are causing that much IO?

Take a look at this tool:

https://github.com/grayhemp/pgcookbook/blob/master/statement_statistics_collecting_and_reporting.md

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
https://github.com/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979
gray...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
On 11 May 2016 at 09:50, Melvin Davidson  wrote:

> My bad,  WHERE indexrelname = ' {YOUR INDEX NAME } ';
>
>

Oh! ok...

public ja_feedlog ix_ja_feedlog_visitids 1 94 GB 1939 MB 0 CREATE INDEX
"ix_ja_feedlog_visitids" ON "ja_feedlog" USING "btree" ("visitid")

public ja_feedlog ix_ja_feedlog_visitids 1 94 GB 1939 MB 0 CREATE INDEX
"ix_ja_feedlog_visitids" ON "ja_feedlog" USING "btree" ("visitid")


Two rows for the same index.
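One possible explanation for the duplicate rows (an assumption, not confirmed in the thread): the suggested query joins pg_stat_user_indexes, pg_indexes, and pg_stat_user_tables on unqualified names, so a table or index name that exists in more than one schema matches more than once. Qualifying the joins by schema gives one row per index:

```sql
-- Sketch: schema-qualified joins to avoid duplicated rows per index
SELECT idstat.schemaname, idstat.relname, idstat.indexrelname, idstat.idx_scan
FROM pg_stat_user_indexes AS idstat
JOIN pg_indexes AS idx
  ON idx.schemaname = idstat.schemaname
 AND idx.indexname  = idstat.indexrelname
JOIN pg_stat_user_tables AS tabstat
  ON tabstat.schemaname = idstat.schemaname
 AND tabstat.relname    = idstat.relname
WHERE idstat.indexrelname = 'ix_ja_feedlog_visitids';
```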


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Melvin Davidson
My bad,  WHERE indexrelname = ' {YOUR INDEX NAME } ';


If you put the name of your index, you should get back stats for it. What
are those stats?

On Tue, May 10, 2016 at 5:47 PM, Lucas Possamai 
wrote:

>
>> And what happens if you run this query?
>>
>> SELECT idstat.schemaname AS schema,
>>idstat.relname AS table_name,
>>indexrelname AS index_name,
>>idstat.idx_scan AS times_used,
>>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
>> '.' || quote_ident(idstat.relname))) AS table_size,
>>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
>> '.' || quote_ident(indexrelname))) AS index_size,
>>n_tup_upd + n_tup_ins + n_tup_del as num_writes,
>>indexdef AS definition
>> FROM pg_stat_user_indexes AS idstat
>> JOIN pg_indexes ON indexrelname = indexname
>> JOIN pg_stat_user_tables AS tabstat ON idstat.relname = tabstat.relname
>> WHERE indexrelname = ' {YOUR QUERY NAME } ';
>>
>>
> Sorry.. not sure what I should put into the WHERE clause  ..
>
> But, taking off the WHERE it returns me 600 rows
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
>
>
> And what happens if you run this query?
>
> SELECT idstat.schemaname AS schema,
>idstat.relname AS table_name,
>indexrelname AS index_name,
>idstat.idx_scan AS times_used,
>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
> '.' || quote_ident(idstat.relname))) AS table_size,
>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
> '.' || quote_ident(indexrelname))) AS index_size,
>n_tup_upd + n_tup_ins + n_tup_del as num_writes,
>indexdef AS definition
> FROM pg_stat_user_indexes AS idstat
> JOIN pg_indexes ON indexrelname = indexname
> JOIN pg_stat_user_tables AS tabstat ON idstat.relname = tabstat.relname
> WHERE indexrelname = ' {YOUR QUERY NAME } ';
>
>
Sorry.. not sure what I should put into the WHERE clause  ..

But, taking off the WHERE it returns me 600 rows


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Melvin Davidson
On Tue, May 10, 2016 at 5:23 PM, Lucas Possamai 
wrote:

>
>>>
>> >Some time ago I changed the pg_stat_temp directory from
>> /var/lib/pgsq/whatever to /tmp
>> Have you checked the postgres log to see if there are any errors about it
>> not being able to write to the pg_stat_temp dir?
>>
>>
> Yep no errors =\
>

And what happens if you run this query?

SELECT idstat.schemaname AS schema,
   idstat.relname AS table_name,
   indexrelname AS index_name,
   idstat.idx_scan AS times_used,
   pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
'.' || quote_ident(idstat.relname))) AS table_size,
   pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
'.' || quote_ident(indexrelname))) AS index_size,
   n_tup_upd + n_tup_ins + n_tup_del as num_writes,
   indexdef AS definition
FROM pg_stat_user_indexes AS idstat
JOIN pg_indexes ON indexrelname = indexname
JOIN pg_stat_user_tables AS tabstat ON idstat.relname = tabstat.relname
WHERE indexrelname = ' {YOUR QUERY NAME } ';


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
>
>
>>
> >Some time ago I changed the pg_stat_temp directory from
> /var/lib/pgsq/whatever to /tmp
> Have you checked the postgres log to see if there are any errors about it
> not being able to write to the pg_stat_temp dir?
>
>
Yep no errors =\


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Melvin Davidson
On Tue, May 10, 2016 at 5:17 PM, Lucas Possamai 
wrote:

>
>> Sorry, I was too busy looking at the content.
>>
>> Has the size / # rows changed recently? If the planner thinks it can load
>> all the rows faster, it will use a seqscan  regardless if you have an index.
>>
>> If that is the case, you can force index use by doing a
>>
>> SET enable_seqscan = off
>>
>> before executing the query.
>>
>
> Hmm... ok... but the situation is:
>
> 1 - I dropped the index
> 2 - Found a very slow query
> 3 - The "WHERE" clause was using the index that I've just dropped
> 4 - I ran the query in my test environment (Same DB as prod) with explain
> analyze to see if the query was indeed using the index I've dropped
> 5 - Yes, the query was using the index
> 6 - re-created the index
>
> 7 - The total time went from 2000ms to 200ms
>
> So, I don't think the index was indeed not being used.
> I believe the stats are not working, just don't know how to confirm that,
> as I have nothing on my logs
>

>Some time ago I changed the pg_stat_temp directory from
/var/lib/pgsq/whatever to /tmp
Have you checked the postgres log to see if there are any errors about it
not being able to write to the pg_stat_temp dir?

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
>
>
> Sorry, I was too busy looking at the content.
>
> Has the size / # rows changed recently? If the planner thinks it can load
> all the rows faster, it will use a seqscan  regardless if you have an index.
>
> If that is the case, you can force index use by doing a
>
> SET enable_seqscan = off
>
> before executing the query.
>

Hmm... ok... but the situation is:

1 - I dropped the index
2 - Found a very slow query
3 - The "WHERE" clause was using the index that I've just dropped
4 - I ran the query in my test environment (Same DB as prod) with explain
analyze to see if the query was indeed using the index I've dropped
5 - Yes, the query was using the index
6 - re-created the index

7 - The total time went from 2000ms to 200ms

So, I don't think the index was indeed not being used.
I believe the stats are not working, just don't know how to confirm that,
as I have nothing on my logs


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Melvin Davidson
On Tue, May 10, 2016 at 5:06 PM, Lucas Possamai 
wrote:

>
>>>
>> My crystal ball is not working,  you have a PostgreSQL version?
>>
>
> Maybe you should have a look on the subject of this email...
>
>
>>
>> in postgresql.conf are track_activities and track_counts both on?
>>
>
> yes
>
>
>>
>> Did you ANALYZE the table after you re-added the index?
>>
>
> Yes
>

>Maybe you should have a look on the subject of this email...
Sorry, I was too busy looking at the content.

Has the size / # rows changed recently? If the planner thinks it can load
all the rows faster, it will use a seqscan  regardless if you have an index.

If that is the case, you can force index use by doing a

SET enable_seqscan = off

before executing the query.
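That check could look like this (the table and predicate are taken from earlier in the thread, only to illustrate the workflow):

```sql
EXPLAIN ANALYZE SELECT * FROM ja_feedlog WHERE visitid = 42;  -- baseline plan
SET enable_seqscan = off;  -- session-local; planner avoids seq scans when it can
EXPLAIN ANALYZE SELECT * FROM ja_feedlog WHERE visitid = 42;  -- should now pick the index
RESET enable_seqscan;      -- restore the default for the session
```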


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
On 11 May 2016 at 09:06, Lucas Possamai  wrote:

>
>>>
>> My crystal ball is not working,  you have a PostgreSQL version?
>>
>
> Maybe you should have a look on the subject of this email...
>
>
>>
>> in postgresql.conf are track_activities and track_counts both on?
>>
>
> yes
>
>
>>
>> Did you ANALYZE the table after you re-added the index?
>>
>
> Yes
>


Also.. just to let you guys know:

Some time ago I changed the pg_stat_temp directory from
/var/lib/pgsq/whatever to /tmp


postgresql.conf:

> stats_temp_directory = '/tmp/pg_stat_tmp'

I tested it and it's "working":

> #su - postgres
> #cd /tmp/pg_stat_tmp
> #touch test.html


ls -la /tmp/pg_stat_tmp:

> -rw---  1 postgres postgres 263110 May 10 21:12 pgstat.stat


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Melvin Davidson
On Tue, May 10, 2016 at 4:40 PM, Lucas Possamai 
wrote:

> Hi all,
>
> I ran a query to search for unused indexes, and get some free space in my
> DB:
>
> SELECT
>> --*,
>> relid::regclass AS table,
>> indexrelid::regclass AS index,
>> --pg_size_pretty(pg_relation_size(indexrelid::regclass)) AS
>> index_size,
>> pg_relation_size(indexrelid::regclass) AS index_size,
>> idx_tup_read,
>> idx_tup_fetch,
>> idx_scan
>> FROM
>> pg_stat_user_indexes
>> JOIN pg_index USING (indexrelid)
>> WHERE
>> idx_scan = 0
>> AND indisunique IS FALSE;
>
>
> The query returns the columns:
> idx_tup_read,
>  idx_tup_fetch,
>  idx_scan
>
> *What I did was:*
>
> 1 - Run the query above
> 2 - select one index and drop it
> 3 - Found some slow queries... When I saw it, the query was using one of
> the index I've dropped.
> 4 - Re-created the index
> 5 - Ran the query with explain analyze (The query was indeed hitting the
> index)
> 6 - re-ran the first query above, and still.. the index wasn't being used
> from those statistics
> 7 - ?
>
>
> So, my problem is: the statistics are not running? What happened to the
> statistics?
>
> Do you guys know how I can update the stats?
>

My crystal ball is not working,  you have a PostgreSQL version?

in postgresql.conf are track_activities and track_counts both on?

Did you ANALYZE the table after you re-added the index?


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
Hi all,

I ran a query to search for unused indexes, and get some free space in my
DB:

SELECT
> --*,
> relid::regclass AS table,
> indexrelid::regclass AS index,
> --pg_size_pretty(pg_relation_size(indexrelid::regclass)) AS
> index_size,
> pg_relation_size(indexrelid::regclass) AS index_size,
> idx_tup_read,
> idx_tup_fetch,
> idx_scan
> FROM
> pg_stat_user_indexes
> JOIN pg_index USING (indexrelid)
> WHERE
> idx_scan = 0
> AND indisunique IS FALSE;


The query returns the columns:
idx_tup_read,
 idx_tup_fetch,
 idx_scan

*What I did was:*

1 - Run the query above
2 - select one index and drop it
3 - Found some slow queries... When I saw it, the query was using one of
the index I've dropped.
4 - Re-created the index
5 - Ran the query with explain analyze (The query was indeed hitting the
index)
6 - re-ran the first query above, and still.. the index wasn't being used
from those statistics
7 - ?


So, my problem is: the statistics are not running? What happened to the
statistics?

Do you guys know how I can update the stats?
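Not answered directly in the thread, but two checks that may help in this situation (a suggestion, not from the original posts): confirm the statistics collector is on, and optionally zero the counters so fresh activity becomes visible:

```sql
SHOW track_counts;          -- must be 'on' for idx_scan to be maintained
SHOW stats_temp_directory;  -- where the collector writes its temp stats file
SELECT pg_stat_reset();     -- reset all statistics counters for the current database
```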


Re: [GENERAL] Columnar store as default for PostgreSQL 10?

2016-05-10 Thread Bruce Momjian
On Mon, Apr 25, 2016 at 11:20:11AM -0300, Alvaro Herrera wrote:
> In our design, columnar or not is going to be an option: you're going to
> be able to say "Dear server, for this table kindly set up columnar
> storage for me, would you? Thank you very much." And then you’re going
> to get a table which may be slower for regular usage but which will rock
> for analytics. For most of your tables the current row-based store will
> still likely be the best option, because row-based storage is much
> better suited to the more general cases.

I am coming late to this thread, but one item not discussed about
columnar storage is the use of compression of identical column values
across rows.  Existing Postgres storage only compresses single values,
not values across rows.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +




Re: [GENERAL] Increased I/O / Writes

2016-05-10 Thread Melvin Davidson
 

On Tue, May 10, 2016 at 1:31 PM, Joshua D. Drake 
wrote:

> On 05/10/2016 09:50 AM, Igor Neyman wrote:
>
> Please, if anyone can share anything.. Thanks a lot!
>>
>> Lucas
>>
>> So, what’s wrong with using pg_stat_statements?
>>
>> It has a set of columns pertaining to IO.
>>
>>
> You could use iotop to determine which postgres pid is eating the IO, then
> use statement logging with PID (or pg_stat_activity/statement) to see what
> the system is actually doing.
>
> JD
>
>
> Regards,
>>
>> Igor Neyman
>>
>>
>
> --
> Command Prompt, Inc.  http://the.postgres.company/
> +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> Everyone appreciates your honesty, until you are honest with them.
>
>
>

I highly recommend you look into using PgBadger. It gives you a great deal
of info about your queries, including I/O, above and beyond pg_stats.


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Increased I/O / Writes

2016-05-10 Thread Joshua D. Drake

On 05/10/2016 09:50 AM, Igor Neyman wrote:


Please, if anyone can share anything.. Thanks a lot!

Lucas

So, what’s wrong with using pg_stat_statements?

It has a set of columns pertaining to IO.



You could use iotop to determine which postgres pid is eating the IO, 
then use statement logging with PID (or pg_stat_activity/statement) to 
see what the system is actually doing.
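The statement logging suggested here might be configured like this in postgresql.conf (values are illustrative):

```
log_line_prefix = '%t [%p] '        # timestamp and backend PID on every log line
log_min_duration_statement = 1000   # log statements that run longer than 1s
```

The %p PID can then be matched against the process iotop reports as the I/O consumer.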


JD



Regards,

Igor Neyman




--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [GENERAL] Increased I/O / Writes

2016-05-10 Thread Igor Neyman

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of drum.lu...@gmail.com
Sent: Sunday, May 08, 2016 10:07 PM
To: Postgres General 
Subject: [GENERAL] Increased I/O / Writes

Hi all,

I've recently seen an increase in write I/O on my DB. I'm currently using 
PostgreSQL 9.2.
[Inline images 1]

I know that many improvements can be made (as I'm using SATA disks), but my 
question is:

Is there a way to detect the queries that are causing that?

I can use pg_stat_statements to get the most-used queries, but I was wondering 
how I can find the queries that are causing that much IO?

Please, if anyone can share anything.. Thanks a lot!
Lucas


So, what’s wrong with using pg_stat_statements?
It has a set of columns pertaining to IO.
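For example, the I/O columns in pg_stat_statements can be pulled out like this (a sketch; blk_write_time is only populated when track_io_timing is on, which 9.2 supports):

```sql
-- Top statements by blocks written (shared + temp)
SELECT query,
       calls,
       shared_blks_written,
       temp_blks_written,
       blk_write_time   -- ms spent writing blocks; needs track_io_timing = on
FROM pg_stat_statements
ORDER BY shared_blks_written + temp_blks_written DESC
LIMIT 10;
```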

Regards,
Igor Neyman



Re: [GENERAL] Postgres RFC3339 datetime formatting

2016-05-10 Thread John McKown
Cheat?

# select translate(to_char(current_timestamp, 'YYYY-MM-DD HH:MI:SS.MSOF'), ' ', 'T');
         translate
----------------------------
 2016-05-10T11:42:20.479-05
(1 row)

Just translate the blank in the result of to_char() to a T.

On Sun, May 8, 2016 at 3:59 AM, Jasim Mohd  wrote:

> Is there any way to format datetime to RFC3339Nano Eg:
> 2006-01-02T15:04:05.9Z07:00 in postgres 9.3 or 9.5?
>
> I tried with to_char. But there is no documentation how to handle T, Z,
> +07:00, -07:00 etc.
>
> The nearest one I can reach is
>
> v2=# select to_char(current_timestamp, 'YYYY-MM-DD HH:MI:SS.MSOF');
>           to_char
> ----------------------------
>  2016-05-08 12:16:14.493+04
>
> Which is default JSON output format datetime in postgres 9.3. Please see
> below.
>
> psql (9.5.1, server 9.3.6)
> Type "help" for help.
>
> fetchrdb=> select to_json(current_timestamp);
>              to_json
> --------------------------------
>  "2016-05-08 11:58:04.844548+04"
> (1 row)
>
> In the case of JSON encoded output from postgres 9.5 is RFC3339Nano eg:
>
> psql (9.5.1)
> Type "help" for help.
>
> v2=# select to_json(current_timestamp);
>               to_json
> -----------------------------------
>  "2016-05-08T11:59:17.121411+04:00"
> (1 row)
>
> I couldn't find an option to format datetime to RFC3339Nano in postgres 9.3
> or 9.5 using to_char.
>
> http://www.postgresql.org/docs/9.5/static/functions-formatting.html
>
> Is there any hidden option/functions you use to achieve the same? Any help
> regarding is appreciated.
>
> --
> Thanks & Regards
> skype: jasim.mk
>



-- 
The unfacts, did we have them, are too imprecisely few to warrant our
certitude.

Maranatha! <><
John McKown


Re: [GENERAL] Postgres RFC3339 datetime formatting

2016-05-10 Thread Albe Laurenz
Jasim Mohd wrote:
> Is there any way to format datetime to RFC3339Nano Eg: 
> 2006-01-02T15:04:05.9Z07:00 in postgres
> 9.3 or 9.5?
> 
> I tried with to_char. But there is no documentation how to handle T, Z, 
> +07:00, -07:00 etc.

The best I can get is:

SELECT to_char(current_timestamp, 'FXYYYY-MM-DD"T"HH:MI:SS.US"000Z"OF');

but it will suppress the minute part of the time zone offset if it is 0.

I don't think you can get nanosecond precision in PostgreSQL.

Is that good enough?

Yours,
Laurenz Albe



[GENERAL] Build postgresql

2016-05-10 Thread Roman Khalupa
Hello postgresql team!
I have question about building postgresql. And here it is: how to build
postgresql statically on windows to get static libs?

Thanks


[GENERAL] Postgres RFC3339 datetime formatting

2016-05-10 Thread Jasim Mohd
Is there any way to format datetime to RFC3339Nano Eg:
2006-01-02T15:04:05.9Z07:00 in postgres 9.3 or 9.5?

I tried with to_char. But there is no documentation how to handle T, Z,
+07:00, -07:00 etc.

The nearest one I can reach is

v2=# select to_char(current_timestamp, 'YYYY-MM-DD HH:MI:SS.MSOF');
          to_char
----------------------------
 2016-05-08 12:16:14.493+04

Which is default JSON output format datetime in postgres 9.3. Please see
below.

psql (9.5.1, server 9.3.6)
Type "help" for help.

fetchrdb=> select to_json(current_timestamp);
             to_json
--------------------------------
 "2016-05-08 11:58:04.844548+04"
(1 row)

In the case of JSON encoded output from postgres 9.5 is RFC3339Nano eg:

psql (9.5.1)
Type "help" for help.

v2=# select to_json(current_timestamp);
              to_json
-----------------------------------
 "2016-05-08T11:59:17.121411+04:00"
(1 row)

I couldn't find an option to format datetime to RFC3339Nano in postgres 9.3
or 9.5 using to_char.

http://www.postgresql.org/docs/9.5/static/functions-formatting.html

Is there any hidden option/functions you use to achieve the same? Any help
regarding is appreciated.

-- 
Thanks & Regards
skype: jasim.mk


Re: [GENERAL] Using both ident and password in pg_hba.conf

2016-05-10 Thread D'Arcy J.M. Cain
On Tue, 10 May 2016 10:46:39 -0400
George Neuner  wrote:
> On Linux (or Unix) you'd set up a forwarding record in iptables that
> redirects a second port to Postgresql.

Forwarding to a different host is bad enough without explaining
different ports.  I think my idea will do the job cleanly, or at least
as cleanly as I can expect.

> I don't know offhand a way to do that on Windows, but I presume that
> it is possible.

Luckily I could care less about Windows or Linux for that matter.  Unix
is all I run on my servers.

-- 
D'Arcy J.M. Cain  |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 788 2246 (DoD#0082)(eNTP)   |  what's for dinner.
IM: da...@vex.net, VoIP: sip:da...@druid.net




Re: [GENERAL] Using both ident and password in pg_hba.conf

2016-05-10 Thread George Neuner
On Tue, 10 May 2016 09:50:10 -0400, "D'Arcy J.M. Cain"
 wrote:

>On Mon, 09 May 2016 18:15:16 -0400
>Tom Lane  wrote:
>> > I did think of that but how do I define that in pg_hba?  The host
>> > field only specifies the remote IP, not the local one.
>> 
>> Right, but you'd be using it essentially as a loopback interface.
>> Say you set it up as 192.168.0.42 --- you'd tell PHP to connect to
>> Postgres on 192.168.0.42, and Postgres would also see the PHP
>> connections as coming in from 192.168.0.42.
>
>Can you expand on this?  I can't seem to get my head around it.  How
>does the client make it look like it is coming from this ersatz
>loopback IP?  In fact, I don't even need to add this to pg_hba since
>anything outside of my trusted IPs requires a password

On Linux (or Unix) you'd set up a forwarding record in iptables that
redirects a second port to Postgresql.

http://www.cyberciti.biz/faq/linux-port-redirection-with-iptables/

I don't know offhand a way to do that on Windows, but I presume that
it is possible.

George





Re: [GENERAL] Using both ident and password in pg_hba.conf

2016-05-10 Thread D'Arcy J.M. Cain
On Tue, 10 May 2016 09:50:10 -0400
"D'Arcy J.M. Cain"  wrote:
> Can you expand on this?  I can't seem to get my head around it.  How
> does the client make it look like it is coming from this ersatz
> loopback IP?  In fact, I don't even need to add this to pg_hba since
> anything outside of my trusted IPs requires a password

Ignore that last sentence.  It was meant for the following paragraph.

-- 
D'Arcy J.M. Cain  |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 788 2246 (DoD#0082)(eNTP)   |  what's for dinner.
IM: da...@vex.net, VoIP: sip:da...@druid.net




Re: [GENERAL] Using both ident and password in pg_hba.conf

2016-05-10 Thread D'Arcy J.M. Cain
On Mon, 09 May 2016 18:15:16 -0400
Tom Lane  wrote:
> > I did think of that but how do I define that in pg_hba?  The host
> > field only specifies the remote IP, not the local one.
> 
> Right, but you'd be using it essentially as a loopback interface.
> Say you set it up as 192.168.0.42 --- you'd tell PHP to connect to
> Postgres on 192.168.0.42, and Postgres would also see the PHP
> connections as coming in from 192.168.0.42.

Can you expand on this?  I can't seem to get my head around it.  How
does the client make it look like it is coming from this ersatz
loopback IP?  In fact, I don't even need to add this to pg_hba since
anything outside of my trusted IPs requires a password

I did consider creating another private network (I already have one for
internal communications) so the web server would alias 192.168.100.75
to the interface that 192.168.151.75 is on, and the database would do
the same for its IP.  Now I can trigger on the host 192.168.100.75.
In fact, I don't even need to add this to pg_hba since anything outside
of my trusted IPs already requires a password

I was hoping for a way that did not involve changing every PHP user's
web site but I guess there is no way around it.

> I think on most modern OSes you can set up this sort of thing
> entirely in software, not even needing a spare NIC card.  I haven't
> done it that way though.

I do things like that all the time.

> > The "all@nobody" field is meant to specify that the remote user is
> > nobody but that they are connecting as user joe.
> 
> As John noted, we don't have any idea what the "remote username" is
> at the time we're scanning pg_hba.conf.

So how do you do ident then?

-- 
D'Arcy J.M. Cain  |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 788 2246 (DoD#0082)(eNTP)   |  what's for dinner.
IM: da...@vex.net, VoIP: sip:da...@druid.net




Re: [SPAM] [GENERAL] COPY command & binary format

2016-05-10 Thread Cat
On Tue, May 10, 2016 at 03:00:55PM +0200, Nicolas Paris wrote:
> > The way I want is :
> > csv -> binary -> postgresql
> >
> > Is this just to be quicker or are you going to add some business logic
> > while converting CSV data?
> > As you mentioned ETL, I assume the second, as I don't think that
> > converting CSV to binary and then loading it to PostgreSQL will be more
> > convenient than loading directly from CSV... as quicker as it can be, you
> > have anyway to load data from CSV.
> >
> Right, ETL process means a lot of business logic:
> get the data (csv or other) -> transform it -> produce a binary -> copy
> from binary from stdin
> 
> Producing 100GB CSVs is a waste of time.

Ah. You need to fiddle with the data. Then you need to weigh the pros of
something agnostic to Postgres's internals against something that needs to
be aware of them.

You will need to delve into the source code for data types more complex
than INTEGER, TEXT and BYTEA (which was the majority of my data when I
was just looking into it).

-- 
  "A search of his car uncovered pornography, a homemade sex aid, women's 
  stockings and a Jack Russell terrier."
- 
http://www.dailytelegraph.com.au/news/wacky/indeed/story-e6frev20-118083480




Re: [SPAM] [GENERAL] COPY command & binary format

2016-05-10 Thread Cat
On Tue, May 10, 2016 at 01:38:12PM +0200, Nicolas Paris wrote:
> The way I want is :
> csv -> binary -> postgresql
> 
> And if possible, transforming csv to binary through Java.
> 
> Use case is ETL process.

Not sure what the point would be, tbh, if the data is already in CSV.
You might as well submit the CSV to postgres and let it deal with it.
It'll probably be faster. It'll also be more portable. The BINARY
format is what Postgres uses internally (more or less). I had to
look at the source code to figure out how to insert a timestamp
(FYI: Postgres stores timestamps as an epoch based on the year 2000, not
1970, amongst other fun things).


-- 
  "A search of his car uncovered pornography, a homemade sex aid, women's 
  stockings and a Jack Russell terrier."
- 
http://www.dailytelegraph.com.au/news/wacky/indeed/story-e6frev20-118083480




Re: [SPAM] [GENERAL] COPY command & binary format

2016-05-10 Thread Nicolas Paris
2016-05-10 14:47 GMT+02:00 Moreno Andreo :

> On 10/05/2016 13:38, Nicolas Paris wrote:
>
> 2016-05-10 13:04 GMT+02:00 Moreno Andreo :
>
>> On 10/05/2016 12:56, Nicolas Paris wrote:
>>
>> Hello,
>>
>> What is the way to build a binary format (instead of a csv) ? Is there
>> specification for this file ?
>> http://www.postgresql.org/docs/9.5/static/sql-copy.html
>>
>> I always create binary files with
>> COPY table TO 'path/to/file' WITH BINARY
>>
>>
> ​ Fine, this works in this way :
> postgresql -> binary
> binary -> postgresql
>
> The way I want is :
> csv -> binary -> postgresql
>
> Is this just to be quicker or are you going to add some business logic
> while converting CSV data?
> As you mentioned ETL, I assume the second, as I don't think that
> converting CSV to binary and then loading it to PostgreSQL will be more
> convenient than loading directly from CSV... as quicker as it can be, you
> have anyway to load data from CSV.
>
​
Right, ETL process means huge business logic.
get the data (csv or other) -> transform it -> produce a binary -> copy
from binary from stdin ​

Producing 100GO CSVs, is a waste of time.



> Binary file format is briefly described in the last part of the doc you
> linked, under "Binary format", and there's also reference to source files.
>
>
> And if possible, transforming csv to binary throught java​.
>
> This is beyond my knowledge, ATM. I'm just starting with Java and JDBC is
> still in the TODO list, sorry... :-)
>
> Cheers
> Moreno.-
>

Documentation explains a bit. Moreover, I have found a detailed answer
here:
http://stackoverflow.com/questions/14242117/java-library-to-write-binary-format-for-postgres-copy

My ultimate goal is to encapsulate it in a Talend component (Talend is an
open-source, Java-based ETL tool).

Thanks, I'll keep you posted.
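For the archives, a minimal sketch of the COPY BINARY framing in Java (the layout follows the "Binary format" section of the COPY docs; the class name and the single-int4-column row are illustrative, not from the Talend component):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class PgCopyBinary {

    // Build a complete COPY BINARY stream holding one row with one int4 column.
    public static byte[] oneIntRow(int value) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeBytes("PGCOPY\n");                        // signature, bytes 1-7
            out.write(new byte[]{(byte) 0xFF, '\r', '\n', 0}); // signature, bytes 8-11
            out.writeInt(0);      // flags field (no OIDs)
            out.writeInt(0);      // header extension area length: none
            out.writeShort(1);    // tuple: number of fields
            out.writeInt(4);      // field 1: length in bytes (int4); -1 would mean NULL
            out.writeInt(value);  // field 1: data, network byte order
            out.writeShort(-1);   // file trailer
            out.flush();
            return buf.toByteArray();
        } catch (IOException e) { // cannot happen with an in-memory stream
            throw new RuntimeException(e);
        }
    }
}
```

A stream built this way could then be handed to the JDBC driver's CopyManager with COPY mytable FROM STDIN WITH BINARY.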


Re: [SPAM] [GENERAL] COPY command & binary format

2016-05-10 Thread Moreno Andreo

  
  
On 10/05/2016 13:38, Nicolas Paris wrote:

> 2016-05-10 13:04 GMT+02:00 Moreno Andreo :
>
>> On 10/05/2016 12:56, Nicolas Paris wrote:
>>
>> Hello,
>>
>> What is the way to build a binary format (instead of a csv) ? Is there
>> specification for this file ?
>> http://www.postgresql.org/docs/9.5/static/sql-copy.html
>>
>> I always create binary files with
>> COPY table TO 'path/to/file' WITH BINARY
>>
> Fine, this works in this way :
> postgresql -> binary
> binary -> postgresql
>
> The way I want is :
> csv -> binary -> postgresql

Is this just to be quicker or are you going to add some business logic
while converting CSV data?
As you mentioned ETL, I assume the second, as I don't think that
converting CSV to binary and then loading it to PostgreSQL will be more
convenient than loading directly from CSV... as quick as it can be, you
have anyway to load data from CSV.

Binary file format is briefly described in the last part of the doc you
linked, under "Binary format", and there's also reference to source files.

> And if possible, transforming csv to binary through Java.

This is beyond my knowledge, ATM. I'm just starting with Java and JDBC is
still in the TODO list, sorry... :-)

Cheers
Moreno.-


Re: [GENERAL] COPY command & binary format

2016-05-10 Thread Pujol Mathieu



On 10/05/2016 at 12:56, Nicolas Paris wrote:

Hello,

What is the way to build a binary format (instead of a csv) ? Is there 
specification for this file ?

http://www.postgresql.org/docs/9.5/static/sql-copy.html

Could I create such format from java ?

I guess this would be far faster, and maybe safer than CSVs

Thanks by advance,

Hi
Making a driver that does what you want is not difficult. You will achieve
better performance than loading data from CSV, and you will also have
better precision for floating-point values (there is no text conversion).
The link you provided describes the file format in the section "Binary
Format".

Mathieu Pujol
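As a rough illustration of that precision point, a single binary field is just a 32-bit byte length followed by the raw value bytes in network order (or a length of -1 with no data for NULL). The helper below is hypothetical and covers only float8, which keeps its exact IEEE-754 bits instead of a decimal rendering:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Hypothetical field-level encoder for the COPY binary format. A float8
// field is the length 8 followed by the eight raw IEEE-754 bytes, so no
// precision is lost to text formatting; NULL is encoded as a length of -1.
public class CopyField {

    public static byte[] float8(Double v) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            if (v == null) {
                out.writeInt(-1);   // NULL: length -1, no data bytes follow
            } else {
                out.writeInt(8);    // a float8 is always eight bytes
                out.writeDouble(v); // raw big-endian IEEE-754 bits
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen on a byte array
        }
        return buf.toByteArray();
    }
}
```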



Re: [SPAM] [GENERAL] COPY command & binary format

2016-05-10 Thread Nicolas Paris
2016-05-10 13:04 GMT+02:00 Moreno Andreo :

> Il 10/05/2016 12:56, Nicolas Paris ha scritto:
>
> Hello,
>
> What is the way to build a binary format (instead of a csv) ? Is there
> specification for this file ?
> http://www.postgresql.org/docs/9.5/static/sql-copy.html
>
> I always create binary files with
> COPY table TO 'path/to/file' WITH BINARY
>
>
Fine, this works in this way:
postgresql -> binary
binary -> postgresql

The way I want is:
csv -> binary -> postgresql

And if possible, transforming CSV to binary through Java.

The use case is an ETL process.


Re: [GENERAL] COPY command & binary format

2016-05-10 Thread Sameer Kumar
On Tue, May 10, 2016 at 4:36 PM Sameer Kumar 
wrote:

> On Tue, May 10, 2016 at 4:26 PM Nicolas Paris  wrote:
>
>> Hello,
>>
>> What is the way to build a binary format (instead of a csv) ? Is there
>> specification for this file ?
>> http://www.postgresql.org/docs/9.5/static/sql-copy.html
>>
>
>>
>> Could I create such format from java ?
>>
>
> You can use COPY JDBC API to copy to STDOUT and then compress it before
> you use usual Java file operations to write it to a file. You will have to
> follow the reverse process while reading from this file and LOADING to a
> table.
>
> But why would you want to do that?
>
>
>>
>> I guess this would be far faster, and maybe safer than CSVs
>>
>
> I don't think assumption is right. COPY is not meant for backup, it is for
> LOAD and UN-LOAD.
>
> What you probably need is pg_dump with -Fc format.
> http://www.postgresql.org/docs/current/static/app-pgdump.html
>
>

Like someone else suggested upthread, you can use the binary format in the
COPY command (the default is text).


>
>> Thanks by advance,
>>
> --
> --
> Best Regards
> Sameer Kumar | DB Solution Architect
> *ASHNIK PTE. LTD.*
>
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
>
> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] COPY command & binary format

2016-05-10 Thread Sameer Kumar
On Tue, May 10, 2016 at 4:26 PM Nicolas Paris  wrote:

> Hello,
>
> What is the way to build a binary format (instead of a csv) ? Is there
> specification for this file ?
> http://www.postgresql.org/docs/9.5/static/sql-copy.html
>

>
> Could I create such format from java ?
>

You can use the COPY JDBC API to copy to STDOUT and then compress it before
you use the usual Java file operations to write it to a file. You will have
to follow the reverse process when reading from this file and loading it
into a table.

But why would you want to do that?


>
> I guess this would be far faster, and maybe safer than CSVs
>

I don't think that assumption is right. COPY is not meant for backup; it is
for LOAD and UN-LOAD.

What you probably need is pg_dump with -Fc format.
http://www.postgresql.org/docs/current/static/app-pgdump.html


>
> Thanks by advance,
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [SPAM] [GENERAL] COPY command & binary format

2016-05-10 Thread Moreno Andreo

  
  
On 10/05/2016 12:56, Nicolas Paris wrote:
> Hello,
>
> What is the way to build a binary format (instead of a csv) ? Is there
> specification for this file ?
> http://www.postgresql.org/docs/9.5/static/sql-copy.html

I always create binary files with
COPY table TO 'path/to/file' WITH BINARY

Cheers
Moreno.-


[GENERAL] COPY command & binary format

2016-05-10 Thread Nicolas Paris
Hello,

What is the way to build a binary format (instead of a CSV)? Is there a
specification for this file?
http://www.postgresql.org/docs/9.5/static/sql-copy.html

Could I create such a format from Java?

I guess this would be far faster, and maybe safer, than CSVs.

Thanks in advance,


Re: [GENERAL] Inserting into a master table with partitions does not return rows affected.

2016-05-10 Thread Sameer Kumar
On Tue, May 10, 2016 at 3:53 AM rverghese  wrote:

> I am moving towards a partitioned schema. I use a function to insert into
> the
> table. If the INSERT fails because of duplicates I do an UPDATE. This works
> fine currently on the non-partitioned table because I can use GET
> DIAGNOSTICS to get the row count on the INSERT.
>
> But when I use the Master table to insert into the partitions, GET
> DIAGNOSTICS always returns 0. So there is no way of knowing whether a row
> was inserted (I am catching the unique violation exception to do the
> UPDATE).
>

If I were you, I would put the update logic in the trigger function that
routes the insert into the specific partition.
Of course, I am not aware of your exact logic and scenario, so it may not
work best for you.


>
> What is a good alternative? We are on 9.4, so the UPSERT is not yet
> available to me.
> There should be some way to know if data was inserted into the partition.
>
> Thanks
> RV
>
>
>
> --
> View this message in context:
> http://postgresql.nabble.com/Inserting-into-a-master-table-with-partitions-does-not-return-rows-affected-tp5902708.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com