Re: [GENERAL] Question about partitioning

2017-07-26 Thread John R Pierce

On 7/26/2017 10:08 PM, Alex Samad wrote:
I have a large table of about 3B rows that I would like to partition on
a column called _received, which is of type timestamp




a good goal is to have no more than about 100 partitions max, and 
ideally more like 25.


when we partition on time stamp, we typically do it by the week, as 
we're doing 6 month data retention.


IIRC, we're using DATE_TRUNC('week', timestamp)::DATE  for use as the 
partition label and key.
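A minimal sketch of that weekly scheme (the parent table name and the range bounds here are illustrative, not from the thread):

```sql
-- DATE_TRUNC('week', ts)::DATE yields the Monday of that ISO week,
-- usable both as a partition-name suffix and as the range key.
SELECT DATE_TRUNC('week', TIMESTAMP '2017-07-26 10:08:00')::DATE AS week_start;
-- week_start: 2017-07-24 (the Monday of that week)

-- With 6 months of retention this gives ~26 weekly partitions, e.g.:
CREATE TABLE measurements_2017_07_24 (
    CHECK (_received >= DATE '2017-07-24'
       AND _received <  DATE '2017-07-31')   -- half-open weekly range
) INHERITS (measurements);
```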




--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Question about partitioning

2017-07-26 Thread Alex Samad
Hi

I have a large table of about 3B rows that I would like to partition on
a column called _received, which is of type timestamp

I followed this
https://medium.com/@StreamBright/creating-partitions-automatically-in-postgresql-7006d68c0fbb


and came up with
CREATE OR REPLACE FUNCTION create_partition_and_insert() RETURNS trigger AS
  $BODY$
DECLARE
  partition_date TEXT;
  partition TEXT;
BEGIN
  partition_date := to_char(NEW._received,'YYYYMM');
  partition := TG_TABLE_NAME || '_' || partition_date;
  IF NOT EXISTS(SELECT relname FROM pg_class WHERE relname=partition)
THEN
RAISE NOTICE 'A partition has been created %',partition;
    EXECUTE 'CREATE TABLE ' || partition || ' (check (
to_char(_received, '''YYYYMM''') = ''' || partition_date || ''')) INHERITS
(' || TG_TABLE_NAME || ');';
  END IF;
  EXECUTE 'INSERT INTO ' || partition || ' SELECT(' || TG_TABLE_NAME ||
' ' || quote_literal(NEW) || ').* RETURNING patent_id;';
  RETURN NULL;
END;
  $BODY$
LANGUAGE plpgsql VOLATILE
COST 100;


Which I believe generates something like this
CREATE TABLE feedintra_201707( check (to_char(_received,'YYYYMM') =
'201707' )) INHERITS (XX);

My concern is the cost of doing to_char on every row.

So:
Is this going to work?
Is this the right way to do it?

I was thinking I could change the check to be something like

check ( _received >= 'YYYY-MM-01 00:00' and _received <= 'YYYY-MM-31 23:59:59.999' )

but I am not sure how to construct the above line based on a timestamp
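One way to express the range check being asked about, sketched here with illustrative table names and a half-open interval (which sidesteps the 23:59:59.999 edge case and lets the planner use constraint exclusion without evaluating to_char on every row):

```sql
-- Hypothetical sketch: month partition bounded by a half-open range.
CREATE TABLE feedintra_201707 (
    CHECK (_received >= TIMESTAMP '2017-07-01 00:00:00'
       AND _received <  TIMESTAMP '2017-08-01 00:00:00')
) INHERITS (feedintra);

-- Inside a trigger, the bounds can be computed once per insert:
--   lower_bound := date_trunc('month', NEW._received);
--   upper_bound := lower_bound + interval '1 month';
```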

Thanks
Alex


Re: [GENERAL] ODBC driver issue

2017-07-26 Thread John R Pierce

On 7/26/2017 9:06 PM, Igor Korot wrote:

With the char(), is there a "Standard SQL" way to do trimming?



trim(trailing from fieldname)


but really, if you want a variable length string without padding, don't 
use CHAR() as a data type.  use VARCHAR or TEXT.
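The padding difference can be seen directly (a small sketch with throwaway values):

```sql
-- CHAR(n) is blank-padded to n; VARCHAR stores only what you insert.
CREATE TEMP TABLE t (c CHAR(10), v VARCHAR(10));
INSERT INTO t VALUES ('abc', 'abc');

-- length() on char ignores the trailing blanks, but octet_length()
-- shows the padding is really stored/returned: 10 vs 3.
SELECT length(c), octet_length(c), length(v), octet_length(v) FROM t;

-- The standard-SQL trim suggested above:
SELECT trim(trailing from c) FROM t;
</imports>
```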



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] ODBC driver issue

2017-07-26 Thread Igor Korot
Hi, John,

On Wed, Jul 26, 2017 at 11:08 PM, John R Pierce  wrote:
> On 7/26/2017 7:25 PM, Igor Korot wrote:
>>
>> When I tried to query the database table with the column "char(129)" I
>> get:
>>
>> "My field text"
>>
>> (the text with the bunch of spaces at the end).
>>
>> The driver is not the current one - but I don't remember the version.
>>
>> Is this known issue? Maybe its already fixed with the latest ODBC driver?
>
>
> a char(129) field is 129 characters long, and will always be padded with
> spaces to that length.
>
> maybe you wanted a varchar (aka char varying) ?  these are variable length
> and return just what you put in them.

With the char(), is there a "Standard SQL" way to do trimming?

Thank you.

>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>




Re: [GENERAL] ODBC driver issue

2017-07-26 Thread John R Pierce

On 7/26/2017 7:25 PM, Igor Korot wrote:

When I tried to query the database table with the column "char(129)" I get:

"My field text"

(the text with the bunch of spaces at the end).

The driver is not the current one - but I don't remember the version.

Is this known issue? Maybe its already fixed with the latest ODBC driver?


a char(129) field is 129 characters long, and will always be padded with 
spaces to that length.


maybe you wanted a varchar (aka char varying) ?  these are variable 
length and return just what you put in them.



--
john r pierce, recycling bits in santa cruz





[GENERAL] Interesting streaming replication issue

2017-07-26 Thread James Sewell
Hi all,

I've got two servers (A,B) which are part of a streaming replication pair.
A is the master, B is a hot standby. I'm sending archived WAL to a
directory on A, B is reading it via SCP.

This all works fine normally. I'm on Redhat 7.3, running EDB 9.6.2 (I'm
currently working to reproduce with standard 9.6)

We have recently seen a situation where B does not catch up when taken
offline for maintenance.

When B is started we see the following in the logs:

2017-07-27 11:55:57 AEST [21432]: [979-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00AA" from
archive
2017-07-27 11:55:58 AEST [21432]: [980-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00AB" from
archive
2017-07-27 11:55:58 AEST [21432]: [981-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00AC" from
archive
2017-07-27 11:55:59 AEST [21432]: [982-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00AD" from
archive
2017-07-27 11:55:59 AEST [21432]: [983-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00AE" from
archive
2017-07-27 11:56:00 AEST [21432]: [984-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00AF" from
archive
2017-07-27 11:56:00 AEST [21432]: [985-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00B0" from
archive
2017-07-27 11:56:01 AEST [21432]: [986-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00B1" from
archive
2017-07-27 11:56:01 AEST [21432]: [987-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00B2" from
archive
2017-07-27 11:56:02 AEST [21432]: [988-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00B3" from
archive
2017-07-27 11:56:02 AEST [21432]: [989-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00B4" from
archive
2017-07-27 11:56:03 AEST [21432]: [990-1] user=,db=,client=
(0:0)LOG:  restored log file "000C005A00B5" from
archive
scp: /archive/xlog//000C005A00B6: No such file or directory
2017-07-27 11:56:03 AEST [46191]: [1-1] user=,db=,client=
(0:0)LOG:  started streaming WAL from primary at 5A/B500 on
timeline 12
2017-07-27 11:56:03 AEST [46191]: [2-1] user=,db=,client=
(0:XX000)FATAL:  could not receive data from WAL stream: ERROR:
requested WAL segment 000C005A00B5 has already been
removed

scp: /archive/xlog//000D.history: No such file or directory
scp: /archive/xlog//000C005A00B6: No such file or directory
2017-07-27 11:56:04 AEST [46203]: [1-1] user=,db=,client=
(0:0)LOG:  started streaming WAL from primary at 5A/B500 on
timeline 12
2017-07-27 11:56:04 AEST [46203]: [2-1] user=,db=,client=
(0:XX000)FATAL:  could not receive data from WAL stream: ERROR:
requested WAL segment 000C005A00B5 has already been
removed

This will loop indefinitely. At this stage the master reports no connected
standbys in pg_stat_replication, and the standby has no running WAL
receiver process.

This can be 'fixed' by running pg_switch_xlog() on the master, at which
time a connection is seen from the standby and the logs show the following:

scp: /archive/xlog//000D.history: No such file or directory
2017-07-27 12:03:19 AEST [21432]: [1029-1] user=,db=,client=  (0:0)LOG:
 restored log file "000C005A00B5" from archive
scp: /archive/xlog//000C005A00B6: No such file or directory
2017-07-27 12:03:19 AEST [63141]: [1-1] user=,db=,client=  (0:0)LOG:
 started streaming WAL from primary at 5A/B500 on timeline 12
2017-07-27 12:03:19 AEST [63141]: [2-1] user=,db=,client=  (0:XX000)FATAL:
 could not receive data from WAL stream: ERROR:  requested WAL segment
000C005A00B5 has already been removed

scp: /archive/xlog//000D.history: No such file or directory
2017-07-27 12:03:24 AEST [21432]: [1030-1] user=,db=,client=  (0:0)LOG:
 restored log file "000C005A00B5" from archive
2017-07-27 12:03:24 AEST [21432]: [1031-1] user=,db=,client=  (0:0)LOG:
 restored log file "000C005A00B6" from archive
scp: /archive/xlog//000C005A00B7: No such file or directory
2017-07-27 12:03:25 AEST [21432]: [1032-1] user=,db=,client=  (0:0)LOG:
 unexpected pageaddr 56/E700 in log segment 000C005A00B7,
offset 0
2017-07-27 12:03:25 AEST [63196]: [1-1] user=,db=,client=  (0:0)LOG:
 started streaming WAL from primary

At this time replication is running as normal and all changes are streamed.

Am I missing something here? This seems very odd. One thing I have noticed
is that it only seems to happen after a lot of WAL is produced and the
pg_xlog directory is sitting at max_wal_size.
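The "requested WAL segment ... has already been removed" loop is usually addressed by making the primary retain WAL until the standby has consumed it. A hedged sketch of the two common options on 9.6 (the slot name and segment count are illustrative):

```sql
-- Option 1: a physical replication slot on the primary; the primary
-- keeps WAL until the standby (which names the slot via
-- primary_slot_name in its recovery.conf) has replayed it.
SELECT pg_create_physical_replication_slot('standby_b');

-- Option 2: a fixed retention window of segments (9.6-era syntax).
ALTER SYSTEM SET wal_keep_segments = 256;  -- illustrative value
SELECT pg_reload_conf();
```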

James Sewell,
PostgreSQL Team Lead / Solutions Architect



Suite 112, Jones Bay Wharf, 26-32 Pirrama Road, Pyrmont NSW 2009
*P* (+61) 2 8099 9000  *W* www.jirotech.com

Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread JingYuan Chen
I recommend that you try SQuirreL SQL.
(http://squirrel-sql.sourceforge.net/index.php?page=screenshots)

It is a SQL tool that runs on the JVM, so you can use it on Linux,
Windows, or even macOS.
It uses JDBC to connect to databases, so you can install different JDBC
drivers to connect to different databases, such as Oracle (Thin driver),
PostgreSQL (pgJDBC), MS SQL Server (jTDS), and so on.

Besides DDL operations, I also use it to copy data in order to build sandbox
environments. Select the table, right-click and Copy it, then Paste it. Only
three steps to create a new table with data for test or development
purposes. Some nice facilities I often use are rotated tables, viewing
metadata, and monitoring server performance. Yes, it can also export the
data you fetched as an Excel or CSV file.

Try it and you will love it!


Regards,
Chingyuan

On Thu, Jul 27, 2017 at 9:31 AM, Joshua D. Drake 
wrote:

> On 07/26/2017 05:58 PM, Tim Uckun wrote:
>
>> I am waiting till pgadmin4 gets a bit faster. It seems kind of slow on my
>> mac.
>>
>
> PostgreSQL Studio
> Navicat
> EMS PostgreSQL Manager
> PgAdmin3 LTS (BigSQL)
>
> JD
>
>
> --
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
>
> PostgreSQL Centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://pgconf.us
> * Unless otherwise stated, opinions are my own.   *
>
>
>
>
>


Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread rob stone


On Wed, 2017-07-26 at 18:31 -0700, Joshua D. Drake wrote:
> On 07/26/2017 05:58 PM, Tim Uckun wrote:
> > I am waiting till pgadmin4 gets a bit faster. It seems kind of slow
> > on
> > my mac.
> 
> PostgreSQL Studio
> Navicat
> EMS PostgreSQL Manager
> PgAdmin3 LTS (BigSQL)
> 
> JD
> 
> 


If you are a Java person, go to executequery.org and download the jar
file or the Deb package and install it.
It has the advantage of being able to work on multiple RDBMSs at the
same time. All you need are the appropriate JDBC drivers. For example, you
could be connected to a Postgres database and also an Oracle one.

HTH.
Rob




Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread Reid Thompson
See DBeaver: http://dbeaver.jkiss.org

The Community and Enterprise editions are free, and it's very good.



[GENERAL] ODBC driver issue

2017-07-26 Thread Igor Korot
Hi,
I'm testing my program and got an interesting issue.

I have an OSX 10.8 with iODBC manager and PostgreSQL ODBC driver.

When I tried to query a database table with a column of type char(129), I get:

"My field text"

(the text with a bunch of spaces at the end).

The driver is not the current one, but I don't remember the version.

Is this a known issue? Maybe it's already fixed in the latest ODBC driver?

Thank you.




Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread Joshua D. Drake

On 07/26/2017 05:58 PM, Tim Uckun wrote:
I am waiting till pgadmin4 gets a bit faster. It seems kind of slow on 
my mac.


PostgreSQL Studio
Navicat
EMS PostgreSQL Manager
PgAdmin3 LTS (BigSQL)

JD


--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL Centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://pgconf.us
* Unless otherwise stated, opinions are my own.   *




Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread Tim Uckun
I am waiting till pgadmin4 gets a bit faster. It seems kind of slow on my
mac.

On Thu, Jul 27, 2017 at 12:05 PM, Brent Wood  wrote:

>
> +1
> I think PgAdmin4 has yet to catch up with v3 for functionality, but it is
> more actively developed and supported.
>
> You might check this page out as well:
> https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools
>
> 
> Cheers,
>
> Brent Wood
> --
> *From:* Tim Uckun 
> *To:* Tiffany Thang 
> *Cc:* pgsql-general 
> *Sent:* Thursday, July 27, 2017 11:34 AM
> *Subject:* Re: [GENERAL] Developer GUI tools for PostgreSQL
>
> I still use pgadmin3. I also have a subscription to the jetbrains tools so
> I also give datagrip a try once in a while. Datagrip has a lot going for it
> but it's lagging behind pgadmin3 in a lot of ways so it hasn't replaced it
> for me.
>
> On Thu, Jul 27, 2017 at 10:41 AM, Tiffany Thang 
> wrote:
>
> Hi,
> I'm new to PostgreSQL. I'm looking for a developer tool that works
> similarly to TOAD or SQL Developer for Oracle/MySQL which would allow me to
> view and make DDL changes to database objects and create data models. It
> would be a plus if I can use the same tool to perform some database
> administration tasks.
>
> So far, I've found TOra and pgAdmin 4. Are there any other popular GUI
> tools?
>
> Thanks in advance.
>
>
>
>
>
>


Re: [GENERAL] Indexes being ignored after upgrade to 9.5

2017-07-26 Thread Peter Geoghegan
On Wed, Jul 26, 2017 at 2:05 PM, Peter Geoghegan  wrote:
> On Tue, Jul 25, 2017 at 10:34 PM, Nick Brennan  wrote:
>> We've added duplicate indexes and analyzing, however the new indexes are
>> still ignored unless we force using enable_seqscan=no or reduce
>> random_page_cost to 2. The query response times using the new indexes are
>> still as slow when we do this. Checking pg_stat_user_indexes the number of
>> tuples returned per idx_scan is far greater after the upgrade than before.
>> All indexes show valid in pg_indexes.

I assume that you mean that pg_stat_user_indexes.idx_tup_read is a lot
higher than before, in proportion to pg_stat_user_indexes.idx_scan.
What about the ratio between pg_stat_user_indexes.idx_tup_read and
pg_stat_user_indexes.idx_tup_fetch? How much has that changed by?
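The ratios being asked about can be pulled in one query (column names are straight from pg_stat_user_indexes; the LIMIT is arbitrary):

```sql
-- Sketch: tuples returned per index scan, and read-vs-fetch ratio,
-- for the busiest indexes. NULLIF guards against division by zero.
SELECT relname,
       indexrelname,
       idx_scan,
       idx_tup_read,
       idx_tup_fetch,
       round(idx_tup_read::numeric / NULLIF(idx_scan, 0), 1)      AS tuples_per_scan,
       round(idx_tup_read::numeric / NULLIF(idx_tup_fetch, 0), 1) AS read_per_fetch
FROM pg_stat_user_indexes
ORDER BY idx_tup_read DESC
LIMIT 20;
```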

-- 
Peter Geoghegan




Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread Tim Uckun
I still use pgadmin3. I also have a subscription to the jetbrains tools so
I also give datagrip a try once in a while. Datagrip has a lot going for it
but it's lagging behind pgadmin3 in a lot of ways so it hasn't replaced it
for me.

On Thu, Jul 27, 2017 at 10:41 AM, Tiffany Thang 
wrote:

> Hi,
> I'm new to PostgreSQL. I'm looking for a developer tool that works
> similarly to TOAD or SQL Developer for Oracle/MySQL which would allow me to
> view and make DDL changes to database objects and create data models. It
> would be a plus if I can use the same tool to perform some database
> administration tasks.
>
> So far, I've found TOra and pgAdmin 4. Are there any other popular GUI
> tools?
>
> Thanks in advance.
>
>
>


Re: [GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread marcelo
You can try SQL Manager for PostgreSQL. The Lite edition is enough and
it's free.

It's fast, secure and very friendly.

On 26/07/17 19:41, Tiffany Thang wrote:

Hi,
I'm new to PostgreSQL. I'm looking for a developer tool that works 
similarly to TOAD or SQL Developer for Oracle/MySQL which would allow 
me to view and make DDL changes to database objects and create data 
models. It would be a plus if I can use the same tool to perform some 
database administration tasks.


So far, I've found TOra and pgAdmin 4. Are there any other popular GUI 
tools?


Thanks in advance.








Re: [GENERAL] Strange case of database bloat

2017-07-26 Thread Peter Geoghegan
On Tue, Jul 4, 2017 at 10:18 PM, Chris Travers  wrote:
> First, I haven't seen major problems of database bloat in a long time which
> is why I find this case strange.  I wanted to ask here what may be causing
> it.
>
> Problem:
> ==
> Database is in the 100GB to 200GB size range, running on btrfs (not my
> choice) with nodatacow enabled (which I set up to fix a performance issue).
> The workload is a very heavy batch-update workload.
>
> The database bloats linearly.  I have measured this on one  table (of 149M
> rows).
>
> After vacuum full this table is (including indexes): 17GB
> Every 24 hrs, seems to add its original space in size to the file system
> +/-.
>
> Bloat seems to be affecting both indexes and underlying tables.
>
> Vacuum verbose does not indicate a disproportionate number of rows being
> unremovable.  So autovacuum is keeping up without too much difficulty.

That is odd. I find the linear increase in bloat alarming.

I found this thread in the context of investigating a possible
performance regression in 9.5 [1]. It's far from clear that that's
actually what's up here, but it's worth considering as a possibility.
Some questions about that:

Can you show the schema of at least one of the tables that are most
affected, its indexes, etc?

Are all indexes bloated equally?

Do you notice that unique indexes are not as badly affected as other
indexes? How does it break down, in terms of how much each individual
index grows in size?

You say that the problem is with both indexes and tables. How much of
this is table bloat, and how much is index bloat?
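A way to break that down (the table name 'mytable' is a placeholder for one of the affected tables):

```sql
-- Heap vs. index share of the total on-disk size for one table.
SELECT pg_size_pretty(pg_relation_size('mytable'))       AS heap,
       pg_size_pretty(pg_indexes_size('mytable'))        AS indexes,
       pg_size_pretty(pg_total_relation_size('mytable')) AS total;

-- Per-index sizes, comparable to psql's \di+ output, to see whether
-- some indexes grow faster than others.
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE relname = 'mytable'
ORDER BY pg_relation_size(indexrelid) DESC;
```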

Thanks

[1] 
https://postgr.es/m/CAH2-Wz=sfakvmv1x9jh19ej8am8tzn9f-yecips9hrrrqss...@mail.gmail.com
-- 
Peter Geoghegan




[GENERAL] Developer GUI tools for PostgreSQL

2017-07-26 Thread Tiffany Thang
Hi,
I'm new to PostgreSQL. I'm looking for a developer tool that works
similarly to TOAD or SQL Developer for Oracle/MySQL which would allow me to
view and make DDL changes to database objects and create data models. It
would be a plus if I can use the same tool to perform some database
administration tasks.

So far, I've found TOra and pgAdmin 4. Are there any other popular GUI
tools?

Thanks in advance.


Re: [GENERAL] Indexes being ignored after upgrade to 9.5

2017-07-26 Thread Peter Geoghegan
On Tue, Jul 25, 2017 at 10:34 PM, Nick Brennan  wrote:
> We've added duplicate indexes and analyzing, however the new indexes are
> still ignored unless we force using enable_seqscan=no or reduce
> random_page_cost to 2. The query response times using the new indexes are
> still as slow when we do this. Checking pg_stat_user_indexes the number of
> tuples returned per idx_scan is far greater after the upgrade than before.
> All indexes show valid in pg_indexes.
>
>
> We have tried increasing effective_cache_size but no effect (the queries
> appear to go slower). The DB is 24x7 so we cannot reindex the tables/
> partitions.
>
>
> Can anyone suggest why this would be happening?

Are the indexes bloated? Are they larger than before, as indicated by
psql's \di+ or similar? Did you notice that this happened immediately,
or did it take a while? Are these unique indexes or not? Do you have a
workload with many UPDATEs?

I ask all these questions because I think it's possible that this is
explained by a regression in 9.5's handling of index bloat, described
here:

http://postgr.es/m/CAH2-Wz=sfakvmv1x9jh19ej8am8tzn9f-yecips9hrrrqss...@mail.gmail.com

-- 
Peter Geoghegan




Re: [GENERAL] Partitioning

2017-07-26 Thread George Neuner
On Tue, 25 Jul 2017 18:21:43 +0530, Krithika Venkatesh
 wrote:

>I have a table that is partitioned on a numeric column (ID).
>
>Partitioning works when I query the table with no joins.
>
>SELECT * FROM TABLE A a WHERE ID IN (SELECT ID FROM TABLE B b WHERE
>CREATED_TS = CURRENT_TIMESTAMP)
>
>Partitioning doesn't work when I do join.
>
>SELECT A.* FROM TABLE A a INNER JOIN TABLE B b ON  a.ID = b.ID.
>
>Is there any other option that would work.
>
>Thanks in Advance..

The subselect is constraining the set of ID value(s) to be matched in
A, which (at least potentially) permits identifying the relevant
partition(s).

The join must include all partitions of A because the set of ID values
to be matched with B are not constrained.

Also, the join query is not equivalent because it does not include the
timestamp constraint on B.  I don't think that will make any
difference to the query plan ... AFAICS, it still needs to consider
all partitions of A ... but it may improve performance.
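The difference can be seen in the plans (a hedged sketch; the parent table A, partitioned by ranges of ID, and table B are as in the question):

```sql
-- With a plan-time-known constraint the planner can exclude partitions
-- whose CHECK constraints cannot match:
EXPLAIN SELECT * FROM a WHERE id IN (42, 43);
-- expected: only the partition(s) covering ids 42-43 appear in the plan

-- With a join, the matching ids are only known at run time, so every
-- partition of A remains in the plan:
EXPLAIN SELECT a.* FROM a JOIN b ON a.id = b.id;
-- expected: an Append node scanning all partitions of a
```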

George





Re: [GENERAL] Perfomance of IN-clause with many elements and possible solutions

2017-07-26 Thread Dmitry Lazurkin
On 23.07.2017 14:35, dilaz03 . wrote:
> - The IN-VALUES clause adds a new node to the plan. Does the additional
> node have a big overhead? What about filtering by two or more IN-VALUES
> clauses?
>

Hmmm. This works.

-- Full table can fit in memory
show shared_buffers;
 shared_buffers
----------------
 4GB


show work_mem;
 work_mem
--
 16MB


SET max_parallel_workers_per_gather TO 0;
SET max_parallel_workers TO 0;

-- 10 000 000 events of 30 types from 500 sources
CREATE TABLE events AS
SELECT trunc(random() * 500)::bigint AS source_id, md5(trunc(random() *
30)::text) AS type
FROM generate_series(1, 10000000);

-- Prepare all clauses
SELECT ('(' || string_agg(source_id::text, ',') || ')') AS
source_id_in_clause
FROM (SELECT source_id FROM events GROUP BY source_id ORDER BY source_id
LIMIT 200) AS s \gset

SELECT ('(' || string_agg(('''' || type || ''''), ',') || ')') AS
type_in_clause
FROM (SELECT type FROM events GROUP BY type ORDER BY type LIMIT 100) AS
s \gset

SELECT ('(VALUES ' || string_agg('(' || source_id::text || ')', ',') ||
')') AS source_id_values_clause
FROM (SELECT source_id FROM events GROUP BY source_id ORDER BY source_id
LIMIT 200) AS s \gset

SELECT ('(VALUES ' || string_agg('(''' || type::text || ''')', ',') ||
')') AS type_values_clause
FROM (SELECT type FROM events GROUP BY type ORDER BY type LIMIT 100) AS
s \gset

-- Run queries
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM events WHERE source_id
IN :source_id_in_clause AND type IN :type_in_clause;
 Execution time: 21314.277 ms

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM events WHERE source_id
IN :source_id_values_clause AND type IN :type_in_clause;
 Execution time: 9421.592 ms

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM events WHERE source_id
IN :source_id_in_clause AND type IN :type_values_clause;
 Execution time: 17598.467 ms

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM events WHERE source_id
IN :source_id_values_clause AND type IN :type_values_clause;
 Execution time: 5589.925 ms







[GENERAL] Indexes being ignored after upgrade to 9.5

2017-07-26 Thread Nick Brennan
Hi,

We have recently promoted our Prod DB slave (2TB) to migrate to new
hardware, and upgraded from v9.2.9.21 to 9.5.1.6 using pg_upgrade.


The upgrade went without incident and we have been running for a week, but
the optimizer is ignoring indexes on 2 of our largest partitioned tables
causing very slow response times.


The indexes are Btree indexes on BIGINT columns, which the optimizer used
to return queries with ms response times on 9.2. Post-upgrade the queries
sequential scan and do not use indexes unless we force them.


We've added duplicate indexes and analyzing, however the new indexes are
still ignored unless we force using enable_seqscan=no or reduce
random_page_cost to 2. The query response times using the new indexes are
still as slow when we do this. Checking pg_stat_user_indexes the number of
tuples returned per idx_scan is far greater after the upgrade than before.
All indexes show valid in pg_indexes.


We have tried increasing effective_cache_size but no effect (the queries
appear to go slower). The DB is 24x7 so we cannot reindex the tables/
partitions.


Can anyone suggest why this would be happening?


Many thanks

Nick


Re: [GENERAL] Fwd: getting error while parsing log file using pgbadger

2017-07-26 Thread
On Wed, Jul 26, 2017 at 3:18 PM, PAWAN SHARMA  wrote:
> 
> On Wed, Jul 26, 2017 at 2:42 PM, Michael Paquier  
> wrote:
> On Wed, Jul 26, 2017 at 10:59 AM, PAWAN SHARMA
>  wrote:
> >
> > Hi All,
> >
> > I am facing below error while parsing log file.
> >
> > [postgres@abc pgaudit]$ pgbadger -f stderr postgres-2017-07-25_121445.csv
> > Can't locate Text/CSV_XS.pm in @INC (@INC contains: /usr/local/lib64/perl5
> > /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
> > /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
> > /bin/pgbadger line 2620.
> 
...
> Running make install
> "/usr/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- CSV_XS.bs 
> blib/arch/auto/Text/CSV_XS/CSV_XS.bs 644
> Manifying 1 pod document
> Files found in blib/arch: installing files in blib/lib into architecture 
> dependent library tree
> Installing 
> /root/perl5/lib/perl5/x86_64-linux-thread-multi/auto/Text/CSV_XS/CSV_XS.so
> Installing /root/perl5/lib/perl5/x86_64-linux-thread-multi/Text/CSV_XS.pm
> Installing /root/perl5/man/man3/Text::CSV_XS.3pm
> Appending installation info to 
> /root/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod
>   HMBRAND/Text-CSV_XS-1.31.tgz
>   /bin/make install  -- OK
... 
> [postgres@abc pgaudit]$ pgbadger -f stderr postgres-2017-07-26_00.csv  -o 
> abc.html
> Can't locate Text/CSV_XS.pm in @INC (@INC contains: /usr/local/lib64/perl5 
> /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl 
> /usr/share/perl5/vendor_perl /usr/lib64/ perl5 /usr/share/perl5 .) at 
> /bin/pgbadger line 2620.

You have an environment problem in that the 2 different users have a different
PATH and you're getting 2 different perl executables, or at least that's what
it looks like to me. Please note that where you installed the module to is not
listed in the @INC of the other command.

I saw in another post that you have it fixed, but all you've done (from what I
can see) is put a band-aid on the problem, not fixed the root issue,
because you've installed the module into 2 different places. The root
problem should still exist.

You can run with a custom perl, we do; but that also means you must make sure
that all apps use it by setting PATH and PERLLIB appropriately, usually by
changing a system file and making sure all environments source it. If you do 
that,
then PgBadger will work just fine -and- use the same perl as all of your other 
programs.
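A sketch of what that environment fix could look like for the postgres user (the directory below is illustrative; point PERL5LIB at wherever the module actually landed, which the install log above shows, and note that a path under /root is unreadable to other users, so a system-wide location is preferable):

```shell
# Config fragment for the postgres user's ~/.bash_profile (or a shared
# /etc/profile.d file). Paths are assumptions; adjust to your install.
export PERL5LIB="/usr/local/share/perl5:${PERL5LIB}"
export PATH="/usr/bin:${PATH}"   # pin one perl executable for every user
```

After sourcing it, `perl -MText::CSV_XS -e 1` should succeed for the postgres user before re-running pgbadger.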

HTH,
Kevin




Re: [GENERAL] Fwd: getting error while parsing log file using pgbadger

2017-07-26 Thread PAWAN SHARMA
On Wed, Jul 26, 2017 at 5:38 PM, Michael Paquier 
wrote:

> On Wed, Jul 26, 2017 at 2:01 PM, PAWAN SHARMA 
> wrote:
> > For root user its working fine
> >
> > [root@abc :/opt/PostgreSQL/9.5/data/pgaudit]#
> > #-> pgbadger -f stderr postgres-2017-07-26_00.csv  -o abc.html
> > [==>  ] Parsed 1873673 bytes of 1991998 (94.06%),
> > queries: 0, events: 845
> > LOG: Ok, generating html report...
> >
> > but for the Postgres user, its showing the below error.
>
> There is no way to be sure what's wrong as this depends on the
> environment you are using and the packaging system that you have, but
> at short glance, I think that you should first try to update PERL5LIB
> so as it points to the location where the module has been installed.
> Good luck! It does not sound complicated to me to address anyway.
> --
> Michael
>


Hi Michal,

Thanks for your support.

finally, it's done.

I have installed Text::CSV_XS using PostgreSQL user.

[postgres@abc pgaudit]$ perl -MCPAN -e shell

cpan> install Text::CSV_XS
CPAN: Storable loaded ok
Going to read /root/.cpan/Metadata
Warning: Found only 0 objects in /root/.cpan/Metadata
CPAN: LWP::UserAgent loaded ok
Fetching with LWP:
  ftp://ftp.perl.org/pub/CPAN/authors/01mailrc.txt.gz
Going to read /root/.cpan/sources/authors/01mailrc.txt.gz
CPAN: Compress::Zlib loaded ok
Fetching with LWP:
  ftp://ftp.perl.org/pub/CPAN/modules/02packages.details.txt.gz
Going to read /root/.cpan/sources/modules/02packages.details.txt.gz
  Database was generated on Thu, 28 Nov 2013 08:17:03 GMT

  There's a new CPAN.pm version (v2.00) available!
  [Current version is v1.7602]
  You might want to try
install Bundle::CPAN
reload cpan
  without quitting the current session. It should be a seamless upgrade
  while we are running...

Fetching with LWP:
  ftp://ftp.perl.org/pub/CPAN/modules/03modlist.data.gz
Going to read /root/.cpan/sources/modules/03modlist.data.gz
Going to write /root/.cpan/Metadata
Running install for module Text::CSV_XS
Running make for H/HM/HMBRAND/Text-CSV_XS-1.02.tgz
Fetching with LWP:
  ftp://ftp.perl.org/pub/CPAN/authors/id/H/HM/HMBRAND/Text-CSV_XS-1.02.tgz
CPAN: Digest::MD5 loaded ok
Fetching with LWP:
  ftp://ftp.perl.org/pub/CPAN/authors/id/H/HM/HMBRAND/CHECKSUMS
Checksum for
/root/.cpan/sources/authors/id/H/HM/HMBRAND/Text-CSV_XS-1.02.tgz ok
Scanning cache /root/.cpan/build for sizes
Text-CSV_XS-1.02/
Text-CSV_XS-1.02/examples/
Text-CSV_XS-1.02/examples/parser-xs.pl
Text-CSV_XS-1.02/examples/speed.pl
Text-CSV_XS-1.02/examples/csv2xls
Text-CSV_XS-1.02/examples/csvdiff
Text-CSV_XS-1.02/examples/csv-check
Text-CSV_XS-1.02/ChangeLog
Text-CSV_XS-1.02/META.yml
Text-CSV_XS-1.02/MANIFEST
Text-CSV_XS-1.02/ppport.h
Text-CSV_XS-1.02/Makefile.PL
Text-CSV_XS-1.02/CSV_XS.xs
Text-CSV_XS-1.02/files/
Text-CSV_XS-1.02/files/macosx.csv
Text-CSV_XS-1.02/files/utf8.csv
Text-CSV_XS-1.02/README
Text-CSV_XS-1.02/META.json
Text-CSV_XS-1.02/t/
Text-CSV_XS-1.02/t/46_eol_si.t
Text-CSV_XS-1.02/t/20_file.t
Text-CSV_XS-1.02/t/76_magic.t
Text-CSV_XS-1.02/t/81_subclass.t
Text-CSV_XS-1.02/t/util.pl
Text-CSV_XS-1.02/t/40_misc.t
Text-CSV_XS-1.02/t/70_rt.t
Text-CSV_XS-1.02/t/22_scalario.t
Text-CSV_XS-1.02/t/21_lexicalio.t
Text-CSV_XS-1.02/t/77_getall.t
Text-CSV_XS-1.02/t/12_acc.t
Text-CSV_XS-1.02/t/00_pod.t
Text-CSV_XS-1.02/t/60_samples.t
Text-CSV_XS-1.02/t/01_pod.t
Text-CSV_XS-1.02/t/75_hashref.t
Text-CSV_XS-1.02/t/65_allow.t
Text-CSV_XS-1.02/t/50_utf8.t
Text-CSV_XS-1.02/t/41_null.t
Text-CSV_XS-1.02/t/30_types.t
Text-CSV_XS-1.02/t/51_utf8.t
Text-CSV_XS-1.02/t/10_base.t
Text-CSV_XS-1.02/t/55_combi.t
Text-CSV_XS-1.02/t/80_diag.t
Text-CSV_XS-1.02/t/45_eol.t
Text-CSV_XS-1.02/t/15_flags.t
Text-CSV_XS-1.02/CSV_XS.pm

  CPAN.pm: Going to build H/HM/HMBRAND/Text-CSV_XS-1.02.tgz

Checking if your kit is complete...
Looks good
Writing Makefile for Text::CSV_XS
cp CSV_XS.pm blib/lib/Text/CSV_XS.pm
/usr/bin/perl /usr/lib/perl5/5.8.8/ExtUtils/xsubpp  -typemap
/usr/lib/perl5/5.8.8/ExtUtils/typemap  CSV_XS.xs > CSV_XS.xsc && mv
CSV_XS.xsc CSV_XS.c
gcc -c   -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe
-Wdeclaration-after-statement -I/usr/local/include -D_LARGEFILE_SOURCE
-D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic   -DVERSION=\"1.02\"
-DXS_VERSION=\"1.02\" -fPIC
"-I/usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE"   CSV_XS.c
CSV_XS.xs: In function 'cx_Print':
CSV_XS.xs:627: warning: unused variable 'l'
CSV_XS.xs: In function 'cx_c_xsParse':
CSV_XS.xs:1450: warning: unused variable 'sp'
CSV_XS.c: In function 'XS_Text__CSV_XS_error_input':
CSV_XS.xs:1665: warning: unused variable 'hv'
CSV_XS.c: At top level:
CSV_XS.xs:1546: warning: 'cx_av_free' defined but not used
Running Mkbootstrap for Text::CSV_XS ()
chmod 644 CSV_XS.bs
rm -f blib/arch/auto/Text/CSV_XS/CSV_XS.so
gcc  -shared -O2 -g -pipe -Wall 

Re: [GENERAL] Fwd: getting error while parsing log file using pgbadger

2017-07-26 Thread Michael Paquier
On Wed, Jul 26, 2017 at 2:01 PM, PAWAN SHARMA  wrote:
> For the root user it's working fine
>
> [root@abc :/opt/PostgreSQL/9.5/data/pgaudit]#
> #-> pgbadger -f stderr postgres-2017-07-26_00.csv  -o abc.html
> [==>  ] Parsed 1873673 bytes of 1991998 (94.06%),
> queries: 0, events: 845
> LOG: Ok, generating html report...
>
> but for the Postgres user, its showing the below error.

There is no way to be sure what's wrong, as this depends on the
environment you are using and the packaging system that you have, but
at first glance I think you should first try updating PERL5LIB so
that it points to the location where the module has been installed.
Good luck! It does not sound complicated to address anyway.
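A minimal sketch of that PERL5LIB approach. The path is an assumption taken
from the install log elsewhere in this thread, where cpan installed under
/root/perl5; adjust it to wherever the module actually landed:

```shell
# Hypothetical install prefix -- adjust to the directory shown by your
# cpan install log ("Installing /root/perl5/lib/perl5/...").
PERL5LIB="/root/perl5/lib/perl5${PERL5LIB:+:$PERL5LIB}"
export PERL5LIB
echo "$PERL5LIB"
```

Note that a tree under /root is usually not readable by other users such as
postgres, so installing the module system-wide (or somewhere the postgres
user can read) may be necessary in addition to setting PERL5LIB.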
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Fwd: getting error while parsing log file using pgbadger

2017-07-26 Thread PAWAN SHARMA
On Wed, Jul 26, 2017 at 2:42 PM, Michael Paquier 
wrote:

> On Wed, Jul 26, 2017 at 10:59 AM, PAWAN SHARMA
>  wrote:
> >
> > Hi All,
> >
> > I am facing below error while parsing log file.
> >
> > [postgres@abc pgaudit]$ pgbadger -f stderr
> postgres-2017-07-25_121445.csv
> > Can't locate Text/CSV_XS.pm in @INC (@INC contains:
> /usr/local/lib64/perl5
> > /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
> > /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
> > /bin/pgbadger line 2620.
> >
> > Please suggest..!!
>
> This means that you need to install the Perl module Text::CSV_XS:
> http://search.cpan.org/~hmbrand/Text-CSV_XS-1.31/CSV_XS.pm
> This is defined in the requirements of pgbadger's README when parsing csv
> files:
> https://github.com/dalibo/pgbadger/blob/master/README
> --
> Michael
>


Hi Michael,

Thanks for the response.

I have tried to install Text::CSV_XS, but I am still facing the same error.


cpan[1]> install Text::CSV_XS
Reading '/root/.cpan/Metadata'
  Database was generated on Wed, 26 Jul 2017 09:17:02 GMT
Running install for module 'Text::CSV_XS'
Running make for H/HM/HMBRAND/Text-CSV_XS-1.31.tgz
Fetching with HTTP::Tiny:
http://mirror.liquidtelecom.com/CPAN/authors/id/H/HM/HMBRAND/Text-CSV_XS-1.31.tgz
Fetching with HTTP::Tiny:
http://mirror.liquidtelecom.com/CPAN/authors/id/H/HM/HMBRAND/CHECKSUMS
Checksum for
/root/.cpan/sources/authors/id/H/HM/HMBRAND/Text-CSV_XS-1.31.tgz ok
Scanning cache /root/.cpan/build for sizes
DONE

  CPAN.pm: Building H/HM/HMBRAND/Text-CSV_XS-1.31.tgz

Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for Text::CSV_XS
Writing MYMETA.yml and MYMETA.json
cp CSV_XS.pm blib/lib/Text/CSV_XS.pm
Running Mkbootstrap for CSV_XS ()
chmod 644 "CSV_XS.bs"
"/usr/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- CSV_XS.bs
blib/arch/auto/Text/CSV_XS/CSV_XS.bs 644
"/usr/bin/perl" "/usr/share/perl5/vendor_perl/ExtUtils/xsubpp"  -typemap
'/usr/share/perl5/ExtUtils/typemap'  CSV_XS.xs > CSV_XS.xsc
mv CSV_XS.xsc CSV_XS.c
gcc -c   -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe
-fstack-protector -I/usr/local/include -D_LARGEFILE_SOURCE
-D_FILE_OFFSET_BITS=64 -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
-fexceptions -fstack-protector-strong --param=ssp-buffer-size=4
-grecord-gcc-switches -m64 -mtune=generic   -DVERSION=\"1.31\"
-DXS_VERSION=\"1.31\" -fPIC "-I/usr/lib64/perl5/CORE"   CSV_XS.c
rm -f blib/arch/auto/Text/CSV_XS/CSV_XS.so
gcc  -shared -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches
-m64 -mtune=generic -Wl,-z,relro  CSV_XS.o  -o
blib/arch/auto/Text/CSV_XS/CSV_XS.so  \
  \

chmod 755 blib/arch/auto/Text/CSV_XS/CSV_XS.so
Manifying 1 pod document
  HMBRAND/Text-CSV_XS-1.31.tgz
  /bin/make -- OK
'YAML' not installed, will not store persistent state
Running make test
"/usr/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- CSV_XS.bs
blib/arch/auto/Text/CSV_XS/CSV_XS.bs 644
PERL_DL_NONLAZY=1 "/usr/bin/perl" "-MExtUtils::Command::MM"
"-MTest::Harness" "-e" "undef *Test::Harness::Switches; test_harness(0,
'blib/lib', 'blib/arch')" t/*.t
t/00_pod.t  skipped: Test::Pod 1.00 required for testing POD
t/01_pod.t  skipped: Test::Pod::Coverage required for testing POD Coverage
t/10_base.t ... ok
t/12_acc.t  ok
t/15_flags.t .. ok
t/20_file.t ... ok
t/21_lexicalio.t .. ok
t/22_scalario.t ... ok
t/30_types.t .. ok
t/40_misc.t ... ok
t/41_null.t ... ok
t/45_eol.t  ok
t/46_eol_si.t . ok
t/50_utf8.t ... ok
t/51_utf8.t ... ok
t/55_combi.t .. ok
t/60_samples.t  ok
t/65_allow.t .. ok
t/70_rt.t . ok
t/75_hashref.t  ok
t/76_magic.t .. ok
t/77_getall.t . ok
t/78_fragment.t ... ok
t/79_callbacks.t .. ok
t/80_diag.t ... ok
t/81_subclass.t ... ok
t/85_util.t ... ok
t/90_csv.t  ok
t/91_csv_cb.t . ok
All tests successful.
Files=29, Tests=50834, 45 wallclock secs ( 4.18 usr  0.16 sys + 43.91 cusr
 0.69 csys = 48.94 CPU)
Result: PASS
  HMBRAND/Text-CSV_XS-1.31.tgz
  /bin/make test -- OK
Running make install
"/usr/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- CSV_XS.bs
blib/arch/auto/Text/CSV_XS/CSV_XS.bs 644
Manifying 1 pod document
Files found in blib/arch: installing files in blib/lib into architecture
dependent library tree
Installing
/root/perl5/lib/perl5/x86_64-linux-thread-multi/auto/Text/CSV_XS/CSV_XS.so
Installing /root/perl5/lib/perl5/x86_64-linux-thread-multi/Text/CSV_XS.pm
Installing /root/perl5/man/man3/Text::CSV_XS.3pm
Appending installation info to
/root/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod
  HMBRAND/Text-CSV_XS-1.31.tgz
  /bin/make install  -- OK

cpan[2]> quit
Terminal does not support GetHistory.
Lockfile removed.
[root@abc:/home/psharm89]#
#-> rpm -qa | 
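Worth noting in the log above: CPAN installed the module under /root/perl5
(a per-user local::lib tree), which is on root's module search path but not
on the postgres user's, which is consistent with pgbadger working for root
and failing for postgres. A quick way to check whether a given user's perl
can load the module (assuming perl is on PATH):

```shell
# Run as the user that will invoke pgbadger (e.g. postgres).
# Prints "ok" if Text::CSV_XS is loadable; otherwise prints a hint.
perl -MText::CSV_XS -e 'print "ok\n"' 2>/dev/null \
  || echo "Text::CSV_XS not found in @INC; set PERL5LIB or install system-wide"
```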

Re: [GENERAL] Fwd: getting error while parsing log file using pgbadger

2017-07-26 Thread Michael Paquier
On Wed, Jul 26, 2017 at 10:59 AM, PAWAN SHARMA
 wrote:
>
> Hi All,
>
> I am facing below error while parsing log file.
>
> [postgres@abc pgaudit]$ pgbadger -f stderr postgres-2017-07-25_121445.csv
> Can't locate Text/CSV_XS.pm in @INC (@INC contains: /usr/local/lib64/perl5
> /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
> /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
> /bin/pgbadger line 2620.
>
> Please suggest..!!

This means that you need to install the Perl module Text::CSV_XS:
http://search.cpan.org/~hmbrand/Text-CSV_XS-1.31/CSV_XS.pm
This is defined in the requirements of pgbadger's README when parsing csv files:
https://github.com/dalibo/pgbadger/blob/master/README
-- 
Michael


[GENERAL] Fwd: getting error while parsing log file using pgbadger

2017-07-26 Thread PAWAN SHARMA
Hi All,

I am facing the below error while parsing a log file.

[postgres@abc pgaudit]$ pgbadger -f stderr postgres-2017-07-25_121445.csv
Can't locate Text/CSV_XS.pm in @INC (@INC contains: /usr/local/lib64/perl5
/usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at
/bin/pgbadger line 2620.

Please suggest..!!

-Pawan


Re: [GENERAL] Monitoring of a hot standby with a largely idle master

2017-07-26 Thread Michael Paquier
On Mon, Jul 24, 2017 at 9:08 PM, Jeff Janes  wrote:
> On Sun, Jul 16, 2017 at 8:47 AM, Michael Paquier 
> wrote:
>> What do you think about the patch attached?
>
> Looks OK.  Should it mention specifically "On a hot standby" rather than "On
> a standby"?  Otherwise people might be left confused about how they are
> supposed to do this on a generic standby.  It is the kind of thing that is
> obvious once you know it, but confusing the first time you encounter it.

Yes, right. Let's update as you suggest.

>>  
>>   You can retrieve a list of WAL sender processes via the
>> - 
>> + 
>>   pg_stat_replication view. Large differences
>> between
>> In the previous paragraph I have noticed that the link reference is
>> incorrect. pg_stat_replication is listed under
>> monitoring-stats-dynamic-views-table.
>
> Yes, that is clearly wrong.  But why not link directly to the description of
> the view itself, pg-stat-replication-view, rather than the correct table
> which mentions the view?  Is that the accepted docs style to link to the
> more generic place?  (Same thing applies to your patch, it could link
> directly to pg-stat-wal-receiver-view.)

Yes, that's even better.

> Sorry for the delay; it took me a while to get the new doc build system to
> work (the solution seems to be "Don't use CentOS 6 anymore").

:)
Thanks for the review.
-- 
Michael


walreceiver-doc-v2.patch
Description: Binary data
