Re: Code of Conduct

2018-09-20 Thread Stephen Cook
On 2018-09-20 16:13, Bruce Momjian wrote:
> On Thu, Sep 20, 2018 at 05:20:55PM +0200, Chris Travers wrote:
>> I suspect most of us could probably get behind the groups listed in the
>> antidiscrimination section of the European Charter of Fundamental Rights at
>> least as a compromise.
>>
>> Quoting the English version:
>>
>> "Any discrimination based on any ground such as sex, race, colour, ethnic or
>> social origin, genetic features, language, religion or belief, political or 
>> any
>> other opinion, membership of a national minority, property, birth, 
>> disability,
>> age or sexual orientation shall be prohibited."
>>
>> The inclusion of "political or any other opinion" is a nice addition and
>> prevents a lot of concern.
> 
> Huh.  Certainly something to consider when we review the CoC in a year.
> 


Too bad it wasn't brought up earlier.


-- Stephen




Re: help with startup slave after pg_rewind

2018-09-20 Thread Michael Paquier
On Wed, Sep 19, 2018 at 10:29:44PM +, Dylan Luong wrote:
> After promoting slave to master, I completed a pg_rewind of the slave
> (old master) to the new master. But when I try to start the slave I am
> getting the following error.
>
> I tried to run pg_rewind again, but now it says I cannot do it as its
> already same timeline.

What did pg_rewind tell you after the first run?  If you remove the set
of WAL segments on the rewound instance and let it replay only segments
from the archive, are you able to get past the error?

There is an inconsistency in the WAL records you are trying to replay.
In this case a contrecord refers to a WAL record split across multiple
pages.  The WAL reader is expecting one, and cannot find it.  And that's
not normal.  My bet is that something is wrong in your failover flow
which you think is right.  It is hard to get that right.
--
Michael




Re: Out of Memory

2018-09-20 Thread Tom Lane
greigwise  writes:
> Hello, I'm running postgres 9.6.10 on Centos 7.   Seeing the occasional out
> of memory error trying to run a query.   In the logs I see something like
> this:

> Grand total: 462104832 bytes in 795 blocks; 142439136 free (819860 chunks);
> 319665696 used
> 2018-09-20 18:08:01 UTC  5ba3e1a2.7a8a dbname ERROR:  out of memory
> 2018-09-20 18:08:01 UTC  5ba3e1a2.7a8a dbname DETAIL:  Failed on request
> of size 2016.

> If I have 142439136 free, then why am I failing on a request of size 2016?

The free space must be in contexts other than the one that last little
request wanted space in.  Overall, you've got about 460MB of space
consumed in that session, so it's not *that* surprising that you got OOM.
(At least, it's unsurprising on a 32-bit machine.  If the server is
64-bit I'd have thought the kernel would be a bit more liberal.)

But anyway, this looks like a mighty inefficient usage pattern at best,
and maybe a memory leak at worst.  Can you create a self-contained test
case that does this?

regards, tom lane



Re: Out of Memory

2018-09-20 Thread greigwise
ulimit -a for the postgres user shows memory unlimited.

numactl --hardware gives command not found.

Thanks again.

Greig






Re: Out of Memory

2018-09-20 Thread Paul Carlucci
My first two guesses are ulimit or numa.  numactl --hardware will show you
how many nodes your box has and whether you're exhausting any of them.

On Thu, Sep 20, 2018, 6:11 PM greigwise  wrote:

> Hello, I'm running postgres 9.6.10 on Centos 7.   Seeing the occasional out
> of memory error trying to run a query.   In the logs I see something like
> this:
>
> Grand total: 462104832 bytes in 795 blocks; 142439136 free (819860 chunks);
> 319665696 used
> 2018-09-20 18:08:01 UTC  5ba3e1a2.7a8a dbname ERROR:  out of memory
> 2018-09-20 18:08:01 UTC  5ba3e1a2.7a8a dbname DETAIL:  Failed on
> request
> of size 2016.
>
> If I have 142439136 free, then why am I failing on a request of size 2016?
>
> Am I misunderstanding here?
>
> Thanks,
> Greig Wise
>
>
>
>
>


Out of Memory

2018-09-20 Thread greigwise
Hello, I'm running postgres 9.6.10 on Centos 7.   Seeing the occasional out
of memory error trying to run a query.   In the logs I see something like
this:

Grand total: 462104832 bytes in 795 blocks; 142439136 free (819860 chunks);
319665696 used
2018-09-20 18:08:01 UTC  5ba3e1a2.7a8a dbname ERROR:  out of memory
2018-09-20 18:08:01 UTC  5ba3e1a2.7a8a dbname DETAIL:  Failed on request
of size 2016.

If I have 142439136 free, then why am I failing on a request of size 2016?

Am I misunderstanding here?

Thanks,
Greig Wise






Re: Code of Conduct

2018-09-20 Thread Bruce Momjian
On Thu, Sep 20, 2018 at 05:20:55PM +0200, Chris Travers wrote:
> I suspect most of us could probably get behind the groups listed in the
> antidiscrimination section of the European Charter of Fundamental Rights at
> least as a compromise.
> 
> Quoting the English version:
> 
> "Any discrimination based on any ground such as sex, race, colour, ethnic or
> social origin, genetic features, language, religion or belief, political or 
> any
> other opinion, membership of a national minority, property, birth, disability,
> age or sexual orientation shall be prohibited."
> 
> The inclusion of "political or any other opinion" is a nice addition and
> prevents a lot of concern.

Huh.  Certainly something to consider when we review the CoC in a year.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: array must have even number of elements

2018-09-20 Thread Susan Hurst
 

Thanks, everyone! 

I get it now. It's not just an array but an hstore array. I changed my
code to include the original values, so now it works:

-- new will be substituted for $1 during execution with using clause
l_query_string := 'select ($1 #= hstore(array[' || l_orig_list || '],'
               || 'array[' || l_clean_list || '])).*';

Pavel... I am creating a trigger function that looks for columns with char,
varchar or text data types and purges incoming or updated data of extraneous
spaces and tabs, both within the string and on either end. Any table can use
the same function by calling it from a trigger. Now that it works, we can
refactor it to make it better. I would welcome your suggestions for
alternatives to hstore.
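
For reference, a minimal self-contained sketch of the technique being
described (function and trigger names are illustrative, not the actual
production code; it assumes the hstore extension is installed):

-- Sketch: trim/collapse whitespace in every char/varchar/text column of
-- whatever table the trigger is attached to.
CREATE OR REPLACE FUNCTION trim_string_cols() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
  col  text;
  val  text;
  keys text[] := '{}';
  vals text[] := '{}';
BEGIN
  FOR col IN
    SELECT attname FROM pg_attribute
    WHERE attrelid = TG_RELID AND attnum > 0 AND NOT attisdropped
      AND atttypid IN ('text'::regtype, 'varchar'::regtype, 'bpchar'::regtype)
  LOOP
    -- collapse runs of whitespace and trim both ends of this column's value
    EXECUTE format(
      'SELECT trim(regexp_replace(($1).%I::text, ''\s+'', '' '', ''g''))', col)
      USING NEW INTO val;
    keys := keys || col;
    vals := vals || val;
  END LOOP;
  -- keys and vals always pair up, so the even-elements rule is satisfied
  EXECUTE 'SELECT ($1 #= hstore($2, $3)).*' USING NEW, keys, vals INTO NEW;
  RETURN NEW;
END $$;

CREATE TRIGGER trim_before_dml
  BEFORE INSERT OR UPDATE ON dm.trg_test
  FOR EACH ROW EXECUTE PROCEDURE trim_string_cols();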

Thanks for your help! 

Sue 

---

Susan E Hurst
Principal Consultant
Brookhurst Data LLC
Email: susan.hu...@brookhurstdata.com
Mobile: 314-486-3261

On 2018-09-20 13:04, Pavel Stehule wrote: 

> Hi
> 
> On Thu, 20 Sep 2018 at 19:55, Susan Hurst  wrote:
> 
>> Why must an array have an even number of elements? I need to use a 
>> trigger function on any table, some of which may have an odd number of 
>> columns that I want to cleanse before inserting/updating.
> 
> The hstore function gets its parameters as a sequence of (key, value) pairs,
> so the number of elements must be even. An odd number of parameters signals a
> broken format.
> 
> Your example is pretty hard to decode. Maybe you should use a different
> function; I don't see a reason to use the hstore type there.
> 
> Regards 
> 
> Pavel 
> 
>> Is there a workaround for this?
>> 
>> ERROR: array must have even number of elements
>> 
>> SQL state: 2202E
>> 
>> Context: SQL statement "SELECT ($1 #=
>> hstore(array[trim(replace(regexp_replace($1.c,'( ){2,}',' ','g'),' ',' ')),
>> trim(replace(regexp_replace($1.vc,'( ){2,}',' ','g'),' ',' ')),
>> trim(replace(regexp_replace($1.t,'( ){2,}',' ','g'),' ',' '))])).*"
>> PL/pgSQL function store.trim_string_before_dml() line 44 at EXECUTE
>> 
>> -- my test table
>> create table dm.trg_test (c character(8), vc varchar(16), t text);
>> insert into dm.trg_test (c,vc,t) values ('ctest',' vctest ',' 
>> ttest ');
>> 
>> -- code snippet that produced the error.
>> -- new will be substituted for $1 during execution with using clause
>> l_query_string := 'select ($1 #= hstore(array[' || l_column_list || 
>> '])).*';
>> execute format(l_query_string) using new into new;
>> return new;
>> 
>> Thanks for your help!
>> 
>> -- 
>> 
>> Susan E Hurst
>> Principal Consultant
>> Brookhurst Data LLC
>> Email: susan.hu...@brookhurstdata.com
>> Mobile: 314-486-3261
 



Re: array must have even number of elements

2018-09-20 Thread Pavel Stehule
Hi

On Thu, 20 Sep 2018 at 19:55, Susan Hurst <susan.hu...@brookhurstdata.com> wrote:

>
> Why must an array have an even number of elements?  I need to use a
> trigger function on any table, some of which may have an odd number of
> columns that I want to cleanse before inserting/updating.
>

The hstore function gets its parameters as a sequence of (key, value)
pairs, so the number of elements must be even. An odd number of parameters
signals a broken format.

Your example is pretty hard to decode. Maybe you should use a different
function; I don't see a reason to use the hstore type there.
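
A tiny illustration of the pairing rule (standalone, assuming the hstore
extension is installed):

-- even number of elements: they pair up as key1, value1, key2, value2, ...
SELECT hstore(ARRAY['a','1','b','2']);   -- "a"=>"1", "b"=>"2"

-- odd number of elements: there is no value for the last key
-- ERROR:  array must have even number of elements
SELECT hstore(ARRAY['a','1','b']);

-- the two-array form sidesteps the issue: one array of keys, one of values
SELECT hstore(ARRAY['a','b'], ARRAY['1','2']);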

Regards

Pavel




>
> Is there a workaround for this?
>
>
> ERROR: array must have even number of elements
>
> SQL state: 2202E
>
> Context: SQL statement "SELECT ($1 #=
> hstore(array[trim(replace(regexp_replace($1.c,'( ){2,}',' ','g'),' ','
> ')),trim(replace(regexp_replace($1.vc,'( ){2,}',' ','g'),' ','
> ')),trim(replace(regexp_replace($1.t,'( ){2,}',' ','g'),' ',' '))])).*"
> PL/pgSQL function store.trim_string_before_dml() line 44 at EXECUTE
>
>
>
> --  my test table
> create table dm.trg_test (c character(8), vc varchar(16), t text);
> insert into dm.trg_test (c,vc,t) values ('ctest','   vctest  ','
> ttest   ');
>
>
> -- code snippet that produced the error.
> -- new will be substituted for $1 during execution with using clause
> l_query_string := 'select ($1 #= hstore(array[' || l_column_list ||
> '])).*';
> execute format(l_query_string) using  new into   new;
> return new;
>
>
> Thanks for your help!
>
> --
> 
> Susan E Hurst
> Principal Consultant
> Brookhurst Data LLC
> Email: susan.hu...@brookhurstdata.com
> Mobile: 314-486-3261
>
>


array must have even number of elements

2018-09-20 Thread Susan Hurst



Why must an array have an even number of elements?  I need to use a 
trigger function on any table, some of which may have an odd number of 
columns that I want to cleanse before inserting/updating.



Is there a workaround for this?


ERROR: array must have even number of elements

SQL state: 2202E

Context: SQL statement "SELECT ($1 #= 
hstore(array[trim(replace(regexp_replace($1.c,'( ){2,}',' ','g'),' ',' 
')),trim(replace(regexp_replace($1.vc,'( ){2,}',' ','g'),' ',' 
')),trim(replace(regexp_replace($1.t,'( ){2,}',' ','g'),' ',' '))])).*" 
PL/pgSQL function store.trim_string_before_dml() line 44 at EXECUTE




--  my test table
create table dm.trg_test (c character(8), vc varchar(16), t text);
insert into dm.trg_test (c,vc,t) values ('ctest','   vctest  ','   
ttest   ');



-- code snippet that produced the error.
-- new will be substituted for $1 during execution with using clause
l_query_string := 'select ($1 #= hstore(array[' || l_column_list || 
'])).*';

execute format(l_query_string) using  new into   new;
return new;


Thanks for your help!

--

Susan E Hurst
Principal Consultant
Brookhurst Data LLC
Email: susan.hu...@brookhurstdata.com
Mobile: 314-486-3261



Re: Advice on machine specs for growth

2018-09-20 Thread Laurenz Albe
Steven Winfield wrote:
> Just to be clear, are you suggesting (on linux) setting overcommit_memory = 2
> but leaving overcommit_ratio alone, which defaults to 50%?
> I’ve tried setting them to 2 and 0% respectively in the past and it didn’t 
> end well…

Yes, vm.overcommit_memory = 2

As for vm.overcommit_ratio, set it to 100 * (RAM - swap) / RAM.
That is because Linux will commit memory up to
swap + overcommit_ratio * RAM / 100, and you don't want to commit more
than the available RAM.
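
For example (illustrative numbers): with 64GB of RAM and 8GB of swap,
vm.overcommit_ratio = 100 * (64 - 8) / 64 = 87.5, so set it to 87. The
commit limit is then 8 + 64 * 87 / 100 ≈ 63.7GB, just under physical RAM.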

> Also I’ve read, and now use, swappiness = 1 which is supposed to disable 
> swapping
> entirely except when not swapping would cause OOM.
> Any thoughts on that?

My thought is vm.swappiness = 0

If you don't overcommit memory, you should never have to swap.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: Converting to number with given format

2018-09-20 Thread Gabriel Furstenheim Milerud
Hi Ken,
Thanks a lot, that's a cool idea and I think that it will cover my needs.

On Thu, 20 Sep 2018 at 02:04, Ken Tanzer  wrote:

> On Wed, Sep 19, 2018 at 6:34 AM Gabriel Furstenheim Milerud <
> furstenh...@gmail.com> wrote:
>
> Maybe that is not possible with numbers? To say in a format something like
>> "my numbers have comma as decimal separator and no thousands separators" or
>> "my numbers are point separated and have comma as thousands separator"
>>
>>
> Would stripping out the thousand separator, and leaving in the decimal
> separator work?
>
> SELECT replace('9,000.34',',','')::numeric;
>  replace
> ---------
>  9000.34
>
> If so, then (conceptually) does this work?
>
> SELECT replace(
>   replace(my_numeric_string, user_thousand_sep, ''),
>   user_decimal_sep, system_decimal_sep
> )::numeric
>
>
> Or maybe I'm missing something about this!
>
> Cheers,
>
> Ken
>
>
>
> --
> AGENCY Software
> A Free Software data system
> By and for non-profits
> http://agency-software.org/
> https://demo.agency-software.org/client
> ken.tan...@agency-software.org
> (253) 245-3801
>
> Subscribe to the mailing list to learn more about AGENCY or follow the
> discussion.
>
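
A concrete sketch of Ken's idea as a reusable helper (the function name and
signature are illustrative, not from the thread):

CREATE FUNCTION parse_localized_numeric(s text,
                                        thousands_sep text,
                                        decimal_sep text)
RETURNS numeric LANGUAGE sql IMMUTABLE AS $$
  -- strip the thousands separator, then swap the decimal separator for '.'
  SELECT replace(replace(s, thousands_sep, ''), decimal_sep, '.')::numeric
$$;

SELECT parse_localized_numeric('1.234.567,89', '.', ',');  -- 1234567.89
SELECT parse_localized_numeric('9,000.34', ',', '.');      -- 9000.34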


Re: Code of Conduct

2018-09-20 Thread Chris Travers
On Wed, Sep 19, 2018 at 11:31 PM Bruce Momjian  wrote:

> On Wed, Sep 19, 2018 at 11:24:29AM +1000, Julian Paul wrote:
> > It's overly long and convoluted.
> >
> > "inclusivity" Is a ideologue buzzword of particular individuals that
> offer
> > very little value apart from excessive policing of speech and behaviour
> > assumed to be a problem where none exist.
> >
> > "Personal attacks and negative comments on personal characteristics are
> > unacceptable, and will not be permitted. Examples of personal
> > characteristics include, but are not limited to age, race, national
> origin
> > or ancestry, religion, gender, or sexual orientation."
> >
> > So just leaving it at "Personal attacks" and ending it there won't do
> > obviously. I'm a big advocate of people sorting out their own personal
> > disputes in private but...
> >
> > "further personal attacks (public or *private*);"
> >
> > ...let's assume people don't have the maturity for that and make it all
> > public.
> >
> > "may be considered offensive by fellow members" - Purely subjective and
> > irrelevant to a piece of community software.
>
> You might notice that a bullet list was removed and those example items
> were added 18 months ago:
>
>
> https://wiki.postgresql.org/index.php?title=Code_of_Conduct&diff=31924&oldid=29402
>
> I realize that putting no examples has its attractions, but some felt
> that having examples would be helpful.  I am not a big fan of the
> "protected groups" concept because it is often exploited, which is why
> they are listed more as examples.
>

I suspect most of us could probably get behind the groups listed in the
antidiscrimination section of the European Charter of Fundamental Rights at
least as a compromise.

Quoting the English version:

"Any discrimination based on any ground such as sex, race, colour, ethnic
or social origin, genetic features, language, religion or belief, political
or any other opinion, membership of a national minority, property, birth,
disability, age or sexual orientation shall be prohibited."

The inclusion of "political or any other opinion" is a nice addition and
prevents a lot of concern.

>
> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
> + As you are, so once was I.  As I am, so you will be. +
> +  Ancient Roman grave inscription +
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 7:25 PM Durgamahesh Manne 
wrote:

>
>
> On Thu, Sep 20, 2018 at 6:39 PM Andreas Kretschmer <
> andr...@a-kretschmer.de> wrote:
>
>>
>>
>> On 20.09.2018 at 13:11, Durgamahesh Manne wrote:
>> > Query was executed at less time without distinct
>> >
>> > As well as query was taking around 7 minutes to complete execution
>> > with distinct
>> >
>> >  select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
>> > rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
>> > sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
>> > join table2 sub_head on
>> > rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
>> > rec."bFetch"=false and sub_head."bFetch"=false ;
>> >
>> > I need to execute above distinct query at less time as distinct query
>> > was taking more time to execute  even i have created indexes on
>> > required columns of the tables
>> >
>>
>>  > Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
>>  >   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>>  >         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>>  >         Sort Method: external merge  Disk: 3923224kB
>>
>>
>> as you can see: there are 40,500,000 rows to sort to filter out
>> duplicate rows, and the result contains 'only' 3,700,000 rows. But for this
>> step the database needs nearly 4GB on-disk. This will, of course, take
>> some time.
>>
>> If you have enough RAM you can try to set work_mem to 5 or 6 GB to
>> change the plan to an in-memory sort. But keep in mind, this is
>> dangerous! If the machine doesn't have enough free RAM the kernel can
>> decide to start OOM-killing processes.
>>
>> What kind of disks do you have? Maybe you can use a separate fast SSD as
>> temp_tablespaces?
>>
>>
>> Regards, Andreas
>> --
>>
>> 2ndQuadrant - The PostgreSQL Support Company.
>> www.2ndQuadrant.com
>>
>>
>>
> Hi
>
> sdb[HDD]
> sdc[HDD]
> sda[HDD]
>
> I checked in Linux: the disks are HDDs
>
> Regards
>
>
>
Hi,

The DISTINCT query executed very fast after I temporarily increased the
work_mem value to 3GB.

Thank you very much for this valuable information.

Now I would like to ask one question related to built-in BDR replication:
when will built-in BDR replication be available for use in production?

Can I use v3 built-in replication in production?

Please also let me know about the configuration of v3 BDR built-in
replication.



Regards

Durgamahesh Manne


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 6:39 PM Andreas Kretschmer 
wrote:

>
>
> On 20.09.2018 at 13:11, Durgamahesh Manne wrote:
> > Query was executed at less time without distinct
> >
> > As well as query was taking around 7 minutes to complete execution
> > with distinct
> >
> >  select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> > rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> > sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
> > join table2 sub_head on
> > rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
> > rec."bFetch"=false and sub_head."bFetch"=false ;
> >
> > I need to execute above distinct query at less time as distinct query
> > was taking more time to execute  even i have created indexes on
> > required columns of the tables
> >
>
>  > Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
>  >   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>  >         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>  >         Sort Method: external merge  Disk: 3923224kB
>
>
> as you can see: there are 40,500,000 rows to sort to filter out
> duplicate rows, and the result contains 'only' 3,700,000 rows. But for this
> step the database needs nearly 4GB on-disk. This will, of course, take
> some time.
>
> If you have enough RAM you can try to set work_mem to 5 or 6 GB to
> change the plan to an in-memory sort. But keep in mind, this is
> dangerous! If the machine doesn't have enough free RAM the kernel can
> decide to start OOM-killing processes.
>
> What kind of disks do you have? Maybe you can use a separate fast SSD as
> temp_tablespaces?
>
>
> Regards, Andreas
> --
>
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi

sdb[HDD]
sdc[HDD]
sda[HDD]

I checked in Linux: the disks are HDDs

Regards


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Andreas Kretschmer




On 20.09.2018 at 13:11, Durgamahesh Manne wrote:

The query executed in much less time without DISTINCT.

With DISTINCT, the query took around 7 minutes to complete.


 select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" , 
rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,  
sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec 
join table2 sub_head on 
rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where 
rec."bFetch"=false and sub_head."bFetch"=false ;


I need the above DISTINCT query to execute in less time, as it takes a
long time even though I have created indexes on the required columns of
the tables.




> Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
>   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>         Sort Method: external merge  Disk: 3923224kB



as you can see: there are 40,500,000 rows to sort to filter out
duplicate rows, and the result contains 'only' 3,700,000 rows. But for this
step the database needs nearly 4GB on-disk. This will, of course, take
some time.


If you have enough RAM you can try to set work_mem to 5 or 6 GB to
change the plan to an in-memory sort. But keep in mind, this is
dangerous! If the machine doesn't have enough free RAM the kernel can
decide to start OOM-killing processes.
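
For example, scoped to a single session so other backends keep their normal
limit (the value here is illustrative):

SET work_mem = '5GB';   -- session-local override, not postgresql.conf
-- rerun the DISTINCT query / EXPLAIN ANALYZE here
RESET work_mem;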


What kind of disks do you have? Maybe you can use a separate fast SSD as 
temp_tablespaces?



Regards, Andreas
--

2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Not debuginfo rpm for postgresql96-server package?

2018-09-20 Thread Zexuan Luo
Finally I found that postgresql96-debuginfo actually contains the
debuginfo of the postgres executable.
So the postgresql96-debuginfo package contains both the debuginfo of
the client and the debuginfo of the server.
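
For anyone following along, the sort of commands involved (the package name
is from this thread; the binary path assumes a standard PGDG layout):

yum install postgresql96-debuginfo
gdb /usr/pgsql-9.6/bin/postgres    # server symbols should now resolve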
On Thu, 20 Sep 2018 at 18:28, Zexuan Luo wrote:
>
> Hello all:
> Is there any debuginfo rpm package for postgresql96-server package?
> I searched on https://yum.postgresql.org/9.6/redhat/rhel-7-x86_64/
> and also via 'yum search', but no matching package was found.
> There is the postgresql96-debuginfo package, but it is for the
> postgresql96 package (the client one), not for the server one.
>
> Thanks for any responses.



Re: Regrading brin_index on required column of the table

2018-09-20 Thread Alban Hertroys
On Thu, 20 Sep 2018 at 11:42, Durgamahesh Manne
 wrote:

...

> ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>       Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>       Sort Method: external merge  Disk: 3923224kB
The above is a clear sign of a problem.
To get distinct records, the results need to be sorted, and that
doesn't fit in the available memory and spills to disk.

The actual filters on the boolean fields, even though they are
performed in seq-scans, hardly take any time at all. The hash join to
combine them takes a bit over 6s.

> so I am unable to reduce the query execution time; it takes around 7
> minutes to execute with indexes and without indexes
>
> please help in reducing the query execution time

...

> On Wed, Sep 19, 2018 at 9:07 PM Alban Hertroys  wrote:
>>
>> The results of explain analyze would shed light on the core problem.
>>
>> My guess is that your conditions are not very selective - ie. most
>> records in both tables have bFetch = false - and therefore you are
>> retrieving most of your data and that is what's taking 7 minutes. No
>> index is going to fix that.
>>
>> If those boolean values are distributed very unevenly (say 99.9% has
>> false and 0.1% has true), you may get better results by excluding the
>> records with 'true' values (instead of including those that are
>> 'false'), for example by using a where not exists(...) subquery.
>>
>> Obviously, that still won't help if you're just fetching a lot of data.
>> On Wed, 19 Sep 2018 at 16:23, Durgamahesh Manne
>>  wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman  wrote:
>> >>
>> >>
>> >>
>> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
>> >> Sent: Wednesday, September 19, 2018 10:04 AM
>> >> To: Igor Neyman 
>> >> Subject: Re: Regrading brin_index on required column of the table
>> >>
>> >> On Wed, Sep 19, 2018 at 7:19 PM Igor Neyman  
>> >> wrote:
>> >>
>> >>
>> >>
>> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
>> >> Sent: Wednesday, September 19, 2018 9:43 AM
>> >> To: PostgreSQL mailing lists 
>> >> Subject: Regrading brin_index on required column of the table
>> >>
>> >> Hi
>> >>
>> >> Respected postgres community members
>> >>
>> >>
>> >>
>> >> I have created BRIN index on few columns of the table without any issues. 
>> >> But i am unable to create BRIN index on one column of the table as i got 
>> >> error listed below
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using brin 
>> >> ("dFetch");
>> >>
>> >> ERROR:  data type boolean has no default operator class for access method 
>> >> "brin"
>> >>
>> >> HINT:  You must specify an operator class for the index or define a 
>> >> default operator class for the data type.
>> >>
>> >>
>> >>
>> >>  below is the column description:
>> >>
>> >> Column    datatype    collation    nullable    default    storage
>> >>
>> >>
>> >>
>> >>  dFetch    boolean    false    plain
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> so please help in creating of the BRIN index on above column of the table 
>> >> .
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> Regards
>> >>
>> >>
>> >>
>> >> Durgamahesh Manne
>> >>
>> >>
>> >>
>> >> Why would you want BRIN index on Boolean-type column?
>> >>
>> >> What kind of interval will you specify?
>> >>
>> >>
>> >>
>> >> Regards,
>> >>
>> >> Igor Neyman
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>  Hi
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>  I have complex query like for ex  select  distinct 
>> >> sub_head."vchSubmittersCode" ,rec."vchFileName" , 
>> >> rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,  
>> >> sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  "table1" rec 
>> >> join  "table2" sub_head on 
>> >> rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where  
>> >> rec."bFetch"=false and sub_head."bFetch"=false ;
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> Query taken around 7 minutes time to execute without indexes on required 
>> >> columns
>> >>
>> >>
>> >>
>> >>  SO  i need to execute this distinct query at less time by creating 
>> >> indexes on required columns of the tables
>> >>
>> >>
>> >>
>> >> i have created brin indexes on vchsubmitterscode of two tables
>> >>
>> >>
>> >>
>> >> i am not able to create brin indexes on bfetch tables as i got a error  
>> >> ERROR:  data type boolean has no default operator class for access method 
>> >> "brin"
>> >>
>> >> HINT:  You must specify an operator class for the index or define a 
>> >> 

Re: Regrading brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 3:12 PM Durgamahesh Manne 
wrote:

>
>
> Hi
>
> As per your suggestion
>
>
> i ran explain analyse for distinct query
>
> the size of the table1 is 30mb
> the size of the table2 is 368kb
>
> EXPLAIN ANALYZE select distinct sub_head."vchSubmittersCode", rec."vchFileName",
> rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
> sub_head."vchValuationDate", rec."vchAdvisorLabel" from table1 rec
> join table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
> where rec."bFetch"=false and sub_head."bFetch"=false;
>
>
>
>  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
>    ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>          Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>          Sort Method: external merge  Disk: 3923224kB
>          ->  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.339..6939.296 rows=4050 loops=1)
>                Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
>                ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.011..56.998 rows=10 loops=1)
>                      Filter: (NOT "bFetch")
>                      Rows Removed by Filter: 4706
>                ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.319..0.319 rows=405 loops=1)
>                      Buckets: 1024  Batches: 1  Memory Usage: 26kB
>                      ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.005..0.248 rows=405 loops=1)
>                            Filter: (NOT "bFetch")
>                            Rows Removed by Filter: 375
>  Planning time: 0.237 ms
>  Execution time: 390252.089 ms
>
>
> so I am unable to reduce the query execution time; it takes around 7
> minutes to execute with indexes and without indexes
>
> please help in reducing the query execution time
>
>
> Regards
> Durgamahesh Manne
>
>
> On Wed, Sep 19, 2018 at 9:07 PM Alban Hertroys  wrote:
>
>> The results of explain analyze would shed light on the core problem.
>>
>> My guess is that your conditions are not very selective - ie. most
>> records in both tables have bFetch = false - and therefore you are
>> retrieving most of your data and that is what's taking 7 minutes. No
>> index is going to fix that.
>>
>> If those boolean values are distributed very unevenly (say 99.9% has
>> false and 0.1% has true), you may get better results by excluding the
>> records with 'true' values (instead of including those that are
>> 'false'), for example by using a where not exists(...) subquery.
>>
>> Obviously, that still won't help if you're just fetching a lot of data.
>> On Wed, 19 Sep 2018 at 16:23, Durgamahesh Manne
>>  wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman 
>> wrote:
>> >>
>> >>
>> >>
>> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
>> >> Sent: Wednesday, September 19, 2018 10:04 AM
>> >> To: Igor Neyman 
>> >> Subject: Re: Regrading brin_index on required column of the table
>> >>
>> >> On Wed, Sep 19, 2018 at 7:19 PM Igor Neyman 
>> wrote:
>> >>
>> >>
>> >>
>> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
>> >> Sent: Wednesday, September 19, 2018 9:43 AM
>> >> To: PostgreSQL mailing lists 
>> >> Subject: Regrading brin_index on required column of the table
>> >>
>> >> Hi
>> >>
>> >> Respected postgres community members
>> >>
>> >>
>> >>
>> >> I have created BRIN index on few columns of the table without any
>> issues. But i am unable to create BRIN index on one column of the table as
>> i got error listed below
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using
>> brin ("dFetch");
>> >>
>> >> ERROR:  data type boolean has no default operator class for access
>> method "brin"
>> >>
>> >> HINT:  You must specify an operator class for the index or define a
>> default operator class for the data type.
>> >>
>> >>
>> >>
>> >>  below is the column description:
>> >>
> >> Column    datatype    collation    nullable    default    storage
>> >>
>> >>
>> >>
> >>  dFetch    boolean    false    plain
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> 

Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 3:41 PM Durgamahesh Manne 
wrote:

> hi
> as per your request
> i ran below query without distinct
>
> select  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec join
> table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
> where  rec."bFetch"=false and sub_head."bFetch"=false ;
>
> the above query took around 47 sec to execute
> the above query took around 7 minutes to execute with distinct
>
>
>
> On Thu, Sep 20, 2018 at 3:28 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer <
>> andr...@a-kretschmer.de> wrote:
>>
>>> Hi,
>>>
>>>
>>> the problem is there:
>>>
>>>
>>> On 20.09.2018 at 11:48, Durgamahesh Manne wrote:
>>> > Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
>>> >   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>>> >         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>>> >         Sort Method: external merge  Disk: 3923224kB
>>>
>>> Please check the execution time without DISTINCT.
>>>
>>> Regards, Andreas
>>> --
>>>
>>> 2ndQuadrant - The PostgreSQL Support Company.
>>> www.2ndQuadrant.com
>>>
>>>
>> hi
>>
>> as per your request
>>
>> i ran explain analyze query without distinct
>>
>>
>>
>> QUERY PLAN
>> ----------
>> Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.429..6763.942 rows=4050 loops=1)
>>   Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
>>   ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.006..48.610 rows=10 loops=1)
>>         Filter: (NOT "bFetch")
>>         Rows Removed by Filter: 4706
>>   ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.404..0.404 rows=405 loops=1)
>>         Buckets: 1024  Batches: 1  Memory Usage: 26kB
>>         ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.004..0.326 rows=405 loops=1)
>>               Filter: (NOT "bFetch")
>>               Rows Removed by Filter: 375
>> Planning time: 0.351 ms
>> Execution time: 8371.819 ms
>> (12 rows)
>>
>>

Hi


The query executed in much less time without DISTINCT.

With DISTINCT, the query took around 7 minutes to complete.

 select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec join
table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where  rec."bFetch"=false and sub_head."bFetch"=false ;

I need the above DISTINCT query to execute in less time, as it takes a
long time even though I have created indexes on the required columns of
the tables.


Regards
Durgamahesh Manne


Re: Pgbouncer and postgres

2018-09-20 Thread Fabio Pardi
I tried to compile 1.8.1 on my (now) up to date Centos 7 and everything
went ok. I can run pgbouncer and login to it.


this is the relevant part of the diff between config.log files


> config.status:1112: lib/usual/config.h is unchanged
4676c4689
< CPPFLAGS=''
---
> CPPFLAGS=' -I/usr/local/lib/include'

4690c4703
< LDFLAGS='-g  -Wl,--as-needed'
---
> LDFLAGS='-g  -Wl,--as-needed -L/usr/local/lib/lib'


Together with the socket problem mentioned earlier, this makes me think
that we are overlooking something or missing some important piece of
information.

regards,

fabio pardi


On 09/19/2018 06:11 PM, Nicola Contu wrote:
> No the old one is stopped, so not sure why it says the port is in use.
> 
> Attached the config.log
> 
> Thanks
> 
> On Wed, 19 Sep 2018 at 18:02, Adrian Klaver
> <adrian.kla...@aklaver.com> wrote:
> 
> On 9/19/18 8:39 AM, ncontu1 wrote:
> > So unexpectedly with the rpms, it installed it on a different
> place and it is
> > working from command line
> >
> > [root@cmd-dev1 pgbouncer]# /bin/pgbouncer /etc/pgbouncer/pgbouncer.ini -u postgres
> > 2018-09-19 15:38:36.081 46322 LOG File descriptor limit: 1024
> (H:4096),
> > max_client_conn: 100, max fds possible: 230
> > 2018-09-19 15:38:36.082 46322 LOG Stale pidfile, removing
> > 2018-09-19 15:38:36.082 46322 LOG listening on 0.0.0.0:6543
> 
> > 2018-09-19 15:38:36.082 46322 LOG listening on ::/6543
> > 2018-09-19 15:38:36.083 46322 WARNING Cannot listen on
> > unix:/tmp/.s.PGSQL.6543: bind(): Address already in use
> 
> Is the compiled version still running?
> 
> Or is something else sitting on the 6543 socket?
> 
> > 2018-09-19 15:38:36.083 46322 LOG process up: pgbouncer 1.8.1,
> libevent
> > 2.0.21-stable (epoll), adns: c-ares 1.10.0, tls: OpenSSL
> 1.0.2k-fips  26 Jan
> > 2017
> >
> > [root@cmd-dev1 ~]# psql -U ncontu -p 6543 cmd3dev -h IP_ADD
> > Password for user ncontu:
> > psql (10.5)
> > Type "help" for help.
> >
> > cmd3dev=# \q
> >
> >
> > So at this point... I don't really know why compiled from source
> it does not
> > work.
> 
> Not sure.
> 
> For those that might know it would help to provide the complete
> ./configure used and the config.log(or equivalent)
> 
> >
> >
> >
> >
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 
> 
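
One generic way to see what is holding that socket (tool invocations are
illustrative, not from the thread):

ss -xlp | grep '.s.PGSQL.6543'    # which process listens on the unix socket
lsof /tmp/.s.PGSQL.6543           # alternative view of the same thing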



Not debuginfo rpm for postgresql96-server package?

2018-09-20 Thread Zexuan Luo
Hello all:
Is there any debuginfo rpm package for postgresql96-server package?
I searched on https://yum.postgresql.org/9.6/redhat/rhel-7-x86_64/
and also via 'yum search', but no matching package was found.
There is the postgresql96-debuginfo package, but it is for the
postgresql96 package (the client one), not for the server one.

Thanks for any responses.



Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
Hi,
As per your request, I ran the query below without DISTINCT:

select  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec join
table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where  rec."bFetch"=false and sub_head."bFetch"=false ;

The above query took around 47 seconds to execute;
with DISTINCT it took around 7 minutes.



On Thu, Sep 20, 2018 at 3:28 PM Durgamahesh Manne 
wrote:

>
>
> On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer <
> andr...@a-kretschmer.de> wrote:
>
>> Hi,
>>
>>
>> the problem is there:
>>
>>
>> On 20.09.2018 at 11:48, Durgamahesh Manne wrote:
>> > Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
>> >   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>> >         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>> >         Sort Method: external merge  Disk: 3923224kB
>>
>> Please check the execution time without DISTINCT.
>>
>> Regards, Andreas
>> --
>>
>> 2ndQuadrant - The PostgreSQL Support Company.
>> www.2ndQuadrant.com
>>
>>
> Hi,
>
> As per your request, I ran the EXPLAIN ANALYZE query without DISTINCT:
>
>
>
> QUERY PLAN
> ----------
> Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.429..6763.942 rows=4050 loops=1)
>   Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
>   ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.006..48.610 rows=10 loops=1)
>         Filter: (NOT "bFetch")
>         Rows Removed by Filter: 4706
>   ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.404..0.404 rows=405 loops=1)
>         Buckets: 1024  Batches: 1  Memory Usage: 26kB
>         ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.004..0.326 rows=405 loops=1)
>               Filter: (NOT "bFetch")
>               Rows Removed by Filter: 375
> Planning time: 0.351 ms
> Execution time: 8371.819 ms
> (12 rows)
>
>


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer 
wrote:

> Hi,
>
>
> the problem is there:
>
>
> > On 20.09.2018 at 11:48, Durgamahesh Manne wrote:
> > Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
> >   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
> >         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
> >         Sort Method: external merge  Disk: 3923224kB
>
> Please check the execution time without DISTINCT.
>
> Regards, Andreas
> --
>
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
Hi,

As per your request, I ran the EXPLAIN ANALYZE query without DISTINCT:


 QUERY PLAN
 ----------
 Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.429..6763.942 rows=4050 loops=1)
   Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
   ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.006..48.610 rows=10 loops=1)
         Filter: (NOT "bFetch")
         Rows Removed by Filter: 4706
   ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.404..0.404 rows=405 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 26kB
         ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.004..0.326 rows=405 loops=1)
               Filter: (NOT "bFetch")
               Rows Removed by Filter: 375
 Planning time: 0.351 ms
 Execution time: 8371.819 ms
(12 rows)


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Andreas Kretschmer

Hi,


the problem is there:


On 20.09.2018 at 11:48, Durgamahesh Manne wrote:
 Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
         Sort Method: external merge  Disk: 3923224kB


Please check the execution time without DISTINCT.

Regards, Andreas
--

2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Wed, Sep 19, 2018 at 8:27 PM Andreas Kretschmer 
wrote:

>
>
> On 19.09.2018 at 16:43, Durgamahesh Manne wrote:
> >
> >
> > On Wed, Sep 19, 2018 at 8:02 PM Andreas Kretschmer
> > <andr...@a-kretschmer.de> wrote:
> >
> >
> >
> > On 19.09.2018 at 15:51, Durgamahesh Manne wrote:
> > > I have created BRIN index on few columns of the table without any
> > > issues. But i am unable to create BRIN index on one column of the
> > > table as i got error listed below
> > >
> > >
> > > [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei
> > using
> > > brin ("dFetch");
> > > ERROR:  data type boolean has no default operator class for access
> > > method "brin"
> > > HINT:  You must specify an operator class for the index or define a
> > > default operator class for the data type.
> >
> > honestly, a BRIN-Index on a bool-column doesn't make much sense. What do
> > you want to achieve? Maybe a partial index with a where-condition on that
> > column makes much more sense.
> >
> >
> > Regards, Andreas
> >
> > --
> > 2ndQuadrant - The PostgreSQL Support Company.
> > www.2ndQuadrant.com 
> >
> >
> >
> > Hi
> >
> >
> > I want to execute distinct query at less possible time
> >
> > for that reason ,Even i have already tried with BTREE indexes & HASH
> > indexes on required columns .distinct query execution time was not
> reduced
> >
> > select  distinct sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> > rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> > sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  "table1" rec
> > join  "table2" sub_head on
> > rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
> > rec."bFetch"=false and sub_head."bFetch"=false ;
> >
> > Query taken around 7 minutes time to execute with BTREE indexes & HASH
> > indexes on required columns
> >
>
> try an index like
>
> create index idx_t1 on table1 ("vchSubmittersCode") where "bFetch"=false
> and
> create index idx_t2 on table2 ("vchSubmittersCode") where "bFetch"=false
>
> and check if the plan changed and the indexes are in use. You can use
> create index concurrently to prevent lockings.
>
>
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi,
As per your suggestion, I created partial indexes with a WHERE condition on
the required columns.

The DISTINCT query execution time was not reduced; the query takes around 7
minutes to execute with indexes and without indexes.

So I ran EXPLAIN ANALYZE for the DISTINCT query:

EXPLAIN ANALYZE select  distinct sub_head."vchSubmittersCode"
,rec."vchFileName" , rec."vchCusipFundIDSubFundID" ,
rec."vchFundUnitPrice"
,  sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
join  table2 sub_head on
 rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
rec."bFetch"=false and sub_head."bFetch"=false ;


 Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
         Sort Method: external merge  Disk: 3923224kB
         ->  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.339..6939.296 rows=4050 loops=1)
               Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
               ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.011..56.998 rows=10 loops=1)
                     Filter: (NOT "bFetch")
                     Rows Removed by Filter: 4706
               ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.319..0.319 rows=405 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 26kB
                     ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.005..0.248 rows=405 loops=1)
                           Filter: (NOT "bFetch")
                           Rows Removed by Filter: 375
 Planning time: 0.237 ms
 Execution time: 390252.089 ms



So please help in reducing the DISTINCT query execution time.

Regards

Durgamahesh Manne


Re: Regrading brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
Hi,

As per your suggestion, I ran EXPLAIN ANALYZE for the DISTINCT query.

The size of table1 is 30MB; the size of table2 is 368kB.

 EXPLAIN ANALYZE select distinct sub_head."vchSubmittersCode", rec."vchFileName",
rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
sub_head."vchValuationDate", rec."vchAdvisorLabel" from table1 rec
join table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where rec."bFetch"=false and sub_head."bFetch"=false;



 Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
         Sort Method: external merge  Disk: 3923224kB
         ->  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.339..6939.296 rows=4050 loops=1)
               Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
               ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.011..56.998 rows=10 loops=1)
                     Filter: (NOT "bFetch")
                     Rows Removed by Filter: 4706
               ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.319..0.319 rows=405 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 26kB
                     ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.005..0.248 rows=405 loops=1)
                           Filter: (NOT "bFetch")
                           Rows Removed by Filter: 375
 Planning time: 0.237 ms
 Execution time: 390252.089 ms


So I am unable to reduce the query execution time; it takes around 7
minutes to execute with indexes and without indexes.

Please help in reducing the query execution time.


Regards
Durgamahesh Manne


On Wed, Sep 19, 2018 at 9:07 PM Alban Hertroys  wrote:

> The results of explain analyze would shed light on the core problem.
>
> My guess is that your conditions are not very selective - ie. most
> records in both tables have bFetch = false - and therefore you are
> retrieving most of your data and that is what's taking 7 minutes. No
> index is going to fix that.
>
> If those boolean values are distributed very unevenly (say 99.9% has
> false and 0.1% has true), you may get better results by excluding the
> records with 'true' values (instead of including those that are
> 'false'), for example by using a where not exists(...) subquery.
>
> Obviously, that still won't help if you're just fetching a lot of data.
> On Wed, 19 Sep 2018 at 16:23, Durgamahesh Manne
>  wrote:
> >
> >
> >
> >
> >
> >
> > On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman 
> wrote:
> >>
> >>
> >>
> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> >> Sent: Wednesday, September 19, 2018 10:04 AM
> >> To: Igor Neyman 
> >> Subject: Re: Regrading brin_index on required column of the table
> >>
> >> On Wed, Sep 19, 2018 at 7:19 PM Igor Neyman 
> wrote:
> >>
> >>
> >>
> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> >> Sent: Wednesday, September 19, 2018 9:43 AM
> >> To: PostgreSQL mailing lists 
> >> Subject: Regrading brin_index on required column of the table
> >>
> >> Hi
> >>
> >> Respected postgres community members
> >>
> >>
> >>
> >> I have created BRIN index on few columns of the table without any
> issues. But i am unable to create BRIN index on one column of the table as
> i got error listed below
> >>
> >>
> >>
> >>
> >>
> >> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using
> brin ("dFetch");
> >>
> >> ERROR:  data type boolean has no default operator class for access
> method "brin"
> >>
> >> HINT:  You must specify an operator class for the index or define a
> default operator class for the data type.
> >>
> >>
> >>
> >>  below is the column description:
> >>
> >> Column    datatype    collation    nullable    default    storage
> >>
> >>
> >>
> >>  dFetch    boolean    false    plain
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> so please help in creating of the BRIN index on above column of the
> table .
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> Regards
> >>
> >>
> >>
> >> Durgamahesh Manne
> >>
> >>
> >>
> >> Why would you want BRIN index on Boolean-type column?
> >>
> >> What kind of interval will you specify?
> >>
> >>
> 

RE: Advice on machine specs for growth

2018-09-20 Thread Steven Winfield
> Disable memory overcommit and set swappiness to 0 on database servers.

Just to be clear, are you suggesting (on linux) setting overcommit_memory = 2 
but leaving overcommit_ratio alone, which defaults to 50%?
I’ve tried setting them to 2 and 0% respectively in the past and it didn’t end 
well…

Also I’ve read, and now use, swappiness = 1 which is supposed to disable 
swapping entirely except when not swapping would cause OOM.
Any thoughts on that?

Cheers,
Steve.



Re: Replicate Tables from SAP (DB2/HANA) to PostgreSQL

2018-09-20 Thread Thomas Güttler

On 20.09.2018 at 04:19, bricklen wrote:



On Tue, Sep 18, 2018 at 11:31 PM Thomas Güttler <guettl...@thomas-guettler.de> wrote:

Hi,

is it possible to replicate some tables from SAP to PostgreSQL?
At the moment there are two underlaying database systems.
Some run DB2 and some run SAP-HANA.
In my case it would be nice, if it would be possible to replicate
only some rows, not all.

The replication should be unidirectional. The data in PostgreSQL
is only needed for reading, not for inserts/updates.


My team is about to start a proof-of-concept doing the same thing, except it's a large MSSQL to PG11 replication 
process. The tool we are evaluating is from https://www.hvr-software.com/solutions/database-replication/, and the demos 
looked pretty good (at least, for our use-case). You didn't mention if you're looking for free or paid software; HVR is 
not free so that may or may not fit your needs.


if it is commercial, then it depends on the price. I like open source software. But my goal is to make the customer 
happy, not me :-)


Regards,
  Thomas Güttler


--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines



Re: Advice on machine specs for growth

2018-09-20 Thread Rory Campbell-Lange
On 18/09/18, Laurenz Albe (laurenz.a...@cybertec.at) wrote:
> Rory Campbell-Lange wrote:
> > We are looking to upgrade our current database server infrastructure so
> > that it is suitable for the next 3 years or so.
> > 
> > We envisage needing about 800GB of primary database storage in the next
> > three years, with 1000 databases in the cluster.
> 
> 1000 is a lot, but should still be ok.
...
> It is much more difficult to scale a single monolithic database server.
...
> Disable memory overcommit and set swappiness to 0 on database servers.

Thanks for your advice, Laurenz.

As a matter of interest, is 800GB considered "large"? 

I appreciate that, given your advice, it might be better to start
dividing up the load now rather than, say, in three years, partly so we
become adept at working across more than one server.

Rory