Regarding use of epoch to generate nanosecond precision

2024-05-22 Thread Durgamahesh Manne
Hi

Postgres supports only up to microseconds (6 decimal digits of
fractional-second precision).
How do we generate timestamps with nanosecond precision, given that RDS
Postgres does not support the timestamp9 extension?
Is there a way to generate nanosecond-precision timestamps with pg_partman
using an epoch control column, with or without typecasting?

p_epoch =>   (to_timestamp(control column))
What is the control column here?
How do I run this with pg_partman's create_parent function?

As per the pg_partman doc:
p_epoch - tells pg_partman that the control column is an integer type, but
actually represents an epoch time value. Valid values for this option are:
'seconds', 'milliseconds', 'nanoseconds', and 'none'. The default is
'none'. All table names will be time-based. In addition to a normal index
on the control column, be sure you create a functional, time-based index on
the control column (to_timestamp(control column)) as well so this works
efficiently.
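
For reference, a minimal sketch under stated assumptions: a hypothetical table
public.events with a bigint control column evt_ns holding epoch nanoseconds,
and pg_partman 4.x-style argument names (these vary across versions). Note that
to_timestamp() resolves to microseconds at best, so true nanosecond precision
cannot be stored in a timestamp:

-- Hypothetical table; evt_ns stores epoch time in nanoseconds.
CREATE TABLE public.events (evt_ns bigint NOT NULL, payload text);

-- pg_partman call; argument names follow the 4.x signature.
SELECT partman.create_parent(
    p_parent_table := 'public.events',
    p_control      := 'evt_ns',
    p_type         := 'native',
    p_interval     := 'daily',
    p_epoch        := 'nanoseconds'
);

-- Functional, time-based index as the doc recommends; dividing by 1e9
-- converts nanoseconds to the seconds that to_timestamp() expects.
CREATE INDEX ON public.events (to_timestamp(evt_ns / 1e9));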


Regards,
Durga Mahesh Manne


About to know the info of foreign table references used in any calls, views, functions

2021-04-27 Thread Durgamahesh Manne
Hi Team

Using system views and functions:

"I need to get the info on foreign table references used in any calls,
views, or functions."

I found info on the views, functions, and sprocs that are executed
frequently through the application, using the pg_stat_user_functions view.

Please help with the info I need.


Thanks & Regards
Durgamahesh Manne


About to find all foreign tables being used by sprocs, views, and functions

2021-04-26 Thread Durgamahesh Manne
Hi  Respected Team

I need to find the foreign tables used in functions, sprocs, and views.
How do I find all foreign tables being used by a sproc, view, or function?
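
For reference, a sketch against the system catalogs; view dependencies are
tracked exactly, while the function check is only a text match on
pg_proc.prosrc, so expect false positives:

-- Views that reference foreign tables (relkind 'f'), via rewrite-rule dependencies:
SELECT DISTINCT v.relname AS view_name, ft.relname AS foreign_table
FROM pg_depend d
JOIN pg_rewrite r ON r.oid = d.objid
JOIN pg_class v  ON v.oid = r.ev_class  AND v.relkind = 'v'
JOIN pg_class ft ON ft.oid = d.refobjid AND ft.relkind = 'f'
WHERE d.classid = 'pg_rewrite'::regclass
  AND d.refclassid = 'pg_class'::regclass;

-- Functions and procedures whose body mentions a foreign table by name:
SELECT p.proname AS routine_name, ft.relname AS foreign_table
FROM pg_proc p
JOIN pg_class ft ON ft.relkind = 'f'
WHERE p.prosrc ILIKE '%' || ft.relname || '%';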



Thanks & Regards
Durgamahesh Manne


Re: Regarding creation of gin index on column that has varchar datatype

2020-05-23 Thread Durgamahesh Manne
On Sat, May 23, 2020 at 6:50 PM Andreas Kretschmer 
wrote:

>
>
> Am 23.05.20 um 12:37 schrieb Durgamahesh Manne:
> > Hi
> >
> > Respected to PGDG GLOBAL TEAM
> >
> > I am getting this error( ERROR:  data type character varying has no
> > default operator class for access method "gin"
> > HINT:  You must specify an operator class for the index or define a
> > default operator class for the data type ) while i try to create gin
> > index on vch_message column of slp01 table (CREATE INDEX ON slp01
> > using gin(vch_message);)
> >
> > vch_message column has lot of this info like
> > {"requestBody":{"firstName":"SALVATORE","lastName":"NAPOLITANO","email":"
> remuda...@hotmail.com
> > <mailto:remuda...@hotmail.com>","personID":"C48268","ClientSSN":"153520698","advisorPersonID":["10214","15270","15271","15272"]},"header":{"cache-control":"no-cache","content-type":"application/json","authorization":"Basic
>
> >
> Y3JlYXRlVXNlcnM6ZGFrdm5laXdvbjRpOWZqb3duY3VpMzRmdW4zOTQ4aGY=","accept":"application/json,
>
> > text/json, text/x-json, text/javascript, application/xml, text/xml"
> >
> >
>
>
> the string looks like JSON (with missing the correct end of the string).
> Please check. If you are storing JSON or JSONB - Values, you should use
> the proper datatype -JSON/JSONB. In this case you can create an
> GIN-Index on it.
>
>
> Regards, Andreas
> (and please don't email me private the same question)
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
> Hi Andreas
>

Thank you for this information


Regards

Durgamahesh Manne


Regarding creation of gin index on column that has varchar datatype

2020-05-23 Thread Durgamahesh Manne
Hi

Respected to PGDG GLOBAL TEAM

I am getting this error (ERROR:  data type character varying has no default
operator class for access method "gin"
HINT:  You must specify an operator class for the index or define a default
operator class for the data type) when I try to create a GIN index on the
vch_message column of the slp01 table (CREATE INDEX ON slp01 USING
gin(vch_message);).

The vch_message column largely holds values like:
{"requestBody":{"firstName":"SALVATORE","lastName":"NAPOLITANO","email":"
remuda...@hotmail.com","personID":"C48268","ClientSSN":"153520698","advisorPersonID":["10214","15270","15271","15272"]},"header":{"cache-control":"no-cache","content-type":"application/json","authorization":"Basic
Y3JlYXRlVXNlcnM6ZGFrdm5laXdvbjRpOWZqb3duY3VpMzRmdW4zOTQ4aGY=","accept":"application/json,
text/json, text/x-json, text/javascript, application/xml, text/xml"

NOTE: I previously created a pg_trgm-based GIN index on vch_message of slp01,
but it occupied too much disk space, so I dropped it.

Please help with creating a GIN index on the vch_message column of the slp01 table.
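
For reference, a sketch; the index names are made up, and the jsonb variant
assumes every vch_message value is valid JSON (the cast fails at index build
time otherwise):

-- Expression index over a jsonb cast; queries must use the same expression,
-- e.g. WHERE vch_message::jsonb @> '{"requestBody":{"personID":"C48268"}}':
CREATE INDEX slp01_msg_jsonb_gin ON slp01 USING gin ((vch_message::jsonb));

-- Trigram alternative for LIKE/ILIKE searches (typically larger on disk, as
-- noted above):
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX slp01_msg_trgm_gin ON slp01 USING gin (vch_message gin_trgm_ops);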

Regards
Durgamahesh Manne


Regarding db performance improvement

2020-04-29 Thread Durgamahesh Manne
Hi to PGDG TEAM


kernel.shmmax=18446744073692774399

kernel.shmall=18446744073692774399

 I am using Ubuntu Server 16.04 to manage PostgreSQL. The kernel default
value for kernel.shmmax was 18446744073692774399, and the kernel default
value for kernel.shmall was 18446744073692774399.

I am using PostgreSQL 12.1 with 32 GB RAM and 8 CPUs.

1) Do I need to change the default values of the kernel.shmmax and
kernel.shmall parameters, per the script below?


Does PostgreSQL gain performance by changing the kernel.shmmax and
kernel.shmall default values?
 #!/bin/bash
# simple shmsetup script
page_size=`getconf PAGE_SIZE`
phys_pages=`getconf _PHYS_PAGES`
shmall=`expr $phys_pages / 2`
shmmax=`expr $shmall \* $page_size`
echo kernel.shmmax = $shmmax
echo kernel.shmall = $shmall



Regards
Durgamahesh Manne


Re: Regarding automatic table partitioning in PostgreSQL 12 version is possible or not

2020-01-17 Thread Durgamahesh Manne
On Fri, Jan 17, 2020 at 7:43 PM Stephen Frost  wrote:

> Greetings,
>
> * Durgamahesh Manne (maheshpostgr...@gmail.com) wrote:
> > Please let me know that automatic table partitioning is possible in pgsql
> > 12 or not without using trigger function
>
> The approach I'd recommend for automatically getting new table partitions
> created, at least today, is to use pg_partman, which is a very handy
> extension for working with partitions in PG.
>
> Thanks,
>
> Stephen
>

Hi sir


Thank you for this valuable information
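
For reference, with pg_partman the new partitions are pre-created by its
maintenance call, typically scheduled via cron or pg_cron; a sketch:

-- Creates any pending child partitions for all partman-managed parents:
SELECT partman.run_maintenance();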


Regarding automatic table partitioning without using trigger function in pgsql 12 is possible or not

2020-01-17 Thread Durgamahesh Manne
Hi
 To the respected PostgreSQL international team,

Please let me know whether automatic table partitioning, without using a
trigger function, is possible in pgsql 12.
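
For reference, PostgreSQL's declarative partitioning needs no trigger
function, though child partitions must still be created (by hand or by a tool
such as pg_partman); a sketch with a hypothetical table:

CREATE TABLE measurements (
    id      bigint,
    logdate date NOT NULL
) PARTITION BY RANGE (logdate);

CREATE TABLE measurements_2020_01 PARTITION OF measurements
    FOR VALUES FROM ('2020-01-01') TO ('2020-02-01');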



Regards

Durgamahesh Manne


Regarding automatic table partitioning in PostgreSQL 12 version is possible or not

2020-01-17 Thread Durgamahesh Manne
Hi
 To the respected PostgreSQL international team,

Please let me know whether automatic table partitioning is possible in pgsql
12 without using a trigger function.



Regards
Durgamahesh Manne


Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Durgamahesh Manne
On Wed, Oct 16, 2019 at 3:22 PM Durgamahesh Manne 
wrote:

>
>
> On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari  wrote:
>
>> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
>>  wrote:
>> >  Is there any way to reduce the dump time when I take a dump of the
>> > table, which is 148 GB in size, without partitioning it?
>>
>> I would at least try a parallel dump: should it be -j option to pg_dump.
>> I'm not sure it is an effective parallelism against a single table.
>>
>> Luca
>>
>
> Hi
>
> parallel jobs with pg_dump can be effective for a database that contains
> many tables and dependent objects
>
> parallel jobs with pg_dump cannot be effective against a single table
>
>
> Regards
> Durgamahesh Manne
>
>
>


Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Durgamahesh Manne
On Wed, Oct 16, 2019 at 3:09 PM Luca Ferrari  wrote:

> On Wed, Oct 16, 2019 at 11:27 AM Durgamahesh Manne
>  wrote:
> >  Is there any way to reduce the dump time when I take a dump of the
> > table, which is 148 GB in size, without partitioning it?
>
> I would at least try a parallel dump: should it be -j option to pg_dump.
> I'm not sure it is an effective parallelism against a single table.
>
> Luca
>

Hi

parallel jobs with pg_dump can be effective for a database that contains
many tables and dependent objects

parallel jobs with pg_dump cannot be effective against a single table


Regards
Durgamahesh Manne


Re: Regarding db dump with Fc taking very long time to completion

2019-10-16 Thread Durgamahesh Manne
On Fri, Aug 30, 2019 at 4:12 PM Luca Ferrari  wrote:

> On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne
>  wrote:
> >  Logical dump of that table is taking more than 7 hours to be completed
> >
> >  I need to reduce to dump time of that table that has 88GB in size
>
> Good luck!
> I would see two possible solutions to the problem:
> 1) use physical backup and switch to incremental (e..g, pgbackrest)
> 2) partition the table and backup single pieces, if possible
> (constraints?) and be assured it will become hard to maintain (added
> partitions, and so on).
>
> Are all of the 88 GB be written during a bulk process? I guess no, so
> maybe partitioning you can avoid locking the whole dataset and reduce
> contention (and thus time).
>
> Luca
>


Hi respected postgres team

  Are all of the 88 GB written during a bulk process?
   No.
 Earlier the table size was 88 GB.
 Now the table size is about 148 GB.
 Is there any way to reduce the dump time when I take a dump of the table,
which is 148 GB in size, without partitioning it?


Regards
Durgamahesh Manne


Regarding db dump with Fc taking very long time to completion

2019-08-30 Thread Durgamahesh Manne
Hi
To respected international postgresql team

I am using PostgreSQL version 11.4.
I have scheduled a logical dump job which runs once daily at the database
level.
There is one table in the db with write-intensive activity every 40 seconds.
The size of the table is about 88 GB.
 A logical dump of that table takes more than 7 hours to complete.

 I need to reduce the dump time of that 88 GB table.


Regards
Durgamahesh Manne


Regarding TailNMail

2019-04-05 Thread Durgamahesh Manne
Hi Sir

 EMAIL: maheshpostgr...@gmail.com 
MAILSUBJECT: HOST Postgres errors UNIQUE : NUMBER

INCLUDE: ERROR:
INCLUDE: FATAL:
INCLUDE: PANIC:

FILE1: /mnt2/staging/postgres/data/log/LATEST
LASTFILE1: /mnt2/staging/postgres/data/log/postgresql-2019-04-05_00.log
OFFSET1: 10630272


I have configured tailnmail on the server.

 How do I get database object errors for a specific user (ravi) from the
Postgres log?

Suppose the username is ravi:

INCLUDE:  ravi ERROR:
Is this the correct approach to get errors related to a specific db user
(ravi) from the pg log?


Regards

Durgamahesh Manne


Re: dbuser access privileges

2019-04-04 Thread Durgamahesh Manne
*From:* Durgamahesh Manne 
*Sent:* Thursday, April 4, 2019 12:07 PM
*To:* pgsql-general@lists.postgresql.org
*Subject:* dbuser access privileges



hi



Respected international pgsql team



pershing=# grant INSERT on public.hyd to ravi;

GRANT

I have granted INSERT access to the non-superuser ravi.

pershing=> insert into hyd (id,name) values('2','delhi');

INSERT 0 1

Here the data was inserted.




pershing=# grant UPDATE on public.hyd to ravi;

GRANT

I have granted UPDATE access to the non-superuser ravi.



pershing=> update public.hyd set id = 3 where name = 'hyderabad';

ERROR:  permission denied for relation hyd

Please let me know what the issue is with the UPDATE command.



On Thu, Apr 4, 2019 at 5:55 PM Patrick FICHE 
wrote:

> Hi,
>
>
>
> If I’m not wrong, UPDATE requires SELECT permission as the UPDATE
> statement needs to read the data to be updated.
>
> So, you should probably add GRANT SELECT and it should work.
>
>
>
> Regards,
>
>
>
> *Patrick Fiche*
>
> Database Engineer, Aqsacom Sas.
>
> *c.* 33 6 82 80 69 96
>
>
>
> [image: 01-03_AQSA_Main_Corporate_Logo_JPEG_White_Low.jpg]
> <http://www.aqsacom.com/>
>
>
>
>
>
>
>

 Hi Patrick Fiche, as per your info:

granting UPDATE worked only AFTER I executed GRANT SELECT ON TABLE hyd for
the non-superuser on 10.6.


But

 on 9.3, granting UPDATE worked EVEN though I did NOT execute GRANT SELECT
ON TABLE hyd for the non-superuser.
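
For reference, a sketch of the two grants together; UPDATE has to read the
existing rows (the WHERE clause and any columns it references), which is why
SELECT is also needed:

GRANT SELECT, UPDATE ON public.hyd TO ravi;
-- ravi can then run:
UPDATE public.hyd SET id = 3 WHERE name = 'hyderabad';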


Regards

Durgamahesh Manne


Re: dbuser access privileges

2019-04-04 Thread Durgamahesh Manne
On Thu, Apr 4, 2019 at 4:14 PM Durgamahesh Manne 
wrote:

>
>
>
> On Thu, Apr 4, 2019 at 3:55 PM Ron  wrote:
>
>> On 4/4/19 5:07 AM, Durgamahesh Manne wrote:
>> > hi
>> > Respected international pgsql team
>> >
>> > pershing=# grant INSERT on public.hyd to ravi;
>> > GRANT
>> > i have granted insert command access to non superuser(ravi)
>> > pershing=> insert into hyd (id,name) values('2','delhi');
>> > INSERT 0 1
>> > here data inserted
>> >
>> >
>> > pershing=# grant UPDATE on public.hyd to ravi;
>> > GRANT
>> > i have granted update command access to non superuser(ravi)
>> >
>> > pershing=> update public.hyd set id = 3 where name = 'hyderabad';
>> > ERROR:  permission denied for relation hyd
>> > please let me know what is the issue with update command
>>
>> Are there any triggers on public.hyd which modify other tables?
>>
>> --
>> Angular momentum makes the world go 'round.
>>
>>
>> Hi
> there are no triggers on public.hyd table
>
> Regards
> durgamahesh manne
>
>
>

 Hi

I found that there is a bug in granting UPDATE access for non-superusers:

granting UPDATE alone worked fine on version 9.3.


Please fix this grant bug on the UPDATE command for non-superusers as soon
as possible, in the next pg version 10.8.



Regards

durgamahesh manne


Re: dbuser access privileges

2019-04-04 Thread Durgamahesh Manne
On Thu, Apr 4, 2019 at 3:55 PM Ron  wrote:

> On 4/4/19 5:07 AM, Durgamahesh Manne wrote:
> > hi
> > Respected international pgsql team
> >
> > pershing=# grant INSERT on public.hyd to ravi;
> > GRANT
> > i have granted insert command access to non superuser(ravi)
> > pershing=> insert into hyd (id,name) values('2','delhi');
> > INSERT 0 1
> > here data inserted
> >
> >
> > pershing=# grant UPDATE on public.hyd to ravi;
> > GRANT
> > i have granted update command access to non superuser(ravi)
> >
> > pershing=> update public.hyd set id = 3 where name = 'hyderabad';
> > ERROR:  permission denied for relation hyd
> > please let me know what is the issue with update command
>
> Are there any triggers on public.hyd which modify other tables?
>
> --
> Angular momentum makes the world go 'round.
>
>
> Hi
there are no triggers on public.hyd table

Regards
durgamahesh manne


dbuser access privileges

2019-04-04 Thread Durgamahesh Manne
hi

Respected international pgsql team,

pershing=# grant INSERT on public.hyd to ravi;
GRANT
I have granted INSERT access to the non-superuser ravi.
pershing=> insert into hyd (id,name) values('2','delhi');
INSERT 0 1
Here the data was inserted.


pershing=# grant UPDATE on public.hyd to ravi;
GRANT
I have granted UPDATE access to the non-superuser ravi.

pershing=> update public.hyd set id = 3 where name = 'hyderabad';
ERROR:  permission denied for relation hyd
Please let me know what the issue is with the UPDATE command.


Regarding pgaudit

2019-03-31 Thread Durgamahesh Manne
Hi

Respected postgres team

Please let me know of an open-source application/interface to monitor only
the pgaudit log files, as I have installed the pgaudit tool.

Regards

Durgamahesh Manne


Regarding pgaudit log_directory

2019-03-31 Thread Durgamahesh Manne
On Saturday, March 30, 2019, David Steele  wrote:

> On 3/29/19 3:32 PM, Durgamahesh Manne wrote:
>
>>
>>I could not find parameter related to pgaudit log_directory  .
>>
>
> pgAudit does not support logging outside the standard PostgreSQL logging
> facility and there are no plans for such a feature.
>
> The general solution is to use Splunk, ELK, etc. to do manipulation of the
> PostgreSQL logs.
>
> Regards,
> --
> -David
> da...@pgmasters.net
>

Hi

Please let me know of an open-source application/interface to do
manipulation of the pgaudit logs only,
as I have installed pgaudit successfully.
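
For reference, pgaudit only emits through the normal PostgreSQL log, so a log
shipper (ELK, Splunk, etc.) has to filter its lines there; a sketch of typical
settings, with parameter names as in the pgaudit docs:

ALTER SYSTEM SET pgaudit.log = 'write, ddl';
ALTER SYSTEM SET pgaudit.log_relation = on;
SELECT pg_reload_conf();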

Regards

Durgamahesh Manne


Re: Regarding pgaudit log_directory

2019-03-31 Thread Durgamahesh Manne
On Saturday, March 30, 2019, David Steele  wrote:

> On 3/29/19 3:32 PM, Durgamahesh Manne wrote:
>
>>
>>I could not find parameter related to pgaudit log_directory  .
>>
>
> pgAudit does not support logging outside the standard PostgreSQL logging
> facility and there are no plans for such a feature.
>
> The general solution is to use Splunk, ELK, etc. to do manipulation of the
> PostgreSQL logs.
>
> Regards,
> --
> -David
> da...@pgmasters.net
>

Hi

Please let me know of an open-source application/interface to monitor the
pgaudit logs only,
as I have installed pgaudit successfully.

Regards

Durgamahesh Manne


Re: Regarding pgaudit log_directory

2019-03-29 Thread Durgamahesh Manne
On Fri, Mar 29, 2019 at 8:58 PM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 29/3/19 5:15 μ.μ., Durgamahesh Manne wrote:
>
> Hi
> Respected pgsql team
>
> please let me know the pgaudit parameter to store pgaudit log files only
>
> i don't want to store pgaudit log files at pgsql log_directory file
> location
>
> let me guess, there is none?
>
>
>
>
> Regards
>
> durgamahesh manne
>
>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
   hi

  I could not find a parameter for a pgaudit-specific log_directory.



Regards

Durgamahesh Manne


Regarding pgaudit log_directory

2019-03-29 Thread Durgamahesh Manne
Hi
Respected pgsql team

please let me know the pgaudit parameter for storing the pgaudit log files
separately

i don't want to store the pgaudit log files in the pgsql log_directory
location



Regards

durgamahesh manne


Re: Regarding query execution for long time

2019-01-28 Thread Durgamahesh Manne
On Mon, Jan 28, 2019 at 8:41 PM Adrian Klaver 
wrote:

> On 1/28/19 5:04 AM, Ron wrote:
> > On 1/28/19 6:20 AM, Durgamahesh Manne wrote:
> >> Hi
> >>
> >> below query is being executed for long time
> >>
> >> Select
> >> distinct ltrim(rtrim(ssnumber)), CL.iInsightClientId,
> >> ltrim(rtrim(TFA.client_account_key)) as vchAccountNumber,
> >> concat(TFA.first_name, ' ', TFA.last_name) as vchAccountName,
> >> concat('AP:TIAA', TFA.routing_number) as vchRepLabel,
> >> null as vchOpenDate, TFA.address1 as vchAccountTiteLine2,
> >> TFA.address2 as vchAccountTitleLine3,null as vchAccountType,
> >> TFA.address1 as vchPrimaryContact, 'TIAA-CREF' as Custodian,
> >> TFA.routing_number as vchCustodian_RepId, null as vchCustodianRepName,
> >> CONCAT ('TIAA-CREF:',ltrim(rtrim(client_account_key)),':',
> >> ltrim(rtrim(ssnumber))) as vchAccountKey,
> >> null as vchFeedsAccountType
> >> from accounts as TFA
> >> join client3 as CL on
> >> ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
> >> left join account3 as AC on
> >>
> ltrim(rtrim(AC."vchCustodianAccountNumber"))=ltrim(rtrim(TFA.client_account_key))
>
> >>
> >> where AC."iInsightAccountID" is null;
> >>
> >>  query is being executed for long time even after i have created
> >> required indexes on columns of the tables
> >> please help for fast query execution
> >
> > I bet this is what's causing your troubles:
> > on ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
>
> There is also the question of why not do?:
>
> btrim(Cl.vchssnumber) = btrim(TFA.ssnumber)
>
> >
> > The query optimizer probably can't use indexes on those tables.
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
hi

CREATE INDEX idx10 on account3 USING btree
(ltrim(rtrim("vchCustodianAccountNumber")));
CREATE INDEX idx11 on accounts USING btree
(ltrim(rtrim(client_account_key)));
CREATE INDEX idx12 on accounts USING btree (ltrim(rtrim(ssnumber)));
 CREATE INDEX idx13 on client3 USING btree (ltrim(rtrim(vchssnumber)));

After I created the indexes on the ltrim(rtrim(...)) expressions themselves,

the query took far less time to execute.
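
For reference, the btrim() variant Adrian suggested, as a sketch (index names
made up); the planner can only use an expression index when the query uses the
exact same expression:

CREATE INDEX idx14 ON client3 (btrim(vchssnumber));
CREATE INDEX idx15 ON accounts (btrim(ssnumber));
-- with the join condition rewritten to match:
--   ... ON btrim(Cl.vchssnumber) = btrim(TFA.ssnumber)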


Regards

durgamahesh manne


Re: Regarding query execution for long time

2019-01-28 Thread Durgamahesh Manne
On Mon, Jan 28, 2019 at 6:34 PM Ron  wrote:

> On 1/28/19 6:20 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > below query is being executed for long time
> >
> > Select
> > distinct ltrim(rtrim(ssnumber)), CL.iInsightClientId,
> > ltrim(rtrim(TFA.client_account_key)) as vchAccountNumber,
> > concat(TFA.first_name, ' ', TFA.last_name) as vchAccountName,
> > concat('AP:TIAA', TFA.routing_number) as vchRepLabel,
> > null as vchOpenDate, TFA.address1 as vchAccountTiteLine2,
> > TFA.address2 as vchAccountTitleLine3,null as vchAccountType,
> > TFA.address1 as vchPrimaryContact, 'TIAA-CREF' as Custodian,
> > TFA.routing_number as vchCustodian_RepId, null as vchCustodianRepName,
> > CONCAT ('TIAA-CREF:',ltrim(rtrim(client_account_key)),':',
> > ltrim(rtrim(ssnumber))) as vchAccountKey,
> > null as vchFeedsAccountType
> > from accounts as TFA
> > join client3 as CL on
> ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
> > left join account3 as AC on
> >
> ltrim(rtrim(AC."vchCustodianAccountNumber"))=ltrim(rtrim(TFA.client_account_key))
> > where AC."iInsightAccountID" is null;
> >
> >  query is being executed for long time even after i have created
> required
> > indexes on columns of the tables
> > please help for fast query execution
>
> I bet this is what's causing your troubles:
> on ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
>
> The query optimizer probably can't use indexes on those tables.
>
>
> --
> Angular momentum makes the world go 'round.
>
> Hi

  Why can't the query optimizer use indexes on those columns of the tables?


Regarding query execution for long time

2019-01-28 Thread Durgamahesh Manne
Hi

The query below has been executing for a long time:

Select
distinct ltrim(rtrim(ssnumber)), CL.iInsightClientId,
ltrim(rtrim(TFA.client_account_key)) as vchAccountNumber,
concat(TFA.first_name, ' ', TFA.last_name) as vchAccountName,
concat('AP:TIAA', TFA.routing_number) as vchRepLabel,
null as vchOpenDate, TFA.address1 as vchAccountTiteLine2,
TFA.address2 as vchAccountTitleLine3,null as vchAccountType,
TFA.address1 as vchPrimaryContact, 'TIAA-CREF' as Custodian,
TFA.routing_number as vchCustodian_RepId, null as vchCustodianRepName,
CONCAT ('TIAA-CREF:',ltrim(rtrim(client_account_key)),':',
ltrim(rtrim(ssnumber))) as vchAccountKey,
null as vchFeedsAccountType
from accounts as TFA
join client3 as CL on
ltrim(rtrim(Cl.vchssnumber))=ltrim(rtrim(TFA.ssnumber))
left join account3 as AC on
ltrim(rtrim(AC."vchCustodianAccountNumber"))=ltrim(rtrim(TFA.client_account_key))
where AC."iInsightAccountID" is null;

The query keeps running for a long time even after I created the required
indexes on the columns of the tables.
Please help to make the query execute faster.

Regards
durgamahesh manne


Re: Regarding Tds_fdw

2018-11-28 Thread Durgamahesh Manne
On Wed, Nov 28, 2018 at 6:31 PM Durgamahesh Manne 
wrote:

>
>
> On Wed, Nov 28, 2018 at 4:22 PM Pavel Stehule 
> wrote:
>
>> Hi
>>
>> st 28. 11. 2018 v 11:28 odesílatel Durgamahesh Manne <
>> maheshpostgr...@gmail.com> napsal:
>>
>>> Hi
>>>
>>> Respected community members
>>>
>>> I have configured tds_fdw on postgres server.. I have created multiple
>>> foreign tables related to sql server as of now i could run select queries
>>> with out any issues
>>>
>>>   i got  this error ERROR: cannot insert into foreign table "pgsql"
>>> when i tried to insert the data into foerign table
>>>
>>> 1) Is there any way to run insert delete update  queries on foreign
>>> tables ?
>>>
>>
>> I can read on page
>> https://support.google.com/hangouts/answer/3110347?hl=en_topic=2944918=0-737329123936-1543400907610
>>
>> "The current version does not yet support JOIN push-down, or write
>> operations."
>>
>> Regards
>>
>> Pavel
>>
>>
>>>
>>>
>>>
>>> Regards
>>>
>>> durgamahesh manne
>>>
>>
>
>
>  Hi
>
> are there any FDWs which support DML operations on the SQL Server foreign
> tables I have created on the pg server ?
>
>
>
> Regards
>
>
>


Re: Regarding Tds_fdw

2018-11-28 Thread Durgamahesh Manne
On Wed, Nov 28, 2018 at 4:22 PM Pavel Stehule 
wrote:

> Hi
>
> st 28. 11. 2018 v 11:28 odesílatel Durgamahesh Manne <
> maheshpostgr...@gmail.com> napsal:
>
>> Hi
>>
>> Respected community members
>>
>> I have configured tds_fdw on postgres server.. I have created multiple
>> foreign tables related to sql server as of now i could run select queries
>> with out any issues
>>
>>   i got  this error ERROR: cannot insert into foreign table "pgsql"  when
>> i tried to insert the data into foerign table
>>
>> 1) Is there any way to run insert delete update  queries on foreign
>> tables ?
>>
>
> I can read on page
> https://support.google.com/hangouts/answer/3110347?hl=en_topic=2944918=0-737329123936-1543400907610
>
> "The current version does not yet support JOIN push-down, or write
> operations."
>
> Regards
>
> Pavel
>
>
>>
>>
>>
>> Regards
>>
>> durgamahesh manne
>>
>


 Hi

Are there any FDWs which support DML operations on the SQL Server foreign
tables I have created on the pg server?



Regards


Regarding Tds_fdw

2018-11-28 Thread Durgamahesh Manne
Hi

Respected community members

I have configured tds_fdw on the Postgres server. I have created multiple
foreign tables pointing to SQL Server; as of now I can run SELECT queries
without any issues.

  I got the error ERROR: cannot insert into foreign table "pgsql"  when I
tried to insert data into a foreign table.

1) Is there any way to run INSERT, DELETE, and UPDATE queries on foreign
tables?



Regards

durgamahesh manne


Re: Regarding varchar max length in postgres

2018-10-16 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 9:52 PM Adrian Klaver 
wrote:

> On 10/15/18 8:56 AM, Durgamahesh Manne wrote:
>
>
> >   I request you all community members to provide built in bdr v3 version
> > replication for public as multimaster replication is on high priority
> > against other dbms
>
> BDR v3 is third party extension from 2ndQuadrant, it is not community
> code. Elements of it have found there into the community code as logical
> replication, but that is as far as it has gotten. You will need to take
> this up with 2ndQuadrant.
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


Thank you for this information sir


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 9:07 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Oct 15, 2018 at 8:24 AM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>> So i need unlimited length data type for required column of the table for
>> storing the large values
>> is there any issue to use unlimited length datatype  text  for the
>> required column of the table instead of using varchar ?
>>
>
> Between the two you should use the "text" data type for those columns.
> You will need to describe your use case in more detail if you want input as
> to whether you need to use the large object API instead.
>
> The database cannot store an unlimited amount of data in a single
> row+column (cell) - you will still encounter a physical limit to the number
> of bytes able to be stored in a single cell when using text.
>
> David J.
>


Hi sir

 I need to store larger values in the required column of the table, so I
will use text rather than varchar.

You said that a physical limit still applies to the number of bytes stored
in a column of the table:

text — variable, unlimited length.


Thank you for this valuable information.

 PostgreSQL is always the world's no. 1 leading open-source RDBMS.



 I request all community members to provide built-in BDR v3-style
replication for the public, as multimaster replication is a high priority
compared with other DBMSs.


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 7:54 PM Tom Lane  wrote:

> Durgamahesh Manne  writes:
> >>> If character varying is used without length specifier, the type
> >>> accepts strings of any size
> >>> but varchar does not accept more than this 10485760 value
>
> You're confusing the size of string that can be stored with the
> largest value accepted for "n" in "varchar(n)".  This is documented,
> in the same place that people have been pointing you to:
>
> In any case, the longest possible character string that can be stored
> is about 1 GB. (The maximum value that will be allowed for n in the
> ---
> data type declaration is less than that. It wouldn't be useful to
> 
> change this because with multibyte character encodings the number of
> characters and bytes can be quite different. If you desire to store
> long strings with no specific upper limit, use text or character
> varying without a length specifier, rather than making up an arbitrary
> length limit.)
>
> As you found out, the limit for "n" is ~ 10 million.
>
> In principle, we could have allowed it to be as much as 1Gb divided by
> the maximum character length of the database's encoding, but it did
> not seem like a great idea for the limit to be encoding-dependent.
>
> As the last sentence in the doc paragraph points out, the preferred
> thing to do if you just want to allow very long strings is to leave
> off "(n)" altogether.
>
> The subtext here, which maybe we ought to state in a more in-your-face
> way, is that if you use char(N) or varchar(N) without a concrete
> application-driven reason why N has to be that particular value,
> no more or less, then You're Doing It Wrong.  Artificially-chosen
> column width limits are a bad idea left over from the days of
> punched cards.  The reason the limit on N is much smaller than it
> could theoretically be is that column declarations with very large
> N are, without exception, violations of this principle.
>
> regards, tom lane
>



Hi sir

>>> If character varying is used without a length specifier, the datatype
>>> accepts strings of any size, up to a maximum of 1 GB, as I found
in the pgdg doc.

I have not used the max length value 10485760 for varchar in any table of
the db, and I am not confused about the maximum string length for varchar
being up to 1 GB.

I have used the varchar column datatype without any limit.

I checked with a value above that limit by creating a test table, e.g.
create table test(id serial primary key, str varchar(10485761))

ERROR: length for type varchar cannot exceed 10485760


text                              variable unlimited length
character varying(n), varchar(n)  variable-length with limit
As documented, text is of unlimited length while varchar's variable length
is limited (the longest possible string in either case is about 1 GB).

So I need an unlimited-length data type for the required column of the
table, for storing large values.

Is there any issue with using the unlimited-length datatype text for the
required column of the table instead of varchar?
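
For reference, a sketch; with no length specifier the two declarations behave
identically, capped only by the ~1 GB field limit:

CREATE TABLE t (a text, b varchar);  -- both accept strings of any size up to ~1 GB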



Regards

Durgamahesh Manne


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 3:11 PM Thomas Kellerer  wrote:

> Durgamahesh Manne schrieb am 15.10.2018 um 11:18:
> > was there any specific reason that you have given max length for varchar
> is limited to 10485760 value?
> >
> > why you have not given max length for varchar is unlimited like text
> datatype ?
> >
> > character varying(n), varchar(n)  variable-length with limit
> > character(n), char(n)             fixed-length, blank padded
> > text                              variable unlimited length
>
> It "text" type is a "character string" just like all the other character
> types and thus is also limited to 1GB
>
> "text", "varchar", "character varying" and "character" are all identical
> in how they are stored and processed.
>
> Thomas
>
>
> Thank you for this information
>
>
>


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 2:42 PM Durgamahesh Manne 
wrote:

>
>
> On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
>> maheshpostgr...@gmail.com> wrote:
>>
>>>
>>>
>>> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
>>> wrote:
>>>
>>>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>>>> > Hi
>>>> >
>>>> > please let me know the max length of varchar  & text in postgres
>>>>
>>>> https://www.postgresql.org/docs/10/static/datatype-character.html
>>>> >
>>>> >
>>>> >
>>>> > Regards
>>>> >
>>>> > Durgamahesh Manne
>>>>
>>>>
>>>> --
>>>> Adrian Klaver
>>>> adrian.kla...@aklaver.com
>>>
>>>
>>>  Hi
>>>
>>> Thank you for this information
>>>
>>
>   as per the postgresql documentation
>
>  If character varying is used without length specifier, the type accepts
> strings of any size
>
> but varchar does not accept more than this 
>
>>
>>> as per the postgresql documentation   please ignore incomplete above
>>> message i mailed to respected community members
>>>
>>>  If character varying is used without length specifier, the type
>>> accepts strings of any size
>>>
>>> but varchar does not accept more than this 10485760 value
>>>
>>
>>create table test(id serial primary key, str varchar(10485761));
>>
>>  ERROR: length for type varchar cannot exceed 10485760
>>
>

As Thomas said:

Further down on that page the overall limit is documented:

In any case, the longest possible character string that can be stored
is about 1 GB

So the part that you quoted implicitly means "accepts strings of any size -
up to the maximum of 1GB".
Maybe it makes sense to make that clearer at that point.


Was there any specific reason that the max length for varchar is limited to
the value 10485760?

Why was the max length for varchar not made unlimited, like the text
datatype?

character varying(n), varchar(n)  variable-length with limit
character(n), char(n)             fixed-length, blank padded
text                              variable unlimited length



Regards


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 2:35 PM Durgamahesh Manne 
wrote:

>
>
> On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
>> wrote:
>>
>>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>>> > Hi
>>> >
>>> > please let me know the max length of varchar  & text in postgres
>>>
>>> https://www.postgresql.org/docs/10/static/datatype-character.html
>>> >
>>> >
>>> >
>>> > Regards
>>> >
>>> > Durgamahesh Manne
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>
>>
>>  Hi
>>
>> Thank you for this information
>>
>
  as per the postgresql documentation

 If character varying is used without length specifier, the type accepts
strings of any size

but varchar does not accept more than this 

>
>> as per the postgresql documentation   please ignore incomplete above
>> message i mailed to respected community members
>>
>>  If character varying is used without length specifier, the type accepts
>> strings of any size
>>
>> but varchar does not accept more than this 10485760 value
>>
>
>create table test(id serial primary key, str varchar(10485761));
>
>  ERROR: length for type varchar cannot exceed 10485760
>


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Mon, Oct 15, 2018 at 2:32 PM Durgamahesh Manne 
wrote:

>
>
> On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
> wrote:
>
>> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
>> > Hi
>> >
>> > please let me know the max length of varchar  & text in postgres
>>
>> https://www.postgresql.org/docs/10/static/datatype-character.html
>> >
>> >
>> >
>> > Regards
>> >
>> > Durgamahesh Manne
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>
>
>  Hi
>
> Thank you for this information
>
> as per the postgresql documentation   please ignore incomplete message i
> mailed to respected community members
>
>  If character varying is used without length specifier, the type accepts
> strings of any size
>
> but varchar does not accept more than this 10485760 value
>

   create table test(id serial primary key, str varchar(10485761));

 ERROR: length for type varchar cannot exceed 10485760


Re: Regarding varchar max length in postgres

2018-10-15 Thread Durgamahesh Manne
On Fri, Oct 5, 2018 at 8:55 PM Adrian Klaver 
wrote:

> On 10/5/18 8:18 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > please let me know the max length of varchar  & text in postgres
>
> https://www.postgresql.org/docs/10/static/datatype-character.html
> >
> >
> >
> > Regards
> >
> > Durgamahesh Manne
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


 Hi

Thank you for this information

as per the postgresql documentation

 If character varying is used without length specifier, the type accepts
strings of any size

but varchar does not accept more than this


Regarding varchar max length in postgres

2018-10-05 Thread Durgamahesh Manne
Hi

please let me know the max length of varchar  & text in postgres



Regards

Durgamahesh Manne


Re: regarding bdr extension

2018-10-03 Thread Durgamahesh Manne
On Tue, Oct 2, 2018 at 7:22 PM Andreas Kretschmer 
wrote:

>
>
> Am 02.10.2018 um 15:29 schrieb Adrian Klaver:
> >
> >>
> >> does this pgdg repository useful for configuration of bdr v3 ?
> >
> > BDR is a third party extension from 2nd Quadrant, they would be the
> > ones to ask about configuring.
>
> i said it already, BDR3 is not for public, only for our customers. You
> will need a own support contract.
>
>
> Durgamahesh Manne, please contact us, if you are interesst in BDR version
> 3.
>
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
Ok, thank you.  i will contact you for bdr contract support if my company
needs to use bdr v3


Re: regarding bdr extension

2018-10-01 Thread Durgamahesh Manne
On Mon, Oct 1, 2018 at 7:34 PM Adrian Klaver 
wrote:

> On 10/1/18 1:08 AM, Durgamahesh Manne wrote:
> >
> >
> > On Fri, Sep 28, 2018 at 10:43 PM Adrian Klaver
> > mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 9/28/18 8:41 AM, Durgamahesh Manne wrote:
> >  > Hi
> >  >
> >  > This is regarding bdr extension issue. I got below error at the
> > time i
> >  > have tried to create the bdr extention
> >  >
> >  >
> >  > ERROR: could not open extension control file
> >  > "opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No
> > such file
> >  > or directory
> >
> > OS and version?
> >
> > How are you installing BDR, from package or source?
> >
> > What was the exact command that led to the error?
> >
> > Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?
> >
> >  >
> >  >
> >  >
> >  > Regards
> >  > Durgamahesh Manne
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
> >
> >
> >   Hi
> >
> >  OS and version?
> >
> > DISTRIB_ID=Ubuntu
> > DISTRIB_RELEASE=16.04
> > DISTRIB_CODENAME=xenial
> > DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
> > NAME="Ubuntu"
> > VERSION="16.04.3 LTS (Xenial Xerus)"
> > ID=ubuntu
> > ID_LIKE=debian
> > PRETTY_NAME="Ubuntu 16.04.3 LTS"
> > VERSION_ID="16.04"
> > HOME_URL="http://www.ubuntu.com/"
> > SUPPORT_URL="http://help.ubuntu.com/"
> > BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
> > VERSION_CODENAME=xenial
> > UBUNTU_CODENAME=xenial
> >
> > database version is 10
> >
> > 2) How are you installing BDR, from package or source?
> >
> > i am trying to install bdr by using postgres 10 one click installer
>
> This one?:
>
> https://www.postgresql.org/download/linux/
>
> Since you are on Ubuntu why not use the PGDG repo?:
>
> https://www.postgresql.org/download/linux/ubuntu/
>
> >
> > 3) What was the exact command that led to the error?
> >
> >   after  i have installed postgres 10  i ran CREATE EXTENSION bdr led to
> > error
> >
> > 4) Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?
>
> Understood, but the question remains does that file exist in the
> location specified above?
>
> If it does then there is possibly a permissions problem. If it does not
> then either the extension was installed somewhere else or it was not
> installed at all.
>
> >
> >i ran CREATE EXTENSION bdr, which led to the below error:
> >
> >> ERROR: could not open extension control file
> >> "opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file
> >> or directory
> >
> >
> >
> > Regards
> >
> > Durgamahesh Manne
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



 Hi sir,

* Since you are on Ubuntu why not use the PGDG repo?

 In my company environment I am using the PG installer to manage the
Postgres servers.

Is the PGDG repository useful for configuring BDR v3?

If yes, then I can use it for BDR.

Please let me know some information about the PGDG repository in detail.

* Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?

   The bdr.control file is not available.
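
For reference, a sketch of how to check what the server can see;
pg_available_extensions lists every extension with a readable control file,
and the pg_config view shows the share directory the server was built with:

SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'bdr';

SELECT setting FROM pg_config WHERE name = 'SHAREDIR';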


Regards

Durgamahesh Manne


Re: regarding bdr extension

2018-10-01 Thread Durgamahesh Manne
On Fri, Sep 28, 2018 at 10:43 PM Adrian Klaver 
wrote:

> On 9/28/18 8:41 AM, Durgamahesh Manne wrote:
> > Hi
> >
> > This is regarding bdr extension issue. I got below error at the time i
> > have tried to create the bdr extention
> >
> >
> > ERROR: could not open extension control file
> > "opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file
> > or directory
>
> OS and version?
>
> How are you installing BDR, from package or source?
>
> What was the exact command that led to the error?
>
> Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?
>
> >
> >
> >
> > Regards
> > Durgamahesh Manne
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



 Hi

OS and version?

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.3 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

database version is 10

2) How are you installing BDR, from package or source?

i am trying to install bdr by using postgres 10 one click installer

3) What was the exact command that led to the error?

 After I installed Postgres 10, I ran CREATE EXTENSION bdr, which led to an
error.

4) Does opt/PostgreSQL/10/share/postgresql/extension/bdr.control exist?

  I ran CREATE EXTENSION bdr, which led to the below error:

  > ERROR: could not open extension control file
> "opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file
> or directory



   Regards

Durgamahesh Manne


regarding bdr extension

2018-09-28 Thread Durgamahesh Manne
Hi

This is regarding a bdr extension issue. I got the below error when I tried
to create the bdr extension:


ERROR: could not open extension control file
"opt/PostgreSQL/10/share/postgresql/extension/bdr.control": No such file or
directory



Regards
Durgamahesh Manne


Re: *Regarding brin_index on required column of the table

2018-09-24 Thread Durgamahesh Manne
Thank you all very much for this information

On Sat, Sep 22, 2018 at 12:38 AM Alban Hertroys  wrote:

>
>
> > On 21 Sep 2018, at 17:49, Durgamahesh Manne 
> wrote:
> >
> > 
>
> Considering how hard you try to get rid of duplicates, I'm quite convinced
> that you're at least short a few join conditions. Getting rid of duplicates
> early has the added benefit of having to aggregate fewer rows, which should
> drastically improve the performance of this query.
>
> In addition, your approach of using both distinct and group by raises a
> red flag that you're fighting symptoms (most likely: duplicate results)
> without understanding their causes. I can't think of a single situation
> where both are justified in the same (sub-)query.
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>
>


Re: *Regarding brin_index on required column of the table

2018-09-21 Thread Durgamahesh Manne
On Fri, Sep 21, 2018 at 9:12 PM Andreas Kretschmer 
wrote:

>
>
> Am 21.09.2018 um 17:13 schrieb Durgamahesh Manne:
> > query is below
>
> query and plan still not readable. Store it into a textfile and attach
> it here.
>
>
> Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi

Please find below attached query plan file


Regards

Durgamahesh Manne


Attachment: Query (binary data)


Re: *Regarding brin_index on required column of the table

2018-09-21 Thread Durgamahesh Manne
On Fri, Sep 21, 2018 at 7:38 PM Durgamahesh Manne 
wrote:

> Hi
>
> The complex query takes around 30 minutes to execute, even after I
> temporarily increased work_mem to 4GB (total RAM is 16 GB).
>
> The EXPLAIN ANALYZE run also takes around 30 minutes, even though I created
> partial indexes with WHERE conditions on the required columns.
>
>
>
> Below is the query plan for the EXPLAIN ANALYZE query:
>
> | HashAggregate  (cost=16034967.98..16035010.75 rows=4277 width=1710) (actual time=1806653.536..1806680.802 rows=26098 loops=1)
> |   Group Key: (max((v."vchSubmittersCode")::text)), (max((v."vchRecordType")::text)),
> |     (max((v."vchSequenceNumber")::text)), v."vchContractNumber",
> |     v."vchContractPartyRoleQualifier", v."vchPartyRole",
> |     (max((v."vchPartyNatural_Non_NaturalEntity")::text)), (max((v."vchPartyLastName")::text)),
> |     (max((v."vchPartyFirstName")::text)), (max((v."vchPartyMiddleName")::text)),
> |     (max((v."vchPartyPrefix")::text)), (max((v."vchPartySuffix")::text)), (NULL::text),
> |     v."vchPartyID", (max((v."vchPartyIDQualifier")::text)),
> |     (max((v."vchTrustRevocabilityIndicator")::text)), (NULL::text), (NULL::text),
> |     (max((v."vchFiller1")::text)), (max((v."vchRejectCode")::text)),
> |     (max((v."vchContractEntityAddressLine1")::text)), (max((v."vchContractEntityAddressLine2")::text)),
> |     (max((v."vchContractEntityCity")::text)), (max((v."vchContractEntityState")::text)),
> |     (max((v."vchContractEntityZip")::text)), (max((v."vchContractEntityAddressLine3")::text)),
> |     (max((v."vchContractEntityAddressLine4")::text)), (max((v."vchContractEntityAddressLine5")::text)),
> |     (max((v."vchPartyDateofBirth")::text)), (max((v."vchPartyAddressLine1")::text)),
> |     (max((v."vchContractStatus")::text)), (string_agg(DISTINCT btrim((s."vchAgentTaxID")::text), ','::text)),
> |     v."vchPartyRole", (max((v."vchAdvisorLabel")::text)), v."vchFileName",
> |     (max((v.vchpartycity)::text)), (max((v.vchpartystate)::text)), (max((v.vchpartypostalcode)::text)),
> |     (string_agg(DISTINCT btrim((s."vchAgentFirstName")::text) || ' '::text) ||
> |     btrim((s."vchAgentMiddleName")::text)) || ' '::text) || btrim((s."vchAgentLastName")::text)), ','::text))
> |   ->  Append  (cost=48944.67..16034550.97 rows=4277 width=1710) (actual time=3324.233..1806605.691 rows=26098 loops=1)
> |         ->  Unique  (cost=48944.67..49103.43 rows=1764 width=1145) (actual time=3324.233..3327.824 rows=3637 loops=1)
> |               ->  Sort  (cost=48944.67..48949.08 rows=1764 width=1145) (actual time=3324.232..3324.447 rows=3637 loops=1)
> |                     Sort Key: (max((v."vchSubmittersCode")::text)), (max((v."vchRecordType")::text)),
> |                       (max((v."vchSequenceNumber")::text)), v."vchContractNumber",
> |                       v."vchContractPartyRoleQualifier", v."vchPartyRole",
> |                       (max((v."vchPartyNatural_Non_NaturalEntity")::text)), (max((v."vchPartyLastName")::text)),
> |                       (max((v."vchPartyFirstName")::text)), (max((v."vchPartyMiddleName")::text)),
> |                       (max((v."vchPartyPrefix")::text)), (max((v."vchPartySuffix")::text)), v."vchPartyID",
> |                       (max((v."vchPartyIDQualifier")::text)), (max((v."vchTrustRevocabilityIndicator")::text)),
> |                       (max((v."vchFiller1")::text)), (max((v."vchRejectCode")::text)),
> |                       (max((v."vchContractEntityAddressLine1")::text)), (max((v."vchContractEntityAddressLine2")::text)),
> |                       (max((v."vchContractEntityCity")::text)), (max((v."vchContractEntityState")::text)),
> |                       (max((v."vchContractEntityZip")::text)), (max((v."vchContractEntityAddressLine3")::text)),
> |                       (max((v."vchContractEntityAddressLine4")::text)), (max((v."vchContractEntityAddressLine5"):

Re: *Regarding brin_index on required column of the table

2018-09-21 Thread Durgamahesh Manne
|   ->  GroupAggregate  (cost=10588651.59..15985055.52 rows=2513 width=1180) (actual time=847482.207..1802617.045 rows=22461 loops=1)
|         Group Key: j."vchFileName", j."vchContractEntityRole", j."vchContractNumber", j."vchContractEntityPersonalIdentifier"
|         ->  Sort  (cost=10588651.59..10738549.61 rows=59959206 width=677) (actual time=847464.789..1486679.680 rows=61595746 loops=1)
|               Sort Key: j."vchFileName", j."vchContractEntityRole", j."vchContractNumber", j."vchContractEntityPersonalIdentifier"
|               Sort Method: external merge  Disk: 42758304kB
|               ->  Nested Loop  (cost=0.42..266305.78 rows=59959206 width=677) (actual time=0.122..73786.837 rows=61595746 loops=1)
|                     ->  Seq Scan on "table3" j  (cost=0.00..669.12 rows=25132 width=591) (actual time=0.021..28.338 rows=25132 loops=1)
|                           Filter: (NOT "bFetch")
|                     ->  Index Scan using cpr_idx4 on table2 k  (cost=0.42..6.92 rows=365 width=107) (actual time=0.838..2.244 rows=2451 loops=25132)
|                           Index Cond: (("vchAgentTaxID")::text = (j.vchagenttaxid)::text)
| Planning time: 2.369 ms
| Execution time: 1807771.091 ms

 So I need this query to execute in less time. Please help in optimising the
execution time of this complex query.


Regards

Durgamahesh Manne


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 7:25 PM Durgamahesh Manne 
wrote:

>
>
> On Thu, Sep 20, 2018 at 6:39 PM Andreas Kretschmer <
> andr...@a-kretschmer.de> wrote:
>
>>
>>
>> Am 20.09.2018 um 13:11 schrieb Durgamahesh Manne:
>> > Query was executed at less time without distinct
>> >
>> > As well as query was taking around 7 minutes to complete execution
>> > with distinct
>> >
>> >  select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
>> > rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
>> > sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
>> > join table2 sub_head on
>> > rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
>> > rec."bFetch"=false and sub_head."bFetch"=false ;
>> >
>> > I need to execute above distinct query at less time as distinct query
>> > was taking more time to execute  even i have created indexes on
>> > required columns of the tables
>> >
>>
>>  >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual
>> time=326397.551..389515.863 rows=370
>> loops=1) |
>>  > |   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89)
>> (actual time=326397.550..372470.846 rows=4050
>> loops=1)  |
>>  > | Sort Key: sub_head."vchSubmittersCode", rec."vchFileName",
>> rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
>> sub_head."vchValuationDate", rec."vchAdvisorLabel" |
>>  > | Sort Method: external merge  Disk:
>> 3923224kB  |
>>
>>
>> as you can see: there are 40.500.000 rows to sort to filter out
>> duplicate rows, the result contains 'only' 3.700.000 rows. But for this
>> step the database needs nearly 4TB on-disk. This will, of course, need
>> some time.
>>
>> If you have enough ram you can try to set work_mem to 5 or 6 GB to
>> change the plan to a in-memory - sort. But keep in mind, this is
>> dangerous! If the machine don't have enough free ram the kernal can
>> decide to Out-Of-Memory - killing processes.
>>
>> What kind of disks do you have? Maybe you can use a separate fast SSD as
>> temp_tablespaces?
>>
>>
>> Regards, Andreas
>> --
>>
>> 2ndQuadrant - The PostgreSQL Support Company.
>> www.2ndQuadrant.com
>>
>>
>>
> Hi
>
> sdb[HDD]
> sdc[HDD]
> sda[HDD]
>
> i checked that there are hdd's in linux
>
> Regards
>
>
>
hi

 The distinct query executed very fast once I increased the work_mem value
to 3GB temporarily.
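
For reference, the session-level form as a sketch; scoping it to one
connection avoids raising the server-wide default:

SET work_mem = '3GB';
-- run the DISTINCT query, then:
RESET work_mem;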

Thank you very much for this valuable information

Now I would like to ask one question related to built-in BDR replication:

when will built-in BDR replication be available for use in production?

Can I use v3 built-in replication in prod?

Please let me know about the configuration of v3 built-in BDR replication.



Regards

Durgamahesh Manne


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 6:39 PM Andreas Kretschmer 
wrote:

>
>
> Am 20.09.2018 um 13:11 schrieb Durgamahesh Manne:
> > Query was executed at less time without distinct
> >
> > As well as query was taking around 7 minutes to complete execution
> > with distinct
> >
> >  select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> > rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> > sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
> > join table2 sub_head on
> > rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
> > rec."bFetch"=false and sub_head."bFetch"=false ;
> >
> > I need to execute above distinct query at less time as distinct query
> > was taking more time to execute  even i have created indexes on
> > required columns of the tables
> >
>
>  >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual
> time=326397.551..389515.863 rows=370
> loops=1) |
>  > |   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89)
> (actual time=326397.550..372470.846 rows=4050
> loops=1)  |
>  > | Sort Key: sub_head."vchSubmittersCode", rec."vchFileName",
> rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
> sub_head."vchValuationDate", rec."vchAdvisorLabel" |
>  > | Sort Method: external merge  Disk:
> 3923224kB  |
>
>
> as you can see: there are 40.500.000 rows to sort to filter out
> duplicate rows, the result contains 'only' 3.700.000 rows. But for this
> step the database needs nearly 4TB on-disk. This will, of course, need
> some time.
>
> If you have enough ram you can try to set work_mem to 5 or 6 GB to
> change the plan to a in-memory - sort. But keep in mind, this is
> dangerous! If the machine don't have enough free ram the kernal can
> decide to Out-Of-Memory - killing processes.
>
> What kind of disks do you have? Maybe you can use a separate fast SSD as
> temp_tablespaces?
>
>
> Regards, Andreas
> --
>
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi

sdb [HDD]
sdc [HDD]
sda [HDD]

I checked; the disks on this Linux host are HDDs.
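
For reference, Andreas's temp_tablespaces idea as a sketch, assuming a faster
mount (path made up) is available for the sort spill files:

CREATE TABLESPACE fasttemp LOCATION '/mnt/ssd/pg_temp';
SET temp_tablespaces = 'fasttemp';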

Regards


Re: Regrading brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 3:12 PM Durgamahesh Manne 
wrote:

>
>
> Hi
>
> As per your suggestion
>
>
> I ran EXPLAIN ANALYZE for the distinct query.
>
> The size of table1 is 30 MB.
> The size of table2 is 368 KB.
>
>  EXPLAIN ANALYZE select distinct sub_head."vchSubmittersCode",
> rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
> sub_head."vchValuationDate", rec."vchAdvisorLabel"
> from table1 rec
> join table2 sub_head on rec."vchSubmittersCode" = sub_head."vchSubmittersCode"
> where rec."bFetch" = false and sub_head."bFetch" = false;
>
>
>
> Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
>   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>         Sort Method: external merge  Disk: 3923224kB
>         ->  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.339..6939.296 rows=4050 loops=1)
>               Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
>               ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.011..56.998 rows=10 loops=1)
>                     Filter: (NOT "bFetch")
>                     Rows Removed by Filter: 4706
>               ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.319..0.319 rows=405 loops=1)
>                     Buckets: 1024  Batches: 1  Memory Usage: 26kB
>                     ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.005..0.248 rows=405 loops=1)
>                           Filter: (NOT "bFetch")
>                           Rows Removed by Filter: 375
> Planning time: 0.237 ms
> Execution time: 390252.089 ms
>
> So I am unable to reduce the query execution time; it takes around 7
> minutes to execute both with and without indexes.
>
>  Please help in reducing the query execution time.
>
>
> Regards
> Durgamahesh Manne
>
>
> On Wed, Sep 19, 2018 at 9:07 PM Alban Hertroys  wrote:
>
>> The results of explain analyze would shed light on the core problem.
>>
>> My guess is that your conditions are not very selective - ie. most
>> records in both tables have bFetch = false - and therefore you are
>> retrieving most of your data and that is what's taking 7 minutes. No
>> index is going to fix that.
>>
>> If those boolean values are distributed very unevenly (say 99.9% has
>> false and 0.1% has true), you may get better results by excluding the
>> records with 'true' values (instead of including those that are
>> 'false'), for example by using a where not exists(...) subquery.
>>
>> Obviously, that still won't help if you're just fetching a lot of data.
>> On Wed, 19 Sep 2018 at 16:23, Durgamahesh Manne
>>  wrote:
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman 
>> wrote:
>> >>
>> >>
>> >>
>> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
>> >> Sent: Wednesday, September 19, 2018 10:04 AM
>> >> To: Igor Neyman 
>> >> Subject: Re: Regrading brin_index on required column of the table
>> >>
>> >> On Wed, Sep 19, 2018 at 7:19 PM Igor Neyman 
>> wrote:
>> >>
>> >>
>> >>
>> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
>> >> Sent: Wednesday, September 19, 2018 9:43 AM
>> >> To: PostgreSQL mailing lists 
>> >> Subject: Regrading brin_index on required column of the table
>> >>
> >
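
A hedged sketch of Alban's NOT EXISTS rewrite from above, assuming each
table has a primary key column "id" (no key columns are shown in this
thread, so those names are illustrative). Note it matches the original
filter only if "bFetch" is never NULL:

SELECT DISTINCT sub_head."vchSubmittersCode", rec."vchFileName",
       rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
       sub_head."vchValuationDate", rec."vchAdvisorLabel"
FROM table1 rec
JOIN table2 sub_head
  ON rec."vchSubmittersCode" = sub_head."vchSubmittersCode"
WHERE NOT EXISTS (SELECT 1 FROM table1 t      -- exclude the fetched rows
                  WHERE t.id = rec.id AND t."bFetch")
  AND NOT EXISTS (SELECT 1 FROM table2 s
                  WHERE s.id = sub_head.id AND s."bFetch");

As Alban says, this pays off only when the excluded 'true' rows are the
rare ones; if the query still fetches most of the data, the sort stays
the bottleneck.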

Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 3:41 PM Durgamahesh Manne 
wrote:

> hi
> as per your request
> i ran below query without distinct
>
> select  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec join
> table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
> where  rec."bFetch"=false and sub_head."bFetch"=false ;
>
> the above query took around 47 sec to execute
> the above query took around 7 minutes to execute with distinct
>
>
>
> On Thu, Sep 20, 2018 at 3:28 PM Durgamahesh Manne <
> maheshpostgr...@gmail.com> wrote:
>
>>
>>
>> On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer <
>> andr...@a-kretschmer.de> wrote:
>>
>>> Hi,
>>>
>>>
>>> the problem is there:
>>>
>>>
>>> On 20.09.2018 at 11:48, Durgamahesh Manne wrote:
>>> >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
>>> >    ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>>> >          Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>>> >          Sort Method: external merge  Disk: 3923224kB
>>>
>>> Please check the execution time without DISTINCT.
>>>
>>> Regards, Andreas
>>> --
>>>
>>> 2ndQuadrant - The PostgreSQL Support Company.
>>> www.2ndQuadrant.com
>>>
>>>
>> hi
>>
>> as per your request
>>
>> i ran explain analyze query without distinct
>>
>>
>>
>> QUERY PLAN
>> ----------
>>  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.429..6763.942 rows=4050 loops=1)
>>    Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
>>    ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.006..48.610 rows=10 loops=1)
>>          Filter: (NOT "bFetch")
>>          Rows Removed by Filter: 4706
>>    ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.404..0.404 rows=405 loops=1)
>>          Buckets: 1024  Batches: 1  Memory Usage: 26kB
>>          ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.004..0.326 rows=405 loops=1)
>>                Filter: (NOT "bFetch")
>>                Rows Removed by Filter: 375
>>  Planning time: 0.351 ms
>>  Execution time: 8371.819 ms
>> (12 rows)
>>
>>

Hi


The query executed in much less time without DISTINCT,
whereas with DISTINCT it took around 7 minutes to complete:

 select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec join
table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where  rec."bFetch"=false and sub_head."bFetch"=false ;

I need the above DISTINCT query to execute faster; it takes just as long
with the indexes I created on the required columns as it does without
them.


Regards
Durgamahesh Manne
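
The work_mem advice from earlier in the thread can also be scoped to one
transaction, so the setting never leaks into other work; a minimal
sketch (the 5 GB figure comes from Andreas's estimate, not from
testing):

BEGIN;
SET LOCAL work_mem = '5GB';   -- applies only until COMMIT or ROLLBACK
SELECT DISTINCT sub_head."vchSubmittersCode", rec."vchFileName",
       rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
       sub_head."vchValuationDate", rec."vchAdvisorLabel"
FROM table1 rec
JOIN table2 sub_head
  ON rec."vchSubmittersCode" = sub_head."vchSubmittersCode"
WHERE rec."bFetch" = false
  AND sub_head."bFetch" = false;
COMMIT;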


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
Hi,
as per your request,
I ran the query below without DISTINCT:

select  sub_head."vchSubmittersCode" ,rec."vchFileName" ,
rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec join
table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where  rec."bFetch"=false and sub_head."bFetch"=false ;

The above query took around 47 seconds to execute;
with DISTINCT it took around 7 minutes.



On Thu, Sep 20, 2018 at 3:28 PM Durgamahesh Manne 
wrote:

>
>
> On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer <
> andr...@a-kretschmer.de> wrote:
>
>> Hi,
>>
>>
>> the problem is there:
>>
>>
>> On 20.09.2018 at 11:48, Durgamahesh Manne wrote:
>> >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
>> >    ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
>> >          Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>> >          Sort Method: external merge  Disk: 3923224kB
>>
>> Please check the execution time without DISTINCT.
>>
>> Regards, Andreas
>> --
>>
>> 2ndQuadrant - The PostgreSQL Support Company.
>> www.2ndQuadrant.com
>>
>>
> hi
>
> as per your request
>
> i ran explain analyze query without distinct
>
>
>
> QUERY PLAN
> ----------
>  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.429..6763.942 rows=4050 loops=1)
>    Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
>    ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.006..48.610 rows=10 loops=1)
>          Filter: (NOT "bFetch")
>          Rows Removed by Filter: 4706
>    ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.404..0.404 rows=405 loops=1)
>          Buckets: 1024  Batches: 1  Memory Usage: 26kB
>          ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.004..0.326 rows=405 loops=1)
>                Filter: (NOT "bFetch")
>                Rows Removed by Filter: 375
>  Planning time: 0.351 ms
>  Execution time: 8371.819 ms
> (12 rows)
>
>


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Thu, Sep 20, 2018 at 3:22 PM Andreas Kretschmer 
wrote:

> Hi,
>
>
> the problem is there:
>
>
> On 20.09.2018 at 11:48, Durgamahesh Manne wrote:
> >  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
> >    ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
> >          Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
> >          Sort Method: external merge  Disk: 3923224kB
>
> Please check the execution time without DISTINCT.
>
> Regards, Andreas
> --
>
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
Hi,

as per your request,

I ran EXPLAIN ANALYZE on the query without DISTINCT:


QUERY PLAN
----------
 Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.429..6763.942 rows=4050 loops=1)
   Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
   ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.006..48.610 rows=10 loops=1)
         Filter: (NOT "bFetch")
         Rows Removed by Filter: 4706
   ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.404..0.404 rows=405 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 26kB
         ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.004..0.326 rows=405 loops=1)
               Filter: (NOT "bFetch")
               Rows Removed by Filter: 375
 Planning time: 0.351 ms
 Execution time: 8371.819 ms
(12 rows)
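
This plan runs in about 8 seconds with no disk note, while the DISTINCT
variant spills about 3.9 GB via "Sort Method: external merge". A hedged
sketch for making such spills visible in the server log on an ongoing
basis (requires superuser; the threshold is in kB, and 0 logs every
temporary file):

ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();

Every spilling query then leaves a LOG line with the temporary file
size, which makes it easy to spot statements that need more work_mem.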


Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
On Wed, Sep 19, 2018 at 8:27 PM Andreas Kretschmer 
wrote:

>
>
> On 19.09.2018 at 16:43, Durgamahesh Manne wrote:
> >
> >
> > On Wed, Sep 19, 2018 at 8:02 PM Andreas Kretschmer
> > mailto:andr...@a-kretschmer.de>> wrote:
> >
> >
> >
> > > On 19.09.2018 at 15:51, Durgamahesh Manne wrote:
> > > I have created BRIN index on few columns of the table without any
> > > issues. But i am unable to create BRIN index on one column of the
> > > table as i got error listed below
> > >
> > >
> > > [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei
> > using
> > > brin ("dFetch");
> > > ERROR:  data type boolean has no default operator class for access
> > > method "brin"
> > > HINT:  You must specify an operator class for the index or define a
> > > default operator class for the data type.
> >
> > Honestly, a BRIN index on a bool column doesn't make much sense. What do
> > you want to achieve? Maybe a partial index with a where-condition on that
> > column makes much more sense.
> >
> >
> > Regards, Andreas
> >
> > --
> > 2ndQuadrant - The PostgreSQL Support Company.
> > www.2ndQuadrant.com <http://www.2ndQuadrant.com>
> >
> >
> >
> > Hi
> >
> >
> > I want to execute distinct query at less possible time
> >
> > for that reason ,Even i have already tried with BTREE indexes & HASH
> > indexes on required columns .distinct query execution time was not
> reduced
> >
> > select  distinct sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> > rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> > sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  "table1" rec
> > join  "table2" sub_head on
> > rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
> > rec."bFetch"=false and sub_head."bFetch"=false ;
> >
> > Query taken around 7 minutes time to execute with BTREE indexes & HASH
> > indexes on required columns
> >
>
> try an index like
>
> create index idx_t1 on table1 ("vchSubmittersCode") where "bFetch"=false
> and
> create index idx_t2 on table2 ("vchSubmittersCode") where "bFetch"=false
>
> and check if the plan changed and the indexes are in use. You can use
> create index concurrently to prevent lockings.
>
>
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi
 As per your suggestion,

 I have created partial indexes with a WHERE condition on the required columns.

The DISTINCT query execution time was not reduced: the query takes around 7
minutes to execute both with and without the indexes.

So I ran EXPLAIN ANALYZE for the DISTINCT query:

EXPLAIN ANALYZE select  distinct sub_head."vchSubmittersCode"
,rec."vchFileName" , rec."vchCusipFundIDSubFundID" ,
rec."vchFundUnitPrice"
,  sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
join  table2 sub_head on
 rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where
rec."bFetch"=false and sub_head."bFetch"=false ;


 Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
         Sort Method: external merge  Disk: 3923224kB
         ->  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.339..6939.296 rows=4050 loops=1)
               Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
               ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.011..56.998 rows=10 loops=1)
                     Filter: (NOT "bFetch")
                     Rows Removed by Filter: 4706
               ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.319..0.319 rows=405 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 26kB
                     ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.005..0.248 rows=405 loops=1)
                           Filter: (NOT "bFetch")
                           Rows Removed by Filter: 375
 Planning time: 0.237 ms
 Execution time: 390252.089 ms



So please help in reducing the DISTINCT query execution time.

Regards

Durgamahesh Manne
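
For reference, Andreas's partial-index suggestion from earlier in the
thread written out in full, with CREATE INDEX CONCURRENTLY so the builds
do not block writes (the index names are illustrative):

CREATE INDEX CONCURRENTLY idx_t1 ON table1 ("vchSubmittersCode") WHERE "bFetch" = false;
CREATE INDEX CONCURRENTLY idx_t2 ON table2 ("vchSubmittersCode") WHERE "bFetch" = false;

As the plan above shows, though, the scans and the join finish within
seconds; the on-disk sort for DISTINCT accounts for nearly all of the
390 seconds, so no index on these columns can remove that cost.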


Re: Regrading brin_index on required column of the table

2018-09-20 Thread Durgamahesh Manne
Hi

As per your suggestion,


I ran EXPLAIN ANALYZE for the DISTINCT query.

The size of table1 is 30 MB;
the size of table2 is 368 KB.

 EXPLAIN ANALYZE select  distinct sub_head."vchSubmittersCode"
,rec."vchFileName" , rec."vchCusipFundIDSubFundID" ,
rec."vchFundUnitPrice"
,  sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec
join  table2 sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where  rec."bFetch"=false and
sub_head."bFetch"=false ;



 Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=370 loops=1)
   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=4050 loops=1)
         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
         Sort Method: external merge  Disk: 3923224kB
         ->  Hash Join  (cost=21.06..457723.28 rows=40500405 width=89) (actual time=0.339..6939.296 rows=4050 loops=1)
               Hash Cond: ((rec."vchSubmittersCode")::text = (sub_head."vchSubmittersCode")::text)
               ->  Seq Scan on table1 rec  (cost=0.00..1822.66 rows=11 width=80) (actual time=0.011..56.998 rows=10 loops=1)
                     Filter: (NOT "bFetch")
                     Rows Removed by Filter: 4706
               ->  Hash  (cost=16.00..16.00 rows=405 width=11) (actual time=0.319..0.319 rows=405 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 26kB
                     ->  Seq Scan on table2 sub_head  (cost=0.00..16.00 rows=405 width=11) (actual time=0.005..0.248 rows=405 loops=1)
                           Filter: (NOT "bFetch")
                           Rows Removed by Filter: 375
 Planning time: 0.237 ms
 Execution time: 390252.089 ms


So I am unable to reduce the query execution time; it takes around 7
minutes to execute both with and without indexes.

 Please help in reducing the query execution time.


Regards
Durgamahesh Manne


On Wed, Sep 19, 2018 at 9:07 PM Alban Hertroys  wrote:

> The results of explain analyze would shed light on the core problem.
>
> My guess is that your conditions are not very selective - ie. most
> records in both tables have bFetch = false - and therefore you are
> retrieving most of your data and that is what's taking 7 minutes. No
> index is going to fix that.
>
> If those boolean values are distributed very unevenly (say 99.9% has
> false and 0.1% has true), you may get better results by excluding the
> records with 'true' values (instead of including those that are
> 'false'), for example by using a where not exists(...) subquery.
>
> Obviously, that still won't help if you're just fetching a lot of data.
> On Wed, 19 Sep 2018 at 16:23, Durgamahesh Manne
>  wrote:
> >
> >
> >
> >
> >
> >
> > On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman 
> wrote:
> >>
> >>
> >>
> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> >> Sent: Wednesday, September 19, 2018 10:04 AM
> >> To: Igor Neyman 
> >> Subject: Re: Regrading brin_index on required column of the table
> >>
> >> On Wed, Sep 19, 2018 at 7:19 PM Igor Neyman 
> wrote:
> >>
> >>
> >>
> >> From: Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> >> Sent: Wednesday, September 19, 2018 9:43 AM
> >> To: PostgreSQL mailing lists 
> >> Subject: Regrading brin_index on required column of the table
> >>
> >> Hi
> >>
> >> Respected postgres community members
> >>
> >>
> >>
> >> I have created BRIN index on few columns of the table without any
> issues. But i am unable to create BRIN index on one column of the table as
> i got error listed below
> >>
> >>
> >>
> >>
> >>
> >> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using
> brin ("dFetch");
> >>
> >> ERROR:  data type boolean has no default operator class for access
> method "brin"
> >>
> >> HINT:  You must specify an operator class for the index or define a
> >> default operator class for the data type.

Re: *Regarding brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
Hi

I have a complex query, for example:  select  distinct
sub_head."vchSubmittersCode" ,rec."vchFileName" ,
rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  "table1" rec join
"table2" sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where  rec."bFetch"=false and sub_head."bFetch"=false ;


I have already tried BTREE and HASH indexes on the required columns,
but the DISTINCT query execution time was not reduced.


The query takes around 7 minutes to execute with BTREE and HASH indexes
on the required columns.


So please help in reducing the DISTINCT query execution time.

Regards

Durgamahesh Manne


On Wed, Sep 19, 2018 at 7:21 PM Durgamahesh Manne 
wrote:

> Hi
> Respected postgres community members
>
> I have created BRIN index on few columns of the table without any issues.
> But i am unable to create BRIN index on one column of the table as i got
> error listed below
>
>
> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using brin
> ("dFetch");
> ERROR:  data type boolean has no default operator class for access method
> "brin"
> HINT:  You must specify an operator class for the index or define a
> default operator class for the data type.
>
>  below is the column description:
>  Column | datatype | collation | nullable | default | storage
>
>  dFetch | boolean  |           |          | false   | plain
>
>
>
> so please help in creating of the BRIN index on above column of the table .
>
>
>
> Regards
>
> Durgamahesh Manne
>


Re: *Regarding brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
On Wed, Sep 19, 2018 at 8:02 PM Andreas Kretschmer 
wrote:

>
>
> On 19.09.2018 at 15:51, Durgamahesh Manne wrote:
> > I have created BRIN index on few columns of the table without any
> > issues. But i am unable to create BRIN index on one column of the
> > table as i got error listed below
> >
> >
> > [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using
> > brin ("dFetch");
> > ERROR:  data type boolean has no default operator class for access
> > method "brin"
> > HINT:  You must specify an operator class for the index or define a
> > default operator class for the data type.
>
> Honestly, a BRIN index on a bool column doesn't make much sense. What do
> you want to achieve? Maybe a partial index with a where-condition on that
> column makes much more sense.
>
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
>
>
>
Hi


I want to execute the DISTINCT query in as little time as possible.

For that reason I have already tried BTREE and HASH indexes on the
required columns, but the DISTINCT query execution time was not reduced.

select  distinct sub_head."vchSubmittersCode" ,rec."vchFileName" ,
rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  "table1" rec join
"table2" sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
where  rec."bFetch"=false and sub_head."bFetch"=false ;

The query takes around 7 minutes to execute with BTREE and HASH indexes
on the required columns.



Regards

Durgamahesh Manne
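
A quick way to see whether those BTREE/HASH indexes are even considered
is EXPLAIN without ANALYZE, which plans the query but does not execute
it:

EXPLAIN
SELECT DISTINCT sub_head."vchSubmittersCode", rec."vchFileName",
       rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
       sub_head."vchValuationDate", rec."vchAdvisorLabel"
FROM "table1" rec
JOIN "table2" sub_head
  ON rec."vchSubmittersCode" = sub_head."vchSubmittersCode"
WHERE rec."bFetch" = false
  AND sub_head."bFetch" = false;

If the output still shows "Seq Scan" on both tables, as the plans
elsewhere in this thread do, the indexes are not being used, usually
because the planner judges the filters too unselective for an index scan
to pay off.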


Re: Regrading brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
On Wed, Sep 19, 2018 at 7:41 PM Igor Neyman  wrote:

>
>
> *From:* Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> *Sent:* Wednesday, September 19, 2018 10:04 AM
> *To:* Igor Neyman 
> *Subject:* Re: Regrading brin_index on required column of the table
>
> On Wed, Sep 19, 2018 at 7:19 PM Igor Neyman 
> wrote:
>
>
>
> *From:* Durgamahesh Manne [mailto:maheshpostgr...@gmail.com]
> *Sent:* Wednesday, September 19, 2018 9:43 AM
> *To:* PostgreSQL mailing lists 
> *Subject:* Regrading brin_index on required column of the table
>
> Hi
>
> Respected postgres community members
>
>
>
> I have created BRIN index on few columns of the table without any issues.
> But i am unable to create BRIN index on one column of the table as i got
> error listed below
>
>
>
>
>
> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using brin
> ("dFetch");
>
> ERROR:  data type boolean has no default operator class for access method
> "brin"
>
> HINT:  You must specify an operator class for the index or define a
> default operator class for the data type.
>
>
>
>  below is the column description:
>
>  Column | datatype | collation | nullable | default | storage
>
>  dFetch | boolean  |           |          | false   | plain
>
>
>
>
>
>
>
> so please help in creating of the BRIN index on above column of the table .
>
>
>
>
>
>
>
> Regards
>
>
>
> Durgamahesh Manne
>
>
>
> Why would you want a BRIN index on a Boolean-type column?
>
> What kind of interval will you specify?
>
>
>
> Regards,
>
> Igor Neyman
>
>
>
>
>
>
>
>
>
>  Hi
>
>
>
>
>
>  I have complex query like for ex  select  distinct
> sub_head."vchSubmittersCode" ,rec."vchFileName" ,
> rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,
> sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  "table1" rec join
> "table2" sub_head on rec."vchSubmittersCode"=sub_head."vchSubmittersCode"
> where  rec."bFetch"=false and sub_head."bFetch"=false ;
>
>
>
>
>
> Query taken around 7 minutes time to execute without indexes on required
> columns
>
>
>
>  SO  i need to execute this distinct query at less time by creating
> indexes on required columns of the tables
>
>
>
> I have created BRIN indexes on vchSubmittersCode of the two tables.
>
>
>
> I am not able to create BRIN indexes on the bFetch columns, as I got an error:
> ERROR:  data type boolean has no default operator class for access method
> "brin"
>
> HINT:  You must specify an operator class for the index or define a
> default operator class for the data type.
>
>
>
>
>
> so please help in creating of the BRIN index on above column of the table
> as i need to reduce the query execution time
>
>
>
>
>
> Regards
>
>
>
> Durgamahesh Manne
>
> Again, BRIN indexes are not designed to work on Boolean columns. If you want
> to index a Boolean column, just create a regular BTREE index.
>
> Regards,
>
> Igor
>
>
>
>
>

 Hi

I have already tried BTREE and HASH indexes on the required columns,
but the DISTINCT query execution time was not reduced.


The query takes around 7 minutes to execute with BTREE and HASH indexes
on the required columns.


Regards

Durgamahesh Manne
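
Igor's advice above as a concrete statement, plus the partial-index
variant Andreas suggested elsewhere in the thread; the column in the
second, commented-out statement is hypothetical, since only "dFetch" of
the huawei table is shown here:

CREATE INDEX huawei_dfetch_btree ON huawei ("dFetch");
-- partial variant: index the columns queries actually read, restricted
-- to the rows they touch (replace some_column with a real column):
-- CREATE INDEX huawei_partial_idx ON huawei (some_column) WHERE "dFetch" = false;

For a heavily lopsided boolean the partial form is usually the more
useful of the two, since a plain index on a two-valued column rarely
beats a sequential scan.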


*Regarding brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
Hi
Respected postgres community members

I have created BRIN indexes on a few columns of the table without any
issues, but I am unable to create a BRIN index on one column of the
table, as I got the error listed below:


[local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using brin
("dFetch");
ERROR:  data type boolean has no default operator class for access method
"brin"
HINT:  You must specify an operator class for the index or define a default
operator class for the data type.

 below is the column description:
 Column | datatype | collation | nullable | default | storage

 dFetch | boolean  |           |          | false   | plain



So please help in creating the BRIN index on the above column of the table.



Regards

Durgamahesh Manne


Regrading brin_index on required column of the table

2018-09-19 Thread Durgamahesh Manne
Hi
Respected postgres community members

I have created BRIN indexes on a few columns of the table without any
issues, but I am unable to create a BRIN index on one column of the
table, as I got the error listed below:


[local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using brin
("dFetch");
ERROR:  data type boolean has no default operator class for access method
"brin"
HINT:  You must specify an operator class for the index or define a default
operator class for the data type.

 below is the column description:
 Column | datatype | collation | nullable | default | storage

 dFetch | boolean  |           |          | false   | plain



So please help in creating the BRIN index on the above column of the table.



Regards

Durgamahesh Manne
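
For context on the error above: BRIN operator classes exist for types
whose values can be summarized as a min/max range per block range
(integers, timestamps and the like), and there is no default BRIN
opclass for boolean. A hedged illustration of where BRIN does fit, using
a hypothetical "created_at" timestamp column that is not part of the
table shown here:

CREATE INDEX huawei_created_brin ON huawei USING brin ("created_at");

BRIN pays off when a column's values correlate with the physical row
order, as an append-only timestamp typically does; a two-valued boolean
has no useful min/max spread within a block range, which is why a BTREE
or partial index, as suggested in the replies above, is the right tool
for "dFetch".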