Re: [GENERAL] Request to share information regarding deadlock in postgresql-8.1.18

2016-11-13 Thread Yogesh Sharma
Dear John, Thanks for sharing the information. >this is a logic error in your application. OK, I will check. >you can look those relation numbers up in the pg_catalog to see what they are. > you can see what the processes are in pg_stat_activity. Currently the machine is not available. Please suggest

Re: [GENERAL] Request to share information regarding deadlock in postgresql-8.1.18

2016-11-13 Thread John R Pierce
On 11/13/2016 11:01 PM, Yogesh Sharma wrote: DETAIL: Process 12345 waits for AccessShareLock on relation 16446 of database 16385; blocked by process 23136. Process 23136 waits for ShareLock on relation 16482 of database 16385; blocked by process 12345. you have two processes that are each
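The two lookups John describes can be sketched as queries, plugging in the OIDs and PIDs from the error message (column names shown for pg_stat_activity are the 8.1-era ones):

```sql
-- Map the relation OIDs from the deadlock DETAIL line to table names;
-- run while connected to the database with OID 16385.
SELECT oid, relname
FROM pg_catalog.pg_class
WHERE oid IN (16446, 16482);

-- See what the two backends involved were doing. In 8.1 the columns
-- are procpid and current_query (renamed pid and query in 9.2+).
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE procpid IN (12345, 23136);
```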

[GENERAL] Request to share information regarding deadlock in postgresql-8.1.18

2016-11-13 Thread Yogesh Sharma
Dear All, Thanks in advance. On my Linux system, the error below is found, and after 10~12 hours the WAL files in the pg_xlog directory keep increasing. PostgreSQL:ERROR: deadlock detected DETAIL: Process 12345 waits for AccessShareLock on relation 16446 of database 16385; blocked by process

Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Aaron Lewis
Hey guys, I'm trying to understand the performance impact of "Index Recheck". I googled for Index Recheck but didn't find many details about it; where can I learn more about it? And how did you know the performance is being significantly hurt by inadequate work_mem? I'm running PG 9.6.1, built

Re: [GENERAL] Wal files being delayed - Pgsql 9.2

2016-11-13 Thread Venkata B Nagothi
On Mon, Nov 14, 2016 at 1:22 PM, Patrick B wrote: > Hi guys, > > My current scenario is: > > master01 - Postgres 9.2 master DB > slave01 - Postgres 9.2 streaming replication + wal_files slave server for > read-only queries > slave02 - Postgres 9.2 streaming replication

[GENERAL] Wal files being delayed - Pgsql 9.2

2016-11-13 Thread Patrick B
Hi guys, My current scenario is: master01 - Postgres 9.2 master DB slave01 - Postgres 9.2 streaming replication + wal_files slave server for read-only queries slave02 - Postgres 9.2 streaming replication + wal_files slave server @ AWS master01 sends wal_files to both slaves via ssh. *On the

Re: [GENERAL] Sequential parallel scan / agg

2016-11-13 Thread James Sewell
OK, looks like this question is answered by the wiki: https://wiki.postgresql.org/wiki/Parallel_Query > Even when parallel query is enabled in general, the query planner will never generate a parallel plan if any of the following are true: > - The query writes data. If a query contains a
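The first rule quoted from the wiki can be seen directly with EXPLAIN (the table name here is a placeholder, assuming a 9.6 server with parallelism enabled):

```sql
SET max_parallel_workers_per_gather = 4;

-- A plain aggregate over a large table can get a parallel plan
-- (Gather over Partial Aggregate):
EXPLAIN SELECT count(*) FROM big_table;

-- The same SELECT wrapped in a statement that writes data is planned
-- without parallelism, per the rule quoted above:
EXPLAIN CREATE TABLE blah AS SELECT count(*) FROM big_table;
```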

Re: [GENERAL] Sequential parallel scan / agg

2016-11-13 Thread James Sewell
Hi, Yes, same result (non-parallel seq scan) Cheers, James Sewell, PostgreSQL Team Lead / Solutions Architect Suite 112, Jones Bay Wharf, 26-32 Pirrama Road, Pyrmont NSW 2009 *P *(+61) 2 8099 9000 *W* www.jirotech.com *F * (+61) 2 8099 9099

Re: [GENERAL] Sequential parallel scan / agg

2016-11-13 Thread melvin6925
Have you tried: CREATE TABLE blah AS SELECT ... ? Sent via the Samsung Galaxy S® 6, an AT&T 4G LTE smartphone Original message From: James Sewell Date: 11/13/16 18:04 (GMT-05:00) To: pgsql-general Subject: [GENERAL]

[GENERAL] Sequential parallel scan / agg

2016-11-13 Thread James Sewell
Hi all, I have a customer who is making use of the new parallel features on a 64 core box - this is working really well for them. However one issue (which I've since realised I've never seen an example of) is getting the data from these queries into a table. I can't seem to create a plan which

Re: [GENERAL] Gin indexes on intarray is fast when value in array does not exists, and slow, when value exists

2016-11-13 Thread Jeff Janes
On Sat, Nov 12, 2016 at 5:33 PM, otar shavadze wrote: > Tried > > OPERATOR(pg_catalog.@>) > > > as Tom mentioned, but still don't get fast performance when the value does not exist in any array. > Did you build the correct index? > > Also "played" with many
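For context on Jeff's "correct index" question, a sketch of why the operator spelling matters (the column name is hypothetical): with the intarray extension installed, its @> shadows the core array operator, and each is served by a different GIN opclass:

```sql
-- intarray's own GIN opclass: serves the extension's @> operator
CREATE INDEX mytable_tags_intarray ON mytable USING gin (tags gin__int_ops);

-- default core opclass (array_ops): serves OPERATOR(pg_catalog.@>),
-- the spelling Tom suggested
CREATE INDEX mytable_tags_core ON mytable USING gin (tags);
```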

Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
thank you. It listens to the local network, which is my two other MacBooks. I reported the bug to DaVinci. Hopefully they have a fix, then I can put it back to md5. Thank you for your script, I will try it. Best, Robert > On 13 Nov 2016, at 23:27, John R Pierce wrote: > >

Re: [GENERAL] Trigram is slow when 10m rows

2016-11-13 Thread Jeff Janes
On Sun, Nov 13, 2016 at 3:54 AM, Aaron Lewis wrote: > I have a simple table with Trigram index, > > create table mytable(hash char(40), title text); > create index title_trgm_idx on mytable using gin(title gin_trgm_ops); > > When I run a query with 10m rows, it uses

Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
Thank you, I understand. Nobody else can access the database. Once the database backup button in DaVinci started to work, I could see that DaVinci is only making a pg_dump of the database. The restore works fine from any DaVinci installation to another one. I tested this. I still can make a

Re: [GENERAL] Exclude pg_largeobject form pg_dump

2016-11-13 Thread Guillaume Lelarge
2016-11-08 6:01 GMT+01:00 amul sul : > On Tue, Nov 8, 2016 at 5:36 AM, Andreas Joseph Krogh > wrote: > > > > > > I don't see what you mean. It forces dump of Blobs if we didn't use -B > and > > if we include everything in the dump, which seems good to me.

Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread Adrian Klaver
On 11/13/2016 01:01 PM, aws backup wrote: Hi, thank you so much. With your help I could solve all my problems. DaVinci has a bug somewhere in the database configuration. I reinstalled everything and set all auth methods to trust instead of md5, which was the default from DaVinci. Just remember

Re: [GENERAL] Trigram is slow when 10m rows

2016-11-13 Thread Oleg Bartunov
On Sun, Nov 13, 2016 at 5:25 PM, Aaron Lewis wrote: > Thanks Oleg. > > I've increased work_mem to 128MB, now the query falls down to 1.7s, > faster but still not good enough. > > Is there any other thing I can do about it? > your query 'x264' is short in terms of the
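Oleg's point about the short search term can be checked directly; pg_trgm exposes the trigrams it extracts from a string:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- A 4-character term yields only a handful of trigrams, so the GIN
-- index returns many candidate rows and the recheck dominates runtime.
SELECT show_trgm('x264');
```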

Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Tom Lane
Oleg Bartunov writes: > On Sun, Nov 13, 2016 at 6:05 PM, Aaron Lewis >> It takes 500ms with 10m rows, could it be faster? > sure. Recheck with a function call is pretty expensive, so I'd not recommend > creating a functional index, just create

Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Oleg Bartunov
On Sun, Nov 13, 2016 at 6:05 PM, Aaron Lewis wrote: > Sigh, didn't notice that. Thanks for the heads up. > > It takes 500ms with 10m rows, could it be faster? > sure. Recheck with a function call is pretty expensive, so I'd not recommend creating a functional index, just create

Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread John R Pierce
On 11/13/2016 8:51 AM, aws backup wrote: I assume that the postgres database password is the one which is shown in the DaVinci connect database window. user: postgres password: DaVinci there are no database passwords; that's the password for the postgres database role/user. But if I work

Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
Hi Adrian, thank you for the explanation. I will look into your links. I am doing this because I want to make backups of the database. Ideally automatic backups every night. The Blackmagic support says they can't help me with this. The Backup and Restore button in DaVinci does not work. Every
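The nightly automatic backup asked for above could be sketched as a cron-driven pg_dump; the database name, user, and paths below are placeholders, not confirmed by the thread:

```shell
#!/bin/sh
# Hypothetical nightly-backup script; save e.g. as
# /usr/local/bin/nightly_pg_backup.sh and add a crontab entry:
#   0 2 * * * /usr/local/bin/nightly_pg_backup.sh
BACKUP_DIR=/var/backups/postgres
mkdir -p "$BACKUP_DIR"
# Custom-format dump of one database (restore with pg_restore); relies
# on ~/.pgpass or trust auth so cron never hangs on a password prompt.
pg_dump -U postgres -Fc -f "$BACKUP_DIR/davinci_$(date +%F).dump" davinci_db
```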

Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread Adrian Klaver
On 11/13/2016 05:51 AM, aws backup wrote: Hi, now I have another problem. Sorry, I am an absolute beginner. When I restore the dumpall backup with "psql -f infile postgres" I get a lot of errors > "already exists" and the database is not restored to the point of the backup. Yes, because it

Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Aaron Lewis
Sigh, didn't notice that. Thanks for the heads up. It takes 500ms with 10m rows, could it be faster? I've increased work_mem to 256MB test=# explain analyze select * from mytable where to_tsvector('english', title) @@ 'x264'::tsquery limit 1000 offset 10;

Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Julien Rouhaud
On 13/11/2016 15:26, Aaron Lewis wrote: > Hi Oleg, > > Can you elaborate on the title column? I don't get it. > >>> create table mytable(hash char(40), title varchar(500)); >>> create index name_fts on mytable using gin(to_tsvector('english', >>> 'title')); You created an index on the text
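The fix Julien is pointing at: the quoted 'title' is a constant string, not the column, so the expression must be rewritten without quotes, and queries must use the identical expression to match the index (a sketch against the table from the original post):

```sql
-- 'title' (quoted) is a string literal; title (unquoted) is the column.
DROP INDEX IF EXISTS name_fts;
CREATE INDEX name_fts ON mytable
    USING gin (to_tsvector('english', title));

-- The WHERE clause must use the same expression for the planner
-- to consider the index:
SELECT * FROM mytable
WHERE to_tsvector('english', title) @@ 'x264'::tsquery;
```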

Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Aaron Lewis
Hi Oleg, Can you elaborate on the title column? I don't get it. On Sun, Nov 13, 2016 at 10:10 PM, Oleg Bartunov wrote: > > > On Sun, Nov 13, 2016 at 2:50 PM, Aaron Lewis > wrote: >> >> I have a simple table, and a gin index, >> >> create table

Re: [GENERAL] Trigram is slow when 10m rows

2016-11-13 Thread Aaron Lewis
Thanks Oleg. I've increased work_mem to 128MB; now the query drops to 1.7s, faster but still not good enough. Is there anything else I can do about it? test=# explain analyze select * from mytable where title ilike 'x264';

Re: [GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Oleg Bartunov
On Sun, Nov 13, 2016 at 2:50 PM, Aaron Lewis wrote: > I have a simple table, and a gin index, > > create table mytable(hash char(40), title varchar(500)); > create index name_fts on mytable using gin(to_tsvector('english', > 'title')); >

Re: [GENERAL] Trigram is slow when 10m rows

2016-11-13 Thread Oleg Bartunov
On Sun, Nov 13, 2016 at 2:54 PM, Aaron Lewis wrote: > I have a simple table with Trigram index, > > create table mytable(hash char(40), title text); > create index title_trgm_idx on mytable using gin(title gin_trgm_ops); > > When I run a query with 10m rows, it uses

Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
Hi, now I have another problem. Sorry, I am an absolute beginner. When I restore the dumpall backup with "psql -f infile postgres" I get a lot of errors > "already exists" and the database is not restored to the point of the backup. I mean after I made the backup I changed something and I

Re: [GENERAL] pg_dumpall: could not connect to database "template1": FATAL:

2016-11-13 Thread aws backup
Hi Adrian, thank you for the answer. There is one password for the postgres database and one for the postgres user. Somehow neither is working. Is there a way to look up the passwords? I saw in the documentation that there is a .pgpass file, but I can't find it. I changed the auth method to
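For reference, the .pgpass file libpq looks for lives at ~/.pgpass (overridable via PGPASSFILE) and does not exist until you create it; it is also ignored unless its permissions are 0600. A sketch, with the host and password values used only as placeholders:

```shell
# Create a password file so psql/pg_dump don't prompt.
# Format: hostname:port:database:username:password (one rule per line,
# * acts as a wildcard). Values below are placeholders.
PGPASSFILE="${PGPASSFILE:-$HOME/.pgpass}"
cat > "$PGPASSFILE" <<'EOF'
localhost:5432:*:postgres:DaVinci
EOF
chmod 600 "$PGPASSFILE"   # libpq ignores the file unless it is 0600
```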

[GENERAL] Trigram is slow when 10m rows

2016-11-13 Thread Aaron Lewis
I have a simple table with Trigram index, create table mytable(hash char(40), title text); create index title_trgm_idx on mytable using gin(title gin_trgm_ops); When I run a query with 10m rows, it uses the Trigram index, but takes 3s to execute, very slow. (I have 80m rows, but only inserted

[GENERAL] Why is this query not using GIN index?

2016-11-13 Thread Aaron Lewis
I have a simple table, and a gin index, create table mytable(hash char(40), title varchar(500)); create index name_fts on mytable using gin(to_tsvector('english', 'title')); create unique index md5_uniq_idx on mytable(hash); When I execute a query with tsquery, the GIN index was not in use: