Re: [GENERAL] Perl script is killed by SIGPIPE

2017-09-12 Thread Yogesh Sharma
Dear Daniel,
Yes, we are using DBI for the connection.

Basically, we observed that dbh->do() returns nothing when we try to
insert a duplicate entry.
But it does not happen every time.
It returns exit 1 if we try to insert a duplicate entry into the DB.
But sometimes it returns nothing and the child script is killed.

On Tuesday, September 12, 2017, Daniel Verite 
wrote:

> Yogesh Sharma wrote:
>
> > We have found that the child script is killed by signal 13 (SIGPIPE). When a
> > duplicate key violation error occurs, the script is killed, but not every time.
>
> "child script" and this kind of error suggests that a forked process
> inherits a database connection opened by a parent process.
>
> When the database handle goes out of scope, it might
> close the connection to the database, affecting the
> parent process too, since it's the same connection.
>
> If you're using DBI, it has a setting to avoid that issue:
> https://metacpan.org/pod/DBI#InactiveDestroy
>
> Aside from that, inherited connections can't be used
> simultaneously by parent and child process.
> In general, a child process should open and close
> its own connection.
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>


Re: [GENERAL] pg_rewind copy so much data

2017-09-12 Thread Hung Phan
Hi,

Thanks for your response. I have just repeated the switchover of master and slave
once again:

- one master and one slave (total size of each server is more than 4GB).
Currently the last log of the slave is "started  streaming WAL from primary
at 2/D600 on timeline 10".

- stop the master; the slave shows the logs below:
  replication terminated by primary server
  End of WAL reached on timeline 10 at 2/D69304D0
  Invalid record length at 2/D69304D0
  could not connect to primary server

- promote the slave:
  receive promote request
  redo done at 2/D6930460
  selected new timeline ID: 11
  archive recovery complete
  MultiXact member wraparound protections are now enabled
  database system is ready to accept connections
  autovacuum launcher started

- start and stop old master, then run pg_rewind (all are executed
immediately after promoting the slave). Logs of pg_rewind:
  servers diverged at WAL position 2/D69304D0 on timeline 10
  rewinding from last common checkpoint at 2/D6930460 on timeline 10
  reading source file list
  reading target file list
  reading WAL in target
  need to copy 4168 MB (total source directory is 4186 MB)
  4268372/4268372 kB (100%) copied
  creating backup label and updating control file
  syncing target data directory
  Done!

If I run pg_rewind with the debug option, it just shows an additional bunch of
files copied in directories like base or pg_tblspc. I can confirm that there is
no data inserted or modified from the first step onwards. The only difference
between the two servers is caused by restarting the old master.

Thanks and Regards,

Hung Phan



On Wed, Sep 13, 2017 at 10:48 AM, Michael Paquier  wrote:

> On Wed, Sep 13, 2017 at 12:41 PM, Hung Phan  wrote:
> > I have tested pg_rewind (ver 9.5) with the following scenario:
> >
> > - one master and one slave (total size of each server is more than 4GB)
> > - set wal_log_hint=on and restart both
> > - stop master, promote slave
> > - start old master again (now two servers have diverged)
> > - stop old master, run pg_rewind with progress option
>
> That's a good flow. Don't forget to run a manual checkpoint after
> promotion to update the control file of the promoted standby, so that
> pg_rewind is able to identify the timeline difference between the
> source and the target servers.
>
> > The pg_rewind ran successfully, but I saw it copied more than 4GB (4265891 kB
> > copied). So I wonder: there was only a very minor difference between the two
> > servers, so why did pg_rewind copy almost all of the data of the new master?
>
> Without knowing exactly the list of things that have been registered
> as things to copy from the active source to the target, it is hard to
> draw a conclusion. But my bet here is that you left the target server
> online long enough that it had a bunch of blocks updated, causing more
> relation blocks to be copied from the source because more effort
> would be needed to re-sync it. That's only an assumption without clear
> numbers to back it up - numbers that could be found using the --debug
> messages of pg_rewind.
> --
> Michael
>


Re: [GENERAL] pg_rewind copy so much data

2017-09-12 Thread Michael Paquier
On Wed, Sep 13, 2017 at 12:41 PM, Hung Phan  wrote:
> I have tested pg_rewind (ver 9.5) with the following scenario:
>
> - one master and one slave (total size of each server is more than 4GB)
> - set wal_log_hint=on and restart both
> - stop master, promote slave
> - start old master again (now two servers have diverged)
> - stop old master, run pg_rewind with progress option

That's a good flow. Don't forget to run a manual checkpoint after
promotion to update the control file of the promoted standby, so that
pg_rewind is able to identify the timeline difference between the
source and the target servers.
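
For example, a minimal sketch of that step (assuming a psql session on the freshly
promoted standby; run it before invoking pg_rewind):

    -- on the promoted standby, right after promotion
    CHECKPOINT;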

> The pg_rewind ran successfully, but I saw it copied more than 4GB (4265891 kB
> copied). So I wonder: there was only a very minor difference between the two
> servers, so why did pg_rewind copy almost all of the data of the new master?

Without knowing exactly the list of things that have been registered
as things to copy from the active source to the target, it is hard to
draw a conclusion. But my bet here is that you left the target server
online long enough that it had a bunch of blocks updated, causing more
relation blocks to be copied from the source because more effort
would be needed to re-sync it. That's only an assumption without clear
numbers to back it up - numbers that could be found using the --debug
messages of pg_rewind.
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_rewind copy so much data

2017-09-12 Thread Hung Phan
Hi,

I have tested pg_rewind (ver 9.5) with the following scenario:

- one master and one slave (total size of each server is more than 4GB)
- set wal_log_hint=on and restart both
- stop master, promote slave
- start old master again (now two servers have diverged)
- stop old master, run pg_rewind with progress option

The pg_rewind ran successfully, but I saw it copied more than 4GB (4265891
kB copied). So I wonder: there was only a very minor difference between the two
servers, so why did pg_rewind copy almost all of the data of the new master?

Regards,

Hung Phan


Re: [GENERAL] WAL & ready files retained after turning off log shipping

2017-09-12 Thread Michael Paquier
On Tue, Sep 12, 2017 at 11:43 PM, Ron Johnson  wrote:
> On 09/07/2017 09:32 AM, Tom Lane wrote:
>>
>> Ron Johnson  writes:
>>>
>>> On 09/07/2017 09:08 AM, Tom Lane wrote:

 Manual cleanup shouldn't be very hard, fortunately.  Run pg_controldata
 to see where the last checkpoint is, and delete WAL files whose names
 indicate they are before that (but not the one including the
 checkpoint!).

Just noticed: this is actually forgetting that you can only safely remove
WAL segments older than the last *prior* checkpoint, not the last
checkpoint. So you need to keep WAL segments covering the last two completed
checkpoints.

>>> All WAL files after log shipping was stopped will keep accumulating
>>> "forever"?
>>
>> Hmm ... on second thought, I think if you just remove the .ready/.done
>> files, the next checkpoint should clean up the old WAL files.  That'd
>> certainly be safer than doing it manually.
>
> This weekend, (early Sunday morning) WAL files on the master started
> accumulating again.  Now, .ready files are regenerated every time I delete
> them, even though according to pg_controldata the last checkpoint was 28
> minutes ago.

My general advice here would be: do not mess with the contents of
the data folder while Postgres is running; you will never get that
right. What is your archive command reporting, then? If those .ready
files are there, it means that you are not able to archive segments
correctly. It seems to me that in the end you should just try setting
archive_command = '/bin/true'; this would solve all your problems and
trick the server correctly...
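
A minimal sketch of that workaround (assuming 9.4 or later for ALTER SYSTEM; editing
archive_command in postgresql.conf and reloading works just as well):

    -- make every pending segment count as successfully archived
    ALTER SYSTEM SET archive_command = '/bin/true';
    SELECT pg_reload_conf();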
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Aquameta 0.1 - Request for reviews, contributors

2017-09-12 Thread Eric Hanson
Thanks Nico.  I definitely like this syntax better.

--
Eric Hanson
CEO, Aquameta Labs
503-929-1073
www.aquameta.com


On Fri, Sep 8, 2017 at 4:26 PM, Nico Williams  wrote:

>
> Here's a review comment.  Just one for now.
>
> Looking at the meta module, I see things like this:
>
> execute 'select (count(*) = 1) from ' || quote_ident((row_id::meta.schema_id).name)
> || '.' || quote_ident((row_id::meta.relation_id).name) ||
> ' where ' || quote_ident((row_id.pk_column_id).name) ||
> ' = ' || quote_literal(row_id.pk_value)
> into answer;
>
> I recently learned what I find to be a better idiom:
>
> execute format(
> $q$
> select exists (select *
>                  from %1$I.%2$I
>                 where %3$I = %4$L);
> $q$,
> -- interpolated arguments here
> (row_id::meta.schema_id).name, (row_id::meta.relation_id).name,
> (row_id.pk_column_id).name, row_id.pk_value)
>   into answer;
>
> That is, PostgreSQL has extended string literal syntax where you can use
> $stuff$ instead of single-quotes, and that makes it much easier to write
> dynamic (generated for EXECUTE) SQL.  In particular, because your
> $EDITOR [generally] won't recognize this, syntax highlighting for the
> $quoted$ code will work as expected!
>
> This is better not only because it's more concise, easier to line-wrap,
> and easier on the eyes, but also because you get to use format().  I
> suspect using format() makes it harder to forget to quote something
> appropriately -- harder to accidentally create a SQL injection
> vulnerability.  I usually use argument numbering (%1$I) instead of
> referring to them positionally (%I, %L, %s) because it helps a lot
> whenever I need to refer to one of them multiple times.
>
> Of course, this is just a matter of style, but I strongly feel that this
> is the superior style (at least until I find or stumble into a better one),
> especially when you have several layers of trigger functions creating
> more trigger functions, as you can easily nest $foo$-quoted string
> literals by having different quote forms for each level.
>
> Also, I used exists() instead of count(*) = 1 -- that's just my personal
> preference, and a less defensible style matter (it is more verbose...).
>
> Nico
> --
>
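
A minimal sketch of the nesting Nico describes, using a hypothetical function (the
name, argument and query are illustrative only): the function body is quoted with
$body$ while the dynamic SQL inside it is quoted with $q$, so the two levels never
collide:

    create or replace function rows_exist(tbl regclass) returns boolean
    language plpgsql as $body$
    declare
        answer boolean;
    begin
        -- the inner dynamic SQL is $q$-quoted; the surrounding body is $body$-quoted
        execute format($q$ select exists (select * from %s) $q$, tbl) into answer;
        return answer;
    end;
    $body$;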


Re: [GENERAL] Perl script is killed by SIGPIPE

2017-09-12 Thread
> Daniel Verite wrote:
> > Yogesh Sharma wrote:
> 
> > We have found that the child script is killed by signal 13 (SIGPIPE). When a
> > duplicate key violation error occurs, the script is killed, but not every time.
> 
> "child script" and this kind of error suggests that a forked process inherits 
> a database connection opened by a parent process.
> 
> When the database handle goes out of scope, it might close the connection to 
> the database, affecting the parent process too, since it's the same 
> connection.
> 
> If you're using DBI, it has a setting to avoid that issue:
> https://metacpan.org/pod/DBI#InactiveDestroy
> 
> Aside from that, inherited connections can't be used simultaneously by parent 
> and child process.
> In general, a child process should open and close its own connection.

In addition to Daniel's advice, be sure you have a try/catch around your DB 
work. If you've never used
something like that, check out Try::Tiny. Then in the catch, you can print 
$dbh->errstr() to see what's
really going on.

I think Daniel has it correct though that you may not have a valid $dbh, so 
this is an app error.

HTH,
Kevin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL COPY Statement Error On Linux

2017-09-12 Thread Francisco Olarte
George:

On Tue, Sep 12, 2017 at 6:40 PM, George Neuner  wrote:
> Francisco already pointed out that Linux doesn't understand the
> backslashes in the file path, however it should be noted that Windows
> *does* understand forward slashes and that [modulo disk names in
> Windows] you can use forward slash paths on both systems.

That's not strictly correct. Linux understands backslashes in paths
fine, they are just not a directory separator ( there are only two
reserved byte values in path names, IIRC, slash for dir sep and nul
for string end, C issues ). Windows, OTOH, inherits path separator
logic from MSDOS 2.0 and, if it hasn't changed in the last fifteen
years, treats either kind of slash as a separator.

But the issue is that Windows treats a \\netname\resource prefix as a
network request, and transforms it internally, while Linux does not. In
*ix you have to connect to the machine and mount its resources first,
similarly to what you do with local disks. Normally //x is treated the
same as /x, as shown here:

folarte@n:~$ ls -ldi /tmp //tmp ///tmp
655361 drwxrwxrwt 14 root root 45056 Sep 12 19:17 /tmp
655361 drwxrwxrwt 14 root root 45056 Sep 12 19:17 //tmp
655361 drwxrwxrwt 14 root root 45056 Sep 12 19:17 ///tmp

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL COPY Statement Error On Linux

2017-09-12 Thread George Neuner
On Tue, 12 Sep 2017 11:30:02 +0100, Osahon Oduware
 wrote:

>I am trying to utilize the "COPY" statement below to copy a .CSV file to a
>table in a PostgreSQL database.:
>*COPY .() FROM
>'\\shared\network\path\to\csv\test.csv' DELIMITER ',' CSV HEADER;*
>
>This works with a PostgreSQL database installed in a WINDOWS environment
>(Windows 7), but fails with the following error with a similar PostgreSQL
>database in a Linux environment (Centos 7):
>*org.postgresql.util.PSQLException: ERROR: could not open file
>"\\shared\network\path\to\csv\test.csv" for reading: No such file or
>directory*
>

Francisco already pointed out that Linux doesn't understand the
backslashes in the file path, however it should be noted that Windows
*does* understand forward slashes and that [modulo disk names in
Windows] you can use forward slash paths on both systems.

George



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] contrecord is requested

2017-09-12 Thread Scott Marlowe
On Tue, Sep 12, 2017 at 10:19 AM, Jeff Janes  wrote:
> On Mon, Sep 11, 2017 at 1:27 PM, Scott Marlowe 
> wrote:
>>
>> So we have a db we're trying to rewind and get synced to the master.
>> pg_rewind says it doesn't need rewinding, and when we try to bring it
>> up, it gets this error:
>>
>> "contrecord is requested by 2E7/4028"
>>
>> And fails to get back up.
>>
>> Is this a known issue? Possible bug in the continuation record code?
>>
>> The only references I can find for it are in the xlogreader code.
>
>
> I've seen this twice lately and both times it was user error.
>
> One time someone who shall remain nameless made a replica of a remote QA
> server using "pg_basebackup -R ...", but then copied the *.conf files
> **including recovery.conf** from the running replica of the remote
> production server into the new directory for the replica of the remote QA
> server.  So primary_conninfo had been overwritten to point to the wrong
> master server.
>
> The other time someone who shall also remain nameless accidentally fully
> opened up a newly cloned copy (from a cold backup, I think) of a dummy
> benchmarking server, instead of putting it into standby.  And then tried to
> shut it down and re-open it as a standby without doing a full refresh. But
> of course it was too late to do that.

Thanks will check on that.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL COPY Statement Error On Linux

2017-09-12 Thread Osahon Oduware
Hi Francisco,

Thanks for your response. I will try this out and give you feedback.

On Tue, Sep 12, 2017 at 12:00 PM, Francisco Olarte 
wrote:

> On Tue, Sep 12, 2017 at 12:30 PM, Osahon Oduware 
> wrote:
> > I am trying to utilize the "COPY" statement below to copy a .CSV file to
> a
> > table in a PostgreSQL database.:
> > COPY .() FROM
> > '\\shared\network\path\to\csv\test.csv' DELIMITER ',' CSV HEADER;
> >
> > This works with a PostgreSQL database installed in a WINDOWS environment
> > (Windows 7), but fails with the following error with a similar PostgreSQL
> > database in a Linux environment (Centos 7):
> > org.postgresql.util.PSQLException: ERROR: could not open file
> > "\\shared\network\path\to\csv\test.csv" for reading: No such file or
> > directory
> >
> > I have granted READ access to EVERYONE on the CSV folder on the network
> path
> > as depicted in the attached image.
> > Could someone point me to the reason for the failure in Linux?
>
> You are invoking a server-side copy. This means the SERVER needs to be
> able to access the file under the name you've given to it.
>
> The network path you have given is valid on Windows machines ( a UNC
> path? It's been a decade and a half since I last used Windows ), but not
> on Linux. Typically on Linux you mount the shared folder /some/where
> and type the path as /some/where/path/to/csv/test.csv.
>
> You may need a CLIENT copy. I do not see which client program
> you are using, it may be some fancy GUI stuff, in which case I cannot
> help you. If you are using the standard "psql" tool you can just use
> \copy. As explained in the docs this just does "copy from stdin" ( or
> to stdout ) on the client side and redirects the file you give in the
> command line ( or you can issue a [psql ... -c "copy ...from stdin"]
> in a command line and feed the file via shell redirections, but, IIRC,
> windows shells are terrible at quoting arguments and redirecting i/o,
> so it may be better to avoid it).
>
> Francisco Olarte.
>


Re: [GENERAL] PostgreSQL COPY Statement Error On Linux

2017-09-12 Thread Osahon Oduware
Hi Charles,

Thanks for your response. I will try this out and give you feedback.

On Tue, Sep 12, 2017 at 12:01 PM, Charles Clavadetscher <
clavadetsc...@swisspug.org> wrote:

> Hello
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:pgsql-general-owner@
> postgresql.org] *On Behalf Of *Osahon Oduware
> *Sent:* Dienstag, 12. September 2017 12:30
> *To:* pgsql-general@postgresql.org
> *Subject:* [GENERAL] PostgreSQL COPY Statement Error On Linux
>
>
>
> Hi All,
>
> I am trying to utilize the "COPY" statement below to copy a .CSV file to a
> table in a PostgreSQL database.:
> *COPY .() FROM
> '\\shared\network\path\to\csv\test.csv' DELIMITER ',' CSV HEADER;*
>
> This works with a PostgreSQL database installed in a WINDOWS environment
> (Windows 7), but fails with the following error with a similar PostgreSQL
> database in a Linux environment (Centos 7):
> *org.postgresql.util.PSQLException: ERROR: could not open file
> "\\shared\network\path\to\csv\test.csv" for reading: No such file or
> directory*
>
> It looks like the share is not visible to the Linux system. You probably
> need to mount it first using Samba and then access it through the mount
> point using forward slashes ( / ) instead of backslashes.
>
> Instructions on how to mount a Windows share in Linux can be found on the
> internet. Since I am not an expert on this myself, I can’t give you more
> concrete instructions.
>
>
>
> This may help: http://www.serverlab.ca/tutorials/linux/storage-file-
> systems-linux/mounting-smb-shares-centos-7/
>
>
>
> An alternative would be to copy the file to the Linux system using e.g.
> scp or sftp and then load it locally.
>
>
>
> Hope this helps.
>
> Bye
>
> Charles
>
>
> I have granted READ access to EVERYONE on the CSV folder on the network
> path as depicted in the attached image.
>
> Could someone point me to the reason for the failure in Linux?
>


Re: [GENERAL] contrecord is requested

2017-09-12 Thread Jeff Janes
On Mon, Sep 11, 2017 at 1:27 PM, Scott Marlowe 
wrote:

> So we have a db we're trying to rewind and get synced to the master.
> pg_rewind says it doesn't need rewinding, and when we try to bring it
> up, it gets this error:
>
> "contrecord is requested by 2E7/4028"
>
> And fails to get back up.
>
> Is this a known issue? Possible bug in the continuation record code?
>
> The only references I can find for it are in the xlogreader code.
>

I've seen this twice lately and both times it was user error.

One time someone who shall remain nameless made a replica of a remote QA
server using "pg_basebackup -R ...", but then copied the *.conf files
**including recovery.conf** from the running replica of the remote
production server into the new directory for the replica of the remote QA
server.  So primary_conninfo had been overwritten to point to the wrong
master server.

The other time someone who shall also remain nameless accidentally fully
opened up a newly cloned copy (from a cold backup, I think) of a dummy
benchmarking server, instead of putting it into standby.  And then tried to
shut it down and re-open it as a standby without doing a full refresh. But
of course it was too late to do that.

Cheers,

Nameless


Re: [GENERAL] pgxn manager down

2017-09-12 Thread hubert depesz lubaczewski
On Tue, Sep 12, 2017 at 02:56:26PM +0200, Chris Travers wrote:
> Normally I would not email the general list over this but it has been over
> a day and the google group for pgxn seems low enough traffic I figure I
> would mention it here.
> 
> manager.pgxn.org is giving internal server errors (which means no new
> extensions can be released on the platform).
> 
> If folks are working on this, is there an ETA on a fix?
> 
> Is there anything I can do to help?

Hi,
looks like it's fixed.

It's hosted on my server, and being taken care of by David - you can
reach him on twitter via @pgxn.

As far as I can tell, new requests work fine.

Best regards,

depesz



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WAL & ready files retained after turning off log shipping

2017-09-12 Thread Ron Johnson

On 09/07/2017 09:32 AM, Tom Lane wrote:

Ron Johnson  writes:

On 09/07/2017 09:08 AM, Tom Lane wrote:

Manual cleanup shouldn't be very hard, fortunately.  Run pg_controldata
to see where the last checkpoint is, and delete WAL files whose names
indicate they are before that (but not the one including the checkpoint!).

All WAL files after log shipping was stopped will keep accumulating "forever"?

Hmm ... on second thought, I think if you just remove the .ready/.done
files, the next checkpoint should clean up the old WAL files.  That'd
certainly be safer than doing it manually.


This weekend, (early Sunday morning) WAL files on the master started 
accumulating again.  Now, .ready files are regenerated every time I delete 
them, even though according to pg_controldata the last checkpoint was 28 
minutes ago.


--
World Peace Through Nuclear Pacification



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Pavel Stehule
2017-09-12 14:01 GMT+02:00 Tom Lane :

> "Frank Millman"  writes:
> > Pavel Stehule wrote:
> >> 2017-09-12 8:45 GMT+02:00 Frank Millman :
> >>>   I am experimenting with optimising a SQL statement. One version uses
> 4 LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the
> filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no
> CASE statements.
>
> >> you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher
> number 14 maybe 16
>
> > I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.
>
> For this you need to increase join_collapse_limit, not
> from_collapse_limit.  (Usually, though, there's little reason not to keep
> them the same.)
>

sure - my mistake - I thought so.

Thank you

Pavel

>
> regards, tom lane
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Chris Travers
On Tue, Sep 12, 2017 at 3:15 PM, Frank Millman  wrote:

>
> 2017-09-12 12:39 GMT+02:00 Pavel Stehule :
>
>>
>>
>> 2017-09-12 12:25 GMT+02:00 Frank Millman :
>>
>>> Pavel Stehule wrote:
>>>
>>> 2017-09-12 9:36 GMT+02:00 Frank Millman :
>>>
 Pavel Stehule wrote:
 >
 > 2017-09-12 8:45 GMT+02:00 Frank Millman :

> I am using 9.4.4 on Fedora 22.
>
> I am experimenting with optimising a SQL statement. One version uses 4
> LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the
> filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no
> CASE statements.
>
> My test involves selecting a single row. Both versions work. The first
> version takes 0.06 seconds. The second takes 0.23 seconds. On further
> experimentation, the time for the second one seems to be taken in setting up
> the joins, because if I omit selecting anything from the joined tables, it
> still takes 0.23 seconds.
>
 >
 > please send result of explain analyze
 >
 > you can experimentally try increase FROM_COLLAPSE_LIMIT to some
 higher number 14 maybe 16
 >
 I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.

 I have attached files containing my SQL command, and the results of
 EXPLAIN ANALYSE

>>>
>>> > please use https://explain.depesz.com/ for both plans (slow, fast)
>>>
>>>
>>> Here are the results -
>>>
>>> sql_slow - https://explain.depesz.com/s/9vn3
>>>
>>> sql_fast - https://explain.depesz.com/s/oW0F
>>>
>>
>> I don't see any issue there - it looks like some multi-dimensional query,
>> and it cannot be optimized well due to imprecise estimations. The slow
>> query is much more complex - some bigger logic is under a nested loop,
>> where the estimation is not fully correct, probably due to dependencies
>> between columns.
>>
>> What do you get with SET enable_nestloop TO off;?
>>
> >
> > From the statistics, the ar_tran_inv table is scanned 6 times in the slow query
> > and 2 times in the fast query. Maybe there should be some index?
> >
>
> Setting enable_nestloop to off makes no difference.
>
> Setting from_collapse_limit and join_collapse_limit to 16, as suggested by
> Tom, actually slowed it down.
>
> I mentioned before that I was running this from python, which complicated
> it slightly. I have now saved the command to a file on the Fedora side, so
> I can execute it in psql using the ‘\i’ command. It makes life easier, and
> I can use ‘\timing’ to time it. It shows exactly the same results.
>
> It could be an index problem, but I have just double-checked that, if I
> remove the lines from the body of the statement that actually select from
> the joined tables, it makes virtually no difference. However, maybe the
> planner checks to see what indexes it has before preparing the query, so
> that does not rule it out as a possibility.
>
> I will play with it some more tomorrow, when my  brain is a bit fresher. I
> will report back with any results.
>

I am not convinced that the nested loop is a problem here.  I cannot think
of a faster join plan than a nested loop when you only have one iteration
of the loop (and looking through I did not see any loop counts above 1).

If you read and count ms carefully you will find that ar_tran_inv is
scanned 6 times and each of these scans takes about 25ms.  25x6 is half
of your query time right there, and then you have the overhead of the joins
on top of that.  A quick eyeball estimate is that this is where approximately
200ms of your query time comes from.  Looking at this in more detail, it
doesn't look like the number of joins by itself is the issue.

This is not a problem with too many tables in the join, but with the fact that
you are joining the same tables multiple times, in ways that mean you end up
repeatedly sequentially scanning them.

I also don't think an index is going to help unless you have accounting
data going way back (since you are looking for about a year's worth of
data) or unless 90% of your transactions get marked as deleted.  So I think
you are stuck with the sequential scans on this table and optimizing will
probably mean reducing the number of times you scan that table.
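
One way to cut down the repeated scans, sketched very roughly below, is to join
ar_trans/ar_trans_due once and move the ageing buckets into aggregate FILTER clauses
(available in 9.4, which Frank is on). Column names are taken from the posted query
and the literal parameter values are made up, so treat this as an illustration of the
idea, not a drop-in replacement:

    SELECT a.row_id,
           COALESCE(SUM(due.amount_cust) FILTER (WHERE trans.tran_date >  '2015-08-31'), 0)
               AS bal_cust_curr,
           COALESCE(SUM(due.amount_cust) FILTER (WHERE trans.tran_date >  '2015-07-31'
                                             AND trans.tran_date <= '2015-08-31'), 0)
               AS bal_cust_30
    FROM ccc.ar_customers a
    LEFT JOIN ccc.ar_trans trans ON trans.cust_row_id = a.row_id
    LEFT JOIN ccc.ar_trans_due due ON due.tran_type = trans.tran_type
                                  AND due.tran_row_id = trans.tran_row_id
    WHERE a.ledger_row_id = 1 AND a.party_row_id = 1 AND a.deleted_id = 0
    GROUP BY a.row_id;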

>
> Frank
>
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] Perl script is killed by SIGPIPE

2017-09-12 Thread Daniel Verite
Yogesh Sharma wrote:

> We have found that the child script is killed by signal 13 (SIGPIPE). When a
> duplicate key violation error occurs, the script is killed, but not every time.

"child script" and this kind of error suggests that a forked process
inherits a database connection opened by a parent process.

When the database handle goes out of scope, it might
close the connection to the database, affecting the
parent process too, since it's the same connection.

If you're using DBI, it has a setting to avoid that issue:
https://metacpan.org/pod/DBI#InactiveDestroy

Aside from that, inherited connections can't be used
simultaneously by parent and child process.
In general, a child process should open and close
its own connection.

Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Frank Millman

2017-09-12 12:39 GMT+02:00 Pavel Stehule :



  2017-09-12 12:25 GMT+02:00 Frank Millman :

Pavel Stehule wrote:

2017-09-12 9:36 GMT+02:00 Frank Millman :

  Pavel Stehule wrote:
  > 
  > 2017-09-12 8:45 GMT+02:00 Frank Millman :

I am using 9.4.4 on Fedora 22.

I am experimenting with optimising a SQL statement. One version uses 4 
LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the 
filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no CASE 
statements.

My test involves selecting a single row. Both versions work. The first 
version takes 0.06 seconds. The second takes 0.23 seconds. On further 
experimentation, the time for the second one seems to be taken in setting up the 
joins, because if I omit selecting anything from the joined tables, it still 
takes 0.23 seconds.
  > 
  > please send result of explain analyze
  > 
  > you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher 
number 14 maybe 16
  > 
  I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.

  I have attached files containing my SQL command, and the results of 
EXPLAIN ANALYSE

> please use https://explain.depesz.com/ for both plans (slow, fast)


Here are the results - 

sql_slow - https://explain.depesz.com/s/9vn3

sql_fast - https://explain.depesz.com/s/oW0F

  I don't see any issue there - it looks like some multi-dimensional query, and 
it cannot be optimized well due to imprecise estimations. The slow query 
is much more complex - some bigger logic is under a nested loop, where 
the estimation is not fully correct, probably due to dependencies between columns.

  What do you get with SET enable_nestloop TO off;?
> 
> From the statistics, the ar_tran_inv table is scanned 6 times in the slow query 
> and 2 times in the fast query. Maybe there should be some index? 
>

Setting enable_nestloop to off makes no difference.

Setting from_collapse_limit and join_collapse_limit to 16, as suggested by Tom, 
actually slowed it down.

I mentioned before that I was running this from python, which complicated it 
slightly. I have now saved the command to a file on the Fedora side, so I can 
execute it in psql using the ‘\i’ command. It makes life easier, and I can use 
‘\timing’ to time it. It shows exactly the same results.

It could be an index problem, but I have just double-checked that, if I remove 
the lines from the body of the statement that actually select from the joined 
tables, it makes virtually no difference. However, maybe the planner checks to 
see what indexes it has before preparing the query, so that does not rule it 
out as a possibility.

I will play with it some more tomorrow, when my  brain is a bit fresher. I will 
report back with any results.

Frank


[GENERAL] pgxn manager down

2017-09-12 Thread Chris Travers
Hi;

Normally I would not email the general list over this but it has been over
a day and the google group for pgxn seems low enough traffic I figure I
would mention it here.

manager.pgxn.org is giving internal server errors (which means no new
extensions can be released on the platform).

If folks are working on this, is there an ETA on a fix?

Is there anything I can do to help?

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Tom Lane
"Frank Millman"  writes:
> Pavel Stehule wrote:
>> 2017-09-12 8:45 GMT+02:00 Frank Millman :
>>>   I am experimenting with optimising a SQL statement. One version uses 4 
>>> LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the 
>>> filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no 
>>> CASE statements.

>> you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher 
>> number 14 maybe 16

> I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.

For this you need to increase join_collapse_limit, not
from_collapse_limit.  (Usually, though, there's little reason not to keep
them the same.)
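
For example (a quick sketch to try in the same session before re-running the query):

    SET join_collapse_limit = 16;
    SET from_collapse_limit = 16;
    -- then re-run EXPLAIN ANALYZE on the 16-join statement in this session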

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Perl script is killed by SIGPIPE

2017-09-12 Thread Vick Khera
On Mon, Sep 11, 2017 at 9:02 PM, Yogesh Sharma 
wrote:

> Dear All,
>
>
> We have one Perl script that opens a DB connection and performs an
> INSERT operation. When a duplicate entry comes in a query, Postgres
> forcefully kills the process itself, and the "unexpected EOF on
> client connection" error appears in the Postgres log.
> This issue does not occur every time.
> We have found that the child script is killed by signal 13 (SIGPIPE). When a
> duplicate key violation error occurs, the script is killed, but not every time.
>
>
I'm going to bet that the SIGPIPE is totally unrelated to having a
duplicate key violation. You will need to provide much more detail of what
you observe to figure this out.


Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Pavel Stehule
2017-09-12 12:39 GMT+02:00 Pavel Stehule :

>
>
> 2017-09-12 12:25 GMT+02:00 Frank Millman :
>
>> Pavel Stehule wrote:
>>
>> 2017-09-12 9:36 GMT+02:00 Frank Millman :
>>
>>> Pavel Stehule wrote:
>>> >
>>> > 2017-09-12 8:45 GMT+02:00 Frank Millman :
>>>
 I am using 9.4.4 on Fedora 22.

 I am experimenting with optimising a SQL statement. One version uses 4
 LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the
 filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no
 CASE statements.

 My test involves selecting a single row. Both versions work. The first
 version takes 0.06 seconds. The second takes 0.23 seconds. On further
 experimentation, the time for the second one seems to be taken in setting up
 the joins, because if I omit selecting anything from the joined tables, it
 still takes 0.23 seconds.

>>> >
>>> > please send result of explain analyze
>>> >
>>> > you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher
>>> number 14 maybe 16
>>> >
>>> I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.
>>>
>>> I have attached files containing my SQL command, and the results of
>>> EXPLAIN ANALYSE
>>>
>>
>> > please use https://explain.depesz.com/ for both plans (slow, fast)
>>
>>
>> Here are the results -
>>
>> sql_slow - https://explain.depesz.com/s/9vn3
>>
>> sql_fast - https://explain.depesz.com/s/oW0F
>>
>
> I don't see any issue there - it looks like some multi-dimensional query,
> and it cannot be optimized well due to imprecise estimations. The slow
> query is much more complex - some bigger logic is under a nested loop,
> where the estimation is not fully correct, probably due to dependencies
> between columns.
>
> What do you get with SET enable_nestloop TO off;?
>

From the statistics, the ar_tran_inv table is scanned 6 times in the slow query
and 2 times in the fast query. Maybe there should be some index?



> ?
>
> Regards
>
> Pavel
>
>
>
>>
>> Frank
>>
>>
>
>


Re: [GENERAL] PostgreSQL COPY Statement Error On Linux

2017-09-12 Thread Francisco Olarte
On Tue, Sep 12, 2017 at 12:30 PM, Osahon Oduware  wrote:
> I am trying to utilize the "COPY" statement below to copy a .CSV file to a
> table in a PostgreSQL database.:
> COPY .() FROM
> '\\shared\network\path\to\csv\test.csv' DELIMITER ',' CSV HEADER;
>
> This works with a PostgreSQL database installed in a WINDOWS environment
> (Windows 7), but fails with the following error with a similar PostgreSQL
> database in a Linux environment (Centos 7):
> org.postgresql.util.PSQLException: ERROR: could not open file
> "\\shared\network\path\to\csv\test.csv" for reading: No such file or
> directory
>
> I have granted READ access to EVERYONE on the CSV folder on the network path
> as depicted in the attached image.
> Could someone point me to the reason for the failure in Linux?

You are invoking a server-side copy. This means the SERVER needs to be
able to access the file under the name you've given to it.

The network path you have given is valid on Windows machines ( a UNC
path? It's been a decade and a half since I last used Windows ), but not
on Linux. Typically on Linux you mount the shared folder /some/where
and type the path as /some/where/path/to/csv/test.csv.

You may need a CLIENT copy. I do not see which client program
you are using, it may be some fancy GUI stuff, in which case I cannot
help you. If you are using the standard "psql" tool you can just use
\copy. As explained in the docs this just does "copy from stdin" ( or
to stdout ) on the client side and redirects the file you give in the
command line ( or you can issue a [psql ... -c "copy ...from stdin"]
in a command line and feed the file via shell redirections, but, IIRC,
windows shells are terrible at quoting arguments and redirecting i/o,
so it may be better to avoid it).
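
A minimal sketch of the client-side variant in psql (the mount point, schema and table
names here are made up; substitute the real ones):

    -- after mounting the share at e.g. /mnt/shared on the client machine
    \copy myschema.mytable FROM '/mnt/shared/path/to/csv/test.csv' WITH (FORMAT csv, HEADER, DELIMITER ',')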

Francisco Olarte.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL COPY Statement Error On Linux

2017-09-12 Thread Charles Clavadetscher
Hello

 

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Osahon Oduware
Sent: Dienstag, 12. September 2017 12:30
To: pgsql-general@postgresql.org
Subject: [GENERAL] PostgreSQL COPY Statement Error On Linux

 

Hi All,

I am trying to utilize the "COPY" statement below to copy a .CSV file to a 
table in a PostgreSQL database.:
COPY .() FROM 
'\\shared\network\path\to\csv\test.csv' DELIMITER ',' CSV HEADER;

This works with a PostgreSQL database installed in a WINDOWS environment 
(Windows 7), but fails with the following error with a similar PostgreSQL 
database in a Linux environment (Centos 7):
org.postgresql.util.PSQLException: ERROR: could not open file 
"\\shared\network\path\to\csv\test.csv 
 " for reading: No such file or 
directory



It looks like the share is not visible to the Linux system. You probably need 
to mount it first using Samba and then access it through the mount point using 
forward slashes ( / ) instead of backslashes.

Instructions on how to mount a Windows share in Linux can be found on the 
internet. Since I am not an expert on this myself, I can’t give you more 
concrete instructions.

 

This may help: 
http://www.serverlab.ca/tutorials/linux/storage-file-systems-linux/mounting-smb-shares-centos-7/

 

An alternative would be to copy the file to the Linux system using e.g. scp or 
sftp and then load it locally.

 

Hope this helps.

Bye

Charles


I have granted READ access to EVERYONE on the CSV folder on the network path as 
depicted in the attached image. 

Could someone point me to the reason for the failure in Linux?



Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Pavel Stehule
2017-09-12 12:25 GMT+02:00 Frank Millman :

> Pavel Stehule wrote:
>
> 2017-09-12 9:36 GMT+02:00 Frank Millman :
>
>> Pavel Stehule wrote:
>> >
>> > 2017-09-12 8:45 GMT+02:00 Frank Millman :
>>
>>> I am using 9.4.4 on Fedora 22.
>>>
>>> I am experimenting with optimising a SQL statement. One version uses 4
>>> LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the
>>> filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no
>>> CASE statements.
>>>
>>> My test involves selecting a single row. Both versions work. The first
>>> version takes 0.06 seconds. The second takes 0.23 seconds. On further
>> experimentation, the time for the second one seems to be taken in setting up
>>> the joins, because if I omit selecting anything from the joined tables, it
>>> still takes 0.23 seconds.
>>>
>> >
>> > please send result of explain analyze
>> >
>> > you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher
>> number 14 maybe 16
>> >
>> I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.
>>
>> I have attached files containing my SQL command, and the results of
>> EXPLAIN ANALYSE
>>
>
> > please use https://explain.depesz.com/ for both plans (slow, fast)
>
>
> Here are the results -
>
> sql_slow - https://explain.depesz.com/s/9vn3
>
> sql_fast - https://explain.depesz.com/s/oW0F
>

I don't see any issue there - it looks like some multi-dimensional query,
and it cannot be optimized well due to imprecise estimations. The slow
query is much more complex - some bigger logic is under a nested loop,
where the estimation is not fully correct, probably due to dependencies
between columns.

What do you get with SET enable_nestloop TO off;?

Regards

Pavel



>
> Frank
>
>


[GENERAL] PostgreSQL COPY Statement Error On Linux

2017-09-12 Thread Osahon Oduware
Hi All,

I am trying to utilize the "COPY" statement below to copy a .CSV file to a
table in a PostgreSQL database.:
*COPY .() FROM
'\\shared\network\path\to\csv\test.csv' DELIMITER ',' CSV HEADER;*

This works with a PostgreSQL database installed in a WINDOWS environment
(Windows 7), but fails with the following error with a similar PostgreSQL
database in a Linux environment (Centos 7):
*org.postgresql.util.PSQLException: ERROR: could not open file
"\\shared\network\path\to\csv\test.csv" for reading: No such file or
directory*

I have granted READ access to EVERYONE on the CSV folder on the network
path as depicted in the attached image.

Could someone point me to the reason for the failure in Linux?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Frank Millman
Pavel Stehule wrote:

2017-09-12 9:36 GMT+02:00 Frank Millman :

  Pavel Stehule wrote:
  > 
  > 2017-09-12 8:45 GMT+02:00 Frank Millman :

I am using 9.4.4 on Fedora 22.

I am experimenting with optimising a SQL statement. One version uses 4 LEFT 
JOIN’s and a 5-way CASE statement in the body. The second moves the filtering 
into the JOIN section, and I end up with 16 LEFT JOIN’s and no CASE statements.

My test involves selecting a single row. Both versions work. The first 
version takes 0.06 seconds. The second takes 0.23 seconds. On further 
experimentation, the time for the second one seems to be taken in setting up the 
joins, because if I omit selecting anything from the joined tables, it still 
takes 0.23 seconds.
  > 
  > please send result of explain analyze
  > 
  > you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher 
number 14 maybe 16
  > 
  I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.

  I have attached files containing my SQL command, and the results of EXPLAIN 
ANALYSE

> please use https://explain.depesz.com/ for both plans (slow, fast)


Here are the results - 

sql_slow - https://explain.depesz.com/s/9vn3

sql_fast - https://explain.depesz.com/s/oW0F

Frank
 

Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Pavel Stehule
2017-09-12 9:36 GMT+02:00 Frank Millman :

> Pavel Stehule wrote:
> >
> > 2017-09-12 8:45 GMT+02:00 Frank Millman :
>
>> I am using 9.4.4 on Fedora 22.
>>
>> I am experimenting with optimising a SQL statement. One version uses 4
>> LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the
>> filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no
>> CASE statements.
>>
>> My test involves selecting a single row. Both versions work. The first
>> version takes 0.06 seconds. The second takes 0.23 seconds. On further
>> experimentation, the time for the second one seems to be taken in setting up
>> the joins, because if I omit selecting anything from the joined tables, it
>> still takes 0.23 seconds.
>>
> >
> > please send result of explain analyze
> >
> > you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher
> number 14 maybe 16
> >
> I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.
>
> I have attached files containing my SQL command, and the results of
> EXPLAIN ANALYSE
>

please use https://explain.depesz.com/ for both plans (slow, fast)

Regards

Pavel


>
> Frank
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
>


Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Frank Millman
Ron Johnson wrote:
> 
On 09/12/2017 01:45 AM, Frank Millman wrote:

Hi all

I am using 9.4.4 on Fedora 22.

I am experimenting with optimising a SQL statement. One version uses 4 LEFT 
JOIN’s and a 5-way CASE statement in the body. The second moves the filtering 
into the JOIN section, and I end up with 16 LEFT JOIN’s and no CASE statements.

My test involves selecting a single row. Both versions work. The first version 
takes 0.06 seconds. The second takes 0.23 seconds. On further experimentation, 
the time for the second one seems to be taken in setting up the joins, because if 
I omit selecting anything from the joined tables, it still takes 0.23 seconds.

Exactly the same exercise on Sql Server results in 0.06 seconds for both 
versions.

I realise that, if I was selecting a large number of rows, 0.23 seconds is 
trivial and the overall result could be different. But still, it seems odd.

> Just out of curiosity, what if you PREPARE the statement, and take multiple 
> timings?


My setup is a bit complicated, as I am executing the commands from a python 
program on Windows against a PostgreSQL database on Fedora, so I hope I did it 
correctly! With that caveat, the results are that the time was reduced from 0.23 
seconds to 0.22 seconds. The difference is consistent, so I think it is real.

Frank

Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Frank Millman
Pavel Stehule wrote:
> 
> 2017-09-12 8:45 GMT+02:00 Frank Millman :

  I am using 9.4.4 on Fedora 22.

  I am experimenting with optimising a SQL statement. One version uses 4 LEFT 
JOIN’s and a 5-way CASE statement in the body. The second moves the filtering 
into the JOIN section, and I end up with 16 LEFT JOIN’s and no CASE statements.

  My test involves selecting a single row. Both versions work. The first 
version takes 0.06 seconds. The second takes 0.23 seconds. On further 
experimentation, the time for the second one seems to be taken in setting up the 
joins, because if I omit selecting anything from the joined tables, it still 
takes 0.23 seconds.
> 
> please send result of explain analyze
> 
> you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher number 
> 14 maybe 16
> 
I tried increasing FROM_COLLAPSE_LIMIT, but it made no difference.

I have attached files containing my SQL command, and the results of EXPLAIN 
ANALYSE

Frank
 SELECT a.row_id,
(SELECT (b.inv_net_tot_cus + b.inv_tax_tot_cus + b.crn_net_tot_cus + 
b.crn_tax_tot_cus
+ b.jnl_tot_cus + b.rec_net_tot_cus + b.rec_dsc_tot_cus + b.rec_dtx_tot_cus)
FROM ccc.ar_cust_totals b
WHERE b.cust_row_id = a.row_id AND b.tran_date <= '2015-09-30'
ORDER BY b.tran_date DESC LIMIT 1)
as "balance_cust AS [DECTEXT]",

COALESCE(SUM(due_curr.amount_cust), 0) +
COALESCE(SUM(alloc_curr.alloc_cust + alloc_curr.disc_cust), 0)
AS "bal_cust_curr AS [DECTEXT]",
COALESCE(SUM(due_30.amount_cust), 0) +
COALESCE(SUM(alloc_30.alloc_cust + alloc_30.disc_cust), 0)
AS "bal_cust_30 AS [DECTEXT]",
COALESCE(SUM(due_60.amount_cust), 0) +
COALESCE(SUM(alloc_60.alloc_cust + alloc_60.disc_cust), 0)
AS "bal_cust_60 AS [DECTEXT]",
COALESCE(SUM(due_90.amount_cust), 0) +
COALESCE(SUM(alloc_90.alloc_cust + alloc_90.disc_cust), 0)
AS "bal_cust_90 AS [DECTEXT]",
COALESCE(SUM(due_120.amount_cust), 0) +
COALESCE(SUM(alloc_120.alloc_cust + alloc_120.disc_cust), 0)
AS "bal_cust_120 AS [DECTEXT]"

FROM ccc.ar_customers a
LEFT JOIN ccc.ar_trans trans ON trans.cust_row_id = a.row_id

LEFT JOIN ccc.ar_trans_due due_curr ON
due_curr.tran_type = trans.tran_type AND due_curr.tran_row_id = 
trans.tran_row_id
AND trans.tran_date > '2015-08-31'
LEFT JOIN ccc.ar_trans_alloc alloc_curr ON
alloc_curr.due_row_id = due_curr.row_id
LEFT JOIN ccc.ar_trans trans_alloc_curr ON
trans_alloc_curr.tran_type = alloc_curr.tran_type AND
trans_alloc_curr.tran_row_id = alloc_curr.tran_row_id AND
trans_alloc_curr.tran_date <= '2015-09-30'

LEFT JOIN ccc.ar_trans_due due_30 ON
due_30.tran_type = trans.tran_type AND due_30.tran_row_id = 
trans.tran_row_id
AND trans.tran_date > '2015-07-31' AND trans.tran_date <= '2015-08-31'
LEFT JOIN ccc.ar_trans_alloc alloc_30 ON
alloc_30.due_row_id = due_30.row_id
LEFT JOIN ccc.ar_trans trans_alloc_30 ON
trans_alloc_30.tran_type = alloc_30.tran_type AND
trans_alloc_30.tran_row_id = alloc_30.tran_row_id AND
trans_alloc_30.tran_date <= '2015-09-30'

LEFT JOIN ccc.ar_trans_due due_60 ON
due_60.tran_type = trans.tran_type AND due_60.tran_row_id = 
trans.tran_row_id
AND trans.tran_date > '2015-06-30' AND trans.tran_date <= '2015-07-31'
LEFT JOIN ccc.ar_trans_alloc alloc_60 ON
alloc_60.due_row_id = due_60.row_id
LEFT JOIN ccc.ar_trans trans_alloc_60 ON
trans_alloc_60.tran_type = alloc_60.tran_type AND
trans_alloc_60.tran_row_id = alloc_60.tran_row_id AND
trans_alloc_60.tran_date <= '2015-09-30'

LEFT JOIN ccc.ar_trans_due due_90 ON
due_90.tran_type = trans.tran_type AND due_90.tran_row_id = 
trans.tran_row_id
AND trans.tran_date > '2015-05-31' AND trans.tran_date <= '2015-06-30'
LEFT JOIN ccc.ar_trans_alloc alloc_90 ON
alloc_90.due_row_id = due_90.row_id
LEFT JOIN ccc.ar_trans trans_alloc_90 ON
trans_alloc_90.tran_type = alloc_90.tran_type AND
trans_alloc_90.tran_row_id = alloc_90.tran_row_id AND
trans_alloc_90.tran_date <= '2015-09-30'

LEFT JOIN ccc.ar_trans_due due_120 ON
due_120.tran_type = trans.tran_type AND due_120.tran_row_id = 
trans.tran_row_id
AND trans.tran_date <= '2015-05-31'
LEFT JOIN ccc.ar_trans_alloc alloc_120 ON
alloc_120.due_row_id = due_120.row_id
LEFT JOIN ccc.ar_trans trans_alloc_120 ON
trans_alloc_120.tran_type = alloc_120.tran_type AND
trans_alloc_120.tran_row_id = alloc_120.tran_row_id AND
trans_alloc_120.tran_date <= '2015-09-30'

WHERE a.ledger_row_id = ? AND a.party_row_id = ? AND a.deleted_id = ?
GROUP BY a.row_id
('HashAggregate  (cost=11123.83..11211.17 rows=1 width=234) (actual 
time=299.781..299.782 rows=1 loops=1)',)
('  Group Key: a.row_id',)
('  ->  Hash Right Join  (cost=9833.36..11122.59 rows=31 width=234) (actual 
time=295.962..296.496 rows=1801 loops=1)',)
('Hash Cond: (("*SELECT* 1_5".tran_type = (alloc_120.tran_type)::text) 
AND ("*SELECT* 1_5".tran_row_id = alloc_120.tran_row_id))',)
('->  Append  

Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Ron Johnson

On 09/12/2017 01:45 AM, Frank Millman wrote:

Hi all
I am using 9.4.4 on Fedora 22.
I am experimenting with optimising a SQL statement. One version uses 4 
LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the 
filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no 
CASE statements.
My test involves selecting a single row. Both versions work. The first 
version takes 0.06 seconds. The second takes 0.23 seconds. On further 
experimentation, the time for the second one seems to be taken in setting up 
the joins, because if I omit selecting anything from the joined tables, it 
still takes 0.23 seconds.
Exactly the same exercise on Sql Server results in 0.06 seconds for both 
versions.
I realise that, if I was selecting a large number of rows, 0.23 seconds is 
trivial and the overall result could be different. But still, it seems odd.


Just out of curiosity, what if you PREPARE the statement, and take multiple 
timings?
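
Something along these lines, as a rough sketch (the parameter types are guesses and the
statement body is elided; substitute the full 16-join query with $1, $2 and $3 in place
of the ? placeholders):

    \timing on
    PREPARE aged_bal (int, int, int) AS
        SELECT a.row_id, ... ;           -- the full 16-join SELECT goes here
    EXECUTE aged_bal(1, 1, 0);           -- run it several times and compare the timings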


--
World Peace Through Nuclear Pacification



Re: [GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Pavel Stehule
hi

2017-09-12 8:45 GMT+02:00 Frank Millman :

> Hi all
>
> I am using 9.4.4 on Fedora 22.
>
> I am experimenting with optimising a SQL statement. One version uses 4
> LEFT JOIN’s and a 5-way CASE statement in the body. The second moves the
> filtering into the JOIN section, and I end up with 16 LEFT JOIN’s and no
> CASE statements.
>
> My test involves selecting a single row. Both versions work. The first
> version takes 0.06 seconds. The second takes 0.23 seconds. On further
> experimentation, the time for the second one seems to be taken in setting up
> the joins, because if I omit selecting anything from the joined tables, it
> still takes 0.23 seconds.
>
> Exactly the same exercise on Sql Server results in 0.06 seconds for both
> versions.
>
> I realise that, if I was selecting a large number of rows, 0.23 seconds is
> trivial and the overall result could be different. But still, it seems odd.
>
> Is this normal, or should I investigate further?
>

please send result of explain analyze

you can experimentally try increase FROM_COLLAPSE_LIMIT to some higher
number 14 maybe 16

regards



>
> Frank Millman
>
>


[GENERAL] Joining 16 tables seems slow

2017-09-12 Thread Frank Millman
Hi all

I am using 9.4.4 on Fedora 22.

I am experimenting with optimising a SQL statement. One version uses 4 LEFT 
JOIN’s and a 5-way CASE statement in the body. The second moves the filtering 
into the JOIN section, and I end up with 16 LEFT JOIN’s and no CASE statements.

My test involves selecting a single row. Both versions work. The first version 
takes 0.06 seconds. The second takes 0.23 seconds. On further experimentation, 
the time for the second one seems to taken in setting up the joins, because if 
I omit selecting anything from the joined tables, it still takes 0.23 seconds.

Exactly the same exercise on Sql Server results in 0.06 seconds for both 
versions.

I realise that, if I was selecting a large number of rows, 0.23 seconds is 
trivial and the overall result could be different. But still, it seems odd.

Is this normal, or should I investigate further?

Frank Millman