Re: SQL pretty printer?

2019-10-28 Thread DiasCosta

Hello,

For some formatting I use PSPad.exe (https://www.pspad.com/).
It has at least some of the features you are looking for.
It is free and can format SQL and other languages.

Dias Costa




On 28-10-2019 17:06, Reid Thompson wrote:

On Sun, 2019-10-27 at 16:54 -0700, Adrian Klaver wrote:

https://sourceforge.net/projects/pgformatter/


this ^^^
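For a rough idea of the kind of rewriting these formatters automate, here is a toy sketch in Python (not pgFormatter's actual algorithm; the clause list and function name are invented for illustration). It only uppercases a few clause keywords and starts each on a new line, ignoring the string literals, comments and subqueries a real formatter must handle:

```python
import re

# A small, illustrative subset of clause keywords that should start a new line.
CLAUSES = ["select", "from", "where", "group by", "order by", "having"]

def toy_format(sql: str) -> str:
    """Very naive SQL pretty printer: newline before each major clause,
    clause keywords uppercased. Ignores literals, comments and nesting."""
    out = " ".join(sql.split())  # collapse all whitespace first
    for kw in CLAUSES:
        # Allow any whitespace inside two-word keywords like "group by".
        pattern = re.compile(r"\b" + kw.replace(" ", r"\s+") + r"\b", re.IGNORECASE)
        out = pattern.sub("\n" + kw.upper(), out)
    lines = [line.strip() for line in out.splitlines() if line.strip()]
    return "\n".join(lines)

print(toy_format("select a, b from t where a > 1 order by b"))
```

A real tool such as pg_format additionally tracks nesting depth and quoting state, which is exactly why using one beats hand-rolling this.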




--
J. M. Dias Costa
Telef. 214026948

If you forward this message to third parties, please:
1. Delete my e-mail address and my name.
2. Also delete your friends' addresses before distributing.
3. Address your OWN recipients as blind carbon copy (Bcc).
By doing so, you will hinder the spread of "viruses", "spam" and
"banners" and help preserve everyone's privacy.
Thank you.

Note: Any words in my writing that do not comply with the ill-fated
spelling agreement are not there by chance.



Re: Importing from CSV, auto creating table?

2019-08-21 Thread DiasCosta

Hi Stan,

I have loaded into the database (PostgreSQL 9.6), monthly and for several
years, over 50 000 000 CSV records
using a version of pgfutter that I compiled (with some changes, if I
remember correctly), and the tables
were created during the loading process from the column titles.

Dias Costa




On 21-08-2019 22:15, stan wrote:

I have a situation where we need to import data, as an interim measure,
from spreadsheets.

I have read up on \copy and COPY, but I do not see that either of these can
use the header from a CSV file to define a new table. Am I missing
something?

Also downloaded something called pgfutter, which I thought would do this,
but have not had any success with it. After I thought I had figured out
the arguments, it just seems to hang forever.
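For the archives: COPY and \copy indeed have no option to create the table from the CSV header; tools like pgfutter derive an all-text table from the first line. A minimal sketch of that idea (the function and table names here are invented for illustration, and every column is simply typed text):

```python
import csv
import io
import re

def create_table_from_header(table: str, csv_text: str) -> str:
    """Build a CREATE TABLE statement from a CSV header row,
    sanitizing column names and typing every column as text."""
    header = next(csv.reader(io.StringIO(csv_text)))
    cols = []
    for name in header:
        # Lowercase, replace runs of non-word characters with "_".
        ident = re.sub(r"\W+", "_", name.strip().lower()).strip("_")
        cols.append(f'    "{ident}" text')
    return f'CREATE TABLE "{table}" (\n' + ",\n".join(cols) + "\n);"

sample = "Order ID,Customer Name,Amount (EUR)\n1,Acme,9.99\n"
print(create_table_from_header("import_demo", sample))
```

After creating the table this way, the data itself loads with plain `\copy import_demo FROM file.csv CSV HEADER`, and columns can be ALTERed to proper types later.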








Re: Recommended front ends?

2019-08-08 Thread DiasCosta

Hi,
After many years of using Oracle Forms and Oracle Reports with Oracle
DB, I have now been using Lazarus and "Code Typhon" for many years.
Both work with the Free Pascal Compiler, and both are open source and free.
Both have a very good IDE, the code produced is Pascal (very easily
readable), and they connect directly to many DBMSs including PostgreSQL,
Oracle, MSSQL, SQLite, etc.


You can find information here: https://en.wikipedia.org/wiki/Lazarus_(IDE)
and here: https://www.pilotlogic.com/sitejoom/

Also here: https://www.getlazarus.org/learn/tutorials/intro/
"If you are haven't used Lazarus  recently 
then this tutorial is for you. In it we give users a broad overview of 
Lazarus
and some of its key features. We look at the type of applications you 
can create with Lazarus, and show you the core

concepts to desktop application development it makes so very easy.
Highlights include the two way design process, events handlers, testing 
and debugging, and deployment.
A brief gallery of applications 
 I've personally created 
with Lazarus is included at the end, and I honestly believe it's
the best tool in the world for developing platform agnostic desktop 
applications. Like the video says, give Lazarus a try 
."



Dias Costa









On 08-08-2019 20:26, Basques, Bob (CI-StPaul) wrote:

All,

Not Web driven, but . . . . we've had some success with using LibreOffice (Calc)
as a frontend. Fairly easy to build forms, etc. Only limited experience so
far, but I was able to build domain lists from SQL calls for form pulldown
lists, etc.

bobb



On Aug 8, 2019, at 2:10 PM, Rich Shepard  wrote:



On Thu, 8 Aug 2019, Tim Clarke wrote:


We tried Django without any pleasant results.

Tim,

That's unexpected and too bad.


I'd also caution using MS Access, we're desperate to get away from it.
Sharing code has challenges and it is horribly aggressive with caching
unless you use un-bound forms and write all the CRUD interface code
yourself.

Slightly off-topic, but I've not seen anything good about Access. My
understanding is it's a flat-file database intended as a user front end to
Microsoft's relational database product. My experiences with those who use
it have been painful.

Just yesterday I downloaded a very large database of fisheries data from a
federal agency and have started translating it to postgres using
mdbtools. There's no schema provided, only 32 pages of table columns and
types without descriptions of the column names. No primary keys, no foreign
keys, and only 66 tables were found in the .mdb file, while all table names
starting with s through z were missing. There are also many tables
that hold redundant data which should not exist, as the contents are easily
generated by SQL queries. It will take me a while to make it a working
relational database.

Rich









Re: Cloning schemas

2018-07-09 Thread DiasCosta

Hi Melvin,

I followed your recommendation and it did not work.

Since I was in a rush, I tried to find where the function crashed, and
commenting out, in the function, the creation of comments for indexes, as
follows, was sufficient to make the function work.

    IF FOUND
    THEN
--      EXECUTE 'COMMENT ON INDEX ' || quote_ident(dest_schema) || '.' || quote_ident(object)
--        || ' IS ''' || v_def || ''';';

Problem is, I don't usually comment indexes.

Thanks, and keep up the good work,
Dias Costa









On 09-07-2018 22:50, Melvin Davidson wrote:


Adrian,
The code that CREATES the TABLE is

EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE ' || quote_ident(source_schema) || '.' || quote_ident(object)
    || ' INCLUDING ALL)';

The schema names are supposed to be changed!

This function HAS been tested and does WORK. Please do not muddle the 
problem without testing yourself.



On Mon, Jul 9, 2018 at 2:56 PM, Adrian Klaver wrote:


On 07/09/2018 09:49 AM, Melvin Davidson wrote:



On Mon, Jul 9, 2018 at 12:32 PM, DiasCosta wrote:

    Hi Melvin,

    Trying to run the 9.6 clone_schema on a different schema and I get the
    following error:

    NOTICE:  search path = {public,pg_catalog}
    CONTEXT:  PL/pgSQL function
clone_schema(text,text,boolean) line 79
    at RAISE
    ERROR:  relation
"bilhetica_logic_schema.idx_unq_cod_operador_aml"
    does not exist
    CONTEXT:  SQL statement "COMMENT ON INDEX
    bilhetica_logic_schema.idx_unq_cod_operador_aml IS 'garante
    unicidade do Cod_Operador_AML';"
    PL/pgSQL function clone_schema(text,text,boolean) line 401
at EXECUTE
    ** Error **

    ERROR: relation
"bilhetica_logic_schema.idx_unq_cod_operador_aml"
    does not exist
    SQL state: 42P01
    Context: SQL statement "COMMENT ON INDEX
    bilhetica_logic_schema.idx_unq_cod_operador_aml IS 'garante
    unicidade do Cod_Operador_AML';"
    PL/pgSQL function clone_schema(text,text,boolean) line 401
at EXECUTE


    Can you help me, please?
    Thanks in advance
    Dias Costa


Dias
 > NOTICE:  search path = {public,pg_catalog}
 >ERROR: relation
"bilhetica_logic_schema.idx_unq_cod_operador_aml" does not exist

This is not related to the clone_schema function. It looks
like you may have corruption in your system catalogs.
Try reindexing your system catalogs.


Or from clone_schema.sql:

EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE ' ||
quote_ident(source_schema) || '.' || quote_ident(object)
        || ' INCLUDING ALL)';

https://www.postgresql.org/docs/10/static/sql-createtable.html

"LIKE source_table [ like_option ... ]

...

Indexes, PRIMARY KEY, UNIQUE, and EXCLUDE constraints on the
original table will be created on the new table only if INCLUDING
INDEXES is specified. <*/Names for the new indexes and constraints
are chosen according to the default rules, regardless of how the
originals were named. (This behavior avoids possible
duplicate-name failures for the new indexes.)/*>

...
INCLUDING ALL is an abbreviated form of INCLUDING COMMENTS
INCLUDING CONSTRAINTS INCLUDING DEFAULTS INCLUDING IDENTITY
INCLUDING INDEXES INCLUDING STATISTICS INCLUDING STORAGE.

..."

See the tagged (<*/ /*>) part above. I could see where the indexes
in the new schema have new names while the index comments in the
old schema refer to the old name. Then you would get the error the
OP showed.


REINDEX VERBOSE SYSTEM  ;
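Given that index-renaming behaviour, one defensive variant (a sketch, not the actual clone_schema patch; the helper names are invented) is to emit the COMMENT ON INDEX only when the index really exists in the destination schema, using to_regclass(), which returns NULL for missing relations. Building that guarded statement as a string, in the same spirit as the function's EXECUTE calls:

```python
def q(lit: str) -> str:
    """Quote a string as a SQL literal (naive: doubles single quotes)."""
    return "'" + lit.replace("'", "''") + "'"

def guarded_index_comment(dest_schema: str, index: str, comment: str) -> str:
    """Build a DO block that comments an index only if it exists in the
    destination schema; to_regclass() returns NULL for missing relations."""
    return (
        "DO $$\nBEGIN\n"
        f"  IF to_regclass({q(dest_schema + '.' + index)}) IS NOT NULL THEN\n"
        "    EXECUTE format('COMMENT ON INDEX %I.%I IS %L',\n"
        f"                   {q(dest_schema)}, {q(index)}, {q(comment)});\n"
        "  END IF;\nEND\n$$;"
    )

print(guarded_index_comment("bilhetica_logic_schema",
                            "idx_unq_cod_operador_aml",
                            "garante unicidade do Cod_Operador_AML"))
```

With such a guard, a comment referring to a pre-rename index name is silently skipped instead of aborting the whole clone, which matches the OP's workaround of commenting the statement out.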



--
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!


-- 
Adrian Klaver

adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>




--
Melvin Davidson
Maj. Database & Exploration Specialist
Universe Exploration Command – UXC
Employment by invitation only!




Re: Cloning schemas

2018-07-09 Thread DiasCosta

Hi Melvin,

Trying to run the 9.6 clone_schema on a different schema and I get the
following error:


NOTICE:  search path = {public,pg_catalog}
CONTEXT:  PL/pgSQL function clone_schema(text,text,boolean) line 79 at RAISE
ERROR:  relation "bilhetica_logic_schema.idx_unq_cod_operador_aml" does not exist
CONTEXT:  SQL statement "COMMENT ON INDEX bilhetica_logic_schema.idx_unq_cod_operador_aml IS 'garante unicidade do Cod_Operador_AML';"
PL/pgSQL function clone_schema(text,text,boolean) line 401 at EXECUTE
** Error **

ERROR: relation "bilhetica_logic_schema.idx_unq_cod_operador_aml" does not exist
SQL state: 42P01
Context: SQL statement "COMMENT ON INDEX bilhetica_logic_schema.idx_unq_cod_operador_aml IS 'garante unicidade do Cod_Operador_AML';"
PL/pgSQL function clone_schema(text,text,boolean) line 401 at EXECUTE


Can you help me, please?
Thanks in advance
Dias Costa




On 07-07-2018 09:32, DiasCosta wrote:

Hi Melvin,

Thank you.

Dias Costa

On 04-07-2018 23:38, Melvin Davidson wrote:



On Wed, Jul 4, 2018 at 2:48 PM, DiasCosta wrote:


Hi Melvin,

I'm new to clone_schema.
Can I use it on PostgreSQL 9.6?

TIA
DCosta


> Can I use it on PostgreSQL 9.6?

Yes, but because the developer(s) once again monkeyed with the system
catalogs, there are now two versions: one for 10 and one for 9.6 and
below. I've attached the 9.6 version for you.










Re: Cloning schemas

2018-07-07 Thread DiasCosta

Hi Melvin,

Thank you.

Dias Costa

On 04-07-2018 23:38, Melvin Davidson wrote:



On Wed, Jul 4, 2018 at 2:48 PM, DiasCosta wrote:


Hi Melvin,

I'm new to clone_schema.
Can I use it on PostgreSQL 9.6?

TIA
DCosta


> Can I use it on PostgreSQL 9.6?

Yes, but because the developer(s) once again monkeyed with the system
catalogs, there are now two versions: one for 10 and one for 9.6 and
below. I've attached the 9.6 version for you.







Re: Cloning schemas

2018-07-04 Thread DiasCosta

Hi Melvin,

I'm new to clone_schema.
Can I use it on PostgreSQL 9.6?

TIA
DCosta



On 03-07-2018 14:34, Melvin Davidson wrote:




ERROR: cannot insert into column "TopoToVersion_ID" DETAIL:
Column "TopoToVersion_ID" is an identity column defined as
GENERATED ALWAYS. HINT: Use OVERRIDING SYSTEM VALUE to override.
CONTEXT: SQL statement "INSERT INTO version4.t_topotoversions
SELECT * FROM public.t_topotoversions;" PL/pgSQL function
clone_schema(text,text,boolean), line 212 at EXECUTE


Lukasz,
That ERROR is occurring because you chose to copy the data
(include_recs = TRUE).
I have added OVERRIDING SYSTEM VALUE to the insert statement and
attached the revised version.
CAUTION: The values of TopoToVersion_ID and any other IDENTITY columns
may be changed.
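For anyone hitting the same identity-column error outside clone_schema, the fix is exactly what the HINT says: add OVERRIDING SYSTEM VALUE (available since PostgreSQL 10, where identity columns were introduced). A small sketch of the two statement shapes, with table names taken from the error above (the helper function is invented for illustration):

```python
def copy_rows_sql(src: str, dest: str, override_identity: bool) -> str:
    """INSERT ... SELECT between identically-shaped tables. A GENERATED
    ALWAYS identity column rejects explicit values unless OVERRIDING
    SYSTEM VALUE is given (which then keeps the source identity values)."""
    override = "OVERRIDING SYSTEM VALUE " if override_identity else ""
    return f"INSERT INTO {dest} {override}SELECT * FROM {src};"

# Fails on a GENERATED ALWAYS identity column:
print(copy_rows_sql("public.t_topotoversions", "version4.t_topotoversions", False))
# Works, at the cost of carrying the source identity values over:
print(copy_rows_sql("public.t_topotoversions", "version4.t_topotoversions", True))
```

The caution above follows from the second form: the destination sequence is not advanced, so later default-generated values can collide with the copied ones unless the sequence is resynchronized.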






Re: SQL Query never ending...

2018-06-22 Thread DiasCosta

Hello Tom, good evening.

Thank you for your prompt answer and help.

It was enough to ANALYZE the temp tables to achieve a magnificent 
result:  37 seconds.


I'm Portuguese.
It's a shame you live so far from me, so I can't leave a box of bottles of
Porto wine at your door.


I also thank David and Fabrízio for their willingness to help me.
They deserve a bottle of Porto wine each.

Best regards
Dias Costa





On 21-06-2018 22:08, Tom Lane wrote:

DiasCosta  writes:

This is the query plan for only 19684 rows.

I think you're getting a bad query plan, mostly as a result of two
factors:

* Poor row estimates.  It looks like the bottom-most misestimations
are on temp tables, which makes me wonder whether you've ANALYZEd
those tables.  Your application has to do that explicitly after
populating the tables; auto-analyze can't help on temp tables.

* Too many tables --- I count 33 table scans in this query.  You
might get better planning results by raising join_collapse_limit
and/or from_collapse_limit, but it will come at a cost in planning
time, and in any case a query with this many tables is never likely
to be cheap.  You might want to think about restructuring your schema
to not need so many tables, or maybe just do some hand optimization
of the query to eliminate unnecessary joins.  (It looks to me like
at least some of the joins to tt_eotb1 might be unnecessary?)

regards, tom lane
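The two knobs Tom mentions can be sketched as a small helper that emits the statements to run after populating the temp tables (the function name and values are illustrative; join_collapse_limit and from_collapse_limit both default to 8):

```python
def tuning_statements(temp_tables, collapse_limit=12):
    """Emit ANALYZE for each freshly populated temp table (auto-analyze
    never touches temp tables) plus session-level collapse limits that
    let the planner consider reordering larger join lists."""
    stmts = [f'ANALYZE "{t}";' for t in temp_tables]
    stmts.append(f"SET join_collapse_limit = {collapse_limit};")
    stmts.append(f"SET from_collapse_limit = {collapse_limit};")
    return stmts

for s in tuning_statements(["tt_eotb1", "tt_km_por_etapa_2017"]):
    print(s)
```

Raising the collapse limits trades planning time for plan quality, which is why Tom frames it as a cost rather than a free win.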







Re: SQL Query never ending...

2018-06-21 Thread DiasCosta

Hello David and Fabrízio,


The names of the tables and indexes differ from the original script. 
Only the names.


This is the query plan for only 19684 rows.
I have another query running for around 3 rows, but it takes an 
eternity to finish.

If it finishes in acceptable time I'll make it available to you.

As in previous attempts at optimization, I submitted this execution
plan to https://explain.depesz.com but now, as then, I was not able to
extract information to decide what to do or to choose a path leading to
optimization.


The environment conditions are exactly the same as described in my 
previous message.



Thank you in advance for your attention and help.
They will be greatly appreciated.

Dias Costa
--
***
"QUERY PLAN"
"Nested Loop  (cost=3336.02..3353.51 rows=1 width=528) (actual 
time=867.213..6452673.494 rows=19684 loops=1)"
"  Output: at_2.operador, at_2.num_serie, at_2.titulo, 
n2v_1.titulo_base, (count(*)), tt_km_por_etapa_2017.etapa_km, 
(((count(*)))::numeric * tt_km_por_etapa_2017.etapa_km), 
((sumcount(*)))::numeric * k.etapa_km))) / 
(tt_eotb1.eotb_etapas)::numeric), tr (...)"
"  Join Filter: ((at_2.operador = at_5.operador) AND 
(tt_eotb1.titulo_base = n2v_4.titulo_base))"

"  Rows Removed by Join Filter: 157472"
"  Buffers: local hit=418076253"
"  ->  Nested Loop  (cost=2658.99..2673.26 rows=1 width=782) (actual 
time=744.047..6272023.716 rows=19684 loops=1)"
"    Output: at_2.operador, at_2.num_serie, at_2.titulo, 
n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, tt_eotb1.eotb_etapas, tt_eotb1.operador, 
tt_eotb1.titulo_b (...)"
"    Join Filter: ((at_2.operador = at_4.operador) AND 
(tt_eotb1.titulo_base = n2v_3.titulo_base))"

"    Rows Removed by Join Filter: 157472"
"    Buffers: local hit=418064955"
"    ->  Nested Loop  (cost=1329.63..1337.01 rows=1 width=686) 
(actual time=369.637..1236.464 rows=19684 loops=1)"
"  Output: at_2.operador, at_2.num_serie, at_2.titulo, 
n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, tt_eotb1.eotb_etapas, tt_eotb1.operador, 
tt_eotb1.ti (...)"

"  Buffers: local hit=558900"
"  ->  Nested Loop  (cost=1329.49..1336.74 rows=1 width=614) 
(actual time=369.631..1126.109 rows=19684 loops=1)"
"    Output: at_2.operador, at_2.num_serie, at_2.titulo, 
n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, (sumcount(*)))::numeric * 
k.etapa_km))), a (...)"

"    Buffers: local hit=519532"
"    ->  Nested Loop  (cost=1329.36..1336.47 rows=1 
width=542) (actual time=369.625..1015.389 rows=19684 loops=1)"
"  Output: at_2.operador, at_2.num_serie, 
at_2.titulo, n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, (sumcount(*)))::numeric * k.etapa_km 
(...)"

"  Buffers: local hit=480164"
"  ->  Nested Loop (cost=1329.22..1336.20 rows=1 
width=470) (actual time=369.614..895.215 rows=19684 loops=1)"
"    Output: at_2.operador, at_2.num_serie, 
at_2.titulo, n2v_1.titulo_base, (count(*)), at_2.ticket_code, 
at_2.ticket_operator_code, tt_km_por_etapa_2017.etapa_km, 
tt_km_por_etapa_2017.operador, (sumcount(*)))::numeric * k.et (...)"

"    Buffers: local hit=440796"
"    ->  Merge Join (cost=1328.95..1333.92 
rows=1 width=358) (actual time=369.586..503.283 rows=19684 loops=1)"
"  Output: at_2.operador, 
at_2.num_serie, at_2.titulo, n2v_1.titulo_base, (count(*)), 
at_2.ticket_code, at_2.ticket_operator_code, n2v_1.cod_titulo, 
(sumcount(*)))::numeric * k.etapa_km))), at_1.operador, n2v.titulo_b 
(...)"
"  Merge Cond: (at_1.operador = 
at_2.operador)"
"  Join Filter: (n2v_1.titulo_base = 
n2v.titulo_base)"

"  Rows Removed by Join Filter: 157472"
"  Buffers: local hit=22563"
"  ->  GroupAggregate 
(cost=672.74..674.98 rows=1 width=96) (actual time=119.552..128.686 
rows=9 loops=1)"
"    Output: at_1.operador, 
n2v.titulo_base, sumcount(*)))::numeric * k.etapa_km))"
"    Group Key: at_1.operador, 
n2v.titulo_base"

"    Buffers: local hit=11295"
"    ->  Merge 

SQL Query never ending...

2018-06-20 Thread DiasCosta

Hi all,
can someone help me?

I don't know if this is the correct list for this matter. If I'm wrong, 
please bear with me and point me in right direction.



I have a large query which, even after more than 24 hours running,
doesn't come to an end.
However, I can see, using system tools, that the postgres process keeps
reading and writing bytes, although slowly, and the "affinity"-dedicated
cores are at 6.25%.


I tried https://www.depesz.com/ but the query was rejected.

-- Scenario:
-- OS: Windows Server 2012 R2 Standard
-- RAM: 128GB
-- CPU: Intel Xeon E5-2640 v4 @2.40GH (2 processors) (16 cores)
-- PostgreSQL 9.6
-- Database  category: OLAP (Tables used in the query are not dynamic 
and the statistics for all of them are up to date)

--
***
-- The query you'll see later below uses the following tables:
--
-- This facts table has around 150 rows.
CREATE TEMPORARY TABLE  analise_transac_mes
( ctrl_cod_valida integer NOT NULL,
  cod_controlo integer NOT NULL,
  causa character varying(300),
  Fornecedor text,
  ordem integer,
  num_serie text,
  titulo text,
  tipo_produto text,
  data_dia_hora text,
  cod_viatura text,
  cod_licenca text,
  val_inval_excl character varying(12),
  mes character varying(25),
  tipo_licenca character varying(25),
  data_controlo timestamp without time zone NOT NULL DEFAULT now(),
  utilizador character varying(30) DEFAULT "current_user"(),
  senha_entrega character varying(12),
  senha_fornecedor_entrega character varying(12),
  proc_extra character(3),
  quando date DEFAULT now(),
  cod_cliente character varying(15),
  tem_venda character varying(6),
  CONSTRAINT pk_analise_transac_mes PRIMARY KEY (ctrl_cod_valida));
--
SELECT pg_prewarm('TT_Analise_Transac_Oper_Mes');
--
-- With following indexes:
CREATE INDEX IF NOT EXISTS TT_Analise_Transac_Oper_Mes_Fornecedor ON 
TT_Analise_Transac_Oper_Mes(Fornecedor);
CREATE INDEX IF NOT EXISTS TT_Analise_Transac_Oper_Mes_encomenda ON 
TT_Analise_Transac_Oper_Mes(encomenda);
CREATE INDEX IF NOT EXISTS TT_Analise_Transac_Oper_Mes_val_inval_excl ON 
TT_Analise_Transac_Oper_Mes(val_inval_excl);
CREATE INDEX IF NOT EXISTS TT_Analise_Transac_Oper_Mes_proc_extra ON 
TT_Analise_Transac_Oper_Mes(proc_extra);
CREATE INDEX IF NOT EXISTS TT_Analise_Transac_Oper_Mes_senha_entrega ON 
TT_Analise_Transac_Oper_Mes(senha_entrega);
CREATE INDEX IF NOT EXISTS 
TT_Analise_Transac_Oper_Mes_senha_fornecedor_entrega ON 
TT_Analise_Transac_Oper_Mes(senha_fornecedor_entrega);

--
--

-- Following table has  1800 rows
--
CREATE TABLE bilhetica_base_2017_01.lst_km_por_etapa_2017
(
  cod_encomenda text NOT NULL,
  encomenda character varying(150),
  encomenda_amtl character varying(150),
  Fornecedor character varying(60) NOT NULL,
  etapa_km numeric(13,9),
  mes_ref character varying(15) NOT NULL,
  utilizador character varying(30) DEFAULT "current_user"(),
  data timestamp without time zone DEFAULT now(),
  notas character varying,
  caracter character(1),
  senha_entrega character varying(12),
  senha_fornecedor_entrega character varying(12),
  CONSTRAINT pk_lst_km_por_etapa_2017 PRIMARY KEY (cod_encomenda, 
Fornecedor, mes_ref),
  CONSTRAINT dv_lst_km_por_etapa_caracter CHECK (caracter = ANY 
(ARRAY[NULL::bpchar, 'P'::bpchar, 'D'::bpchar]))

);

SELECT pg_prewarm('TT_Km_por_Etapa_2017');
--
--With following indexes:
CREATE INDEX i_Km_por_Etapa_Fornecedor  ON TT_Km_por_Etapa_2017 
(Fornecedor);

CREATE INDEX i_Km_por_Etapa_Mes_Ref  ON TT_Km_por_Etapa_2017 (Mes_Ref);
CREATE INDEX i_Km_por_Etapa_Cod_encomenda  ON TT_Km_por_Etapa_2017  
(Cod_encomenda);

CREATE INDEX i_Km_por_Etapa_encomenda  ON TT_Km_por_Etapa_2017 (encomenda);
--
--

 -- This table has  90 rows
CREATE TABLE bilhetica_base_2017_01.encomendas_n2v_2017
(
  senha_fornecedor_entrega text,
  senha_entrega text,
  cod_encomenda character varying(12) NOT NULL,
  desig_encomenda_aml text,
  desig_encomenda_polis_antigo text,
  desig_encomenda_polis_novo text,
  encomenda_base text,
  modalidade text,
  tipo_aml text,
  tipo_polis text,
  tarifa text,
  ultima_actualizacao text,
  ano_corrente text,
  pvp_ano_corrente numeric(7,3),
  desconto_ano_corrente numeric(6,2),
  mes_pvp_ano_corrente character varying(13),
  pvp_ref_ano_corrente numeric(7,3),
  mes_pvp_ref_ano_corrente character varying(13),
  siit_4_18_e_sub23 character varying(25),
  entra_nas_contas character(1),
  etapa_km_julho numeric(6,3),
  mes_ref character varying(13) NOT NULL,
  versao text,
  notas_aml text,
  notas_polis text,
  notas text,
  CONSTRAINT pk_encomendas_n2v_2017 PRIMARY KEY (cod_encomenda, mes_ref));
--
SELECT pg_prewarm('TT_encomendas_N2V_2017');
--
-- With following indexes;
CREATE INDEX i_encomendas_n2v_2017_senha_entrega  ON 
TT_encomendas_N2v_2017