Re: ON CONFLICT DO NOTHING ignored on bulk insert

2021-05-25 Thread Hellmuth Vargas
https://www.postgresql.org/docs/11/sql-insert.html#SQL-ON-CONFLICT

"The optional ON CONFLICT clause specifies an alternative action to raising
a *unique violation or exclusion constraint violation* error."
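
As an illustration (a rough sketch with made-up table names, not the
original poster's schema): ON CONFLICT can only skip rows that would raise a
unique or exclusion constraint violation, so foreign key mismatches have to
be filtered out before the INSERT instead:

-- duplicate keys are skipped by ON CONFLICT
INSERT INTO sales (id, product_id, qty)
SELECT id, product_id, qty
FROM sales_load
ON CONFLICT (id) DO NOTHING;

-- foreign key mismatches are not covered; filter them out explicitly
INSERT INTO sales (id, product_id, qty)
SELECT l.id, l.product_id, l.qty
FROM sales_load AS l
WHERE EXISTS (SELECT 1 FROM products AS p WHERE p.product_id = l.product_id);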

On Tue, May 25, 2021 at 03:29, Geoff Winkless (pgsqlad...@geoff.dj) wrote:

> On Tue, 25 May 2021 at 08:18, Andrus  wrote:
>
>> Looking for a method to do bulk insert ignoring product foreign key
>> mismatches.
>>
> ON CONFLICT only works with unique constraints, it's not designed for what
> you're trying to use it for.
>
> Geoff
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


PostgreSQL log query's result size

2021-04-07 Thread Hellmuth Vargas
Excuse me in advance for my English, I'm improving :-)

Could you tell me whether, in addition to configuring the log to report the
duration of slow queries, it could also report the size of the result data?
This matters especially for queries that return a lot of information. My
request is that, by setting some parameter, the size of the result set could
be logged, similar to the parameter log_min_duration_statement.
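
For reference, a minimal sketch of how duration-based logging is configured
today, plus an approximation of what I am asking for using pg_stat_statements
(if that extension is installed), since it tracks the rows returned per
statement:

-- existing behaviour: log statements slower than 500 ms
ALTER SYSTEM SET log_min_duration_statement = '500ms';
SELECT pg_reload_conf();

-- approximate the "result size" idea with pg_stat_statements
SELECT query, calls, rows
FROM pg_stat_statements
ORDER BY rows DESC
LIMIT 10;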

Thank you very much!

-- 
Cordialmente,

Ing. Hellmuth I. Vargas S


Re: count(*) vs count(id)

2021-02-02 Thread Hellmuth Vargas
Hello list

My English is not very good, so I hope that through the examples you will
understand what I am trying to show.

-- Recreate the query that is supposedly wrong

select calendar.entry, count(*)
from generate_series('2020-08-20'::date, '2020-09-15'::date, '1
day'::interval) as calendar(entry)
left join (values  (1,'2020-08-28 09:44:11'::timestamp),
  (2,'2020-08-28 10:44:11'::timestamp),
  (3,'2020-08-29 11:44:11'::timestamp),
  (4,'2020-09-01 02:44:11'::timestamp),
  (5,'2020-09-02 03:44:11'::timestamp),
  (6,'2020-09-02 04:44:11'::timestamp),
  (7,'2020-09-03 05:44:11'::timestamp),
  (8,'2020-09-04 06:44:11'::timestamp),
  (10,'2020-09-04 07:44:11'::timestamp)) as call_records(id,timestamp)
on calendar.entry = call_records.timestamp::date
group by calendar.entry;

-- wrong???

 entry  | count
+---
 2020-08-20 00:00:00-05 | 1
 2020-08-21 00:00:00-05 | 1
 2020-08-22 00:00:00-05 | 1
 2020-08-23 00:00:00-05 | 1
 2020-08-24 00:00:00-05 | 1
 2020-08-25 00:00:00-05 | 1
 2020-08-26 00:00:00-05 | 1
 2020-08-27 00:00:00-05 | 1
 2020-08-28 00:00:00-05 | 2
 2020-08-29 00:00:00-05 | 1
 2020-08-30 00:00:00-05 | 1
 2020-08-31 00:00:00-05 | 1
 2020-09-01 00:00:00-05 | 1
 2020-09-02 00:00:00-05 | 2
 2020-09-03 00:00:00-05 | 1
 2020-09-04 00:00:00-05 | 2
 2020-09-05 00:00:00-05 | 1
 2020-09-06 00:00:00-05 | 1
 2020-09-07 00:00:00-05 | 1
 2020-09-08 00:00:00-05 | 1
 2020-09-09 00:00:00-05 | 1
 2020-09-10 00:00:00-05 | 1
 2020-09-11 00:00:00-05 | 1
 2020-09-12 00:00:00-05 | 1
 2020-09-13 00:00:00-05 | 1
 2020-09-14 00:00:00-05 | 1
 2020-09-15 00:00:00-05 | 1

-- In the count I will only consider the records of call_records

select calendar.entry, count(call_records.*)
from generate_series('2020-08-20'::date, '2020-09-15'::date, '1
day'::interval) as calendar(entry)
left join (values  (1,'2020-08-28 09:44:11'::timestamp),
  (2,'2020-08-28 10:44:11'::timestamp),
  (3,'2020-08-29 11:44:11'::timestamp),
  (4,'2020-09-01 02:44:11'::timestamp),
  (5,'2020-09-02 03:44:11'::timestamp),
  (6,'2020-09-02 04:44:11'::timestamp),
  (7,'2020-09-03 05:44:11'::timestamp),
  (8,'2020-09-04 06:44:11'::timestamp),
  (10,'2020-09-04 07:44:11'::timestamp)) as call_records(id,timestamp)
on calendar.entry = call_records.timestamp::date
group by calendar.entry;


--- perfect
 entry  | count
+---
 2020-08-20 00:00:00-05 | 0
 2020-08-21 00:00:00-05 | 0
 2020-08-22 00:00:00-05 | 0
 2020-08-23 00:00:00-05 | 0
 2020-08-24 00:00:00-05 | 0
 2020-08-25 00:00:00-05 | 0
 2020-08-26 00:00:00-05 | 0
 2020-08-27 00:00:00-05 | 0
 2020-08-28 00:00:00-05 | 2
 2020-08-29 00:00:00-05 | 1
 2020-08-30 00:00:00-05 | 0
 2020-08-31 00:00:00-05 | 0
 2020-09-01 00:00:00-05 | 1
 2020-09-02 00:00:00-05 | 2
 2020-09-03 00:00:00-05 | 1
 2020-09-04 00:00:00-05 | 2
 2020-09-05 00:00:00-05 | 0
 2020-09-06 00:00:00-05 | 0
 2020-09-07 00:00:00-05 | 0
 2020-09-08 00:00:00-05 | 0
 2020-09-09 00:00:00-05 | 0
 2020-09-10 00:00:00-05 | 0
 2020-09-11 00:00:00-05 | 0
 2020-09-12 00:00:00-05 | 0
 2020-09-13 00:00:00-05 | 0
 2020-09-14 00:00:00-05 | 0
 2020-09-15 00:00:00-05 | 0

When using *, I am bringing back every row of the join between both tables,
and when counting them there is of course a row even for dates such as
2020-08-30 and 2020-08-31, where the call_records fields are NULL:

select *
from generate_series('2020-08-20'::date, '2020-09-15'::date, '1
day'::interval) as calendar(entry)
left join (values  (1,'2020-08-28 09:44:11'::timestamp),
  (2,'2020-08-28 10:44:11'::timestamp),
  (3,'2020-08-29 11:44:11'::timestamp),
  (4,'2020-09-01 02:44:11'::timestamp),
  (5,'2020-09-02 03:44:11'::timestamp),
  (6,'2020-09-02 04:44:11'::timestamp),
  (7,'2020-09-03 05:44:11'::timestamp),
  (8,'2020-09-04 06:44:11'::timestamp),
  (10,'2020-09-04 07:44:11'::timestamp)) as call_records(id,timestamp)
on calendar.entry = call_records.timestamp::date


 entry  | id |  timestamp
++-
 2020-08-20 00:00:00-05 ||
 2020-08-21 00:00:00-05 ||
 2020-08-22 00:00:00-05 ||
 2020-08-23 00:00:00-05 ||
 2020-08-24 00:00:00-05 ||
 2020-08-25 00:00:00-05 ||
 2020-08-26 00:00:00-05 ||
 2020-08-27 00:00:00-05 ||
 2020-08-28 00:00:00-05 |  1 | 2020-08-28 09:44:11
 2020-08-28 00:00:00-05 |  2 | 2020-08-28 10:44:11
 2020-08-29 00:00:00-05 |  3 | 2020-08-29 11:44:11
 2020-08-30 00:00:00-05 ||
 2020-08-31 00:00:00-05 ||
 2020-09-01 00:00:00-05 |  4 | 2020-09-01 02:44:11
 2020-09-02 00:00:00-05 |  5 | 2020-09-02 03:44:11
 2020-09-02 00:00:00-05 |  6 | 2020-09-02 04:44:11
 2020-09-03 00:00:00-05 |  7 | 2020-09-03 05:44:11
 2020-09-04 00:00:00-05 |  8 | 2020-09-04 06:44:11
 2020-09-04 00:00:00-05 | 10 | 2020-09-04 07:44:11
 2020-09-05 00:00:00-05 ||
 2020-09-06 00:00:00-05 ||
 2020-09-07 00:00:00-05 ||
 2020-09-08 00:00:00-05 ||
 2020-09-09 00:00:00-05 ||
 2020-09-10 00:00:00-05 ||
 2020-09-11 00:00:00-05 ||
 2020-09-12 00:00:00-05 ||
 2020-09-13 00:00:00-05 ||
 2020-09-14 00:00:00-05 ||
 2020-09-15 00:00:00-05 ||
(30 rows)
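
For completeness: counting a specific column of the right-hand table behaves
like count(call_records.*), because count(expression) ignores NULLs. A
shortened sketch of the same query (only two sample rows kept):

select calendar.entry, count(call_records.id)
from generate_series('2020-08-20'::date, '2020-09-15'::date, '1 day'::interval) as calendar(entry)
left join (values (1,'2020-08-28 09:44:11'::timestamp),
  (2,'2020-08-28 10:44:11'::timestamp)) as call_records(id, call_time)
on calendar.entry = call_records.call_time::date
group by calendar.entry
order by calendar.entry;
-- unmatched days report 0, matched days report the number of calls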

Re: Partitioned Table conversion to Non-Partition table in postgresql open source

2020-12-28 Thread Hellmuth Vargas
Hi

I had already read about some procedures to do that


https://www.enterprisedb.com/blog/postgres-partition-upgrade-your-partitioning-inheritance-declarative
https://www.2ndquadrant.com/en/blog/how-to-migrate-from-inheritance-based-partitioning-to-declarative-partitioning-in-postgresql/
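
The general shape of such a migration, as a rough sketch with hypothetical
names (a parent measurements with a child measurements_2020_12, partitioned
by logdate, on the PostgreSQL 12 cluster) and not copied from those posts, is
to re-attach the existing child tables to a new declaratively partitioned
parent:

begin;

-- detach the child from the old inheritance parent
alter table measurements_2020_12 no inherit measurements;
alter table measurements rename to measurements_old;

-- create the new declarative parent with the same columns
create table measurements (like measurements_old including all)
  partition by range (logdate);

-- reuse the existing data by attaching the old child as a partition
alter table measurements
  attach partition measurements_2020_12
  for values from ('2020-12-01') to ('2021-01-01');

commit;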


On Mon, Dec 28, 2020 at 04:38, Brajendra Pratap Singh (singh.bpratap...@gmail.com) wrote:

> Hi,
>
> Currently we are running on open-postgresql 9.6 version with some
> partitioned Table which we need to convert from Inheritance partitioning to
> declarative partitioning during migration to 12.3 open-postgresql version.
>
> Is there any safest and faster way to fulfill this requirement, Please
> suggest?
>
> Thanks and regards,
> Singh
>
>
>

-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: Detecting which columns a query will modify in a function called by a trigger

2020-03-03 Thread Hellmuth Vargas
Hello

By means of JSON you can detect what changed between NEW and OLD.

example:


select b.*
from (values (now(), 123456, 'pepito perez', false)) as old(dato1, dato2, dato3, dato4),
     json_each_text(row_to_json(old)) as b(text1, text2)
except
select b.*
from (values (now(), 98765, 'pepito perez', true)) as new(dato1, dato2, dato3, dato4),
     json_each_text(row_to_json(new)) as b(text1, text2)
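
Inside a trigger function the same idea looks roughly like this (a sketch;
the table named in the commented CREATE TRIGGER is hypothetical):

CREATE OR REPLACE FUNCTION audit_changed_columns() RETURNS trigger AS $$
DECLARE
    changed text[];
BEGIN
    -- collect the column names whose value differs between OLD and NEW
    SELECT array_agg(o.key)
      INTO changed
      FROM json_each_text(row_to_json(OLD)) AS o
      JOIN json_each_text(row_to_json(NEW)) AS n USING (key)
     WHERE o.value IS DISTINCT FROM n.value;

    RAISE NOTICE 'changed columns: %', changed;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- CREATE TRIGGER detect_changes BEFORE UPDATE ON some_table
--     FOR EACH ROW EXECUTE PROCEDURE audit_changed_columns();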



On Tue, Mar 3, 2020 at 13:48, Adrian Klaver (adrian.kla...@aklaver.com) wrote:

> On 3/3/20 9:42 AM, stan wrote:
> > On Mon, Mar 02, 2020 at 01:44:52PM -0700, David G. Johnston wrote:
> >> On Mon, Mar 2, 2020 at 1:28 PM stan  wrote:
> >>
> >>> Envision a table with a good many columns. This table represents the
> "life
> >>> history" of a part on a project. Some of the columns need to be
> >>> created/modified by the engineer. Some need to be created/modified by
> the
> >>> purchasing agent, some of the columns need to be created by the
> receiving
> >>> department, some of the columns need to be created/modified by the
> accounts
> >>> payable department.
> >>>
> >>> Make sense?
> >>>
> >>
> >> On a theory level this design is insufficiently normalized.  The fact
> that
> >> you are having issues and challenges working with it suggests you should
> >> seriously consider a different design, one that exhibits better
> >> normalization properties.
> >>
> >> Alternatively you might consider just removing direct access to the
> table
> >> and provide views and/or functions that can use normal permission
> grants.
> >> Add some check constraints to the table to describe and enforce the
> >> inter-field relationships that are present.
> >>
> >
> > Thanks for the input.
> >
> > I have, indeed created views that restrict the subset of columns that a
> > particular job function needs access to to the appropriate ones, but
> > unfortunately to the best of my knowledge, I cannot INSERT/UPDATE a table
> > through a view.
> >
> > Am I suffering from a lack of knowledge here?
>
> Yes:
>
> https://www.postgresql.org/docs/12/sql-createview.html
>
> Updatable Views
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
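
To illustrate the updatable-views point with a minimal sketch (hypothetical
table, view and role names, not the original poster's schema):

CREATE TABLE part_history (
    part_id        integer PRIMARY KEY,
    vendor         text,
    purchase_price numeric,
    received_qty   integer
);

-- a simple single-table view is automatically updatable (PostgreSQL 9.3+)
CREATE VIEW purchasing_view AS
    SELECT part_id, vendor, purchase_price
    FROM part_history;

-- assuming a role purchasing_agent exists
GRANT SELECT, UPDATE ON purchasing_view TO purchasing_agent;

-- the purchasing agent can then modify only the columns exposed by the view
UPDATE purchasing_view SET purchase_price = 12.50 WHERE part_id = 42;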

-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.


Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Hellmuth Vargas
Thank you Stephen 

On Tue, Feb 19, 2019 at 10:05, Stephen Frost (sfr...@snowman.net) wrote:

> Greetings,
>
> * Hellmuth Vargas (hiv...@gmail.com) wrote:
> > But could you do the following procedure?:
>
> > pg_upgrade of the master
> rsync with a hot standby
>
> The above should be alright provided both the primary and the standby
> are down and the instructions in the pg_upgrade docs are followed.
>
> start the master
> start the hot standby
>
> So, start both the primary and the replica?  That part should be fine by
> itself.
>
> > stop hot standby and rsync the other hot standby with the migrated hot
> > standby?
>
> At some later point, shut down the replica completely, then do an rsync
> from that replica to the other replica and build its hard-link tree that
> way, and update anything that's changed while the 'migrated' replica was
> online?  I don't see any obvious issue with that as the result should
> mean that the two replicas are identical from PG's perspective from that
> point moving forward.
>
> Ultimately, it really depends on your specific environment though, of
> course.  It also might not be a bad idea to do a regular backup of the
> upgraded primary and then restore that to the second replica, just to
> make sure you have that whole process working and to test out your
> restore process.
>
> Thanks!
>
> Stephen
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-19 Thread Hellmuth Vargas
Hi

But could you do the following procedure?
pg_upgrade of the master
rsync with a hot standby
start the master
start the hot standby
stop the hot standby and rsync the other hot standby with the migrated hot
standby?

On Tue, Feb 19, 2019 at 06:12, Stephen Frost (sfr...@snowman.net) wrote:

> Greetings,
>
> * Martín Fernández (fmarti...@gmail.com) wrote:
> > After reading the pg_upgrade documentation multiple times, it seems that
> after running pg_upgrade on the primary instance, we can't start it until
> we run rsync from the primary to the standby. I'm understanding this from
> the following section in the pg_upgrade manual page.
> >
> > ```
> > You will not be running pg_upgrade on the standby servers, but rather
> rsync on the
> >primary. Do not start any servers yet.
> > ```
> >
> > I'm understanding the `any` as primary and standbys.
>
> Yes, that's correct, you shouldn't start up anything yet.
>
> > On the other hand, we've been doing tests that start
> the primary instance as soon as pg_upgrade is done. This tests have worked
> perfectly fine so far. We make the rsync call with the primary instance
> running and the standby can start later on after rsync is done and we copy
> the new configuration files.
>
> This is like taking an online backup of the primary without actually
> doing pg_start_backup / pg_stop_backup and following the protocol for
> that, meaning that the replica will start up without a backup_label and
> will think it's at whatever point in the WAL stream that the pg_control
> file says its at as of whenever the rsync copies that file.
>
> That is NOT SAFE and it's a sure way to end up with corruption.
>
> The rsync while everything is down should be pretty fast, unless you
> have unlogged tables that are big (in which case, you should truncate
> them before shutting down the primary) or temporary tables left around
> (which you should clean up) or just generally other things that a
> replica doesn't normally have.
>
> If you can't have any downtime during this process then, imv, the answer
> is to build out a new replica that will essentially be a 'throw-away',
> move all the read load over to it and then go through the documented
> pg_upgrade process with the primary and the other replicas, then flip
> the traffic back to the primary + original replicas and then you can
> either throw away the replica that was kept online or rebuild it using
> the traditional methods of pg_basebackup (or for a larger system, you
> could use pgbackrest which can run in parallel and is much, much faster
> than pg_basebackup).
>
> > If what we are doing is wrong, we need to run `rsync` before starting
> the primary instance, that would mean that the primary and the standby are
> not usable if pg10 doesn't start correctly in the primary right ?
>
> This is another reason why it's good to have an independent replica, as
> it can be a fail-safe if things go completely south (you can just
> promote it and have it be the primary and then rebuild replicas using
> the regular backup+restore method and figure out what went wrong with
> the pg10 migration).
>
> Thanks!
>
> Stephen
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: PG Upgrade with hardlinks, when to start/stop master and replicas

2019-02-18 Thread Hellmuth Vargas
Hi Martin

Well, if one follows the sequence in the PostgreSQL documentation page

https://www.postgresql.org/docs/10/pgupgrade.html

Usage
(...)
7. Stop both servers
(...)
10. Upgrade Streaming Replication and Log-Shipping standby servers
(...)
12. Start the new server

*The new server can now be safely started, and then any rsync'ed standby
servers.*


I would think one has to wait until all the rsyncs of the replicas finish
before starting the master... But I also have an additional question: if I
have 2 or more replicas, could I first synchronize the master with one
replica (rsync), start the master, and then use that synchronized replica to
synchronize the remaining replicas while the master is already running?

On Mon, Feb 18, 2019 at 15:21, Laurenz Albe (laurenz.a...@cybertec.at) wrote:

> Martín Fernández wrote:
> > After reading the pg_upgrade documentation multiple times, it seems that
> after running pg_upgrade on the primary instance, we can't start it until
> we run rsync from the primary to the standby. I'm understanding this from
> the following section in the pg_upgrade manual page.
> >
> > You will not be running pg_upgrade on the standby servers, but rather
> rsync on the
> >primary. Do not start any servers yet.
>
> Immediately following, you can read:
>
>  If you did not use link mode, do not have or do not want to use rsync, or
> want an easier
>  solution, skip the instructions in this section and simply recreate the
> standby servers
>  once pg_upgrade completes and the new primary is running.
>
> So this is not compulsory, it's just an efficient method to quickly get
> the standby
> server updated.
>
> There is nothing wrong with rebuilding the standby later.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>
>

-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: How duplicate values inserted into the primary key column of table and how to fix it

2019-01-24 Thread Hellmuth Vargas
Hi
Could you provide the definition of the table, including the definition of
the primary key?
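
In the meantime, a quick sketch (with hypothetical table and column names) to
confirm whether the primary key column really holds duplicates:

SELECT id, count(*)
FROM transactional_table
GROUP BY id
HAVING count(*) > 1;

-- if it does, the unique index behind the primary key is probably corrupted
-- and will need to be rebuilt after the duplicate rows are cleaned up, e.g.:
-- REINDEX INDEX transactional_table_pkey;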

On Thu, Jan 24, 2019 at 09:18, Raghavendra Rao J S V (raghavendra...@gmail.com) wrote:

> Hi All,
>
>
> We are using Postgresql 9.2 database.
>
>
>
> In one of the transactional table, I have observed duplicate values for
> the primary key columns.
>
>
>
> Please guide me how is it possible and how to fix this kind of issue.
>
> --
> Regards,
> Raghavendra Rao J S V
>
>

-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.


Re: Idle query that's not ""?

2018-11-06 Thread Hellmuth Vargas
Hi
The documentation describes the data in this field:

https://www.postgresql.org/docs/10/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW

"Text of this backend's most recent query. If state is active this field
shows the currently executing query. In all other states, it shows the last
query that was executed. "
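
So the idle rows are expected. If you only want the statements that are
currently running, filter on state; a minimal sketch based on your query:

select pid, state, now() - query_start as query_age, left(query, 40) as query
from pg_stat_activity
where datname <> 'postgres'
  and state = 'active'
order by query_start;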


On Tue, Nov 6, 2018 at 15:46, Ron (ronljohnso...@gmail.com) wrote:

> Hi,
>
> v9.6.6
>
> Why do these idle queries (pids 8357, 11260 and 11355) "remember" the
> queries they ran instead of having the text ""?
>
> postgres=# select pid,
>xact_start as txn_start,
>to_char(EXTRACT(epoch FROM now() - query_start),
> '999,999.') as query_age_secs,
>state,
>cast (query as char(20))
> from pg_stat_activity
> where datname <> 'postgres'
> order by query_start;
> ;
>
>  pid  |   txn_start   | query_age_secs | state  |
> query
>
> --+---+++--
> 26538 | 2018-11-06 14:40:55.053842-05 |3,451.9853  | active | SELECT
> to_char(b.pr
> 27497 | 2018-11-06 14:59:26.946854-05 |2,340.5871  | active | SELECT
> to_char(b.pr
> 29110 | 2018-11-06 14:59:50.479934-05 |2,317.1725  | active | SELECT
> to_char(b.pr
> * 8357 |   |1,324.1356  | idle   | SELECT
> CAST(c.ecid*
>  9016 | 2018-11-06 15:34:51.535476-05 |  215.8391  | active | SELECT
> to_char(b.pr
>  9810 | 2018-11-06 15:35:00.483292-05 |  206.7676  | active | SELECT
> to_char(b.pr
> *11260 |   |  190.0814  | idle   | WITH
> configs AS (SEL*
> 12800 | 2018-11-06 15:35:49.540631-05 |  157.9880  | active | SELECT
> to_char(b.pr
> *11355 |   |   42.9772  | idle   | SELECT
> CASE WHEN typ*
> 22618 | 2018-11-06 15:38:02.317146-05 |   25.3219  | active | SELECT
> to_char(b.pr
> 23176 | 2018-11-06 15:38:12.90985-05  |   14.7325  | active | SELECT
> to_char(b.pr
> 23566 | 2018-11-06 15:38:28.802919-05 | .6116  | active | select
> tms.TRIGGER.T
> 23588 | 2018-11-06 15:38:29.207373-05 | .2089  | active | select
> cds.IMAGE_RPS
> 23590 | 2018-11-06 15:38:29.233724-05 | .1814  | active | select
> tms.TRIGGER.T
> 23584 | 2018-11-06 15:38:29.046393-05 | .0442  | active | select
> tms.MARK_SENS
> 23595 | 2018-11-06 15:38:29.403969-05 | .0001  | active | select
> JOB_STEP.JOB_
> (16 rows)
>
>
> --
> Angular momentum makes the world go 'round.
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: Postgres 10, slave not catching up with master

2018-10-24 Thread Hellmuth Vargas
Hi
On Wed, Oct 24, 2018 at 00:39, Boris Sagadin (bo...@infosplet.com) wrote:

> Yes, times are all identical, set to UTC, ntpd is used.
>
>  log_delay
> ---
>  15.788175
>
> This is delay at this moment, but we graph replication delay and it's
> fluctuating between 0 and 30s.
>


But the fluctuation is between 0 and 30s! Fortunately it is not 4 hours.
Apart from the WAL compression topic, I think you should also check the network.



> Before I turned off wal compression, lag was much bigger (0 to up to 8
> minutes). We have lots of tables (40k) and many upserts.
>
>
> Boris
>
> On Tue, Oct 23, 2018 at 8:24 PM, Hellmuth Vargas  wrote:
>
>> Hi
>>
>> Both servers are configured with the same date, time and time
>> configuration?
>>
>> El mar., 23 de oct. de 2018 a la(s) 13:16, Hellmuth Vargas (
>> hiv...@gmail.com) escribió:
>>
>>> Hi
>>>
>>> which result you get from the following query:
>>>
>>> SELECT CASE WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn()
>>> THEN 0
>>> ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())
>>> END AS log_delay;
>>>
>>> source:
>>>
>>> https://severalnines.com/blog/postgresql-streaming-replication-deep-dive
>>>
>>> El mar., 23 de oct. de 2018 a la(s) 11:28, Boris Sagadin (
>>> bo...@infosplet.com) escribió:
>>>
>>>> Nothing special, just:
>>>>
>>>> standby_mode = 'on'
>>>> primary_conninfo = 'host=...  user=repmgr application_name=nodex'
>>>> recovery_target_timeline = 'latest'
>>>>
>>>>
>>>> Boris
>>>>
>>>> On Tue, Oct 23, 2018 at 3:10 PM, Hellmuth Vargas 
>>>> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> can share recovery.conf file settings??
>>>>>
>>>>> El mar., 23 de oct. de 2018 a la(s) 00:28, Boris Sagadin (
>>>>> bo...@infosplet.com) escribió:
>>>>>
>>>>>> Yes, turning wal_compression off improves things. Slave that was
>>>>>> mentioned unfortunately lagged too much before this setting was applied 
>>>>>> and
>>>>>> was turned off. However the remaining slave lags less now, although still
>>>>>> occasionally up to a few minutes. I think single threadedness of recovery
>>>>>> is a big slowdown for write heavy databases. Maybe an option to increase
>>>>>> wal_size beyond 16MB in v11 will help.
>>>>>>
>>>>>> In the meantime we'll solve this by splitting the DB to 2 or 3
>>>>>> clusters or maybe trying out some sharding solution like Citus.
>>>>>>
>>>>>>
>>>>>> Boris
>>>>>>
>>>>>> On Sun, Oct 21, 2018 at 9:06 AM, Boris Sagadin 
>>>>>> wrote:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I have a database running on i3.8xlarge (256GB RAM, 32 CPU cores, 4x
>>>>>>> 1.9TB NVMe drive) AWS instance with about 5TB of disk space occupied, 
>>>>>>> ext4,
>>>>>>> Ubuntu 16.04.
>>>>>>>
>>>>>>> Multi-tenant DB with about 4 tables, insert heavy.
>>>>>>>
>>>>>>> I started a new slave with identical HW specs, SR. DB started
>>>>>>> syncing from master, which took about 4 hours, then it started applying 
>>>>>>> the
>>>>>>> WALs. However, it seems it can't catch up. Delay is still around 3 hours
>>>>>>> (measured with now() - pg_last_xact_replay_timestamp()), even a day 
>>>>>>> later.
>>>>>>> It goes a few 100s up and down, but it seems to float around 3h mark.
>>>>>>>
>>>>>>> Disk IO is low at about 10%, measured with iostat, no connected
>>>>>>> clients, recovery process is at around 90% CPU single core usage.
>>>>>>>
>>>>>>> Tried tuning the various parameters, but with no avail. Only thing I
>>>>>>> found suspicious is stracing the recovery process constantly produces 
>>>>>>> many
>>>>>>> errors such as:
>>>>>>>
>>>>>>> lseek(428, 0, SEEK_END) = 780124160
>>>>>>> lseek(30, 0, SEEK_END)  =

Re: Postgres 10, slave not catching up with master

2018-10-23 Thread Hellmuth Vargas
Hi

Are both servers configured with the same date, time and time zone configuration?
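
For example, a minimal check to run on both nodes and compare:

select now(), current_setting('TimeZone');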

On Tue, Oct 23, 2018 at 13:16, Hellmuth Vargas (hiv...@gmail.com) wrote:

> Hi
>
> which result you get from the following query:
>
> SELECT CASE WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn()
> THEN 0
> ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())
> END AS log_delay;
>
> source:
>
> https://severalnines.com/blog/postgresql-streaming-replication-deep-dive
>
> El mar., 23 de oct. de 2018 a la(s) 11:28, Boris Sagadin (
> bo...@infosplet.com) escribió:
>
>> Nothing special, just:
>>
>> standby_mode = 'on'
>> primary_conninfo = 'host=...  user=repmgr application_name=nodex'
>> recovery_target_timeline = 'latest'
>>
>>
>> Boris
>>
>> On Tue, Oct 23, 2018 at 3:10 PM, Hellmuth Vargas 
>> wrote:
>>
>>> Hi
>>>
>>> can share recovery.conf file settings??
>>>
>>> El mar., 23 de oct. de 2018 a la(s) 00:28, Boris Sagadin (
>>> bo...@infosplet.com) escribió:
>>>
>>>> Yes, turning wal_compression off improves things. Slave that was
>>>> mentioned unfortunately lagged too much before this setting was applied and
>>>> was turned off. However the remaining slave lags less now, although still
>>>> occasionally up to a few minutes. I think single threadedness of recovery
>>>> is a big slowdown for write heavy databases. Maybe an option to increase
>>>> wal_size beyond 16MB in v11 will help.
>>>>
>>>> In the meantime we'll solve this by splitting the DB to 2 or 3 clusters
>>>> or maybe trying out some sharding solution like Citus.
>>>>
>>>>
>>>> Boris
>>>>
>>>> On Sun, Oct 21, 2018 at 9:06 AM, Boris Sagadin 
>>>> wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I have a database running on i3.8xlarge (256GB RAM, 32 CPU cores, 4x
>>>>> 1.9TB NVMe drive) AWS instance with about 5TB of disk space occupied, 
>>>>> ext4,
>>>>> Ubuntu 16.04.
>>>>>
>>>>> Multi-tenant DB with about 4 tables, insert heavy.
>>>>>
>>>>> I started a new slave with identical HW specs, SR. DB started syncing
>>>>> from master, which took about 4 hours, then it started applying the WALs.
>>>>> However, it seems it can't catch up. Delay is still around 3 hours
>>>>> (measured with now() - pg_last_xact_replay_timestamp()), even a day later.
>>>>> It goes a few 100s up and down, but it seems to float around 3h mark.
>>>>>
>>>>> Disk IO is low at about 10%, measured with iostat, no connected
>>>>> clients, recovery process is at around 90% CPU single core usage.
>>>>>
>>>>> Tried tuning the various parameters, but with no avail. Only thing I
>>>>> found suspicious is stracing the recovery process constantly produces many
>>>>> errors such as:
>>>>>
>>>>> lseek(428, 0, SEEK_END) = 780124160
>>>>> lseek(30, 0, SEEK_END)  = 212992
>>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>>> temporarily unavailable)
>>>>> lseek(680, 0, SEEK_END) = 493117440
>>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>>> temporarily unavailable)
>>>>> lseek(774, 0, SEEK_END) = 583368704
>>>>>
>>>>> ...[snip]...
>>>>>
>>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>>> temporarily unavailable)
>>>>> lseek(774, 0, SEEK_END) = 583368704
>>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>>> temporarily unavailable)
>>>>> lseek(277, 0, SEEK_END) = 502882304
>>>>> lseek(6, 516096, SEEK_SET)  = 516096
>>>>> read(6,
>>>>> "\227\320\5\0\1\0\0\0\0\340\7\246\26\274\0\0\315\0\0\0\0\0\0\0}\0178\5&/\260\r"...,
>>>>> 8192) = 8192
>>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>>> temporarily unavailable)
>>>>> lseek(735, 0, SEEK_END) = 272809984
>>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>>> temporarily unavailable)
>>>>> l

Re: Postgres 10, slave not catching up with master

2018-10-23 Thread Hellmuth Vargas
Hi

which result you get from the following query:

SELECT CASE WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn()
THEN 0
ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())
END AS log_delay;

source:

https://severalnines.com/blog/postgresql-streaming-replication-deep-dive

On Tue, Oct 23, 2018 at 11:28, Boris Sagadin (bo...@infosplet.com) wrote:

> Nothing special, just:
>
> standby_mode = 'on'
> primary_conninfo = 'host=...  user=repmgr application_name=nodex'
> recovery_target_timeline = 'latest'
>
>
> Boris
>
> On Tue, Oct 23, 2018 at 3:10 PM, Hellmuth Vargas  wrote:
>
>> Hi
>>
>> can share recovery.conf file settings??
>>
>> El mar., 23 de oct. de 2018 a la(s) 00:28, Boris Sagadin (
>> bo...@infosplet.com) escribió:
>>
>>> Yes, turning wal_compression off improves things. Slave that was
>>> mentioned unfortunately lagged too much before this setting was applied and
>>> was turned off. However the remaining slave lags less now, although still
>>> occasionally up to a few minutes. I think single threadedness of recovery
>>> is a big slowdown for write heavy databases. Maybe an option to increase
>>> wal_size beyond 16MB in v11 will help.
>>>
>>> In the meantime we'll solve this by splitting the DB to 2 or 3 clusters
>>> or maybe trying out some sharding solution like Citus.
>>>
>>>
>>> Boris
>>>
>>> On Sun, Oct 21, 2018 at 9:06 AM, Boris Sagadin 
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> I have a database running on i3.8xlarge (256GB RAM, 32 CPU cores, 4x
>>>> 1.9TB NVMe drive) AWS instance with about 5TB of disk space occupied, ext4,
>>>> Ubuntu 16.04.
>>>>
>>>> Multi-tenant DB with about 4 tables, insert heavy.
>>>>
>>>> I started a new slave with identical HW specs, SR. DB started syncing
>>>> from master, which took about 4 hours, then it started applying the WALs.
>>>> However, it seems it can't catch up. Delay is still around 3 hours
>>>> (measured with now() - pg_last_xact_replay_timestamp()), even a day later.
>>>> It goes a few 100s up and down, but it seems to float around 3h mark.
>>>>
>>>> Disk IO is low at about 10%, measured with iostat, no connected
>>>> clients, recovery process is at around 90% CPU single core usage.
>>>>
>>>> Tried tuning the various parameters, but with no avail. Only thing I
>>>> found suspicious is stracing the recovery process constantly produces many
>>>> errors such as:
>>>>
>>>> lseek(428, 0, SEEK_END) = 780124160
>>>> lseek(30, 0, SEEK_END)  = 212992
>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>> temporarily unavailable)
>>>> lseek(680, 0, SEEK_END) = 493117440
>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>> temporarily unavailable)
>>>> lseek(774, 0, SEEK_END) = 583368704
>>>>
>>>> ...[snip]...
>>>>
>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>> temporarily unavailable)
>>>> lseek(774, 0, SEEK_END) = 583368704
>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>> temporarily unavailable)
>>>> lseek(277, 0, SEEK_END) = 502882304
>>>> lseek(6, 516096, SEEK_SET)  = 516096
>>>> read(6,
>>>> "\227\320\5\0\1\0\0\0\0\340\7\246\26\274\0\0\315\0\0\0\0\0\0\0}\0178\5&/\260\r"...,
>>>> 8192) = 8192
>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>> temporarily unavailable)
>>>> lseek(735, 0, SEEK_END) = 272809984
>>>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource
>>>> temporarily unavailable)
>>>> lseek(277, 0, SEEK_END) = 502882304
>>>>
>>>> ls -l fd/9
>>>> lr-x-- 1 postgres postgres 64 Oct 21 06:21 fd/9 -> pipe:[46358]
>>>>
>>>>
>>>> Perf top on recovery produces:
>>>>
>>>>  27.76%  postgres[.] pglz_decompress
>>>>9.90%  [kernel][k] entry_SYSCALL_64_after_swapgs
>>>>7.09%  postgres[.] hash_search_with_hash_value
>>>>4.26%  libpthread-2.23.so  [.] llseek
>>>>3.64%  libpthread-2

Re: Postgres 10, slave not catching up with master

2018-10-23 Thread Hellmuth Vargas
Hi

can share recovery.conf file settings??

On Tue, Oct 23, 2018 at 00:28, Boris Sagadin (bo...@infosplet.com) wrote:

> Yes, turning wal_compression off improves things. Slave that was mentioned
> unfortunately lagged too much before this setting was applied and was
> turned off. However the remaining slave lags less now, although still
> occasionally up to a few minutes. I think single threadedness of recovery
> is a big slowdown for write heavy databases. Maybe an option to increase
> wal_size beyond 16MB in v11 will help.
>
> In the meantime we'll solve this by splitting the DB to 2 or 3 clusters or
> maybe trying out some sharding solution like Citus.
>
>
> Boris
>
> On Sun, Oct 21, 2018 at 9:06 AM, Boris Sagadin 
> wrote:
>
>> Hello,
>>
>> I have a database running on i3.8xlarge (256GB RAM, 32 CPU cores, 4x
>> 1.9TB NVMe drive) AWS instance with about 5TB of disk space occupied, ext4,
>> Ubuntu 16.04.
>>
>> Multi-tenant DB with about 4 tables, insert heavy.
>>
>> I started a new slave with identical HW specs, SR. DB started syncing
>> from master, which took about 4 hours, then it started applying the WALs.
>> However, it seems it can't catch up. Delay is still around 3 hours
>> (measured with now() - pg_last_xact_replay_timestamp()), even a day later.
>> It goes a few 100s up and down, but it seems to float around 3h mark.
>>
>> Disk IO is low at about 10%, measured with iostat, no connected clients,
>> recovery process is at around 90% CPU single core usage.
>>
>> Tried tuning the various parameters, but with no avail. Only thing I
>> found suspicious is stracing the recovery process constantly produces many
>> errors such as:
>>
>> lseek(428, 0, SEEK_END) = 780124160
>> lseek(30, 0, SEEK_END)  = 212992
>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
>> unavailable)
>> lseek(680, 0, SEEK_END) = 493117440
>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
>> unavailable)
>> lseek(774, 0, SEEK_END) = 583368704
>>
>> ...[snip]...
>>
>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
>> unavailable)
>> lseek(774, 0, SEEK_END) = 583368704
>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
>> unavailable)
>> lseek(277, 0, SEEK_END) = 502882304
>> lseek(6, 516096, SEEK_SET)  = 516096
>> read(6,
>> "\227\320\5\0\1\0\0\0\0\340\7\246\26\274\0\0\315\0\0\0\0\0\0\0}\0178\5&/\260\r"...,
>> 8192) = 8192
>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
>> unavailable)
>> lseek(735, 0, SEEK_END) = 272809984
>> read(9, 0x7ffe4001f557, 1)  = -1 EAGAIN (Resource temporarily
>> unavailable)
>> lseek(277, 0, SEEK_END) = 502882304
>>
>> ls -l fd/9
>> lr-x-- 1 postgres postgres 64 Oct 21 06:21 fd/9 -> pipe:[46358]
>>
>>
>> Perf top on recovery produces:
>>
>>  27.76%  postgres[.] pglz_decompress
>>9.90%  [kernel][k] entry_SYSCALL_64_after_swapgs
>>7.09%  postgres[.] hash_search_with_hash_value
>>4.26%  libpthread-2.23.so  [.] llseek
>>3.64%  libpthread-2.23.so  [.] __read_nocancel
>>2.80%  [kernel][k] __fget_light
>>2.67%  postgres[.] 0x0034d3ba
>>1.85%  [kernel][k] ext4_llseek
>>1.84%  postgres[.] pg_comp_crc32c_sse42
>>1.44%  postgres[.] hash_any
>>1.35%  postgres[.] 0x0036afad
>>1.29%  postgres[.] MarkBufferDirty
>>1.21%  postgres[.] XLogReadRecord
>> [...]
>>
>> Tried changing the process limits with prlimit to unlimited, but no
>> change.
>>
>> I can turn off the WAL compression but I doubt this is the main culprit.
>> Any ideas appreciated.
>>
>> Regards,
>> Boris
>>
>>
>

-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: Select "todays" timestamps in an index friendly way

2018-10-23 Thread Hellmuth Vargas
Hi

On Tue, Oct 23, 2018 at 05:41, Thomas Kellerer (spam_ea...@gmx.net) wrote:

> Lutz Horn schrieb am 23.10.2018 um 12:19:
> > Hi Thomas,
> >
> > On Tue, Oct 23, 2018 at 12:11:55PM +0200, Thomas Kellerer wrote:
> >> I typically use:
> >>
> >>   where ts >= date '2018-10-23'
> >> and ts < date '2018-10-23' + 1
> >
> > But here the date is an explicit value. Francisco reworded my question:
> >
> >> if your definition of today is 'same value as now() when truncated to
> >> days'
> >
> > That's what I am (was, thanks to Francisco) looking for.
>
> Then use current_date:
>
>where ts >= current_date
>  and ts < current_date + 1
>
>
>
Is this equally valid?

where ts >= current_date


cordialmente:

Hellmuth Vargas


Re: Optimizing execution of expensive subqueries

2018-07-11 Thread Hellmuth Vargas
Hi

Try this way:

SELECT
  tbl.field1, tbl.field2, tbl.field3, ...,
  b.Thingy1Sum,
  ... repeat for multiple thingies ...
FROM
  tbl
  LEFT JOIN LATERAL (
    SELECT anothertbl.UserId, SUM(Duration) AS Thingy1Sum
    FROM anothertbl
    WHERE anothertbl.UserId = tbl.UserId AND anothertbl.ThingyId = 1
    GROUP BY 1
  ) AS b ON tbl.UserId = b.UserId
ORDER BY tbl.field1 LIMIT 20


On Wed, Jul 11, 2018 at 09:25, Mathieu Fenniak (mathieu.fenn...@replicon.com) wrote:

> Hi pgsql-general!
>
> I'm currently looking at a query that is generally selecting a bunch of
> simple columns from a table, and also performing some subqueries to
> aggregate related data, and then sorting by one of the simple columns and
> paginating the result.
>
> eg.
>
> SELECT
>   tbl.field1, tbl.field2, tbl.field3, ...,
>   (SELECT SUM(Duration) FROM anothertbl WHERE anothertbl.UserId = tbl.UserId
> AND anothertbl.ThingyId = 1) as Thingy1Sum,
>   ... repeat for multiply thingies ...
> FROM
>   tbl
> ORDER BY tbl.field1 LIMIT 20
>
> I'm finding that if "tbl" contains hundreds of thousands of rows, the
> subqueries are being executed hundreds of thousands of times.  Because of
> the sorting and pagination, this is appears to be unnecessary, and the
> result is slow performance.  (PostgreSQL 9.5.9 server)
>
> I've only found one solution so far, which is to perform the sort &
> pagination in a CTE, and the subqueries externally.  Are there any other
> approaches that can be taken to optimize this and prevent the unnecessary
> computation?
>
> CTE rewrite:
>
> WITH cte AS (
> SELECT
>   tbl.field1, tbl.field2, tbl.field3
> FROM
>   tbl
> ORDER BY tbl.field1 LIMIT 20
> )
> SELECT cte.*,
>   (SELECT SUM(Duration) FROM anothertbl WHERE anothertbl.UserId
> = tbl.UserId AND anothertbl.ThingyId = 1) as Thingy1Sum,
>   ... repeat for multiply thingies ...
> FROM cte;
>
> Thanks for any thoughts you have,
>
> Mathieu Fenniak
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.


Re: Split daterange into sub periods

2018-07-05 Thread Hellmuth Vargas
Hi

select ($$[$$|| to_char(min(n.dato),'YYYY-MM-DD') || $$,$$ ||
to_char(max(n.dato),'YYYY-MM-DD') || $$]$$)::daterange,
daterange(min(n.dato)::date,max(n.dato)::date)
from (
select u.dato,anterior,(u.dato-anterior)::interval,sum(case when anterior
is null or (u.dato -anterior)::interval='1 day'::interval  then 0 else 1
end) over(order by u.dato) as grupo
from (
select u.dato, lag(u.dato) over( order by u.dato) as anterior, lead(u.dato)
over( order by u.dato)
from (
select * from generate_series(lower('[2018-01-01,
2018-01-31]'::daterange),upper('[2018-01-01, 2018-01-31]'::daterange),'1
day'::interval) as a(dato)
except
(
select generate_series(lower(a.dato),upper(a.dato),'1 day'::interval) from
(values('[2018-01-04,2018-01-06]'::daterange),('[2018-01-09,2018-01-12]'::daterange),('[2018-01-18,2018-01-19]'::daterange))
as a(dato)

)
) as u order by u.dato
) as u
) as n
group by grupo
order by 1



daterange|daterange
-+-
 [2018-01-01,2018-01-04) | [2018-01-01,2018-01-03)
 [2018-01-08,2018-01-09) | empty
 [2018-01-14,2018-01-18) | [2018-01-14,2018-01-17)
 [2018-01-21,2018-02-02) | [2018-01-21,2018-02-01)
(4 rows)




On Thu, Jul 5, 2018 at 10:39, Andreas Kretschmer (andr...@a-kretschmer.de) wrote:

>
>
> On 05.07.2018 15:49, hmidi slim wrote:
>
> Hi,
> I'm looking for splitting a daterange into many subperiods following this
> example:
>
> Base Date: [2018-01-01, 2018-01-31]
> overlapped_periods:
> 1- [ 2018-01-04, 2018-01-06]
> 2- [ 2018-01-09, 2018-01-12]
> 3- [ 2018-01-18, 2018-01-19]
>
> I try to get such a result:
> 1- [ 2018-01-01, 2018-01-03]
> 2- [ 2018-01-07, 2018-01-08]
> 3- [ 2018-01-13, 2018-01-17]
> 4- [ 2018-01-20, 2018-01-31]
>
> The operator '-' does not support this :
>
> SELECT daterange('2018-01-01', '2018-01-31', '[]') - daterange('2018-01-04', 
> '2018-01-06', '[]');
>
>
> I got this error:
>
> *ERROR: result of range difference would not be contiguous
>
> *
>
> Is there any operators to make the split of daterange?
>
> andreas@[local]:5432/test# \d hmidi
> Table "public.hmidi"
>  Column |   Type| Collation | Nullable | Default
> +---+---+--+-
>  id | integer   |   | not null |
>  d  | daterange |   |  |
> Indexes:
> "hmidi_pkey" PRIMARY KEY, btree (id)
>
> andreas@[local]:5432/test# insert into hmidi values
> (1,'[2018-01-04,2018-01-06]');INSERT 0 1
> andreas@[local]:5432/test# insert into hmidi values
> (2,'[2018-01-09,2018-01-12]');INSERT 0 1
> andreas@[local]:5432/test# insert into hmidi values
> (3,'[2018-01-18,2018-01-19]');INSERT 0 1
> andreas@[local]:5432/test# with month as (select s::date from
> generate_series('2018-01-01'::date, '2018-01-31'::date,'1day'::interval)
> s), tmp as ( select month.s, case when hmidi.d @> month.s then 1 else NULL
> end as covered from month left join hmidi on month.s <@ hmidi.d),tmp2 as (
> select *, coalesce((sum(case when covered = 1 then 1 else NULL end) over
> (order by s))+1,1) as p from tmp) select p, min(s), max(s) from tmp2 where
> covered is null group by p order by p;
>  p  |min |max
> ++
>   1 | 2018-01-01 | 2018-01-03
>   4 | 2018-01-07 | 2018-01-08
>   8 | 2018-01-13 | 2018-01-17
>  10 | 2018-01-20 | 2018-01-31
> (4 rows)
>
>
> Regards, Andreas
> --
> 2ndQuadrant Deutschland
>


-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-20 Thread Hellmuth Vargas
Hi
It may not be the most elegant solution but it works!


with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
path, weight)
as (
select
name, step, ingredient, quantity, unit
,   quantity::numeric(10,2)
,   step::text
,   case when unit = 'g' then quantity::numeric(10,2) else null
end
  from recipe
 where name = 'pizza'
union all
select
recipe.name, recipe.step, recipe.ingredient,
recipe.quantity, recipe.unit
,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
,   pizza.path || '.' || recipe.step
,   case when recipe.unit = 'g' then (pizza.rel_qty *
recipe.quantity)::numeric(10,2) else null end
  from pizza
  join recipe on (recipe.name = pizza.ingredient)
), parcial_weight as
(
select a.path,sum(b.weight) as sum_weight
from pizza as a cross join pizza as b
where b.path ilike a.path || '%'
group by 1
order by path

)
select a.path, a.ingredient, a.quantity, a.rel_qty, a.unit,
a.weight, b.sum_weight as partial_weight, sum(a.weight) over() as total_weight
  from pizza as a
  left join parcial_weight as b on a.path=b.path
 order by a.path;



 path  |  ingredient  | quantity | rel_qty | unit  | weight |
partial_weight | total_weight
---+--+--+-+---+++--
 1 | tomato sauce | 1.00 |1.00 | pcs   || 113.00 |
 315.50
 1.1   | tomato   |   100.00 |  100.00 | g | 100.00 | 100.00 |
 315.50
 1.2   | basil|10.00 |   10.00 | g |  10.00 |  10.00 |
 315.50
 1.3   | salt | 3.00 |3.00 | g |   3.00 |   3.00 |
 315.50
 2 | pizza bottom | 1.00 |1.00 | pcs   || 202.50 |
 315.50
 2.2   | dough| 1.00 |1.00 | pcs   || 200.00 |
 315.50
 2.2.1 | flour|   150.00 |  150.00 | g | 150.00 | 150.00 |
 315.50
 2.2.2 | water|50.00 |   50.00 | g |  50.00 |  50.00 |
 315.50
 2.2.3 | salt | 1.00 |1.00 | pinch |||
 315.50
 2.3   | carbon   | 2.50 |2.50 | g |   2.50 |   2.50 |
 315.50
(10 rows)




On Wed, Jun 20, 2018 at 10:54, Alban Hertroys (haram...@gmail.com) wrote:

> On 19 June 2018 at 21:14, Hellmuth Vargas  wrote:
> >
> > Hi
> >
> > with partial sum:
> >
> > with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
> path,
> > weight)
> > as (
> > select
> > name, step, ingredient, quantity, unit
> > ,   quantity::numeric(10,2)
> > ,   step::text
> > ,   case when unit = 'g' then quantity::numeric(10,2) else
> null
> > end
> >   from recipe
> >  where name = 'pizza'
> > union all
> > select
> > recipe.name, recipe.step, recipe.ingredient,
> > recipe.quantity, recipe.unit
> > ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> > ,   pizza.path || '.' || recipe.step
> > ,   case when recipe.unit = 'g' then (pizza.rel_qty *
> > recipe.quantity)::numeric(10,2) else null end
> >   from pizza
> >   join recipe on (recipe.name = pizza.ingredient)
> > )
> > select path, ingredient, quantity, rel_qty, unit, weight,sum(weight)
> > over(partition by split_part(path,'.',1)) as parcial_weight, sum(weight)
> > over() as total_weight
> >   from pizza
> >  order by path;
> >
> >  path  |  ingredient  | quantity | rel_qty | unit  | weight |
> parcial_weight
> > | total_weight
> >
> ---+--+--+-+---+++--
> >  1 | tomato sauce | 1.00 |1.00 | pcs   ||
>  113.00
> > |   313.00
> >  1.1   | tomato   |   100.00 |  100.00 | g | 100.00 |
>  113.00
> > |   313.00
> >  1.2   | basil|10.00 |   10.00 | g |  10.00 |
>  113.00
> > |   313.00
> >  1.3   | salt | 3.00 |3.00 | g |   3.00 |
>  113.00
> > |   313.00
> >  2 | pizza bottom | 1.00 |1.00 | pcs   ||
>  200.00
> > |   313.00
> >  2.2   | dough| 1.00 |1.00 | pcs   ||
>  200.00
> > |   313.00
> >  2.2.1 | flour|   150.00 |  150.00 | g | 150.00 |
>  200.00
> > |   313.00
> >  2.2.2 | water|50.00 |   50.00 | g |  50.00 |
>  200.00
> > |   313.00
> >  2.2.3 | salt | 1.00 |1.00 | pinch ||
>  200.00
> > |   

Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Hellmuth Vargas
*Hi*

This is gorgeous but I suspect any level greater than 10 wide will present
sorting problems, no?


*no, it should not be a problem*

 Maybe a fixed two-digit, zero filled number per level?

*neither*

Pushing the problem off by an order of magnitude :)
An exercise left to the OP perhaps.



On Tue, Jun 19, 2018 at 14:52, Rob Sargent (robjsarg...@gmail.com) wrote:

>
>
> On 06/19/2018 01:14 PM, Hellmuth Vargas wrote:
>
>
> Hi
>
> with partial sum:
>
>
>
>
> with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
> path, weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else
> null end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient,
> recipe.quantity, recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty *
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight,*sum(weight)
> over(partition by split_part(path,'.',1)) as parcial_weight*, *sum(weight)
> over() as total_weight*
>   from pizza
>  order by path;
>
>  path  |  ingredient  | quantity | rel_qty | unit  | weight |
> parcial_weight | total_weight
>
> ---+--+--+-+---+++--
>  1 | tomato sauce | 1.00 |1.00 | pcs   ||
>  113.00 |   313.00
>  1.1   | tomato   |   100.00 |  100.00 | g | 100.00 |
>  113.00 |   313.00
>  1.2   | basil|10.00 |   10.00 | g |  10.00 |
>  113.00 |   313.00
>  1.3   | salt | 3.00 |3.00 | g |   3.00 |
>  113.00 |   313.00
>  2 | pizza bottom | 1.00 |1.00 | pcs   ||
>  200.00 |   313.00
>  2.2   | dough| 1.00 |1.00 | pcs   ||
>  200.00 |   313.00
>  2.2.1 | flour|   150.00 |  150.00 | g | 150.00 |
>  200.00 |   313.00
>  2.2.2 | water|50.00 |   50.00 | g |  50.00 |
>  200.00 |   313.00
>  2.2.3 | salt | 1.00 |1.00 | pinch ||
>  200.00 |   313.00
> (9 rows)
>
>
>
>
>
>

-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Hellmuth Vargas
Hi

with partial sum:




with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
path, weight)
as (
select
name, step, ingredient, quantity, unit
,   quantity::numeric(10,2)
,   step::text
,   case when unit = 'g' then quantity::numeric(10,2) else null
end
  from recipe
 where name = 'pizza'
union all
select
recipe.name, recipe.step, recipe.ingredient,
recipe.quantity, recipe.unit
,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
,   pizza.path || '.' || recipe.step
,   case when recipe.unit = 'g' then (pizza.rel_qty *
recipe.quantity)::numeric(10,2) else null end
  from pizza
  join recipe on (recipe.name = pizza.ingredient)
)
select path, ingredient, quantity, rel_qty, unit, weight,*sum(weight)
over(partition by split_part(path,'.',1)) as parcial_weight*, *sum(weight)
over() as total_weight*
  from pizza
 order by path;

 path  |  ingredient  | quantity | rel_qty | unit  | weight |
parcial_weight | total_weight
---+--+--+-+---+++--
 1 | tomato sauce | 1.00 |1.00 | pcs   ||
 113.00 |   313.00
 1.1   | tomato   |   100.00 |  100.00 | g | 100.00 |
 113.00 |   313.00
 1.2   | basil|10.00 |   10.00 | g |  10.00 |
 113.00 |   313.00
 1.3   | salt | 3.00 |3.00 | g |   3.00 |
 113.00 |   313.00
 2 | pizza bottom | 1.00 |1.00 | pcs   ||
 200.00 |   313.00
 2.2   | dough| 1.00 |1.00 | pcs   ||
 200.00 |   313.00
 2.2.1 | flour|   150.00 |  150.00 | g | 150.00 |
 200.00 |   313.00
 2.2.2 | water|50.00 |   50.00 | g |  50.00 |
 200.00 |   313.00
 2.2.3 | salt | 1.00 |1.00 | pinch ||
 200.00 |   313.00
(9 rows)




On Tue, Jun 19, 2018 at 11:49, Hellmuth Vargas (hiv...@gmail.com) wrote:

> Hi
>
> with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
> path, weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else
> null end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient,
> recipe.quantity, recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty *
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight, *sum(weight)
> over() as total_weight*
>   from pizza
>  order by path;
>
>  path  |  ingredient  | quantity | rel_qty | unit  | weight | total_weight
> ---+--+--+-+---++--
>  1 | tomato sauce | 1.00 |1.00 | pcs   ||   313.00
>  1.1   | tomato   |   100.00 |  100.00 | g | 100.00 |   313.00
>  1.2   | basil|10.00 |   10.00 | g |  10.00 |   313.00
>  1.3   | salt | 3.00 |3.00 | g |   3.00 |   313.00
>  2 | pizza bottom | 1.00 |1.00 | pcs   ||   313.00
>  2.2   | dough| 1.00 |1.00 | pcs   ||   313.00
>  2.2.1 | flour|   150.00 |  150.00 | g | 150.00 |   313.00
>  2.2.2 | water|50.00 |   50.00 | g |  50.00 |   313.00
>  2.2.3 | salt | 1.00 |1.00 | pinch ||   313.00
> (9 rows)
>
>
>
>
>
> El mar., 19 de jun. de 2018 a la(s) 08:39, Alban Hertroys (
> haram...@gmail.com) escribió:
>
>> Hi all,
>>
>> I'm struggling with a hierarchical query where I'm tasked to calculate
>> weights of items in an (exploded) Bill of Materials, based on the weights
>> of their components. Not all components are measured with a weight,
>> sometimes there are pieces, meters, areas, etc, and the hierarchy is of
>> varying levels of depth.
>>
>> It would help if I could track a sum() throughout the explosion that
>> would write back onto parent rows when the recursion returns: postorder
>> traversal.
>>
>> I created a simplified example about making pizza:
>>
>> CREATE TABLE ingredient (
>> name text NOT NULL
>> );
>>
>> CREATE TABLE recipe (
>> name text NOT NULL,
>> ingredient text NOT NULL,
>

Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Hellmuth Vargas
Hi

with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
path, weight)
as (
select
name, step, ingredient, quantity, unit
,   quantity::numeric(10,2)
,   step::text
,   case when unit = 'g' then quantity::numeric(10,2) else null
end
  from recipe
 where name = 'pizza'
union all
select
recipe.name, recipe.step, recipe.ingredient,
recipe.quantity, recipe.unit
,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
,   pizza.path || '.' || recipe.step
,   case when recipe.unit = 'g' then (pizza.rel_qty *
recipe.quantity)::numeric(10,2) else null end
  from pizza
  join recipe on (recipe.name = pizza.ingredient)
)
select path, ingredient, quantity, rel_qty, unit, weight, *sum(weight)
over() as total_weight*
  from pizza
 order by path;

 path  |  ingredient  | quantity | rel_qty | unit  | weight | total_weight
---+--+--+-+---++--
 1 | tomato sauce | 1.00 |1.00 | pcs   ||   313.00
 1.1   | tomato   |   100.00 |  100.00 | g | 100.00 |   313.00
 1.2   | basil|10.00 |   10.00 | g |  10.00 |   313.00
 1.3   | salt | 3.00 |3.00 | g |   3.00 |   313.00
 2 | pizza bottom | 1.00 |1.00 | pcs   ||   313.00
 2.2   | dough| 1.00 |1.00 | pcs   ||   313.00
 2.2.1 | flour|   150.00 |  150.00 | g | 150.00 |   313.00
 2.2.2 | water|50.00 |   50.00 | g |  50.00 |   313.00
 2.2.3 | salt | 1.00 |1.00 | pinch ||   313.00
(9 rows)





On Tue, Jun 19, 2018 at 08:39, Alban Hertroys (haram...@gmail.com) wrote:

> Hi all,
>
> I'm struggling with a hierarchical query where I'm tasked to calculate
> weights of items in an (exploded) Bill of Materials, based on the weights
> of their components. Not all components are measured with a weight,
> sometimes there are pieces, meters, areas, etc, and the hierarchy is of
> varying levels of depth.
>
> It would help if I could track a sum() throughout the explosion that would
> write back onto parent rows when the recursion returns: postorder traversal.
>
> I created a simplified example about making pizza:
>
> CREATE TABLE ingredient (
> name text NOT NULL
> );
>
> CREATE TABLE recipe (
> name text NOT NULL,
> ingredient text NOT NULL,
> quantity numeric(6,2) NOT NULL,
> unit text NOT NULL,
> step integer NOT NULL
> );
>
> COPY ingredient (name) FROM stdin;
> tomato
> basil
> salt
> tomato sauce
> flour
> water
> yeast
> dough
> pizza bottom
> pizza
> \.
>
> COPY recipe (name, ingredient, quantity, unit, step) FROM stdin;
> tomato saucetomato  100.00  g   1
> dough   flour   150.00  g   1
> tomato saucebasil   10.00   g   2
> pizza   pizza bottom1.00pcs 2
> tomato saucesalt3.00g   3
> dough   salt1.00pinch   3
> pizza   tomato sauce1.00pcs 1
> pizza bottomdough   1.00pcs 2
> dough   water   50.00   g   2
> \.
>
> ALTER TABLE ONLY ingredient
> ADD CONSTRAINT ingredient_pkey PRIMARY KEY (name);
>
> ALTER TABLE ONLY recipe
> ADD CONSTRAINT recipe_pkey PRIMARY KEY (name, ingredient);
>
> ALTER TABLE ONLY recipe
> ADD CONSTRAINT recipe_ingredient_fkey FOREIGN KEY (ingredient)
> REFERENCES ingredient(name);
>
> ALTER TABLE ONLY recipe
> ADD CONSTRAINT recipe_name_fkey FOREIGN KEY (name) REFERENCES
> ingredient(name);
>
>
> A query listing the recipe for 'pizza' would be as follows:
> development=> with recursive pizza (name, step, ingredient, quantity,
> unit, rel_qty, path, weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else
> null end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient,
> recipe.quantity, recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty *
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight
>   from pizza
>  order by path;
>
>  path  |  ingredient  | quantity | rel_qty | unit  | weight
> ---+--+--+-+---+
>  1 | tomato sauce | 1.00 |1.00 | pcs   |
>  1.1   | tomato   |   100.00 |  100.00 | g | 100.00
>  1.2   | basil|10.00 |   10.00 | g |  10.00
>  1.3   | salt | 3.00 |

Re: Creating a FUNCTION using ROW_NUMBER

2018-02-20 Thread Hellmuth Vargas
Hi Yeli

I'll reply inline.

On February 20, 2018 at 12:26, yeli wrote:

> Good day, I am trying to create a concatenated code through a function
> that does the following:
>
> I have a PostgreSQL database that is associated with GIS software.
>
> Basically it is a table where data about populated centers is stored;
> these populated centers have data that, when concatenated, are meant to
> form unique codes.
>
> Id,ccentpob,nombre,cod_cent,sim_cent.
>
> The data in the table looks like this:
>
> 1,’’ , Estado Vargas,24,VAR;
>
> 2,,’’,Estado Vargas,24,VAR;
>
> 3,’’;Estado Amazonas,02,AMA;
>
> 4,’’;Estado Amazonas,02,AMA;
>
> 5,’’;Estado Amazonas,02,AMA;
>
> 6,’’;Estado Amazonas,02,AMA;
>
> 7,’’;Estado Anzoátegui,03,ANZ;
>
>
> Using ROW_NUMBER, I manage to number each populated center according to
> the federal entity it belongs to:
>
> SELECT Id,ccentpob,nombre,cod_cent,sim_cent, ROW_NUMBER() OVER
> (PARTITION BY cent), which lets me keep a sequence of the data according
> to their different populated centers. So the result of this query looks
> like this:
>
> Id,ccentpob,nombre,cod_cent,sim_cent,row_number
>
> 1,’’ , Estado Vargas,24,VAR,1;
>
> 2,,’’,Estado Vargas,24,VAR,2;
>
> 3,’’;Estado Amazonas,02,AMA,1;
>
> 4,’’;Estado Amazonas,02,AMA,2;
>
> 5,’’;Estado Amazonas,02,AMA,3;
>
> 6,’’;Estado Amazonas,02,AMA,4;
>
> 7,’’;Estado Anzoátegui,03,ANZ,1;
>
> Up to this point everything works perfectly. Having tested this as a
> query, I am trying to put it into a function so I can create a trigger
> that fires every time a row is inserted from the GIS and numbers it
> following the sequence. That will let me concatenate the data to build
> the ccentpob, which is the only value I am missing.
>
> The function I use is the following:
>
>
> CREATE TRIGGER row_number AFTER INSERT
>
> ON nombres_geograficos.ba_2202_100101_n_vargas_centros_poblados
>
> FOR EACH ROW EXECUTE PROCEDURE nombres_geograficos.row_number();
>
>
>
I think the trigger should be BEFORE, not AFTER, if you want to modify a
field before it is inserted.


> CREATE OR REPLACE FUNCTION nombres_geograficos.row_number() RETURNS
> TRIGGER AS '
>
> DECLARE
>
> row_number numeric;
>
> BEGIN
>
> IF NEW.cent IS NOT NULL THEN
>
> row_number := (SELECT ROW_NUMBER ( ) OVER (PARTITION BY cent)
>


row_number is a function; it is not a variable that you can assign to.


> FROM nombres_geograficos.ba_2202_100101_n_vargas_centros_poblados);
>


nombres_geograficos.ba_2202_100101_n_vargas_centros_poblados: is the table
partitioned? Is all the Vargas data in a single table? Otherwise it would not
make much sense to be numbering the rows like this.


> END IF;
>
> RETURN NEW;
>
> END'
>
> language PLPGSQL;
>
>
> But when I try to use the GIS, which is QGIS 2.18, it gives me an error…
> and the row is not stored.
>
> I know and I am aware that ROW_NUMBER is a transient value, but I need
> this value to be part of the table so that my trigger can run
> successfully.
>
> Does anyone have a suggestion?
>
> I would really appreciate it.
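
A rough sketch of the BEFORE INSERT idea (it assumes a real integer column,
here called secuencia, is added to the table to persist the position, since
ROW_NUMBER() only exists at query time; concurrent inserts for the same
cod_cent are not handled):

CREATE OR REPLACE FUNCTION nombres_geograficos.asigna_secuencia() RETURNS trigger AS $$
BEGIN
    IF NEW.cod_cent IS NOT NULL THEN
        -- next position within this federal entity
        SELECT coalesce(max(secuencia), 0) + 1
          INTO NEW.secuencia
          FROM nombres_geograficos.ba_2202_100101_n_vargas_centros_poblados
         WHERE cod_cent = NEW.cod_cent;

        -- build the concatenated code, e.g. '24' || '0001' = '240001'
        NEW.ccentpob := NEW.cod_cent || lpad(NEW.secuencia::text, 4, '0');
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER asigna_secuencia
    BEFORE INSERT ON nombres_geograficos.ba_2202_100101_n_vargas_centros_poblados
    FOR EACH ROW EXECUTE PROCEDURE nombres_geograficos.asigna_secuencia();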
>
>


-- 
Cordialmente,
Hellmuth I. Vargas S.