Re: Indexing fragments of a column's value ?

2023-11-03 Thread Thomas Boussekey
Le ven. 3 nov. 2023 à 21:01, Tom Lane  a écrit :

> David Gauthier  writes:
> > I'm asking about the possibility of indexing portions of a column's value
> > where the column has a static field format.
>
> GIN indexes are meant for exactly that.  You might have to write your
> own opclass to break up the input values in the way you want though.
>
> A less difficult answer would be to write a function that breaks up
> the input into (say) an array of text and then use the existing
> GIN array support.  But you'd pay for that by needing to write more
> complicated queries to use the index.
>
> regards, tom lane
>
>
Hello David,

Reading your mail, it seems that your data column contains 3 different
kinds of atomic information:

* Characters 1-2
* Characters 3-4
* Characters 5-8

Does it make sense to split this data into 3 separate columns?
Each one could be indexed, and you can rebuild the original value thanks to a
generated column (see the sketch below):
https://www.postgresql.org/docs/current/ddl-generated-columns.html
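
A minimal sketch of that idea; the table and column names are hypothetical,
and generated columns require PostgreSQL 12+:

```sql
CREATE TABLE my_table (
    part_a text NOT NULL,   -- characters 1-2 of the original value
    part_b text NOT NULL,   -- characters 3-4
    part_c text NOT NULL,   -- characters 5-8
    -- the original fixed-format value, rebuilt automatically
    full_code text GENERATED ALWAYS AS (part_a || part_b || part_c) STORED
);

CREATE INDEX ON my_table (part_a);
CREATE INDEX ON my_table (part_b);
CREATE INDEX ON my_table (part_c);

-- A query on one fragment can then use the matching index:
-- SELECT * FROM my_table WHERE part_b = '42';
```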

HTH, Thomas


Re: Materialized Views - Way to refresh automatically (Incrementaly)

2023-05-11 Thread Thomas Boussekey
Hello

Le jeu. 11 mai 2023, 12:46, FOUTE K. Jaurès  a
écrit :

> Hello Everyone,
>
> Is there a way on PostgreSQL 14 to refresh a materialized view
> automatically (incrementally)?
>

Have a look at the pg_ivm extension: https://github.com/sraoss/pg_ivm
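
A minimal sketch of how pg_ivm is typically used, assuming the extension is
installed on your PostgreSQL 14 server; table and view names are hypothetical,
so check the project README for the exact API:

```sql
CREATE EXTENSION IF NOT EXISTS pg_ivm;

-- create_immv() builds an incrementally maintained materialized view:
SELECT create_immv(
    'sales_summary',
    'SELECT product_id, sum(amount) AS total FROM sales GROUP BY product_id'
);

-- The view is then updated automatically when the base table changes:
-- SELECT * FROM sales_summary;
```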

>
> --
> Jaurès FOUTE
>

Hope this helps,
Thomas

>
>


Re: Autovacuum on sys tables

2022-12-19 Thread Thomas Boussekey
Hi,

Le lun. 19 déc. 2022 à 07:12, Inzamam Shafiq  a
écrit :

> Thanks Thomas for the response,
>
> Does it mean that if I run VACUUM ANALYZE on the pg_attribute or pg_class
> table, it is completely safe? Can you please also confirm what is meant by
> "mid-level" vacuum?
>

To clarify my last message, VACUUM ANALYZE is a trade-off between autovacuum,
which can be considered a lazy, non-blocking operation, and VACUUM FULL (an
eager and blocking one).
The `mid-level` wording in my previous mail was meant to describe an
intermediate situation in terms of blocking and resource consumption.

VACUUM ANALYZE will:
- remove dead tuple versions (their space becomes reusable)
- refresh statistics (which can improve execution plans for queries)

Have a look at this website, it explains that better than me ;) :
https://www.interdb.jp/pg/pgsql06.html
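
As an illustration, a minimal sketch of that middle-level operation on the
two catalog tables discussed here:

```sql
-- Non-blocking for normal reads and writes, but heavier than autovacuum:
VACUUM (ANALYZE, VERBOSE) pg_catalog.pg_class;
VACUUM (ANALYZE, VERBOSE) pg_catalog.pg_attribute;
```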



>
> Regards,
>
> *Inzamam Shafiq*
> *Sr. DBA*
> ------
> *From:* Thomas Boussekey 
> *Sent:* Sunday, December 18, 2022 4:01 PM
> *To:* Inzamam Shafiq 
> *Cc:* pgsql-general@lists.postgresql.org <
> pgsql-general@lists.postgresql.org>
> *Subject:* Re: Autovacuum on sys tables
>
> Hello Inzamam,
>
> Le sam. 17 déc. 2022 à 08:16, Inzamam Shafiq 
> a écrit :
>
> Dear Experts,
>
> Hope you are doing well.
>
> I have a question: autovacuum is running on sys tables like pg_class and
> pg_attribute, is this a normal thing? Further, if dead tuples are not
> removed by autovacuum, can we remove the dead tuples using VACUUM FULL
> or pg_repack on sys tables?
>
>
> pg_repack cannot run on system tables; it will FAIL with an explicit error
> message explaining the limitation.
>
> Each time you perform DDL operations (CREATE, DROP, ALTER), rows are
> inserted, updated or deleted in the system tables: pg_class, pg_attribute,
> ...
> Autovacuum performs "low-level" operations; it can be interesting to
> perform a "middle-level" vacuum with VACUUM ANALYZE, which is not blocking
> but consumes more resources than autovacuum.
>
> Performing a VACUUM FULL operation will block access to these core tables
> of your database.
> If your application/users can handle it, go ahead!
> At work, for this kind of operation, I set a statement_timeout so that the
> process is properly stopped if it runs longer than a defined amount of time.
>
> Hope this helps,
> Thomas
>
>
> Thank you.
>
> Regards,
>
> *Inzamam Shafiq*
> *Sr. DBA*
>
>


Re: Autovacuum on sys tables

2022-12-18 Thread Thomas Boussekey
Hello Inzamam,

Le sam. 17 déc. 2022 à 08:16, Inzamam Shafiq  a
écrit :

> Dear Experts,
>
> Hope you are doing well.
>
> I have a question: autovacuum is running on sys tables like pg_class and
> pg_attribute, is this a normal thing? Further, if dead tuples are not
> removed by autovacuum, can we remove the dead tuples using VACUUM FULL
> or pg_repack on sys tables?
>

pg_repack cannot run on system tables; it will FAIL with an explicit error
message explaining the limitation.

Each time you perform DDL operations (CREATE, DROP, ALTER), rows are
inserted, updated or deleted in the system tables: pg_class, pg_attribute,
...
Autovacuum performs "low-level" operations; it can be interesting to perform
a "middle-level" vacuum with VACUUM ANALYZE, which is not blocking but
consumes more resources than autovacuum.

Performing a VACUUM FULL operation will block access to these core tables of
your database.
If your application/users can handle it, go ahead!
At work, for this kind of operation, I set a statement_timeout so that the
process is properly stopped if it runs longer than a defined amount of time
(see the sketch below).
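
A minimal sketch of that last point, with a hypothetical 30-second limit:

```sql
-- Blocking operation: run it only when the application can tolerate the
-- system catalogs being locked for a short while.
SET statement_timeout = '30s';   -- abort cleanly if the rewrite takes too long

VACUUM FULL ANALYZE pg_catalog.pg_class;
VACUUM FULL ANALYZE pg_catalog.pg_attribute;

RESET statement_timeout;
```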

Hope this helps,
Thomas

>
> Thank you.
>
> Regards,
>
> *Inzamam Shafiq*
> *Sr. DBA*
>


Re: Inheritance pg_largeobject table

2022-12-08 Thread Thomas Boussekey
Le ven. 9 déc. 2022 à 06:27, Ron  a écrit :

> On 12/8/22 23:08, Zhao, Bing wrote:
>
> *Categorization: Unclassified *
>
> We are running PG 11.11 and have more than 50 TB of LO data to load into
> the pg_largeobject table. But 32 TB is the limit.
>
> We have created a couple of child tables that inherit from pg_largeobject,
> and we have tried using triggers (insert or update) to route rows into the
> child tables. It doesn't work when using the LO functions, e.g. lo_put,
> lo_create… they never seem to get triggered. But it works with an INSERT
> statement into pg_largeobject.
>
> If we move LO data from the root table to a child, lo_get will break.
>
>
>
> Any ideas?
>
>
> Out of curiosity, why inheritance instead of declarative partitioning?
>

Hello,

pg_largeobject is a system table belonging to the pg_catalog schema.

You cannot customize it (partition it, reindex it concurrently...) like a
regular table.
Maintenance operations on pg_largeobject are also limited; the pg_repack
extension cannot work on the pg_catalog schema.

We encountered a problem with large objects on production databases a few
weeks ago. My colleague sent a message to the community, and most of the
first feedback he received was: "why do you still use this technology from
the last century?"
After this adventure, we decided to stop using them in the months to come.
It will also let us use logical replication for our next technical
migrations.

Alternatives to pg_largeobject can be (see the sketch below):
- using bytea columns: the binary data is then stored in a TOAST structure
linked to the corresponding table. If your binary data is stored in
different tables, it is spread across different objects, which can make
maintenance easier;
- storing the binary data outside the database (if that is possible).
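
A minimal sketch of the bytea alternative; table and column names are
hypothetical:

```sql
CREATE TABLE document (
    document_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    file_name   text NOT NULL,
    content     bytea NOT NULL   -- replaces the lo_* large object
);

-- The binary content is read and written directly, no lo_* functions needed:
-- INSERT INTO document (file_name, content) VALUES ('report.pdf', $1);
-- SELECT content FROM document WHERE document_id = 42;
```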


> --
> Angular momentum makes the world go 'round.
>


Re: pg_dump is filling C: drive up to 100 percent

2022-07-19 Thread Thomas Boussekey
Hello Meera,

Le mar. 19 juil. 2022 à 13:42, Meera Nair  a écrit :

> Hi Abdul,
>
>
>
> We do realize that. With tar format, is there a way to customize the path
> used for temporary local files?
>
> Some way to configure another drive instead of using C:\?
>

I would try this solution: https://superuser.com/a/1448861/278835

>
>
> Regards,
>
> Meera
>
>
>
> *From:* Abdul Qoyyuum 
> *Sent:* Tuesday, July 19, 2022 3:40 PM
> *To:* Meera Nair 
> *Cc:* pgsql-general@lists.postgresql.org
> *Subject:* Re: pg_dump is filling C: drive up to 100 percent
>
>
>
> External email. Inspect before opening.
>
>
>
> Try dumping without tar format. https://dba.stackexchange.com/a/52730
> 
>
>
>
> On Tue, Jul 19, 2022 at 4:33 PM Meera Nair  wrote:
>
> Hi team,
>
>
>
> pg_dump  is filling C:\
>
>
>
> This is for postgres version 12. Binary directory, data directory are in
> E:\
>
> I’m redirecting pg_dump output also to E:\, I was taking a tar dump output.
>
> But C:\ is getting filled up . Looks like it is used for some sort of
> temporary staging.
>
> Is there a way not to use C:\ for this?
>
>
>
> Regards,
>
> Meera
>
>
>
>
>
>
> --
>
> Abdul Qoyyuum Bin Haji Abdul Kadir
>
> HP No: +673 720 8043
>

HTH,
Thomas


Re: Problem upgrading from 10 to 14 with pg_upgrade: unexpected error upgrading "template1" database for some clusters

2022-07-11 Thread Thomas Boussekey
Hi,

Le lun. 11 juil. 2022 à 21:22, Ken Yeung  a
écrit :

> In the course of upgrading a bunch of database clusters from 10 to 14
> using pg_upgrade tool, some databases result in the following error
> recorded in pg_upgrade_dump_1.log, such that cluster upgrade fails.
> FATAL:  password authentication failed for user "postgres"
>
This error happens on the restore side (Pg14).

pg_restore: error: connection to server at "localhost" (::1), port 50432
failed: FATAL:  password authentication failed for user "postgres"

means that the connection is attempted over IPv6 (is that possible,
according to your target postgresql.conf file?).
Does your Pg14 target pg_hba.conf file allow this kind of connection?

Last year, I had to migrate 200+ PostgreSQL instances from Pg9.5 to Pg12,
and I discovered some configuration differences while performing this kind
of pre-upgrade test.



>
> This was done on Microsoft Windows 11.
> PostgreSQL 10 and 14 used are built using Visual Studio 2015.
> Only a small number of database clusters has this problem, around 2 of 200.
> It appears these database are used from earlier cluster upgrade from 9.4
> to 10 as well.
>
> It would be very helpful if there is any possible workaround to overcome
> this problem.
>
> I attached a sample pg_upgrade_dump_1.log reported this error.


Hope it helps,
Thomas


Re: Table space not returned to the OS ?

2022-06-27 Thread Thomas Boussekey
Hello Florents,

Le lun. 27 juin 2022 à 11:30, Florents Tselai  a
écrit :

> Hi,
>
> A few months back (October) I had upgraded a Postgres instance from v12 —>
> 14.
>
> The database disk size under /var/lib/postgresql/12 was around 800GB+ back
> then.
> Note, that IIRC I had used hard-linking during the upgrade.
>
> In the database itself, lots of things have changed since.
> In fact, that database itself has been dropped at some point and restored
> from a backup.
>
> As I was running out of disk space, I started investigating and found out
> that
>
> /var/lib/postgresql/12/main/base/16385  —>  886GB+
> /var/lib/postgresql/14 —> 400GB
>
Can you check if your upgrade process has used hard links between the 2
folders, as explained here:
 https://dba.stackexchange.com/a/289007/98943

>
> The last modification date on that directory (../12/) appears to be around
> a month ago,
> when the table with relid 16385 was in fact dropped.
>
> Now, In my update scripts (I use this db as an OLAP) I occasionally run
> VACUUM.
>
> Is it weird that the 886GB space hasn’t been returned to the OS yet?
>
> What’s the safest way to return it to the OS manually?
>
> Thanks
>
>
>
>


Re: SQL question

2022-01-21 Thread Thomas Boussekey
Le ven. 21 janv. 2022 à 11:14,  a écrit :

>
>
> Hi,
>
> I am using a query pretty often that looks like
> SELECT <> WHERE <> AND
> <>
>
> Is there a way (with sql or plpgsql)  to convert that into
> SELECT myquery('<>')
>
> Kind regards
> Wolfgang Hamann
>
>
>
Hello Wolfgang,

You can use a FUNCTION to perform this operation.
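
A minimal sketch, with hypothetical table and column names, of wrapping a
recurring query in a SQL function so it can be called as
`SELECT * FROM myquery('...')`:

```sql
CREATE OR REPLACE FUNCTION myquery(p_customer text)
RETURNS TABLE (order_id bigint, total numeric)
LANGUAGE sql STABLE AS $$
    SELECT o.order_id, o.total
    FROM   orders AS o
    WHERE  o.customer_name = p_customer
      AND  o.total > 0;
$$;

-- Usage:
-- SELECT * FROM myquery('ACME');
```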

HTH,
Thomas


Re: List all tables from a specific database

2022-01-14 Thread Thomas Boussekey
Le ven. 14 janv. 2022 à 12:04, Ray O'Donnell  a écrit :

> On 14/01/2022 10:39, Flaviu2 wrote:
> > I work on a project that uses PostgreSQL, and I have 0 experience in
> > this regard. I hope to get help here. I need two SQL scripts for PostgreSQL:
> >
> > 1. Get all databases, which I aquired already:
> >
> > *SELECT datname FROM pg_database WHERE datistemplate = false*
> >
> > This one is functional, it's ok.
> >
> > 2. Now, I need to find all tables *under a specific* database. This one
> > I don't know how to achieve it. Can you help me here ? It is possible ?
>

> If it's any help, running psql with the -E switch will show you the SQL
> which psql generates... so then (within psql) issuing the \dt command to
> list tables will show you the SQL used.
>
> Ray.
>
Hello,

You can use this SQL command to view all the standard tables (known as
relations):

SELECT relname FROM pg_class WHERE relkind = 'r';

And this one for partitioned tables (if you use them):

SELECT relname FROM pg_class WHERE relkind = 'p';
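
If you want to exclude the system catalogs, a variant of the same query
listing regular and partitioned tables together:

```sql
SELECT n.nspname AS schema_name,
       c.relname AS table_name
FROM   pg_class     AS c
JOIN   pg_namespace AS n ON n.oid = c.relnamespace
WHERE  c.relkind IN ('r', 'p')
  AND  n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY 1, 2;
```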

HTH,
Thomas


> --
> Raymond O'Donnell // Galway // Ireland
> r...@rodonnell.ie
>
>
>


Re: [RPM/CentOS7] Need to disable repo_gpgcheck on pgdg-common when using RPM version 42.0-17.1

2021-06-30 Thread Thomas Boussekey
Hello all,

Yesterday, a new version of the RPM definition was released, integrating
many cool updates:
https://git.postgresql.org/gitweb/?p=pgrpms.git;a=summary

And at 10 PM yesterday, we encountered a problem on the PostgreSQL instance
upgrades that were performed against the previous RPM definition.
Using the latest tag is not among our team's best practices.

Is it possible to keep the last 2 versions?
I made a quick search in the scripts contained in the Git project
https://git.postgresql.org/gitweb/?p=pgrpms.git;a=summary, in order to see
if it is possible, as it was defined per OS version until 2019 (search for
pgdg-redhat-repo in
https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-7-x86_64/),
but it looks difficult to handle (I quickly got lost in the multiple
Makefiles).

Thanks in advance for your help,
Have a nice day,
Thomas B


Le jeu. 13 mai 2021 à 21:46, Dhanisha  a
écrit :

> Hi Devrim,
>
> Thank you for your response.
>
> I have tried your suggestion, but the error is still present.
>
> I get below stack trace when I run 'yum update' or 'yum install'
>
>
> I have tried few more things
> 1.  If I specify --nogpgcheck in yum install or remove
> pgdg-redhat-all.repo,
> the error goes away.
> 2. I have also specified their respective gpgkeys file.
>
>
>
> I don't think it's an issue with yum or the system at this point, as we can
> reproduce it on other systems.
>
> Sincerely,
> Dhanisha Phadate
>
>
>
>
> --
> Sent from:
> https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
>
>
>


Re: [RPM/CentOS7] Need to disable repo_gpgcheck on pgdg-common when using RPM version 42.0-17.1

2021-05-07 Thread Thomas Boussekey
Hello,

Le ven. 7 mai 2021 à 13:55, Devrim Gündüz  a écrit :

>
> Hi,
>
> On Thu, 2021-05-06 at 16:29 +0200, Thomas Boussekey wrote:
> > >
> > This morning, a new RPM version has been sent to
> > https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/ with
> > version ID 42.0-17.1
> > We had to adapt our tooling to comply with this new repository file.
> >
> > We faced the following error on section pgdg-common
> > > Failure talking to yum: failure: repodata/repomd.xml.asc from pgdg-
> > > common
> >
> > Searching on the internet, we managed to perform a workaround disabling
> > the
> > parameter `repo_gpgcheck` into the section pgdg-common.
> >
> > Hope it can help others,
>
> Actually please put that setting back. There was a sync issue which I
> fixed yesterday. repo_gpgcheck should be 1, per:
>
>
> https://people.planetpostgresql.org/devrim/index.php?/archives/113-CVE-2021-20271-and-PostgreSQL-YUMDNF-repo.html

I saw your blog article yesterday, nice job!

Yesterday, I didn't find the file `repomd.xml.asc` into the folder:
https://download.postgresql.org/pub/repos/yum/srpms/common/redhat/rhel-7-x86_64/repodata/

Now it is available!
I have removed my workaround.

Have a nice week-end,
Thomas

>
>
> Regards,
>
> --
> Devrim Gündüz
> Open Source Solution Architect, Red Hat Certified Engineer
> Twitter: @DevrimGunduz , @DevrimGunduzTR
>


[RPM/CentOS7] Need to disable repo_gpgcheck on pgdg-common when using RPM version 42.0-17.1

2021-05-06 Thread Thomas Boussekey
Hello all,

This morning, a new RPM version has been sent to
https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/ with
version ID 42.0-17.1
We had to adapt our tooling to comply with this new repository file.

We faced the following error on section pgdg-common
> Failure talking to yum: failure: repodata/repomd.xml.asc from pgdg-common

Searching on the internet, we managed to find a workaround: disabling the
parameter `repo_gpgcheck` in the pgdg-common section.

Hope it can help others,
Best regards,
Thomas


Re: getting tables list of other schema too

2021-02-24 Thread Thomas Boussekey
Hello Atul,

You can set a filter to limit the tables returned, e.g.:

\dt college.*

HTH,
Thomas

Le mer. 24 févr. 2021 à 08:54, Atul Kumar  a écrit :

> Hi,
>
> I have postgres 9.6 cluster running on centos 7 machine.
>
> when I set search_path to any user made schema with below command
>
> [enterprisedb@stg-edb02 ~ 01:51:39]$ psql edb
> edb=# \c test
>
> set search_path to college;
>
> and after listing the tables with command \dt, we should get list of
> tables of schema college only.
>
> but here I am getting list of tables of schema college and list of
> tables of schema sys along with it.
>
>
> Why is it happening, please suggest.
>
>
> test=# \dt
> List of relations
>  Schema |  Name   | Type  |Owner
> +-+---+--
>  college | ta_rule_error   | table | college
>  college | team_import | table | college
>  college | test_24022021   | table | enterprisedb
>  sys| callback_queue_table| table | enterprisedb
>  sys| dual| table | enterprisedb
>  sys| edb$session_wait_history| table | enterprisedb
>  sys| edb$session_waits   | table | enterprisedb
>  sys| edb$snap| table | enterprisedb
>  sys| edb$stat_all_indexes| table | enterprisedb
>  sys| edb$stat_all_tables | table | enterprisedb
>  sys| edb$stat_database   | table | enterprisedb
>  sys| edb$statio_all_indexes  | table | enterprisedb
>  sys| edb$statio_all_tables   | table | enterprisedb
>  sys| edb$system_waits| table | enterprisedb
>  sys| plsql_profiler_rawdata  | table | enterprisedb
>  sys| plsql_profiler_runs | table | enterprisedb
>  sys| plsql_profiler_units| table | enterprisedb
>  sys| product_component_version   | table | enterprisedb
>  sys| scheduler_0100_component_name   | table | college
>  sys| scheduler_0200_program  | table | college
>  sys| scheduler_0250_program_argument | table | college
>  sys| scheduler_0300_schedule | table | college
>  sys| scheduler_0400_job  | table | college
>  sys| scheduler_0450_job_argument | table | college
>
>
>


Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-31 Thread Thomas Boussekey
Thanks Laurenz for your email

Le lun. 31 août 2020 à 09:42, Laurenz Albe  a
écrit :

> On Sat, 2020-08-29 at 21:18 +0200, Thomas Boussekey wrote:
> > You can find at the end of this email a new version of the script that I
> > use to remove the TOAST table on the pg_largeobject catalog table.
> > I fixed some typos and syntax errors that I had typed too quickly in my
> > first version.
> >
> > Thanks to this script, I can migrate the PostgreSQL instance successfully.
> > Yet, the `pg_largeobject` table is still considered TOASTed.
> >
> > I have the following behaviour:
> >
> > [...]
> >
> > --- But the pg_largeobject table is not accessible:
> > SELECT * from pg_largeobject order by loid desc limit 5;
> > ERROR:  could not open relation with OID 16619
> >
> > --- Same error when using largeobject functions:
> > SELECT lo_get(47232219);
> > ERROR:  could not open relation with OID 16619
> >
> > --- No TOAST reference into pg_depend for pg_largobject
> > SELECT * from pg_depend where 2613 in (objsubid, refobjid);
> >  classid | objid | objsubid | refclassid | refobjid | refobjsubid |
> deptype
> >
> -+---+--++--+-+-
> >0 | 0 |0 |   1259 | 2613 |   0 | p
> >
> > --- As for OID 16619
> > SELECT * from pg_depend where 16619 in (objsubid, refobjid);
> >  classid | objid | objsubid | refclassid | refobjid | refobjsubid |
> deptype
> >
> -+---+--++--+-+-
> > ```
> >
> > > Is there another catalog table where the TOAST reference can be
> located?
>
> Yes, in the table itself.  It seems like some values in pg_largeobject
> were stored in the TOAST table after all.
>
I can empty the TOAST table without altering access to the data (through
VACUUM FULL).

```
-- Check before VACUUM
WITH last_loid(loid) AS (SELECT distinct loid FROM pg_largeobject order by
loid desc limit 5)
select loid, length(lo_get(loid)) from last_loid;
  loid  | length
+
 361314 |672
 361294 |  40672
 359321 |672
 359301 |  40672
 355170 |672

-- VACUUM the pg_largeobject table:
VACUUM FULL ANALYZE VERBOSE pg_largeobject;
INFO:  vacuuming "pg_catalog.pg_largeobject"
INFO:  "pg_largeobject": found 0 removable, 12393 nonremovable row versions
in 120 pages
DETAIL:  0 dead row versions cannot be removed yet.
CPU 0.10s/0.29u sec elapsed 0.61 sec.
INFO:  analyzing "pg_catalog.pg_largeobject"
INFO:  "pg_largeobject": scanned 114 of 114 pages, containing 12393 live
rows and 0 dead rows; 12393 rows in sample, 12393 estimated total rows
VACUUM
Time: 675.114 ms

-- TOAST is now empty (0 tuples)
SELECT oid, relname, relnamespace, relfilenode, reltoastrelid, relpages,
(relpages*8/1024)::int as mb_size, reltuples::int, relkind
from pg_class
where relname like 'pg_toast_2613%'
or relname like 'pg_largeobject%' order by relname;

  oid  |  relname  | relnamespace | relfilenode |
reltoastrelid | relpages | mb_size | reltuples | relkind
---+---+--+-+---+--+-+---+-
  2613 | pg_largeobject|   11 |  369726 |
  18172 |  114 |   0 | 12393 | r
  2683 | pg_largeobject_loid_pn_index  |   11 |  369732 |
  0 |   36 |   0 | 12393 | i
  2995 | pg_largeobject_metadata   |   11 |2995 |
  0 |2 |   0 |   181 | r
  2996 | pg_largeobject_metadata_oid_index |   11 |2996 |
  0 |2 |   0 |   181 | i
 18172 | pg_toast_2613 |   99 |  369729 |
  0 |0 |   0 | 0 | t
 18174 | pg_toast_2613_index   |   99 |  369731 |
  0 |1 |   0 | 0 | i
(6 rows)

-- The 5 last largeObjects are still available
WITH last_loid(loid) AS (SELECT distinct loid FROM pg_largeobject order by
loid desc limit 5)
>select loid, length(lo_get(loid)) from last_loid;
  loid  | length
+
 361314 |672
 361294 |  40672
 359321 |672
 359301 |  40672
 355170 |672
```

None of the loids in the pg_largeobject table have a corresponding chunk_id
in the TOAST table.

```
-- ID existing in the 2 tables (HEAP & TOAST)
WITH
  plo(id,count_rows) AS (SELECT loid, count(*) FROM pg_largeobject GROUP BY
loid),
  pt2(id,count_rows) AS (SELECT chunk_id, count(*) FROM
pg_toast.pg_toast_2613 GROUP BY chunk_id)
SELECT count(*)
FROM plo
  INNER JOIN pt2
ON plo.id = pt2.id;
 count
---
 0
```

I have no record of the TOAST link in the `pg_depend` table:

```

Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-29 Thread Thomas Boussekey
psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "UPDATE
pg_class SET reltoastrelid = 0 WHERE oid = 2613;" | tee -a "${LOG_REMOVE}"

  # Step 5: Drop pg_toast_2613% objects
  echo -e "Change pg_toast_2613 type to relation" | tee -a "${LOG_REMOVE}"
  psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "UPDATE
pg_class SET relkind = 'r' WHERE relname = 'pg_toast_2613';" | tee -a
"${LOG_REMOVE}"

  echo -e "Delete pg_depend link between pg_toast_2613 and pg_largeobject"
| tee -a "${LOG_REMOVE}"
  psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "DELETE
FROM pg_depend WHERE classid = 1259 AND refclassid = 1259 AND refobjid =
2613 AND objid = 'pg_toast.pg_toast_2613'::regclass;" | tee -a
"${LOG_REMOVE}"

  echo -e "Delete pg_depend link between pg_toast_2613 and ---MISSING
OBJECT---" | tee -a "${LOG_REMOVE}"
  psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "DELETE
FROM pg_depend WHERE classid = 1259 AND refclassid = 1259 AND objid =
'pg_toast.pg_toast_2613'::regclass AND refobjsubid not in (select oid from
pg_class);" | tee -a "${LOG_REMOVE}"

  echo "allow_system_table_mods=on" >>"${postgresql_conf_file}"
  systemctl restart postgresql-9.5.service

  echo -e "Drop relation pg_toast_2613" | tee -a "${LOG_REMOVE}"
  psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "DROP TABLE
pg_toast.pg_toast_2613;" | tee -a "${LOG_REMOVE}"

  sed -i '/^allow_system_table_mods=on/d' ${postgresql_conf_file}
  systemctl restart postgresql-9.5.service

  # Refresh value of variable toast_count
  toast_count="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc
-c "select count(*) from pg_class where relname = 'pg_toast_2613';")"
fi

if [[ "${toast_count}" == "0" ]]; then
  echo -e "No TOAST table pg_toast_2613" | tee -a "${LOG_REMOVE}"
fi
```


Le ven. 21 août 2020 à 18:59, Thomas Boussekey 
a écrit :

>
>
> Le ven. 21 août 2020 à 16:45, Laurenz Albe  a
> écrit :
>
>> On Fri, 2020-08-21 at 15:46 +0200, Thomas Boussekey wrote:
>> > Le ven. 21 août 2020 à 15:10, Laurenz Albe 
>> a écrit :
>> > > On Fri, 2020-08-21 at 14:00 +0200, Thomas Boussekey wrote:
>> > > > Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on
>> a PostgreSQL instance when I have
>> > > >  an existing table `pg_toast_2613` into my application database.
>> > > >
>> > > > The upgrade process fails with the following error:
>> > > >
>> > > > ```
>> > > > No match found in new cluster for old relation with OID 16619 in
>> database "mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for
>> "pg_catalog.pg_largeobject"
>> > > > No match found in new cluster for old relation with OID 16621 in
>> database "mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on
>> "pg_toast.pg_toast_2613" which is the TOAST table for
>> > > > "pg_catalog.pg_largeobject"
>> > > > ```
>> > > >
>> > > > The `pg_upgrade` command fails when I have the table
>> `pg_toast_2613` that exists, even if it is empty.
>> > > > I read the PostgreSQL documentation, and I didn't find when the
>> pg_largeobject table needs to be toasted.
>> > > >  I thought it might be linked with records' size, but my queries
>> below don't correlate that!
>> > >
>> > > Indeed, system tables have no TOAST tables in PostgreSQL, so I wonder
>> > > how your "pg_largeobject" table could have grown one.
>> > >
>> > > Did you do any strange catalog modifications?
>> >
>> > Several years before I arrived in this company, the `pg_largeobject`
>> table had been moved to a dedicated tablespace located on a low-IOPS
>> mechanical disk.
>> > One of my first projects when I started working in the company was to
>> move the `pg_largeobject` table back to the default system tablespace.
>> > This might be a side-effect of the migration.
>>
>> I just tried that on v12, and it didn't create a TOAST table.
>>
>> But obviously there is/was a bug somewhere.
>>
>> > > The safest way would be to upgrade with pg_dumpall/psql.
>> >
>> > The `pg_dumpall` command will also copy the content and the existence
>> of the `pg_toast_2613` table, isn't it?
>> > It might generate errors at the execution on the new instance?
>> > Moreover, it will generate a lar

Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Thomas Boussekey
Le ven. 21 août 2020 à 16:45, Laurenz Albe  a
écrit :

> On Fri, 2020-08-21 at 15:46 +0200, Thomas Boussekey wrote:
> > Le ven. 21 août 2020 à 15:10, Laurenz Albe  a
> écrit :
> > > On Fri, 2020-08-21 at 14:00 +0200, Thomas Boussekey wrote:
> > > > Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a
> PostgreSQL instance when I have
> > > >  an existing table `pg_toast_2613` into my application database.
> > > >
> > > > The upgrade process fails with the following error:
> > > >
> > > > ```
> > > > No match found in new cluster for old relation with OID 16619 in
> database "mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for
> "pg_catalog.pg_largeobject"
> > > > No match found in new cluster for old relation with OID 16621 in
> database "mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on
> "pg_toast.pg_toast_2613" which is the TOAST table for
> > > > "pg_catalog.pg_largeobject"
> > > > ```
> > > >
> > > > The `pg_upgrade` command fails when I have the table `pg_toast_2613`
> that exists, even if it is empty.
> > > > I read the PostgreSQL documentation, and I didn't find when the
> pg_largeobject table needs to be toasted.
> > > >  I thought it might be linked with records' size, but my queries
> below don't correlate that!
> > >
> > > Indeed, system tables have no TOAST tables in PostgreSQL, so I wonder
> > > how your "pg_largeobject" table could have grown one.
> > >
> > > Did you do any strange catalog modifications?
> >
> > Several years before I arrived in this company, the `pg_largeobject`
> table had been moved to a dedicated tablespace located on a low-IOPS
> mechanical disk.
> > One of my first projects when I started working in the company was to
> move the `pg_largeobject` table back to the default system tablespace.
> > This might be a side-effect of the migration.
>
> I just tried that on v12, and it didn't create a TOAST table.
>
> But obviously there is/was a bug somewhere.
>
> > > The safest way would be to upgrade with pg_dumpall/psql.
> >
> > The `pg_dumpall` command will also copy the content and the existence of
> the `pg_toast_2613` table, isn't it?
> > It might generate errors at the execution on the new instance?
> > Moreover, it will generate a large downtime
>
> No, pg_dumpall will not duplicate that strange TOAST table.
> It would be the only safe way to upgrade.
>
> If you can ascertain that the TOAST table is empty and you
> like to live dangerous, you can try:
>
> UPDATE pg_class SET reltoastrelid = 0 WHERE oid = 2613;
> UPDATE pg_class SET relkind = 'r' WHERE relname = 'pg_toast_2613';
> DELETE FROM pg_depend WHERE classid = 1259 AND refclassid = 1259 AND
> refobjid = 2613 AND objid = 'pg_toast.pg_toast_2613'::regclass;
> DROP TABLE pg_toast.pg_toast_2613;
>
Thanks Laurenz & Tom for your valuable information.

I wrote this Bash script to remove the TOAST table, in case it helps anyone:

```sh
#!/usr/bin/env bash
#

set -euo pipefail

database_name="xxx"
postgresql_conf_file="/xxx/postgresql.conf"

# Step 1: check if table pg_toast_2613 exists
toast_count="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc
-c "select count(*) from pg_class where relname = 'pg_toast_2613';")"
echo -e "TOAST exists ::${toast_count}"

if [[ "${toast_count}" == "1" ]]; then
  # Step 2: Check if table pg_toast_2613 has rows and pages
  toast_pages="$(psql -U postgres --dbname=${database_name} -At --no-psqlrc
-c "select relpages from pg_class where relname = 'pg_toast_2613';" )"
  toast_tuples="$(psql -U postgres --dbname=${database_name} -At
--no-psqlrc -c "select reltuples from pg_class where relname =
'pg_toast_2613';" )"

  echo -e "TOAST content pages ::${toast_pages}:: tuples ::${toast_tuples}"

  # Step 3 OPTIONAL: vacuum full pg_largeobject if needed
  if [[ "${toast_tuples}" -gt "0" ]]; then

echo -e "Start of vacuum"
psql -U postgres --dbname=${database_name} -At --no-psqlrc -c "VACUUM
FULL VERBOSE pg_largeobject;"
echo -e "End of vacuum"

## After VACUUM post-check
toast_pages="$(psql -U postgres --dbname=${database_name} -At
--no-psqlrc -c "select relpages from pg_class where relname =
'pg_toast_2613';" )"
toast_tuples="$(psql -U postgres --dbname=${database_name} -At
--no-psqlrc -c "select reltuples from pg_class where relname =
'pg_toast_2613';" )"


Fwd: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Thomas Boussekey
-- Forwarded message -
De : Thomas Boussekey 
Date: ven. 21 août 2020 à 15:37
Subject: Re: When are largobject records TOASTed into pg_toast_2613?
To: Laurenz Albe 


Le ven. 21 août 2020 à 15:10, Laurenz Albe  a
écrit :

> On Fri, 2020-08-21 at 14:00 +0200, Thomas Boussekey wrote:
> > Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a
> PostgreSQL instance when I have
> >  an existing table `pg_toast_2613` into my application database.
> >
> > The upgrade process fails with the following error:
> >
> > ```
> > No match found in new cluster for old relation with OID 16619 in
> database "mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for
> "pg_catalog.pg_largeobject"
> > No match found in new cluster for old relation with OID 16621 in
> database "mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on
> "pg_toast.pg_toast_2613" which is the TOAST table for
> > "pg_catalog.pg_largeobject"
> > ```
> >
> > The `pg_upgrade` command fails when I have the table `pg_toast_2613`
> that exists, even if it is empty.
> > I read the PostgreSQL documentation, and I didn't find when the
> pg_largeobject table needs to be toasted.
> >  I thought it might be linked with records' size, but my queries below
> don't correlate that!
>
> Indeed, system tables have no TOAST tables in PostgreSQL, so I wonder
> how your "pg_largeobject" table could have grown one.
>
Several years before I arrived in this company, the `pg_largeobject` table
had been moved to a dedicated tablespace located on a low-IOPS mechanical
disk.
One of my first projects when I started working in the company was to move
the `pg_largeobject` table back to the default system tablespace.
This might be a side-effect of the migration.

>
> Did you do any strange catalog modifications?
>
> The safest way would be to upgrade with pg_dumpall/psql.
>
The `pg_dumpall` command will also copy the content and the existence of
the `pg_toast_2613` table, won't it?
It might generate errors when it is executed on the new instance?
Moreover, it will generate a long downtime.

> That should get rid of that data corruption.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>


Re: When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Thomas Boussekey
Le ven. 21 août 2020 à 14:00, Thomas Boussekey 
a écrit :

> Hello all,
>
> Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a
> PostgreSQL instance when I have an existing table `pg_toast_2613` into my
> application database.
>
> The upgrade process fails with the following error:
>
> ```
> No match found in new cluster for old relation with OID 16619 in database
> "mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for
> "pg_catalog.pg_largeobject"
> No match found in new cluster for old relation with OID 16621 in database
> "mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on
> "pg_toast.pg_toast_2613" which is the TOAST table for
> "pg_catalog.pg_largeobject"
> ```
>
> The `pg_upgrade` command fails when I have the table `pg_toast_2613` that
> exists, even if it is empty.
> I read the PostgreSQL documentation, and I didn't find when the
> pg_largeobject table needs to be toasted. I thought it might be linked with
> records' size, but my queries below don't correlate that!
>
> I tried to dig into the data and found the following elements:
> * a records exists ONLY into one table (either the pg_largobject table or
> the pg_toast_2613, but not BOTH)
> * The `chunk_id` present into the `pg_toast_2613` table doesn't represent
> real large objects (impossible to query their size)
> * The `chunk_id` present into the `pg_toast_2613` table are not linked to
> existing documents into our applicative tables.
>
> I had a look on my 200+ production & test environments:
> * on half of these instances, the `pg_toast_2613` table doesn't exist
> * on 10% of them, the `pg_toast_2613` table exists and is empty
>
> Here are the points, I want to clarify:
> - What is the aim of the `pg_toast_2613` table?
> - Does it contain REAL large objects or other useful data?
> - Is there a workaround to make the `pg_upgrade` successful?
>
> Thanks in advance for your help,
> Thomas
>
>
> # Appendix
>
> ```sql
> -- Getting the 30 first items of BOTH tables
> # SELECT loid, count(*) from pg_largeobject group by loid order by 1 limit
> 30;
>   loid  | count
> +---
>   24567 | 1
>   24588 | 1
>   24608 | 1
>   24635 | 1
>   24648 | 1
>   24699 | 1
>   27505 | 1
>   84454 |32
>   89483 | 1
>  109676 |34
>  109753 |34
>  109821 |34
>  109855 | 2
>  137150 | 6
>  141236 |29
>  141265 | 1
>  156978 |29
>  157036 |29
>  157065 | 2
>  161835 |29
>  161864 | 1
>  166275 |29
>  166333 |29
>  166404 |29
>  166439 | 2
>  171487 |29
>  171516 | 1
>  175825 |29
>  175854 | 1
>  180171 |29
> (30 rows)
>
> # SELECT chunk_id, count(*) from pg_toast.pg_toast_2613 group by chunk_id
> order by 1 limit 30;
>  chunk_id | count
> --+---
> 84455 | 2
> 84456 | 2
> 84457 | 2
> 84458 | 2
> 84459 | 2
> 84460 | 2
> 84461 | 2
> 84462 | 2
> 84463 | 2
> 84464 | 2
> 84465 | 2
> 84466 | 2
> 84467 | 2
> 84468 | 2
> 84469 | 2
> 84470 | 2
> 84471 | 2
> 84472 | 2
> 84473 | 2
> 84474 | 2
> 84475 | 2
> 84476 | 2
> 84477 | 2
> 84478 | 2
> 84479 | 2
> 84480 | 2
> 84481 | 2
> 84482 | 2
> 84483 | 2
> 84484 | 2
> (30 rows)
>
> -- Searching IDs 84454, 84455 into applicative table
> # SELECT * from mirakl_lob where blob in (84454, 84455);
>  mirakl_document_id | blob
> +---
>2859 | 84454
>
> SELECT length(lo_get (84455));
> ERROR:  large object 84455 does not exist
>
> SELECT length(lo_get (84454));
>  length
> 
>   64080
>
> ```
>

Additional information:

I restored a basebackup of an instance containing the `pg_toast_2613`
table.

At first glance, the TOAST table is 30 times the size of pg_largeobject
(see relpages in the first query below).
I tried to VACUUM FULL the `pg_largeobject` table, and the rows in the
`pg_toast_2613` table vanished!

Could it be a suitable workaround to apply the following logic in my
migration process?

* If the `pg_toast_2613` table exists
- Perform `VACUUM FULL VERBOSE pg_largeobject`
- If `SELECT COUNT(*) FROM pg_toast_2613;` = 0
- unTOAST the `pg_largeobject` table (if a procedure exists)

```sql
# SELECT oid, relname, relnamespace, relfilenode, reltoastrelid, relpages,
reltuples
>from 

When are largobject records TOASTed into pg_toast_2613?

2020-08-21 Thread Thomas Boussekey
Hello all,

Working on a PostgreSQL 9.5 to 12 upgrade, I encounter problems on a
PostgreSQL instance when I have an existing table `pg_toast_2613` into my
application database.

The upgrade process fails with the following error:

```
No match found in new cluster for old relation with OID 16619 in database
"mirakl_db": "pg_toast.pg_toast_2613" which is the TOAST table for
"pg_catalog.pg_largeobject"
No match found in new cluster for old relation with OID 16621 in database
"mirakl_db": "pg_toast.pg_toast_2613_index" which is an index on
"pg_toast.pg_toast_2613" which is the TOAST table for
"pg_catalog.pg_largeobject"
```

The `pg_upgrade` command fails when the table `pg_toast_2613` exists, even
if it is empty.
I read the PostgreSQL documentation, and I didn't find when the
pg_largeobject table needs to be toasted. I thought it might be linked to
record size, but my queries below don't corroborate that!

I tried to dig into the data and found the following elements:
* a record exists ONLY in one table (either the pg_largeobject table or
pg_toast_2613, but not BOTH)
* the `chunk_id` values present in the `pg_toast_2613` table don't represent
real large objects (it is impossible to query their size)
* the `chunk_id` values present in the `pg_toast_2613` table are not linked
to existing documents in our application tables.

I had a look at my 200+ production & test environments:
* on half of these instances, the `pg_toast_2613` table doesn't exist
* on 10% of them, the `pg_toast_2613` table exists and is empty

Here are the points I want to clarify:
- What is the purpose of the `pg_toast_2613` table?
- Does it contain REAL large objects or other useful data?
- Is there a workaround to make `pg_upgrade` succeed?

Thanks in advance for your help,
Thomas


# Appendix

```sql
-- Getting the 30 first items of BOTH tables
# SELECT loid, count(*) from pg_largeobject group by loid order by 1 limit
30;
  loid  | count
+---
  24567 | 1
  24588 | 1
  24608 | 1
  24635 | 1
  24648 | 1
  24699 | 1
  27505 | 1
  84454 |32
  89483 | 1
 109676 |34
 109753 |34
 109821 |34
 109855 | 2
 137150 | 6
 141236 |29
 141265 | 1
 156978 |29
 157036 |29
 157065 | 2
 161835 |29
 161864 | 1
 166275 |29
 166333 |29
 166404 |29
 166439 | 2
 171487 |29
 171516 | 1
 175825 |29
 175854 | 1
 180171 |29
(30 rows)

# SELECT chunk_id, count(*) from pg_toast.pg_toast_2613 group by chunk_id
order by 1 limit 30;
 chunk_id | count
--+---
84455 | 2
84456 | 2
84457 | 2
84458 | 2
84459 | 2
84460 | 2
84461 | 2
84462 | 2
84463 | 2
84464 | 2
84465 | 2
84466 | 2
84467 | 2
84468 | 2
84469 | 2
84470 | 2
84471 | 2
84472 | 2
84473 | 2
84474 | 2
84475 | 2
84476 | 2
84477 | 2
84478 | 2
84479 | 2
84480 | 2
84481 | 2
84482 | 2
84483 | 2
84484 | 2
(30 rows)

-- Searching IDs 84454, 84455 into applicative table
# SELECT * from mirakl_lob where blob in (84454, 84455);
 mirakl_document_id | blob
+---
   2859 | 84454

SELECT length(lo_get (84455));
ERROR:  large object 84455 does not exist

SELECT length(lo_get (84454));
 length

  64080

```


Re: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-24 Thread Thomas Boussekey
Le mer. 24 juin 2020 à 16:24, Jim Hurne  a écrit :

> "Daniel Verite"  wrote on 06/24/2020 10:08:27 AM:
> > Unfortunately it [pg_repack] can't work with pg_largeobject
>
> That is unfortunate, and potentially leaves us in a difficult spot.
>
> Is it possible to configure PosgreSQL to use a user table for large
> objects instead of a system table?
>
> We're finding it to be especially difficult to maintain the pg_largeobject
> system table (tweak autovacuum settings, manually cleanup with something
> like pg_repack, etc), particularly since our PosrgreSQL instances are
> hosted and controlled by another team.
>
You can consider moving your large objects to the BYTEA storage type.
Data will be stored in user tables (and can be split across multiple
tables, in order to ease functional splitting and maintenance tasks, if
needed).
Data access is slightly different.

In the following link, you have a comparison grid:
https://www.microolap.com/products/connectivity/postgresdac/help/tipsandtricks_byteavsoid.htm

And you also have this article
https://blog-postgresql.verite.pro/2017/11/15/large-objects-bytea.html from
Daniel Vérité, in French

Regards,
>
> Jim Hurne
>
>
>
>
Regards,
Thomas


Re: SQL pretty printer?

2019-10-28 Thread Thomas Boussekey
Hello all,

I use this github project in order to highlight SQL code on my slides:
https://romannurik.github.io/SlidesCodeHighlighter/

It doesn't highlight 100% of Postgres SQL code, but it's a decent starting point!

Hope this helps,
Thomas

Le lun. 28 oct. 2019 à 14:37, Ron  a écrit :

> But does it reformat the text?
>
> On 10/28/19 8:00 AM, Basques, Bob (CI-StPaul) wrote:
> > All,
> >
> > Take a look at the VI(m) editor.  There is a Syntax mode for
> highlighting different file types, as well as tools for exporting the
> highlighted text to HTML.  I have yet to find a file type that it doesn’t
> already handle.
> >
> > Works really slick.
> >
> > bobb
> >
> >
> >
> >> On Oct 27, 2019, at 5:21 PM, stan  wrote:
> >>
> >> Think Before You Click: This email originated outside our organization.
> >>
> >>
> >> I have a presentation later in the week, and i plan on printing out some
> >> files containing SQL commands. I have used some "pretty printers" in the
> >> past for things like Perl scripts. What I am thinking of s something
> that
> >> bolds keywords, handles page breaks, and does some formatting.
> >>
> >> Development being done in an Ubuntu Linux environment.
> >>
> >> Anyone have a recommendation?
> >
>
> --
> Angular momentum makes the world go 'round.
>
>
>


Customizing the PSQL prompt with environment variables using value-dependant coloring

2019-04-11 Thread Thomas Boussekey
Hello everyone,

I'm trying to create a custom psql prompt with a part of it colored
according to its value.
To be more explicit, we are using physical replication and I'd like the
prompt to display "MASTER" in red or "slave" in green when we are connected
to the database server.

# Current configuration

In our current configuration, we are using **bash_profile** to create
functions to get context dependent information:

```sh
# Utility functions used to build the psql prompt
sql_host() {
echo ${HOSTNAME%%.*}
}

sql_replication_status() {
( [ -f ${PGDATA}/recovery.conf ] && echo "slave" ) || echo "master"
}

export -f sql_host sql_replication_status
```

And the psql prompt is built from these variables in the
**~/.psqlrc** file:

```sh
\set PROMPT1 '%`sql_host`
(%[%033[31;1m%]%`sql_replication_status`%[%033[0m%])=#'
```

It works fine and uses the C coloring method described into the PSQL
documentation:
https://www.postgresql.org/docs/9.5/app-psql.html#APP-PSQL-PROMPTING

# Tests performed

I tried to create new functions in the **bash_profile** file that send
their result in a "colored formatted" way, using both shell and psql
syntax; here is an extract of my tests:

```sh
# Yellow variants are referenced by the "slave" branch of the functions below
export RED_pg="%[%033[1;31m%]"
export YELLOW_pg="%[%033[1;33m%]"
export NOCOLOR_pg="%[%033[0m%]"

export RED_s="\033[1;31m"
export YELLOW_s="\033[1;33m"
export NOCOLOR_s="\033[0m"

export RED_e=`echo -e '\033[1;31m'`
export YELLOW_e=`echo -e '\033[1;33m'`
export NOCOLOR_e=`echo -e '\033[0m'`

sql_prompt_colored_s() {
case "`sql_replication_status`" in
"slave" )
  echo "${YELLOW_s}`sql_replication_status`${NOCOLOR_s}";;
"MASTER" )
  echo "${RED_s}`sql_replication_status`${NOCOLOR_s}";;
  esac
}
sql_prompt_colored_e() {
case "`sql_replication_status`" in
"slave" )
  echo "${YELLOW_e}`sql_replication_status`${NOCOLOR_e}";;
"MASTER" )
  echo "${RED_e}`sql_replication_status`${NOCOLOR_e}";;
  esac
}
sql_prompt_colored_pg() {
case "`sql_replication_status`" in
"slave" )
  echo "${YELLOW_pg}`sql_replication_status`${NOCOLOR_pg}";;
"MASTER" )
  echo "${RED_pg}`sql_replication_status`${NOCOLOR_pg}";;
  esac
}

export -f sql_prompt_colored_s sql_prompt_colored_e sql_prompt_colored_pg
```

I managed to get different outputs, like %[%033[1;31m%]MASTER%[%033[0m%]
for the function `sql_prompt_colored_pg`.
Yet, when I define a prompt using this function `sql_prompt_colored_pg`,
for example:

```sh
\set PROMPT1 '%`sql_host` (%`sql_prompt_colored_pg`)=#'
```

I get the following prompt:

```
mydatabase (%[%033[1;31m%]MASTER%[%033[0m%])=#
```

And when I try to put this exact prompt value into the PROMPT1:

```
\set PROMPT1 '%`sql_host` (%[%033[1;31m%]MASTER%[%033[0m%])=#'
```

I get the expected prompt:

```
mydatabase (MASTER)=#
```
With MASTER in **red colour**.

# Get a working configuration

## Is there another way to make it work?

I tried other ways of implementing this, with psql variables for example,
but I didn't get a better result.

> Do you know a way to perform this context coloring?

## Analysis of the PSQL source code

I think that the file where everything happens is this one:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/bin/psql/prompt.c;hb=HEAD#

I'm not at ease with the C language, so it is very hard for me to read.
My intuition is that the readline escape handling is applied before the
shell variable interpolation; is that correct?

> Is the message in this file, **(those will not be rescanned for more
escape sequences!)**, there to tell that no further interpolation can be
performed on either psql or shell variables?


Thanks in advance for your feedback and/or workarounds,
Have a nice day,
kind regards,
Thomas


Re: adding more space to the existing 9.6 cluster

2019-02-21 Thread Thomas Boussekey
Hello all,

If I were in your situation, I would analyze whether only a part of the 36
databases could be moved to the new disk:
* either I would move some of the databases to the new disk,
* or, for the largest databases, I would consider working with a
multiple-tablespace configuration, using the command ALTER TABLE <> SET
TABLESPACE <>; (link to the documentation:
https://www.postgresql.org/docs/9.6/sql-altertable.html) to move some
tables to the new disk. You can analyze, depending on your disk and DB
configurations, whether it's better to move the very large tables or the
intensively used tables (see the sketch below).
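
A minimal sketch of that second option; paths, table and database names are
hypothetical:

```sql
CREATE TABLESPACE tablespace01 LOCATION '/data/postgresql/9.6/main01';

-- Move a single large or intensively used table (the table is locked while
-- it is physically rewritten on the new disk):
ALTER TABLE big_history_table SET TABLESPACE tablespace01;

-- Or change a database's default tablespace (this physically moves the
-- tables stored in the old default tablespace; the database must not be in
-- use during the operation):
ALTER DATABASE db_name SET TABLESPACE tablespace01;
```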

I hope I'm clear enough!
Feel free to ask for clarification or add new elements to go further on!

Hope this helps,
Have a nice day,
Thomas

Le mer. 20 févr. 2019 à 21:37, Ron  a écrit :

> On 2/19/19 5:02 PM, Julie Nishimura wrote:
>
> Hello, we are almost out of space on our main data directory, and about to
> introduce new directory to our cluster. We cannot use multiple physical
> disks as a single volume, so we are thinking about creation new tablespace.
> Our current data_directory shows as follows:
> /data/postgresql/9.6/main
> postgres=# SELECT spcname FROM pg_tablespace;
>   spcname
> 
>  pg_default
>  pg_global
> (2 rows)
>
> We also have 36 existing databases on this cluster.
> If we add new directory, will it be enough to execute the following
> commands in order to force new data there:
>
> CREATE TABLESPACE tablespace01 LOCATION '/data/postgresql/9.6/main01';
>
> ALTER DATABASE db_name SET TABLESPACE
>
> tablespace01
>
> Do I need to repeat it for all our existing databases?
>
>
> Since the command is ALTER DATABASE , it seems that yes you
> have to do it for all of them.  A simple bash script should knock that out
> quickly.
>
>
> Should I change our "template*" dbs as well?
>
>
> If you want *new* databases to automatically go to tablespace01 then
> alter template1.
>
> Do I need to do something else?
>
>
> Maybe, depending on the size of your databases, and how much down time you
> can afford,
>
>
> https://www.postgresql.org/docs/9.6/sql-alterdatabase.html "This command
> physically moves any tables or indexes in the database's old default
> tablespace to the new tablespace."
>
> For example, our multi-TB databases are so big that moving it all at once
> is unreasonably slow.  And a failure might leave the db is a bad spot.
> Thus, I'd move one table at a time, a few per outage.
>
> Naturally, YMMV.
>
> Thank you for your advises.
>
>
>
>
> --
> Angular momentum makes the world go 'round.
>


Re: Question about partition table

2018-10-28 Thread Thomas Boussekey
Hello Yuxia,

As time partitioning is natively available, I recommend you use it as is.

You can find below an example I used to sub-partition my transaction table
into quarterly partitions.
I have a repository table named miniexport.tby_part_followup to drive
partition creation.

FOR live_part_table IN
  SELECT id_business_unit,
date_trunc ('quarter', tpf_dt_min) as min_timestamp,
date_trunc ('quarter', tpf_dt_max + interval '3 months') as
max_timestamp
FROM miniexport.tby_part_followup
WHERE tpf_bool_part_BU_Quarter_ok = false
ORDER BY id_business_unit
LIMIT bi_max_bu LOOP


--RAISE NOTICE 'Creating partitions into schema posdata2 for BU %
...', live_part_table.id_business_unit::varchar;
EXECUTE 'CREATE TABLE posdata2.transaction_p' ||
live_part_table.id_business_unit::varchar || ' PARTITION OF
posdata2.transaction FOR VALUES in (' ||
live_part_table.id_business_unit::varchar || ') PARTITION BY range
(transaction_date);';

-- Create quarter partitions
dt_curr_timestamp := live_part_table.min_timestamp;
WHILE dt_curr_timestamp <= live_part_table.max_timestamp LOOP
  -- Set running variables
  SELECT EXTRACT(YEAR FROM dt_curr_timestamp)  INTO int_curr_year;
  SELECT EXTRACT(QUARTER FROM dt_curr_timestamp) INTO
int_curr_quarter;

  --RAISE NOTICE 'Creating SubPartition for BU %, QUARTER %',
live_part_table.id_business_unit::varchar, int_curr_year::varchar || 'Q' ||
int_curr_quarter::varchar;
  EXECUTE 'CREATE TABLE posdata2.transaction_p' ||
live_part_table.id_business_unit::varchar || '_' || int_curr_year::varchar
|| 'Q' || int_curr_quarter::varchar || ' PARTITION OF
posdata2.transaction_p' || live_part_table.id_business_unit::varchar || '
FOR VALUES FROM (''' || dt_curr_timestamp || ''') TO (''' ||
dt_curr_timestamp + interval '3 months' || ''')';
  EXECUTE 'CREATE UNIQUE INDEX transaction_p' ||
live_part_table.id_business_unit::varchar || '_' || int_curr_year::varchar
|| 'Q' || int_curr_quarter::varchar || '_idx01 ON posdata2.transaction_p'
|| live_part_table.id_business_unit::varchar || '_' ||
int_curr_year::varchar || 'Q' || int_curr_quarter::varchar || '
(id_transaction, id_business_unit);';

  -- Increment dt_curr_timestamp value
  dt_curr_timestamp := dt_curr_timestamp + interval '3 months';
END LOOP;

EXECUTE 'UPDATE miniexport.tby_part_followup SET
tpf_bool_part_BU_Quarter_ok = true where id_business_unit = ' ||
live_part_table.id_business_unit::varchar || ';';

a_count = a_count + 1;
--GET DIAGNOSTICS a_count = ROW_COUNT;
END LOOP;


Moreover, Sébastien Lardière wrote a tooling kit to manage time partitions:
https://github.com/slardiere/PartMgr

I hope this helps!
Regards,
Thomas

Le sam. 27 oct. 2018 à 01:05, Yuxia Qiu  a écrit :

> HI Dear PostgreSQL team,
>
>I have created a partition table as bellow:
>*CREATE TABLE* measurement_year_month (
>  logdate date not null,
>  peaktemp int,
>  unitsales int
> ) PARTITION BY RANGE (EXTRACT(YEAR FROM logdate), EXTRACT(MONTH FROM
>  logdate));
>
> so the content for this  column *partexprs* for this table  in
> pg_partitioned_table will be:
>
> ({FUNCEXPR :funcid 1384 :funcresulttype 701 :funcretset false
> :funcvariadic false :funcformat 0 :funccollid 0 :inputcollid 100 :args
> ({CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1
> :constbyval false :constisnull false :location 122 :constvalue 8 [ 32 0 0 0
> 121 101 97 114 ]} {VAR :varno 1 :varattno 1 :vartype 1082 :vartypmod -1
> :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 1 :location 132})
> :location 114} {FUNCEXPR :funcid 1384 :funcresulttype 701 :funcretset false
> :funcvariadic false :funcformat 0 :funccollid 0 :inputcollid 100 :args
> ({CONST :consttype 25 :consttypmod -1 :constcollid 100 :constlen -1
> :constbyval false :constisnull false :location 150 :constvalue 9 [ 36 0 0 0
> 109 111 110 116 104 ]} {VAR :varno 1 :varattno 1 :vartype 1082 :vartypmod
> -1 :varcollid 0 :varlevelsup 0 :varnoold 1 :varoattno 1 :location 162})
> :location 142})
>
> My question is: Can I get string value from this column? and how?
> In the end I want to have the bellow result:
>
>
> *Table name* *Partition
> information*
> measurement_year_month(EXTRACT(YEAR FROM logdate),
> EXTRACT(MONTH FROM  logdate))
>
>
> Your help is highly appreciated.
>
> Thanks,
> Yuxia
>
>
>


Re: using a plpgsql function argument as a table column.

2018-08-28 Thread Thomas Boussekey
Hello,

Yes, you can :-)

Using a function that returns a table, like in the following example:
http://www.postgresqltutorial.com/plpgsql-function-returns-a-table/

You will have to generate a dynamic SQL statement to query your table with
the appropriate column in the WHERE clause, like in this example:
https://stackoverflow.com/questions/12780275/dynamic-sql-query-in-postgres
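
A minimal sketch, assuming hypothetical tables t1 and t2 joined on an id
column, where the year column name is passed as an argument and injected
safely with format() and %I:

```sql
CREATE OR REPLACE FUNCTION select_year(year_column text)
RETURNS TABLE (cola text, colb text, colc text, year_value integer)
LANGUAGE plpgsql AS $$
BEGIN
    RETURN QUERY EXECUTE format(
        'SELECT t1.cola, t1.colb, t1.colc, t2.%I
           FROM t1
           JOIN t2 USING (id)
          WHERE t2.%I <> 0',
        year_column, year_column);
END;
$$;

-- Usage:
-- SELECT * FROM select_year('y2016');
```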

Hope this helps
Have a nice day
Thomas

Le mer. 29 août 2018 à 07:11, ss  a écrit :

> I have a table with many years as columns. y1976, y2077, .. ,
> y2019,y2020 I want to dynamically return a column from a function.
>
>
> select * from FUNCTION('y2016') .
>
> select t1.cola t1.colb, t1.colc, t2.y2016 from . Where t2.y2016 != 0;
>
> or if I select year y2012 I want FUNCTION('y2012')
>
> select t1.cola t1.colb, t1.colc, t2.y2012 from . Where t2.y2012 != 0;
>
>
> to generalize
>
> select * from FUNCTION( year_column )
>
> select t1.cola t1.colb, t1.colc, t2.year_column from . Where
> t2.year_column != 0;
>
> is it possible? if so how?
>
>
>


Re: Granting a user the privilege to create views?

2018-08-28 Thread Thomas Boussekey
Hello Olivier,

You can write an event trigger that prevents a specific user from creating
objects that are not views.

Link to the CREATE EVENT TRIGGER documentation:
https://www.postgresql.org/docs/current/static/sql-createeventtrigger.html
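
A minimal sketch, assuming a hypothetical role named view_creator. Note that
event triggers do not fire for superusers, and this simple version also
blocks other DDL for that role; refine the tag check as needed:

```sql
CREATE OR REPLACE FUNCTION enforce_views_only() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- tg_tag holds the command tag, e.g. 'CREATE TABLE' or 'CREATE VIEW'
    IF current_user = 'view_creator' AND tg_tag <> 'CREATE VIEW' THEN
        RAISE EXCEPTION 'role % may only create views (blocked: %)',
                        current_user, tg_tag;
    END IF;
END;
$$;

CREATE EVENT TRIGGER restrict_to_views
    ON ddl_command_start
    EXECUTE PROCEDURE enforce_views_only();
```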

Regards,
Thomas

Le mar. 28 août 2018 à 11:00, pavan95  a
écrit :

> Hi Olivier,
>
> Instead, if you grant the SELECT privilege on a table to a particular user,
> he will be able to create whatever views he wants.
>
> Regards,
> Pavan
>
>
>
> --
> Sent from:
> http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
>
>


Re: PG Sharding

2018-01-29 Thread Thomas Boussekey
Hello Matej,

I found some interesting implementations of Postgres-XL at Tencent (WeChat)
and Javelin. You can find video recordings of talks by IT people from these
companies. Moreover, I attended PGConf.EU in Warsaw in October, and the ESA
(European Space Agency) gave a lightning talk on their Gaïa project using
an 8-datanode cluster.

I agree with you about the SPOF for the master in the Citus architecture.
Yet, implementation is very easy, and it's an extension in Pg10! But I had
to drop many foreign keys in my data model to be able to load my tables
into Citus.
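
For illustration, a minimal sketch of that Citus setup; table and column
names are hypothetical, see the Citus documentation for the full workflow:

```sql
CREATE EXTENSION IF NOT EXISTS citus;

CREATE TABLE orders (
    order_id    bigint NOT NULL,
    customer_id bigint NOT NULL,
    amount      numeric
);

-- Shard the table across the worker nodes on customer_id:
SELECT create_distributed_table('orders', 'customer_id');
```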

Two years ago, I was looking for a partitioning extension: pg_partman was
mature, whereas pg_pathman was at version 0.4 and many issues in its GitHub
project were written in Cyrillic, and I'm French ;-)... So I took a closer
look at pg_partman.
I'm using pg_partman in production now.

2018-01-29 16:49 GMT+01:00 Matej <gma...@gmail.com>:

> Hi Thomas.
>
> Thanks.
>
> Also looked at those solutions:
> - PGXL Am a ltille afraid we would be the test dummies. Did not hear of
> many production installs.
> - Citus seems heavily limited scalability vise, because of the master node
> design.
>
> Regarding  partitioning we are considering ourselves pg_pathman. Was
> hoping on PG10 partitioning but currently not really many changes
> performance vise.
>
> Overall we are still considering manual APP/sharding as this seems to be
> the most scalable approach which least added latency. The builtin solutions
> seems to introduce extra lag and I am afraid of what to do when something
> goes wrong. then it's not a black box anymore and you have to study the
> details.
>
> For node loss we plan a master -slave setup, and there will not be so many
> cross shard queries.
>
> BR
> Matej
>
>
> 2018-01-29 16:15 GMT+01:00 Thomas Boussekey <thomas.bousse...@gmail.com>:
>
>> Hello,
>>
>> Facing the same situation, I'm considering 3 solutions:
>> - Sharding with postgres_xl (waiting for a Pg10 release)
>> - Sharding with citusdata (Release 7.2, compatible with Pg10 and
>> pg_partman, seems interesting)
>> - Partitioning with PG 10 native partitioning or pg_partman
>>
>> With colleagues, we have tested the 3 scenarios.
>> Sharding looks interesting, but you have to apprehend its behaviour in
>> case of node loss, or cross-node queries.
>>
>> Thomas
>>
>> 2018-01-29 15:44 GMT+01:00 Melvin Davidson <melvin6...@gmail.com>:
>>
>>>
>>>
>>> On Mon, Jan 29, 2018 at 9:34 AM, Matej <gma...@gmail.com> wrote:
>>>
>>>> Hi Everyone.
>>>>
>>>> We are looking at a rather large fin-tech installation. But as
>>>> scalability requirements are high we look at sharding of-course.
>>>>
>>>> I have looked at many sources for Postgresql sharding, but we are a
>>>> little confused as to shared with schema or databases or both.
>>>>
>>>>
>>>> So far our understanding:
>>>>
>>>> *SCHEMA.*
>>>>
>>>> PROS:
>>>> - seems native to PG
>>>> - backup seems easier
>>>> - connection pooling seems easier, as you can use same connection
>>>> between shard.
>>>>
>>>> CONS:
>>>> - schema changes seems litlle more complicated
>>>> - heard of backup and maintenance problems
>>>> - also some caching  problems.
>>>>
>>>> *DATABASE:*
>>>>
>>>> PROS:
>>>> - schema changes litlle easier
>>>> - backup and administration seems more robust
>>>>
>>>> CONS:
>>>> - heard of vacuum problems
>>>> - connection pooling is hard, as 100 shards would mean 100 pools
>>>>
>>>>
>>>> So what is actually the right approach? If anyone could  shed some
>>>> light on my issue.
>>>>
>>>> *Thanks*
>>>>
>>>>
>>>>
>>>
>>> *You might also want to consider GridSQL. IIRC it was originally
>>> developed by EnterpriseDB. I saw a demo of it a few years ago and it was
>>> quite impressive, *
>>> *but I've had no interaction with it since, so you will have to judge
>>> for yourself.*
>>>
>>>
>>> *https://sourceforge.net/projects/gridsql/?source=navbar
>>> <https://sourceforge.net/projects/gridsql/?source=navbar>*
>>>
>>> --
>>> *Melvin Davidson*
>>> I reserve the right to fantasize.  Whether or not you
>>> wish to share my fantasy is entirely up to you.
>>>
>>
>>
>


Re: PG Sharding

2018-01-29 Thread Thomas Boussekey
Hello,

Facing the same situation, I'm considering 3 solutions:
- Sharding with postgres_xl (waiting for a Pg10 release)
- Sharding with citusdata (Release 7.2, compatible with Pg10 and
pg_partman, seems interesting)
- Partitioning with PG 10 native partitioning or pg_partman

With colleagues, we have tested the 3 scenarios.
Sharding looks interesting, but you have to understand its behaviour in the
case of node loss or cross-node queries.

Thomas

2018-01-29 15:44 GMT+01:00 Melvin Davidson :

>
>
> On Mon, Jan 29, 2018 at 9:34 AM, Matej  wrote:
>
>> Hi Everyone.
>>
>> We are looking at a rather large fin-tech installation. But as
>> scalability requirements are high we look at sharding of-course.
>>
>> I have looked at many sources for Postgresql sharding, but we are a
>> little confused as to shared with schema or databases or both.
>>
>>
>> So far our understanding:
>>
>> *SCHEMA.*
>>
>> PROS:
>> - seems native to PG
>> - backup seems easier
>> - connection pooling seems easier, as you can use same connection between
>> shard.
>>
>> CONS:
>> - schema changes seems litlle more complicated
>> - heard of backup and maintenance problems
>> - also some caching  problems.
>>
>> *DATABASE:*
>>
>> PROS:
>> - schema changes litlle easier
>> - backup and administration seems more robust
>>
>> CONS:
>> - heard of vacuum problems
>> - connection pooling is hard, as 100 shards would mean 100 pools
>>
>>
>> So what is actually the right approach? If anyone could  shed some light
>> on my issue.
>>
>> *Thanks*
>>
>>
>>
>
> *You might also want to consider GridSQL. IIRC it was originally developed
> by EnterpriseDB. I saw a demo of it a few years ago and it was quite
> impressive, *
> *but I've had no interaction with it since, so you will have to judge for
> yourself.*
>
>
> *https://sourceforge.net/projects/gridsql/?source=navbar
> *
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>