Re: vacuum is time consuming

2021-02-02 Thread Martín Marqués
Hi,

> > Is there any other option to increase the speed of vacuum?
>
> For autovacuum, decrease "autovacuum_vacuum_cost_delay".

He mentioned in the original message that his problem was with a
global VACUUM after upgrading, so cost_delay should be zero by
default (manual VACUUM runs unthrottled unless vacuum_cost_delay is set).
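
Both settings can be checked from any session:

SHOW vacuum_cost_delay;             -- applies to manual VACUUM; 0 by default
SHOW autovacuum_vacuum_cost_delay;  -- the setting the advice above lowers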

Regards,

-- 
Martín Marqués
It’s not that I have something to hide,
it’s that I have nothing I want you to see




Re: vacuum is time consuming

2021-02-01 Thread Martín Marqués
Hi Atul,

> We have a DB of 400GB; server RAM is 160GB. CPU consumption is only 1-3%.
>
> We have configured maintenance_work_mem to 10GB and restarted the
> postgres service.

Just wanted to mention that VACUUM has a hardcoded upper limit of 1GB
on the memory it uses to collect dead tuple IDs, so setting
maintenance_work_mem or autovacuum_work_mem to anything bigger than
that leaves them effectively at 1GB as far as vacuuming is concerned.

There have been a few attempts over the past few years at lifting that
restriction.
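
For example, a larger setting is accepted, but the extra memory goes
unused by VACUUM (table name illustrative):

SET maintenance_work_mem = '10GB';  -- accepted, but VACUUM still caps at 1GB
VACUUM VERBOSE some_big_table;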

> We have done an upgrade from postgres 9.5 to 9.6. When we run VACUUM
> ANALYZE on the database, it takes more than 5 hours and is still running.
>
> Any suggestions for making the process (VACUUM ANALYZE) faster are welcome.

Yes, upgrade to PG13.

Kind regards, Martín


-- 
Martín Marqués
It’s not that I have something to hide,
it’s that I have nothing I want you to see




Re: undefined reference to `pg_snprintf when we upgraded libpq version from 10.3 to 12.3

2020-12-16 Thread Martín Marqués
Hi,

Did you also upgrade the libpq devel package?

To start, what distribution are you using, and how did you upgrade?
Are you using RHEL- or Debian-based packages? Did you upgrade libpq
using packages (yum/dnf update libpq, or apt upgrade libpq)?

In any case, this seems like a problem with a development package from a
different version than the installed library.
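
A quick way to check what is actually installed (RPM- and Debian-based
commands respectively):

rpm -qa | grep postgresql              # RHEL/CentOS/Fedora
dpkg -l | grep -E 'libpq|postgresql'   # Debian/Ubuntu
pg_config --version                    # version the headers in PATH belong to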

Kind regards, Martín


On Wed, 7 Oct 2020 at 10:52, M Tarkeshwar Rao
() wrote:
>
> Hi all,
>
>
>
> We upgraded the libpq version from 10.3 to 12.3. Now we are getting the
> following linker error during compilation.
>
> linux64/lib/libPostgreSQLClient.so: undefined reference to 
> `pg_snprintf(char*, unsigned long, char const*, ...)'
>
>
>
> When we analyzed it, we found the following diff in libpq. What could be
> the solution to this error?
>
>
>
> %/postgres/10.3/lib
>
> >nm libpq.a | grep sprint
>
> U sprintf
>
> U sprintf
>
> U sprintf
>
> U sprintf
>
> U sprintf
>
> U sprintf
>
>
>
> %:/postgres/12.3/lib >nm libpq.a | grep sprint
>
> U pg_sprintf
>
> U pg_sprintf
>
> U pg_sprintf
>
> U pg_sprintf
>
>
>
> Regards
>
> Tarkeshwar



--
Martín Marqués
It’s not that I have something to hide,
it’s that I have nothing I want you to see




Re: Autovacuum not functioning for large tables but it is working for few other small tables.

2020-12-16 Thread Martín Marqués
Hi,

> We are facing some discrepancy in the PostgreSQL database related to the
> autovacuum functionality.
>
> By default autovacuum is enabled on Postgres; it is used to remove dead
> tuples from the database.
>
> We have observed autovacuum cleaning dead rows from table_A, but the same
> is not functioning correctly for table_B, which has a large size (100+ GB)
> in comparison to table_A.
>
> All the threshold requirements for autovacuum were met and there are
> millions of dead tuples, but autovacuum was unable to clear them, which
> causes performance issues on the production server.
>
> Is autovacuum not working against large tables, or are there any
> parameters which need to be set to make autovacuum function?


Do you have autovacuum logging enabled on this server? If so, it would be
good if you could share those log entries here.

Having the autovacuum log output for these tables would give some
insight into where the problem might reside.
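
If that logging is not enabled, here is a sketch of how to turn it on and
inspect the table's state (table name illustrative):

ALTER SYSTEM SET log_autovacuum_min_duration = 0;  -- log every run; -1 disables
SELECT pg_reload_conf();

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'table_b';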

-- 
Martín Marqués
It’s not that I have something to hide,
it’s that I have nothing I want you to see




Re: postgres operational

2019-01-09 Thread Martín Marqués
On 9/1/19 at 17:38, Ron wrote:
> On 1/9/19 12:19 PM, Martín Marqués wrote:
>> On 9/1/19 at 14:58, Steve Clark wrote:
>>> Hi List,
>>>
>>> Is there a sure-fire way to tell if the postgres server is up and
>>> operational? I was testing whether the
>>> socket at /tmp/.s.PGSQL.5432 existed - but I ran into a recent
>>> problem on CentOS 7.5, postgresql 9.2.24, where the
>>> socket was there but my script couldn't read from my database yet.
>> Use `check_postgres`
> 
> Isn't "pg_ctl status" the canonical method?

Depends on what you are expecting.

`check_postgres` has many different checks, one of which will actually try
to connect and run `SELECT 1` to make sure postgres can actually execute a
query.

pg_ctl status might report the server as running, yet you may be unable
to connect for other reasons.
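
A minimal probe in the same spirit, with connection parameters that are
purely illustrative:

psql -h /tmp -p 5432 -d postgres -tAc 'SELECT 1' >/dev/null \
  && echo "operational" || echo "not ready"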


-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: Pulling data from Postgres DB table for every 5 seconds.

2019-01-09 Thread Martín Marqués
On 9/1/19 at 20:22, Mark Fletcher wrote:
> On Wed, Jan 9, 2019 at 12:58 PM github kran (githubk...@gmail.com) wrote:
> 
> 
> Mark - just curious about logical replication. Do you think
> I can use it for my use case, where I need to publish data to a
> subscriber when a row's data is updated or
> any new inserts happen on the table? The intention
> is to send this modified data in JSON format to a subscriber
> in real time.
> 
> From what you've said, it's a great use case for that feature. The one
> thing to note is that you will have to code up a logical replication
> client. If I can do it, pretty much anyone can, but it might take some
> time to get things right. I wrote about some of what I found when
> developing our client a year ago
> here: https://wingedpig.com/2017/09/20/streaming-postgres-changes/
> 
> We ended up just using the included test output plugin that comes with
> the postgresql distribution. And we didn't end up streaming to Kafka or
> anything else first. We just take the data and insert it into our
> elasticsearch cluster directly as we get it.

I really doubt that would work. Aurora doesn't have WALs, so how would
you be able to decode the transactions?

AFAIU, you can't use logical decoding on Aurora. Maybe you should be
asking at the Aurora support channel.
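
For contrast, on community PostgreSQL the test_decoding approach mentioned
above boils down to this (slot name illustrative):

SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding');
SELECT * FROM pg_logical_slot_peek_changes('demo_slot', NULL, NULL);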

Regards,

-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: postgres operational

2019-01-09 Thread Martín Marqués
On 9/1/19 at 14:58, Steve Clark wrote:
> Hi List,
> 
> Is there a sure-fire way to tell if the postgres server is up and
> operational? I was testing whether the
> socket at /tmp/.s.PGSQL.5432 existed - but I ran into a recent problem on
> CentOS 7.5, postgresql 9.2.24, where the
> socket was there but my script couldn't read from my database yet.

Use `check_postgres`


-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: multiple configurations with repmgr

2019-01-09 Thread Martín Marqués
On 8/1/19 at 13:17, ROS Didier wrote:
> Hi
> 
> We are going to use repmgr with one node for the
> primary, one node for the standby and one node for the witness.
>
> It works fine _with one project_.
>
> The problem is that we want to have several other projects, each one
> with its own primary node, its own standby node and the _same witness node_.
>
> Question: is it possible to do that? _One witness node_
> which monitors several other nodes.

Yes, you can accomplish that by following the instructions below for
configuring the witness:

- Initialize the postgres server on the witness node
- Create one database for each cluster the witness will be managing
- Create the appropriate repmgr.conf files, one for each cluster managed
through this witness, each with an appropriate conninfo setting so
they point to different databases
- Run `repmgr witness register` multiple times, using a different
configuration file on each run (see the sketch below)
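
A hedged sketch of those last two steps; file names and connection
settings are purely illustrative, and exact flags vary between repmgr
versions (here -h points at each cluster's primary):

# projectA.conf: conninfo='host=witnesshost dbname=repmgr_a user=repmgr'
# projectB.conf: conninfo='host=witnesshost dbname=repmgr_b user=repmgr'
repmgr -f /etc/repmgr/projectA.conf witness register -h primary-a
repmgr -f /etc/repmgr/projectB.conf witness register -h primary-b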

Hope that helps clarify.

P.S.: This works on recent versions of repmgr. Before 3.3 the witness
server was created through repmgr itself, which made things harder to
manage; that is why it was removed in later versions.

Cheers,

-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: pg_restore fails due to foreign key violation

2018-12-10 Thread Martín Marqués
Hi,

On Mon, 10 Dec 2018 at 07:21, Andreas Kretschmer
(andr...@a-kretschmer.de) wrote:

>
> On 10.12.18 at 11:15, Olga Vingurt wrote:
> > After playing with the dump, importing the schema first and the data next
> > without the triggers, we indeed see that data is missing in the table,
> > i.e. the dump is not consistent.
> > We don't stop the application which uses the database during the dump,
> > but according to the documentation the dump should still be consistent.
>
>
> Please check whether the data is consistent on the source database system.
>


I would start by running amcheck to see if there is index corruption
somewhere, as that can lead to data corruption (in particular if the
corrupted index is a unique or primary key index).
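
A minimal sketch, assuming PostgreSQL 10+ where the amcheck extension
ships with the server (index name illustrative):

CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('some_table_pkey');         -- structural check
SELECT bt_index_parent_check('some_table_pkey');  -- stricter, heavier locks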

Regards,

-- 
Martín Marqués
It’s not that I have something to hide,
it’s that I have nothing I want you to see


Re: Unused indexes

2018-12-01 Thread Martín Marqués
On Sat, 1 Dec 2018 at 08:24, Ron () wrote:
>
> For indexes that existed before the cluster was last started, and for which
> REINDEX hasn't been run on them since the cluster was last started... is it
> valid to say that an index has not been used since the cluster was started
> if these three pg_stat_all_indexes fields all have a value of 0?
>
> idx_scan

Just this one, provided the index doesn't enforce a constraint (i.e.
it's not a unique or primary key index).

One other thing to keep in mind is that, if you have hot_standby
replicas, the index might be used there, and the primary has no
information about index scans on other nodes of the cluster.
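
A query sketch along those lines, to be run on the primary and on each
replica:

SELECT s.schemaname, s.relname, s.indexrelname
FROM pg_stat_all_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0      -- never scanned since the last stats reset
  AND NOT i.indisunique;  -- and not enforcing a uniqueness constraint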

Regards,

-- 
Martín Marqués
It’s not that I have something to hide,
it’s that I have nothing I want you to see



Re: postgresql10-server RPM unpacking of archive failed

2018-11-20 Thread Martín Marqués
Hi,

On Mon, 19 Nov 2018 at 17:15, Chris Mair () wrote:
>
> Hi,
>
> on a CentOS 7.5 machine with PostgreSQL 10.3 installed from the PGDG yum repo,
> I have a strange error when trying to update to 10.6.
>
> A simple "yum update" updated everything except postgresql10-server.x86_64!
>
> That package gives (repeatedly) the error message:
>
> Error unpacking rpm package postgresql10-server-10.6-1PGDG.rhel7.x86_64
> error: unpacking of archive failed: cpio: lstat

I would try cleaning out all the cached data (packages and metadata) and
running the update again:

yum clean all
yum update postgresql10-server

Regards,

-- 
Martín Marqués
It’s not that I have something to hide,
it’s that I have nothing I want you to see



Re: PostgreSQL MVCC and alternative REDO implementation - my thoughts

2018-11-18 Thread Martín Marqués
On 18/11/18 at 04:49, Gerhard Wiesinger wrote:
> Hello,
> 
> I just read that blog article and I think switching to REDO logic is not
> a good idea.

I think you got it the other way around. Postgres implements REDO; that
is, it rolls forward by redoing the changes stored in the WAL files.

BTW, could you point us to the blog where you read this?

Maybe you got REDO and UNDO mixed up.

Regards,

-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: Enabling bdr in multiple databases on the same postgresql instance/cluster

2018-10-28 Thread Martín Marqués
On 23/10/18 at 10:24, Daniel Fink (PDF) wrote:
> 
> I already have a running cluster of BDR nodes.
> 
> Now we want to add an additional database on the same hosts.
> 
> Can I just create a new database and then create/join nodes as in this
> description:
> 
> http://bdr-project.org/docs/1.0.3/quickstart-enabling.html

If you are referring to the following architecture:

Node1_DB1 <--> Node2_DB1

And you want to create a new database DB2 on Node1 and have it
replicating to DB2 on Node2 with BDR, then yes, you have to follow the
same process used to create the first BDR group.

Keep in mind you'll need more slots and replication connections, so
check the values of max_replication_slots, max_worker_processes and
max_wal_senders.
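
A hedged example of the kind of settings to review before creating the
second group (values purely illustrative; size them to your node count):

# postgresql.conf on each node
max_replication_slots = 12
max_worker_processes  = 16
max_wal_senders       = 12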

Regards,

-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: Pgbouncer discard all

2018-10-16 Thread Martín Marqués
On 16/10/18 at 09:59, Scot Kreienkamp wrote:
> Are you sure they’re actually waiting? Don’t forget 10.5 will show the
> last query executed even if the connection is idle. I believe discard
> all would be the last command pgbouncer would send to the database
> when the client is done, as it resets the connection for the next
> client. So what you’re describing would seem to be expected behavior.

He might have been referring to clients waiting; that is visible in the
pgbouncer pseudo-database.

OTOH, if the waiting is seen in pg_stat_activity, then pgbouncer has
nothing to do with it: the connection has already been assigned to the
client and the waiting is happening on the database server, not in the
pooler.
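
To check from the pgbouncer side (admin port and user are illustrative):

psql -p 6432 -U pgbouncer pgbouncer -c 'SHOW POOLS;'
# cl_waiting > 0 means clients are queued waiting for a server connection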

Regards,

-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: upgrading from pg 9.3 to 10

2018-08-14 Thread Martín Marqués
On 14/08/18 at 14:44, Edmundo Robles wrote:
> Is it safe to upgrade from pg 9.3 to pg 10 directly using pg_upgrade, or
> is it better to upgrade, with pg_upgrade, from 9.3 -> 9.4 -> 9.5 -> 9.6 -> 10?

That is quite a jump, but not impossible with pg_upgrade.

I'd recommend testing with a clone of the server to verify that it works
properly (not only pg_upgrade, but also your application with the new
version of postgres). Also time the maintenance window you'll need, and
watch for things like extensions whose upgraded versions might break the
process.
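
A sketch of the compatibility check to run on such a clone first (paths
are illustrative for PGDG packages):

/usr/pgsql-10/bin/pg_upgrade \
  --old-bindir=/usr/pgsql-9.3/bin \
  --new-bindir=/usr/pgsql-10/bin \
  --old-datadir=/var/lib/pgsql/9.3/data \
  --new-datadir=/var/lib/pgsql/10/data \
  --check   # report incompatibilities without performing the upgrade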

Now, if you are going to jump to 9.4 first, I'd recommend using pglogical
after getting to 9.4 and upgrading straight from 9.4 to 10 (again, after
testing your application against 10).

Regards,

-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Postgre compatible version with RHEL 7.5

2018-05-21 Thread Martín Marqués
On 21/05/18 at 02:15, Deepti Sharma S wrote:
> Hello Team,
> 
> Can you please let us know which Postgres version is compatible with RHEL 7.5?
> We are currently using Postgres version 9.6.6.

Please don't hijack other threads by just changing the subject line; start
a new mail thread with the question you have.

As Adrian mentioned, the PostgreSQL yum repository site shows all
available versions of postgres for each distribution release.

> See below for full Postgres to RH/Centos/Fedora/SL/etc mapping:
> 
> http://yum.postgresql.org/repopackages.php

In the case of RHEL 7.x, all supported versions will work (and some of
the unsupported ones will as well, although it's not recommended to
still be on versions prior to 9.3).

-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: hardcode password in connect string

2018-04-15 Thread Martín Marqués
On 15/04/18 at 12:16, Raymond O'Donnell wrote:
> On 15/04/18 12:35, Ricardo Martin Gomez wrote:
>> Hello David,
>> I think so; the parameter you need in the psql command is -w mipassword.
> 
> Sorry, this is not correct. -w causes psql *never* to prompt for a
> password, and -W forces a password prompt. Neither allows the password
> to be passed as an argument - I don't think that this is possible with
> psql.

It is possible; you just have to construct the whole connection string:

psql "host=thehost dbname=mydatabase user=memyselfi password=mypassword"

Still, the best and most secure option is to use a .pgpass file. A psql
invocation with its password inline will very likely end up in
.bash_history and the like.
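
For reference, the .pgpass line matching the example above (values
illustrative; the file must be mode 0600):

# hostname:port:database:username:password
thehost:5432:mydatabase:memyselfi:mypassword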

Regards,

-- 
Martín Marqués            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services