Re: Max effective number of CPUs that Postgresql can handle?

2023-12-06 Thread Christoph Moench-Tegeder
## Ron Johnson (ronljohnso...@gmail.com):

> Like the Subject says, is there any point of diminishing returns at which
> the Postmaster gets "too busy" to manage all the threads?

It is possible to use a three-digit number of cores (e.g. 128, maybe
more) quite efficiently. The rest of the system has to fit the amount
of compute, or you end up with an unbalanced system. Also, not every
workload can benefit from this kind of machine. (And then there's the
cost of redundancy, etc.)
I believe there was a benchmark (actual numbers, not just marketing)
done by PostgresPro on a largish Power machine, but I can't find
that right now.

Regards,
Christoph

-- 
Spare Space.




Re: pg_basebackup

2023-11-20 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> 2023-11-16 20:34:13.538 CET [6250] LOG:  terminating walsender process due to 
> replication timeout

Besides "what Laurenz said" (especially about the horribly outdated
PostgreSQL version): check IO speed and saturation during backup
and make sure you're not stalling. I've seen this behaviour a few
times, mostly in conjunction with btrfs - using a suitably proven
filesystem usually solved the problem (overloaded hardware can
be a problem, too - but modern systems can take quite a bit more
than in the olden days of spinning rust).

Regards,
Christoph

-- 
Spare Space.




Re: missing client_hostname

2023-11-01 Thread Christoph Moench-Tegeder
Hi,

please don't top-post.

## Atul Kumar (akumar14...@gmail.com):

> I have already enabled log_hostname, still  *client_hostname is not showing
> up.*

It's always helpful to mention relevant non-default settings along
with the question. Was log_hostname really active at backup start
time? (Check pg_settings before starting the backup.) Can the server
actually resolve the IP address to a host name?
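A quick sanity check along those lines, against the running server rather than the config file:

```sql
-- Shows the live value, where it came from, and whether a restart is pending:
SELECT name, setting, source, pending_restart
FROM pg_settings
WHERE name = 'log_hostname';
```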

> Do you think that just because I use ip instead of hostname while using
> pg_basebackup could be the reason for it ?

No.

Regards,
Christoph

-- 
Spare Space




Re: missing client_hostname

2023-11-01 Thread Christoph Moench-Tegeder
## Atul Kumar (akumar14...@gmail.com):

> It was successfully configured but when I query pg_stat_replication I don't
> get the hostname in output:

I Recommend The Fine Manual:
https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-VIEW
"... and only when log_hostname is enabled".

Regards,
Christoph

-- 
Spare Space




Re: archive_command debugging

2023-08-23 Thread Christoph Moench-Tegeder
## Nick Renders (postg...@arcict.com):

> I was wondering if anyone had any good tips for "debugging" the
> archive_command in the postgresql.conf.

For starters, you get rather noisy logging when that command fails
(plus all the output from the command itself), so check your postgres
logfile first. Also, you get some additional debug logging at level
DEBUG3.
Then archive_command basically inherits its environment and user id
from the archiver process, so you could check that process' proc entries.
Third, archive_command is executed via system() (after template
processing), so you're free to do... quite a lot in there (just
don't mess with the exit status). It might be more convenient to
use some shell wrapper around your final archive command and have
that wrapper do all the logging etc. you might want.

And a final tip: do not rely on the environment, always use full
paths, and if you ever might have whitespace or other "funny
characters" in some path/file name (or always, just to be safe)
make sure you get your quoting right.
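A sketch of that last point - the wrapper path here is made up, but the quoting of %p and %f is the part that matters:

```
# postgresql.conf (hypothetical wrapper script; quote the placeholders
# so odd characters in paths can't break the command line)
archive_command = '/usr/local/bin/archive-wal.sh "%p" "%f"'
```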

Regards,
Christoph

-- 
Spare Space




Re: Different releases in the same server

2023-08-02 Thread Christoph Moench-Tegeder
## marco@tiscali.it (marco@tiscali.it):

> we have recently started to manage a production server
> running a 9.6 postgres.

Which is EOL for nearly two years now:
https://www.postgresql.org/support/versioning/

> We
> have to upgrade to postgres 12.x

Which is going EOL in a little over one year's time.

You should look into https://yum.postgresql.org/packages/ where you
can get packages with some real production life time.

> At present in /usr/bin there are not links as
> aspected for use with alternatives, but there are files belonging to
> 9.2
> version except for pg_basebackup:

That sounds severely broken, as if you installed CentOS's original
"postgresql" package (CentOS 7 ships with the really ancient 9.2).
Check with rpm and yum where that came from, and then it's time for
some cleanup. In the meantime, make sure to always call all PostgreSQL
utilities with their full path.

> Could
> we have trouble having 9.6 and 12 running in the same time (we will
> upgrade with pg_upgrade) in such server?

For pg_upgrade, you will need both the old and new binaries installed.

> HOW can we to fix the presence
> of release 9.2 files in /usr/bin?

Maybe best use the upgrade opportunity to move everything to a clean
installation on a new VM?

> Will the simbolic links in /usr/bin be
> created by alternatives once the 9.2 release will be
> dropped/deinstalled/deleted?

The symlinks would be created by registering the respective versions
with the alternatives system - usually that happens in the post-install
scripts of the RPMs. You could do that manually (after removing 9.2)
or maybe by re-installing your current (ancient) packages. Cleanest
solution would be a new server - who knows what other surprises are
hidden on that machine?

Regards,
Christoph

-- 
Spare Space.




Re: Accessing referential constraint information with minimal permissions

2023-06-24 Thread Christoph Moench-Tegeder
## Avin Kavish (a...@baseboard.ai):

> I know the information is in `information_schema.referential_constraints`,
> but apparently reading that information requires having write permissions
> to the tables that have references. I don't know why it's designed like
> that.

I guess because "the standard says so".
But then, information_schema.referential_constraints is only a view
and the privilege check is coded into the view, so you could just take
the query from the view and omit the privilege check and Bob is your
uncle.
Another way to approach your problem would be via pg_catalog.pg_constraint
https://www.postgresql.org/docs/current/catalog-pg-constraint.html
and maybe use pg_get_constraintdef() as documented in this table:
https://www.postgresql.org/docs/current/functions-info.html#FUNCTIONS-INFO-CATALOG-TABLE
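A sketch of the pg_catalog route - the table name here is invented, adjust to taste:

```sql
-- Foreign keys on a table, readable without any privileges on the table itself:
SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_catalog.pg_constraint
WHERE contype = 'f'
  AND conrelid = 'public.orders'::regclass;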

Another way to learn about these internals is to use psql with
argument -E (--echo-hidden, or "\set ECHO_HIDDEN on") and watch
psql's queries when displaying objects.

Regards,
Christoph

-- 
Spare Space




Re: 15 pg_upgrade with -j

2023-05-23 Thread Christoph Moench-Tegeder
## Ron (ronljohnso...@gmail.com):

> We'd never hardlink.  Eliminates the ability to return to the old
> system if something goes wrong.

That's why you get yourself a recent XFS and use clone mode (still
sticks you to the same filesystem, but gets you up and running much
faster).

Regards,
Christoph

-- 
Spare Space




Re: Very slow queries followed by checkpointer process killed with signal 9

2023-04-03 Thread Christoph Moench-Tegeder
## Tom Lane (t...@sss.pgh.pa.us):

> Jason McLaurin  writes:
> > I'm troubleshooting an issue where about once a week, a database appears to
> > lock up and then the PostgreSQL process crashes and recovers. When this
> > happens, a few queries will be logged, but there is no pattern to which
> > queries are executing when the crash happens, and the queries logged don't
> > appear to be queries that would consume a lot of resources.
> 
> Hmm, is it always the checkpointer that gets the OOM kill?  That seems
> quite odd.

Are you sure it's OOM? That's a frequent source of SIGKILLs, but
SIGKILL is not exclusive to the OOM killer. Maybe something in the
container stack (some cgroup resource limiting? would be in line
with slowdown at least; or a hard ulimit on cpu time?) or maybe even
some "anti virus" thingy ("It's been 0 days since AV ate the database").

Regards,
Christoph

-- 
Spare Space.




Re: DEFINER / INVOKER conundrum

2023-04-03 Thread Christoph Moench-Tegeder
## Dominique Devienne (ddevie...@gmail.com):

> On the one hand, I want a INVOKER security function,
> to be able to capture the login and current ROLEs.

There's session_user ("the session user's name") which remains unchanged
on a SECURITY DEFINER function, and current_user ("the user name of the
current execution context") which changes according to the security
context set by SECURITY DEFINER/INVOKER.
-> https://www.postgresql.org/docs/current/functions-info.html
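A minimal demonstration of the difference (the function name is invented):

```sql
-- Owned by some privileged role; callers see both identities side by side.
CREATE FUNCTION whoami() RETURNS TABLE (session_u text, current_u text)
LANGUAGE sql SECURITY DEFINER
AS $$ SELECT session_user::text, current_user::text $$;
-- When another role calls this, session_user still shows the login role,
-- while current_user shows the function owner.
```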

Regards,
Christoph

-- 
Spare Space.




Re: Binary large object processing problems

2023-03-13 Thread Christoph Moench-Tegeder
## Raivo Rebane (raivor...@gmail.com):

> Can anybody help me find where is my mistake an what is working solution ?

The documentation clearly states "All large object manipulation using
these functions must take place within an SQL transaction block"
https://www.postgresql.org/docs/current/lo-interfaces.html
but I couldn't see anything of that kind in your code.
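The server-side shape of it, roughly (the OID is a placeholder, and the descriptor returned by lo_open is assumed to be 0 here; it is only valid until COMMIT, which is why the transaction block is required):

```sql
BEGIN;
SELECT lo_open(16411, 131072);    -- 131072 = INV_WRITE
SELECT lowrite(0, 'some bytes');  -- 0 is the descriptor lo_open returned
SELECT lo_close(0);
COMMIT;
```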

Regards,
Christoph

-- 
Spare Space.




Re: public schema grants to PUBLIC role

2023-03-09 Thread Christoph Moench-Tegeder
## Dominique Devienne (ddevie...@gmail.com):

> Hi. I've recently realized via a post (or article?) from Laurenz that the
> PUBLIC role has CREATE privilege on the 'public' schema by default (see
> query below). I guess it can't be avoided?

You could just use PostgreSQL 15:
https://www.postgresql.org/docs/15/release-15.html#id-1.11.6.7.4

> In particular, we need extensions, which are loaded in public by default.
> Will USAGE of public be enough for LOGIN users having access to the DB to
> use extensions?

Plus any grants on the extension's objects.

> More broadly, we want to secure the DB so that all DB access and schema
> access are explicit.
> Anything else to be aware of please, beside the two mentioned above?

Have a look at default privileges and group roles, that will make your
life much easier.
https://www.postgresql.org/docs/15/ddl-priv.html
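On a pre-15 server you can apply the version 15 default by hand, and default privileges keep later grants manageable - role and schema names below are examples only:

```sql
-- What PostgreSQL 15 does out of the box:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;

-- Group role plus default privileges, so new tables don't need per-table grants:
CREATE ROLE app_readers;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO app_readers;
```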

Regards,
Christoph

-- 
Spare Space.




Re: shp2pgsql error under windows

2023-03-04 Thread Christoph Moench-Tegeder
## Raivo Rebane (raivor...@gmail.com):

> Raivo@DESKTOP-69FUU49 /cygdrive/f/3D-data/Kataster
> 
> $ cat kataster.sql | less
> 
> 
> 
> S^@E^@T^@ ^@C^@L^@I^@E^@N^@T^@_^@E^@N^@C^@O^@D^@I^@N^@G^@ ^@T^@O^@

That's a BOM and the rest looks like UTF-16 (or UCS-2). You can use
recode (also available in Cygwin) to convert that. You could also use
shp2pgsql in Cygwin and be done with that (Windows is a quite bloated
bootloader for Cygwin, but you do you).

Regards,
Christoph

-- 
Spare Space




Re: PANIC: could not flush dirty data: Cannot allocate memory

2022-11-15 Thread Christoph Moench-Tegeder
## klaus.mailingli...@pernau.at (klaus.mailingli...@pernau.at):

> AFAIU the problem is not related to the memory settings in 
> postgresql.conf. It is the kernel that
> for whatever reasons report ENOMEM. Correct?

Correct, there's an ENOMEM from the kernel when writing out data.

> Filesystem is ext4. VM technology is mixed: VMware, KVM and XEN PV. 
> Kernel is 5.15.0-52-generic.

I do not suspect the filesystem per se, ext4 is quite common and we
would have heard something about that (but then, someone gotta be
the first reporter?). I would believe that the kernel would raise
a bunch of printks if it hit ENOMEM in the commonly used paths, so
you would see something in dmesg or wherever you collect your kernel
log if it happened where it was expected.
And coming from the other side: does this happen on all the hosts,
or is it limited to one host or one technology? Any uncommon options
on the filesystem or the mount point? Anything which could mess
with your block devices? (I'm especially thinking "antivirus" because
it's always "0 days since the AV ate a database" and they tend to
raise errors in the weirdest places, which would fit the bill here;
but anything which is not "commonly in use everywhere" could be a
candidate).

Regards,
Christoph

-- 
Spare Space




Re: PANIC: could not flush dirty data: Cannot allocate memory

2022-11-14 Thread Christoph Moench-Tegeder
## klaus.mailingli...@pernau.at (klaus.mailingli...@pernau.at):

> On several servers we see the error message: PANIC:  could not flush 
> dirty data: Cannot allocate memory

As far as I can see, that "could not flush dirty data" happens in a
total of three places in the code - there are other places where
postgresql could PANIC on fsync()-and-stuff-related issues, but they
have different messages.
Of these three places, there's a sync_file_range(), a posix_fadvise()
and an msync(), all in src/backend/storage/file/fd.c. "Cannot allocate
memory" would be ENOMEM, which posix_fadvise() does not return (as per
its docs). So this would be sync_file_range(), which could run out
it's docs). So this would be sync_file_range(), which could run out
of memory (as per the manual) or msync() where ENOMEM actually means
"The indicated memory (or part of it) was not mapped". Both cases are
somewhat WTF for this setup.
What filesystem are you running?

Regards,
Christoph

-- 
Spare Space




Re: does postgres has snapshot standby feature?

2022-10-14 Thread Christoph Moench-Tegeder
## milist ujang (ujang.mil...@gmail.com):

> read about removing standby.signal file behavior in pg14 from
> https://dbaclass.com/article/how-to-open-postgres-standby-database-for-read-writesnapshot-standby/

That article is fractally wrong, and that starts right in the first
sentence. See
https://www.postgresql.org/docs/15/functions-info.html#FUNCTIONS-PG-SNAPSHOT
(not getting into detail discussions, but still).

Anyhow, the article does not discuss requirements and limitations of
pg_rewind, and promotes unsafe practices. After reading this article,
I'd be more than careful with anything else published by that source.

> I was shocked about the hidden feature of snapshot standby, does it really
> exist?

It's not an official feature. There's a limited number of things you can
get away with when breaking and rewinding replication, but the guarantees
on those are rather weak.

Regards,
Christoph

-- 
Spare Space.




Re: Example code Re: Singleton SELECT inside cursor loop

2022-10-01 Thread Christoph Moench-Tegeder
## Ron (ronljohnso...@gmail.com):

> The question then is "why am I just now seeing the problem?"  We've been 
> using v12 for two years, and it just happened.
> 
> The only recent change is that I upgraded it from RDS 12.10 to 12.11 a 
> couple of weeks ago.

That's correlation, but not proof of causality.
Now that you've confirmed that you have indeed a mismatch between generic
and custom plan, you could compare those (EXPLAIN EXECUTE) and see
where the generic plan goes wrong. Otherwise, prime suspects are bad
statistics (Were stats accurate before the update? How about now?
Ran VACUUM ANALYZE recently? If not, does it improve matters?)
It would help if you could compare execution plans with plans from
before the update, but I guess you didn't document those?
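A sketch of the EXPLAIN EXECUTE comparison (table and parameter are invented):

```sql
PREPARE q(int) AS SELECT * FROM orders WHERE customer_id = $1;
EXPLAIN EXECUTE q(42);  -- executions one through five use custom plans
-- ...after the fifth execution the planner may switch to the generic plan;
-- run EXPLAIN EXECUTE again and compare the two outputs to see where
-- the generic plan goes wrong.
```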

Regards,
Christoph

-- 
Spare Space




Re: Example code Re: Singleton SELECT inside cursor loop

2022-10-01 Thread Christoph Moench-Tegeder
## Ron (ronljohnso...@gmail.com):

> Note how quickly it runs the first five times, but takes 780x longer the 
> sixth time I run it.  Exiting psql and entering again causes the same 
> slowness the sixth time it's run.

Tanks at the sixth time? That rings a bell: "The current rule for this
is that the first five executions are done with custom plans[...]" from
https://www.postgresql.org/docs/12/sql-prepare.html
And your functions create prepared statements under the hood:
https://www.postgresql.org/docs/12/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING
I guess you should visit
https://www.postgresql.org/docs/12/runtime-config-query.html#GUC-PLAN-CACHE_MODE
and try if plan_cache_mode = force_custom_plan helps here.
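That can be tried per session before touching postgresql.conf:

```sql
SET plan_cache_mode = force_custom_plan;
-- re-run the function: if the sixth execution now stays fast,
-- the cached generic plan was the culprit
```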

Regards,
Christoph

-- 
Spare Space




Re: New message in PostgreSQL log regarding socket for statistics collector

2022-09-16 Thread Christoph Moench-Tegeder
## Hilbert, Karin (i...@psu.edu):

> 2022-09-16 02:00:16 EDT [1918984]: [7-1] db=,user= LOG:  listening on Unix 
> socket "/tmp/.s.PGSQL.5432"
> 2022-09-16 02:00:17 EDT [1918984]: [8-1] db=,user= LOG:  test message did not 
> get through on socket for statistics collector
> 2022-09-16 02:00:17 EDT [1918984]: [9-1] db=,user= LOG:  trying another 
> address for the statistics collector

> I'm concerned about the "test message did not get through on socket
> for statistics collector" & "trying another address for the statistics
> collector" messages.

That's the stats collector socket, which is used to communicate runtime
statistic updates inside the server. As a user, you'll never interact
with this socket directly. It's created on the first working address
which the system returns as "localhost". The server will try all the
"localhost" addresses until the socket can be created and passes the
test message, logging messages similar to your message for each failure.
In your case the test message was not received (or it was received
too late - the server waits only 0.5 seconds for the test message).
In any case, a working stats socket was created on your machine:
without a working stats socket server startup will fail. (You should
be able to see that socket in each PostgreSQL process' file descriptors:
UDP, localhost, high port, "connected" to the very same address/port
tuple).
Why did the test message fail on your first socket? Maybe there's
some filtering in place, or your system was very slow and missed
the 500ms timeout, or the address was otherwise unusable (not
available in that namespace, trying to use IPv4/IPv6 when not
configured... I'd have to test which condition results in failure
at this stage).

Regards,
Christoph

-- 
Spare Space




Re: Possible values of DATESTYLE / PGDATESTYLE

2022-08-23 Thread Christoph Moench-Tegeder
## Sebastien Flaesch (sebastien.flae...@4js.com):

> Where can I find the list of possible values for this DATESTYLE parameter, 
> for V14?

https://www.postgresql.org/docs/14/runtime-config-client.html#GUC-DATESTYLE
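In short, the value is "output format, input field order", e.g.:

```sql
SET datestyle = 'ISO, DMY';
SELECT '01/02/2003'::date;  -- read as 1 February 2003 under DMY
SET datestyle = 'SQL, MDY';
SELECT '01/02/2003'::date;  -- now read as January 2nd
```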

Regards,
Christoph

-- 
Spare Space




Re: - operator overloading not giving expected result

2022-07-08 Thread Christoph Moench-Tegeder
## Rajesh S (rajes...@fincuro.com):

> We are migrating our database from Oracle to Postgresql.  In oracle we 
> have used this syntax "SELECT ('1999-12-30'::DATE) - 
> ('1999-12-11'::DATE)" to get difference between two dates as a integer 
> output (ex: 19).  But in Postgres the same query returns result as "19 
> days".

There's something fishy going on: (date) - (date) has returned integer
for a very long time (even the version 8.0 docs have that).
On the other hand, (timestamp) - (timestamp) gives an interval, so
first make sure you really got the data types right.
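Both cases in one place, with pg_typeof() showing the result type:

```sql
SELECT pg_typeof(('1999-12-30'::date) - ('1999-12-11'::date));            -- integer
SELECT pg_typeof(('1999-12-30'::timestamp) - ('1999-12-11'::timestamp));  -- interval
```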

> CREATE OR REPLACE FUNCTION public.dt_minus_dt(
>      dt1 timestamp without time zone,
>      dt2 timestamp without time zone)

See? There's TIMESTAMP, not DATE.

>      LANGUAGE 'edbspl'

Is this really PostgreSQL or is it that fork - and if it's the fork,
does it behave the same way as stock PostgreSQL does? (I would be
surprised if that deviates in this place, but...).

>      SELECT DATE_PART('day', dt1::timestamp - dt2::timestamp)::integer 

And TIMESTAMP again.

Regards,
Christoph

-- 
Spare Space




Re: postgresql bug

2022-07-08 Thread Christoph Moench-Tegeder
## m...@ft-c.de (m...@ft-c.de):

> /usr/local/bin/postgres -V
> ld-elf.so.1: Shared object "libicui18n.so.70" not found, required by
> "postgres"

You screwed up your upgrades: that postgres binary was built against
ICU 70, but as you have shown, you have ICU 71 installed:

> find / -name "*libicui*"
> /usr/local/lib/libicuio.a
> /usr/local/lib/libicui18n.so.71
> /usr/local/lib/libicuio.so.71.1
> /usr/local/lib/libicui18n.so.71.1


> /usr/local/lib/compat/pkg/libicui18n.so.68.2
> /usr/local/lib/compat/pkg/libicuio.so.68
> /usr/local/lib/compat/pkg/libicuio.so.68.2
> /usr/local/lib/compat/pkg/libicui18n.so.68

And that looks like you're using portupgrade - in which case you need
to chase a lot of shared library dependencies yourself. pkg_libchk
from ports-mgmt/bsdadminscripts2 may help a little with that, but
it's still an error-prone process and a lot of toil. For that reason
I'd recommend either using packages (from the official builds) or
setting up your own poudriere; all other options will cost you a lot
of time and nerves and require a lot more knowledge and manual
intervention than "simple updates" are supposed to in 2022.

Regards,
Christoph

-- 
Spare Space




Re: existing row not found by SELECT ... WHERE CTID = ?

2022-05-25 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> We will solve the problem now with setting the session after connect to
> 
>SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL REPEATABLE READ;
> 
> (with an appropriate ESQL/C call). Any comments?

Maybe the real question is whether it is wise to use an implementation
artifact (ctid) to identify rows?
The textbook approach could be row locks (SELECT ... FOR SHARE/UPDATE and
variants) to prevent concurrent changes, or optimistic locking (and a
primary key in any case) - but maybe you already investigated those options?
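The row-lock variant looks roughly like this (table and key are invented):

```sql
BEGIN;
-- blocks concurrent writers to this row until COMMIT/ROLLBACK:
SELECT * FROM accounts WHERE id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
COMMIT;
```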

Regards,
Christoph

-- 
Spare Space




Re: Who am I? Where am I connected?

2022-05-18 Thread Christoph Moench-Tegeder
## Dominique Devienne (ddevie...@gmail.com):

> Once connected, can I find out all aspects of the connection string?

\conninfo in psql (pro tip: \? actually helps), "Connection Status
Functions" https://www.postgresql.org/docs/current/libpq-status.html
in libpq; and in a pinch you could find your connection in
pg_stat_activity (pid = pg_backend_pid()) plus some assorted queries
for other details. You cannot get actual secrets like your private
ssl key or your password (the server most likely doesn't have that,
and the client assumes that you have it).
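The pg_stat_activity route plus some of those assorted helpers:

```sql
SELECT usename, datname, client_addr, application_name,
       inet_server_addr(), inet_server_port()
FROM pg_stat_activity
WHERE pid = pg_backend_pid();
```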

Regards,
Christoph

-- 
Spare Space




Re: md5 issues Postgres14 on OL7

2022-01-04 Thread Christoph Moench-Tegeder
## Michael Paquier (mich...@paquier.xyz):

> On Mon, Dec 20, 2021 at 03:22:31PM +0100, Christoph Moench-Tegeder wrote:
> > Active FIPS mode (/proc/sys/crypto/fips_enabled => 1) on the server does
> > produce this behaviour.
> 
> Most likely, this is a build linked with OpenSSL?  The way MD5 hashes
> are computed in Postgres has largely changed in 14, and the code has
> been refactored so as we rely on the EVP APIs from OpenSSL when
> building with --with-ssl=openssl, having as direct consequence to
> allocate a bit more memory every time a hash is computed.

You can reproduce that behaviour with the PGDG RPMs on CentOS 7.
Enable FIPS mode, reboot, and immediately md5() fails. The PGDG RPMs
are built with openssl ("--with-openssl" in pg_config output),
as of course you need SSL today.

"Supports FIPS mode" is one of the selling points for your cryptohash
patches in the Release Notes, and that means no md5 when FIPS is
enforced (I think FIPS is a little too strict in this regard, as
people do invent horrid workarounds, which does not really improve
matters; but that's another can of worms).

Anyway, it's not the memory, but "out of memory" is all PostgreSQL
reports when anything in the hashing operations returns a failure.

Regards,
Christoph

-- 
Spare Space




Re: md5 issues Postgres14 on OL7

2021-12-20 Thread Christoph Moench-Tegeder
Hi!

## Michael Mühlbeyer (michael.muehlbe...@trivadis.com):

> postgres=# select md5('just a test');
> ERROR:  out of memory

Active FIPS mode (/proc/sys/crypto/fips_enabled => 1) on the server does
produce this behaviour.

Regards,
Christoph

-- 
Spare Space




Re: Certificate validity error download.postgresql.org

2021-10-14 Thread Christoph Moench-Tegeder
## Cedric Rey (ce...@groupemutuel.ch):

> the certificate on download.postgresql.org has expired :
> 
> openssl s_client -connect download.postgresql.org:443
> CONNECTED(0003)
> depth=3 O = Digital Signature Trust Co., CN = DST Root CA X3
> verify error:num=10:certificate has expired
> notAfter=Sep 30 14:01:15 2021 GMT

That's complaining about the "DST Root CA X3" certificate, and that's
(partially) expected: https://letsencrypt.org/2021/10/01/cert-chaining-help.html

But the fact that you're seeing this indicates that you're either
running a horribly outdated version of openssl (as Daniel mentioned) -
though even CentOS's "OpenSSL 1.0.2k-fips  26 Jan 2017" has been fixed
in this regard.
The other possibility is that your trusted CA list is outdated: that
would be package ca-certificates (same name in deb and rpm world).
I do know from my own experience that at least the "old"
(2020.2.something) Red Hat package is missing the new "ISRG Root X1"
certificate; you'll need version 2021.2.something.

Regards,
Christoph

-- 
Spare Space




Re: PostgreSQL equivalent to Oracles ANYDATASET

2020-11-13 Thread Christoph Moench-Tegeder
## Dirk Mika (dirk.m...@mikatiming.de):

> SELECT * FROM TABLE(series_pkg.get_results(1));
> 
> The purpose of this function is to provide a DATASET, which has
> different columns in the result depending on the passed parameter.
> 
> Is there any way to achieve something similar in PostreSQL?

testing=# CREATE OR REPLACE FUNCTION public.rr(p INTEGER)
 RETURNS SETOF RECORD
 LANGUAGE plpgsql
AS $function$
BEGIN
  IF p = 1 THEN
RETURN NEXT ('k1'::TEXT, 'v1'::TEXT);
RETURN NEXT ('k2'::TEXT, 'v2'::TEXT);
  ELSE
RETURN NEXT (23::INTEGER, 42::INTEGER, 'abc'::TEXT);
RETURN NEXT (42::INTEGER, 23::INTEGER, 'xyz'::TEXT);
  END IF;
  RETURN;
END;
$function$;
CREATE FUNCTION

testing=# SELECT * FROM rr(2) f(a INTEGER, b INTEGER, c TEXT);
 a  | b  |  c
----+----+-----
 23 | 42 | abc
 42 | 23 | xyz
(2 rows)

testing=# SELECT * FROM rr(1) f(x TEXT, y TEXT);
 x  | y
----+----
 k1 | v1
 k2 | v2
(2 rows)

Regards,
Christoph

-- 
Spare Space




Re: Parameter value from (mb/gb) to bytes

2020-10-14 Thread Christoph Moench-Tegeder
## Magnus Hagander (mag...@hagander.net):

> Actually, it doesn't have to be in 8k pages, that depends on the build
> options. So if you want to be perfectly correct, you should probably
> multiply with current_setting('block_size') instead of a hardcoded 8192 :)

More self-contained:
  select pg_size_bytes(setting) * pg_size_bytes(unit)
  from pg_settings
  where name = 'shared_buffers';

I guess that's what that unit column is there for.

Regards,
Christoph

-- 
Spare Space




Re: Return value of CREATE TABLE

2020-09-10 Thread Christoph Moench-Tegeder
## Mike Martin (redt...@gmail.com):

> So basically I would like to be able to know what namespace a temp table is
> created in, so that I can constrain lookup.

pg_my_temp_schema() returns the OID of the session's temporary schema
("or 0 if none", according to the docs).
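Resolving that OID to the schema name of the session's temp tables:

```sql
CREATE TEMP TABLE scratch(x int);  -- the temp schema exists once something is in it
SELECT nspname FROM pg_namespace WHERE oid = pg_my_temp_schema();
-- typically returns something like pg_temp_3
```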

Regards,
Christoph

-- 
Spare Space




Re: Unexplained disk usage in AWS Aurora Postgres

2020-08-07 Thread Christoph Moench-Tegeder
## Chris Borckholder (chris.borckhol...@bitpanda.com):

> We are experiencing a strange situation with an AWS Aurora postgres
> instance.

The main problem here is that "Amazon Aurora" is not PostgreSQL.
If I understand Amazon's documentation, what you are using is
officially named "Amazon Aurora with PostgreSQL Compatibility",
and that sums it up quite nicely: Aurora is a database engine
developed at Amazon - and its inner workings are not publicly
documented.
Whatever is using up that disk space - only AWS Support can know.

Regards,
Christoph

-- 
Spare Space




Re: shp2pgsql is missing

2020-07-26 Thread Christoph Moench-Tegeder
## Susan Hurst (susan.hu...@brookhurstdata.com):

> OS:  FreeBSD 12.1-RELEASE-p7 FreeBSD 12.1-RELEASE-p7 GENERIC  amd64

There's your answer: the FreeBSD port of PostGIS 3.0
(databases/postgis30) installs shp2pgsql only if option LOADERGUI
has been enabled on the port's build (the port defaults to
LOADERGUI=off). You need to rebuild PostGIS or convince the
maintainer of that port to switch the default (the usual way
for that is the FreeBSD bugzilla system). Perhaps one could even
argue that LOADERGUI should only toggle shp2pgsql-gui, but I
haven't really looked into that.

Regards,
Christoph

-- 
Spare Space




Re: is JIT available

2020-07-25 Thread Christoph Moench-Tegeder
## Scott Ribe (scott_r...@elevated-dev.com):

> So JIT is enabled in your conf, how can you tell from within a client
> session whether it's actually available (PG compiled with it and
> compiler available)?

pg_jit_available()  boolean  is JIT compilation available in this session

https://www.postgresql.org/docs/12/functions-info.html

Regards,
Christoph

-- 
Spare Space




Re: ownership of "/var/run/postgresql"

2020-07-15 Thread Christoph Moench-Tegeder
## Dmitry O Litvintsev (litvi...@fnal.gov):

> Upgraded to 11 and now we see that file 
> 
> /var/run/postgresql changes ownership to postgres:postgres on reboot ,
> even though postgresql-11.service is disabled. 

That's /usr/lib/tmpfiles.d/postgresql.conf (or similar). Don't
edit that file, see "man 5 tmpfiles.d" on how to override that
file and general info on the format.

Regards,
Christoph

-- 
Spare Space




Re: Log the incoming old SSL certs by pid or any way

2020-06-25 Thread Christoph Moench-Tegeder
## Durumdara (durumd...@gmail.com):

> Do we have chance to log somewhere the connected client's certificate, or
> some info about it?

There's pg_stat_ssl, and if you had a recent version of PostgreSQL
(9.6 is too old for that), you'd even have the serial number of
the certificate in there:
https://www.postgresql.org/docs/12/monitoring-stats.html#PG-STAT-SSL-VIEW

On the other hand, you could check the certificates directly,
e.g. https://github.com/matteocorti/check_ssl_cert or even
just some scripting around openssl. (That assumes that you know
where those client certificates are).
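On a version that has the view (9.5 and up; the serial number column needs newer releases, as noted above), per-connection certificate info can be pulled like this:

```sql
SELECT a.usename, a.client_addr, s.ssl, s.version, s.cipher
FROM pg_stat_ssl s
JOIN pg_stat_activity a USING (pid)
WHERE s.ssl;
```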

Regards,
Christoph

-- 
Spare Space




Re: Monitoring for long running transactions

2020-06-04 Thread Christoph Moench-Tegeder
## Samuel Smith (pg...@net153.net):

> Sorry, I should have clarified that I was aware of the pg_stat_activity 
> table. That is how we found the problem in the first place. And yes I 
> could just write a bash script and run it in cron. I just didn't know if 
> there was a more "official" way to go about this since it is probably a 
> common monitoring point and/or if something like this was already made.

This works fine for everything nagios-like:
https://bucardo.org/check_postgres/check_postgres.pl.html#txn_time

Other solutions may be found via the PostgreSQL wiki - as you noted,
this is (should be) monitored, so almost everything which can monitor
PostgreSQL covers this. (which raises the obvious question - did you
monitor your server or did you just use the common "shouting user"
approach? ;))
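For a cron/bash one-off, the underlying query is simple enough (the five-minute threshold is an arbitrary example):

```sql
SELECT pid, usename, state, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '5 minutes'
ORDER BY xact_age DESC;
```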

Regards,
Christoph

-- 
Spare Space




Re: Linux Update Experience

2020-05-29 Thread Christoph Moench-Tegeder
## Peter J. Holzer (hjp-pg...@hjp.at):

> * Update frequently. That reduces the risk of needing a package which
>   has since been deleted from a repo, but more importantly it makes it
>   easier to pinpoint the cause of a conflict.

This. Plus: make sure you can re-create any machine in a fully deterministic
manner - that way, you can easily create a test environment to match
production (minus RAM/CPU/storage) for testing upgrades beforehand.

Rationale: experience shows that using Test as "first stage" and carrying
changes forward to Production results in a "contaminated" test environment;
before long, results of failed experiments have accumulated on Test,
Production and Test are diverging, and at that point Test has lost its
purpose.
(For some people, that's a point for containerization: you don't change
a running container, but package a new one. Other environments have so
much Production with all the redundancy etc that they can "test in
production" and just scrap-and-replace failed tests, but that's not an
option if you have just a handful of systems.)

Regards,
Christoph

-- 
Spare Space




Re: Ident authentication failed

2020-03-26 Thread Christoph Moench-Tegeder
## Ted To (t...@theo.to):

> Thank you -- I added two lines to the hba file to allow for ident
> authentication, restarted postgres and still the same errors.

You probably don't want "ident" authentication - that's the thing
with "identd" (see RfC 1413), which even 20 years ago was only used
in connection with IRC, but not really for "authentication".

You'd rather want "scram-sha-256" as the authentication method (hoping
that whatever-go-uses can do that - come on, we got that in version 10),
or (if that's not possible) "md5". See
https://www.postgresql.org/docs/12/auth-methods.html and
https://www.postgresql.org/docs/12/auth-pg-hba-conf.html .
Note that "password_encryption" has to match that authentication method
from pg_hba.conf, that's described in
https://www.postgresql.org/docs/12/auth-password.html .
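As a sketch, the matching pieces could look like this (address and
names are placeholders, not your actual setup):

```
# pg_hba.conf: TCP connections from localhost authenticate with SCRAM
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  scram-sha-256

# postgresql.conf: hash newly set passwords with the matching method.
# Passwords hashed before this change must be re-set (ALTER ROLE ... PASSWORD)
# before scram-sha-256 authentication can work for those roles.
password_encryption = scram-sha-256
```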

Regards,
Christoph

-- 
Spare Space




Re: Real application clustering in postgres.

2020-03-08 Thread Christoph Moench-Tegeder
## Andrew Kerber (andrew.ker...@gmail.com):

>  The nice point of oracle
> dataguard is that it is a block by block copy, while all of the Postgres
> Multi-Master and master-slave replication solutions work by SQL capture.

https://www.postgresql.org/docs/12/warm-standby.html#STREAMING-REPLICATION
I wouldn't exactly call our streaming replication "SQL capture".

Regards,
Christoph

-- 
Spare Space




Re: Perl::DBI and TYPE of column

2020-03-03 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> My question here is: How I could get a copy of the document 
> ftp://sqlstandards.org/SC32/SQL_Registry/

Methinks that the most interesting constants of that are already in
DBI (export tag sql_types) - man DBI, /sql_types. Is that the data
you're looking for? Also look at DBD::Pg, pg_types.

Regards,
Christoph

-- 
Spare Space




Re: Postgres 12.1 : UPPER() in WHERE clause restarts server

2020-02-08 Thread Christoph Moench-Tegeder
## Nick Renders (postg...@arcict.com):

> 2020-02-08 20:21:19.942 CET [83892] LOG:  server process (PID 85456)
> was terminated by signal 9: Killed: 9

Signal 9 sounds like OOM (or manual intervention). What's in dmesg?

Regards,
Christoph

-- 
Spare Space




Re: calculating the MD5 hash of role passwords in C

2020-01-23 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> > The documentation on pg_authid has the details:
> > "The MD5 hash will be of the user's password concatenated to their user 
> > name."
> > https://www.postgresql.org/docs/12/catalog-pg-authid.html
> 
> This is still not exactly what I was looking for. But has an interesting
> detail (salting the role password by adding the role name to it). An
> implementation with UNIX crypt(3) for MD5 would need an additional salt
> like '$1$salt' to encrypt 'sisis123sisis'.

It's not crypt(3). It's "the MD5 hash of the user's password concatenated
to their user name".
Try:
perl -MDigest::MD5 -e 'print("md5" . Digest::MD5::md5_hex("sisis123" . "sisis") . "\n");'
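The same computation, sketched in Python for illustration (the helper
name is made up; password and role name are the ones from this thread):

```python
import hashlib

def pg_md5_rolpassword(password: str, username: str) -> str:
    # PostgreSQL's legacy md5 format in pg_authid:
    # "md5" + md5(password concatenated with the user name)
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

print(pg_md5_rolpassword("sisis123", "sisis"))
# per the thread, this prints: md52f128a1fbbecc4b16462e8fc8dda5cd5
```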

Regards,
Christoph

-- 
Spare Space.




Re: calculating the MD5 hash of role passwords in C

2020-01-22 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> sisis71=# select rolname, rolpassword from pg_authid where rolname = 'sisis';
>  rolname |             rolpassword
> ---------+-------------------------------------
>  sisis   | md52f128a1fbbecc4b16462e8fc8dda5cd5
> 
> I know the clear text password of the role, it is simple 'sisis123', how
> could I calculate the above MD5 hash from the clear text password, for
> example in C? Which salt is used for the crypt(3) function?

The documentation on pg_authid has the details:
"The MD5 hash will be of the user's password concatenated to their user name."
https://www.postgresql.org/docs/12/catalog-pg-authid.html

Regards,
Christoph

-- 
Spare Space




Re: Access privileges

2019-12-16 Thread Christoph Moench-Tegeder
## Daulat Ram (daulat@exponential.com):

> Can you please describe the 
>  " =Tc/postgres + postgres=CTc/postgres  +confluence=CTc/postgres".

It's all here:
https://www.postgresql.org/docs/current/ddl-priv.html

Regards,
Christoph

-- 
Spare Space




Re: Authentication: MD5 to SCRAM-SHA-256 error

2019-11-15 Thread Christoph Moench-Tegeder
## Dave Hughes (dhughe...@gmail.com):

> However when I try to log in now, via command line, I receive the error:
> "psql: authentication method 10 not supported".

Your client (more precisely: its libpq) is not ready for SCRAM.
I guess you're using an older (<10) client version? Mixed up
packages?

Regards,
Christoph

-- 
Spare Space




Re: security on user for replication

2019-11-11 Thread Christoph Moench-Tegeder
## PegoraroF10 (mar...@f10.com.br):

> How can I hide that info from users which are connected to my replica server

https://www.postgresql.org/docs/current/catalog-pg-subscription.html
  Access to the column subconninfo is revoked from normal users, because
  it could contain plain-text passwords.

Else: SSL certificates, pgpass file, or rig up some kerberos (that's
not that elegant in this case).
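For reference, the .pgpass format (all values here are placeholders):

```
# ~/.pgpass, mode 0600, one connection per line:
# hostname:port:database:username:password
primary.example.com:5432:mydb:repluser:s3cret
```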

Regards,
Christoph

-- 
Spare Space.




Re: jsonb_set() strictness considered harmful to data

2019-10-19 Thread Christoph Moench-Tegeder
## Ariadne Conill (aria...@dereferenced.org):

> NULL propagation makes sense in the context of traditional SQL.  What
> users expect from the JSONB support is for it to behave as JSON
> manipulation behaves everywhere else.

Well, some users expect that. Others are using this interface as it is
documented and implemented right now. And that's what makes this a
somewhat difficult case: I wouldn't argue for one behaviour or the
other if this was new functionality. But jsonb_set() was added in 9.5,
and changing that behaviour now will make other people about as unhappy
as you are right now.
Further, "now" is a rather flexible term: the function cannot be changed
"right now" with the next bugfix release (may break existing applications,
deterring people from installing bugfixes: very bad) and there's about
no way to get a new function into a bugfix release (catversion bump).
The next chance to do anything here is version 13, to be expected around
this time next year. This gives us ample time to think about a solution
which is consistent and works for (almost) everyone - no need to force
a behaviour change in that function right now (and in case it comes to
that: which other json/jsonb-functions would be affected?).

That creates a kind of bind for your case: you cannot rely on the new
behaviour until the new version is in reasonably widespread use.
Database servers are long-lived beasts - in the field, version 8.4
has finally mostly disappeared this year, but we still get some
questions about that version here on the lists (8.4 went EOL over
five years ago). At some point, you'll need to make a cut and require
your users to upgrade the database.

>   At some point, you have to start pondering whether the behaviour
> does not make logical sense in the context that people frame the JSONB
> type and it's associated manipulation functions.

But it does make sense from a SQL point of view - and this is a SQL
database. JSON is not SQL (the sheer amount of "Note"s in between the
JSON functions and operators documentation is proof of that), and neither
is ASN.1; "people expect" depends a lot on what kind of people you ask.
None of these expectations is "right" or "wrong" in an absolute manner.
Code has to be "absolute" in order to be deterministic, and it should
do so in a way that is unsurprising to the least amount of users: I'm
willing to concede that jsonb_set() fails this test, but I'm still not
convinced that your approach is much better just because it fits your
specific use case.

> It is not *obvious*
> that jsonb_set() will trash your data, but that is what it is capable
> of doing.

It didn't. The data still fit the constraints you put on it: none,
unfortunately. Which leads me to the advice for the time being (until
we have this sorted out in one way or another, possibly the next
major release): at least put a NOT NULL on columns which must be not
NULL - that alone would have gone a long way to prevent the issues
you've unfortunately had. You could even put CHECK constraints on
your JSONB (like "CHECK (j->'info' IS NOT NULL)") to make sure it
stays well-formed. As a SQL person, I'd even argue that you shouldn't
use JSON columns for key data - there is a certain mismatch between
SQL and JSON, which will get you now and then, and once you've
implemented all the checks to be safe, you've build a type system
when the database would have given you one for free. (And running
UPDATEs inside your JSONB fields is not as efficient as on simple
columns).
And finally, you might put some version information in your
database schema, so the application can check if all the necessary
data migrations have been run.
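Such constraints can be sketched like this (an illustration following the
thread's example, not the poster's actual schema):

```sql
CREATE TABLE users (
    id   bigint PRIMARY KEY,
    info jsonb NOT NULL
         CHECK (info->'foo' IS NOT NULL)  -- required key must stay present
);

-- An SQL NULL produced by jsonb_set() is now rejected instead of
-- silently overwriting the column:
--   UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo'->'absent');
-- fails with a not-null constraint violation.
```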

Regards,
Christoph

-- 
Spare Space




Re: jsonb_set() strictness considered harmful to data

2019-10-18 Thread Christoph Moench-Tegeder
## Ariadne Conill (aria...@dereferenced.org):

>update users set info=jsonb_set(info, '{bar}', info->'foo');
> 
> Typically, this works nicely, except for cases where evaluating
> info->'foo' results in an SQL null being returned.  When that happens,
> jsonb_set() returns an SQL null, which then results in data loss.[3]

So why don't you use the facilities of SQL to make sure to only
touch the rows which match the prerequisites?

  UPDATE users SET info = jsonb_set(info, '{bar}', info->'foo')
WHERE info->'foo' IS NOT NULL;

No special wrappers required.

Regards,
Christoph

-- 
Spare Space




Re: SELECT d02name::bytea FROM ... && DBI::Pg

2019-10-12 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> Hmm. But *I* do need the content in hex to see if the varchar column
> contains correct encoded UTF-8 data.

 select 'Hello'::bytea::text;

Regards,
Christoph

-- 
Spare Space




Re: Perl DBI converts UTF-8 again to UTF-8 before sending it to the server

2019-10-12 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> but when I now fetch the first row with:
> 
>@row = $sth->fetchrow_array;
>$HexStr = unpack("H*", $row[0]);
>print "HexStr: " . $HexStr . "\n";
>print "$row[0]\n";
> 
> The resulting column contains ISO data:

As expected: https://perldoc.perl.org/perluniintro.html
  Specifically, if all code points in the string are 0xFF or less, Perl
  uses the native eight-bit character set.

> Pdagogische Hochschule Weingarten

And then it doesn't know that your terminal expects UTF-8 (perl
just dumps the binary string here), because you didn't tell it:
"binmode(STDOUT, ':encoding(utf8)')" would fix that.
See: https://perldoc.perl.org/perlunifaq.html specifically "What if I
don't decode?", "What if I don't encode?" and "Is there a way to
automatically decode or encode?".

The whole PostgreSQL-DBI-UTF8-thingy is working: use "Tijl Müller"
as test data (that's the Dutch "ij"-digraph in there, a character
decidedly not in "latin-9/15" and therefore not "0xFF or less").
That will break your "unpack('H*')" - it tries to unpack that wide
character into a hex byte and "Character in 'H' format wrapped in
unpack". Use "print(join(' ', unpack('U*', $row[0])))" to see that
the ij has codepoint 307 (decimal).
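The same bytes-versus-characters distinction, sketched in Python terms
(not DBI-specific; the string is just an example):

```python
raw = b"P\xc3\xa4dagogische"   # UTF-8 bytes, as they arrive on the wire
text = raw.decode("utf-8")     # now an actual character string
print(text)
# "ä" is two bytes but one character:
print(len(raw), len(text))     # -> 13 12
```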

Regards,
Christoph

-- 
Spare Space




Re: Perl DBI converts UTF-8 again to UTF-8 before sending it to the server

2019-10-04 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> my $text = "ä \xc3\xa4";

That will only work if you remove "use utf8". And then other stuff may
break.

Regards,
Christoph

-- 
Spare Space




Re: Perl DBI converts UTF-8 again to UTF-8 before sending it to the server

2019-10-04 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> my $text = "\xc3\xa4";
> print "text: ".$text."\n";

Your output is lying to you:
you need a binmode(STDOUT, ':encoding(utf8)'), which will make this print
"ä", and a utf8::decode($text), after which you get "ä". And when you
pass that $text through DBD::Pg, it will still be an "ä". And when you
change $text to "ä", it still works. Most other combinations will
either fail with "\xc3\xa4" or "ä".

Welcome to modern perl's handling of utf8. Cue "Everything is Fine" meme.

Regards,
Christoph

-- 
Spare Space




Re: pgbouncer with ldap

2019-09-09 Thread Christoph Moench-Tegeder
## Ayub M (hia...@gmail.com):

> It has hba and via hba file one can specify ldap connections
> 
> https://www.postgresql.org/docs/9.3/auth-pg-hba-conf.html

https://pgbouncer.github.io/config.html#hba-file-format
"Auth-method field: Only methods supported by PgBouncer’s auth_type
are supported", and "ldap" is not supported.
When there's no ldap support in pgbouncer, there's no ldap support
in pgbouncer.

Regards,
Christoph

-- 
Spare Space




Re: pgbouncer with ldap

2019-09-09 Thread Christoph Moench-Tegeder
## Ayub M (hia...@gmail.com):

> Hello, I am trying to setup pgbouncer with ldap to connect with rds
> postgres. Downloaded latest version of pbbouncer (1.11) and using hba
> auth_type. Getting below error when starting pgbouncer daemon. Am I
> missing something?

There's no "ldap" mentioned anywhere in the docs:
 https://pgbouncer.github.io/config.html
That could be a hint that ldap isn't supported...

Regards,
Christoph

-- 
Spare Space




Re: Connecting to NOTIFY with telnet

2019-05-04 Thread Christoph Moench-Tegeder
## Igal Sapir (i...@lucee.org):

> My main "issue" is that the official pgjdbc driver does not support the
> notifications with listen and I was trying to figure out why.

https://jdbc.postgresql.org/documentation/head/listennotify.html

Regards,
Christoph

-- 
Spare Space




Re: Migrating database(s) from Sybase ASE 15.7 to PostgreSQL 10.6 on Linux

2019-05-03 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> Re/ the migration of the data itself, are there any use case studies
> which could we keep in mind?

https://wiki.postgresql.org/images/e/e7/Pgconfeu_2013_-_Jens_Wilke_-_Sybase_to_PostgreSQL.pdf

Regards,
Christoph

-- 
Spare Space




Re: Missing pg_config on SuSE SLES 12 for PostgreSQL 10

2019-04-29 Thread Christoph Moench-Tegeder
## Matthias Apitz (g...@unixarea.de):

> The server in question is SLES12-SP3 and I can't update to SP4 at the
> moment. I have installed the following pkg:

> # rpm -qa | egrep 'postgre|libpq' | sort
> libpq5-10.6-1.6.1.x86_64

Ah, right, there's also a postgresql10 in SP3.

> How can I activated/installed the "SDK" repositories to get rid of the
> PGDG packages? Thanks in advance.

Use SUSEConnect... "SUSEConnect --list-extensions" will show all available
extensions and the command lines to activate/deactivate them (the SDK
would be registered by running "SUSEConnect -p sle-sdk/12.3/x86_64", but
check first...). Alternatively you could use yast and friends.
You might have to deactivate the PGDG repos and remove those rpms, else
zypper might try to "upgrade" the Suse packages with the PGDG ones.

Else: get PostgreSQL 11 from the PGDG repos and hope Suse doesn't
give you their own postgresql11 packages sometime soon.

Regards,
Christoph

-- 
Spare Space




Re: Problem with commit in function

2018-10-30 Thread Christoph Moench-Tegeder
## Mike Martin (redt...@gmail.com):

> Subject: Problem with commit in function

You can't commit inside a FUNCTION - and there's an obvious ERROR if
you try to do that: "invalid transaction termination".
Only since version 11 can you use a PROCEDURE and COMMIT/ROLLBACK
inside it - and the procedure must be written in PL/pgSQL (you
don't get transaction control in SQL procedures - but besides the
LANGUAGE marker, your code would be the same).
See
  https://www.postgresql.org/docs/current/static/sql-createprocedure.html
  https://www.postgresql.org/docs/current/static/plpgsql-transactions.html
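A minimal sketch of the PROCEDURE variant (PostgreSQL 11+; the procedure
name is made up, the table name is taken from your post):

```sql
CREATE PROCEDURE process_log() LANGUAGE plpgsql AS $$
BEGIN
    TRUNCATE TABLE postgres_log_tmp;
    COMMIT;  -- legal in a procedure; inside a FUNCTION this raises
             -- ERROR: invalid transaction termination
END;
$$;

CALL process_log();
```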

> truncate table postgres_log_tmp  ;

You might want to look into temporary tables (perhaps even unlogged
ones) - that will save you the hassle of truncating (temporary tables
are even automatically removed), and with an unlogged temp table it
will save you some WAL I/O.

> --COMMIT;

So, is this on or not?
Oh, and please pay attention to the errors PostgreSQL throws at
you - they're significant.

Regards,
Christoph

-- 
Spare Space



Re: rw_redis_fdw: SQL Errors when statement is within a function

2018-10-29 Thread Christoph Moench-Tegeder
## GPT (gptmailingli...@gmail.com):

> - In PG10.5 I run, out of function, a simple statement for 5 times
> successfully and the 6th time I get an error "KEY is NULL". In the
> meantime of these times I added, removed code, packages got updated,
> etc. Suddenly, an error. Key is NULL!!!??? Check the key, write
> statements to check the value of the key. Eh, it is not NULL! Hm, undo
> all changes, start again! Oh, now it runs! Ok, redo the changes one by
> one. Ah, ok still run. Suddenly, error again! Check again and again.
> Ok check Redis. Uninstall packages, reinstall packages... Finally,
> install PG9.6 make it run, install fdw to the new system, check the
> environment. OK run. Keep it as it is!

Your testing/debugging/validating processes seem to be a little
erratic. Sometimes it does make sense to build minimal test cases;
and to understand why things don't work or seem to be working again.

> So, in order this thread to get over:
> - PG developers made a drastic change. Not problem at all, more then welcome.
> - I was the "lucky guy" who had a painful experience. These things
> happen as Adrian wrote, and life goes on.

Let me add another thing here: you're using software which is marked
as "not really ready" (rw_redis_fdw/README.md: "work in progress and may
experience significant changes until it becomes stable") which
uses very low level, internal interfaces to PostgreSQL; in a perhaps
not-trivial project (there's PostgreSQL, Redis, rw_redis_fdw, you mentioned
node.js and microservices...). I'm not sure you really did calculate the
project risk of that. In some environments, such setup are called
"technology jenga": deep stack with an unstable equilibrium.

> What I would like to ask from developers is:
> 
> Please, if possible improve the error system!
> 
> Especially when there are internal changes which may affect
> dynamically the outcome (from expected results to ERROR or whatever)
> of a correct statement.

I'm not sure we can do very much when external (not under any of our
control - I guess most people here wouldn't even have known about
rw_redis_fdw until you mentioned it) components use interfaces in
a fragile way. And as it was noted several times: the developer of
that component identified the problem with the debug output (nothing
fancy here) and fixed it.

Regards,
Christoph

-- 
Spare Space



Re: rw_redis_fdw: SQL Errors when statement is within a function

2018-10-27 Thread Christoph Moench-Tegeder
## GPT (gptmailingli...@gmail.com):

> > And the important thing is: there is no guarantee that the same SQL
> > statement will always execute with the same plan:
> + Yes but there should be guarantee that when the statement is free of
> any syntactic error to be executed successfully and return the
> expected result!!!

It does. Only there's often more than one way to get the correct
result; and PostgreSQL picks the plan which looks "best". And just
for the record: you haven't actually claimed that PostgreSQL returns
the wrong result. The only observation you gave was that PostgreSQL
"sometimes" switches the way to get that result. And to that the
answer is "yes, it does".

> + (By the way, I grub the opportunity. I use DBeaver because Admin III
> does not work properly with pg10 and 11 and BECAUSE Admin4 is a
> NIGHTMARE to install it and make it to work (from the point of a
> simple user!!!))

I wouldn't know about that, there are pre-built packages available
for my systems (which I can install with one command). Anyway, I
use psql for about all things PostgreSQL.

> + What else you need from me to help you find out the source of the problem?

First of all, we'd need to see a problem. As long as the correct data
is returned (and/or written), there is no obvious problem. How PostgreSQL
handles your query is for most cases an implementation detail, which
"normal" users can safely ignore.
The only problem we have seen so far was in rw_redis_fdw (and it has
been fixed) - it could not handle all the plans PostgreSQL could use.

Regards,
Christoph

-- 
Spare Space.



Re: rw_redis_fdw: SQL Errors when statement is within a function

2018-10-26 Thread Christoph Moench-Tegeder
## GPT (gptmailingli...@gmail.com):

> I have searched in
> https://github.com/nahanni/rw_redis_fdw/blob/master/redis_fdw.c for
> PREPARE and EXECUTE keywords. There are not any of them, except in
> comments.

Of course not - the FDW does not execute SQL on the PostgreSQL side,
but sends commands to redis.
Still, the FDW operations are part of PostgreSQL's query plan; they
have to be - everything PostgreSQL does is organized in plans, and the
operations done on the Foreign Server (redis in this case) have to
have their interfaces in the plan.

And the important thing is: there is no guarantee that the same SQL
statement will always execute with the same plan: One reason would be
changing table statistics, another is when PostgreSQL switches to
the generic plan for a prepared statement. Your case looks like the
latter, especially the observation "After that (6th time)" in
https://github.com/nahanni/rw_redis_fdw/issues/13#issuecomment-428670890
hints at that.
So, where does that prepared statement come from? You don't really
describe your environment... It's unlikely that you're calling PREPARE
yourself - but some drivers are notorious for that (Perl DBI's
$dbh->prepare() or JDBC's PreparedStatement come to mind), and
even PL/pgSQL uses prepared statements internally:
https://www.postgresql.org/docs/11/static/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING

So: plans are not stable between query executions, and you may have
prepared statements without knowing that.

Regards,
Christoph

-- 
Spare Space.



Re: rw_redis_fdw: SQL Errors when statement is within a function

2018-10-25 Thread Christoph Moench-Tegeder
## GPT (gptmailingli...@gmail.com):

> So, this kind of switch after a few goes is a normal behavior or
> something unexpected which will change in future?

It's expected, and even documented (when you look at the user-level
interface):
https://www.postgresql.org/docs/current/static/sql-prepare.html
  Prepared statements can use generic plans rather than re-planning
  with each set of supplied EXECUTE values. This occurs immediately
  for prepared statements with no parameters; otherwise it occurs
  only after five or more executions produce plans whose estimated
  cost average (including planning overhead) is more expensive than
  the generic plan cost estimate.
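That documented rule can be sketched as a toy decision function (a
deliberate simplification of what the planner really does; the cost
numbers in the examples are made up):

```python
def use_generic_plan(executions: int, avg_custom_cost: float,
                     generic_cost: float, has_params: bool = True) -> bool:
    # No parameters: the generic plan is used immediately.
    if not has_params:
        return True
    # With parameters: only after five executions, and only if custom
    # planning (including planning overhead) is no cheaper on average.
    return executions >= 5 and avg_custom_cost >= generic_cost

print(use_generic_plan(4, avg_custom_cost=120.0, generic_cost=100.0))  # False
print(use_generic_plan(5, avg_custom_cost=120.0, generic_cost=100.0))  # True
print(use_generic_plan(9, avg_custom_cost=80.0, generic_cost=100.0))   # False
```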

Regards,
Christoph

-- 
Spare Space.



Re: Privilege mess?

2018-10-09 Thread Christoph Moench-Tegeder
## Thiemo Kellner (thi...@gelassene-pferde.biz):

> I installed pglogger (https://sourceforge.net/projects/pglogger/) and
> try to insert into the "level" table as user "act" but it fails
> claiming insufficient privileges even though insert is granted to
> public (see below). What am I missing?

Schema privileges.

> ERROR:  permission denied for schema logger
> LINE 1: insert into logger.LEVEL (

It says "permission denied for schema", so this is not about table
privileges (GRANT INSERT/UPDATE/... ON TABLE ...), but about schema
privileges. I'd guess you're missing USAGE on schema logger.
See https://www.postgresql.org/docs/current/static/sql-grant.html

AFAIK those privileges are not found in information_schema, you'll
have to use pg_catalog for that.

Regards,
Christoph

-- 
Spare Space.



Re: Out of Memory

2018-09-27 Thread Christoph Moench-Tegeder
## Laurenz Albe (laurenz.a...@cybertec.at):

> vm.overcommit_memory = 2
> vm_overcommit_ratio = 100
> 
> Linux commits (swap * overcommit_ratio * RAM / 100),
                      ^
                      That should be a "+".

See Documentation/sysctl/vm.txt and Documentation/vm/overcommit-accounting.rst
in your kernel tree.
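The corrected formula, as quick arithmetic (sizes here are made up):

```python
def commit_limit(swap_bytes: int, ram_bytes: int, overcommit_ratio: int) -> int:
    # vm.overcommit_memory = 2: CommitLimit = swap + RAM * overcommit_ratio / 100
    return swap_bytes + ram_bytes * overcommit_ratio // 100

GiB = 1 << 30
# No swap, ratio 100: exactly the RAM may be committed.
print(commit_limit(0, 64 * GiB, 100) // GiB)   # -> 64
# The kernel's default ratio of 50 would allow only half of RAM without swap.
print(commit_limit(0, 64 * GiB, 50) // GiB)    # -> 32
```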

> so without any swap the correct value would be 100.
> 
> I don't know how safe it is to got entirely without swap.

That's completely safe. Even more: if your production machine ever needs
swap, it's already mostly dead in the water. So there's also no benefit in
having the kernel copy stuff to swap "just in case" (read: set sysctl
vm.swappiness to 0) and use the swap disk space for something more
productive. If your memory usage ever exceeds available memory, something
already went horribly wrong, and you want to know about it right now.
A good way to learn about this kind of mistake is having stuff fail
hard, obviously and loudly (i.e. Out Of Memory). Do not try to save
some day by swapping: stuff is "sometimes slow" and if you really need
the swap, everything crawls to a halt anyway.
Of course, this does not hold for personal workstations and
the like where you might want to use the swap space for suspend to
disk, or have some buffer in case of runaway processes.

Regards,
Christoph

-- 
Spare Space.



Re: postgresql systemd service fails to start only on boot but not manually

2018-09-22 Thread Christoph Moench-Tegeder
## Doron Behar (doron.be...@gmail.com):

> My server fails to start PostgreSQL only on boot, if I restart it
> manually afterwards it doesn't have any problem starting. Here is the
> log extracted from the journal:
> 
> ```
> 2018-09-21 20:46:40.028 CEST [306] LOG:  listening on IPv4 address "127.0.0.1", port 5432
> 2018-09-21 20:46:40.036 CEST [306] LOG:  listening on Unix socket "/run/postgresql/.s.PGSQL.5432"
> 2018-09-21 20:46:40.233 CEST [337] LOG:  database system was shut down at 2018-09-21 20:46:21 CEST
> 2018-09-21 20:48:10.441 CEST [352] WARNING:  worker took too long to start; canceled
> 2018-09-21 20:49:10.469 CEST [352] WARNING:  worker took too long to start; canceled

This would indicate that your machine is overloaded during start -
perhaps there's just too much being started at the same time?
ObRant: that's what happens if people take "system startup duration"
as a benchmark and optimize for that - sure, running one clumsy shell
script after another isn't effective usage of today's systems,
but starting eight dozens programs all at once may have other
side effects. Really, with the hardware taking small ages to find
its own arse before even loading the boot loader, those few seconds
weren't worth optimizing - and if people reboot their computers so
often that startup time takes a measurable toll on their productive
day, perhaps they should rather spend their time thinking about their
usage pattern than "optimizing" the startup process.

So, now that I've got that off my chest... your machine probably tries to
do too much at the same time when booting: the worker processes take
longer than 90 seconds to start. Slow CPU or storage maybe?

> 2018-09-21 20:49:10.478 CEST [306] LOG:  database system is ready to accept connections
> 2018-09-21 20:49:10.486 CEST [306] LOG:  received fast shutdown request

And in the meantime, systemd has lost its patience, declares the
start as failed and terminates the process group. (The default systemd
timeout is 90 seconds, at least in some releases of systemd, so
this fits quite nicely).

You could try to work around this by increasing TimeoutStartSec
in postgresql's systemd unit (or even globally), which perhaps
only hides the problem until the next service suddenly doesn't
start anymore.
You could move postgresql to the end of the boot order by
adding "After=..." to the Unit section of the systemd service
file, the value behind "After=" being all the other services in
the same target, which should reduce parallelism and improve
PostgreSQL's startup behaviour.
A more advanced variant of that would be to create a new
systemd target, make that start "After" multiuser.target
or even graphical.target (depending on your setup), make sure
it "Requires" the current default systemd target and make
postgresql the only additional service in that target.
(This would be the cleanest solution, but you should get some
grasp of systemd and how your specific distribution uses it
before meddling with the default targets; I don't know every
distribution/version variant of systemd integration, so I
can't give that specific instructions here).
Or you figure out what the heck your machine is running
during startup and why it is that slow, and try to fix that.
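The TimeoutStartSec workaround, as a drop-in sketch (path, unit name and
value are examples - the unit is named differently on some distributions;
run "systemctl daemon-reload" afterwards):

```
# /etc/systemd/system/postgresql.service.d/override.conf
[Service]
TimeoutStartSec=300
```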

Regards,
Christoph

-- 
Spare Space



Re: Sv: Re: How to get shorter SERVER_VERSION in psql-prompt?

2018-09-03 Thread Christoph Moench-Tegeder
## Rob Sargent (robjsarg...@gmail.com):

> > Ugh.  (So this is coming from "configure --with-extra-version" stuff)

> Does that also diddle the value of "server_version_num"?

No, that's still integer-format (it's unchanged and you can cast it
straight into INTEGER).

Gruss,
Christoph

-- 
Spare Space.



Re: How to get shorter SERVER_VERSION in psql-prompt?

2018-09-03 Thread Christoph Moench-Tegeder
## Andreas Joseph Krogh (andr...@visena.com):

> This results in this verver_version:
> 10.5 (Ubuntu 10.5-1.pgdg18.04+1)
>   
> Is it possible to adjust this somehow so it outputs only "10.5"?

On Debian/Ubuntu, all version strings are somewhat extended.
Luckily, with the power of SQL we're not completely helpless, so try
this in your .psqlrc (somewhat simpler than your prompt, but you
get the idea):

select substring(current_setting('server_version') from '#"[^ ]+#"( +%)?' for '#') as short_server_ver\gset
\set PROMPT1 '%/ %:short_server_ver: %R%# '

Yes, that works across \c.
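For illustration, the same "everything up to the first space" extraction
outside psql (version string taken from your post):

```python
import re

server_version = "10.5 (Ubuntu 10.5-1.pgdg18.04+1)"
short = re.match(r"[^ ]+", server_version).group()
print(short)   # -> 10.5
```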

Happy prompting,
Christoph

-- 
Spare Space.



Re: Can't compile postgresql 11 on FreeBSD 11.1

2018-07-17 Thread Christoph Moench-Tegeder
## Márcio Antônio Sepp (mar...@zyontecnologia.com.br):

> I’m trying to compile PostgreSQL 11beta2 but this errors occur:
> 
> root@srvbacula:/postgresql/postgresql-11beta2 # ./configure

Stop right here and try using the same configure command line
as the port (postgresql10-server, as there's no v11 port yet).
At the very minimum, you will need "--with-libraries=/usr/local/lib"
and "--with-includes=/usr/local/include". That makes configure
pass for me.

Regards,
Christoph

-- 
Spare Space



Re: Using COPY to import large xml file

2018-06-25 Thread Christoph Moench-Tegeder
## Anto Aravinth (anto.aravinth@gmail.com):

> Sure, let me try that.. I have a question here, COPY usually works when you
> move data from files to your postgres instance, right? Now in node.js,
> processing the whole file, can I use COPY
> programmatically like COPY Stackoverflow ?
> Because from doc:
> 
> https://www.postgresql.org/docs/9.2/static/sql-copy.html
> 
> I don't see its possible. May be I need to convert the files to copy
> understandable first?

"COPY ... FROM STDIN"
 STDIN Specifies that input comes from the client application.

It's on the page...

Regards,
Christoph

-- 
Spare Space.



Re: Using COPY to import large xml file

2018-06-24 Thread Christoph Moench-Tegeder
## Adrien Nayrat (adrien.nay...@anayrat.info):

> I used this tool :
> https://github.com/Networks-Learning/stackexchange-dump-to-postgres

That will be awfully slow: this tool commits each INSERT on its own,
see loop in
https://github.com/Networks-Learning/stackexchange-dump-to-postgres/blob/master/load_into_pg.py#L83

With only small changes - prepare the INSERT, execute for all (or at
least a huge lot of) rows, COMMIT at the end - you can save quite a lot
of time (500 rows is not "a huge lot"). And when you do that, for
heaven's sake, do not try to create the INSERT statement as a string
with the values - Bobby Tables will eat you. See psycopg documentation
on how it's done (especially watch the warnings):
http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
On prepared statements with psycopg2, see
http://initd.org/psycopg/articles/2012/10/01/prepared-statements-psycopg/
(python makes stuff rather hard, but I'll spare you the snark and wait until
the language has matured to at least version 5).
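As a sketch of the batched-and-parameterized pattern (table and column names invented; psycopg2.extras.execute_values does the multi-row expansion, and the driver adapts the parameters - no SQL built from strings, so Bobby Tables stays hungry):

```python
def chunked(rows, size):
    """Yield successive batches of at most `size` rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= size:
            yield batch
            batch = []
    if batch:
        yield batch

def load(conn, rows, batch_size=10000):
    # psycopg2 imported lazily so the helper above works without it installed.
    from psycopg2.extras import execute_values
    with conn.cursor() as cur:
        for batch in chunked(rows, batch_size):
            # execute_values() expands the VALUES %s placeholder into one
            # multi-row INSERT per batch, with properly adapted parameters.
            execute_values(
                cur,
                "INSERT INTO posts (id, title) VALUES %s",  # hypothetical table
                batch)
    conn.commit()  # one COMMIT at the end, not one per row
```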

Using the COPY protocol with psycopg2 seems to require some hoop-jumping,
but could improve matters even more.

Regards,
Christoph

-- 
Spare Space.



Re: Database connection log

2018-06-14 Thread Christoph Moench-Tegeder
## Tiffany Thang (tiffanyth...@gmail.com):

> Does PostgreSQL keep a log of client connections to the database like
> Oracle's listener.log? I would like to extract information such as how many
> connections are made to the database daily, the IP addresses they
> originated from and the schemas they are connected to. Would it be possible
> to extract the above information in PostgreSQL?

Set "log_connections" to "on":
https://www.postgresql.org/docs/10/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
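In postgresql.conf that's simply (log_disconnections added here as a suggestion - it logs the session duration at disconnect, which helps with the "how many per day" bookkeeping):

```
log_connections = on
log_disconnections = on
```

A reload (pg_ctl reload, or SELECT pg_reload_conf()) is enough, no restart needed; the settings apply to sessions started after the reload.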

Regards,
Christoph

-- 
Spare Space



Re: binaries for 11 beta compiled with --with-llvm?

2018-05-29 Thread Christoph Moench-Tegeder
## Thomas Kellerer (spam_ea...@gmx.net):

> But what about Linux binaries with JITting enabled? 

The Debian packages do have JIT enabled.
https://www.postgresql.org/download/linux/debian/

Regards,
Christoph

-- 
Spare Space



Re: Will Altering and Modifying tables during backup result in a corrupted server after the restore?

2018-05-21 Thread Christoph Moench-Tegeder
## Yashwanth Govinda Setty (ygovindase...@commvault.com):

>   2.  Restore the server with transaction logs

This is missing a lot of details. If you do it right - see your email
thread from one week ago - you will be able to recover the database
server to a state as of the _end_ of the backup process (as marked by
the return of the pg_stop_backup() command).
If you do not follow the backup/restore documentation to the letter,
the database will be corrupted and will not start (sometimes people
report success with haphazard backup schemes, but that's more luck than
they deserve, and nobody should rely on it).
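For the archives, the low-level (non-exclusive) sequence looks roughly like this - function names as of 9.6/10, and the session issuing pg_start_backup() must stay connected throughout:

```sql
-- start a non-exclusive base backup
SELECT pg_start_backup('nightly', false, false);
-- ... copy the data directory with your file-level tool ...
SELECT * FROM pg_stop_backup(false);
-- store the returned backup label and tablespace map alongside the copy,
-- and archive all WAL up to the position pg_stop_backup() returns
```

That last point is what makes the backup consistent as of the *end* of the process.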

Regards,
Christoph

-- 
Spare Space



Re: Recommended way to copy database files on Windows OS (to perform file system level backup)

2018-05-14 Thread Christoph Moench-Tegeder
## Yashwanth Govinda Setty (ygovindase...@commvault.com):

> We are facing this problem while performing file system level backup of 
> database files:
> As each database will form a directory inside Base directory which consists 
> of files representing the tables, when some tables are dropped during backup, 
> We get error while copying since the files do not exist anymore.

This looks like you read only the first sentence of the relevant
documentation:
https://www.postgresql.org/docs/current/static/backup-file.html

The "recommended way" is not "use tar", but to heed both restrictions
mentioned there. Especially, if you want to do a naive copy of the files,
the database has to be shut down.
If shutting down the database is not an option (most cases), your
option is a "base backup" - there's pg_basebackup to help you with
that, and you can do that "the hard way" with the low level API.
Both approaches are described here:
https://www.postgresql.org/docs/current/static/continuous-archiving.html#BACKUP-BASE-BACKUP
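A typical pg_basebackup invocation, as a sketch (host, user and target directory are placeholders):

```
pg_basebackup -h db.example.com -U replication_user \
    -D /backup/base -Ft -z -Xs -P
```

-Ft/-z gives compressed tar output, -Xs streams the WAL needed for a consistent restore, -P shows progress.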

This is mostly independent of the OS - but you have to make sure to
follow the process (shutting down the database beforehand, or do the
full base backup) - else the database will be corrupted after restore
and may even fail to start.

Regards,
Christoph

-- 
Spare Space.



Re: pg_dump to a remote server

2018-04-17 Thread Christoph Moench-Tegeder
## Ron (ronljohnso...@gmail.com):

> > pg_dump -h host1 dbname | psql -h host2 dbname
> 
> But that assumes --format=plain which will send a whole lot of
> uncompressed text across the wire.

You can also use pg_restore with standard input, i.e. pg_dump | pg_restore.
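I.e., with the custom format to get compression on the wire, something like (hostnames are placeholders):

```
pg_dump -h host1 -Fc dbname | pg_restore -h host2 -d dbname
```

Note that you lose pg_restore's parallel mode (-j) this way - parallel restore does not work from standard input.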

Regards,
Christoph

-- 
Spare Space.