stored procedure call is not working with "select procedure()" option

2018-02-19 Thread Abhra Kar
Hi,
 I have a stored procedure named "procedure()", which I am calling
with:

Session sess = (Session) entityManager.getDelegate(); // entityManager is a
properly initialised javax.persistence.EntityManager
  sess.createSQLQuery("select procedure()");

The procedure contains some truncate statements, e.g. EXECUTE('truncate
table abc');

It does not throw any exception, but the procedure is not executed. Using the
JDBC-style call syntax -- sess.createSQLQuery("{ call procedure()
}").executeUpdate(); -- the procedure executes fine.

What needs to be changed here?

Thanks and Regards,
Abhra


Re: Error when compiling postgresql 9.6.7 with Visual Studio 15.5.6

2018-02-19 Thread Michael Paquier
On Mon, Feb 19, 2018 at 04:17:18PM -0500, Tom Lane wrote:
> Well, as far as I'm concerned the patch that was proposed there is
> far too fragile to be acceptable.  We need to find out what the
> Microsoft-approved way of getting the information is.  Assuming that
> we know the contents of non-exported data structures is surely not
> what they have in mind, and it would undoubtedly break in the next VS
> update.

Definitely agreed.  The locale-related code is *the* pain point when it
comes to MSVC things.  And each time we add support for a new version of
MSVC there is always something different happening and breaking.  There
has never been any discussion around ResolveLocaleName(), though.  A
downside is that it would raise the minimum supported version bar on
Windows.  Still, that would be worth a serious look.
--
Michael




Re: pgBackRest backup from standby

2018-02-19 Thread Don Seiler
On Mon, Feb 19, 2018 at 2:53 PM, David Steele  wrote:

>
> Also, relocating the log directory is easy using the log_directory
> setting, so that's what I recommend if it's an issue.  Some users do
> want to backup their logs.
>
>
That's probably a much better idea than symlinking anyway. I'll look to do
that in my next round of config changes.
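
(For reference, a minimal sketch of relocating the logs that way -- the path
is illustrative, and log_directory takes effect on a reload:)

ALTER SYSTEM SET log_directory = '/var/log/postgresql';  -- hypothetical path outside PGDATA
SELECT pg_reload_conf();                                 -- applies without a restart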



-- 
Don Seiler
www.seiler.us


Re: Empty ./share/timezone/UTC and failure to set UTC as timezone

2018-02-19 Thread Sasa Vilic
Hi Tom,

thanks for the quick response. I have to apologize: PG built from source is
fine. Sorry for wasting your time.

The UTC file only becomes empty after I pack the installation into an archive
(tar czf postgresql-9.6.7-linux-x64-binaries.tar.gz pgsql) and extract it
again, so it looks like a bug in the Java library implementation of tar.gz --
presumably it writes zero-byte files for the hard-link entries that tar uses
for the multiply-linked timezone files.

Once again, sorry for wasting your time. Thank you very much for your
support.

Kind Regards,

Sasa Vilic

2018-02-19 17:33 GMT+00:00 Tom Lane :

> Sasa Vilic  writes:
> > To double-check, I took the official 9.6.7 sources, compiled and installed
> > them locally (/tmp/pgsql), tried connecting with the client again, and got
> > the same error message.
>
> Hm, what platform and compiler are you using, exactly?
>
> > To investigate further, I attached strace to the backend and noticed that
> > PG is able to find and read the file ./share/timezone/UTC. But right after
> > reading the file, we get the error message above.
> > We noticed that the UTC file is empty! Is this intentional or is this a
> > bug? Am I missing something?
>
> It should certainly not be empty.  On my machine it's 127 bytes long:
>
> $ ls -l share/timezone/UTC
> -rw-r--r--. 6 postgres postgres 127 Feb 19 12:27 share/timezone/UTC
>
> and file(1) knows what it is:
>
> $ file share/timezone/UTC
> share/timezone/UTC: timezone data, version 2, 1 gmt time flag, 1 std time
> flag, no leap seconds, no transition times, 1 abbreviation char
>
> I wonder if you've tripped over some portability issue in the zic
> compiler.  Another idea, seeing that this file is multiply-linked
> in the timezone install tree, is that maybe you're on a filesystem
> that does strange things with hard links.
>
> It'd be interesting to see the rest of your timezone directory.  Mine
> looks like
>
> total 248
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Africa
> drwxr-xr-x.  6 postgres postgres 4096 Feb 19 12:27 America
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Antarctica
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Arctic
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Asia
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Atlantic
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Australia
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Brazil
> -rw-r--r--.  1 postgres postgres 2102 Feb 19 12:27 CET
> -rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 CST6CDT
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Canada
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Chile
> -rw-r--r--.  2 postgres postgres 2437 Feb 19 12:27 Cuba
> -rw-r--r--.  1 postgres postgres 1876 Feb 19 12:27 EET
> -rw-r--r--.  1 postgres postgres  127 Feb 19 12:27 EST
> -rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 EST5EDT
> -rw-r--r--.  2 postgres postgres 1972 Feb 19 12:27 Egypt
> -rw-r--r--.  2 postgres postgres 3543 Feb 19 12:27 Eire
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Etc
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Europe
> -rw-r--r--.  1 postgres postgres  148 Feb 19 12:27 Factory
> -rw-r--r--.  7 postgres postgres 3687 Feb 19 12:27 GB
> -rw-r--r--.  7 postgres postgres 3687 Feb 19 12:27 GB-Eire
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT+0
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT-0
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 GMT0
> -rw-r--r--. 10 postgres postgres  127 Feb 19 12:27 Greenwich
> -rw-r--r--.  1 postgres postgres  128 Feb 19 12:27 HST
> -rw-r--r--.  2 postgres postgres 1189 Feb 19 12:27 Hongkong
> -rw-r--r--.  2 postgres postgres 1188 Feb 19 12:27 Iceland
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Indian
> -rw-r--r--.  2 postgres postgres 1718 Feb 19 12:27 Iran
> -rw-r--r--.  3 postgres postgres 2265 Feb 19 12:27 Israel
> -rw-r--r--.  2 postgres postgres  507 Feb 19 12:27 Jamaica
> -rw-r--r--.  2 postgres postgres  318 Feb 19 12:27 Japan
> -rw-r--r--.  2 postgres postgres  259 Feb 19 12:27 Kwajalein
> -rw-r--r--.  2 postgres postgres  655 Feb 19 12:27 Libya
> -rw-r--r--.  1 postgres postgres 2102 Feb 19 12:27 MET
> -rw-r--r--.  1 postgres postgres  127 Feb 19 12:27 MST
> -rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 MST7MDT
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Mexico
> -rw-r--r--.  4 postgres postgres 2460 Feb 19 12:27 NZ
> -rw-r--r--.  2 postgres postgres 2087 Feb 19 12:27 NZ-CHAT
> -rw-r--r--.  4 postgres postgres 2453 Feb 19 12:27 Navajo
> -rw-r--r--.  5 postgres postgres  414 Feb 19 12:27 PRC
> -rw-r--r--.  1 postgres postgres 2294 Feb 19 12:27 PST8PDT
> drwxr-xr-x.  2 postgres postgres 4096 Feb 19 12:27 Pacific
> -rw-r--r--.  2 postgres postgres 2705 Feb 19 12:27 Poland
> -rw-r--r--.  2 postgres postgres 3453 Feb 19 12:27 Portugal
> -rw-r--r--.  2 postgres postgres  790 Feb 19 12:27 ROC
> -rw-r--r--.  2 postgres postgres  531 Feb 19 12:27 ROK
> -rw-r--r--.  2 postgres postgres  424 

Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
On 2/19/18 2:05 PM, Don Seiler wrote:
> On Mon, Feb 19, 2018 at 12:39 PM, David Steele wrote:
> 
> 
> I read "open it for testing (or backups in this case)" as letting
> recovery complete and promoting the cluster to a master before taking
> the backup.
> 
> Don, is that the case?  If it is, I think there's a problem with or
> without a timeline switch.  If you confirm the backup is being taken as
> above then I'll detail my concerns.
> 
> 
> Note that this is just for creating a couple of one-off backups to
> restore for our dev and pre-prod environments. Given that, I was going
> to open a new clone as its own cluster and take backups from that. The
> data would be the same though, and would suit the purposes of the dev and
> pre-prod refreshes.
> 
> If I were taking backups for the purpose of production backups, I would
> not do things this way. That is the eventual plan but right now we
> aren't ready to make the changes necessary in the production environment.

OK, that's fine then.  You can replay these to consistency and they'll be
fine.  I just wouldn't try to do any PITR using the production WAL archive.

-- 
-David
da...@pgmasters.net



Empty ./share/timezone/UTC and failure to set UTC as timezone

2018-02-19 Thread Sasa Vilic
Hello,

we are using "embedded" PostgreSQL for integration tests (see
https://github.com/yandex-qatools/postgresql-embedded) and we have noticed
that our (Java) clients fail to connect when their timezone is UTC, with the
PG server giving the following error message:

invalid value for parameter "TimeZone": "UTC"

To double-check, I took the official 9.6.7 sources, compiled and installed
them locally (/tmp/pgsql), tried connecting with the client again, and got
the same error message.

When using psql, the same happens:

postgres=# set timezone='UTC';
ERROR:  invalid value for parameter "TimeZone": "UTC"

To investigate further, I attached strace to the backend and noticed that PG
is able to find and read the file ./share/timezone/UTC. But right after
reading the file, we get the error message above.

We noticed that the UTC file is empty! Is this intentional or is this a bug?
Am I missing something?
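
(A quick server-side cross-check -- a minimal sketch; pg_timezone_names is
built from the same timezone files, so a broken UTC definition should show up
as a missing row here:)

SELECT name, abbrev, utc_offset
FROM pg_timezone_names
WHERE name IN ('UTC', 'Etc/UTC', 'GMT');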

Kind Regards,

Sasa Vilic


Error when compiling postgresql 9.6.7 with Visual Studio 15.5.6

2018-02-19 Thread Aditya Nugraha
Hello everyone,

   When trying to compile postgresql 9.6.7 with Visual Studio 15.5.6,
I am getting errors at the following lines:
"C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\pgsql.sln"
(default targe
t) (1) ->
"C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\ascii_and_mic.vcxproj"
(d
efault target) (5) ->
"C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxproj"
(defaul
t target) (6) ->
(ClCompile target) ->
  src/backend/utils/adt/pg_locale.c(927): error C2037: left of
'locale_name' specifies undefined struct/union '__crt_lo
cale_data' 
[C:\Users\nameless\Documents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxpr
oj]
  src/backend/utils/adt/pg_locale.c(928): error C2198: 'wchar2char':
too few arguments for call [C:\Users\nameless\Docum
ents\Works\Compiling\pgwininstall\builddir\postgresql\postgresql-9.6.7\postgres.vcxproj]

4 Warning(s)
2 Error(s)

Searching the mailing list archive I found this thread:
http://www.postgresql-archive.org/Building-PostgreSQL-9-6devel-sources-with-Microsoft-Visual-C-2015-td5880108.html

But the file patched there is different from the one producing the error
above, which is src/backend/utils/adt/pg_locale.c.

Here is the offending snippet from pg_locale.c:

/* Locale names use only ASCII, any conversion locale suffices. */
rc = wchar2char(iso_lc_messages, loct->locinfo->locale_name[LC_CTYPE],
sizeof(iso_lc_messages), NULL);

Can anyone tell me the fix?


Cheers.



Aditya



Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
On 2/19/18 10:32 AM, Don Seiler wrote:
> On Mon, Feb 19, 2018 at 9:21 AM, David Steele wrote:
> 
> 
> Yes, they are typically very small.  The general exception to this rule
> is if logs are stored in pg_log.  I recommend storing logs out of the
> PGDATA dir as they can be quite large and don't really make sense to
> restore to another server.
> 
> Files copied from the master will be marked as such in backup.manifest
> (master:true) so you can check for yourself.
> 
> 
> Good to know. And fortunately for this DB we do have pg_log (and
> pg_xlog) symlinked to different volumes outside of $PGDATA.

If pg_log is symlinked from inside PGDATA it will be copied.  pg_xlog is not
copied in any backup.

> 
> > I did come up with a sort of Rube Goldberg-esque workaround for now
> > involving using a clone of the prod standby VM from Veeam backup to use
> > as the backup source (after stopping recovery and opening it as a
> > standalone DB).
> 
> You don't get PITR that way, of course, but at least it's a backup.  As
> long as your clone is consistent.
> 
> 
> Yes it's a crash-consistent snapshot-based backup. I've done quite a few
> restores from it and it works great. It can do PITR as well since I
> would have all the WAL files from prod needed to keep recovering. But
> for these cases I just recover it to the first consistent point and open
> it for testing (or backups in this case). 

I don't think it would be safe to do PITR on a backup taken in this way.
The WAL diverges even if you suppress a timeline switch.

-- 
-David
da...@pgmasters.net



Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
Hi Don,

On 2/19/18 10:01 AM, Don Seiler wrote:
> On Mon, Feb 19, 2018 at 8:53 AM, David Steele wrote:
> 
> Anything *not* in global (except pg_control), base, pg_tblspc,
> pg_xact/pg_clog, and pg_multixact is copied from the primary.
> 
> For example, pg_stat is copied from the primary so these stats are
> preserved on a standby backup.
> 
> So if I have tablespaces outside of $PGDATA (but symlinked from within
> pg_tblspc, of course), those will still be backed up from the standby,
> right?

Correct.

> Is it right to say that the files that would be copied from primary are
> very small, typically? So it isn't a huge transfer over the WAN (in my
> case)?

Yes, they are typically very small.  The general exception to this rule
is if logs are stored in pg_log.  I recommend storing logs out of the
PGDATA dir as they can be quite large and don't really make sense to
restore to another server.

Files copied from the master will be marked as such in backup.manifest
(master:true) so you can check for yourself.

> I did come up with a sort of Rube Goldberg-esque workaround for now
> involving using a clone of the prod standby VM from Veeam backup to use
> as the backup source (after stopping recovery and opening it as a
> standalone DB).

You don't get PITR that way, of course, but at least it's a backup.  As
long as your clone is consistent.

-- 
-David
da...@pgmasters.net



Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
Hi Don,

On 2/19/18 9:25 AM, Don Seiler wrote:
> On Mon, Feb 19, 2018 at 8:18 AM, David Steele wrote:
> 
> It copies files that are not replicated from the primary so that a
> primary-style backup is created. Anything that is replicated (which is
> by far the bulk of the data) is copied from the standby.
> 
> OK, so all data files would be copied from the standby. Can you give me an
> example of the types of files that need to be copied from the primary?
>  

Anything *not* in global (except pg_control), base, pg_tblspc,
pg_xact/pg_clog, and pg_multixact is copied from the primary.

For example, pg_stat is copied from the primary so these stats are
preserved on a standby backup.

pgBackRest uses all the same exclusions as pg_basebackup, so many
dirs/files are not copied at all: pg_dynshmem, pg_notify, pg_replslot,
pg_serial, pg_snapshots, pg_stat_tmp, pg_subtrans, etc.

Full list here
https://www.postgresql.org/docs/10/static/protocol-replication.html.

> it's best to archive from the primary so a replication
> failure does not affect your archiving.
> 
> Understood, it's just not something I can change on the production primary
> at the moment. Hence I'm looking at a quick one-off backup from the standby.

For a quick one-off, pg_basebackup is your friend.

Regards,
-- 
-David
da...@pgmasters.net



Re: pgBackRest backup from standby

2018-02-19 Thread Don Seiler
On Mon, Feb 19, 2018 at 8:18 AM, David Steele  wrote:

> It copies files that are not replicated from the primary so that a
> primary-style backup is created. Anything that is replicated (which is
> by far the bulk of the data) is copied from the standby.
>

OK, so all data files would be copied from the standby. Can you give me an
example of the types of files that need to be copied from the primary?


> it's best to archive from the primary so a replication
> failure does not affect your archiving.
>

Understood, it's just not something I can change on the production primary at
the moment. Hence I'm looking at a quick one-off backup from the standby.


> Configuring pgBackRest, SSH, standby, and backup from standby are all
> covered in the user guide.


Thanks, I've been through it a few times and played with some test backups
from primary clones. I just ditched my master/replica clone setup but I'll
test there as well. I just had a couple questions about the mechanics.



-- 
Don Seiler
www.seiler.us


Re: pgBackRest backup from standby

2018-02-19 Thread David Steele
Hi Don,

On 2/18/18 7:34 PM, Don Seiler wrote:
> 
> Looking to use pgBackRest to take a backup from a hot standby. I'm
> reading that pgBackRest still needs to connect to the primary and copy
> some files. My questions are:
> 
>  1. What files does it need to copy? Config files? WAL files?

It copies files that are not replicated from the primary so that a
primary-style backup is created. Anything that is replicated (which is
by far the bulk of the data) is copied from the standby.

>  2. How does it connect? SSH?

Yes.

>  3. Does pgBackRest need to be installed and configured on the primary
> as well?

Yes.  In any case, it's best to archive from the primary so a replication
failure does not affect your archiving.

Configuring pgBackRest, SSH, standby, and backup from standby are all
covered in the user guide.

http://pgbackrest.org/user-guide.html

In particular:

http://pgbackrest.org/user-guide.html#backup-host
http://pgbackrest.org/user-guide.html#replication
http://pgbackrest.org/user-guide.html#standby-backup

Regards,
-- 
-David
da...@pgmasters.net



Re: shared_buffers 8GB maximum

2018-02-19 Thread Vitaliy Garnashevich


When we did calculations for some analytic tasks, increasing
shared_buffers had a negative impact on speed. Probably the hit ratio was
too low after the change, but the maintenance of shared buffers (searching
free blocks) was slower.


What was the size of shared_buffers when the slowdown happened (approximately)?

Regards,
Vitaliy



Re: shared_buffers 8GB maximum

2018-02-19 Thread Vitaliy Garnashevich


Yes.  I don't know the exact reason, but reading a buffer from OS 
cache is quite a bit more expensive than just pinning a buffer already 
in the buffer_pool, about 5 times more expensive the last time I 
tested it, which was before Meltdown.  (And just pinning a buffer 
which is already in the cache is already pretty expensive--about 15 
times as expensive as reading the next tuple from an already-pinned 
buffer).


Thanks for the numbers. Just out of curiosity, do you happen to know how
much more expensive a read from disk is compared to that? And also, how
much can pinning be slowed down when the clock-sweep method has to iterate
over large shared_buffers?
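
(Tangentially, the usage counts that the clock sweep decays can be inspected
with the pg_buffercache extension -- a minimal sketch, assuming the extension
is available on the server:)

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Distribution of usage counts across shared_buffers; the clock sweep
-- decrements usagecount while scanning for an evictable buffer.
SELECT usagecount, count(*)
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;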


I don't think that there is any reason to think that buffers_clean > 
buffers_checkpoint is a problem.  In fact, you could argue that it is 
the way it was designed to work.  Although the background writer does 
need to tell the checkpointer about every file it dirties, so it can 
be fsynced at the end of the checkpoint.  The overhead of this was 
minimal in my testing.




The reason I mentioned buffers_clean is that I was assuming that
under "healthy" conditions most writes should be done by the checkpointer,
because, as was already mentioned, that's the most efficient way of
writing (no duplicate writes of the same buffer, write optimizations,
etc.). I was thinking of bgwriter as a way of reducing latency by
avoiding the case where a backend has to write buffers by itself. So
big numbers in buffers_clean and buffers_backend compared to
buffers_checkpoint would mean that a lot of writes are done not by the
checkpointer, and thus probably less efficiently than they could be.
That might result in IO writes being more random, and in more IO writes
overall, because the same buffer can be written multiple times between
checkpoints.


But buffers_backend > buffers_checkpoint could be a problem, 
especially if they are also much larger than buffers_clean.  But the 
wrinkle here is that if you do bulk inserts or bulk updates (what 
about vacuums?), the backends by design write their own dirty 
buffers.  So if you do those kinds of things, buffers_backend being 
large doesn't indicate much.  There was a patch someplace a while ago 
to separate the counters of backend-intentional writes from 
backend-no-choice writes, but it never went anywhere.


We do daily manual vacuuming. Knowing what part of the total writes they
account for would indeed be nice.


When looking at buffers_checkpoint/buffers_clean/buffers_backend, I was
saving the numbers at intervals of several hours, knowing that no vacuums
were running at that time, and calculating the difference.
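
(Roughly, the sampling boils down to a query like this -- a minimal sketch;
the counters are cumulative, so the interesting numbers are the deltas
between samples:)

SELECT now() AS sampled_at,
       buffers_checkpoint,     -- buffers written by the checkpointer
       buffers_clean,          -- buffers written by the background writer
       buffers_backend,        -- buffers written directly by backends
       buffers_backend_fsync,  -- backend writes that had to do their own fsync
       maxwritten_clean        -- times bgwriter stopped after hitting its write limit
FROM pg_stat_bgwriter;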


It is not clear to me that this is the best way to measure health.  
Did your response time go down?  Did your throughput go up?


We have a mixed type of DB usage. There is an OLTP-like part with many small
read/write transactions. Predictable latency does not matter in that case,
but throughput does, because that is basically a background data-loading
job. Then there is an OLAP-like part where heavier report queries are run.
And there are more background jobs which are a combination of both: they
first run long queries and then do lots of small inserts, pre-calculating
some data for bigger reports.


After increasing shared_buffers from 8GB to 64GB, there was a 7% improvement
in the run time of the background pre-calculating job (measured by running it
several times in a row, with hot caches).


When we configured huge pages for the bigger shared_buffers, the
additional improvement was around 3%.


Regards,
Vitaliy



Re: Connection losing at some places - caused by firewall

2018-02-19 Thread Durumdara
Hello!


2018-02-13 21:21 GMT+01:00 George Neuner :

> On Tue, 14 Nov 2017 12:09:31 +0100, Durumdara wrote:
>
>
> >*I disabled my firewall at home - the [keepalive] problem vanished!!!*
>
> What firewall are you using?  Windows own firewall doesn't interfere
> with keepalive packets.  Most commercial SOHO firewalls won't either.
>
>
The normal Windows Firewall (Defender). An ASUS test notebook works; the DELL
doesn't. So something is different in the firewall on the DELL. With manually
defined (faster) keepalives it works.
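
(For reference, the server-side equivalent of faster keepalives -- a minimal
sketch; the values are illustrative, not recommendations:)

ALTER SYSTEM SET tcp_keepalives_idle = 60;      -- seconds idle before the first probe
ALTER SYSTEM SET tcp_keepalives_interval = 10;  -- seconds between unanswered probes
ALTER SYSTEM SET tcp_keepalives_count = 5;      -- unanswered probes before the connection drops
SELECT pg_reload_conf();                        -- new connections pick up the settings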

dd


Re: Join query

2018-02-19 Thread Laurenz Albe
hmidi slim wrote:
> I have two tables: establishment which contains these columns: id, name, 
> longitude, latitude, geom (Geometric column)
> Product contains: id, name, establishment_id
> First of all I want to select the establishments within a radius.
> I run this query:
> select e.name, e1.name
> from establishment as e, establishment as e1
> where e.id <> e1.id
> and e1.id = 1
> and ST_DWithin(geom, ST_MakePoint(e1.longitude, e1.latitude)::geography, 1000)
> 
> The result of this query returns all the establishments within a radius of
> 1 km from a given establishment which has an id = 1.
> 
> After that I want to get the product names for each establishment in the
> query's result.
> 
> Is there another, more optimized solution than a query such as this:
> select * from (
> select e.name, e1.name, e.id
> from establishment as e, establishment as e1
> where e.id <> e1.id
> and e1.id = 1
> and ST_DWithin(geom, ST_MakePoint(e1.longitude, e1.latitude)::geography, 
> 1000)) as tmp inner join product as p on p.establishment_id = tmp.id

A simple join is what you need (with p.name added to the select list so the
product names are actually returned):

    SELECT e.name, e1.name, p.name
    FROM establishment AS e
    JOIN establishment AS e1
      ON ST_DWithin(e.geom, ST_MakePoint(e1.longitude, e1.latitude)::geography, 1000)
     AND e.id <> e1.id
    JOIN product AS p
      ON p.establishment_id = e.id
    WHERE e1.id = 1;
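
(A side note, not from the original reply: ST_DWithin can use a spatial
index -- a minimal sketch, assuming geom is a PostGIS geography column:)

CREATE INDEX establishment_geom_idx ON establishment USING gist (geom);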

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: Deadlock with 2 processes waiting on transactionids and locking unrelated tables

2018-02-19 Thread Rene Romero Benavides
My guess is that the transaction doing:

update "planscheduleitem" set "planschedule"=$1 where "psi"=$2

also updates ticket before reaching that point,

and the one doing:

update ticket set unread = true where ticketid = $1

updates planscheduleitem before that.

Does that make sense to you? BTW, do the transactions use explicit locking?
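
(A minimal sketch of that lock-ordering hypothesis, run in two psql sessions;
the ids and values are illustrative:)

-- session A: BEGIN;
-- session A: UPDATE ticket SET unread = true WHERE ticketid = 1;            -- A locks the ticket row
-- session B: BEGIN;
-- session B: UPDATE planscheduleitem SET planschedule = 10 WHERE psi = 20;  -- B locks the psi row
-- session A: UPDATE planscheduleitem SET planschedule = 10 WHERE psi = 20;  -- A waits on B's xid
-- session B: UPDATE ticket SET unread = true WHERE ticketid = 1;            -- B waits on A's xid
-- After deadlock_timeout, one session aborts with "ERROR: deadlock detected",
-- matching the "waits for ShareLock on transaction ..." messages below.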

2018-02-18 23:28 GMT-06:00 David Wheeler :

> Hi,
>
> We’re seeing deadlocks semi-regularly (0-2 per day) that I’m really having
> trouble getting to the bottom of.
>
> Process 7172 waits for ShareLock on transaction 4078724272; blocked by
> process 7186.
> Process 7186 waits for ShareLock on transaction 4078724210; blocked by
> process 7172.
>
> The two queries in question are updates on unrelated tables. Running the
> queries on their own shows no overlapping entries in pg_locks.
>
> Process 7172: update ticket set unread = true where ticketid = $1
> Process 7186: update "planscheduleitem" set "planschedule"=$1 where
> "psi"=$2
>
> How can I work out why Postgres has decided that the two processes are in
> deadlock? Is there an explainer somewhere on transaction level locks? I
> can’t see anything in the docs besides that they exist.
>
>
>
> Details below
>
> select version();
>                               version
> ----------------------------------------------------------------------
>  PostgreSQL 10.2 (Ubuntu 10.2-1.pgdg14.04+1) on x86_64-pc-linux-gnu,
>  compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4, 64-bit
> (1 row)
>
> ----------------------------------------------------------------------
>
> after running update "planscheduleitem" set "planschedule"=$1 where "psi"=$2
>
> SELECT locktype, virtualxid, transactionid, virtualtransaction, pid, mode, relname, page, tuple
> FROM pg_locks LEFT JOIN pg_class ON (relation=oid) where pid =41715;
>
>    locktype    | virtualxid | transactionid | virtualtransaction |  pid  |       mode       |         relname         | page | tuple
> ---------------+------------+---------------+--------------------+-------+------------------+-------------------------+------+-------
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_parentticketid   |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_originalticketid |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_tickettypeid_idx |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_subject_idx      |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_closedtime_idx   |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_assignedto_idx   |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_serviceuid_idx   |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_parentuid_idx    |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_createdtime_idx  |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_txid             |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_tickettype       |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_ticketpriority   |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_idx_0            |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_pkey             |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | number_constraint       |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket                  |      |
>  virtualxid    | 56/2306863 |               | 56/2306863         | 41715 | ExclusiveLock    |                         |      |
>  transactionid |            | 4089785154    | 56/2306863         | 41715 | ExclusiveLock    |                         |      |
>  relation      |            |               | 56/2306863         | 41715 | RowExclusiveLock | ticket_fromuid_idx      |      |
> (19 rows)
>
> 
> 
> 

Re: Deadlock with 2 processes waiting on transactionids and locking unrelated tables

2018-02-19 Thread David Wheeler
Thanks for your response

> Do either of the two tables have triggers?

Yes, the ticket table has triggers that insert changes into a ticketstatuslog
table when the ticket.ticketstatus column changes, and on insert:

ticket_status_insert_trigger AFTER INSERT ON ticket
  FOR EACH ROW EXECUTE PROCEDURE ticket_status_trigger_function()
ticket_status_update_trigger AFTER UPDATE OF ticketstatus ON ticket
  FOR EACH ROW WHEN (old.ticketstatus <> new.ticketstatus)
  EXECUTE PROCEDURE ticket_status_trigger_function()

> What's the database / transaction isolation level?

Both read committed

> Do the updates run in a transaction among other read / write operations
> within the same transaction?

Yes, they will both have many reads and writes before running the deadlocking
query.

Cheers, 

-- David


 

David Wheeler • software engineer
Inomial Pty Ltd • Automatic Billing
p +61 3 9663 3554

> On 19 Feb 2018, at 4:43 pm, Rene Romero Benavides wrote:
> 
> Hi. Do either of the two tables have triggers? What's the database /
> transaction isolation level? Do the updates run in a transaction among
> other read / write operations within the same transaction?
> Regards.
> 
2018-02-18 23:28 GMT-06:00 David Wheeler:
> Hi,
> 
> We’re seeing deadlocks semi-regularly (0-2 per day) that I’m really having
> trouble getting to the bottom of.
> 
> Process 7172 waits for ShareLock on transaction 4078724272; blocked by process 7186.
> Process 7186 waits for ShareLock on transaction 4078724210; blocked by process 7172.
> 
> The two queries in question are updates on unrelated tables. Running the 
> queries on their own shows no overlapping entries in pg_locks. 
> 
> Process 7172: update ticket set unread = true where ticketid = $1
> Process 7186: update "planscheduleitem" set "planschedule"=$1 where "psi"=$2
> 
> How can I work out why Postgres has decided that the two processes are in 
> deadlock? Is there an explainer somewhere on transaction level locks? I can’t 
> see anything in the docs besides that they exist. 