RE: Mailing list archiver

2018-01-02 Thread Kellner Thiemo
Looks nice, thanks. However, I could not find some posts to this list from
20 December and 19 December, respectively.

Kind regards Thiemo


Re: Returning Values from INSERT ON CONFLICT DO NOTHING

2018-01-02 Thread Igal Sapir
On Mon, Jan 1, 2018 at 11:59 PM, Jov  wrote:

> From https://www.postgresql.org/docs/devel/static/sql-insert.html:
>
>> The optional RETURNING clause causes INSERT to compute and return
>> value(s) based on each row actually inserted (or updated, if an ON
>> CONFLICT DO UPDATE clause was used). This is primarily useful for
>> obtaining values that were supplied by defaults, such as a serial sequence
>> number. However, any expression using the table's columns is allowed. The
>> syntax of the RETURNING list is identical to that of the output list of
>> SELECT. Only rows that were successfully inserted or updated will be
>> returned. For example, if a row was locked but not updated because an ON
>> CONFLICT DO UPDATE ... WHERE clause *condition* was not satisfied, the
>> row will not be returned.​
>
>
> DO UPDATE will return values, while DO NOTHING will not.
>

That explains it, thank you.


>
> 2018-01-02 15:43 GMT+08:00 Igal Sapir :
>
>> It seems that if I do a simple update it resolves my issue:
>>
>> INSERT INTO users(email, name)
>>   VALUES('u...@domain.tld', 'User')
>>   ON CONFLICT (email) DO UPDATE
>> SET email = excluded.email  -- users.email works too, not sure if
>> makes a difference
>>   RETURNING user_id, (xmax::text::int > 0) as existed;
>>
>
> Do not update the email column, because there is an index on that column. It is
> better to update another, non-indexed column so the update can be HOT.
>

Makes sense, thanks again.


Igal
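
Putting the two suggestions together, a minimal sketch of the workaround
(assuming the users table has a non-indexed "name" column; the values are the
illustrative ones from the thread):

INSERT INTO users(email, name)
  VALUES('u...@domain.tld', 'User')
  ON CONFLICT (email) DO UPDATE
    SET name = excluded.name  -- touch a non-indexed column so the update can be HOT
  RETURNING user_id, (xmax::text::int > 0) as existed;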


Re: Query error: could not resize shared memory segment

2018-01-02 Thread Thuc Nguyen Canh
Thank you Thomas,

I made it work with the extra setting --shm-size=1g in my docker run script.

On Wed, Jan 3, 2018 at 12:16 PM, Thomas Munro
 wrote:
> On Wed, Jan 3, 2018 at 6:13 PM, Thuc Nguyen Canh
>  wrote:
>> And here is the result from postgres container:
>>
>> mount | grep /dev/shm
>> => shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
>
> Bingo.  Somehow your container tech is limiting shared memory.  That
> error is working as designed.  You could figure out how to fix the
> mount options, or you could disable parallelism with
> max_parallel_workers_per_gather = 0.
>
> --
> Thomas Munro
> http://www.enterprisedb.com



Re: Query error: could not resize shared memory segment

2018-01-02 Thread Tom Lane
Thomas Munro  writes:
> On Wed, Jan 3, 2018 at 6:13 PM, Thuc Nguyen Canh
>  wrote:
>> mount | grep /dev/shm
>> => shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)

> Bingo.  Somehow your container tech is limiting shared memory.

If this is a common setup, maybe we're going to need a new section
under or near

https://www.postgresql.org/docs/current/static/kernel-resources.html#SYSVIPC

It's pretty annoying to have sweated blood to get out from under the
kernel's chosen-at-random SysV IPC limits, only to run into container
tmpfs limits that seem equally chosen with the aid of a dartboard.

regards, tom lane



Re: Query error: could not resize shared memory segment

2018-01-02 Thread Thomas Munro
On Wed, Jan 3, 2018 at 6:13 PM, Thuc Nguyen Canh
 wrote:
> And here is the result from postgres container:
>
> mount | grep /dev/shm
> => shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)

Bingo.  Somehow your container tech is limiting shared memory.  That
error is working as designed.  You could figure out how to fix the
mount options, or you could disable parallelism with
max_parallel_workers_per_gather = 0.
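
For reference, a minimal sketch of the second option, per session or
cluster-wide (the setting name is the one mentioned above):

SET max_parallel_workers_per_gather = 0;               -- current session only
ALTER SYSTEM SET max_parallel_workers_per_gather = 0;  -- cluster-wide, needs superuser
SELECT pg_reload_conf();                               -- apply the ALTER SYSTEM change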

-- 
Thomas Munro
http://www.enterprisedb.com




Re: Query error: could not resize shared memory segment

2018-01-02 Thread Thomas Munro
On Wed, Jan 3, 2018 at 5:39 PM, Tom Lane  wrote:
> Thomas Munro  writes:
>> So you have 16GB of RAM and here we're failing to posix_fallocate()
>> 50MB (actually we can't tell if it's the ftruncate() or
>> posix_fallocate() call that failed, but the latter seems more likely
>> since the former just creates a big hole in the underlying tmpfs
>> file).  Can you share the query plan (EXPLAIN SELECT ...)?
>
> I wonder if OP is running with a tmpfs size setting that's less than
> the traditional Linux default of half of physical RAM size.

Hmm.  Canh, can you please share the output of the following commands?

mount | grep /dev/shm
du -hs /dev/shm
df /dev/shm

-- 
Thomas Munro
http://www.enterprisedb.com



Re: Query error: could not resize shared memory segment

2018-01-02 Thread Thuc Nguyen Canh
The last query plan was with random_page_cost = 3.
Here is the query plan with random_page_cost = 2.5, which causes the
'shared memory segment' issue.

'Sort  (cost=9255854.81..9356754.53 rows=40359886 width=64)'
'  Sort Key: (to_char(b.week, 'dd-mm-'::text))'
'  CTE sumorder'
'->  GroupAggregate  (cost=692280.90..703914.76 rows=513746 width=16)'
'  Group Key: (date_trunc('month'::text,
to_timestamp("order".order_time)))'
'  ->  Sort  (cost=692280.90..693590.12 rows=523689 width=14)'
'Sort Key: (date_trunc('month'::text,
to_timestamp("order".order_time)))'
'->  Bitmap Heap Scan on "order"  (cost=11461.44..642534.77
rows=523689 width=14)'
'  Recheck Cond: ((service_id = ANY
('{SGN-BIKE,SGN-POOL}'::text[])) AND (order_time >= '1509469200'::double
precision))'
'  ->  Bitmap Index Scan on
order_service_id_order_time_idx  (cost=0.00..11330.52 rows=523689 width=0)'
'Index Cond: ((service_id = ANY
('{SGN-BIKE,SGN-POOL}'::text[])) AND (order_time >= '1509469200'::double
precision))'
'  CTE badorder'
'->  Finalize GroupAggregate  (cost=987667.04..989627.66 rows=15712
width=16)'
'  Group Key: (date_trunc('month'::text,
to_timestamp(order_1.order_time)))'
'  ->  Gather Merge  (cost=987667.04..989326.48 rows=13100
width=16)'
'Workers Planned: 2'
'->  Partial GroupAggregate  (cost=986667.01..986814.39
rows=6550 width=16)'
'  Group Key: (date_trunc('month'::text,
to_timestamp(order_1.order_time)))'
'  ->  Sort  (cost=986667.01..986683.39 rows=6550
width=14)'
'Sort Key: (date_trunc('month'::text,
to_timestamp(order_1.order_time)))'
'->  Parallel Bitmap Heap Scan on "order"
order_1  (cost=35678.61..986251.83 rows=6550 width=14)'
'  Recheck Cond: ((service_id = ANY
('{SGN-BIKE,SGN-POOL}'::text[])) AND (order_time >= '1483203600'::double
precision))'
'  Filter: ((rating_by_user < 5) AND
(rating_by_user > 0))'
'  ->  Bitmap Index Scan on
order_service_id_order_time_idx  (cost=0.00..35674.67 rows=1740356 width=0)'
'Index Cond: ((service_id = ANY
('{SGN-BIKE,SGN-POOL}'::text[])) AND (order_time >= '1483203600'::double
precision))'
'  ->  Merge Join  (cost=60414.85..1271289.99 rows=40359886 width=64)'
'Merge Cond: (b.week = s.week)'
'->  Sort  (cost=1409.33..1448.61 rows=15712 width=16)'
'  Sort Key: b.week'
'  ->  CTE Scan on badorder b  (cost=0.00..314.24 rows=15712
width=16)'
'->  Sort  (cost=59005.52..60289.88 rows=513746 width=16)'
'  Sort Key: s.week'
'  ->  CTE Scan on sumorder s  (cost=0.00..10274.92 rows=513746
width=16)'

On Wed, Jan 3, 2018 at 11:43 AM, Thomas Munro  wrote:

> On Wed, Jan 3, 2018 at 5:22 PM, Thuc Nguyen Canh
>  wrote:
> > Here is the query plan of a query that causes above issue for any
> > random_page_cost < 3 (I keep the work_mem by default)
> >
> > 'Sort  (cost=9441498.11..9542397.83 rows=40359886 width=64) (actual
> > time=33586.588..33586.590 rows=4 loops=1)'
>
> I guess that must be EXPLAIN ANALYZE, because it includes "actual"
> time, so it must be the plan when you set random_page_cost >= 3,
> right?  Otherwise it would raise the error.  Can you now set it to  <
> 3 and do just EXPLAIN (no ANALYZE) so that we can see the failing plan
> without trying to run it?  I'm guessing it's different, because the
> plan you showed doesn't look like it would want 50MB of DSM.
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>


Re: Query error: could not resize shared memory segment

2018-01-02 Thread Thomas Munro
On Wed, Jan 3, 2018 at 5:22 PM, Thuc Nguyen Canh
 wrote:
> Here is the query plan of a query that causes above issue for any
> random_page_cost < 3 (I keep the work_mem by default)
>
> 'Sort  (cost=9441498.11..9542397.83 rows=40359886 width=64) (actual
> time=33586.588..33586.590 rows=4 loops=1)'

I guess that must be EXPLAIN ANALYZE, because it includes "actual"
time, so it must be the plan when you set random_page_cost >= 3,
right?  Otherwise it would raise the error.  Can you now set it to  <
3 and do just EXPLAIN (no ANALYZE) so that we can see the failing plan
without trying to run it?  I'm guessing it's different, because the
plan you showed doesn't look like it would want 50MB of DSM.

-- 
Thomas Munro
http://www.enterprisedb.com



Re: Query error: could not resize shared memory segment

2018-01-02 Thread Tom Lane
Thomas Munro  writes:
> So you have 16GB of RAM and here we're failing to posix_fallocate()
> 50MB (actually we can't tell if it's the ftruncate() or
> posix_fallocate() call that failed, but the latter seems more likely
> since the former just creates a big hole in the underlying tmpfs
> file).  Can you share the query plan (EXPLAIN SELECT ...)?

I wonder if OP is running with a tmpfs size setting that's less than
the traditional Linux default of half of physical RAM size.

regards, tom lane



Re: Query error: could not resize shared memory segment

2018-01-02 Thread Thuc Nguyen Canh
Hi,
Here is the query plan of a query that causes above issue for any
random_page_cost < 3 (I keep the work_mem by default)

'Sort  (cost=9441498.11..9542397.83 rows=40359886 width=64) (actual
time=33586.588..33586.590 rows=4 loops=1)'
'  Sort Key: (to_char(b.week, 'dd-mm-'::text))'
'  Sort Method: quicksort  Memory: 25kB'
'  CTE sumorder'
'->  GroupAggregate  (cost=763614.25..775248.11 rows=513746 width=16)
(actual time=16587.507..17320.290 rows=4 loops=1)'
'  Group Key: (date_trunc('month'::text,
to_timestamp("order".order_time)))'
'  ->  Sort  (cost=763614.25..764923.47 rows=523689 width=14)
(actual time=16587.362..16913.230 rows=539089 loops=1)'
'Sort Key: (date_trunc('month'::text,
to_timestamp("order".order_time)))'
'Sort Method: quicksort  Memory: 47116kB'
'->  Bitmap Heap Scan on "order"  (cost=12679.94..713868.12
rows=523689 width=14) (actual time=516.465..15675.517 rows=539089 loops=1)'
'  Recheck Cond: ((service_id = ANY
('{SGN-BIKE,SGN-POOL}'::text[])) AND (order_time >= '1509469200'::double
precision))'
'  Heap Blocks: exact=242484'
'  ->  Bitmap Index Scan on
order_service_id_order_time_idx  (cost=0.00..12549.02 rows=523689 width=0)
(actual time=425.697..425.697 rows=539089 loops=1)'
'Index Cond: ((service_id = ANY
('{SGN-BIKE,SGN-POOL}'::text[])) AND (order_time >= '1509469200'::double
precision))'
'  CTE badorder'
'->  Finalize GroupAggregate  (cost=993588.49..995549.11 rows=15712
width=16) (actual time=16257.720..16263.183 rows=13 loops=1)'
'  Group Key: (date_trunc('month'::text,
to_timestamp(order_1.order_time)))'
'  ->  Gather Merge  (cost=993588.49..995247.93 rows=13100
width=16) (actual time=16257.435..16263.107 rows=39 loops=1)'
'Workers Planned: 2'
'Workers Launched: 2'
'->  Partial GroupAggregate  (cost=992588.46..992735.84
rows=6550 width=16) (actual time=16246.191..16251.348 rows=13 loops=3)'
'  Group Key: (date_trunc('month'::text,
to_timestamp(order_1.order_time)))'
'  ->  Sort  (cost=992588.46..992604.84 rows=6550
width=14) (actual time=16245.767..16248.316 rows=3715 loops=3)'
'Sort Key: (date_trunc('month'::text,
to_timestamp(order_1.order_time)))'
'Sort Method: quicksort  Memory: 274kB'
'->  Parallel Seq Scan on "order" order_1
(cost=0.00..992173.28 rows=6550 width=14) (actual time=4.162..16230.174
rows=3715 loops=3)'
'  Filter: ((order_time >=
'1483203600'::double precision) AND (service_id = ANY
('{SGN-BIKE,SGN-POOL}'::text[])) AND (rating_by_user < 5) AND
(rating_by_user > 0))'
'  Rows Removed by Filter: 1801667'
'  ->  Merge Join  (cost=60414.85..1271289.99 rows=40359886 width=64)
(actual time=33586.471..33586.503 rows=4 loops=1)'
'Merge Cond: (b.week = s.week)'
'->  Sort  (cost=1409.33..1448.61 rows=15712 width=16) (actual
time=16263.259..16263.276 rows=13 loops=1)'
'  Sort Key: b.week'
'  Sort Method: quicksort  Memory: 25kB'
'  ->  CTE Scan on badorder b  (cost=0.00..314.24 rows=15712
width=16) (actual time=16257.737..16263.220 rows=13 loops=1)'
'->  Sort  (cost=59005.52..60289.88 rows=513746 width=16) (actual
time=17320.506..17320.509 rows=4 loops=1)'
'  Sort Key: s.week'
'  Sort Method: quicksort  Memory: 25kB'
'  ->  CTE Scan on sumorder s  (cost=0.00..10274.92 rows=513746
width=16) (actual time=16587.532..17320.352 rows=4 loops=1)'
'Planning time: 3.202 ms'
'Execution time: 33589.971 ms'

On Wed, Jan 3, 2018 at 11:13 AM, Thomas Munro  wrote:

> On Wed, Jan 3, 2018 at 5:05 PM, Thuc Nguyen Canh
>  wrote:
> > The dynamic_shared_memory_type is posix, the before and after values for
> > work_mem are ~41MB and ~64MB.
> > I'm using a Digital Ocean vps of 16RAM 8 Cores.
> > For more information, I managed to reproduce this issue on a fresh vps
> after
> > I changed the random_page_cost from 4.0 to 1.1. So that said, I did
> reduce
> > the random_page_cost to 1.1, in order to optimize postgresql performance
> on
> > SSD (DO uses SSD) and got this issue.
>
> So you have 16GB of RAM and here we're failing to posix_fallocate()
> 50MB (actually we can't tell if it's the ftruncate() or
> posix_fallocate() call that failed, but the latter seems more likely
> since the former just creates a big hole in the underlying tmpfs
> file).  Can you share the query plan (EXPLAIN SELECT ...)?
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>


Re: Query error: could not resize shared memory segment

2018-01-02 Thread Thomas Munro
On Wed, Jan 3, 2018 at 5:05 PM, Thuc Nguyen Canh
 wrote:
> The dynamic_shared_memory_type is posix, the before and after values for
> work_mem are ~41MB and ~64MB.
> I'm using a Digital Ocean vps of 16RAM 8 Cores.
> For more information, I managed to reproduce this issue on a fresh vps after
> I changed the random_page_cost from 4.0 to 1.1. So that said, I did reduce
> the random_page_cost to 1.1, in order to optimize postgresql performance on
> SSD (DO uses SSD) and got this issue.

So you have 16GB of RAM and here we're failing to posix_fallocate()
50MB (actually we can't tell if it's the ftruncate() or
posix_fallocate() call that failed, but the latter seems more likely
since the former just creates a big hole in the underlying tmpfs
file).  Can you share the query plan (EXPLAIN SELECT ...)?

-- 
Thomas Munro
http://www.enterprisedb.com



Re: Query error: could not resize shared memory segment

2018-01-02 Thread Thuc Nguyen Canh
Hi,

The dynamic_shared_memory_type is posix; the before and after values for
work_mem are ~41MB and ~64MB.
I'm using a Digital Ocean VPS with 16GB RAM and 8 cores.
For more information, I managed to reproduce this issue on a fresh VPS after
I changed random_page_cost from 4.0 to 1.1. That is, I reduced
random_page_cost to 1.1 in order to optimize PostgreSQL performance on SSD
(DO uses SSD) and got this issue.




On Wed, Jan 3, 2018 at 10:53 AM, Thomas Munro  wrote:

> On Wed, Jan 3, 2018 at 1:22 PM, Thuc Nguyen Canh
>  wrote:
> > I got following error when running some heavy queries
> > "ERROR: could not resize shared memory segment "/PostgreSQL.388782411" to
> > 50438144 bytes: No space left on device SQL state: 53100"
> >
> > I'm using a postgis 10 docker container with mounted volume on ubuntu 16
> > vps.
> >
> > Some of failed queries can run after I increased my work_mem.
> >
> > On the other hand, this issue is not producible on postgresql 9.6.
>
> Hi,
>
> So it couldn't allocate 50MB of dynamic shared memory.  Can you show
> the work_mem settings, the query plan with the two different work_mem
> settings (the one that works and the one that doesn't), the value of
> dynamic_shared_memory_type, and tell us how much memory and swap space
> you have?  Do you run many of these queries in parallel?  I guess this
> is probably a parallel query using parallel bitmap heapscan and seeing
> the error coming from the change in commit
> 899bd785c0edf376077d3f5d65c316f92c1b64b5, meaning that it would have
> risked death by SIGBUS before that commit.  What is surprising is that
> increasing work_mem helped.
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>


Re: Query error: could not resize shared memory segment

2018-01-02 Thread Thomas Munro
On Wed, Jan 3, 2018 at 1:22 PM, Thuc Nguyen Canh
 wrote:
> I got following error when running some heavy queries
> "ERROR: could not resize shared memory segment "/PostgreSQL.388782411" to
> 50438144 bytes: No space left on device SQL state: 53100"
>
> I'm using a postgis 10 docker container with mounted volume on ubuntu 16
> vps.
>
> Some of failed queries can run after I increased my work_mem.
>
> On the other hand, this issue is not producible on postgresql 9.6.

Hi,

So it couldn't allocate 50MB of dynamic shared memory.  Can you show
the work_mem settings, the query plan with the two different work_mem
settings (the one that works and the one that doesn't), the value of
dynamic_shared_memory_type, and tell us how much memory and swap space
you have?  Do you run many of these queries in parallel?  I guess this
is probably a parallel query using parallel bitmap heapscan and seeing
the error coming from the change in commit
899bd785c0edf376077d3f5d65c316f92c1b64b5, meaning that it would have
risked death by SIGBUS before that commit.  What is surprising is that
increasing work_mem helped.
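
For reference, the current values can be captured with plain SHOW commands,
e.g.:

SHOW work_mem;
SHOW dynamic_shared_memory_type;
SHOW max_parallel_workers_per_gather;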

-- 
Thomas Munro
http://www.enterprisedb.com



BDR: remove junk replication identifier

2018-01-02 Thread milist ujang
Hi all,

I have many stale replication identifiers (164 of 457); the environment is
BDR 0.9.3 on PostgreSQL 9.4.4.
I'm going to remove those junk replication identifiers, but the docs don't
say exactly how to do it.

# select slot_name,plugin,datoid,restart_lsn from pg_replication_slots
where slot_name like '%654018%';
                 slot_name                 | plugin | datoid | restart_lsn
-------------------------------------------+--------+--------+--------------
 bdr_654018_6258509090026857134_1_24612__  | bdr    | 654018 | 1443/FDC0A58
(1 row)

# select * from pg_replication_identifier where riname like '%654018%';
 riident |                  riname
---------+------------------------------------------
     217 | bdr_6258509090026857134_1_24612_654018_
(1 row)

# select * from pg_replication_identifier_progress where external_id like
'%654018%';
 local_id |               external_id                | remote_lsn  |   local_lsn
----------+------------------------------------------+-------------+---------------
      217 | bdr_6258509090026857134_1_24612_654018_  | 35/38181F98 | 1448/CC3D90C8
(1 row)


From those 3 queries, summarizing the slot and identifier:

slot_name  : bdr_654018_6258509090026857134_1_24612__
riname : bdr_6258509090026857134_1_24612_654018_
external_id: bdr_6258509090026857134_1_24612_654018_


Validate the number of junk identifiers; does it tally?

# select count(*) from pg_replication_slots;
 count
-------
   293
(1 row)

# select count(*) from pg_replication_identifier;
 count
-------
   457
(1 row)

Identify the junk identifiers with the following query:

# select count(*) from pg_replication_identifier where riname not in (
    select split_part(slot_name,'_',1) || '_' || split_part(slot_name,'_',3) || '_' ||
           split_part(slot_name,'_',4) || '_' || split_part(slot_name,'_',5) || '_' ||
           split_part(slot_name,'_',2) || '_' as aaa
    from pg_replication_slots
  );
 count
-------
   164
(1 row)

457 - 293 = 164, which seems to match exactly...


Luckily this query returns 6 rows, which I don't understand :(

# select * from pg_replication_identifier_progress where external_id not in (
    select split_part(slot_name,'_',1) || '_' || split_part(slot_name,'_',3) || '_' ||
           split_part(slot_name,'_',4) || '_' || split_part(slot_name,'_',5) || '_' ||
           split_part(slot_name,'_',2) || '_' as aaa
    from pg_replication_slots
  );

 local_id |                external_id                | remote_lsn |   local_lsn
----------+-------------------------------------------+------------+---------------
      121 | bdr_6258509090026857134_1_27953_394071_   | 2/F795DDA0 | 0/0
      331 | bdr_6258509090026857134_1_16385_133577_   | 4/91562CB8 | 0/0
      274 | bdr_6258509090026857134_1_16385_797268_   | 2/2DC5D518 | 1280/83F70D10
      163 | bdr_6258509090026857134_1_16385_1104572_  | 0/2059400  | 0/0
      430 | bdr_6258509090026857134_1_26348_1108062_  | 4/DD9E0488 | 0/0
      431 | bdr_6258509090026857134_1_26359_1107286_  | 4/E474D8F8 | 0/0
(6 rows)

My question is:
Is it OK to remove those 164 junk identifiers, or is there another approach?
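
For reference, a read-only cross-check (using the same name-mangling
expression as above) that lists each orphaned identifier together with any
recorded progress; whether this BDR-patched build provides a function to drop
an identifier by riname is something to verify separately:

select i.riident, i.riname, p.remote_lsn, p.local_lsn
from pg_replication_identifier i
left join pg_replication_identifier_progress p
  on p.external_id = i.riname  -- external_id appears to carry the riname, per the output above
where i.riname not in (
  select split_part(slot_name,'_',1) || '_' || split_part(slot_name,'_',3) || '_' ||
         split_part(slot_name,'_',4) || '_' || split_part(slot_name,'_',5) || '_' ||
         split_part(slot_name,'_',2) || '_'
  from pg_replication_slots
)
order by i.riident;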


-- 
regards

ujang jaenudin | DBA Consultant (Freelancer)
http://ora62.wordpress.com
http://id.linkedin.com/pub/ujang-jaenudin/12/64/bab


Mailing list archiver

2018-01-02 Thread Jordan Deitch
This project serves to organize and archive the postgres mailing list:

http://pg.rsa.pub/

Let me know what you think, or if you have any suggestions.
I left email addresses off the site to protect privacy.

P.S.: you can click messages to collapse the tree.

Thanks


Fwd: Query error: could not resize shared memory segment

2018-01-02 Thread Thuc Nguyen Canh
Hello,

I got the following error when running some heavy queries:
"ERROR: could not resize shared memory segment "/PostgreSQL.388782411" to
50438144 bytes: No space left on device SQL state: 53100"

I'm using a PostGIS 10 Docker container with a mounted volume on an Ubuntu 16
VPS.

Some of the failed queries run after I increase my work_mem.

On the other hand, this issue is not reproducible on PostgreSQL 9.6.

Hope to get any advice here.

Thanks and regards,

Thuc


Re: Need information on the tools available to upload the shape files to the postgreSQL database

2018-01-02 Thread James Keener
If shp2pgsql doesn't run on Windows, QGIS might be an option?

Jim

On January 2, 2018 6:24:45 PM EST, "Ramamoorthi, Meenakshi" 
 wrote:
>Hi All:
>
>I wanted to know if there are any tools available to upload the shape
>files from windows platform and then deposit it into PostgreSQL
>database on a UNIX system.
>
>Thanks and best regards
>Meenakshi Ramamoorthi

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Need information on the tools available to upload the shape files to the postgreSQL database

2018-01-02 Thread Ramamoorthi, Meenakshi
Hi All:

  I wanted to know if there are any tools available to upload shape files
from a Windows platform and then load them into a PostgreSQL database on a
UNIX system.

Thanks and best regards
Meenakshi Ramamoorthi


Re: Error creating a table

2018-01-02 Thread Joshua D. Drake

On 01/02/2018 02:38 PM, Dale Seaburg wrote:

I get this kind of error when executing the CREATE TABLE script (which is a
copy of a similarly named table):

NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 
"public_rowkey" for table "ABSTRACT-SERVER_runsheet"


ERROR:  relation "public_rowkey" already exists

** Error **

ERROR: relation "public_rowkey" already exists
SQL state: 42P07

I have looked for rowkey in the "public" schema, but find nothing.
What can be the problem?  What might I be doing wrong?  I'm not sure 
what to do.


I don't use pgAdmin, but the error appears to be about "public_rowkey",
not "public.rowkey". Also, it isn't a table you should be looking for
but an index.
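
A quick, read-only way to check what already owns that name (the relation name
comes from the error message; relkind tells you what kind of object it is):

SELECT c.relname, c.relkind, n.nspname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'public_rowkey';
-- relkind: 'r' = table, 'i' = index, 'S' = sequence, 'v' = view, ...

If it turns out to be a leftover index (or the primary-key constraint that
owns it) from an earlier copy of the table, dropping or renaming it should let
the CREATE TABLE go through.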


Thanks,

JD



Dale




--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc

PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *




Error creating a table

2018-01-02 Thread Dale Seaburg

I am in pgAdmin, attempting to create a table in an existing database.
This table is normally created and deleted by software I have written (C#).
I get this kind of error when executing the CREATE TABLE script (which
is a copy of a similarly named table):


NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 
"public_rowkey" for table "ABSTRACT-SERVER_runsheet"


ERROR:  relation "public_rowkey" already exists

** Error **

ERROR: relation "public_rowkey" already exists
SQL state: 42P07

I have looked for rowkey in the "public" schema, but find nothing.
What can be the problem?  What might I be doing wrong?  I'm not sure 
what to do.


Dale




Re: Use of Port 5433 with Postgresql 9.6

2018-01-02 Thread Graeme

Jeff, Andrew, Andreas: Thank you for your replies.
Environment is Mageia 6 x86_64. I think the packagers must have set up 
the 5433 port (although I notice the documentation for pg_ctl contains 
several examples using 5433).
Anyway, I am relieved that 5432 is still regarded as the main port, and 
have changed all port references to that.

Regards
Graeme




Re: Select for update / deadlock possibility?

2018-01-02 Thread Tom Lane
Jeff Janes  writes:
> On Tue, Jan 2, 2018 at 3:22 AM, Durumdara  wrote:
>> Is "select for update" atomic (as transactions) or it isn't?

> It is atomic, but you do have to worry about deadlocks.

I think by "atomic" the OP intends "all the row locks are magically
acquired at the same instant".  Which they aren't, they're taken one
at a time.  So if you have different transactions trying to lock
overlapping sets of rows, there's a risk of deadlock.  Which will
be detected and one transaction will fail, but still you might wish
to avoid that.

The usual rule for that is "be sure all transactions acquire locks
in the same order".  So just throw an "order by id" type of clause
into the SELECT FOR UPDATE, and you should be fine.
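
For instance, with the example from the original question, a minimal sketch
(illustrative only) would be:

BEGIN;
SELECT * FROM test WHERE id IN (1, 3, 4)
  ORDER BY id   -- acquire the row locks in a consistent order across sessions
  FOR UPDATE;
-- ... do the inserts/updates ...
COMMIT;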

Personally, I'd still code the application to retry on deadlock
failures, just for robustness.

regards, tom lane



Re: 5 USD for PostgreSQL books at PacktPub

2018-01-02 Thread Vincenzo Romano
2018-01-02 16:13 GMT+01:00 Alexander Farber :
> Hello fellow PostgreSQL users,
>
> there is currently a sale for books at
> https://www.packtpub.com/tech/PostgreSQL
>
> I am not affiliated in any way with them, it is just a "heads up".
>
> For myself I have pre-ordered
> https://www.packtpub.com/big-data-and-business-intelligence/mastering-postgresql-10
>
> Regards
> Alex

Those are e-books, though, not dead tree books.

-- 
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS



5 USD for PostgreSQL books at PacktPub

2018-01-02 Thread Alexander Farber
Hello fellow PostgreSQL users,

there is currently a sale for books at
https://www.packtpub.com/tech/PostgreSQL

I am not affiliated in any way with them, it is just a "heads up".

For myself I have pre-ordered
https://www.packtpub.com/big-data-and-business-intelligence/mastering-postgresql-10

Regards
Alex


Re: Select for update / deadlock possibility?

2018-01-02 Thread Durumdara
Dear Jeff!

So, let me start this question from further back.
I need to protect some resources.
All modifications start with StartTransaction.
Then I try to lock the articles by their ids (to prevent other clients'
modifications).
After that I insert/modify the needed data.
Then I commit or roll back.

The locks vanish at the end of the transaction, so the resources are
accessible again for a different session.

If session A locks article 1, session B waits for the end of A's
transaction.

From the help I didn't know that these row locks are acquired one by one,
so it could cause a deadlock due to unbounded waiting.

In this thread they talk about statement_timeout:
https://www.postgresql.org/message-id/200402161053.11142.xzilla%40users.sourceforge.net

Here, lock_timeout:
https://stackoverflow.com/questions/20963450/controlling-duration-of-postgresql-lock-waits

And you mention deadlock_timeout... :-) :-) :-)
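
For reference, a minimal sketch of bounding the wait inside one transaction
with lock_timeout (the values are only illustrative):

begin;
set local lock_timeout = '5s';  -- give up waiting for a row lock after 5 seconds
select * from test where id in (1, 3, 4) order by id for update;
-- ... insert / modify the needed data ...
commit;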

Thanks

  dd







2018-01-02 15:02 GMT+01:00 Jeff Janes :

> On Tue, Jan 2, 2018 at 3:22 AM, Durumdara  wrote:
>
>> Dear Members!
>>
>> I have to ask something that not clear for me from description, and I
>> can't simulate it.
>>
>> Is "select for update" atomic (as transactions) or it isn't?
>>
>> I want to avoid the deadlocks.
>>
>> If it's atomic, then I don't need to worry about concurrent locks.
>> But I think it's not.
>>
>
>
> It is atomic, but you do have to worry about deadlocks.  Being atomic
> doesn't mean it can't deadlock, it just means that if it does deadlock, all
> the work in the transaction is rolled back together.
>
>
>>
>>
>> This is an example for deadlock:
>>
>> a.) select * from test where id in (1, 3, 4)
>> b.) select * from test where id in (2, 4, 5)
>> c.) select * from test where id in (5, 1, 6)
>>
>> If it's not atomic, then:
>>
>> - a locks 1.
>> - b locks 2.
>> - c locks 5.
>> - a locks 3.
>> - b locks 4.
>> - c try to lock 1, but it locked by a
>> - a try to lock 4, but it locked by b
>> - b try to lock 5, but it locked by c
>>
>
> There is no obligation for it to lock rows in the order they appear in the
> IN-list.  Maybe that is why you can't simulate it.
>
>
>
>>
>> DEADLOCK!!!
>>
>> As I read select for update doesn't support timeout.
>> I've found two timeout that could be affects on it.
>> Which one I need to redefine temporarily?
>>
>> lock_timeout (integer)
>> statement_timeout (integer)
>>
>
> Deadlocks are automatically detected and one session is dealt an ERROR to
> resolve them.  So deadlock_timeout is the only timeout you need care about.
>
>
>>
>> Somebody wrote statement_timeout, but why PG have lock_timeout then?
>>
>
> They do different things, and give you different error messages when they
> fire so that you know more about what the problem was (I'm too slow, versus
> I'm stuck behind someone else).
>
> Cheers,
>
> Jeff
>
>


Re: Select for update / deadlock possibility?

2018-01-02 Thread Jeff Janes
On Tue, Jan 2, 2018 at 3:22 AM, Durumdara  wrote:

> Dear Members!
>
> I have to ask something that not clear for me from description, and I
> can't simulate it.
>
> Is "select for update" atomic (as transactions) or it isn't?
>
> I want to avoid the deadlocks.
>
> If it's atomic, then I don't need to worry about concurrent locks.
> But I think it's not.
>


It is atomic, but you do have to worry about deadlocks.  Being atomic
doesn't mean it can't deadlock, it just means that if it does deadlock, all
the work in the transaction is rolled back together.


>
>
> This is an example for deadlock:
>
> a.) select * from test where id in (1, 3, 4)
> b.) select * from test where id in (2, 4, 5)
> c.) select * from test where id in (5, 1, 6)
>
> If it's not atomic, then:
>
> - a locks 1.
> - b locks 2.
> - c locks 5.
> - a locks 3.
> - b locks 4.
> - c try to lock 1, but it locked by a
> - a try to lock 4, but it locked by b
> - b try to lock 5, but it locked by c
>

There is no obligation for it to lock rows in the order they appear in the
IN-list.  Maybe that is why you can't simulate it.



>
> DEADLOCK!!!
>
> As I read select for update doesn't support timeout.
> I've found two timeout that could be affects on it.
> Which one I need to redefine temporarily?
>
> lock_timeout (integer)
> statement_timeout (integer)
>

Deadlocks are automatically detected and one session is dealt an ERROR to
resolve them.  So deadlock_timeout is the only timeout you need to care about.


>
> Somebody wrote statement_timeout, but why PG have lock_timeout then?
>

They do different things, and give you different error messages when they
fire so that you know more about what the problem was (I'm too slow, versus
I'm stuck behind someone else).

Cheers,

Jeff


Re: MSSQL compatibility layer

2018-01-02 Thread Thomas Kellerer

James Harper schrieb am 02.01.2018 um 12:24:

Right now the code is pretty rough. I've only implemented bits and
pieces to a proof of concept stage so it's definitely way off being
useful. At the moment the following pieces are in place:

. BackgroundWorker listening on port 1433 (hardcoded - I haven't implemented 
any of the udp/1434 discovery stuff)
. Parser to take TDS from an MSSQL client and make sense of it, and then take 
postgres results and send back to the client
. Parser to take T-SQL statements and convert them to postgresql
. Postgres data types for MSSQL types (the main ones at least) using an 
internal storage format compatible with TDS

It's also not really a way for an MSSQL client to access an existing
postgres database, it's more about implementing an MSSQL-like
database using postgres as the backend, so an application will think
it is actually talking to an MSSQL server. I mostly don't use
postgres types, just my own tds/tsql compatible types. I haven't
explored what sort of performance impact this might have.

I'm not really sure if anyone else would find this interesting, but
the code is just sitting around on my server doing nothing at the
moment. Let me know if you want to know more and I can send you a
copy of the code (under GPL or whatever license). I'm not subscribed
to the list so cc me or email me direct.


Maybe that could be integrated into pgTSQL?

https://www.openscg.com/bigsql/docs/pgtsql/









Re: Select for update / deadlock possibility?

2018-01-02 Thread Durumdara
Hello!

Read Committed.

I extend the example: the concurrent connections are in transactions.

begin
   select ... for update;
end;

Regards
  dd



2018-01-02 12:31 GMT+01:00 Rakesh Kumar :

>
> Shouldn't isolation level also matter ?  What is the isolation level you
> are using ?
>


Re: Selecting a JSON object of arrays from a PostgreSQL table

2018-01-02 Thread Alexander Farber
Also got this nice suggestion at https://stackoverflow.com/q/48050127/165071
-

SELECT COALESCE(
         json_object_agg(gid, array_to_json(y)),
         '{}'::json)
FROM (
  SELECT gid,
         array_agg(
           json_build_object(
             'uid', uid,
             'created', EXTRACT(EPOCH FROM created)::int,
             'msg', msg)
         ) AS y
  FROM chat  /* WHERE gid = 0 */
  GROUP BY gid
) x;


Re: Slow system due to ReorderBufferGetTupleBuf?

2018-01-02 Thread Martin Moore


On 01/01/2018, 17:45, "Peter Geoghegan"  wrote:

On Mon, Jan 1, 2018 at 8:56 AM, Martin Moore  
wrote:
> Can someone shed some light on this and advise how to prevent it 
reoccurring?

You're using v10, which has these two commits:


https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=58b25e98106dbe062cec0f3d31d64977bffaa4af


https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=9fab40ad32efa4038d19eaed975bb4c1713ccbc0

Unfortunately, per the commit message of the first commit, it doesn't
look like the tuple allocator uses any new strategy, at least until
this v11 commit:


https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=a4ccc1cef5a04cc054af83bc4582a045d5232cb3

My guess is that that would make a noticeable difference, once v11
becomes available. Could you test this yourself by building from the
master branch?

-- 
Peter Geoghegan





Thanks Peter. I don't really want to go down that route for various reasons.
There's a task that copies 'old' rows to various old_ tables and then deletes
from the main tables, then does a vacuum and analyse. The tables only have
20-30k rows. I'm guessing this may be the trigger for the problem, so I have
changed the timing from every 20 mins to once in the middle of the night when
things are quiet.

Would this explain the problem?


Martin.








Re: PGPool encrypted connections from Application

2018-01-02 Thread Tatsuo Ishii
> Does PGPool allow encrypted connections from Application? i.e. ssl
> encrypted? My company wants to encrypt all traffic in the environment
> and so enabled https connections between applications/load balance. I
> am not sure about the PGPool. If application sends encrypted
> connection to PGPool will it accept?

Yes, you can use encrypted connections with Pgpool-II. See FAQ:
https://pgpool.net/mediawiki/index.php/FAQ#How_can_I_set_up_SSL_for_pgpool-II.3F

and the manual:
http://www.pgpool.net/docs/latest/en/html/runtime-ssl.html

for more details.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp



Re: Select for update / deadlock possibility?

2018-01-02 Thread Rakesh Kumar

Shouldn't isolation level also matter ?  What is the isolation level you are 
using ?



MSSQL compatibility layer

2018-01-02 Thread James Harper
Hi All,

A few years ago I got bored over summer and wrote some code to act as an MSSQL 
compatibility layer to postgres. Mostly just to see if such a thing could work. 
The idea is that I can point the MSSQL Management Studio at the server running 
postgres and it looks like I am talking to an MSSQL server. Once I got it 
working to a basic level I went onto other things and kind of forgot about it 
until I was cleaning up my server a few days ago.

Right now the code is pretty rough. I've only implemented bits and pieces to a 
proof of concept stage so it's definitely way off being useful. At the moment 
the following pieces are in place:
. BackgroundWorker listening on port 1433 (hardcoded - I haven't implemented 
any of the udp/1434 discovery stuff)
. Parser to take TDS from an MSSQL client and make sense of it, and then take 
postgres results and send back to the client
. Parser to take T-SQL statements and convert them to postgresql
. Postgres data types for MSSQL types (the main ones at least) using an 
internal storage format compatible with TDS

It's also not really a way for an MSSQL client to access an existing postgres 
database, it's more about implementing an MSSQL-like database using postgres as 
the backend, so an application will think it is actually talking to an MSSQL 
server. I mostly don't use postgres types, just my own tds/tsql compatible 
types. I haven't explored what sort of performance impact this might have.

So right now you can connect, create a database, create/insert/select from some 
tables, etc. Stored procedures/functions should be working but I just tested 
against 9.6 and I get some error about permission denied, and then when I tried 
to grant access to sa to my tsql language I got some error about "GRANT and 
REVOKE are not allowed on untrusted languages" and haven't investigated yet.

I'm not really sure if anyone else would find this interesting, but the code is 
just sitting around on my server doing nothing at the moment. Let me know if 
you want to know more and I can send you a copy of the code (under GPL or 
whatever license). I'm not subscribed to the list so cc me or email me direct.

James




Re: Returning Values from INSERT ON CONFLICT DO NOTHING

2018-01-02 Thread Jov
From https://www.postgresql.org/docs/devel/static/sql-insert.html:

> The optional RETURNING clause causes INSERT to compute and return
> value(s) based on each row actually inserted (or updated, if an ON
> CONFLICT DO UPDATE clause was used). This is primarily useful for
> obtaining values that were supplied by defaults, such as a serial sequence
> number. However, any expression using the table's columns is allowed. The
> syntax of the RETURNING list is identical to that of the output list of
> SELECT. Only rows that were successfully inserted or updated will be
> returned. For example, if a row was locked but not updated because an ON
> CONFLICT DO UPDATE ... WHERE clause *condition* was not satisfied, the
> row will not be returned.​


DO UPDATE will return values, while DO NOTHING will not.

2018-01-02 15:43 GMT+08:00 Igal Sapir :

> It seems that if I do a simple update it resolves my issue:
>
> INSERT INTO users(email, name)
>   VALUES('u...@domain.tld', 'User')
>   ON CONFLICT (email) DO UPDATE
> SET email = excluded.email  -- users.email works too, not sure if
> makes a difference
>   RETURNING user_id, (xmax::text::int > 0) as existed;
>

Do not update the email column, because there is an index on that column. It is
better to update another, non-indexed column so the update can be HOT.



>
> But if anyone has a better solution then I'd love to hear it.
>
> Thanks,
>
>
> Igal
>
> On Mon, Jan 1, 2018 at 11:07 PM, Igal Sapir  wrote:
>
>> Hello,
>>
>> I want to use the "UPSERT" syntax for returning an ID if it exists, or
>> inserting a record and returning the new ID if it does not exist.
>>
>> INSERT INTO users(email, name)
>>   VALUES('u...@domain.tld', 'User')
>>   ON CONFLICT (email) DO NOTHING
>>   RETURNING user_id, (xmax::text::int > 0) as existed;
>>
>> When an email address does not exist then it works fine and I get the new
>> user_id, but when it does exist, I get no results at all.
>>
>> How can I get the results in case of a CONFLICT?
>>
>> Thanks,
>>
>>
>> Igal
>>
>
>