[GENERAL] Encrypt with BouncyCastle and decrypt with pg_pub_decrypt

2017-05-18 Thread Kang, Kamal
Hi all,

I am trying to encrypt a string using the Bouncy Castle PGP Java APIs, Base64-encode 
the encrypted string, and then decrypt it using pg_pub_decrypt, but it is 
failing with the error “Wrong Key”. Just wanted to know if this is doable, or does 
pg_pub_decrypt only work with encrypted strings from pg_pub_encrypt?
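
For reference, a minimal sketch of what the decrypt side could look like, assuming
the Java side Base64-encodes the raw binary PGP message (rather than ASCII-armoring
it), with made-up table and column names, and noting that pgcrypto's function is
spelled pgp_pub_decrypt:

  SELECT pgp_pub_decrypt(
           decode(m.encrypted_b64, 'base64'),   -- undo the Base64 step first
           dearmor(k.secret_key_armored)        -- un-armor the stored private key
         ) AS plaintext
  FROM messages m
  CROSS JOIN pgp_keys k;

If the data reaches the function as raw bytea and the key pair actually matches,
this should work; a "Wrong key" error usually suggests the message was not
encrypted to the key whose secret half is being supplied.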

Thanks

Kamal


Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Scott Marlowe
On Thu, May 18, 2017 at 3:40 PM, Scott Marlowe  wrote:
> I would say that the best thing to do is to run 9.6, grab pgAdmin4, and do all
> the examples in the doc page on partitioning.
>
> https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html
>
> If that works well, then the question becomes: are there any esoteric cases
> where pgAdmin4 won't quite get you there?

Or maybe: what can we automate from pgAdmin4 that you currently need to
script, etc.?




Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Scott Marlowe
I would say that the best thing to do is to run 9.6, grab pgAdmin4, and do
all the examples in the doc page on partitioning.

https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html

If that works well, then the question becomes: are there any esoteric cases
where pgAdmin4 won't quite get you there?
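
For anyone who wants a starting point, the 9.6 approach on that page is
inheritance-based; condensed roughly from memory (so treat it as a sketch
rather than a copy of the docs), it looks like this:

  CREATE TABLE measurement (
      city_id    int  not null,
      logdate    date not null,
      peaktemp   int,
      unitsales  int
  );

  CREATE TABLE measurement_y2017m05 (
      CHECK (logdate >= DATE '2017-05-01' AND logdate < DATE '2017-06-01')
  ) INHERITS (measurement);

  CREATE INDEX ON measurement_y2017m05 (logdate);

  -- plus a BEFORE INSERT trigger on measurement that routes each row into the
  -- right child table, and constraint_exclusion = partition so the planner
  -- can skip children ruled out by the CHECK constraints.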


Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread John R Pierce

On 5/18/2017 2:29 PM, Robert Eckhardt wrote:
All the code for creating and managing partitions is part of the core 
Postgres code. What we are interested in looking into is what that 
work flow might look like and how that workflow can be supported with 
a GUI management tool. 



The only thing I'd expect from a GUI management tool would be to allow me to 
create partitioned tables and their partitions, and to display their 
attributes appropriately.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Scott Marlowe
On Thu, May 18, 2017 at 1:21 PM, Melvin Davidson 
wrote:

> Shirley,
> I am a bit confused. AFAIK, it is PostgreSQL that is responsible for table
> partitioning. PgAdmin4 is just an administrative tool.
> Are you saying that PgAdmin4 now can make partition tables automatically?
>

I think maybe she just means that their GUI is pgadmin4 for reference as
opposed to the partitioning being IN pgadmin4.

If she'd like to set up a discussion thread HERE on partitioning I'm more
than willing to add to it.

More importantly, while I think the PostgreSQL documentation on
partitioning gives you everything you need to roll your own, it doesn't
cover all the ways to do partitioning for various jobs. That's
where partitioning gets interesting and requires business requirements and
all that fun stuff.

Do you hand off maintenance jobs to cron, or do you check and run them every
time a function is called, etc.?

Do you put materialized views on top to stop scanning all the partitions
ever?
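
By way of illustration, a rough sketch of that materialized-view idea, with
invented table and column names rather than anyone's actual schema:

  CREATE MATERIALIZED VIEW latest_reading AS
  SELECT device_id, max(recorded_at) AS last_seen
  FROM readings            -- partitioned parent; scanned only at refresh time
  GROUP BY device_id;

  CREATE UNIQUE INDEX ON latest_reading (device_id);

  -- refreshed from cron, so routine queries hit the small matview instead of
  -- scanning every partition (CONCURRENTLY needs the unique index above):
  REFRESH MATERIALIZED VIEW CONCURRENTLY latest_reading;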

fun fun.


Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Robert Eckhardt
All the code for creating and managing partitions is part of the core
Postgres code. What we are interested in looking into is what that work
flow might look like and how that workflow can be supported with a GUI
management tool.

-- Rob

On Thu, May 18, 2017 at 3:21 PM, Melvin Davidson 
wrote:

> Shirley,
> I am a bit confused. AFAIK, it is PostgreSQL that is responsible for table
> partitioning. PgAdmin4 is just an administrative tool.
> Are you saying that PgAdmin4 now can make partition tables automatically?
>
> On Thu, May 18, 2017 at 2:10 PM, Shirley Wang  wrote:
>
>> Hello!
>>
>> We're a team from Pivotal, working with members of the Postgres community
>> on table partitioning in pgAdmin4. We're looking to chat with some Postgres
>> users on their expectations with table partitioning within pgAdmin4.
>>
>> If you have some availability next week, we'd love to set up some time to
>> learn about your needs with this feature. Let us know some times that work
>> for you and we'll send over details for the call.
>>
>> Thanks!
>> Shirley
>>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread David G. Johnston
On Thu, May 18, 2017 at 1:46 PM, John R Pierce  wrote:

> On 5/18/2017 1:40 PM, Andrew Kerber wrote:
>
>> It appears to me you might be making this a lot more difficult than
>> necessary. Why not just pre-create the required partitions daily or weekly
>> or monthly? Or do you have a requirement that a new partition only be
>> created the first time it is required?
>>
>
> +1
>
> we create new partitions in advance of their being needed as part of a
> maintenance process that's strictly single threaded.


While I've been trying to explain the mechanics involved here, I agree that
the whole idea of exceptionally creating a table in a trigger is just
asking for trouble.  I do get the idea of not wanting an external
maintenance process that needs to be set up and maintained, and
maybe there are now better options with "workers", but the trade-offs
involved lean me heavily toward having a maintenance
routine, especially in a production environment, and at that point you
should mirror production in development.

David J.


Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread John R Pierce

On 5/18/2017 1:40 PM, Andrew Kerber wrote:
It appears to me you might be making this a lot more difficult than 
necessary. Why not just pre-create the required partitions daily or 
weekly or monthly? Or do you have a requirement that a new partition 
only be created the first time it is required?


+1

we create new partitions in advance of their being needed as part of a 
maintenance process that's strictly single threaded.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread Pavel Stehule
2017-05-18 22:39 GMT+02:00 Rob Brucks :

> Thanks.
>
>
>
> I can code an exception block to handle the table problem, and probably
> one for the index collision too.
>

Creating partitions dynamically is a pretty bad idea. You have to handle
exceptions, which forces an implicit subtransaction (some slowdown), or you
have to use explicit locks, or some mix of both.

Writing this code without a race condition is not easy.
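
For illustration, a rough sketch of the exception-block approach being
discussed, assuming it is acceptable to lose the race as long as the insert is
retried (the statement variables are the ones from the trigger posted elsewhere
in this thread; the error-code choices are from memory, so verify them):

  BEGIN
    EXECUTE create_table_part;        -- CREATE TABLE IF NOT EXISTS ...
  EXCEPTION
    WHEN duplicate_table OR unique_violation THEN
      NULL;                           -- another session won the race; table exists now
  END;

  BEGIN
    EXECUTE create_index_part;        -- CREATE INDEX ...
  EXCEPTION
    WHEN duplicate_table THEN         -- "relation ... already exists" covers indexes too
      NULL;
  END;

As Pavel notes, each BEGIN/EXCEPTION block is an implicit subtransaction, which
is the slowdown being described.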


>
> My point is how did two concurrent threads successfully create the same
> table?  That had to have happened if one of the threads hit a duplicate
> index error.
>

PostgreSQL is based on processes, not threads. David described the reason well.


>
>
> It almost seems like Postgres skipped checking for duplicate tables due to
> some timing issue.  I don't want my DB to end up hosed by something like
> that.
>
>
>
> Thanks,
>
> Rob
>
>
>
> *From: *"David G. Johnston" 
> *Date: *Thursday, May 18, 2017 at 3:31 PM
> *To: *Rob Brucks 
> *Cc: *"pgsql-general@postgresql.org" 
> *Subject: *Re: [GENERAL] Error that shouldn't happen?
>
>
>
> On Thu, May 18, 2017 at 1:18 PM, Rob Brucks 
> wrote:
>
> According to this post, adding "if not exists" won't really help for race
> conditions.
>
>
>
> "The bottom line is that CREATE TABLE IF NOT EXISTS doesn't pretend to
>
> handle concurrency issues any better than regular old CREATE TABLE,
>
> which is to say not very well." - Robert Haas
>
>
>
> https://www.postgresql.org/message-id/CA+TgmoZAdYVtwBfp1FL2sMZbiHCWT4UP
> rzrlnnx1nb30ku3...@mail.gmail.com
>
>
>
> It still doesn't explain how the function got past creating the table, but
> failed on the index.  If another thread was also creating the table then
> there should have been lock contention on the create table statement.
>
>
>
>
>
> AT1: Insert, failed, cannot find table
> AT2: Insert, failed, cannot find table
> BT2: Create Table, succeeds
> BT1: Create Table; fails, it exists now, if exists converts to a warning
> CT2: Create Index, succeeds
> CT1: Create Index, fails, hard error
> DT2: Insert, succeeds
> DT1: Never Happens
>
>
>
> What that post seems to be describing is that it is possible the "BT1"
> actually hard errors instead of just being converted into a notice.  There
> is no statement visible action to show that interleave but there is an
> underlying race condition since both BT1 and BT2 are executing concurrently.
>
>
>
> In short even with IF NOT EXISTS you are not guaranteed to not fail.  But
> at least IF NOT EXISTS makes the probability of not failing > 0.  It
> doesn't handle the concurrency any better - but it does change the outcome
> in some of those less-than-ideally handled situations.
>
>
>
> David J.
>
>
>


Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread David G. Johnston
On Thu, May 18, 2017 at 1:39 PM, Rob Brucks 
wrote:

> My point is how did two concurrent threads successfully create the same
> table?
>

You seem to not be understanding that "CREATE TABLE IF NOT EXISTS" can
succeed without actually creating a table...
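
For instance, even without any concurrency (throwaway table name, just to show
the basic behavior):

  CREATE TABLE t (i int);
  CREATE TABLE IF NOT EXISTS t (i int);
  -- NOTICE:  relation "t" already exists, skipping
  -- the second statement reports success, yet it created nothing

The race in this thread is the same effect, except the first CREATE TABLE was
issued by the other session a moment earlier.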

David J.


Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread Andrew Kerber
It appears to me you might be making this a lot more difficult than
necessary. Why not just pre-create the required partitions daily or weekly
or monthly? Or do you have a requirement that a new partition only be
created the first time it is required?
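
For example, a rough sketch of pre-creating tomorrow's daily partition from a
scheduled job, reusing the table layout from the trigger posted elsewhere in
this thread (names and details are illustrative only):

  CREATE OR REPLACE FUNCTION precreate_partition(day date) RETURNS void
  LANGUAGE plpgsql AS $$
  DECLARE
    part text := 'history_uint_p' || to_char(day, 'YYYY_MM_DD');
  BEGIN
    EXECUTE format(
      'CREATE TABLE IF NOT EXISTS partitions.%I (CHECK (clock >= %L AND clock < %L)) INHERITS (history_uint)',
      part, extract(epoch FROM day)::bigint, extract(epoch FROM day + 1)::bigint);
    EXECUTE format(
      'CREATE INDEX IF NOT EXISTS %I ON partitions.%I (itemid, clock)',
      part || '_1', part);
  END;
  $$;

  -- from cron, once a day, ahead of need:
  --   psql zabbix -c "SELECT precreate_partition(current_date + 1)"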

On Thu, May 18, 2017 at 3:31 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, May 18, 2017 at 1:18 PM, Rob Brucks 
> wrote:
>
>> According to this post, adding "if not exists" won't really help for race
>> conditions.
>>
>>
>>
>> "The bottom line is that CREATE TABLE IF NOT EXISTS doesn't pretend to
>>
>> handle concurrency issues any better than regular old CREATE TABLE,
>>
>> which is to say not very well." - Robert Haas
>>
>>
>>
>> https://www.postgresql.org/message-id/CA+TgmoZAdYVtwBfp1FL2s
>> mzbihcwt4uprzrlnnx1nb30ku3...@mail.gmail.com
>>
>>
>>
>> It still doesn't explain how the function got past creating the table,
>> but failed on the index.  If another thread was also creating the table
>> then there should have been lock contention on the create table statement.
>>
>>
>>
> AT1: Insert, failed, cannot find table
> AT2: Insert, failed, cannot find table
> BT2: Create Table, succeeds
> BT1: Create Table; fails, it exists now, if exists converts to a warning
> CT2: Create Index, succeeds
> CT1: Create Index, fails, hard error
> DT2: Insert, succeeds
> DT1: Never Happens
>
> What that post seems to be describing is that it is possible the "BT1"
> actually hard errors instead of just being converted into a notice.  There
> is no statement visible action to show that interleave but there is an
> underlying race condition since both BT1 and BT2 are executing concurrently.
>
> In short even with IF NOT EXISTS you are not guaranteed to not fail.  But
> at least IF NOT EXISTS makes the probability of not failing > 0.  It
> doesn't handle the concurrency any better - but it does change the outcome
> in some of those less-than-ideally handled situations.
>
> David J.
>
>


-- 
Andrew W. Kerber

'If at first you dont succeed, dont take up skydiving.'


Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread Rob Brucks
Thanks.

I can code an exception block to handle the table problem, and probably one for 
the index collision too.

My point is how did two concurrent threads successfully create the same table?  
That had to have happened if one of the threads hit a duplicate index error.

It almost seems like Postgres skipped checking for duplicate tables due to some 
timing issue.  I don't want my DB to end up hosed by something like that.

Thanks,
Rob

From: "David G. Johnston" 
Date: Thursday, May 18, 2017 at 3:31 PM
To: Rob Brucks 
Cc: "pgsql-general@postgresql.org" 
Subject: Re: [GENERAL] Error that shouldn't happen?

On Thu, May 18, 2017 at 1:18 PM, Rob Brucks 
> wrote:
According to this post, adding "if not exists" won't really help for race 
conditions.

"The bottom line is that CREATE TABLE IF NOT EXISTS doesn't pretend to
handle concurrency issues any better than regular old CREATE TABLE,
which is to say not very well." - Robert Haas

https://www.postgresql.org/message-id/ca+tgmozadyvtwbfp1fl2smzbihcwt4uprzrlnnx1nb30ku3...@mail.gmail.com

It still doesn't explain how the function got past creating the table, but 
failed on the index.  If another thread was also creating the table then there 
should have been lock contention on the create table statement.


AT1: Insert, failed, cannot find table
AT2: Insert, failed, cannot find table
BT2: Create Table, succeeds
BT1: Create Table; fails, it exists now, if exists converts to a warning
CT2: Create Index, succeeds
CT1: Create Index, fails, hard error
DT2: Insert, succeeds
DT1: Never Happens

What that post seems to be describing is that it is possible the "BT1" actually 
hard errors instead of just being converted into a notice.  There is no 
statement visible action to show that interleave but there is an underlying 
race condition since both BT1 and BT2 are executing concurrently.

In short even with IF NOT EXISTS you are not guaranteed to not fail.  But at 
least IF NOT EXISTS makes the probability of not failing > 0.  It doesn't 
handle the concurrency any better - but it does change the outcome in some of 
those less-than-ideally handled situations.

David J.



Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread David G. Johnston
On Thu, May 18, 2017 at 1:18 PM, Rob Brucks 
wrote:

> According to this post, adding "if not exists" won't really help for race
> conditions.
>
>
>
> "The bottom line is that CREATE TABLE IF NOT EXISTS doesn't pretend to
>
> handle concurrency issues any better than regular old CREATE TABLE,
>
> which is to say not very well." - Robert Haas
>
>
>
> https://www.postgresql.org/message-id/CA+TgmoZAdYVtwBfp1FL2sMZbiHCWT4UP
> rzrlnnx1nb30ku3...@mail.gmail.com
>
>
>
> It still doesn't explain how the function got past creating the table, but
> failed on the index.  If another thread was also creating the table then
> there should have been lock contention on the create table statement.
>
>
>
AT1: Insert, failed, cannot find table
AT2: Insert, failed, cannot find table
BT2: Create Table, succeeds
BT1: Create Table; fails, it exists now, if exists converts to a warning
CT2: Create Index, succeeds
CT1: Create Index, fails, hard error
DT2: Insert, succeeds
DT1: Never Happens

What that post seems to be describing is that it is possible the "BT1"
actually hard errors instead of just being converted into a notice.  There
is no statement visible action to show that interleave but there is an
underlying race condition since both BT1 and BT2 are executing concurrently.

In short even with IF NOT EXISTS you are not guaranteed to not fail.  But
at least IF NOT EXISTS makes the probability of not failing > 0.  It
doesn't handle the concurrency any better - but it does change the outcome
in some of those less-than-ideally handled situations.

David J.


Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread Rob Brucks
According to this post, adding "if not exists" won't really help for race 
conditions.

"The bottom line is that CREATE TABLE IF NOT EXISTS doesn't pretend to
handle concurrency issues any better than regular old CREATE TABLE,
which is to say not very well." - Robert Haas

https://www.postgresql.org/message-id/ca+tgmozadyvtwbfp1fl2smzbihcwt4uprzrlnnx1nb30ku3...@mail.gmail.com

It still doesn't explain how the function got past creating the table, but 
failed on the index.  If another thread was also creating the table then there 
should have been lock contention on the create table statement.

Thanks,
Rob

From: "David G. Johnston" 
Date: Thursday, May 18, 2017 at 3:05 PM
To: Rob Brucks 
Cc: "pgsql-general@postgresql.org" 
Subject: Re: [GENERAL] Error that shouldn't happen?

On Thu, May 18, 2017 at 12:48 PM, Rob Brucks 
> wrote:
I am unable to figure out how the trigger was able to successfully create the 
table, but then fail creating the index.  I would have expected one thread to 
"win" and create both the table and index, but other threads would fail when 
creating the table… but NOT when creating the index.

I don't fully comprehend the locking involved here but if you want a failure 
while creating the table you shouldn't use "IF NOT EXISTS".  On the other side 
adding "IF NOT EXISTS" to the CREATE INDEX will supposedly prevent the error 
you are seeing.

The trigger that failed to create the index also failed to create the table - 
it just didn't care because of the IF NOT EXISTS.  At least this is what I am 
observing from your description.

David J.



Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread David G. Johnston
On Thu, May 18, 2017 at 12:48 PM, Rob Brucks 
wrote:

> I am unable to figure out how the trigger was able to successfully create
> the table, but then fail creating the index.  I would have expected one
> thread to "win" and create both the table and index, but other threads
> would fail when creating the table… but NOT when creating the index.
>

I don't fully comprehend the locking involved here but if you want a
failure while creating the table you shouldn't use "IF NOT EXISTS".  On
the other side adding "IF NOT EXISTS" to the CREATE INDEX will supposedly
prevent the error you are seeing.
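
Concretely, in the trigger from the original post, that would be a small change
to the dynamically built statement (sketch only; CREATE INDEX IF NOT EXISTS
requires 9.5 or later, which matches the 9.5.4 reported here):

  create_index_part := 'CREATE INDEX IF NOT EXISTS ' || quote_ident(tablename)
                       || '_1 on ' || quote_ident(prefix) || '.'
                       || quote_ident(tablename) || '(itemid,clock)';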

The trigger that failed to create the index also failed to create the table
- it just didn't care because of the IF NOT EXISTS.  At least this is what
I am observing from your description.

David J.


[GENERAL] Error that shouldn't happen?

2017-05-18 Thread Rob Brucks
Hello Everyone,

I've run into a strange error on the PostgreSQL 9.5.4 DB we use for our Zabbix 
Server.  I implemented auto-partitioning based on the design from this wiki 
article: 
https://www.zabbix.org/wiki/Docs/howto/zabbix2_postgresql_autopartitioning

I implemented auto-partitioning for the history_uint table using the following 
trigger function:

CREATE FUNCTION zbx_part_trigger_func() RETURNS trigger
LANGUAGE plpgsql
AS $_$
  DECLARE
    prefix     text := 'partitions';
    timeformat text;
    selector   text;
    _interval  interval;
    tablename  text;
    startdate  text;
    enddate    text;
    create_table_part text;
    create_index_part text;

  BEGIN
    selector = TG_ARGV[0];

    IF selector = 'hour' THEN
      timeformat := 'YYYY_MM_DD_HH24';
    ELSIF selector = 'day' THEN
      timeformat := 'YYYY_MM_DD';
    ELSIF selector = 'month' THEN
      timeformat := 'YYYY_MM';
    ELSE
      RAISE EXCEPTION 'zbx_part_trigger_func: Specify "hour", "day", or "month" for interval selector instead of "%"', selector;
    END IF;

    _interval := '1 ' || selector;
    tablename := TG_TABLE_NAME || '_p' || to_char(to_timestamp(NEW.clock), timeformat);

    -- normal case: the partition already exists, just route the row into it
    EXECUTE 'INSERT INTO ' || quote_ident(prefix) || '.' || quote_ident(tablename)
            || ' SELECT ($1).*' USING NEW;
    RETURN NULL;

  EXCEPTION
    WHEN undefined_table THEN
      -- partition does not exist yet: create it on the fly
      -- (this is the code path where concurrent inserts can race)
      startdate := extract(epoch FROM date_trunc(selector, to_timestamp(NEW.clock)));
      enddate   := extract(epoch FROM date_trunc(selector, to_timestamp(NEW.clock) + _interval));

      create_table_part := 'CREATE TABLE IF NOT EXISTS ' || quote_ident(prefix) || '.' || quote_ident(tablename)
                        || ' (CHECK ((clock >= ' || quote_literal(startdate)
                        || ' AND clock < ' || quote_literal(enddate)
                        || '))) INHERITS (' || TG_TABLE_NAME || ')';

      create_index_part := 'CREATE INDEX ' || quote_ident(tablename)
                        || '_1 on ' || quote_ident(prefix) || '.'
                        || quote_ident(tablename) || '(itemid,clock)';

      EXECUTE create_table_part;
      EXECUTE create_index_part;

      -- insert it again
      EXECUTE 'INSERT INTO ' || quote_ident(prefix) || '.' || quote_ident(tablename)
              || ' SELECT ($1).*' USING NEW;
      RETURN NULL;
  END;
$_$;


With this trigger (no other triggers defined):
zbx_partition_trg BEFORE INSERT ON history_uint FOR EACH ROW EXECUTE PROCEDURE 
zbx_part_trigger_func('day');


I had fully expected race conditions to occur on a very busy system and throw 
errors trying to create the table, but instead I got the following index 
creation error:

ERROR:  relation "history_uint_p2017_05_17_1" already exists
CONTEXT:  SQL statement "CREATE INDEX history_uint_p2017_05_17_1 on 
partitions.history_uint_p2017_05_17(itemid,clock)"
PL/pgSQL function zbx_part_trigger_func() line 43 at EXECUTE
STATEMENT:  insert into history_uint (itemid,clock,ns,value) values 
(73800,1494979201,11841804,99382669312),(30061,1494979201,17605067,0);


I am unable to figure out how the trigger was able to successfully create the 
table, but then fail creating the index.  I would have expected one thread to 
"win" and create both the table and index, but other threads would fail when 
creating the table… but NOT when creating the index.

The only other function defined in the system is the "cleanup" function which 
was not running at the time.

The target table and index were still created.

Can anyone shed any light on how this could have occurred?  Is this a bug or am 
I missing something?


Pertinent details:
· PostgreSQL 9.5.4 installed from PGDG packages on Centos 7.3.1611
· Zabbix 3.2 server


Thanks,
Rob Brucks



Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Melvin Davidson
Shirley,
I am a bit confused. AFAIK, it is PostgreSQL that is responsible for table
partitioning. PgAdmin4 is just an administrative tool.
Are you saying that PgAdmin4 now can make partition tables automatically?

On Thu, May 18, 2017 at 2:10 PM, Shirley Wang  wrote:

> Hello!
>
> We're a team from Pivotal, working with members of the Postgres community
> on table partitioning in pgAdmin4. We're looking to chat with some Postgres
> users on their expectations with table partitioning within pgAdmin4.
>
> If you have some availability next week, we'd love to set up some time to
> learn about your needs with this feature. Let us know some times that work
> for you and we'll send over details for the call.
>
> Thanks!
> Shirley
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] improvements/feedback sought for a working query that looks a bit ugly and might be inefficient

2017-05-18 Thread jonathan vanasco

On May 16, 2017, at 10:20 PM, David G. Johnston wrote:

> Unless you can discard the 5 and 1000 limits you are going to be stuck 
> computing rank three times in order to compute and filter them.

Thanks a ton for your insight.  I'm stuck using them (5 is required for 
throttling, 1000 is required for this to run in a reasonable amount of time).

The overhead of computing things is indeed super small.  I'm not really worried 
much about the performance of this query (it runs around 3ms now, down from 
20+s).  I'm more worried about this code being referenced and a (possibly 
improper) idiom being used on queries where it will have a noticeable effect.

Re: [GENERAL] EnterpriseDB installed PostgreSQL 9.6 vs. REPMGR. Round 4 - compilation issues on RHEL 7.2

2017-05-18 Thread Adrian Klaver

On 05/18/2017 08:49 AM, Martin Goodson wrote:

On 18/05/2017 15:20, Adrian Klaver wrote:


If I remove that link then I can compile. Have no idea why.



Hi! OP here, and here's the feedback on what I'm getting after unlinking 
as suggested yesterday:


I tried removing/unlinking as suggested and, just like Adrian, it worked 
OK for me! :)


The advice was:

  Whoever does it needs to unlink:

  /lib64/libldap_r-2.4.so.2


So that's what I did.


Bit of a mystery still on that linking/unlinking (if we can find a way 
to avoid doing that, that would be great) but success!


Now I get to start playing with it properly - but I'm relatively 
confident that should be OK, I was getting good results on my little 
ubuntu box at home ...


Of course, I said the same thing about compiling it :)

Huge thanks to EVERYONE who helped on this! If anyone is going to be at 
PG Day UK, I'll buy you a drink :)


Glad you got it working.

Something you might want to point out to the admins: if you were using 
the PGDG RPM repos you could have gotten to this point by doing 
something like this:


sudo yum install repmgr

Just a thought.



Regards,

Martin.



--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread Shirley Wang
Hello!

We're a team from Pivotal, working with members of the Postgres community
on table partitioning in pgAdmin4. We're looking to chat with some Postgres
users on their expectations with table partitioning within pgAdmin4.

If you have some availability next week, we'd love to set up some time to
learn about your needs with this feature. Let us know some times that work
for you and we'll send over details for the call.

Thanks!
Shirley


Re: [GENERAL] Serializable isolation -- are predicate locks still held across all databases?

2017-05-18 Thread Kevin Grittner
On Thu, May 18, 2017 at 11:07 AM, Karl O. Pinc  wrote:

> I forget all the details, but some time ago I found
> that I had to increase max_pred_locks_per_transaction.
> What I recall about the reason for this is that I'm
> using the serializable transaction isolation, and that
> I've a test database which occasionally has extremely
> long running transactions.  The PG serializable
> snapshot isolation implementation at the time (9.1?)
> was holding predicate locks across all databases
> during transactions.  This even though databases
> are independent of each other.  The long transaction
> times in the test database led to predicate lock
> exhaustion in production databases -- only a single
> transaction would be executing in the test database
> but many would occur in the production databases.
> (I don't know if there was potential for other bad effects
> due to the production transactions "hanging around" until the
> transaction in the test db finished.)
>
> My question is whether this has changed.  Does PG
> now pay attention to database in its SSI implementation?

Well, it pays attention as far as the scope of each lock goes, but there
is only one variable tracking how far back the oldest transaction ID
for a running serializable transaction goes, and that is what is used in
cleanup of old locks.  I see your point, and it might be feasible to
change that to a list or map that tracks it by database; but I don't
even have a gut-feel estimate for the scale of such work without
investigating it.  Just out of curiosity, what is the reason you
don't move the production and test databases to separate instances?
If nothing else, an extremely long-running transaction in one database
can lead to bloat in others.
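
If it is useful while weighing that up, here is a rough sketch of a query to
see where the SIRead predicate locks currently sit, per database:

  SELECT d.datname, count(*) AS predicate_locks
  FROM pg_locks l
  JOIN pg_database d ON d.oid = l.database
  WHERE l.mode = 'SIReadLock'
  GROUP BY d.datname
  ORDER BY predicate_locks DESC;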

> Thanks for the help and apologies if I'm not framing
> the question perfectly.  It's not often I think about
> this.

No sweat -- your concern/question is perfectly clear.  It's the
first time I've heard of someone with this particular issue, so at
this point I'm inclined to recommend the workaround of using a
separate cluster; but if we get other reports it might be worth
adding to the list of enhancements that SSI could use.

Thanks!

--
Kevin Grittner
VMware vCenter Server
https://www.vmware.com/




[GENERAL] Serializable isolation -- are predicate locks still held across all databases?

2017-05-18 Thread Karl O. Pinc
Hi,

I forget all the details, but some time ago I found
that I had to increase max_pred_locks_per_transaction.
What I recall about the reason for this is that I'm
using the serializable transaction isolation, and that
I've a test database which occasionally has extremely
long running transactions.  The PG serializable
snapshot isolation implementation at the time (9.1?)
was holding predicate locks across all databases
during transactions.  This even though databases
are independent of each other.  The long transaction
times in the test database led to predicate lock
exhaustion in production databases -- only a single
transaction would be executing in the test database
but many would occur in the production databases.
(I don't know if there was potential for other bad effects 
due to the production transactions "hanging around" until the
transaction in the test db finished.)

My question is whether this has changed.  Does PG
now pay attention to database in its SSI implementation?

Thanks for the help and apologies if I'm not framing
the question perfectly.  It's not often I think about
this.

Regards,

Karl 
Free Software:  "You don't pay back, you pay forward."
 -- Robert A. Heinlein




Re: [GENERAL] EnterpriseDB installed PostgreSQL 9.6 vs. REPMGR. Round 4 - compilation issues on RHEL 7.2

2017-05-18 Thread Martin Goodson

On 18/05/2017 15:20, Adrian Klaver wrote:


If I remove that link then I can compile. Have no idea why.



Hi! OP here, and here's the feedback on what I'm getting after unlinking 
as suggested yesterday:


I tried removing/unlinking as suggested and, just like Adrian, it worked 
OK for me! :)


The advice was:

 Whoever does it needs to unlink:

 /lib64/libldap_r-2.4.so.2


So that's what I did.

Output of ls -l command:
lrwxrwxrwx 1 root root 23 May 11 12:42 /lib64/libldap_r-2.4.so.2 -> 
libldap_r-2.4.so.2.10.3


Unlink the file:
$ sudo unlink /lib64/libldap_r-2.4.so.2

Verify gone:
$ ls -l /lib64/libldap_r-2.4.so.2
ls: cannot access /lib64/libldap_r-2.4.so.2: No such file or directory

Attempt build again:

$ PATH=/db_demo/app/postgres/9.6.2-3/bin:$PATH make USE_PGXS=1 clean all

No errors this time! It worked, except for the following warning we've 
seen before:


/bin/ld: warning: libssl.so.1.0.0, needed by 
/db_demo/app/postgres/9.6.2-3/lib/libpq.so, may conflict with libssl.so.10


Moving onto the install (didn't need sudo, as the userid I'm using is 
the owner of the PostgreSQL software):


$ PATH=/db_demo/app/postgres/9.6.2-3/bin:$PATH make USE_PGXS=1 install

make -C sql
make[1]: Entering directory `/home/pginst/repmgr-3.3.1/sql'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/pginst/repmgr-3.3.1/sql'
/bin/mkdir -p '/db_demo/app/postgres/9.6.2-3/share/postgresql/contrib'
mkdir -p '/db_demo/app/postgres/9.6.2-3/bin'
/usr/bin/install -c  repmgrd '/db_demo/app/postgres/9.6.2-3/bin/'
/usr/bin/install -c  repmgr '/db_demo/app/postgres/9.6.2-3/bin/'
make -C sql install
make[1]: Entering directory `/home/pginst/repmgr-3.3.1/sql'
/bin/mkdir -p '/db_demo/app/postgres/9.6.2-3/lib/postgresql'
/bin/mkdir -p '/db_demo/app/postgres/9.6.2-3/share/postgresql/contrib'
/usr/bin/install -c -m 755  repmgr_funcs.so 
'/db_demo/app/postgres/9.6.2-3/lib/postgresql/repmgr_funcs.so'
/usr/bin/install -c -m 644 .//uninstall_repmgr_funcs.sql 
repmgr_funcs.sql '/db_demo/app/postgres/9.6.2-3/share/postgresql/contrib/'

make[1]: Leaving directory `/home/pginst/repmgr-3.3.1/sql'
/usr/bin/install -c -m 644 .//repmgr.sql .//uninstall_repmgr.sql 
'/db_demo/app/postgres/9.6.2-3/share/postgresql/contrib/'


Seems to have worked OK! I went looking within the postgresql tree to 
see if it had built anything:

$ find . -name *repmgr* -print

./lib/postgresql/repmgr_funcs.so
./share/contrib/repmgr.sql
./share/contrib/repmgr_funcs.sql
./share/contrib/uninstall_repmgr_func.sql
./share/contrib/uninstall_repmgr.sql
./share/contrib/uninstall_repmgr_funcs.sql
./share/postgresql/contrib/repmgr.sql
./share/postgresql/contrib/repmgr_funcs.sql
./share/postgresql/contrib/uninstall_repmgr.sql
./share/postgresql/contrib/uninstall_repmgr_funcs.sql
./bin/repmgr
./bin/repmgrd

Success!

I then relinked the library:

$ sudo ln -s /lib64/libldap_r-2.4.so.2.10.3 /lib64/libldap_r-2.4.so.2

$ ls -l the file to see if it's back:
lrwxrwxrwx 1 root root 30 May 18 15:51 /lib64/libldap_r-2.4.so.2 -> 
/lib64/libldap_r-2.4.so.2.10.3


Try executing the command from the database owner id:

$ repmgr -V

repmgr 3.3.1 (PostgreSQL 9.6.2)

Didn't barf, so hopefully relinking the file makes no difference to the 
already compiled repmgr :)


Bit of a mystery still on that linking/unlinking (if we can find a way 
to avoid doing that, that would be great) but success!


Now I get to start playing with it properly - but I'm relatively 
confident that should be OK, I was getting good results on my little 
ubuntu box at home ...


Of course, I said the same thing about compiling it :)

Huge thanks to EVERYONE who helped on this! If anyone is going to be at 
PG Day UK, I'll buy you a drink :)


Regards,

Martin.
--
Martin Goodson

"Have you thought up some clever plan, Doctor?"
"Yes, Jamie, I believe I have."
"What're you going to do?"
"Bung a rock at it."




Re: [GENERAL] storing large files in database - performance

2017-05-18 Thread Eric Hill
Thanks, Merlin - lots of good information here, and I had not yet stumbled 
across pg-large-object - I will look into it.

Eric

-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com] 
Sent: Thursday, May 18, 2017 9:49 AM
To: Eric Hill 
Cc: Thomas Kellerer ; PostgreSQL General 

Subject: Re: storing large files in database - performance

On Thu, May 18, 2017 at 7:34 AM, Eric Hill  wrote:
> I would be thrilled to get 76 MB per second, and it is comforting to know 
> that we have that as a rough upper bound on performance.  I've got work to do 
> to figure out how to approach that upper bound from Node.js.
>
> In the meantime, I've been looking at performance on the read side.  For 
> that, I can bypass all my Node.js layers and just run a query from pgAdmin 4. 
>  I ran this query, where indexFile.contents for the row in question is 25MB 
> in size.  The query itself took 4 seconds in pgAdmin 4.  Better than the 12 
> seconds I'm getting in Node.js, but still on the order of 6MB per second, not 
> 76.  Do you suppose pgAdmin 4 and I are doing similarly inefficient things in 
> querying bytea values?

Probably.  I haven't spent a lot of time with pgadmin 4 so I'm not entirely 
sure.  If you want a quick and dirty comparison, try running your query 
in psql unaligned mode for a comparison point.  You can also do \copy BINARY in 
the case of byte transfers.

The basic problem is not really the database, it's that database interaction 
APIs tend not to be directed to this kind of problem.
The big picture issues are:

*) Driver overhead marshaling from wire format to managed types

*) Driver overhead for memory management

*) Wire format issues.  Certain types are *much* faster with the binary wire 
format and are additionally much more memory efficient.
Your bytea transfers are probably being serialized to text and back in both 
directions which is very wasteful, especially for very large transfers since 
it's wasteful in terms of memory.

If I were to seriously look at node.js performance, my rough thinking is that 
I'd want to be setting up the javascript variables directly in C somehow using 
plv8 internal routines.  Short of that, I would probably be querying all data 
out of postgres in json rather than serializing individual fields (which is 
what I generally do in practice).

Another point, some googling turned up
https://www.npmjs.com/package/pg-large-object which is definitely something to 
consider trying.

merlin



Re: [GENERAL] union all taking years - PG 9.6

2017-05-18 Thread Merlin Moncure
On Mon, May 15, 2017 at 6:21 PM, Patrick B  wrote:
> I created a view selecting from both tables, with a UNION ALL between them.
> When selecting from that view, it's really slow. I can't even run explain
> analyze (it's been 1h and the query has not finished yet).

Have you ruled out locks? Let the select run and check
pg_stat_activity and pg_locks.   If everything is good there, please
paste contents of explain (without analyze) and explain analyze of the
'good' queries.
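
For instance, while the slow select is running, something along these lines
(9.6 column names) should show whether it is sitting on a lock; treat it as a
rough sketch:

  SELECT pid, state, wait_event_type, wait_event,
         now() - query_start AS runtime, query
  FROM pg_stat_activity
  WHERE state <> 'idle';

  SELECT locktype, relation::regclass, mode, granted, pid
  FROM pg_locks
  WHERE NOT granted;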

merlin




Re: [GENERAL] EnterpriseDB installed PostgreSQL 9.6 vs. REPMGR. Round 4 - compilation issues on RHEL 7.2

2017-05-18 Thread Adrian Klaver
On 05/17/2017 02:03 PM, Tom Lane wrote:
> Adrian Klaver  writes:
>> I could build repmgr against Postgres source and on Ubuntu install of
>> EDB Postgres. The issue seems to be a combination of RH and EDB Postgres
>> installation. To me it looks like ld is finding
>> /lib64/libldap_r-2.4.so.2 library before the /opt/PostgreSQL/9.6/lib/
>> one.
> 
> IIRC this is determined ultimately by /etc/ld.so.conf, but there's
> a layer of caching that might be giving trouble.  See "man 8 ld.so"
> and also "man 8 ldconfig".  Looking at the output of "ldconfig -p"
> might be informative.


Learned a lot, though not the answer. I do this:

PATH=/opt/PostgreSQL/9.6/bin:$PATH  make USE_PGXS=1 clean all

I get:

/usr/lib64/libldap_r-2.4.so.2: undefined reference to `ber_sockbuf_io_udp'

Looking at files:

ldd /usr/lib64/libldap_r-2.4.so.2

linux-vdso.so.1 =>  (0x7ffce3553000)
liblber-2.4.so.2 => /lib64/liblber-2.4.so.2 (0x7f279cd52000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x7f279cb38000)
libsasl2.so.3 => /lib64/libsasl2.so.3 (0x7f279c91a000)
libssl3.so => /lib64/libssl3.so (0x7f279c6d7000)
libsmime3.so => /lib64/libsmime3.so (0x7f279c4b)
libnss3.so => /lib64/libnss3.so (0x7f279c189000)
libnssutil3.so => /lib64/libnssutil3.so (0x7f279bf5d000)
libplds4.so => /lib64/libplds4.so (0x7f279bd59000)
libplc4.so => /lib64/libplc4.so (0x7f279bb53000)
libnspr4.so => /lib64/libnspr4.so (0x7f279b915000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f279b6f9000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f279b4f4000)
libc.so.6 => /lib64/libc.so.6 (0x7f279b133000)
libcrypt.so.1 => /lib64/libcrypt.so.1 (0x7f279aefc000)
libz.so.1 => /lib64/libz.so.1 (0x7f279ace5000)
librt.so.1 => /lib64/librt.so.1 (0x7f279aadd000)
/lib64/ld-linux-x86-64.so.2 (0x7f279d1c5000)
libfreebl3.so => /lib64/libfreebl3.so (0x7f279a8d9000)

nm -D /usr/lib64/libldap_r-2.4.so.2 | grep ber_sockbuf_io_udp
U ber_sockbuf_io_udp

nm -D /usr/lib64/liblber-2.4.so.2 | grep ber_sockbuf_io_udp
0020e020 D ber_sockbuf_io_udp

This is repeated for libldap_r-2.4.so.2 and liblber-2.4.so.2 in /lib64/

ldd /opt/PostgreSQL/9.6/lib/libldap_r-2.4.so.2 

linux-vdso.so.1 =>  (0x7ffcf5bfc000)
liblber-2.4.so.2 => /opt/PostgreSQL/9.6/lib/liblber-2.4.so.2 
(0x7f60f0e86000)
libresolv.so.2 => /lib64/libresolv.so.2 (0x7f60f0c64000)
libsasl2.so.3 => /opt/PostgreSQL/9.6/lib/libsasl2.so.3 (0x7f60f0a42000)
libssl.so.1.0.0 => /opt/PostgreSQL/9.6/lib/libssl.so.1.0.0 (0x7f60f07d3000)
libcrypto.so.1.0.0 => /opt/PostgreSQL/9.6/lib/libcrypto.so.1.0.0 
(0x7f60f039c000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f60f018)
libc.so.6 => /lib64/libc.so.6 (0x7f60efdbf000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f60efbba000)
/lib64/ld-linux-x86-64.so.2 (0x7f60f12f9000)

nm -D /opt/PostgreSQL/9.6/lib/libldap_r-2.4.so.2  | grep ber_sockbuf_io_udp

nm -D /opt/PostgreSQL/9.6/lib/liblber-2.4.so.2  | grep ber_sockbuf_io_udp

If I do:

export LD_LIBRARY_PATH=/opt/PostgreSQL/9.6/lib
PATH=/opt/PostgreSQL/9.6/bin:$PATH  make USE_PGXS=1 clean all

I get the same error:

/usr/lib64/libldap_r-2.4.so.2: undefined reference to `ber_sockbuf_io_udp'

ldd /usr/lib64/libldap_r-2.4.so.2

linux-vdso.so.1 =>  (0x7ffe49f72000)
liblber-2.4.so.2 => /opt/PostgreSQL/9.6/lib/liblber-2.4.so.2 
(0x7f8fc35e)
libresolv.so.2 => /lib64/libresolv.so.2 (0x7f8fc33be000)
libsasl2.so.3 => /opt/PostgreSQL/9.6/lib/libsasl2.so.3 (0x7f8fc319c000)
libssl3.so => /lib64/libssl3.so (0x7f8fc2f59000)
libsmime3.so => /lib64/libsmime3.so (0x7f8fc2d31000)
libnss3.so => /lib64/libnss3.so (0x7f8fc2a0b000)
libnssutil3.so => /lib64/libnssutil3.so (0x7f8fc27df000)
libplds4.so => /lib64/libplds4.so (0x7f8fc25da000)
libplc4.so => /lib64/libplc4.so (0x7f8fc23d5000)
libnspr4.so => /lib64/libnspr4.so (0x7f8fc2197000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f8fc1f7a000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f8fc1d76000)
libc.so.6 => /lib64/libc.so.6 (0x7f8fc19b5000)
libz.so.1 => /opt/PostgreSQL/9.6/lib/libz.so.1 (0x7f8fc1798000)
librt.so.1 => /lib64/librt.so.1 (0x7f8fc159)
/lib64/ld-linux-x86-64.so.2 (0x7f8fc3a4d000)


nm -D /usr/lib64/libldap_r-2.4.so.2 | grep ber_sockbuf_io_udp
 U ber_sockbuf_io_udp

nm -D /usr/lib64/liblber-2.4.so.2 | grep ber_sockbuf_io_udp
0020e020 D ber_sockbuf_io_udp

/usr/lib64/libldap_r-2.4.so.2 is a symlink:

ls -al /usr/lib64/libldap_r-2.4.so.2

lrwxrwxrwx. 1 root root 23 May 18 10:04 /usr/lib64/libldap_r-2.4.so.2 -> 
libldap_r-2.4.so.2.10.3


If I remove that link then I can compile. Have no idea why.


> 
>   regards, tom lane
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] storing large files in database - performance

2017-05-18 Thread Eric Hill
My apologies: I said I ran "this query" but failed to include the query.  It 
was merely this:

SELECT "indexFile"."_id", "indexFile"."contents"
FROM "mySchema"."indexFiles" AS "indexFile"
WHERE "indexFile"."_id" = '591c609bb56d0849404e4720';

Eric

-Original Message-
From: Eric Hill [mailto:eric.h...@jmp.com] 
Sent: Thursday, May 18, 2017 8:35 AM
To: Merlin Moncure ; Thomas Kellerer 
Cc: PostgreSQL General 
Subject: Re: storing large files in database - performance

I would be thrilled to get 76 MB per second, and it is comforting to know that 
we have that as a rough upper bound on performance.  I've got work to do to 
figure out how to approach that upper bound from Node.js.  

In the meantime, I've been looking at performance on the read side.  For that, 
I can bypass all my Node.js layers and just run a query from pgAdmin 4.  I ran 
this query, where indexFile.contents for the row in question is 25MB in size.  
The query itself took 4 seconds in pgAdmin 4.  Better than the 12 seconds I'm 
getting in Node.js, but still on the order of 6MB per second, not 76.  Do you 
suppose pgAdmin 4 and I are doing similarly inefficient things in querying 
bytea values?

Thanks,

Eric



Re: [GENERAL] storing large files in database - performance

2017-05-18 Thread Merlin Moncure
On Thu, May 18, 2017 at 7:34 AM, Eric Hill  wrote:
> I would be thrilled to get 76 MB per second, and it is comforting to know 
> that we have that as a rough upper bound on performance.  I've got work to do 
> to figure out how to approach that upper bound from Node.js.
>
> In the meantime, I've been looking at performance on the read side.  For 
> that, I can bypass all my Node.js layers and just run a query from pgAdmin 4. 
>  I ran this query, where indexFile.contents for the row in question is 25MB 
> in size.  The query itself took 4 seconds in pgAdmin 4.  Better than the 12 
> seconds I'm getting in Node.js, but still on the order of 6MB per second, not 
> 76.  Do you suppose pgAdmin 4 and I are doing similarly inefficient things in 
> querying bytea values?

Probably.  I haven't spent a lot of time with pgadmin 4 so I'm not
entirely sure.  If you want a quick and dirty comparison, try running
your query in psql unaligned mode for a comparison point.  You can also
do \copy BINARY in the case of byte transfers.
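
Something like the following, run from psql against the same row (reusing the
table and id from the query posted elsewhere in the thread; output file names
are made up), would give a rough comparison point outside of pgAdmin and Node:

  \timing on
  \a                       -- unaligned output
  \o /tmp/contents.txt     -- avoid paying the cost of rendering to the screen
  SELECT "contents" FROM "mySchema"."indexFiles" WHERE "_id" = '591c609bb56d0849404e4720';
  \o

  \copy (SELECT "contents" FROM "mySchema"."indexFiles" WHERE "_id" = '591c609bb56d0849404e4720') TO '/tmp/contents.bin' WITH (FORMAT binary)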

The basic problem is not really the database, it's that database
interaction APIs tend not to be directed to this kind of problem.
The big picture issues are:

*) Driver overhead marshaling from wire format to managed types

*) Driver overhead for memory management

*) Wire format issues.  Certain types are *much* faster with the
binary wire format and are additionally much more memory efficient.
Your bytea transfers are probably being serialized to text and back in
both directions which is very wasteful, especially for very large
transfers since it's wasteful in terms of memory.

If I were to seriously look at node.js performance, my rough thinking
is that I'd want to be setting up the javascript variables directly in
C somehow using plv8 internal routines.  Short of that, I would
probably be querying all data out of postgres in json rather than
serializing individual fields (which is what I generally do in
practice).

Another point, some googling turned up
https://www.npmjs.com/package/pg-large-object which is definitely
something to consider trying.

merlin




Re: [GENERAL] Sql server to Postgres Migration issue!

2017-05-18 Thread Adrian Klaver

On 05/18/2017 05:46 AM, Rakesh Mamidala wrote:

Hi Buddies,

I am trying to migrate Sql server database to PostgreSql, But i am 
failing with the error like MTK-03000: General Error


java.lang.ClassCastException: org.postgresql.jdbc.PgConnection cannot be 
cast to org.postgresql.jdbc2.AbstractJdbc2Connection



Please anybody can help me to resolve the issue.


This needs to be taken up with EDB, as it is their tool (EnterpriseDB 
Migration Toolkit). It looks to be an internal issue with their code and 
something only they can fix. If you have an EDB account/support plan I 
would use that. If not, then the only thing I can find is this:


https://www.enterprisedb.com/general-inquiry-form



PF attached logfile.


Thanks and Regards,
Rakesh Mamidala






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [SPAM] [GENERAL] Sql server to Postgres Migration issue!

2017-05-18 Thread Moreno Andreo

On 18/05/2017 14:46, Rakesh Mamidala wrote:

> java.lang.ClassCastException: org.postgresql.jdbc.PgConnection cannot be
> cast to org.postgresql.jdbc2.AbstractJdbc2Connection

This appears to be a non-PostgreSQL issue; more likely it is a problem with
the migration tool you are using (maybe JDBC-related).

Cheers
Moreno.

[GENERAL] Sql server to Postgres Migration issue!

2017-05-18 Thread Rakesh Mamidala
Hi Buddies,

I am trying to migrate a SQL Server database to PostgreSQL, but I am failing
with an error like MTK-03000: General Error

java.lang.ClassCastException: org.postgresql.jdbc.PgConnection cannot be
cast to org.postgresql.jdbc2.AbstractJdbc2Connection


Can anybody please help me resolve the issue?

Please find the attached logfile.


Thanks and Regards,
Rakesh Mamidala


mtk_Servicedesk_PGSQL_20170518030847.log
Description: Binary data



Re: [GENERAL] storing large files in database - performance

2017-05-18 Thread Eric Hill
I would be thrilled to get 76 MB per second, and it is comforting to know that 
we have that as a rough upper bound on performance.  I've got work to do to 
figure out how to approach that upper bound from Node.js.  

In the meantime, I've been looking at performance on the read side.  For that, 
I can bypass all my Node.js layers and just run a query from pgAdmin 4.  I ran 
this query, where indexFile.contents for the row in question is 25MB in size.  
The query itself took 4 seconds in pgAdmin 4.  Better than the 12 seconds I'm 
getting in Node.js, but still on the order of 6MB per second, not 76.  Do you 
suppose pgAdmin 4 and I are doing similarly inefficient things in querying 
bytea values?

Thanks,

Eric

-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com] 
Sent: Wednesday, May 17, 2017 10:21 AM
To: Thomas Kellerer 
Cc: PostgreSQL General 
Subject: Re: storing large files in database - performance

On Tue, May 16, 2017 at 9:51 AM, Thomas Kellerer  wrote:
> John R Pierce schrieb am 16.05.2017 um 16:44:
>> On 5/16/2017 7:35 AM, Thomas Kellerer wrote:
>>> When my (JDBC based) SQL client and the database server are on the same 
>>> computer...
>>
>> node.js is Javascript, not java w/ jdbc
>
> I know that.
>
> I mentioned JDBC so that it's clear that the timings were done using a 
> different technology
>
> Maybe it's Node.js or the JavaScript "driver" that causes the problems.

When writing large objects to the database, method of transmission will very 
much determine performance until you start hitting the natural boundaries 
imposed by the database.

via (hastily written):
#include "libpqtypes.h"
#include "stdlib.h"
#include "string.h"

int main()
{
  int s = 1024 * 1024 * 256;

  char *p = malloc(s);
  memset(p, 'x', s);
  p[s-1] = 0;

  PGconn *conn = PQconnectdb("");
  PQinitTypes(conn);

  PGresult *res = PQexecf(conn, "insert into foo values(1,%text)", p);

  if(!res)
fprintf(stderr, "*ERROR: %s\n", PQgeterror());

  PQclear(res);
}

mmoncure@mernix2 09:13 AM /tmp$ gcc -otest test.c -lpq -lpqtypes -I 
/home/mmoncure/src/libpqtypes-1.5.1/src -I /home/mmoncure/pg94/include/ -L 
/home/mmoncure/src/libpqtypes-1.5.1/.libs/
mmoncure@mernix2 09:13 AM /tmp$ psql -c "create table foo(i int, f text)"
CREATE TABLE
mmoncure@mernix2 09:13 AM /tmp$ psql -c "alter table foo alter f set storage 
external"
ALTER TABLE
mmoncure@mernix2 09:14 AM /tmp$ time
LD_LIBRARY_PATH=/home/mmoncure/src/libpqtypes-1.5.1/.libs ./test

real 0m3.245s
user 0m0.092s
sys 0m0.232s
mmoncure@mernix2 09:15 AM /tmp$ psql -c "select 
pg_size_pretty(pg_table_size('foo'))"
 pg_size_pretty
----------------
 266 MB
(1 row)


...that's over 76mb/sec (to local server) for 256mb transfer.  That's
pretty good I think.   We have a 1GB barrier on bytea/text and (at
least in C, with certain reasonable precautions) you can work comfortably under 
that limit.  There might be other better strategies but it can be done.

merlin




Re: [GENERAL] PSQL command line print speed

2017-05-18 Thread Francisco Olarte
Adrian:

On Wed, May 17, 2017 at 8:40 PM, Adrian Klaver
 wrote:
> On 05/17/2017 09:46 AM, Adrian Myers wrote:
>> Ah I should have mentioned, the pager is off.
> Is that by choice and if so why?
>
> With the pager off you have to wait for the entire output to write to the
> screen. For anything but a small dataset that is going to take time.

Are you sure? IIRC he specifically said the output started promptly but
was slow, i.e. speed, not latency problems as you are hinting.

And, IIRC again, psql uses libpq, which always buffers the entire
response to the query.

And nothing prohibits you from starting the output as soon as you can
calculate it, even with no pager there (note you have to do things
like calculate column widths, but this is done with and without a pager).
In fact, normally the pager will just introduce a small but
potentially noticeable delay.

Francisco Olarte.




Re: [GENERAL] PSQL command line print speed

2017-05-18 Thread Francisco Olarte
CCing the list.

Adrian:

On Wed, May 17, 2017 at 6:58 PM, Adrian Myers  wrote:
> That is a great question, and no, I'm not. This is on Windows and I see this
> with normal cmd.exe and ConEmu but I have been using ConEmu for some time
> and have not looked into settings there. Thanks for that suggestion.

I do not know what ConEmu is, and have been out of Windows since w2k.
But I do remember that, although Windows console windows were great
(when using their API), their terminal emulation capabilities were
not that good, and sometimes they did things like slow (aka "smooth")
scrolling and similar things which greatly impaired their performance.

I would test that ConEmu with some simple program. If you have a
classic cat it would be nice (not type, which is internal, nor any
Windows program which may be skipping stdin and going directly to the
console API); even a simple 'while((c=getc())!=EOF) putc(c);' should be
fast if the console/redirection is not playing tricks.

You can also test with a single table with a single text field, put a
several-pages-long text value in it, and select it. If it is slow, I would
bet on terminal emulator problems.
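
A quick way to set that last test up, as a sketch (made-up table name):

  CREATE TABLE scroll_test (t text);
  INSERT INTO scroll_test
  SELECT string_agg('line ' || i, E'\n') FROM generate_series(1, 5000) AS i;
  -- then, in the terminal under test, with the pager off:
  SELECT t FROM scroll_test;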

Francisco Olarte.

