Re: [GENERAL] wide row insert via Postgres jdbc driver

2014-09-26 Thread Sameer Kumar
On Tue, Sep 23, 2014 at 9:24 PM, Bill Moran wmo...@potentialtech.com
wrote:

 On Tue, 23 Sep 2014 14:12:22 +0200
 Thomas Kellerer spam_ea...@gmx.net wrote:

  Sameer Kumar schrieb am 23.09.2014 um 07:24:
   I am working with a vendor and planning to deploy their application
   on PostgreSQL as backend. They have cautioned the customer that
   PostgreSQL's jdbc driver v9.1 (build 900) has issues which cause
   deadlocks during wide record inserts.
 
  Can you be a bit more explicit?
  I have never heard the term "wide record inserts" before

 I've heard these terms before.  "Wide" generally means at least one of the
 following:

 * A large number of columns
 * At least 1 column with a lot of data

Sorry for using the generic term. Yes, the explanation is correct. When I
said "wide row", I meant tables that have bytea columns.



 Of course, both of those criteria are incredibly subjective.  How many columns
 is "a large number"?  How much data is "a lot"?

 It generally boils down to the fact that pushing a lot of data (whether many
 columns or a single column with a lot of data) takes longer than pushing small
 amounts of data (big surprise), and as a result the statistical chance that
 the operation will collide with a conflicting operation (causing, in this case,
 a deadlock) is increased.

 I guess I understand the explanation here.
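
To see what is actually colliding when it happens, I will probably start by
turning on lock logging on the test system. A minimal postgresql.conf sketch
(these are just the settings I would try first, not the vendor's recommendation):

    log_lock_waits = on        # log any lock wait longer than deadlock_timeout
    deadlock_timeout = 1s      # default; also controls when deadlock detection runs
    log_statement = 'mod'      # optionally log the INSERTs/UPDATEs involved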


 As you mention, it's usually something that people with poorly written
 applications complain about.  I.e. "our application works just fine in our test
 environment, so your server must be too slow ... get a faster server."



 Of course, the real problem is that the application was written with a large
 number of braindead assumptions (things will always be fast; our tests never
 encounter deadlocks, so they can't happen; etc.)  I've dealt directly with this
 back in my consulting days: clients who insisted that the correct way to fix
 their crashes was to buy faster hardware.  The annoying thing is that such an
 approach _appears_ to fix the problem, because the faster hardware causes the
 chance of the problem occurring to be less, and in the minds of people who don't
 understand concurrent programming, that's "fixed".

 There is a very large amount of really badly written software out there.

Can't agree more... :)

Let me get back to the vendor with this. I am sure they are not going to
like it (no one likes to admit they are wrong). :)

Best Regards,

Sameer Kumar | Database Consultant

ASHNIK PTE. LTD.

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: +65 8110 0350  T: +65 6438 3504 | www.ashnik.com






Re: [GENERAL] [SQL] pg_multixact issues

2014-09-26 Thread Dev Kumkar
On Fri, Sep 19, 2014 at 1:23 PM, Dev Kumkar devdas.kum...@gmail.com wrote:

 Apologies for the delay, was working/troubleshooting same issue and was
 away from my emails. :(
 Regards...


Received the database with a huge pg_multixact directory of size 21G;
there are ~82,000 files in pg_multixact/members and 202 files in the
pg_multixact/offsets directory.

I did run VACUUM FULL on this database and it was successful. However, I am now
not sure about the pg_multixact directory. Truncating this directory except the
 file results in database start-up issues; of course this is not the
correct way of truncating.
 FATAL:  could not access status of transaction 13224692

Stumped! Please provide some comments on how to truncate pg_multixact
files, and whether these files have any impact on database
performance.
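
For reference, this is roughly what I am trying now, based on my reading of the
docs (a sketch only; column and command names as I understand them for 9.3):

    -- how far back the oldest multixact still needed by each database goes
    SELECT datname, datminmxid FROM pg_database ORDER BY datminmxid;

    -- freeze aggressively so relminmxid/datminmxid can advance,
    -- run per database (or: vacuumdb --all --freeze)
    VACUUM FREEZE;

My understanding is that the server removes old pg_multixact/members and
offsets segments itself once every database's datminmxid has moved past them;
deleting the files by hand is what produces the "could not access status of
transaction" failure above.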

Regards...


[GENERAL] Postgres Standby

2014-09-26 Thread Derek Giri


-- 
Derek Giri




[GENERAL] Procedure after failover

2014-09-26 Thread Paul Jungwirth
Hi All,

I have Postgres 9.3 on Ubuntu 14.04 set up in a master/slave
configuration with streaming replication. On the master I ran `sudo
service postgresql stop` and then on the slave I ran `sudo touch
$trigger_file`. Now the slave seems to be running fine, but I'm trying
to figure out the process for getting things back to normal. I think
it is roughly like this, but I'd love for someone to confirm:

1. Change each server's {postgresql,recovery}.conf so the (old) slave
will replicate back to the (old) master. Restart the (old) slave, then
start the (old) master.
2. Once the (old) master has caught up, run `sudo service postgresql
stop` on the (old) slave, then `sudo touch $trigger_file` on the (old)
master. Now the (old) master is a master again.
3. Change each server's {postgresql,recovery}.conf files to their
original settings. Restart the master, then start the slave.

Will this work?

What if there were changes on the master that didn't get replicated
before I originally shut it down? (Or does using init.d delay shutdown
until all WAL updates have made it out?)

Is there a better way to do it? Do I need to wipe the (old) master and
use pg_dump/pg_restore before I bring it back up?
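
(For completeness, the fallback I have in mind if re-attaching the old master
isn't safe is to re-seed it from the new master with pg_basebackup; hostnames,
role names, and paths below are placeholders, not my real config:)

    # on the old master, with postgres stopped and the old data directory moved aside
    pg_basebackup -h new-master.example.com -U replication_user \
        -D /var/lib/postgresql/9.3/main -P --xlog-method=stream
    # then add a recovery.conf pointing at the current master and start postgres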

If it helps, here is my postgresql.conf on the master:

archive_mode = on
archive_command = 'rsync -aq -e ssh -o StrictHostKeyChecking=no %p
10.0.21.10:/secure/pgsql/archive/%f'
archive_timeout = 3600

Here is postgresql.conf on the slave:

hot_standby = on

and recovery.conf on the slave:

standby_mode = 'on'
primary_conninfo = 'XXX'
trigger_file = '/secure/pgsql/main/trigger'
restore_command = 'cp /secure/pgsql/archive/%f %p'
archive_cleanup_command =
'/usr/lib/postgresql/9.3/bin/pg_archivecleanup /secure/pgsql/archive/
%r'

Thanks,
Paul

-- 
_
Pulchritudo splendor veritatis.




Re: [GENERAL] Procedure after failover

2014-09-26 Thread Paul Jungwirth
A bit more info:

 What if there were changes on the master that didn't get replicated
 before I originally shut it down?

It looks like Ubuntu's init.d script does a fast shutdown, i.e.
SIGINT on this page:

http://www.postgresql.org/docs/9.3/static/server-shutdown.html

I can't tell from the doc what happens re WAL archives though. Is that
what the page means by "online backup mode"? My suspicion is that
because I shut down the master fast, I'm going to have to wipe it
and then pg_restore it from the slave, because it might have data that
never made it out to the slave. Is that correct?
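
In case it's useful, the rough check I'm planning to run (field and function
names per the 9.3 docs; $PGDATA stands in for the old master's data directory):

    # on the stopped old master: roughly the last WAL position it reached before shutdown
    /usr/lib/postgresql/9.3/bin/pg_controldata $PGDATA | grep 'Latest checkpoint location'

    # on the promoted slave: the last WAL position it actually replayed
    psql -c 'SELECT pg_last_xlog_replay_location();'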

Thanks,
Paul

-- 
_
Pulchritudo splendor veritatis.




[GENERAL] PANIC: could not create file pg_xlog/xlogtemp.7884: No space left on device

2014-09-26 Thread Roopeshakumar Narayansa Shalgar (rshalgar)
Hi,

I am using version 9.3.1 and see the "no space device" error even though there
is enough space (99% free) on my disk.

Error: PANIC:  could not create file pg_xlog/xlogtemp.7884: No space left on 
device

I want to know whether this is a bug and if yes whether fix or workaround is 
available.
I am using the default configuration mentioned in the postgresql.conf file.

bash-3.2# du -h
4.0K    ./pg_stat
6.2M    ./base/12026
198M    ./base/16384
6.3M    ./base/12031
6.3M    ./base/1
216M    ./base
24K     ./pg_stat_tmp
4.0K    ./pg_snapshots
4.0K    ./pg_serial
12K     ./pg_multixact/offsets
12K     ./pg_multixact/members
28K     ./pg_multixact
448K    ./global
4.0K    ./pg_xlog/archive_status
129M    ./pg_xlog
4.0K    ./pg_tblspc
12K     ./pg_notify
4.0K    ./pg_twophase
128K    ./pg_clog
4.0K    ./pg_subtrans





Re: [GENERAL] PANIC: could not create file pg_xlog/xlogtemp.7884: No space left on device

2014-09-26 Thread Raymond O'Donnell
On 25/09/2014 11:44, Roopeshakumar Narayansa Shalgar (rshalgar) wrote:
 Hi,
 
  
 
 I am using version 9.3.1 and see the “no space device error” even though
 there is enough space (99% free) on my disk.
 
  
 
 Error: PANIC:  could not create file pg_xlog/xlogtemp.7884: No space
 left on device
 
  
 
 I want to know whether this is a bug and if yes whether fix or
 workaround is available.

It's not a bug - you're out of disk space. As it happens, I saw a blog
post yesterday on precisely this problem, and approaches to fixing it:

http://blog.endpoint.com/2014/09/pgxlog-disk-space-problem-on-postgres.html


HTH,

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie





[GENERAL] Re: PANIC: could not create file pg_xlog/xlogtemp.7884: No space left on device

2014-09-26 Thread David G Johnston
Roopeshakumar Narayansa Shalgar (rshalgar) wrote
 bash-3.2# du -h
 4.0K    ./pg_stat
 6.2M    ./base/12026
 198M    ./base/16384
 6.3M    ./base/12031
 6.3M    ./base/1
 216M    ./base
 24K     ./pg_stat_tmp
 [...]
 129M    ./pg_xlog
 4.0K    ./pg_tblspc
 12K     ./pg_notify
 4.0K    ./pg_twophase
 128K    ./pg_clog
 4.0K    ./pg_subtrans

Did you maybe mean to provide the output of df instead of du?  All you
show above is the size of the postgres data directory - which I doubt is the
only thing on the partition in question.
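
Something along these lines would show the relevant numbers - and note that
"No space left on device" can also mean the filesystem is out of inodes, so the
-i form is worth checking too (the path below is a placeholder for wherever
pg_xlog actually lives):

    df -h /var/lib/pgsql/data/pg_xlog    # free blocks on that filesystem
    df -i /var/lib/pgsql/data/pg_xlog    # free inodes; ENOSPC is raised for this too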

David J.







[GENERAL] password in recovery.conf

2014-09-26 Thread Nelson Green
Hello all,

I am setting up a streaming replication stand-by, and the replication role
password has a single quote in it. I am unable to properly reference the
password in the conninfo setting of recovery.conf so it will authenticate
to the master. Doubling the quote gives me a syntax error, and escaping it
or quoting it with double-quotes gives me an authentication error. The
password is correct because I can copy it from the recovery.conf and supply
it when prompted by pg_basebackup, so if I may, what is the proper way to
handle single quotes within the conninfo string?

Obviously I can change the password, but we use an automated password
generator so I'd like to not have to keep generating passwords, and
checking them, until I get one that will work, unless that is my only option.

Thanks,
Nelson


Re: [GENERAL] password in recovery.conf

2014-09-26 Thread Bosco Rama
On 09/26/14 12:58, Nelson Green wrote:

 I am setting up a streaming replication stand-by, and the replication
 role password has a single quote in it. I am unable to properly
 reference the password in the conninfo setting of recovery.conf so it
 will authenticate to the master. Doubling the quote gives me a syntax
 error, and escaping it or quoting it with double-quotes gives me an
 authentication error.

You may have to double it twice -- once for recovery.conf and once for
the actual usage in the connection.

Thus for password abc'123 you would want to use:
   'user=user_name password=abc''''123 host=primary_host'

Or possibly even a combination of doubling and escaping:
   'user=user_name password=abc\''123 host=primary_host'
or:
   'user=user_name password=abc\\''123 host=primary_host'

This is just conjecture.  I don't use this method of recovery myself.

HTH.

Bosco.




[GENERAL] advice sought - general approaches to optimizing queries around event streams

2014-09-26 Thread Jonathan Vanasco

I have a growing database with millions of rows that track resources against an 
event stream.  

I have a few handfuls of queries that interact with this stream in a variety of 
ways, and I have managed to drop things down from 70s to 3.5s on full scans and 
to 0.05s on partial scans.  

No matter how I restructure queries, I can't seem to get around a few 
bottlenecks and I wanted to know if there were any tips/tricks from the 
community on how to approach them.  

a simple form of my database would be:

--  1k of
create table stream (
    id int not null primary key
);

-- 1MM of
create table resource (
    id int not null primary key,
    col_a bool,
    col_b bool,
    col_c text
);

-- 10MM of
create table streamevent (
    id int not null,
    event_timestamp timestamp not null,
    stream_id int not null references stream(id)
);

-- 10MM of
create table resource_2_stream_event (
    resource_id int not null references resource(id),
    streamevent_id int not null references streamevent(id)
);

Everything is running off of indexes; there are no seq scans.

I've managed to optimize my queries by avoiding joins against tables, and 
turning the stream interaction into a subquery or CTE.  
Better performance has come from limiting the number of stream events 
(which are only the timestamp and resource_id off a joined table).

The bottlenecks I've encountered have primarily been:

1.  When interacting with a stream, the ordering of event_timestamp and 
deduplicating of resources becomes an issue.
I've figured out a novel way to work with the most recent events, but 
distant events are troublesome

using no limit, the query takes 3500 ms
using a limit of 1, the query takes 320ms
using a limit of 1000, the query takes 20ms

there is a dedicated index on event_timestamp (desc), and it is being used
according to the planner... finding all the records is fine; 
merging-into and sorting the aggregate to handle the deduplication of records 
in a stream seems to be the issue (either with DISTINCT or max+group_by)
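
	for what it's worth, the shape of the dedup query is roughly this (table
and column names simplified; a sketch, not my literal SQL):

	-- newest event per resource within one stream; re-sort outside this
	-- if the final output needs to be in time order
	SELECT DISTINCT ON (r2se.resource_id)
	       r2se.resource_id, se.event_timestamp
	FROM streamevent se
	JOIN resource_2_stream_event r2se ON r2se.streamevent_id = se.id
	WHERE se.stream_id = 1
	ORDER BY r2se.resource_id, se.event_timestamp DESC
	LIMIT 1000;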


2.  I can't figure out an effective way to search for a term against an 
entire stream (using a tsquery/gin based search)

I thought about limiting the query by finding matching resources first, 
then locking it to an event stream, but:
- scanning the entire table for a term takes about 10 seconds 
on an initial hit.  subsequent queries for the same terms end up using the 
cache, and complete within 20ms.

I get better search performance by calculating the event stream, then 
searching it for matching documents, but I still have the performance issues 
related to limiting the window of events
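
	the term search itself is roughly this shape (again simplified names;
the expression index is the one I created for the tsquery side):

	CREATE INDEX idx_resource_fts
	    ON resource USING gin (to_tsvector('english', col_c));

	SELECT se.event_timestamp, r.id
	FROM streamevent se
	JOIN resource_2_stream_event r2se ON r2se.streamevent_id = se.id
	JOIN resource r ON r.id = r2se.resource_id
	WHERE se.stream_id = 1
	  AND to_tsvector('english', r.col_c) @@ plainto_tsquery('english', 'term');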

I didn't include example queries, because I'm more concerned with the general 
approaches and ideas behind dealing with large data sets than I am with raw SQL 
right now.  

I'm hoping someone can enlighten me about new ways to solve these 
problems.  I think I've learned more about postgres/sql in the past 48 hours 
than I have in the past 15 years, and I'm pretty sure that the improvements I 
need will come from new ways of querying data, rather than optimizing the 
current queries.




Re: [GENERAL] advice sought - general approaches to optimizing queries around event streams

2014-09-26 Thread Gavin Flower

On 27/09/14 09:02, Jonathan Vanasco wrote:

I have a growing database with millions of rows that track resources against an 
event stream.

i have a few handfuls of queries that interact with this stream in a variety of 
ways, and I have managed to drop things down from 70s to 3.5s on full scans and 
offer .05s partial scans.

no matter how i restructure queries, I can't seem to get around a few 
bottlenecks and I wanted to know if there were any tips/tricks from the 
community on how to approach them.

a simple form of my database would be:

--  1k of
create table stream (
id int not null primary key,
)

-- 1MM of
create table resource (
id int not null primary key,
col_a bool,
col_b bool,
col_c text,
);

-- 10MM of
create table streamevent (
id int not null,
event_timestamp timestamp not null,
stream_id int not null references stream(id)
);

-- 10MM of
create table resource_2_stream_event(
resource_id int not null references resource(id),
streamevent_id int not null references streamevent(id)
)

Everything is running off of indexes; there are no seq scans.

I've managed to optimize my queries by avoiding joins against tables, and 
turning the stream interaction into a subquery or CTE.
better performance has come from limiting the number of stream events  ( 
which are only the timestamp and resource_id off a joined table )

The bottlenecks I've encountered have primarily been:

1.  When interacting with a stream, the ordering of event_timestamp and 
deduplicating of resources becomes an issue.
I've figured out a novel way to work with the most recent events, but 
distant events are troublesome

using no limit, the query takes 3500 ms
using a limit of 1, the query takes 320ms
using a limit of 1000, the query takes 20ms

there is a dedicated index of on event_timestamp (desc) , and it is 
being used
according to the planner... finding all the records is fine; 
merging-into and sorting the aggregate to handle the deduplication of records 
in a stream seems to be the issue (either with DISTINCT or max+group_by)


2.  I can't figure out an effective way to search for a term against an 
entire stream (using a tsquery/gin based search)

I thought about limiting the query by finding matching resources first, 
then locking it to an event stream, but:
- scanning the entire table for a term takes about 10 seconds 
on an initial hit.  subsequent queries for the same terms end up using the 
cache, and complete within 20ms.

I get better search performance by calculating the event stream, then 
searching it for matching documents, but I still have the performance issues 
related to limiting the window of events

i didn't include example queries, because I'm more concerned with the general 
approaches and ideas behind dealing with large data sets than i am with raw SQL 
right now.

i'm hoping someone can enlighten me into looking at new ways to solve these 
problems.   i think i've learned more about postgres/sql in the past 48hour 
than I have in the past 15 years, and I'm pretty sure that the improvements I 
need will come from new ways of querying data , rather than optimizing the 
current queries.

Minor point: when specifying PRIMARY KEY, you don't need to also put NOT 
NULL (this should make no change to performance).


I notice that the 'id' of 'streamevent' is not marked as a PRIMARY KEY, 
so it will not have an index associated with it - hence referencing it 
as a foreign key might be slower than expected.
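
I.e. something along these lines (untested, and the composite key on the join
table is just my usual habit, not something you asked about):

ALTER TABLE streamevent ADD PRIMARY KEY (id);

-- the join table usually benefits from its own composite key as well:
ALTER TABLE resource_2_stream_event
    ADD PRIMARY KEY (resource_id, streamevent_id);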



Cheers,
Gavin




Re: [GENERAL] password in recovery.conf

2014-09-26 Thread DrakoRod
Hi! 

Have you tried escaping the single or double quote? Maybe this information can
help you:

http://stackoverflow.com/questions/12316953/insert-varchar-with-single-quotes-in-postgresql
http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html

Best Regards! 





-
Dame un poco de fe, eso me bastará.
Rozvo Ware Solutions 


Re: [GENERAL] advice sought - general approaches to optimizing queries around event streams

2014-09-26 Thread Brent Wood
We have a similar timeseries database approaching 500m records.

We partition the main tables (much like your events) into one year subsets, 
with a clustered index on timestamp for all but the live year.

https://blog.engineyard.com/2013/scaling-postgresql-performance-table-partitioning
http://www.postgresql.org/docs/9.3/static/ddl-partitioning.html
http://www.postgresql.org/docs/9.3/static/sql-cluster.html
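
A bare-bones sketch of the inheritance-style partitioning those pages describe,
using your streamevent table as the example (all names illustrative):

    CREATE TABLE streamevent_2013 (
        CHECK ( event_timestamp >= '2013-01-01' AND event_timestamp < '2014-01-01' )
    ) INHERITS (streamevent);

    CREATE INDEX streamevent_2013_ts_idx
        ON streamevent_2013 (event_timestamp DESC);

    -- physically order the closed-out year by its timestamp index
    CLUSTER streamevent_2013 USING streamevent_2013_ts_idx;

    -- constraint_exclusion = partition (the default) lets the planner skip
    -- partitions whose CHECK constraint rules them out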

As discussed here previously, you can also improve performance using hardware - 
SSDs vs spindles. Also note that tablespaces, with indexes on your faster 
drives & data on slower ones, can improve performance.

http://www.postgresql.org/docs/9.3/static/manage-ag-tablespaces.html

Also make sure your db server is optimised for the database size & hardware 
configuration - like perhaps allow fewer concurrent users, but more resources 
per user, or see what pgtune recommends.

This should help your performance, in terms of underlying db efficiency & 
performance, rather than tweaking your actual queries.

Brent Wood

Programme leader: Environmental Information Delivery
NIWA
DDI:  +64 (4) 3860529

Brent Wood
Principal Technician - GIS and Spatial Data Management
Programme Leader - Environmental Information Delivery
+64-4-386-0529 | 301 Evans Bay Parade, Greta Point, Wellington |
www.niwa.co.nz

From: pgsql-general-ow...@postgresql.org pgsql-general-ow...@postgresql.org 
on behalf of Jonathan Vanasco postg...@2xlp.com
Sent: Saturday, September 27, 2014 9:02 AM
To: PostgreSQL general
Subject: [GENERAL] advice sought - general approaches to optimizing queries 
around event streams

I have a growing database with millions of rows that track resources against an 
event stream.

i have a few handfuls of queries that interact with this stream in a variety of 
ways, and I have managed to drop things down from 70s to 3.5s on full scans and 
offer .05s partial scans.

no matter how i restructure queries, I can't seem to get around a few 
bottlenecks and I wanted to know if there were any tips/tricks from the 
community on how to approach them.

a simple form of my database would be:

   --  1k of
   create table stream (
   id int not null primary key,
   )

   -- 1MM of
   create table resource (
   id int not null primary key,
   col_a bool,
   col_b bool,
   col_c text,
   );

   -- 10MM of
   create table streamevent (
   id int not null,
   event_timestamp timestamp not null,
   stream_id int not null references stream(id)
   );

   -- 10MM of
   create table resource_2_stream_event(
   resource_id int not null references resource(id),
   streamevent_id int not null references streamevent(id)
   )

Everything is running off of indexes; there are no seq scans.

I've managed to optimize my queries by avoiding joins against tables, and 
turning the stream interaction into a subquery or CTE.
better performance has come from limiting the number of stream events  ( 
which are only the timestamp and resource_id off a joined table )

The bottlenecks I've encountered have primarily been:

1.  When interacting with a stream, the ordering of event_timestamp and 
deduplicating of resources becomes an issue.
   I've figured out a novel way to work with the most recent events, but 
distant events are troublesome

   using no limit, the query takes 3500 ms
   using a limit of 1, the query takes 320ms
   using a limit of 1000, the query takes 20ms

   there is a dedicated index of on event_timestamp (desc) , and it is 
being used
   according to the planner... finding all the records is fine; 
merging-into and sorting the aggregate to handle the deduplication of records 
in a stream seems to be the issue (either with DISTINCT or max+group_by)


2.  I can't figure out an effective way to search for a term against an 
entire stream (using a tsquery/gin based search)

   I thought about limiting the query by finding matching resources first, 
then locking it to an event stream, but:
   - scanning the entire table for a term takes about 10 seconds on 
an initial hit.  subsequent queries for the same terms end up using the cache, 
and complete within 20ms.

   I get better search performance by calculating the event stream, then 
searching it for matching documents, but I still have the performance issues 
related to limiting the window of events

i didn't include example queries, because I'm more concerned with the general 
approaches and ideas behind dealing with large data sets than i am with raw SQL 
right now.

i'm hoping someone can enlighten me into looking at new ways to solve these 
problems.   i think i've learned more about postgres/sql in the past 48hour 
than I have in the past 15 years, and I'm pretty sure that the 

Re: [GENERAL] password in recovery.conf

2014-09-26 Thread Adrian Klaver

On 09/26/2014 12:58 PM, Nelson Green wrote:

Hello all,

I am setting up a streaming replication stand-by, and the replication
role password has a single quote in it. I am unable to properly
reference the password in the conninfo setting of recovery.conf so it
will authenticate to the master. Doubling the quote gives me a syntax
error, and escaping it or quoting it with double-quotes gives me an
authentication error. The password is correct because I can copy it from
the recovery.conf and supply it when prompted by pg_basebackup, so if I
may, what is the proper way to handle single quotes within the conninfo
string?



Doubling the quote seems to work here.

aklaver@panda:~> psql 'dbname=test user=test_user password=test''pwd'
psql (9.0.17)
Type "help" for help.

test=>

What is the syntax error you get?

Another option:

http://www.postgresql.org/docs/9.3/static/standby-settings.html

 A password needs to be provided too, if the primary demands password 
authentication. It can be provided in the primary_conninfo string, or in 
a separate ~/.pgpass file on the standby server (use "replication" as the 
database name)


So you might look at setting up a .pgpass file
(http://www.postgresql.org/docs/9.3/static/libpq-pgpass.html).
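
A .pgpass line for the earlier example (password abc'123, user user_name) would
look something like this - only backslash and colon need escaping in this file,
and a single quote is fine as-is:

# hostname:port:database:username:password   (file must be mode 0600)
primary_host:5432:replication:user_name:abc'123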




Obviously I can change the password, but we use an automated password
generator so I'd like to not have to keep generating passwords, and
checking them, until I get one that will work, unless that my only option.

Thanks,
Nelson



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] password in recovery.conf [SOLVED]

2014-09-26 Thread Nelson Green
On Fri, Sep 26, 2014 at 5:51 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 09/26/2014 12:58 PM, Nelson Green wrote:

 Hello all,

 I am setting up a streaming replication stand-by, and the replication
 role password has a single quote in it. I am unable to properly
 reference the password in the conninfo setting of recovery.conf so it
 will authenticate to the master. Doubling the quote gives me a syntax
 error, and escaping it or quoting it with double-quotes gives me an
 authentication error. The password is correct because I can copy it from
 the recovery.conf and supply it when prompted by pg_basebackup, so if I
 may, what is the proper way to handle single quotes within the conninfo
 string?



 Doubling the quote seems to work here.


Thanks Bosco, DrakoRod, and Adrian. Between the three of you it became obvious
that I was doing something wrong. And yes, in the end you were right: doubling
the quote does indeed work.

It turns out this particular password also had a \ in it, and my console
width wrapped right before it, putting it as the first character on the next
line, where I just didn't notice it until a few minutes ago. I changed that to
a ^ for the time being, and then doubled the quote, whereupon it all worked. I
will certainly look into how to escape the backslash too, but that's for next
week at this point.

Apologies for the noise. Just been one of those days.

Thanks,
Nelson


Re: [GENERAL] password in recovery.conf [SOLVED]

2014-09-26 Thread John R Pierce

On 9/26/2014 4:32 PM, Nelson Green wrote:


Thanks Bosco, DrakoRod, and Adrian. Between the three of you it became 
obvious
that I was doing something wrong. And yes, in the end you were right. 
Doubling

the quote does indeed work.

It turns out it this particular password also had a \ in it, and my 
console
width wrapped right before it, putting it as the first character on 
the next
line, where I just didn't notice it until a few minutes ago. I changed 
that to
a ^ for the time being, and then doubled the quote whereupon it all 
worked. I
will certainly look into how to escape the backslash too, but that's 
for next

week at this point.


I'd consider using `mkpasswd -l 15 -s 0`  just to avoid any such 
problems.   15 random alphanumerics is already plenty complex, 62^15th 
possible combinations, without needing to mix in special characters.


$ mkpasswd -l 15 -s 0
eec1kj7ZsthlYmh


--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] password in recovery.conf [SOLVED]

2014-09-26 Thread Adrian Klaver

On 09/26/2014 04:32 PM, Nelson Green wrote:

On Fri, Sep 26, 2014 at 5:51 PM, Adrian Klaver



Doubling the quote seems to work here.


Thanks Bosco, DrakoRod, and Adrian. Between the three of you it became
obvious
that I was doing something wrong. And yes, in the end you were right.
Doubling
the quote does indeed work.

It turns out it this particular password also had a \ in it, and my console
width wrapped right before it, putting it as the first character on the next
line, where I just didn't notice it until a few minutes ago. I changed
that to
a ^ for the time being, and then doubled the quote whereupon it all
worked. I
will certainly look into how to escape the backslash too, but that's for
next
week at this point.


aklaver@panda:~> psql 'dbname=test user=test_user password=test\\pwd'
psql (9.0.17)
Type "help" for help.

test=>




Apologies for the noise. Just been one of those days.

Thanks,
Nelson



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] password in recovery.conf [SOLVED]

2014-09-26 Thread John R Pierce

On 9/26/2014 4:40 PM, John R Pierce wrote:
I'd consider using `mkpasswd -l 15 -s 0`  just to avoid any such 
problems.   15 random alphanumerics is already plenty complex, 62^15th 
possible combinations, without needing to mix in special characters.


$ mkpasswd -l 15 -s 0
eec1kj7ZsthlYmh


btw, that's 768,909,700,000,000,000,000,000,000 possible passwords: 768 
septillion, using the American 'short scale' naming convention.  If you 
could brute force try 10,000/second, it would merely take 
24,365,800,000,000 centuries (24 trillion).


--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] password in recovery.conf [SOLVED]

2014-09-26 Thread Gavin Flower

On 27/09/14 11:56, John R Pierce wrote:

On 9/26/2014 4:40 PM, John R Pierce wrote:
I'd consider using `mkpasswd -l 15 -s 0` just to avoid any such 
problems.   15 random alphanumerics is already plenty complex, 
62^15th possible combinations, without needing to mix in special 
characters.


$ mkpasswd -l 15 -s 0
eec1kj7ZsthlYmh


btw, that's 768,909,700,000,000,000,000,000,000 possible passwords: 768 
septillion, using the American 'short scale' naming convention.  If 
you could brute force try 10,000/second, it would merely take 
24,365,800,000,000 centuries (24 trillion).



So do you think a password like Nxw7TnC2^}%(}tEz is strong enough?  :-)

I developed a Java program that generates 20 passwords (each of 16 
characters) at a time; I've attached it for anyone who might be 
interested.  I have put it under the GPL version 3, but I might consider 
releasing it under other licences.



Cheers,
Gavin
package gcf.misc;

/**
 * Copyright © 2012 Gavin C. Flower
 * 
 * author: gavin.flo...@archidevsys.co.nz
 * 
 * This program is free software: you can redistribute it and/or modify it under
 * the terms of the GNU General Public License as published by the Free Software
 * Foundation, either version 3 of the License, or (at your option) any later
 * version.
 * 
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
 * FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
 * details.
 * 
 * For full details of the license see http://www.gnu.org/licenses/.
 */

import java.security.SecureRandom;

public class AppPasswordGenerator
{
private final static int PASSWORD_LENGTH = 16;

private final static int MAX_INDEX = PASSWORD_LENGTH - 1;

/*
 * We avoid ambiguous characters, so you won't get 'I1|l', 'B8', 'S5', or
 * 'O0' being produced
 */

private static String DIGITS = "23456789";

private static String SPECIAL = "!@#$%^*()_+{}[].:";

private static String UPPER = "ACDEFGHJKLMNPQRTVWXY";

private static String LOWER = "abcdefghijklmnopqrstuvwxyz";

private static String FULL = DIGITS + SPECIAL + UPPER + LOWER;

private final StringBuilder SB = new StringBuilder(PASSWORD_LENGTH);

SecureRandom secureRandom = new SecureRandom();

AppPasswordGenerator()
{
/*
 * This is way more complicated than it needs to be for the current
 * application, but it was fun coding it!
 * 
 * The use of sin() & exp() introduces a semi-random delay in obtaining
 * the current time in nano seconds as well as returning values to act
 * as additional randomising factors.
 */
long nanoA = System.nanoTime();
double sinVal = Math.sin(nanoA);
long nanoB = System.nanoTime();
double expVal = Math.exp(sinVal);
long nanoC = System.nanoTime();
int shift = (int) nanoB & 0x3F;
long rotation = Long.rotateRight(nanoC, shift);
long rawBits = Double.doubleToRawLongBits(expVal);
long seed = rotation ^ rawBits;
secureRandom.setSeed(seed);

// System.out.printf("nanoA: %016X\n", nanoA);
// System.out.printf("   sinVal: %16.13f\n", sinVal);
// System.out.printf("nanoB: %016X\n", nanoB);
// System.out.printf("   expVal: %16.13f\n", expVal);
// System.out.printf("nanoC: %016X\n", nanoC);
// System.out.printf("shift: %16d\n", shift);
// System.out.printf("  rawBits: %016X\n", rawBits);
// System.out.printf(" rotation: %016X\n", rotation);
// System.out.printf(" seed: %016X\n", seed);
// System.out.printf("FULL.length(): %16d\n", FULL.length());
}

public static void main(String[] args)
{
AppPasswordGenerator appPasswordGenerator = new AppPasswordGenerator();
appPasswordGenerator.go();
}

private void go()
{
assert PASSWORD_LENGTH > 5; // Actually, later code assumes 16...

for (int i = 0; i < 20; i++)
{
printAPassword();
}
}

private void printAPassword()
{
addChar(DIGITS);
addChar(DIGITS);
addChar(SPECIAL);
addChar(UPPER);
addChar(LOWER);

for (int ii = SB.length(); ii < PASSWORD_LENGTH; ii++)
{
addChar(FULL);
}

// Randomise password characters
for (int index_a = 0; index_a < PASSWORD_LENGTH; index_a++)
{
char ca = SB.charAt(index_a);
int index_b = secureRandom.nextInt(PASSWORD_LENGTH);
char cb = SB.charAt(index_b);
SB.setCharAt(index_b, ca);
SB.setCharAt(index_a, cb);
}

// Ensure the last character is not a digit
while (Character.isDigit(SB.charAt(MAX_INDEX)))
{
int index = secureRandom.nextInt(MAX_INDEX);
char ca = SB.charAt(MAX_INDEX);
char cb = SB.charAt(index);

[GENERAL] Window function with valued based range frames?

2014-09-26 Thread David G Johnston
ex.  avg(value) over (order by date range interval '6' day preceding)

Source:
http://wwwlgis.informatik.uni-kl.de/cms/fileadmin/courses/SS2008/NEDM/RDDM.Chapter.06.Windows_and_Query_Functions_in_SQL.pdf

Page 9, slide 17

The SELECT SQL command page mentions standard aspects of window frame
clauses not being supported but is not specific (supposedly in order to
have people ask for these things).

Just looking for recollection regarding why these were omitted initially and
whether anyone has considered adding them in follow-up.  With the recent
hypothetical-set aggregate work being done maybe these can be re-evaluated in a
fresh light?  They (whatever "they" is) are standard and do seem generally useful.

I don't personally have an immediate need, but I have been pondering
moving-average-related window queries and how performant they are in PostgreSQL
versus possible alternative calculation means, and came across this
presentation.
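
For the record, the kind of rewrite I've been pondering in the absence of
value-based RANGE frames is roughly this (a sketch only; table t(d date,
value numeric) is hypothetical):

SELECT t.d, t.value, w.avg_7d
FROM t,
     LATERAL (SELECT avg(t2.value) AS avg_7d
              FROM t t2
              WHERE t2.d BETWEEN t.d - interval '6 days' AND t.d) w;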

David J.






Re: [GENERAL] Window function with valued based range frames?

2014-09-26 Thread Tom Lane
David G Johnston david.g.johns...@gmail.com writes:
 Just looking for recollection regarding why these were omitted initially and
 if anyone has concerned adding them in follow-up.

My recollection is that RANGE requires some data-type-specific behavior
that we don't have any provision for in PG's datatype extension framework
(something about increment/decrement I think, but too lazy to consult the
archives for details).  The original window-function patch had some klugy
hard-wired behavior for a small set of datatypes, which we quite properly
rejected as not being up to project standards: datatype extensibility is
one of PG's defining features, and we're not going to give it up lightly.
Nobody's yet done the work to get something that would pass muster.

 With the recent
 hypothetical work being done maybe these can be re-evaluated in a fresh
 light?

AFAIK those functions are unrelated to this problem.

regards, tom lane

