Re: [GENERAL] max number of locks

2015-07-06 Thread Fabio Pardi
Hi Adrian,

thanks for your reply,

On 07/03/2015 02:52 PM, Adrian Klaver wrote:
 On 07/03/2015 03:14 AM, Fabio Pardi wrote:
 Hi,


 while experimenting with the number of locks, I found something I cannot
 understand.

  From what I can read in the documentation, at any given time, a
 query can obtain a maximum number of locks given by

 max_locks_per_transaction * (max_connections + max_prepared_transactions)

 I then changed my db to use these settings:

 mydb=# show max_locks_per_transaction ;
  max_locks_per_transaction
 ---------------------------
  20
 (1 row)

 mydb=# show max_connections ;
  max_connections
 -----------------
  2
 (1 row)

 mydb=# show max_prepared_transactions ;
  max_prepared_transactions
 ---------------------------
  0
 (1 row)

 so I expected to be able to acquire a maximum of 20 * (2 + 0) = 40 locks.
 
 On tables.

from what I see, the locks are of type 'AccessShareLock', which should be on
tables, if I understand correctly what is mentioned in the link:
http://www.postgresql.org/docs/9.2/static/explicit-locking.html

In that case, there should never be more than 40 locks in total in the
pg_locks table.

[ Else: how do I find out the number of locked objects kept in the 'shared
lock table', which follows the logic of max_locks_per_transaction *
(max_connections + max_prepared_transactions)? Is there a different query
for that? ]
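
For reference, here is one way I can think of to break this down (a sketch;
note the fastpath column, since as far as I understand fast-path locks are
tracked per backend and do not occupy slots in the shared lock table):

-- count current locks, separating fast-path locks from locks
-- held in the shared lock table
SELECT locktype, fastpath, count(*)
FROM pg_locks
GROUP BY locktype, fastpath
ORDER BY locktype, fastpath;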



I understand that a single query can hold more locks than
max_locks_per_transaction (because there is the multiplier 'max_connections');
that's why during my tests I'm using max_connections=2.




I paste below the result of the query, which might help to clarify the
situation:

 

# select * from pg_locks ;
  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid  |      mode       | granted | fastpath
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+------+-----------------+---------+----------
 relation   |   224236 |   229160 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   228957 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   227513 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   227298 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   227090 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   226013 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   226385 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   226296 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   229719 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   229646 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   229594 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   225815 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   225783 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   225756 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |   225730 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 relation   |   224236 |    11069 |      |       |            |               |         |       |          | 2/9100             | 6282 | AccessShareLock | t       | t
 virtualxid |          |          |      |       | 2/9100     |               |         |       |          | 2/9100             | 6282 | ExclusiveLock   | t       | t
 relation   |   224236 |   292464 |      |       |

Re: [GENERAL] dblink max per function

2015-07-06 Thread Peter Kroon
A restart of my system solved the matter.

2015-07-05 20:54 GMT+02:00 Peter Kroon plakr...@gmail.com:

 Hi,

 I've got a function with 5 dblink select statement all to my local server
 with the same connection string.
 When one more dblink select statement is added the query fails.
 Is there some kind of limit that I can configure? If so, which one and
 where?

 Thanks,
 Peter




Re: [GENERAL] Error prone compilation of stored procedure

2015-07-06 Thread Andreas Kretschmer
pinker pin...@onet.eu wrote:

 What's the reason behind very tolerant error checking during stored
 procedure compilation?

they are not compiled but interpreted at runtime.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°




[GENERAL] Error prone compilation of stored procedure

2015-07-06 Thread pinker
What's the reason behind the very tolerant error checking during stored
procedure compilation?
Why does PostgreSQL allow using a variable (j_var) that doesn't exist? It isn't
a column name and isn't declared anywhere. Like in the example below:
 
CREATE OR REPLACE FUNCTION test()
  RETURNS int AS
$BODY$
BEGIN

select 1 WHERE 1 < j_var;
RETURN 2;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
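
For illustration: the CREATE above succeeds, and the failure only surfaces
when the function is actually executed (a sketch; the exact error wording
may vary by version):

SELECT test();
-- ERROR:  column "j_var" does not exist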





Re: [GENERAL] Error prone compilation of stored procedure

2015-07-06 Thread Pavel Stehule
2015-07-06 12:08 GMT+02:00 pinker pin...@onet.eu:

 What's the reason behind very tolerant error checking during stored
 procedure compilation?
 Why PostgreSQL allows using variable (j_var) that doesn't exists? It isn't
 column name or isn't declared anywhere. Like in example below:

 CREATE OR REPLACE FUNCTION test()
   RETURNS int AS
 $BODY$
 BEGIN

 select 1 WHERE 1 < j_var;
 RETURN 2;
 END;
 $BODY$
   LANGUAGE plpgsql VOLATILE
   COST 100;


PL/pgSQL doesn't check identifiers inside embedded SQL before execution.
In this case j_var could theoretically be some SQL identifier - whether
that is possible or impossible is not known at function validation stage.

It has some plus and minus points. The plus - there is no strong
dependency between database objects and PL code. The minus - lots of bugs
are not detected at validation stage. But this issue can be solved by the
plpgsql_check extension: https://github.com/okbob/plpgsql_check/
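
For example, once the extension is installed, the missing identifier is
reported without executing the function (a sketch - the exact output format
differs between versions):

CREATE EXTENSION plpgsql_check;
SELECT * FROM plpgsql_check_function('test()');
-- reports an error like: column "j_var" does not exist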

Regards

Pavel









[GENERAL] next postgres version on Amazon RDS ?

2015-07-06 Thread Marc Mamin
Hello,
does anyone have some insight on when we can expect a newer 9.4.x version on RDS?
Or know a better forum to ask this in?

I should upgrade an application DB currently on 9.3 that might go on RDS,
but I'm a bit reluctant to go with only 9.4.1 ...

regards,
Marc Mamin



Re: [GENERAL] Error prone compilation of stored procedure

2015-07-06 Thread pinker
Andreas Kretschmer-2 wrote
 pinker <pinker@...> wrote: What's the reason behind very tolerant error
 checking during stored procedure compilation?
 they are not compiled but interpreted at runtime. Andreas

The documentation says: "during the compilation of a function"
(http://www.postgresql.org/docs/9.4/static/plpgsql-development-tips.html). I
know it's not a classic one, but still this word is used in this context.




Re: [GENERAL] Error prone compilation of stored procedure

2015-07-06 Thread pinker
Pavel Stehule wrote
 PL/pgSQL doesn't check identifiers inside embedded SQL before execution.
 In this case j_var could theoretically be some SQL identifier - whether
 that is possible or impossible is not known at function validation stage.

 It has some plus and minus points. The plus - there is no strong
 dependency between database objects and PL code. The minus - lots of bugs
 are not detected at validation stage. But this issue can be solved by the
 plpgsql_check extension: https://github.com/okbob/plpgsql_check/

Thank you for the link to the extension.
Another minus is that my colleagues who used to work on Oracle think that
PostgreSQL is at least one league below Oracle.





Re: [GENERAL] Error prone compilation of stored procedure

2015-07-06 Thread Andrew Sullivan
On Mon, Jul 06, 2015 at 04:52:52AM -0700, pinker wrote:
 Another minus is that my colleagues who used to work on Oracle think that
 PostgreSQL is at least one league below Oracle.

I find that people who are used to any one tool always point out how
some other tool is deficient, even if they're raving about other
advantages.  This is especially the case when there are trade-offs
involved in the way a tool does a thing.  My suggestion is to point
out that if one hates $feature, one can extend Postgres to make it go
away, a capability not available in Oracle at any price.  At least, I
found that to be useful when talking to Oracle partisans.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca




Re: [GENERAL] Error prone compilation of stored procedure

2015-07-06 Thread Rob Sargent
And are your colleagues offering to pay for an Oracle license?

 On Jul 6, 2015, at 5:52 AM, pinker pin...@onet.eu wrote:
 
 Pavel Stehule wrote
  PL/pgSQL doesn't check identifiers inside embedded SQL before execution.
  In this case j_var could theoretically be some SQL identifier - whether
  that is possible or impossible is not known at function validation stage.
  
  It has some plus and minus points. The plus - there is no strong
  dependency between database objects and PL code. The minus - lots of bugs
  are not detected at validation stage. But this issue can be solved by the
  plpgsql_check extension: https://github.com/okbob/plpgsql_check/
  
  Thank you for the link to the extension. 
  Another minus is that my colleagues who used to work on Oracle think that
  PostgreSQL is at least one league below Oracle.
 
 
 





Re: [GENERAL] unexpected data beyond EOF in block 260 of relation pg_tblspc

2015-07-06 Thread Adrian Klaver

On 07/05/2015 10:15 PM, Mitu Verma wrote:

Hi,
The alarm below was raised on a system where a Postgres database is used:
---
BGWPOL22-00:/var/opt/mediation/MMStorage6/Server8/CXC1734739_R8J/storage/logging/ACTIVE#
cat AA::80165730a!
1434566795-Server8eBGWPOL22-00!! iThere is a problem with the system
environment (memory, disk, etc).
Failed to store log information in the database in AUDIT
ERROR:  unexpected data beyond EOF in block 260 of relation
pg_tblspc/16388/PG_9.1_201105231/16384/16418
HINT:  This has been seen to occur with buggy kernels; consider updating
your system.
Failed to transfer a row to the database table auditTrailLogEntry.
---
Inserting into the table is also failing because of this.
Is this an issue with the Postgres DB or some problem with the kernel?
Please help by giving some pointers on the circumstances in which this issue
can occur.


It would be helpful to know more information:

1) OS and version?

2) Postgres version?

3) Kernel version?

4) Storage type? In particular what is the tablespace stored on?


Regards
Mitu



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL

2015-07-06 Thread Jan de Visser
On July 6, 2015 06:43:53 AM c.bu...@posteo.jp wrote:
 On 2015-07-05 15:13 Jan de Visser j...@de-visser.net wrote:
  You could set up a whole new server with a different $PGDATA on a
  different port.
 
 I (and the user) don't want to setup anything - that is the point.

Well, you don't have to set up anything. You do an initdb in a different
directory; that will write a .conf file there, which you then massage to
include a different port. You'll use the same binaries as the standard pgsql
install, but in a different environment.
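
In concrete terms, something like this (the directory and port number here
are just illustrative):

initdb -D /home/myapp/pgdata
# edit /home/myapp/pgdata/postgresql.conf and set, e.g., port = 5433
pg_ctl -D /home/myapp/pgdata -l /home/myapp/pgdata/server.log start
psql -p 5433 -d postgres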





Re: [GENERAL] Error prone compilation of stored procedure

2015-07-06 Thread pinker
lup wrote
 And are your colleagues offering to pay for an Oracle license?

I would prefer to prove to them that it's not necessary :)





Re: [GENERAL] Error prone compilation of stored procedure

2015-07-06 Thread Pavel Stehule
2015-07-06 13:52 GMT+02:00 pinker pin...@onet.eu:

 Pavel Stehule wrote
   PL/pgSQL doesn't check identifiers inside embedded SQL before execution.
   In this case j_var could theoretically be some SQL identifier - whether
   that is possible or impossible is not known at function validation stage.
  
   It has some plus and minus points. The plus - there is no strong
   dependency between database objects and PL code. The minus - lots of bugs
   are not detected at validation stage. But this issue can be solved by the
   plpgsql_check extension: https://github.com/okbob/plpgsql_check/

  Thank you for the link to the extension.
  Another minus is that my colleagues who used to work on Oracle think that
  PostgreSQL is at least one league below Oracle.


why a minus? - The dependency tracking in PL/SQL is hell - it strongly
increases the complexity of lots of tasks. PL/pgSQL is designed to be as
simple as possible - but it has the power of PL/SQL. If you need to check
your PL/pgSQL code, just install the plpgsql_check extension.

Regards

Pavel







[GENERAL] Polymorphic queries

2015-07-06 Thread Patric Bechtel

Hi,

maybe someone has an idea about this:

I have a table which has quite a few inheriting tables, which have inheriting
tables themselves, etc. (it's mirroring an OOP model).

As expected, most queries are polymorphic, so they go against the parent
tables.

I'm searching for an idea of how to query _once_ and, if the result hits one
of the inheriting tables, put their additional columns into a JSON or array
structure, to avoid having to query again to fetch the inheriting tables. I
know that this would be possible by querying the whole hierarchy via unions,
but it seems extremely clumsy (see the sketch below).
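
For reference, the union approach I mean looks roughly like this (table and
column names are made up; row_to_json has been available since 9.2):

SELECT tableoid::regclass AS concrete_table,
       id, common_col,
       row_to_json(t) AS full_row
FROM   child_a t
UNION ALL
SELECT tableoid::regclass, id, common_col, row_to_json(t)
FROM   child_b t;
-- ... one branch per table in the hierarchy, which gets unwieldy quickly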

Does anyone have a solution for this or can push me in the right direction?

-- 
tia,

Patric




Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread Robert DiFalco

 ​I am fairly certain this does not give you the correct results.
 Specifically, the minimum value for each cDate is going to be 1 since
 count(*) counts NULLs.  count(u) should probably work.
 ​

 Yes, you are right, I forgot to change COUNT(*) to COUNT(id); as you
mention, COUNT(u.*) will also work. I just couldn't get the idea of
generating a sequence from 0 to 6 to work correctly. The approach I'm using
seems to give the correct results (with COUNT(u.id)).


Re: [GENERAL] database-level lockdown

2015-07-06 Thread Filipe Pina
Yes, I've tried to come up with a guideline to enumerate the tables used in
each process, but it's not simple because it's a Python application
calling pgsql functions that use other functions, so it's tricky for a
developer re-using existing functions to enumerate the tables used by
those. Even if everything is well documented and can be re-used, it seems
like a nasty task...


For now, I'm locking everything to be able to close the gap, but I'm also
wondering if I could do it in a pgsql function as I mentioned in the
question:


FUNCTION A
-> FUNCTION B
     LOCK TABLE
-> FUNCTION C
     TABLE is not locked anymore because function B frees it as soon
     as it returns


Is there some way to have a function that locks some tables on the
outer transaction instead of its own subtransaction?


On Seg, Jul 6, 2015 at 3:08 , Filipe Pina filipe.p...@impactzero.pt 
wrote:



On Dom, Jul 5, 2015 at 2:50 , Greg Sabino Mullane g...@turnstep.com 
wrote:




Filipe Pina said:

 I really can't find any other solution for what I need (in short: make sure
 no transactions are left out due to serialization failures)


I think you may have been too quick to rule out advisory locks as a 
solution. 
Yes, you will need wrappers around all other calls, but 
extraordinary problems 
call for extraordinary solutions.


 I could place the locks from the adapter directly at the outer 
transaction 
 level but I have the feeling that performance would be worse...


Well, performance has really got to take a back seat, given your other
requirements. ;) Locking certainly *could* work - and is arguably the best
solution, as that's what locks are for. Just have your Very Important
Transaction retry, and upon reaching that critical number, exclusively lock
just the tables being used, then try again. If you don't know which tables
are being used, I suggest storing that somewhere your class can find it, or
moving away from such a generic class.

There are other solutions (e.g. forcing conflicting processes to quit
and sleep a second), but the locking one seems the easiest.


-- 
Greg Sabino Mullane g...@turnstep.com

End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201507050943
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8




Re: [GENERAL] String match function required like utl_match in oracle

2015-07-06 Thread Jimit Amin
Thanks, it is somewhat useful.


Jimit Amin

On Sat, Jul 4, 2015 at 3:51 PM, Charles Clavadetscher 
clavadetsc...@swisspug.org wrote:

 Or maybe even better:

 http://www.postgresql.org/docs/9.4/static/fuzzystrmatch.html

 Search for Levenshtein.

 Bye
 Charles

 On 7/4/2015 11:50, Jimit Amin wrote:

 Hello,

 I want to compare 2 strings and get a result like what percentage or how
 much of the strings match. Is there any option like utl_match in Oracle?

 Jimit Amin






Re: [GENERAL] PgBouncer error - psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket /var/run/postgresql/.s.PGSQL

2015-07-06 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Ali Aktar
Sent: Monday, July 06, 2015 11:04 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] PgBouncer error - psql: could not connect to server: No such 
file or directory Is the server running locally and accepting connections on 
Unix domain socket /var/run/postgresql/.s.PGSQL.6432?

Hi Guys;

Can I please get some assistance:

I have postgres server running on the localhost:
-bash-4.2$ telnet localhost 5432
Trying ::1...
Connected to localhost.
Escape character is '^]'.

-bash-4.2$ psql -U postgres
psql (9.4.4)
Type "help" for help.

postgres=# \list
                                 List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
(4 rows)

pgbouncer.ini:
[databases]
template1 = host=127.0.0.1 port=5432 dbname=template1
[pgbouncer]
listen_port = 6432
listen_addr = 127.0.0.1
auth_type = trust
auth_file = /etc/pgbouncer/userslist.txt
logfile = /var/log/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
admin_users = postgres

-bash-4.2$ telnet localhost 6432
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

2015-07-06 15:02:00.233 11993 DEBUG C-0x2549910:
(nodb)/(nouser)@127.0.0.1:44274 P: got connection:
127.0.0.1:44274 - 127.0.0.1:6432

But here is the weird thing:
When I try to run:
-bash-4.2$ psql -p 6432 pgbouncer
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket 
/var/run/postgresql/.s.PGSQL.6432?

Please tell me what I'm doing wrong?

And another question, if I want to specify remote postgres DB's in the .ini 
file. How do I do that?

Thanks
Ali.


You should specify pgbouncer with the "-d" option:

-bash-4.2$ psql -p 6432 -d pgbouncer

Regards,
Igor Neyman




Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread David G. Johnston
Please follow list conventions and either respond inline or bottom-post.

On Mon, Jul 6, 2015 at 3:30 PM, Robert DiFalco robert.difa...@gmail.com
wrote:

 Paul, I'm sure I'm missing something but it seems like your approach will
 not work. It's because the LEFT OUTER JOIN is on the numeric day of the
 week. So if you had this query going over weeks or months of data wouldn't
 you have the same issue with the days that had no new users not being
 factored into the AVG?  I ended up doing something like this, which seems
 to work pretty well.

 WITH usersByDay AS (
 SELECT cDate, COUNT(*) AS total
 FROM (
 SELECT generate_series(
 {CALENDAR_INTERVAL.START}::DATE,
 {CALENDAR_INTERVAL.END}::DATE,
 interval '1 day')::DATE AS cDate
 ) AS c
 LEFT OUTER JOIN users u ON u.created::DATE = c.cDate
 GROUP BY cDate),


​I am fairly certain this does not give you the correct results.
Specifically, the minimum value for each cDate is going to be 1 since
count(*) counts NULLs.  count(u) should probably work.

SELECT dt, count(uid), count(*)
FROM generate_series('2015-01-01'::date, '2015-01-05'::date, '1
day'::interval) gs (dt)
LEFT JOIN (VALUES ('2015-01-01'::date, 1),
('2015-01-01',2),('2015-01-02',3)) users (dt, uid)
USING (dt)
GROUP BY dt
​;​

​David J.
​


Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread David G. Johnston
On Mon, Jul 6, 2015 at 4:40 PM, Robert DiFalco robert.difa...@gmail.com
wrote:

 ​I am fairly certain this does not give you the correct results.
 Specifically, the minimum value for each cDate is going to be 1 since
 count(*) counts NULLs.  count(u) should probably work.
 ​

 Yes, you are right, I forgot to change COUNT(*) to COUNT(id); as you
 mention, COUNT(u.*) will also work. I just couldn't get the idea of
 generating a sequence from 0 to 6 to work correctly. The approach I'm using
 seems to give the correct results (with COUNT(u.id)).


Correct. generate_series(0,6) won't work since there is no context as to
whether it is supposed to cover a single week or multiple years or anything
in between.

Any non-null column can be supplied to the count() function: count ignores
nulls.  In this case you want to ignore the placeholder null that you are
creating during the left join.  My original suggestion avoided these extra
placeholder values and instead forces you to process the master date range
and the user-by-date pieces separately, then substitute 0 for any master
date where the corresponding user-by-date is missing.  If performance were
important it may be worth testing both versions; otherwise my guess is this
version is more readable (for you).

David J.


Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread Paul Jungwirth

 I'm not sure how to create a result where I get the average number of
 new users per day of the week. My issues are that days that did not
 have any new users will not be factored into the average

This is a pretty common problem with time-series queries when there is 
sparse data. My go-to solution is to use generate_series---in your case 
from 0 to 6---then do a left join from there to your actual data.


Paul







Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread Robert DiFalco
Thanks Paul, I guess I'm not sure how a generate_series between 0 and 6
would solve this problem. Wouldn't I have to generate a series based on the
date range (by day) and then group by DOW _after_ that? Can you give me an
example of how I'd do it with a series based on 0 to 6?

On Mon, Jul 6, 2015 at 10:58 AM, Paul Jungwirth p...@illuminatedcomputing.com
 wrote:

  I'm not sure how to create a result where I get the average number of
  new users per day of the week. My issues are that days that did not
  have any new users will not be factored into the average

 This is a pretty common problem with time-series queries when there is
 sparse data. My go-to solution is to use generate_series---in your case
 from 0 to 6---then do a left join from there to your actual data.

 Paul








Re: [GENERAL] to_tsvector() with hyphens

2015-07-06 Thread Tom Lane
Brian DeRocher br...@derocher.org writes:
 But why does to_tsquery() AND them?

 rasmas_hackathon= select * from to_tsquery( 'gn-foo | bandage' );
             to_tsquery
 ------------------------------------
  'gn-foo' & 'gn' & 'foo' | 'bandag'
 (1 row)

Because what you're looking for is gn-foo, not either gn alone or foo
alone.  Converting to OR would be the wrong thing.

 The rank is so bad.

 rasmas_hackathon= select ts_rank_cd( to_tsvector( 'gn series bandage' ),
 to_tsquery( 'gn-foo | bandage' ) );
  ts_rank_cd
 ------------
         0.1
 (1 row)

 Without the hyphen the rank is better, despite the process above.

 rasmas_hackathon= select ts_rank_cd( to_tsvector( 'gn series bandage' ),
 to_tsquery( 'gn | bandage' ) );
  ts_rank_cd
 ------------
         0.2
 (1 row)

Don't see the problem.  The first case doesn't match the query as well as
the second one does, so I'd fully expect a higher rank for the second.

regards, tom lane




[GENERAL] Average New Users Per DOW

2015-07-06 Thread Robert DiFalco
I'm not sure how to create a result where I get the average number of new
users per day of the week. My issues are that days that did not have any
new users will not be factored into the average, giving an overinflated
result.

This is what I started with:

WITH userdays AS
  (SELECT u.created::DATE AS created,
          to_char(u.created,'Dy') AS d,
          COUNT(*) AS total
   FROM users u
   GROUP BY 1,2),
userdays_avg AS
  (SELECT extract('dow'
                  FROM created) AS nDay,
          d AS "Day",
          AVG(total) AS "New Users"
   FROM userdays
   GROUP BY 1,2
   ORDER BY 1)
SELECT "Day", "New Users"
FROM userdays_avg
ORDER BY nDay;


But you can see it won't give correct results, since (for example) Mondays
with no new users will not be counted in the average as 0.

TIA

R.


[GENERAL] PgBouncer error - psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket /var/run/postgresql/.s.PGSQL.643

2015-07-06 Thread Ali Aktar
Hi Guys;

Can I please get some assistance:

I have postgres server running on the localhost:
-bash-4.2$ telnet localhost 5432
Trying ::1...
Connected to localhost.
Escape character is '^]'.

-bash-4.2$ psql -U postgres
psql (9.4.4)
Type "help" for help.

postgres=# \list
                                 List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
(4 rows)

pgbouncer.ini:
[databases]
template1 = host=127.0.0.1 port=5432 dbname=template1
[pgbouncer]
listen_port = 6432
listen_addr = 127.0.0.1
auth_type = trust
auth_file = /etc/pgbouncer/userslist.txt
logfile = /var/log/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
admin_users = postgres

-bash-4.2$ telnet localhost 6432
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

2015-07-06 15:02:00.233 11993 DEBUG C-0x2549910: (nodb)/(nouser)@
127.0.0.1:44274 P: got connection: 127.0.0.1:44274 - 127.0.0.1:6432

But here is the weird thing:
When I try to run:
-bash-4.2$ psql -p 6432 pgbouncer
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket /var/run/postgresql/.s.PGSQL.6432?

Please tell me what I'm doing wrong?

And another question, if I want to specify remote postgres DB's in the .ini
file. How do I do that?

Thanks
Ali.


Re: [GENERAL] PgBouncer error - psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket /var/run/postgresql/.s.PGSQL

2015-07-06 Thread David G. Johnston
Cursory observations below...

On Mon, Jul 6, 2015 at 11:04 AM, Ali Aktar aktar...@gmail.com wrote:

 Hi Guys;

 Can I please get some assistance:

 I have postgres server running on the localhost:
 -bash-4.2$ telnet localhost 5432
 Trying ::1...
 Connected to localhost.
 Escape character is '^]'.

 -bash-4.2$ psql -U postgres
 psql (9.4.4)
 Type help for help.


​You are connecting to a unix socket and not a TCP/IP port​...


 postgres=# \list
   List of databases
Name|  Owner   | Encoding |   Collate   |Ctype|   Access
 privileges

 ---+--+--+-+-+---
  postgres  | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
  template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
  +
|  |  | | |
 postgres=CTc/postgres
  template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres
  +
|  |  | | |
 postgres=CTc/postgres
  test  | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
 (4 rows)

 pgbouncer.ini:
 [databases]
 template1 = host=127.0.0.1 port=5432 dbname=template1


haven't proved PostgreSQL is listening on 127.0.0.1:5432 yet; only proved
it is listening on the local unix socket...

[pgbouncer]
 listen_port = 6432
 listen_addr = 127.0.0.1
 auth_type = trust
 auth_file = /etc/pgbouncer/userslist.txt
 logfile = /var/log/pgbouncer.log
 pidfile = /var/run/pgbouncer/pgbouncer.pid
 admin_users = postgres

 -bash-4.2$ telnet localhost 6432
 Trying ::1...
 telnet: connect to address ::1: Connection refused
 Trying 127.0.0.1...
 Connected to localhost.
 Escape character is '^]'.


​Listening on a tcp/ip port​...(ip4 only)


 2015-07-06 15:02:00.233 11993 DEBUG C-0x2549910: (nodb)/(nouser)@
 127.0.0.1:44274 P: got connection: 127.0.0.1:44274 - 127.0.0.1:6432

 But here is the weird thing:
 When I try to run:
 -bash-4.2$ psql -p 6432 pgbouncer


...​but attempting to connect to a unix socket​

psql: could not connect to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket /var/run/postgresql/.s.PGSQL.6432?

 Please tell me what Im doing wrong?


​Try specifying --host=localhost or --host=127.0.0.1

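That is, something like (illustrative):

psql -h 127.0.0.1 -p 6432 -d pgbouncer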

 And another question, if I want to specify remote postgres DB's in the
 .ini file. How do I do that?


​[databases]
name = host=remote_ip_address​ etc...

David J.


[GENERAL] to_tsvector() with hyphens

2015-07-06 Thread Brian DeRocher
Hey everyone,

I think it's great that the full text search parser breaks hyphenated words 
into multiple parts.  I think this really could help, but something is not 
right.


rasmas_hackathon= select * from ts_debug( 'gn-foo' );
      alias      |           description           |  token  |  dictionaries  |  dictionary  | lexemes
-----------------+---------------------------------+---------+----------------+--------------+----------
 asciihword      | Hyphenated word, all ASCII      | gn-foo  | {english_stem} | english_stem | {gn-foo}
 hword_asciipart | Hyphenated word part, all ASCII | gn      | {english_stem} | english_stem | {gn}
 blank           | Space symbols                   | -       | {}             |              |
 hword_asciipart | Hyphenated word part, all ASCII | foo     | {english_stem} | english_stem | {foo}
 blank           | Space symbols                   |         | {}             |              |
(6 rows)


But why does to_tsquery() AND them?

rasmas_hackathon= select * from to_tsquery( 'gn-foo | bandage' );
             to_tsquery
------------------------------------
 'gn-foo' & 'gn' & 'foo' | 'bandag'
(1 row)


Perhaps my vector is like this:

rasmas_hackathon= select to_tsvector( 'gn series bandage' );
         to_tsvector
----------------------------
 'bandag':3 'gn':1 'seri':2
(1 row)


The rank is so bad.

rasmas_hackathon= select ts_rank_cd( to_tsvector( 'gn series bandage' ),
to_tsquery( 'gn-foo | bandage' ) );
 ts_rank_cd
------------
        0.1
(1 row)

Without the hyphen the rank is better, despite the process above.

rasmas_hackathon= select ts_rank_cd( to_tsvector( 'gn series bandage' ),
to_tsquery( 'gn | bandage' ) );
 ts_rank_cd
------------
        0.2
(1 row)


So wouldn't this be a better query for hyphenated words?

 'gn-foo' | 'gn' | 'foo'


Aside: best I can tell, the parser is giving instructions to pushval_morph()
to treat hyphenated words as variants of the same word.


thanks,
Brian


-- 
http://brian.derocher.org
http://mappingdc.org
http://about.me/brian.derocher




Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL

2015-07-06 Thread Adrian Klaver

On 07/05/2015 09:43 PM, c.bu...@posteo.jp wrote:

On 2015-07-05 15:13 Jan de Visser j...@de-visser.net wrote:

You could set up a whole new server with a different $PGDATA on a
different port.


I (and the user) don't want to setup anything - that is the point.


Then what you want is an embedded database, in other words a program
that you can include inside your application. As others have suggested,
SQLite is just such a program, and what is more, it has been included in the
Python standard library since 2.5. Postgres is not an embedded database
and therefore it will by nature exist outside the app. This means either
you have to create code to anticipate all your users' setups and
configure Postgres accordingly, or you will need to include the user in
the setup process.





What I'm wondering though is what made you decide to use pgsql for
your project? It seems to me that something like sqlite would be
better suited for your requirements.


When I started I wasn't aware of the difference between PostgreSQL and
sqlite. Maybe this is a solution.

But isn't there a way to use PostgreSQL without all that setup and
configuration?





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread David G. Johnston
On Mon, Jul 6, 2015 at 2:04 PM, Robert DiFalco robert.difa...@gmail.com
wrote:

 Wouldn't I have to generate a series based on the date range (by day) and
 then group by DOW _after_ that?


​You are correct.​

WITH userdays (dow, user_count) AS ( existing_query, more or less )
​, day_counts (dow, count_of_days) AS ( SELECT
generate_series(user_earliest_created_date, user_most_recent_created_date)
)​
SELECT dow, coalesce(user_count, 0) / count_of_days
FROM day_counts
LEFT JOIN userdays USING (dow)
​;​

David J.
​


Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread Paul Jungwirth

Thanks Paul, I guess I'm not sure how a generate_series between 0 and 6
would solve this problem. Wouldn't I have to generate a series based on
the date range (by day) and then group by DOW _after_ that? Can you give
me an example of how I'd do it with a series based on 0 to 6?


Looks like David Johnston beat me to it! :-) But this is what I had in mind:

SELECT  s.d AS dow,
COUNT(u.id) c
FROMgenerate_series(0, 6) s(d)
LEFT OUTER JOIN users u
ON  EXTRACT(dow FROM created) = s.d
GROUP BY dow
ORDER BY dow
;

You can also get human-readable DOW names by creating a 7-row CTE table 
and joining to it based on the numeric dow.


Paul





Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread Robert DiFalco
Paul, I'm sure I'm missing something but it seems like your approach will
not work. It's because the LEFT OUTER JOIN is on the numeric day of the
week. So if you had this query going over weeks or months of data wouldn't
you have the same issue with the days that had no new users not being
factored into the AVG?  I ended up doing something like this, which seems
to work pretty well.

WITH usersByDay AS (
    SELECT cDate, COUNT(*) AS total
    FROM (
        SELECT generate_series(
            {CALENDAR_INTERVAL.START}::DATE,
            {CALENDAR_INTERVAL.END}::DATE,
            interval '1 day')::DATE AS cDate
    ) AS c
    LEFT OUTER JOIN users u ON u.created::DATE = c.cDate
    GROUP BY cDate),
avgUsersByDOW AS (
    SELECT extract('dow' FROM cDate) AS nDay,
           to_char(cDate,'Dy') AS "Day",
           ROUND(AVG(total), 2) AS "New Users"
    FROM usersByDay
    GROUP BY 1, 2
    ORDER BY 1)
SELECT "Day", "New Users" FROM avgUsersByDOW ORDER BY nDay




On Mon, Jul 6, 2015 at 11:30 AM, Paul Jungwirth p...@illuminatedcomputing.com
 wrote:

 Thanks Paul, I guess I'm not sure how a generate_series between 0 to 6
 would solve this problem. Wouldn't I have to generate a series based on
 the date range (by day) and then group by DOW _after_ that? Can you give
 me an example of how I'd do it with a series based on 0 to 6?


 Looks like David Johnston beat me to it! :-) But this is what I had in
 mind:

 SELECT  s.d AS dow,
 COUNT(u.id) c
 FROMgenerate_series(0, 6) s(d)
 LEFT OUTER JOIN users u
 ON  EXTRACT(dow FROM created) = s.d
 GROUP BY dow
 ORDER BY dow
 ;

 You can also get human-readable DOW names by creating a 7-row CTE table
 and joining to it based on the numeric dow.

 Paul




Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread David G. Johnston
On Mon, Jul 6, 2015 at 6:16 PM, Michael Nolan htf...@gmail.com wrote:

  But you can see it wont give correct results since (for example) Monday's
  with no new users will not be counted in the average as 0.

 One way to handle this is to union your query with one that has a
 generate_series (0,6) for the DOW column and nulls for the other
 columns, then treat both that and your original query as a subquery
 and do your averages, since nulls are not included in either count()
 or average() aggregates:

 select dow, count(*), avg(some_column) from (
 select extract ('dow' from some_date) as dow, some_number from some_table
 union select generate_series(0,6) as dow, null as some_number) as x
 group by 1 order by 1


​I'm not seeing how this is at all useful.

As you said, the average function ignores the null introduced by the union
so the final answer with and without the union is the same.

No matter how you work a generate_series(0,6) based query, it will never
be able to give a correct answer except accidentally.  Each actual missing
date contributes a ZERO to the numerator and a ONE to the denominator in
the final division that constitutes the mean-average.  You must have those
dates.

In a series with four Mondays ( 4, 0, 0, 8 ) the average desired is 3, not
6 (or 4).  There is no way to make the denominator (number of Mondays) 4
instead of 3 by using generate_series(0,6).

David J.


[GENERAL] Missing space in message

2015-07-06 Thread Daniele Varrazzo
Patch attached.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 595a609..c8c4eed 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -2461,7 +2461,7 @@ static struct config_int ConfigureNamesInt[] =
 
 	{
 		{"wal_retrieve_retry_interval", PGC_SIGHUP, REPLICATION_STANDBY,
-			gettext_noop("Sets the time to wait before retrying to retrieve WAL"
+			gettext_noop("Sets the time to wait before retrying to retrieve WAL "
 						 "after a failed attempt."),
 			NULL,
 			GUC_UNIT_MS



Re: [GENERAL] Missing space in message

2015-07-06 Thread David Rowley
On 7 July 2015 at 10:52, Daniele Varrazzo daniele.varra...@gmail.com
wrote:

 Patch attached.



Thanks for the patch.

Would you be able to post it to pgsql-hack...@postgresql.org instead?

Regards

David Rowley

--
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [GENERAL] Missing space in message

2015-07-06 Thread Daniele Varrazzo
On Tue, Jul 7, 2015 at 12:08 AM, David Rowley
david.row...@2ndquadrant.com wrote:
 On 7 July 2015 at 10:52, Daniele Varrazzo daniele.varra...@gmail.com
 wrote:

 Patch attached.



 Thanks for the patch.

 Would you be able to post it to pgsql-hack...@postgresql.org instead?

Oops, sorry, got this wrong; for the other errors I've found I've
already sent the patches to -hackers.

-- Daniele




Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread Michael Nolan
On 7/6/15, Robert DiFalco robert.difa...@gmail.com wrote:
 I'm not sure how to create a result where I get the average number of new
 users per day of the week. My issues are that days that did not have any
 new users will not be factored into the average, giving an overinflated
 result.

 This is what I started with:

  WITH userdays AS
    (SELECT u.created::DATE AS created,
            to_char(u.created,'Dy') AS d,
            COUNT(*) AS total
     FROM users u
     GROUP BY 1,2),
  userdays_avg AS
    (SELECT extract('dow'
                    FROM created) AS nDay,
            d AS "Day",
            AVG(total) AS "New Users"
     FROM userdays
     GROUP BY 1,2
     ORDER BY 1)
  SELECT "Day", "New Users"
  FROM userdays_avg
  ORDER BY nDay;


 But you can see it won't give correct results, since (for example) Mondays
 with no new users will not be counted in the average as 0.

One way to handle this is to union your query with one that has a
generate_series (0,6) for the DOW column and nulls for the other
columns, then treat both that and your original query as a subquery
and do your averages, since nulls are not included in either count()
or average() aggregates:

select dow, count(*), avg(some_column) from (
select extract ('dow' from some_date) as dow, some_number from some_table
union select generate_series(0,6) as dow, null as some_number) as x
group by 1 order by 1

--
Mike Nolan
no...@tssi.com




Re: [GENERAL] database-level lockdown

2015-07-06 Thread Adrian Klaver

On 07/06/2015 07:10 AM, Filipe Pina wrote:

It's not necessary to commit at all costs; it can fail, just not due to
serialization.

And the transaction can be something as simple as updating a field or
inserting a record (with foreign keys, which are one of the serialization
checks).


Not following, why throw serialization at a FK?



On Sáb, Jul 4, 2015 at 7:23 , Adrian Klaver adrian.kla...@aklaver.com
wrote:

On 07/04/2015 10:49 AM, Filipe Pina wrote:

Thanks for the suggestion. I read that some people do use that
strategy for maintenance sometimes, but it's not feasible in this
scenario. I would have to disallow new connections AND kill all
existing connections (as there would be an existing connection
pool), but this won't have the same impact as using LOCKs.
Terminating all sessions will break every other transaction
(except for the one doing it). Locking the database will put all the
others on hold. As we're talking about quick/instant operations, on
hold will have an impact on performance but won't cause anything to
abort. I really can't find any other solution for what I need (in
short: make sure no transactions are left out due to serialization
failures)

Which would seem to indicate you have painted yourself into a corner.
The idea of locking an entire database to get one transaction to
commit seems a little extreme to me. What is this transaction trying
to do and why is it necessary that it commit at all costs?

On 03/07/2015, at 19:00, Melvin Davidson melvin6...@gmail.com
mailto:melvin6...@gmail.com wrote:

--
Adrian Klaver adrian.kla...@aklaver.com



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Download PostgreSQL 9.5 Alpha

2015-07-06 Thread Edson F. Lidorio



On 04-07-2015 02:20, Charles Clavadetscher wrote:

Hello

I also could not find the download on EDB. For Ubuntu, 9.5 alpha
is available, but I was not able to install it using apt-get because it
cannot resolve some dependencies. Finally I downloaded and compiled
the source code. This worked well, but I had to do some additional
steps to get the cluster up and running. I wrote some notes at


http://www.schmiedewerkstatt.ch/wiki/index.php/PostgreSQL_Compiling_From_Source 



In general the problems were:
- Fully qualify the executables if you have other instances running on
your system (in my case 9.4.4). This is to make sure that you are using
the newer versions.
- Set LD_LIBRARY_PATH to the lib dir in your installation directory.
The same as before, but for libraries.


You may prefer to set PATH to the 9.5 bin dir in your current shell
instead of qualifying the executables, as sketched below.
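
For example (a sketch; the install prefix depends on what you passed to
configure):

export PATH=/usr/local/pgsql-9.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/pgsql-9.5/lib:$LD_LIBRARY_PATH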


Hope this helps.

BTW, I am not sure about that, but I guess that if I hadn't had 9.4
already on the system, apt-get would have worked. The error message
suggested that it did not want to replace some existing files (e.g.
libpq if I remember well). I assumed that this was intended to avoid
damaging a working version.


Bye
Charles

On 7/4/2015 02:24, Joshua D. Drake wrote:


On 07/03/2015 04:32 PM, Edson F. Lidorio wrote:


Hello,

PostgreSQL 9.5 Alpha not appear on the downloads list in [1]
Where do I download for Windows?

[1]
http://www.enterprisedb.com/products-services-training/pgdownload#windows 





For those in the community who may not know, EnterpriseDB hosts the
Windows versions of PostgreSQL.

Edson,

It does not appear that they have a Alpha download available yet.

Sincerely,

JD



--
Edson










I found it now:

http://www.enterprisedb.com/products-services-training/pgdevdownload



Re: [GENERAL] database-level lockdown

2015-07-06 Thread Adrian Klaver

On 07/06/2015 07:15 AM, Filipe Pina wrote:

Yes, I've tried to come up with guideline to enumerate tables used in
each process, but it's not simple because it's python application
calling pgsql functions that use other functions, so it's tricky for a
developer re-using existing functions to enumerate the tables used for
those. Even if everything is well documented and can be re-used seems
like a nasty task...


Still not sure what it is you are trying to accomplish.

Is it really necessary that every transaction be serialized?

Or to put it another way, why are you running in serializable by default?

Or yet another way, what is the problem you are trying to solve with 
serialized transactions?






For now, I'm locking everything to be able to close the gap, but I'm also
wondering if I could do it in a pgsql function as I mentioned in the
question:

FUNCTION A
-> FUNCTION B
     LOCK TABLE
-> FUNCTION C
     TABLE is not locked anymore because function B frees it as soon as
     it returns

Is there some way to have a function that locks some tables on the
outer transaction instead of its own subtransaction?







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread Michael Nolan
On Mon, Jul 6, 2015 at 5:50 PM, David G. Johnston 
david.g.johns...@gmail.com wrote:

 On Mon, Jul 6, 2015 at 6:16 PM, Michael Nolan htf...@gmail.com wrote:

  But you can see it wont give correct results since (for example)
 Monday's
  with no new users will not be counted in the average as 0.

 One way to handle this is to union your query with one that has a
 generate_series (0,6) for the DOW column and nulls for the other
 columns, then treat both that and your original query as a subquery
 and do your averages, since nulls are not included in either count()
 or average() aggregates:

 select dow, count(*), avg(some_column) from (
 select extract ('dow' from some_date) as dow, some_number from some_table
 union select generate_series(0,6) as dow, null as some_number) as x
 group by 1 order by 1


 ​I'm not seeing how this is at all useful.

 As you said, the average function ignores the null introduced by the union
 so the final answer with and without the union is the same.

 No matter how you work a generate_series(0,6) based query it will never
 be able to give a correct answer expect accidentally.  Each actual missing
 date contributes a ZERO to the numerator and a ONE to the denominator in
 the final division that constitutes the mean-average.  You must have those
 dates.

 In a series with four Mondays ( 4, 0, 0, 8 ) the average desired is 3, not
 6 (or 4).  There is no way to make the denominator (number of Mondays) 4
 instead of 3 by using generate_series(0,6).

 David J.





Ah, you're right.  The problem is that avg() is going to treat missing data
as missing (of course.)  It will either be necessary to add in the missing
days as a zero value (but ONLY the missing days, requiring some kind of
'not exists' select, I suppose) or to 'roll your own' average function by
adding in the missing days as I did with a union in my earlier post.

The real problem is that the DOW is not the field where the missing data is;
it is in the underlying date field.

I created a test dataset.  It has 1 day missing in a two-week period from
June 1st through June 14th (Sunday, June 7th).  Here's what the OP's SQL
generates:

Day   New Users
---   ---------
Sun      2.0000
Mon      4.5000
Tue      2.0000
Wed      4.5000
Thu      1.0000
Fri      3.0000
Sat      3.0000

Here's the SQL to generate the missing day and do the average function by
hand:

select "Day", "New Users" from (
select dow, "Day", sum(total) / count(distinct created) as "New Users" from
(select extract(dow from created) as dow,
to_char(created,'Dy') as "Day", created, created2, total from

(select created, created as created2, count(*) as total from users
group by 1, 2
union
(select generate_series('2015-06-01 00:00'::timestamp,
'2015-06-14'::timestamp,'1 day')::date, null, 0) ) as x) as y
group by 1, 2) as z
order by dow


Day   New Users
---   ---------
Sun      1.0000
Mon      4.5000
Tue      2.0000
Wed      4.5000
Thu      1.0000
Fri      3.0000
Sat      3.0000

--
Mike Nolan
no...@tssi.com


Re: [GENERAL] Average New Users Per DOW

2015-07-06 Thread Michael Nolan
Here's a minor refinement that doesn't require knowing the range of dates
in the users table:

(select created, created as created2, count(*) as total from users
group by 1, 2
union
(select generate_series(
(select min(created)::timestamp from users),
(select max(created)::timestamp from users),
'1 day')::date, null, 0) ) as x) as y
group by 1, 2) as z
order by dow

Day   New Users
---   ---------
Sun      1.0000
Mon      4.5000
Tue      2.0000
Wed      4.5000
Thu      1.0000
Fri      3.0000
Sat      3.0000

--
Mike Nolan


Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL

2015-07-06 Thread c.buhtz
On 2015-07-05 22:16 John R Pierce pie...@hogranch.com wrote:
 at a bare minimum, a database administrator needs to create database 
 roles (users) and databases for an app like yours.

The admin doesn't need to create the db. It is done by the application
(sqlalchemy-utils on Python 3) itself.

But I see. I will go back to sqlite3.




Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL

2015-07-06 Thread John R Pierce

On 7/6/2015 9:55 PM, c.bu...@posteo.jp wrote:

On 2015-07-05 22:16 John R Pierce pie...@hogranch.com wrote:

at a bare minimum, a database administrator needs to create database
roles (users) and databases for an app like yours.

The admin doesn't need to create the db. It is done by the application
(sqlalchemy-utils on Python 3) itself.


An application should not have the privileges to do that.  You don't
run your apps as 'root', do you?  So why would you run them as a database
administrator?




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL

2015-07-06 Thread Mark Morgan Lloyd

Jan de Visser wrote:

On July 6, 2015 06:43:53 AM c.bu...@posteo.jp wrote:

On 2015-07-05 15:13 Jan de Visser j...@de-visser.net wrote:

You could set up a whole new server with a different $PGDATA on a
different port.

I (and the user) don't want to setup anything - that is the point.


Well, you don't have to setup anything. You do an initdb in a different 
directory, that will write a .conf file there, which you then massage to 
include a different port. You'll use the same binaries as the standard pgsql 
install, but in a different environment.


I'm not sure that helps, since I think part of the question is what the 
true Debian way is to massage the configuration files to include 
appropriate entries.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] database-level lockdown

2015-07-06 Thread Filipe Pina
It's not necessary to commit at all costs; it can fail, just not due to
serialization.


And the transaction can be something as simple as updating a field or
inserting a record (with foreign keys, which are one of the serialization
checks).


On Sáb, Jul 4, 2015 at 7:23 , Adrian Klaver 
adrian.kla...@aklaver.com wrote:

On 07/04/2015 10:49 AM, Filipe Pina wrote:

 Thanks for the suggestion. I read that some people do use that
 strategy for maintenance sometimes, but it's not feasible in this
 scenario. I would have to disallow new connections AND kill all
 existing connections (as there would be an existing connection
 pool), but this won't have the same impact as using LOCKs.
 Terminating all sessions will break every other transaction
 (except for the one doing it). Locking the database will put all the
 others on hold. As we're talking about quick/instant operations, on
 hold will have an impact on performance but won't cause anything to
 abort. I really can't find any other solution for what I need (in
 short: make sure no transactions are left out due to serialization
 failures)

Which would seem to indicate you have painted yourself into a corner.
The idea of locking an entire database to get one transaction to
commit seems a little extreme to me.

What is this transaction trying to do and why is it necessary that it
commit at all costs?





On 03/07/2015, at 19:00, Melvin Davidson melvin6...@gmail.com
mailto:melvin6...@gmail.com wrote:




--
Adrian Klaver
adrian.kla...@aklaver.com