Re: [GENERAL] Bad query? Or planner?

2016-12-04 Thread Johann Spies
On 28 November 2016 at 21:11, Devin Smith  wrote:

> Hi,
>
>
> I recently wrote a query that I thought was easy to reason about, and I
> assumed the query planner would execute it efficiently.
>
> SELECT * FROM xtag_stack_feed
> JOIN (
>   SELECT DISTINCT ON (do_post_xtag.xtag_ci) *
>   FROM do_post_xtag
>   JOIN do_post ON do_post.rc_id=do_post_xtag.post_rc_id
>   ORDER BY do_post_xtag.xtag_ci ASC, do_post_xtag.post_rc_id DESC)
> last_post
> ON last_post.xtag_ci=xtag_stack_feed.xtag_ci
> ORDER BY decayed_to_base DESC
> LIMIT 1;
>
> Unfortunately, the query as written is not being executed efficiently. I
> tried to rewrite it in a couple different ways without success, and then
> learned about lateral joins. Rewritten as follows, it executes efficiently.
>
> SELECT * FROM xtag_stack_feed
> JOIN LATERAL (
>   SELECT DISTINCT ON (do_post_xtag.xtag_ci) *
>   FROM do_post_xtag
>   JOIN do_post ON do_post.rc_id=do_post_xtag.post_rc_id
>   WHERE do_post_xtag.xtag_ci=xtag_stack_feed.xtag_ci
>   ORDER BY do_post_xtag.xtag_ci ASC, do_post_xtag.post_rc_id DESC)
> last_post
> ON true
> ORDER BY decayed_to_base DESC
> LIMIT 1;
>
> From my naive perspective, it seems like the second query is semantically
> equivalent to the first; it just has the join condition moved into the
> subquery as a WHERE filter.
>
>
>
I do not see a "where"  condition in your first query.

Regards
Johann


-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Johann Spies
Web2py is another (and maybe simpler) alternative to Django.

Regards
Johann


-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Rich Shepard

On Sun, 4 Dec 2016, John McKown wrote:


I'm not really sure what ".. make it an application that my clients can
use. ..." really means.


John,

  It has to be simple and useful to naive users.


I guess it means that you have some code for an application (which uses
PostgreSQL as its data repository), but it is difficult for many of your
users to use easily. I also don't know how much effort you want to put
into this. Would using C++ be acceptable? If so, then perhaps you should
look at Qt from Trolltech. This started out as a cross-platform (UNIX,
Windows, Mac) windowing system which has really grown. https://www.qt.io/
is a nice site where you can get started. But you would need a commercial
license if your software is not licensed as "open source".


  I'm familiar with Qt.


A possible alternative to Qt is GTK+ (https://www.gtk.org/). It is
both GPL & LGPL licensed, so you can freely use it in commercial software.


  I migrated from C to Python a number of years ago so I've looked at
Python-based web frameworks (Flask, Django). I have developed a number of
applications in Python2 using wxPython; writing the UI takes most of the
time. But I'll look again at the Python3 version of wxPython, called
Phoenix, and at Peewee (a lighter ORM than SQLAlchemy).


Sorry if I went off into left field on this.


  That's fine. All ideas are always welcome.

Regards,

Rich




Re: [GENERAL] Where would I be able to get instructions regarding postgresql installation on Windows 10?

2016-12-04 Thread Melvin Davidson
Just click on the version of Win x86-32 PostgreSQL you want from here and
use the downloaded installer.

http://www.enterprisedb.com/products-services-training/pgdownload#windows

On Sun, Dec 4, 2016 at 7:57 PM, Varuna Seneviratna <
varunasenevira...@gmail.com> wrote:

> Hello,
> I looked through the documentation but was not able to find any
> information. If possible, please guide me
>
> Varuna
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread John McKown
On Sun, Dec 4, 2016 at 9:41 AM, Rich Shepard 
wrote:

>I have postgres tables (with data) for a specific application but have
> not found the time to learn django to make it an application that my
> clients
> can use. It occurs to me that the most parsimonious approach is to use
> LibreOffice's Base for the UI with postgres as the dbms-engine. While I
> work
> in only linux (and unixODBC is installed), my clients all use various
> flavors of Windows, but the three critical software applications (LO,
> PostgreSQL, and ODBC) are available for Microsoft, too.
>
>I've scanned the Base portion of the LO User Guide and it looks to be a
> practical solution to quickly providing clients with working database
> applications.
>
>As this is completely new territory for me, I'd like to learn from
> those who've done this before. As examples, What can/should I do as stored
> procedures using PL/pgSQL? Can queries be created and tested using psql
> before being imported into Base?
>
>All tips and gotchas for a new user are certainly welcome.
>
> TIA,
>
> Rich
>

​I'm not really sure what ".. make it an application that my clients can
use. ..." really means. I guess it means that you have some code for an
application (which uses PostgreSQL as it's data repository), but it is
difficult for many of your users to use easily. I also don't know how much
effort you want to put into this. Would using C++ be acceptable? If so,
then perhaps you should look at QT from TrollTech. This started out as a
cross platform (UNIX, Windows, MAC) windowing system which has really
grown. https://www.qt.io/ is a nice site where you can get started. But you
would need a commercial license if your software is not licensed as "open
source".

A possible alternative to Qt is GTK+ (https://www.gtk.org/). It is both
GPL & LGPL licensed, so you can freely use it in commercial software.

Sorry if I went off into left field on this.


-- 
Heisenberg may have been here.

Unicode: http://xkcd.com/1726/

Maranatha! <><
John McKown


[GENERAL] Where would I be able to get instructions regarding postgresql installation on Windows 10?

2016-12-04 Thread Varuna Seneviratna
Hello,
I looked through the documentation but was not able to find any
information. If possible, please guide me

Varuna


Re: [GENERAL] Avoid sorting when doing an array_agg

2016-12-04 Thread Peter Geoghegan
On Sun, Dec 4, 2016 at 4:09 PM, Tom Lane  wrote:
> Of course, we would also have to teach cost_sort or someplace near there
> that non-C sorting is much more expensive than C-collation sorting.  Not
> sure about exactly how to set that up without it being a kluge.

We've talked about that before, in the context of parallel query. At
the 2014 developer meeting, IIRC.

> A related problem is that if you have "GROUP BY x,y" and no particular
> ORDER BY requirement, you could sort by either x,y or y,x before the
> GroupAgg.  This would matter if, say, there was an index matching one
> but not the other.  Right now we're very stupid and only consider x,y,
> but if there were room to consider more than one set of target pathkeys
> it would be fairly simple to make that better.

That sounds valuable, especially because it seems natural to make the
leading group-on var the least selective within a GROUP BY; having a
matching index that you can thereby use might be less common than that
in practice, unless and until the partial sort patch is committed.

I will tend to write "GROUP BY country, province, city" -- never
"GROUP BY city, province, country". I speak a language that is written
left-to-right, but it would be the opposite way around in both
directions if I spoke a language written right-to-left, I bet. Same
difference. This might be a very prevalent habit. In general, a
tuplesort will be faster with a high cardinality leading attribute, so
this habit works against tuplesort. (Assuming a leading attribute of
pass-by-value type, or with abbreviated key support.)
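To make that concrete with a hypothetical places table: both statements
below return the same groups, but the second one puts the high-cardinality
column first, so most comparisons in the tuplesort are resolved on the
leading attribute:

    SELECT country, province, city, count(*)
    FROM places GROUP BY country, province, city;

    SELECT country, province, city, count(*)
    FROM places GROUP BY city, province, country;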

-- 
Peter Geoghegan




Re: [GENERAL] Avoid sorting when doing an array_agg

2016-12-04 Thread Tom Lane
Peter Geoghegan  writes:
> I wonder what it would take to teach the optimizer to consider the
> possibility of a "collation strength reduction". In other words, for
> aggregates that perform a sort (or for aggregates that rely on the
> presence of a sort node without there being some other dependency on
> the sort node), it should be possible for the optimizer to determine
> that it would be just fine to use the C locale, since the user isn't
> entitled to assume anything about the exact sort order. There are of
> course cases where this can make a huge difference.

IMO the way to handle this would be to consider both paths that use the
straight sort order and paths that use COLLATE "C" ordering.  I think
the key structural limitation that makes it not straightforward is that
the query_planner() API supports only one target ordering
(root->query_pathkeys).  I've had a bee in my bonnet for awhile about
replacing that with a list of potentially-useful target orderings, but
haven't got round to making it happen.

Of course, we would also have to teach cost_sort or someplace near there
that non-C sorting is much more expensive than C-collation sorting.  Not
sure about exactly how to set that up without it being a kluge.

A related problem is that if you have "GROUP BY x,y" and no particular
ORDER BY requirement, you could sort by either x,y or y,x before the
GroupAgg.  This would matter if, say, there was an index matching one
but not the other.  Right now we're very stupid and only consider x,y,
but if there were room to consider more than one set of target pathkeys
it would be fairly simple to make that better.
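For example (hypothetical table and index), the planner today only
considers the pathkeys in the written order, so the first statement sorts
even though the index could have fed the second one a pre-sorted stream:

    CREATE INDEX ON t (y, x);

    SELECT x, y, count(*) FROM t GROUP BY x, y;  -- sorts by (x, y)
    SELECT x, y, count(*) FROM t GROUP BY y, x;  -- can GroupAgg off the index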

regards, tom lane




Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Rich Shepard

On Sun, 4 Dec 2016, Martin Collins wrote:


Not tried it myself but have you seen
https://www.kde.org/applications/office/kexi/

And if that's no good http://flask.pocoo.org/ is simpler than Django.


Martin,

  The problem with kexi is that it does not run on Microsoft OSes, which is
what my clients use. Long ago I looked at flask; that might be worth a
second look. I'm also seriously looking at Peewee, an ORM simpler than
SQLAlchemy, and at the Phoenix toolkit for the UI.

Thanks very much,

Rich




Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Martin Collins
On 04/12/16 11:59, Rich Shepard wrote:
>
>OK. I spent a lot of time over the past few years looking for simpler
> solutions, and they just don't exist.
> 
>I'll learn django and take it step-by-step.

Not tried it myself but have you seen
https://www.kde.org/applications/office/kexi/

And if that's no good http://flask.pocoo.org/ is simpler than Django.

Martin




Re: [GENERAL] Avoid sorting when doing an array_agg

2016-12-04 Thread Pavel Stehule
2016-12-04 23:12 GMT+01:00 Peter Geoghegan :

> On Sat, Dec 3, 2016 at 5:20 PM, Tomas Vondra
>  wrote:
> > So the sort is probably slow because of CPU, as it compares strings. In
> > some locales that may be very expensive - not sure which locale is used
> > in this case, as it was not mentioned.
>
> I wonder what it would take to teach the optimizer to consider the
> possibility of a "collation strength reduction". In other words, for
> aggregates that perform a sort (or for aggregates that rely on the
> presence of a sort node without there being some other dependency on
> the sort node), it should be possible for the optimizer to determine
> that it would be just fine to use the C locale, since the user isn't
> entitled to assume anything about the exact sort order. There are of
> course cases where this can make a huge difference.
>

It is a pretty good idea.

Regards

Pavel




>
> --
> Peter Geoghegan
>
>
>


Re: [GENERAL] Avoid sorting when doing an array_agg

2016-12-04 Thread Peter Geoghegan
On Sat, Dec 3, 2016 at 5:20 PM, Tomas Vondra
 wrote:
> So the sort is probably slow because of CPU, as it compares strings. In
> some locales that may be very expensive - not sure which locale is used
> in this case, as it was not mentioned.

I wonder what it would take to teach the optimizer to consider the
possibility of a "collation strength reduction". In other words, for
aggregates that perform a sort (or for aggregates that rely on the
presence of a sort node without there being some other dependency on
the sort node), it should be possible for the optimizer to determine
that it would be just fine to use the C locale, since the user isn't
entitled to assume anything about the exact sort order. There are of
course cases where this can make a huge difference.
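Until then, something similar can be had by hand in cases where the caller
genuinely doesn't care about the order within the aggregate (a sketch,
assuming a text column called name):

    -- Forces cheap memcmp-style comparisons for the sort feeding the
    -- aggregate; the element order changes, but nothing is lost if no
    -- caller relied on the locale order:
    SELECT array_agg(name ORDER BY name COLLATE "C")
    FROM people;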

-- 
Peter Geoghegan




Re: [GENERAL] Extensions and privileges in public schema

2016-12-04 Thread Paul Ramsey
When you create the student user, remove their create privs in public.
Then create a scratch schema and grant them privs there.
Finally, alter the student user so that the scratch schema appears FIRST in
their search path. This will cause unqualified CREATE statements to create
in the scratch schema.
For full separation, give each student their own login and set the search
path to

"$user", public

That way each student gets their own private scratch area, and it is used
by default for their creates.
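A minimal sketch of that setup (assuming a role named student and a schema
named scratch; adjust the names to taste):

    -- Keep students from creating objects in public:
    REVOKE CREATE ON SCHEMA public FROM PUBLIC;

    -- Give them a disposable play area:
    CREATE SCHEMA scratch;
    GRANT USAGE, CREATE ON SCHEMA scratch TO student;

    -- Unqualified CREATE statements now land in scratch, not public:
    ALTER ROLE student SET search_path = scratch, public;

    -- At the end of term:
    DROP SCHEMA scratch CASCADE;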

P


On Sun, Dec 4, 2016 at 1:10 PM, Lee Hachadoorian <
lee.hachadooria...@gmail.com> wrote:

> This question is specifically motivated by my use of the PostGIS
> extension, but since other extensions create functions and other supporting
> objects in public schema, I believe it is more general.
>
> I'm teaching a university-level class using PostGIS. I have created a
> scratch schema for students to create objects in. At the end of the term I
> can drop scratch and start fresh the following term.
>
> Students of course can also create objects in public schema, and often do
> so unintentionally because they forget to schema-qualify their CREATE TABLE
> statements. This complicates things because I can't drop public schema
> without dropping various PostGIS (and other) tables and functions.
> Additionally, while I doubt the students would do something like drop a
> public function or supporting table (like spatial_ref_sys), it nonetheless
> seems like a poor idea for these database objects to be vulnerable.
>
> What is considered best practice in this case? Should the PostGIS extension
> be kept in its own schema (as was suggested when I asked about this on
> GIS.SE)? If I do so, can I treat public schema the way I have been using
> scratch schema, i.e. could I drop and recreate clean public schema at end
> of term? Should I leave extensions in public but limit rights of public
> role in that schema (so that they don't unintentionally create tables
> there, or accidentally delete other objects)? Or do Postgres DBAs just not
> worry about the objects in public schema, and rely upon applications and
> login roles to interact with the database intelligently?
>
> To be clear, the primary goal is to keep student-created objects in one schema
> which can be dropped at the end of the term. But the question of preventing
> accidental creation/deletion of objects in public schema is possibly
> related, and the overall database organization might address both concerns.
>
> Best,
> --Lee
>
>
> --
> Lee Hachadoorian
> Assistant Professor of Instruction, Geography and Urban Studies
> Assistant Director, Professional Science Master's in GIS
> Temple University
>


[GENERAL] Extensions and privileges in public schema

2016-12-04 Thread Lee Hachadoorian
This question is specifically motivated by my use of the PostGIS extension,
but since other extensions create functions and other supporting objects in
public schema, I believe it is more general.

I'm teaching a university-level class using PostGIS. I have created a
scratch schema for students to create objects in. At the end of the term I
can drop scratch and start fresh the following term.

Students of course can also create objects in public schema, and often do
so unintentionally because they forget to schema-qualify their CREATE TABLE
statements. This complicates things because I can't drop public schema
without dropping various PostGIS (and other) tables and functions.
Additionally, while I doubt the students would do something like drop a
public function or supporting table (like spatial_ref_sys), it nonetheless
seems like a poor idea for these database objects to be vulnerable.

What is considered best practice in this case? Should the PostGIS extension be
kept in its own schema (as was suggested when I asked about this on GIS.SE)?
If I do so, can I treat public schema the way I have been using scratch
schema, i.e. could I drop and recreate clean public schema at end of term?
Should I leave extensions in public but limit rights of public role in that
schema (so that they don't unintentionally create tables there, or
accidentally delete other objects)? Or do Postgres DBAs just not worry
about the objects in public schema, and rely upon applications and login
roles to interact with the database intelligently?

To be clear, the primary goal is to keep student-created objects in one schema
which can be dropped at the end of the term. But the question of preventing
accidental creation/deletion of objects in public schema is possibly
related, and the overall database organization might address both concerns.

Best,
--Lee


-- 
Lee Hachadoorian
Assistant Professor of Instruction, Geography and Urban Studies
Assistant Director, Professional Science Master's in GIS
Temple University


Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Adrian Klaver

On 12/04/2016 12:01 PM, Rich Shepard wrote:

On Sun, 4 Dec 2016, Adrian Klaver wrote:


The important question that I forgot to ask before is how you plan
to deploy this:



Or are you looking to deploy a front end/database combination to each
client?


  This way. I've no interest or expertise in maintaining a web site (my ISP
does this for mine). I would provide some guidance on installing necessary
software, but it's theirs to use as they choose.


Well, that sort of negates the benefits of a Web front end. You will now
be responsible for setting up a database server, a Web server and the
Django code that ties them together on each client.


What is this application going to do?



Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

2016-12-04 Thread Tom Lane
Justin Pryzby  writes:
> I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
> running into a problem when the table has >830 columns (we have some tables
> which are at the 1600 column limit, and have previously worked around that
> limit using arrays or multiple tables).

Yeah, this is a bug, but fortunately the fix is pretty trivial.  See

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=da05d0ebc637a84ba41a172b32552557ebad199f

regards, tom lane




Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Rich Shepard

On Sun, 4 Dec 2016, Adrian Klaver wrote:

The important question that I forgot to ask before is how you plan to
deploy this:



Or are you looking to deploy a front end/database combination to each client?


  This way. I've no interest or expertise in maintaining a web site (my ISP
does this for mine). I would provide some guidance on installing necessary
software, but it's theirs to use as they choose.

Rich




Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Adrian Klaver

On 12/04/2016 09:59 AM, Rich Shepard wrote:

On Sun, 4 Dec 2016, Adrian Klaver wrote:


That is why I ended up learning Django, I could not find a GUI that was
cross platform and had the capabilities I needed. Others might have
suggestions. It would help though if you could list what you want in a
GUI
platform:


  OK. I spent a lot of time over the past few years looking for simpler
solutions, and they just don't exist.

  I'll learn django and take it step-by-step.


The important question that I forgot to ask before is how you plan to
deploy this:


Are you looking at having clients remotely access a site/database on 
your machine?


Or are you looking to deploy a front end/database combination to each 
client?




Thanks,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Rich Shepard

On Sun, 4 Dec 2016, Adrian Klaver wrote:


That is why I ended up learning Django, I could not find a GUI that was
cross platform and had the capabilities I needed. Others might have
suggestions. It would help though if you could list what you want in a GUI
platform:


  OK. I spent a lot of time over the past few years looking for simpler
solutions, and they just don't exist.

  I'll learn django and take it step-by-step.

Thanks,

Rich




Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Adrian Klaver

On 12/04/2016 08:47 AM, Rich Shepard wrote:

On Sun, 4 Dec 2016, Adrian Klaver wrote:


Base is basically a GUI over the database.


Adrian,

  That's what I assumed it to be.


That being said, my experience with Base is that it is not up to the task. It
worked for me early on, but development on Base lagged relative to the
other modules in LO. My recent attempts have been less successful. Exact
examples of why will have to wait on me dredging up the memories.


  I'm very open to suggestions what to use for the UI. I'm not a
professional coder and learning SQLAlchemy or Django takes too much time
away from my business.

  My clients all run Windows about which I know nothing. Is there a GUI for
postgres that works on all platforms and suitable for someone like me?


That is why I ended up learning Django, I could not find a GUI that was 
cross platform and had the capabilities I needed. Others might have 
suggestions. It would help though if you could list what you want in a 
GUI platform:


1) Form/report designer?

2) Widgets beyond the basics?

3) Programming language bindings?

4) Anything else you can think of.




Thanks,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Select works only when connected from login postgres

2016-12-04 Thread Tom Lane
Joseph Brenner  writes:
>> So what happens when you specify the port in your psql connection, eg:
>> /usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5432
>> /usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5433
>> /usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5434

> With /usr/local/pgsql/bin/psql, only "-p 5433" connects, the
> other two complain like so:

>   psql: could not connect to server: No such file or directory
> Is the server running locally and accepting
> connections on Unix domain socket "/tmp/.s.PGSQL.5434"?

What this probably indicates is that the other two installations are
configured to put their socket files someplace else than /tmp, perhaps
/var/run/postgresql.  Connecting to them and issuing "show
unix_socket_directories" would tell the tale.

You can persuade a psql to connect to a socket in a nondefault directory
by giving the directory name as host, eg

psql --host=/var/run/postgresql -p 5434

It would be interesting to try all nine combinations of the psql's
supplied by your various installations and the servers, just to confirm
which ones behave normally and which don't.  Of course, the other two
would have to be told --host=/tmp to talk to the handbuilt server.
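For example, once connected to a given server (by whichever psql/port/host
combination works for it):

    -- Reports where this server creates its socket files,
    -- e.g. /tmp or /var/run/postgresql:
    SHOW unix_socket_directories;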

regards, tom lane




Re: [GENERAL] count(*) in binary mode returns 0

2016-12-04 Thread Tom Lane
"imagene...@gmail.com"  writes:
> I have an implementation of binary format communication that is working
> quite well but it is failing inexplicably on returning valid results for
> the aggregate function count. It returns a correct result for instance for
> the aggregate function max.

> Here is some debug information to illustrate that I have debugged the host
> order conversions and what have you in implementing this binary format. As
> you can see, it is simply returning 0.

Offhand I'd bet that you're mistransmitting the statement parameter so
that the WHERE condition never succeeds.  You could set log_statement=all
and look in the postmaster log to see what the server thinks it's getting.
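A quick way to turn that on server-wide without a restart (as superuser;
editing postgresql.conf and reloading works just as well):

    ALTER SYSTEM SET log_statement = 'all';
    SELECT pg_reload_conf();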

regards, tom lane




Re: [GENERAL] Select works only when connected from login postgres

2016-12-04 Thread Tom Lane
Adrian Klaver  writes:
> My suspicion is that when you did the source build you got some cross 
> contamination of libraries.

That's the best theory I can come up with either, although libpq's
APIs haven't really changed in any non-backwards-compatible fashion
in years.  I could imagine a newer psql flat-out crashing because it
tries to call some libpq function that doesn't exist in an older
libpq, but that's not what we're seeing here.  It's really weird.

A couple of tests that might help narrow things down:

1. In the server configuration, enable log_connections,
log_disconnections, and log_statement = 'all'.  Does anything show up in
the log when you connect with the broken psql and issue commands?

2. If you issue commands that span multiple lines --- unclosed left
parens, multiline string literals, missing semicolons --- does the
psql prompt change to match?

regards, tom lane




Re: [GENERAL] Select works only when connected from login postgres

2016-12-04 Thread Joseph Brenner
Yeah, I get the sense I need to simplify the situation, if only to
eliminate some distractions from the story.

This morning I was thinking I want to play around with pg_hba.conf
settings some more, but after that I'll do some pg_dumps and
uninstalls and see how things behave.

Oh, trying:

  /usr/bin/psql --dbname=doom --username=doom -p 543x

As login 'doom' the connection happens only for port 5432,
as login 'postgres' the connection fails differently for port 5432:

  psql: FATAL:  Peer authentication failed for user "doom"

(Like I said, I need to play with pg_hba.conf a bit.)

But that actually surprises me, I would've expected it'd be on
port 5434, because that's the most recent binary install, not the
original 9.4 version:

  /usr/bin/psql -V
  psql (PostgreSQL) 9.6.1

/etc/postgresql/9.6/main/postgresql.conf
  port = 5434# (change requires restart)

/etc/postgresql/9.4/main/postgresql.conf
  port = 5432# (change requires restart)


On Sun, Dec 4, 2016 at 7:06 AM, Adrian Klaver  wrote:
> On 12/03/2016 09:38 PM, Joseph Brenner wrote:
>>>
>>> So is the 9.4 instance the production/live database?
>>
>>
>> Essentially, but it's not heavily used: this is me messing around on a dev
>> box.
>>
>>> So what happens when you specify the port in your psql connection, eg:
>>> /usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5432
>>> /usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5433
>>> /usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5434
>>
>>
>> With /usr/local/pgsql/bin/psql, only "-p 5433" connects, the
>> other two complain like so:
>>
>>   psql: could not connect to server: No such file or directory
>> Is the server running locally and accepting
>> connections on Unix domain socket "/tmp/.s.PGSQL.5434"?
>>
>
> Alright how about?:
>
> /usr/bin/psql --dbname=doom --username=doom -p 5434
>
> My suspicion is that when you did the source build you got some cross
> contamination of libraries.
>
> If it were me I would get rid of the instance that you built from source,
> assuming that there is nothing important on it.
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: [GENERAL] INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

2016-12-04 Thread Gmail

> On Dec 4, 2016, at 9:32 AM, Justin Pryzby  wrote:
> 
> Our application INSERTs data from external sources, and infrequently UPDATEs
> the previously-inserted data (currently, it first SELECTs to determine whether
> to UPDATE).
> 
> I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
> running into a problem when the table has >830 columns (we have some tables
> which are at the 1600 column limit, and have previously worked around that
> limit using arrays or multiple tables).
> 
> I tried to work around the upsert problem by using pygresql inline=True
> (instead of default PREPAREd statements) but both have the same issue.
> 
> I created a test script which demonstrates the problem (attached).
> 
> It seems to me that there's currently no way to "upsert" such a wide table?

Pardon my intrusion here, but I'm really curious what sort of datum has so many 
attributes?





Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Rich Shepard

On Sun, 4 Dec 2016, Adrian Klaver wrote:


Base is basically a GUI over the database.


Adrian,

  That's what I assumed it to be.


That being said, my experience with Base is that it is not up to the task. It
worked for me early on, but development on Base lagged relative to the
other modules in LO. My recent attempts have been less successful. Exact
examples of why will have to wait on me dredging up the memories.


  I'm very open to suggestions what to use for the UI. I'm not a
professional coder and learning SQLAlchemy or Django takes too much time
away from my business.

  My clients all run Windows about which I know nothing. Is there a GUI for
postgres that works on all platforms and suitable for someone like me?

Thanks,

Rich




[GENERAL] INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

2016-12-04 Thread Justin Pryzby
Our application INSERTs data from external sources, and infrequently UPDATEs
the previously-inserted data (currently, it first SELECTs to determine whether
to UPDATE).

I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
running into a problem when the table has >830 columns (we have some tables
which are at the 1600 column limit, and have previously worked around that
limit using arrays or multiple tables).

I tried to work around the upsert problem by using pygresql inline=True
(instead of default PREPAREd statements) but both have the same issue.

I created a test script which demonstrates the problem (attached).

It seems to me that there's currently no way to "upsert" such a wide table?

I see:
./src/include/access/htup_details.h:#define MaxTupleAttributeNumber 1664 /* 8 * 208 */

./src/backend/parser/parse_node.c-  /*
./src/backend/parser/parse_node.c-   * Check that we did not produce too many resnos; at the very least we
./src/backend/parser/parse_node.c-   * cannot allow more than 2^16, since that would exceed the range of a
./src/backend/parser/parse_node.c:   * AttrNumber. It seems safest to use MaxTupleAttributeNumber.
./src/backend/parser/parse_node.c-   */
./src/backend/parser/parse_node.c:  if (pstate->p_next_resno - 1 > MaxTupleAttributeNumber)
./src/backend/parser/parse_node.c-  ereport(ERROR,
./src/backend/parser/parse_node.c-  (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
./src/backend/parser/parse_node.c:   errmsg("target lists can have at most %d entries",
./src/backend/parser/parse_node.c:  MaxTupleAttributeNumber)));

Thanks in advance for any suggestions.

Justin
#! /bin/sh
set -e

n=831
t=wide_upsert

psql -c "DROP TABLE IF EXISTS $t"

cols='id int'
vals='0'
sets='id=0'
for a in `seq -w 0 $n`
do
c="c$a int"
cols="$cols, c$a int"

vals="$vals, \$1"
sets="$sets, c$a=\$1"
done

echo $cols
psql -c "CREATE TABLE $t ($cols, PRIMARY KEY (id))"

set -x
psql -c "PREPARE x AS INSERT INTO $t VALUES ($vals) ON CONFLICT (id) DO UPDATE 
SET $sets; EXECUTE x(0)"


Re: [GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Adrian Klaver

On 12/04/2016 07:41 AM, Rich Shepard wrote:

   I have postgres tables (with data) for a specific application but have
not found the time to learn django to make it an application that my
clients
can use. It occurs to me that the most parsimonious approach is to use
LibreOffice's Base for the UI with postgres as the dbms-engine. While I
work
in only linux (and unixODBC is installed), my clients all use various
flavors of Windows, but the three critical software applications (LO,
PostgreSQL, and ODBC) are available for Microsoft, too.

   I've scanned the Base portion of the LO User Guide and it looks to be a
practical solution to quickly providing clients with working database
applications.

   As this is completely new territory for me, I'd like to learn from
those who've done this before. As examples, What can/should I do as stored
procedures using PL/pgSQL? Can queries be created and tested using psql
before being imported into Base?


Base is basically a GUI over the database. You can still do whatever you
want in Postgres via psql or any other client, for that matter. That
being said, my experience with Base is that it is not up to the task. It
worked for me early on, but development on Base lagged relative to the
other modules in LO. My recent attempts have been less successful. Exact
examples of why will have to wait on me dredging up the memories.




   All tips and gotchas for a new user are certainly welcome.

TIA,

Rich







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Index size

2016-12-04 Thread Andreas Joseph Krogh
On Sunday 4 December 2016 at 16:51:54, Melvin Davidson
<melvin6...@gmail.com> wrote:

> On Sun, Dec 4, 2016 at 4:43 AM, Samuel Williams
> <space.ship.travel...@gmail.com> wrote:
>> Melvin, of course there are differences. However, I suspect there are at
>> least SOME tangible differences which can be identified.
>
> You are comparing apples to oranges. MySQL and PostgreSQL engines are
> different by design, so likewise the size of the indexes will be different.
> You may as well ask why a 2015 Porsche 911 Carrera 6 cyl, 3.4 L, Auto(AM-S7)
> gets 22 MPG, but the 2016 Jaguar XF gets 24 mpg.
>
> Samuel,
>
> Please note that the accepted convention in this forum is to bottom post.
> Please do not top post.

The accepted convention in this forum is to not have a purple signature :-)

--
Andreas Joseph Krogh


Re: [GENERAL] Index size

2016-12-04 Thread Melvin Davidson
On Sun, Dec 4, 2016 at 4:43 AM, Samuel Williams <
space.ship.travel...@gmail.com> wrote:

> Melvin, of course there are differences. However, I suspect there are at
> least SOME tangible differences which can be identified.
>
> On 4 December 2016 at 15:53, Melvin Davidson  wrote:
>
>>
>>
>> On Sat, Dec 3, 2016 at 9:32 PM, Steve Atkins  wrote:
>>
>>>
>>> > On Dec 3, 2016, at 3:57 PM, Samuel Williams <
>>> space.ship.travel...@gmail.com> wrote:
>>> >
>>> > Thanks everyone for your feedback so far. I've done a bit more digging:
>>> >
>>> > MySQL in MBytes (about 350 million rows):
>>> >
>>> > index_user_event_on_what_category_id_created_at_latlng | 22806.00
>>> > index_user_event_for_reporting | 18211.00
>>> > index_user_event_on_created_at | 9519.00
>>> > index_user_event_on_user_id | 6884.00
>>> > index_user_event_on_poi_id | 4891.00
>>> > index_user_event_on_deal_id | 3979.00
>>> >
>>> > Postgres (about 250 million rows):
>>> >
>>> > index_user_event_on_what_category_id_created_at_latlng | 25 GB
>>> > index_user_event_for_reporting | 19 GB
>>> > index_user_event_on_created_at | 7445 MB
>>> > index_user_event_on_user_id | 7274 MB
>>> > index_user_event_on_deal_id | 7132 MB
>>> > index_user_event_on_poi_id | 7099 MB
>>> >
>>> > So, the index is a bit bigger, plus there is also the PKEY index which
>>> > increases disk usage by another whole index. Keep in mind in the
>>> > above, MySQL has about 40% more data.
>>> >
>>> > With some indexes, it looks like MySQL might not be adding all data to
>>> > the index (e.g. ignoring NULL values). Does MySQL ignore null values
>>> > in an index? Can we get the same behaviour in Postgres to minimise
>>> > usage? What would be the recommendation here?
>>>
>>> It's unlikely anyone will be able to usefully answer the questions you
>>> should be asking without seeing the schema and index definitions,
>>> and maybe some clues about how you're querying the data.
>>>
>>> Cheers,
>>>   Steve
>>>
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
>>>
>>
>> You are comparing apples to oranges. MySQL and PostgreSQL engines are
>> different by design, so likewise
>> the size of the indexes will be different.
>> You may as well ask why a 2015 Porsche 911 Carrera 6 cyl, 3.4 L,
>> Auto(AM-S7) gets 22 MPG,
>> but the 2016 Jaguar XF MPG gets 24 mpg.
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>

Samuel,

Please note that the accepted convention in this forum is to bottom post.
Please do not top post.

> Melvin, of course there are differences. However, I suspect there are at
> least SOME tangible differences which can be identified.

I have to ask: WHY do you think it is important to identify any differences
in the index methods? What is important is that you understand why you need
indexes and how they are used.

Tutorial - Indexes
https://www.tutorialspoint.com/postgresql/postgresql_indexes.htm

CREATE INDEX
https://www.postgresql.org/docs/9.4/static/sql-createindex.html

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Postgres and LibreOffice's 'Base'

2016-12-04 Thread Rich Shepard

   I have postgres tables (with data) for a specific application but have
not found the time to learn django to make it an application that my clients
can use. It occurs to me that the most parsimonious approach is to use
LibreOffice's Base for the UI with postgres as the dbms-engine. While I work
in only linux (and unixODBC is installed), my clients all use various
flavors of Windows, but the three critical software applications (LO,
PostgreSQL, and ODBC) are available for Microsoft, too.

   I've scanned the Base portion of the LO User Guide and it looks to be a
practical solution to quickly providing clients with working database
applications.

   As this is completely new territory for me, I'd like to learn from
those who've done this before. As examples, What can/should I do as stored
procedures using PL/pgSQL? Can queries be created and tested using psql
before being imported into Base?

   All tips and gotchas for a new user are certainly welcome.

TIA,

Rich






Re: [GENERAL] Select works only when connected from login postgres

2016-12-04 Thread Adrian Klaver

On 12/03/2016 09:38 PM, Joseph Brenner wrote:

So is the 9.4 instance the production/live database?


Essentially, but it's not heavily used: this is me messing around on a dev box.


So what happens when you specify the port in your psql connection, eg:
/usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5432
/usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5433
/usr/local/pgsql/bin/psql --dbname=doom --username=doom -p 5434


With /usr/local/pgsql/bin/psql, only "-p 5433" connects, the
other two complain like so:

  psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5434"?



Alright how about?:

/usr/bin/psql --dbname=doom --username=doom -p 5434

My suspicion is that when you did the source build you got some cross 
contamination of libraries.


If it were me I would get rid of the instance that you built from
source, assuming that there is nothing important on it.




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] count(*) in binary mode returns 0

2016-12-04 Thread imagene...@gmail.com
I have an implementation of binary format communication that is working
quite well but it is failing inexplicably on returning valid results for
the aggregate function count. It returns a correct result for instance for
the aggregate function max.

Here is some debug information to illustrate that I have debugged the host
order conversions and what have you in implementing this binary format. As
you can see, it is simply returning 0.

s:select count(*) from "User" where ("email" = $1)
valsl 1
val_str:t...@t.com len:7
result received
free_iter_pre
alloc_result pre
fname: count
nfields: 1
from_psql  bytes_to_read:4 read:4
host_order  bytes_to_read:4 read:4
conv_int ir:0
fname: count


Re: [GENERAL] Index size

2016-12-04 Thread Samuel Williams
Melvin, of course there are differences. However, I suspect there are at
least SOME tangible differences which can be identified.

On 4 December 2016 at 15:53, Melvin Davidson  wrote:

>
>
> On Sat, Dec 3, 2016 at 9:32 PM, Steve Atkins  wrote:
>
>>
>> > On Dec 3, 2016, at 3:57 PM, Samuel Williams <
>> space.ship.travel...@gmail.com> wrote:
>> >
>> > Thanks everyone for your feedback so far. I've done a bit more digging:
>> >
>> > MySQL in MBytes (about 350 million rows):
>> >
>> > index_user_event_on_what_category_id_created_at_latlng | 22806.00
>> > index_user_event_for_reporting | 18211.00
>> > index_user_event_on_created_at | 9519.00
>> > index_user_event_on_user_id | 6884.00
>> > index_user_event_on_poi_id | 4891.00
>> > index_user_event_on_deal_id | 3979.00
>> >
>> > Postgres (about 250 million rows):
>> >
>> > index_user_event_on_what_category_id_created_at_latlng | 25 GB
>> > index_user_event_for_reporting | 19 GB
>> > index_user_event_on_created_at | 7445 MB
>> > index_user_event_on_user_id | 7274 MB
>> > index_user_event_on_deal_id | 7132 MB
>> > index_user_event_on_poi_id | 7099 MB
>> >
>> > So, the index is a bit bigger, plus there is also the PKEY index which
>> > increases disk usage by another whole index. Keep in mind in the
>> > above, MySQL has about 40% more data.
>> >
>> > With some indexes, it looks like MySQL might not be adding all data to
>> > the index (e.g. ignoring NULL values). Does MySQL ignore null values
>> > in an index? Can we get the same behaviour in Postgres to minimise
>> > usage? What would be the recommendation here?
>>
>> It's unlikely anyone will be able to usefully answer the questions you
>> should be asking without seeing the schema and index definitions,
>> and maybe some clues about how you're querying the data.
>>
>> Cheers,
>>   Steve
>>
>>
>
> You are comparing apples to oranges. MySQL and PostgreSQL engines are
> different by design, so likewise
> the size of the indexes will be different.
> You may as well ask why a 2015 Porsche 911 Carrera 6 cyl, 3.4 L,
> Auto(AM-S7) gets 22 MPG,
> but the 2016 Jaguar XF MPG gets 24 mpg.
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


Re: [GENERAL] Index size

2016-12-04 Thread Samuel Williams
So, uh, my main question was: does MySQL add null values to an index, and
is this different from Postgres? The schema is irrelevant, except that the
column allows null values. I noticed that when you create an index you can
add a WHERE clause. Could it be that I should add WHERE the fields are not
null?
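For reference, Postgres btree indexes do include NULLs, but a partial index
can leave them out. A sketch against the user_event table below (assuming
queries on deal_id always look for non-NULL values; the name mirrors the
existing index):

    -- Rows with NULL deal_id are simply not indexed, which shrinks the
    -- index when most rows have no deal:
    CREATE INDEX index_user_event_on_deal_id ON user_event (deal_id)
        WHERE deal_id IS NOT NULL;

    -- The planner can use it only for queries that imply the predicate:
    -- SELECT ... FROM user_event WHERE deal_id = 42;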

Here is the schema. Any advice or suggestions most welcome.

--
-- PostgreSQL database dump
--

-- Dumped from database version 9.5.4
-- Dumped by pg_dump version 9.5.4

SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;

SET search_path = public, pg_catalog;

SET default_tablespace = '';
SET default_with_oids = false;

--
-- Name: user_event; Type: TABLE; Schema: public; Owner: http
--

CREATE TABLE user_event (
    id integer NOT NULL,
    user_id integer NOT NULL,
    latitude numeric(9,6),
    longitude numeric(9,6),
    active boolean DEFAULT true NOT NULL,
    poi_id integer,
    deal_id integer,
    category_id integer,
    what character varying(32) NOT NULL,
    locale character varying(8),
    created_at timestamp without time zone NOT NULL,
    parameters jsonb
);

ALTER TABLE user_event OWNER TO http;

--
-- Name: user_event_id_seq; Type: SEQUENCE; Schema: public; Owner: http
--

CREATE SEQUENCE user_event_id_seq
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;

ALTER TABLE user_event_id_seq OWNER TO http;

--
-- Name: user_event_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: http
--

ALTER SEQUENCE user_event_id_seq OWNED BY user_event.id;

--
-- Name: id; Type: DEFAULT; Schema: public; Owner: http
--

ALTER TABLE ONLY user_event ALTER COLUMN id SET DEFAULT nextval('user_event_id_seq'::regclass);

--
-- Name: user_event_pkey; Type: CONSTRAINT; Schema: public; Owner: http
--

ALTER TABLE ONLY user_event
    ADD CONSTRAINT user_event_pkey PRIMARY KEY (id);

--
-- Name: index_user_event_for_reporting; Type: INDEX; Schema: public; Owner: http
--

CREATE INDEX index_user_event_for_reporting ON user_event USING btree (latitude, longitude, created_at, user_id);

--
-- Name: index_user_event_on_created_at; Type: INDEX; Schema: public; Owner: http
--

CREATE INDEX index_user_event_on_created_at ON user_event USING btree (created_at);

--
-- Name: index_user_event_on_deal_id; Type: INDEX; Schema: public; Owner: http
--

CREATE INDEX index_user_event_on_deal_id ON user_event USING btree (deal_id);

--
-- Name: index_user_event_on_poi_id; Type: INDEX; Schema: public; Owner: http
--

CREATE INDEX index_user_event_on_poi_id ON user_event USING btree (poi_id);

--
-- Name: index_user_event_on_user_id; Type: INDEX; Schema: public; Owner: http
--

CREATE INDEX index_user_event_on_user_id ON user_event USING btree (user_id);

--
-- Name: index_user_event_on_what_category_id_created_at_latlng; Type: INDEX; Schema: public; Owner: http
--

CREATE INDEX index_user_event_on_what_category_id_created_at_latlng ON user_event USING btree (what, category_id, created_at, latitude, longitude);

--
-- PostgreSQL database dump complete
--