Re: [GENERAL] Puzzled by UNION with unknown types

2017-09-20 Thread Ken Tanzer
Thanks Tom!



[GENERAL] Puzzled by UNION with unknown types

2017-09-20 Thread Ken Tanzer
Hi.  I've solved my practical problems, and I know unknown types are
just bad, but am still curious about why some of these cause errors,
and others don't.

It seems contingent on whether the value is generated on the fly, but
I don't understand the mechanism or distinction.  Any help
appreciated!

-- These work:

ag_consulting=> SELECT '1'::unknown  UNION ALL SELECT '2'::unknown;

 unknown
-
 1
 2
(2 rows)


-- db_list is a table with one row:

ag_consulting=> SELECT '1'::unknown FROM db_list UNION ALL SELECT
'2'::unknown FROM db_list;
 unknown
-
 1
 2
(2 rows)


-- These don't work:


ag_consulting=> SELECT * FROM (SELECT '1'::unknown) foo UNION ALL
SELECT * FROM (SELECT '1'::unknown) bar;
ERROR:  failed to find conversion function from unknown to text


ag_consulting=> CREATE TEMP TABLE t1 AS SELECT '1'::unknown; \d t1
WARNING:  column "unknown" has type "unknown"
DETAIL:  Proceeding with relation creation anyway.
SELECT 1
 Table "pg_temp_2.t1"
 Column  |  Type   | Modifiers
-+-+---
 unknown | unknown |

ag_consulting=> SELECT * FROM t1 UNION ALL SELECT * FROM t1;
ERROR:  failed to find conversion function from unknown to text
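
(For the archives: spelling out a real type makes the failing cases go
through.  A minimal sketch, with an arbitrary column name:

SELECT * FROM (SELECT '1'::text AS val) foo UNION ALL
SELECT * FROM (SELECT '2'::text AS val) bar;

CREATE TEMP TABLE t1_fixed AS SELECT '1'::text AS val;
SELECT * FROM t1_fixed UNION ALL SELECT * FROM t1_fixed;
)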





Re: [GENERAL] Fields re-ordered on JOIN with * and USING

2017-09-01 Thread Ken Tanzer
>
> ...least excruciating version of the relevant text...


Ouch, I'm glad you folks take care of reading such stuff!  What you put in
the documentation was much much clearer--just wish I had found it!

Speaking of which, I had looked at the "From" section of the "SELECT" page (
https://www.postgresql.org/docs/current/static/sql-select.html#SQL-FROM),
which also has a somewhat detailed section on joins.  I'm wondering about
the utility of:


   - Adding a link from the "SELECT" page (FROM section) to the page Tom
   referenced (which includes a link the other way)

and/or

   - Adding this detail to the section on USING on the select page:


A clause of the form USING ( a, b, ... ) is shorthand for ON left_table.a =
right_table.a AND left_table.b = right_table.b ...  Also, USING implies
that only one of each pair of equivalent columns will be included in the
join output, not both.  *The columns merged by USING will appear first in
the join output.*

Cheers,
Ken

>
>


[GENERAL] Fields re-ordered on JOIN with * and USING

2017-09-01 Thread Ken Tanzer
Hi.  I recently noticed that when doing a SELECT * with USING, that the
join field(s) appear first in the output.  I'd never noticed that before,
and was just curious if that is expected behavior or not.  Thanks.

Ken

CREATE TEMP TABLE t1 (
  f1 INTEGER,
  f2 INTEGER UNIQUE,
  f3 INTEGER,
  f4 INTEGER
);

CREATE TEMP TABLE t2 (
  f2 INTEGER,
  f3 INTEGER,
  f5 INTEGER
);

SELECT * FROM t1 LEFT JOIN t2 USING (f3,f2);

 f3 | f2 | f1 | f4 | f5
++++
(0 rows)
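
(In case it's useful to someone searching the archives, two ways to get the
columns back under your control, sketched against the same t1/t2:

SELECT t1.*, t2.f5 FROM t1 LEFT JOIN t2 USING (f3, f2);

SELECT t1.*, t2.f5 FROM t1 LEFT JOIN t2 ON t2.f3 = t1.f3 AND t2.f2 = t1.f2;

The first keeps USING but names the output columns explicitly; the second
spells the join out with ON, which leaves column order alone but would
return both copies of f2 and f3 if you went back to a bare *.)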





Re: [GENERAL] 9.6 parameters messing up my 9.2 pg_dump/pg_restore

2017-06-29 Thread Ken Tanzer
On Thu, Jun 29, 2017 at 9:34 AM, Jeff Janes  wrote:

>
>> Well sure, I can see it increases your chances of getting _something_
>> restored.  But there's also a lot to be said for ensuring that _all_ your
>> data restored, and did so correctly, no?
>>
>
> Record the errors, and look through them to decide if they are important
> or not.
>
>
I'd still rather have the data be correct, or not restored at all.  An
outright failure also greatly increases the chances someone will notice it,
and let me know about it.



> But better yet, use v9.2 of pg_dump to dump things out of a 9.2 server
> which you want to load to another 9.2 server.  Don't be at the mercy of
> your $PATH.
>
>
Yep, that's the direction I went.


> (Or even more better yet, upgrade the servers from 9.2 to 9.6, and then
> use 9.6's pg_dump)
>
>
On the todo list.  I don't imagine though that I'm the only one who would
install a newer version of PG, do some testing, and then upgrade DBs to the
newer version, and possibly not do it all immediately and at once.

I think it's great and impressive that you can install and run two versions
simultaneously, but I have found a couple gotchas in the process.  Maybe
those are documented somewhere, but if so I haven't seen it.  The issues I
hit all had fairly easy solutions, but I'd humbly suggest that a "things to
watch out for when running multiple versions of Postgres concurrently"
might be a useful document.

Cheers,
Ken



Re: [GENERAL] 9.6 parameters messing up my 9.2 pg_dump/pg_restore

2017-06-29 Thread Ken Tanzer
Thanks for the responses.  For me, using the 9.2 binary was the winner.
Shoulda thought of that!

On Wed, Jun 28, 2017 at 1:30 PM, Tom Lane  wrote:

>
> Generally speaking, it helps a lot if you don't insist on restoring the
> output in a single transaction.  In this case, that would allow the
> restore to ignore the new parameters and move on.
>
> regards, tom lane
>

Well sure, I can see it increases your chances of getting _something_
restored.  But there's also a lot to be said for ensuring that _all_ your
data restored, and did so correctly, no?

Cheers,
Ken




[GENERAL] 9.6 parameters messing up my 9.2 pg_dump/pg_restore

2017-06-28 Thread Ken Tanzer
Hi.  I've got a CentOS server with 9.2 and 9.6 both running.  (Both from
PGDG).  I've got a cron job that transfers data from one DB to another,
that recently stopped working, and I traced it to my installing 9.6.  The
dump command is pretty straightforward:

pg_dump -c -O -t "${prefix}*"...

But at the top it sets a bunch of parameters, some of which are
unrecognized by 9.2, which then throws an error and causes my transaction
to fail.  Top of the dump file:

--
-- PostgreSQL database dump
--

-- Dumped from database version 9.2.21
-- Dumped by pg_dump version 9.6.3

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;
SET row_security = off;

And output from running pg_restore:

BEGIN
SET
ERROR:  unrecognized configuration parameter "lock_timeout"
ERROR:  current transaction is aborted, commands ignored until end of
transaction block
ERROR:  current transaction is aborted, commands ignored until end of
transaction block
ERROR:  current transaction is aborted, commands ignored until end of
transaction block
...


I didn't see any options for dealing with this, though I'm hoping I'm
missing something easy or obvious.  Any suggestions or help would be
appreciated.  Thanks.

Ken



Re: [GENERAL] Ordering of window functions with no order specified?

2017-06-15 Thread Ken Tanzer
Thanks for the replies!




[GENERAL] Ordering of window functions with no order specified?

2017-06-14 Thread Ken Tanzer
Hi.  If you use a window function and don't specify an order, will the rows
be processed in the same order as the query results?

In this particular case, I'm wondering about row_number(), and whether I
explicitly need to repeat the ordering that's already specified in the
query?

SELECT a,b,c,row_number()
OVER (PARTITION BY a) -- Do I need an ORDER BY c,b here?
FROM foo
ORDER  BY c,b

Also, I'm interested in both what if any behavior is guaranteed, and what
gets done in practice.  (i.e., a SELECT with no order doesn't have
guarantees, but in practice seems to return the results in the order they
were added to the table.  Is it something similar here?)
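
(A sketch of the belt-and-suspenders version, for reference; as far as I can
tell, the only ordering the window function is guaranteed to see is whatever
its own OVER clause spells out:

SELECT a, b, c,
       row_number() OVER (PARTITION BY a ORDER BY c, b) AS rn
FROM foo
ORDER BY c, b;
)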

Thanks!

Ken





Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-11 Thread Ken Tanzer
On Sun, Jun 11, 2017 at 12:15 PM, Bruno Wolff III <br...@wolff.to> wrote:

> On Fri, Jun 09, 2017 at 21:14:15 -0700,
>  Ken Tanzer <ken.tan...@gmail.com> wrote:
>
>> On Fri, Jun 9, 2017 at 5:38 PM, Bruno Wolff III <br...@wolff.to> wrote:
>>
>> Seems to me they are separate issues.   App currently has access to the
>> password for accessing the DB.  (Though I could change that to ident
>> access
>> and skip the password.)  App 1) connects to the DB, 2) authenticates the
>> user (within the app), then 3) proceeds to process input, query the DB,
>> produce output.  If step 2A becomes irrevocably changing to a
>> site-specific
>> role, then at least I know that everything that happens within 3 can't
>> cross the limitations of per-site access.  If someone can steal my
>> password
>> or break into my backend, that's a whole separate problem that already
>> exists both now and in this new scenario.
>>
>
> In situations where a person has enough access to the app (e.g. it is a
> binary running on their desktop) to do spurious role changes, they likely
> have enough acces to hijack the database connection before privileges are
> dropped.
>

Ah yes, I could see that.  In this case it's a web app, so only the server
has the DB credentials.  I'd really hate it if each client had to be able
to access those credentials!

Cheers,
Ken




Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
On Fri, Jun 9, 2017 at 5:38 PM, Bruno Wolff III <br...@wolff.to> wrote:

> On Thu, Jun 08, 2017 at 22:37:34 -0700,
>  Ken Tanzer <ken.tan...@gmail.com> wrote:
>
>>
>> My approach was to have the initial connection made by the owner, and then
>> after successfully authenticating the user, to switch to the role of the
>> site they belong to.  After investigation, this still seems feasible but
>> imperfect.  Specifically, I thought it would be possible to configure such
>> that after changing to a more restricted role, it would not be possible to
>> change back.  But after seeing this thread (
>>
>
> How are you keeping the credentials of the owner from being compromised?
> It seems if you are worried about role changing, adversaries will likely
> also be in a position to steal the owner's credentials or hijack the
> connection before privileges are dropped.
>

Seems to me they are separate issues.   App currently has access to the
password for accessing the DB.  (Though I could change that to ident access
and skip the password.)  App 1) connects to the DB, 2) authenticates the
user (within the app), then 3) proceeds to process input, query the DB,
produce output.  If step 2A becomes irrevocably changing to a site-specific
role, then at least I know that everything that happens within 3 can't
cross the limitations of per-site access.  If someone can steal my password
or break into my backend, that's a whole separate problem that already
exists both now and in this new scenario.

Cheers,
Ken






Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
>
> As to your very last point (suggestions about other approaches), is it
> impossible or impractical to migrate to a scheme in which each user
> actually has a data base role and their own password? Postgresql has really
> great facility for managing database authorization and access by means of
> login roles assignable membership in group roles. Why not let the tool do
> what it can already do very effectively?
>
> -- B
>
>
If you mean each individual person having their own role, I'd say it's not
impossible, but impractical at the moment, and (probably) desirable as a
longer-term goal.  There's just an awful lot of logic that
would have to be worked into the access control, as well as a way to create
and maintain all the roles.  Some day!  Maybe! :)

Ken




Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
On Fri, Jun 9, 2017 at 11:22 AM, Joe Conway <m...@joeconway.com> wrote:

> On 06/09/2017 08:56 AM, Ken Tanzer wrote:
> > The extra logging would be undesirable.  Is there any way to skip that
> > entirely?  I see with block_log_statement I could dial down the logging
> > after switching users, but that would require the app to be aware of
> > what the current "normal" logging level was.
>
> Also from the README:
> ---
> Notes:
>
> If set_user.block_log_statement is set to "off", the log_statement
> setting is left unchanged.
> ---
>
> So assuming you do not normally have statements being logged, this would
> not change that.
>
>
Despite reading that, I was a little uncertain because of it being called
block_log_statement.  It seems like conceptually it's really
log_all_statements, though I suspect you won't want to change the name in
midstream.

FWIW, it would be clearer at least to me if you took the two statements in
the description:


   - log_statement setting is set to "all", meaning every SQL statement
   executed while in this state will also get logged.
   - If set_user.block_log_statement is set to "on", SET log_statement and
   variations will be blocked.

And this one from the notes:

   - If set_user.block_log_statement is set to "off", the log_statement
   setting is left unchanged.


And combined them together:

If set_user.block_log_statement is set to "on", the log_statement setting is
set to "all", meaning every SQL statement executed while in this state will
also get logged.  SET log_statement and variations will be blocked.  If set
to "off", the log_statement setting is left unchanged.


> > Any other pitfalls I'm not seeing, or reasons this might be a bad idea?
>
> As noted in the README, set_user will refuse to run inside a transaction
> block, but other than that none that I know of. Of course if you come up
> with any I'd be very interested to hear about them.
>
>
If I go this route, get it up and running and find any, I'll be happy to
let you know. :)

Thanks a lot for your help!

Ken



> Joe
>
> --
> Crunchy Data - http://crunchydata.com
> PostgreSQL Support for Secure Enterprises
> Consulting, Training, & Open Source Development
>
>



Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
On Fri, Jun 9, 2017 at 6:42 AM, Joe Conway <m...@joeconway.com> wrote:

> On 06/08/2017 10:37 PM, Ken Tanzer wrote:
> > My approach was to have the initial connection made by the owner, and
> > then after successfully authenticating the user, to switch to the role
> > of the site they belong to.  After investigation, this still seems
> > feasible but imperfect.  Specifically, I thought it would be possible to
> > configure such that after changing to a more restricted role, it would
> > not be possible to change back.  But after seeing this thread
> > (http://www.postgresql-archive.org/Irreversible-SET-ROLE-td5828828.html),
> I'm
> > gathering that this is not the case.
>
> See set_user for a possible solution: https://github.com/pgaudit/
>
>
Thanks!  Looking at the README, it seems like the intended use case is the
opposite (escalating privileges), but if I understand correctly it could work anyway?

If I'm understanding, you could set_user() with a random token and thereby
prevent switching back?
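
Something like this is what I have in mind (just a sketch; the role name and
token are made up, and the two-argument call is only my reading of the
README, not something I've tried):

-- Plain SET ROLE is always reversible, which is the problem:
SET ROLE restricted_site_role;
RESET ROLE;   -- and we're back to the owner

-- Whereas, if I read the README right, switching back from this would
-- require presenting the same token:
SELECT set_user('restricted_site_role', 'some-long-random-token');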

The extra logging would be undesirable.  Is there any way to skip that
entirely?  I see with block_log_statement I could dial down the logging
after switching users, but that would require the app to be aware of what
the current "normal" logging level was.

Any other pitfalls I'm not seeing, or reasons this might be a bad idea?

Ken






Re: [GENERAL] Limiting DB access by role after initial connection?

2017-06-09 Thread Ken Tanzer
As a follow up to this, a couple more questions from my wishful thinking
list:

1) Any way, whether kosher or hacky, to set an arbitrary yet immutable (for
the lifetime of the session) variable within a session?  Something akin to
DEFINE, which wouldn't allow redefinition?  A temp table that couldn't be
dropped?

2) Same as above, but set by the app before making the connection?  I'd
settle for this even if it meant two connections per page.

Either of those would facilitate more fine-grained, per-user access, which
would ultimately be better and maybe allow me to skip the creation and
management of roles.  Having discovered that the RLS will not apply to all
my existing views and I'll need to rewrite them anyway, this seems like a
better thing to wish for!
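
(For reference, the closest approximation seems to be a custom setting.  It
is easy to test from a policy or view, but it is not immutable, since
anything that can run SET can change it mid-session, which is exactly the
gap in question 1.  A sketch, with an arbitrary parameter name:

SELECT set_config('agency.current_site', 'SEATTLE', false);  -- false = lasts for the session

-- ...and then in an RLS policy or view, something like:
--   WHERE site_code = current_setting('agency.current_site')
SELECT current_setting('agency.current_site');
)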

Cheers,
Ken



[GENERAL] Limiting DB access by role after initial connection?

2017-06-08 Thread Ken Tanzer
Hi.  As context, I'm working with an organization with a current production
database.  Organizations in other locations using the same service delivery
model want to share this database, with some but not all of the data
restricted so that people at each site can see only that site's data.  I've
been looking at doing this by creating a role for each location, and then
using RLS to restrict access to some tables.  Currently the database has
one user, the owner, and access is controlled within the application by
usernames and passwords within the DB.
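
Roughly the shape I have in mind, as a sketch (all names are made up, the
policy is keyed on a hard-coded site code just for illustration, and this
assumes 9.5 or later for RLS):

CREATE ROLE site_seattle NOLOGIN;
GRANT site_seattle TO app_owner;   -- so the owner's connection can SET ROLE to it
-- plus the usual GRANTs on the tables for site_seattle

ALTER TABLE client ENABLE ROW LEVEL SECURITY;

CREATE POLICY site_isolation ON client
    FOR ALL TO site_seattle
    USING (site_code = 'SEATTLE');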

My approach was to have the initial connection made by the owner, and then
after successfully authenticating the user, to switch to the role of the
site they belong to.  After investigation, this still seems feasible but
imperfect.  Specifically, I thought it would be possible to configure such
that after changing to a more restricted role, it would not be possible to
change back.  But after seeing this thread (
http://www.postgresql-archive.org/Irreversible-SET-ROLE-td5828828.html),
I'm gathering that this is not the case.

I can still go this route, either by:

1)  Setting the role as described above, and then trying to be damn sure
that a subsequent query doing a role change never ever slips through the
app. :)

2) After authentication, close the DB connection and reconnect as the site
role.  This seems tighter from a security standpoint, but at the cost of
doubling my # of DB connections, and then also needing the app to manage
passwords for each site.

So before doing either of those, I wanted to confirm if there is/isn't a
way to configure and change roles in a way that reduces privileges, and
cannot be undone.  Either with changing roles, or as some kind of
"RECONNECT TO db_name AS user" command that wold allow a DB owner to
connect without supplying credentials.

Those might both be wishful thinking.  If so, I'd also welcome any
thoughts, suggestions or feedback about 1) and 2), or better approaches
entirely.  Thanks!

Ken



Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)

2017-06-05 Thread Ken Tanzer
>
> > So I can switch to Custom format for future backups.  But regarding the
> > existing backups I have in Tar format, is there any way to successfully
> > restore them?
>
> FWIW, the business with making and editing a list file should work just
> fine with a tar-format dump, not only with a custom-format dump.  The
> metadata is all there in either case.
>

I had tried that originally, but got an error:

bash-4.1$ pg_restore -L spc_restore_list.tmp -d spc_test_1
agency_backup.spc.2017.06.05_10.30.01.tar

pg_restore: [tar archiver] restoring data out of order is not supported in
this archive format: "10608.dat" is required, but comes before "10760.dat"
in the archive file.

The pg_dump doc page kinda suggests but doesn't quite say that you can't
re-order tar files; between that and the error message I gave up on that
possibility.  Are you suggesting it should work?

https://www.postgresql.org/docs/9.3/static/app-pgdump.html

The alternative archive file formats must be used with pg_restore to rebuild
the database. They allow pg_restore to be selective about what is restored,
or even to reorder the items prior to being restored. The archive file
formats are designed to be portable across architectures.

When used with one of the archive file formats and combined with pg_restore,
pg_dump provides a flexible archival and transfer mechanism. pg_dump can
be used to backup an entire database, then pg_restore can be used to
examine the archive and/or select which parts of the database are to be
restored. *The most flexible output file formats are the "custom" format
(-Fc) and the "directory" format (-Fd). They allow for selection and
reordering* of all archived items, support parallel restoration, and are
compressed by default. The "directory" format is the only format that
supports parallel dumps.

Cheers,
Ken


Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)

2017-06-05 Thread Ken Tanzer
>
> I do get the "make \d show relevant information" argument and that is one
> that seems easier to solve...
>

Maybe I'm missing something, but I'm not sure how you'd solve this or
change what \d shows for a table.  Right now I get to see this in my \d:

"authorized_approvers_only" CHECK (approved_by IS NULL OR
has_perm(approved_by, 'APPROVE_PAYMENT'::character varying,
'W'::character
varying))

But when I move that to a trigger, I'll only see the trigger name.  Any
while this procedure would be really short, others not so much, so you
wouldn't really want to automatically display it inline.
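
For reference, the trigger version of that same check would look roughly like
this (a sketch; the function and trigger names are invented, has_perm() and
tbl_payment are the existing objects).  Since pg_dump restores triggers after
the table data, the ordering problem from the other thread also goes away:

CREATE FUNCTION check_authorized_approver() RETURNS trigger AS $$
BEGIN
    IF NEW.approved_by IS NOT NULL
       AND NOT has_perm(NEW.approved_by, 'APPROVE_PAYMENT'::varchar, 'W'::varchar) THEN
        RAISE EXCEPTION 'approved_by % lacks APPROVE_PAYMENT permission', NEW.approved_by;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER authorized_approvers_only
    BEFORE INSERT OR UPDATE ON tbl_payment
    FOR EACH ROW EXECUTE PROCEDURE check_authorized_approver();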

Ken



Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)

2017-06-05 Thread Ken Tanzer
>
> I can't really make this an FK.  I can (and probably will) put this into a
>> trigger.  Although it seems like an extra layer of wrapping just to call a
>> function.  I'm curious if there's any conceptual reason why constraints
>> couldn't (as an option) be restored after all the data is loaded, and
>> whether there would be any negative consequences of that?  I could see if
>> your data still didn't pass the CHECKs, it's already loaded.  But the
>> constraint could then be marked not valid?
>>
>
> Not sure why, just know that if I stay within the guidelines it works; if I
> do not, it does not work :)
>
>
That's fair enough, leaving aside the curiosity part.  Usually though the
things you can't do just aren't allowed.  It's easier to overlook something
that you shouldn't (but can) do!



> See that, but in your scenario you wanted to create a 'scratch' database
> so you are back to a user with privileges.


>
Yeah, I was thinking pg_dump could just conjure it up in the ether (and
then discard it), but I can see that doesn't really work.


> Basically, if you have no way to test your backup/restore procedure
> beforehand you are flying blind.
>
>
In this case, we had tested the restore part.  But then we changed the DB
in a way that made it stop working.  Good reminder to retest that
periodically!

Ken






Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)

2017-06-05 Thread Ken Tanzer
>
> Aside from being a bit more verbose there is nothing useful that writing
> this as "CHECK function()" provides that you don't also get by writing
> "CREATE TRIGGER".
>

I agree you get the same result.  It may be a minor issue, but for me it is
convenient to see the logic spelled out when using \d on the table.

Cheers,
Ken



Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)

2017-06-05 Thread Ken Tanzer
Thanks Adrian and David.  That all makes sense, and I gather the answer
regarding the existing dumps is "no, they can't be restored."  So be it.
Here are a couple of follow-on comments:

Ideally figure out how to write an actual FK constraint - otherwise use
> triggers.


I can't really make this an FK.  I can (and probably will) put this into a
trigger.  Although it seems like an extra layer of wrapping just to call a
function.  I'm curious if there's any conceptual reason why constraints
couldn't (as an option) be restored after all the data is loaded, and
whether there would be any negative consequences of that?  I could see if
your data still didn't pass the CHECKs, it's already loaded.  But the
constraint could then be marked not valid?
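
Doing that by hand is possible today, and is roughly what I'd end up
scripting if I went this route (a sketch, reusing the constraint from the
other thread):

ALTER TABLE tbl_payment DROP CONSTRAINT authorized_approvers_only;
-- ... load the data ...
ALTER TABLE tbl_payment ADD CONSTRAINT authorized_approvers_only
    CHECK (approved_by IS NULL
           OR has_perm(approved_by, 'APPROVE_PAYMENT'::varchar, 'W'::varchar))
    NOT VALID;
-- ...and once the tables has_perm() depends on are loaded:
ALTER TABLE tbl_payment VALIDATE CONSTRAINT authorized_approvers_only;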


-1; pg_dump should not be trying to restore things.  The core developers
> shouldn't really concern themselves with the various and sundry ways people
> might want to setup such a process.  You have tools for dump, and tools for
> restore, and you can combine them in whatever fashion you deem useful.  Or
> otherwise acquire someone else's ideas.


I get that as a general principle.  OTOH, being able to restore your
backups isn't just a random or inconsequential feature.  I have access to
the superuser and can create DBs, but users in more locked down scenarios
might not be able to do so.


>From the docs:
> https://www.postgresql.org/docs/9.6/static/sql-createtable.html
> "Currently, CHECK expressions cannot contain subqueries nor refer to
> variables other than columns of the current row. The system column tableoid
> may be referenced, but not any other system column.


I wonder if that should say "should not," or be followed by something like
this:

n.b., In CHECK expressions, Postgres will not prevent you from calling
functions that reference other rows or tables.  However, doing so may have
undesirable consequences, including the possible inability to restore from
output created by pg_dump.

(Are there other possible pitfalls too, or is that the only one?)

Cheers,
Ken




[GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)

2017-06-05 Thread Ken Tanzer
On 9.3.17, I tried to restore a tar from pg_dump.  It failed thusly:

bash-4.1$ pg_restore -d spc_test_1 agency_backup.spc.2017.06.05_10.30.01.tar

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 10608; 0 107743 TABLE DATA
tbl_payment spc
pg_restore: [archiver (db)] COPY failed for table "tbl_payment": ERROR:
 new row for relation "tbl_payment" violates check constraint
"authorized_approvers_only"
DETAIL:  Failing row contains (286541, 3685, 2015-09-14, ADJUST, null,
null, 137798, 93.00, HONEY, 4841, 2, SHONCRE, September adjustment,
2015-10-01, null, null, null, null, null, f, f, t, f, f, f, f, null, null,
null, null, 6, 2015-09-14 16:43:37, 25, 2016-02-08 16:34:20, f, null, null,
null, Adjusting approved_at to changed_at for first few approvals
, 6, 2015-09-14 16:43:37, 2015-09-17).
CONTEXT:  COPY tbl_payment, line 179785: "28654136852015-09-14
 ADJUST  \N  \N  137798  93.00   HONEY   48412
SHONCRE September adjustment2015-10-0..."
WARNING: errors ignored on restore: 1

The rest of the DB is fine, but tbl_payment has 0 rows.  I believe this is
because tbl_payment has a constraint that calls a function has_perm() that
relies on data in a couple of other tables, and that tbl_payment is being
restored before those tables.  I was able to create a new dump in Custom
format, reorder the List file, and restore that successfully.

So I can switch to Custom format for future backups.  But regarding the
existing backups I have in Tar format, is there any way to successfully
restore them?  Specifically:

   - Any way to ignore or delay constraint checking?  Something like
   disable-triggers?


   - Any way to tell pg_restore to skip past the failing row, and restore
   the rest of what was in tbl_payment?


   - Some other way to go about this?

I also wonder if you folks might consider adding something like a
--test_restore option to pg_dump that would attempt to create a new
(scratch) DB from the output it creates, and report any errors?  I know the
pieces are all there for us users to do that ourselves, but it would be
handy for automated backups and might help us to avoid creating backups
that won't restore successfully.  In my case, I think the problem started
from changes we made about 9 months ago, and happily I discovered it during
development/testing and not after a DB crash, which is why I'm also happily
not gouging my eyeballs out right now. :)

Cheers, and thanks in advance!

Ken




Re: [GENERAL] Installing module for 9.6, not 9.2, on Centos?

2017-05-24 Thread Ken Tanzer
>
> The biggest problem is that I was out in the Sun too long today and was
> not paying attention to what you posted.  This part:
>
> PATH=/usr/local/pgsql96/bin:$PATH
>
> is from how I installed various versions of Postgres from source on my
> machine. My guess is it does not match your setup. You need to find where
> pg_config is for your 9.6 install and use that path.
>
>
>>
Egads, my bad for not catching that path.  Works fine with adding the
(actually-existing) /usr/pgsql-9.6/bin to the path.  Thanks so much for
your help!

Can I also ask y'all a more general question about this, specifically
related to how Postgres is packaged for RHEL/Centos?  I've got both 9.6 and
9.2 installed.  In this case though, it seems that the 9.2 version is
privileged/selected by default.  But psql defaults to the 9.6 version.  Are
there other similar things that will default to either 9.2 or 9.6?  And if
so, what controls that behavior, is it easily-changeable, and/or can you go
back and forth?

I've never tried running two versions at once before.  Maybe this is an
isolated incident, but I'm just trying to get my mind around the concept,
and know what kind of pitfalls if any to expect or beware of.  Thanks!

Ken




Re: [GENERAL] Installing module for 9.6, not 9.2, on Centos?

2017-05-23 Thread Ken Tanzer
On Tue, May 23, 2017 at 5:34 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 05/23/2017 05:27 PM, Ken Tanzer wrote:
>
>>
>> But the install still goes to 9.2:
>>
>> PATH=/usr/local/pgsql96/bin/:$PATH make USE_PGXS=1 install
>>
>>
>> Did you do:
>>
>> PATH=/usr/local/pgsql96/bin/:$PATH make USE_PGXS=1 clean all
>>
>>
>> I did.
>>
>>
>> Or is there some environment variable set that could be interfering?
>>
>>
>> I didn't see anything that would apply.  I've attached a dump of
>> environment variables and also a re-run of the compile/install process.
>>
>
> Try without the trailing:
>
> PATH=/usr/local/pgsql96/bin:$PATH make USE_PGXS=1 clean all
>
>
>>
Sorry, not sure I follow what is the trailing part here?

Does the compile stage specify where a module is to be installed?  Or is it
determined at install time?

Apart from the extra 96/bin directories I'm accumulating, I don't see
anything suspicious in my path.  (And my /usr/local/(s)bin directories are
empty.)

[root@hosting table_log-0.4.4]# PATH=/usr/local/pgsql96/bin:$PATH ; echo
"Path is: $PATH" ;  make USE_PGXS=1 clean all

*Path is:
/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin*
rm -f table_log.so table_log.o
...

[root@hosting table_log-0.4.4]# PATH=/usr/local/pgsql96/bin:$PATH ; echo
"Path is: $PATH" ;  make USE_PGXS=1 install
*Path is:
/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/pgsql96/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin*

/bin/mkdir -p '/usr/pgsql-9.2/share/contrib'
/bin/mkdir -p '/usr/pgsql-9.2/lib'
/bin/mkdir -p '/usr/pgsql-9.2/doc/contrib'
/usr/bin/install -c -m 644  table_log.sql '/usr/pgsql-9.2/share/contrib/'
/usr/bin/install -c -m 755  table_log.so '/usr/pgsql-9.2/lib/'
/usr/bin/install -c -m 644 .//README.table_log '/usr/pgsql-9.2/doc/contrib/'


Thanks,
Ken






Re: [GENERAL] Installing module for 9.6, not 9.2, on Centos?

2017-05-23 Thread Ken Tanzer
>
>
>> But the install still goes to 9.2:
>>
>> PATH=/usr/local/pgsql96/bin/:$PATH make USE_PGXS=1 install
>>
>
> Did you do:
>
> PATH=/usr/local/pgsql96/bin/:$PATH make USE_PGXS=1 clean all
>
>
I did.


> Or is there some environment variable set that could be interfering?
>
>
I didn't see anything that would apply.  I've attached a dump of
environment variables and also a re-run of the compile/install process.

Cheers,
Ken

>
>

[root@hosting table_log-0.4.4]# set
BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:expand_aliases:extquote:force_fignore:hostcomplete:interactive_comments:login_shell:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="4" [1]="1" [2]="2" [3]="2" [4]="release" 
[5]="x86_64-redhat-linux-gnu")
BASH_VERSION='4.1.2(2)-release'
COLORS=/etc/DIR_COLORS
COLUMNS=108
DIRSTACK=()
EUID=0
GROUPS=()
G_BROKEN_FILENAMES=1
HISTCONTROL=ignoredups
HISTFILE=/root/.bash_history
HISTFILESIZE=1000
HISTSIZE=1000
HOME=/root
HOSTNAME=hosting.agency-software.org
HOSTTYPE=x86_64
ID=0
IFS=$' \t\n'
LANG=en_US.UTF-8
LESSOPEN='||/usr/bin/lesspipe.sh %s'
LINES=37
LOGNAME=root
LS_COLORS='rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.tbz=01;31:*.tbz2=01;31:*.bz=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:'
MACHTYPE=x86_64-redhat-linux-gnu
MAIL=/var/spool/mail/root
MAILCHECK=60
OLDPWD=/root
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
PIPESTATUS=([0]="0")
PPID=14295
PROMPT_COMMAND='printf "\033]0;%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" 
"${PWD/#$HOME/~}"'
PS1='[\u@\h \W]\$ '
PS2='> '
PS4='+ '
PWD=/root/table_log-0.4.4
SHELL=/bin/bash
SHELLOPTS=braceexpand:emacs:hashall:histexpand:history:interactive-comments:monitor
SHLVL=1
SSH_CLIENT='174.143.174.50 52745 22'
SSH_CONNECTION='174.143.174.50 52745 23.253.224.168 22'
SSH_TTY=/dev/pts/3
TERM=screen
UID=0
USER=root
_=install
colors=/etc/DIR_COLORS


[root@hosting table_log-0.4.4]# PATH=/usr/local/pgsql96/bin/:$PATH make USE_PGXS=1 clean all
rm -f table_log.so table_log.o
rm -f table_log.sql
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement
  -Wendif-labels -Wmissing-format-attribute -Wformat-security
  -fno-strict-aliasing -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
  -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic
  -DLINUX_OOM_SCORE_ADJ=0 -fpic -I. -I./ -I/usr/pgsql-9.2/include/server
  -I/usr/pgsql-9.2/include/internal -D_GNU_SOURCE -I/usr/include/libxml2
  -I/usr/include -c -o table_log.o table_log.c
sed 's,MODULE_PATHNAME,$libdir/table_log,g' table_log.sql.in >table_log.sql
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement
  -Wendif-labels -Wmissing-format-attribute -Wformat-security
  -fno-strict-aliasing -fwrapv -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
  -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic
  -DLINUX_OOM_SCORE_ADJ=0 -fpic -L/usr/pgsql-9.2/lib -Wl,--as-needed
  -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/usr/pgsql-9.2/lib',--enable-new-dtags
  -shared -o table_log.so table_log.o

Re: [GENERAL] Installing module for 9.6, not 9.2, on Centos?

2017-05-23 Thread Ken Tanzer
Thanks Adrian, though still no luck for me.  The compiling was working OK
already, the install still goes to 9.2.


> Where did you get the table_log extension
>
>>
I no longer remember (it's been a few years), but my tgz file was the same
as what's on pgfoundry



> I usually do something like (using existing Makefile):


> PATH=/usr/local/pgsql96/bin/:$PATH make USE_PGXS=1 clean all
>
>    ...
>
> Though in both cases the extension did not actually compile due to errors.


That worked fine for me.  (Somewhere along the way I added these two lines
to table_log.c:

+#include 
+#include 
)

But the install still goes to 9.2:

PATH=/usr/local/pgsql96/bin/:$PATH make USE_PGXS=1 install
/bin/mkdir -p '/usr/pgsql-9.2/share/contrib'
/bin/mkdir -p '/usr/pgsql-9.2/lib'
/bin/mkdir -p '/usr/pgsql-9.2/doc/contrib'
/usr/bin/install -c -m 644  table_log.sql '/usr/pgsql-9.2/share/contrib/'
/usr/bin/install -c -m 755  table_log.so '/usr/pgsql-9.2/lib/'
/usr/bin/install -c -m 644 .//README.table_log '/usr/pgsql-9.2/doc/contrib/'

Any further help appreciated!  Thanks.

Ken



[GENERAL] Installing module for 9.6, not 9.2, on Centos?

2017-05-22 Thread Ken Tanzer
Hi.  I recently installed 9.6 on my Centos 6.9 server, alongside a
still-running 9.2.

I need to install a module for 9.6 (table_log) that is installed on 9.2.
I took the table_log Makefile and changed it to point to
/usr/pgsql-9.6/bin/pg_config
instead of 9.2.  It is still installing into the 9.2 directory though.

What's the cleanest or most proper way to target 9.6 in this situation?
Transcript below, and TIA.

Ken

[root@hosting table_log-0.4.4]# more Makefile

MODULES = table_log
DATA_built = table_log.sql
DOCS = README.table_log

ifdef USE_PGXS
  PGXS := $(shell /usr/pgsql-9.6/bin/pg_config --pgxs)
  include $(PGXS)
else
  subdir = contrib/table_log
  top_builddir = ../..
  include $(top_builddir)/src/Makefile.global
  include $(top_srcdir)/contrib/contrib-global.mk
endif

[root@hosting table_log-0.4.4]# make USE_PGXS=1 clean
rm -f table_log.so table_log.o
rm -f table_log.sql

[root@hosting table_log-0.4.4]# make USE_PGXS=1

sed 's,MODULE_PATHNAME,$libdir/table_log,g' table_log.sql.in >table_log.sql

gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic -DLINUX_OOM_SCORE_ADJ=0 -fpic
-I. -I./ -I/usr/pgsql-9.2/include/server -I/usr/pgsql-9.2/include/internal
-D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include  -c -o table_log.o
table_log.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
-Wformat-security -fno-strict-aliasing -fwrapv -O2 -g -pipe -Wall
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic -DLINUX_OOM_SCORE_ADJ=0 -fpic
-L/usr/pgsql-9.2/lib -Wl,--as-needed  -L/usr/lib64 -Wl,--as-needed
-Wl,-rpath,'/usr/pgsql-9.2/lib',--enable-new-dtags  -shared -o table_log.so
table_log.o

[root@hosting table_log-0.4.4]# make USE_PGXS=1 install

/bin/mkdir -p '/usr/pgsql-9.2/share/contrib'
/bin/mkdir -p '/usr/pgsql-9.2/lib'
/bin/mkdir -p '/usr/pgsql-9.2/doc/contrib'
/usr/bin/install -c -m 644  table_log.sql '/usr/pgsql-9.2/share/contrib/'
/usr/bin/install -c -m 755  table_log.so '/usr/pgsql-9.2/lib/'
/usr/bin/install -c -m 644 .//README.table_log '/usr/pgsql-9.2/doc/contrib/'


Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Ken Tanzer
On Mon, May 15, 2017 at 4:45 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 05/15/2017 01:40 PM, Ken Tanzer wrote:
>
>
>
>> But let me ask, is there a big warning about this somewhere I missed?
>> Can the 9.2 updates do something to fix this, or at least create a warning
>> or an RPMNEW file?  I'm happy this is a cloud server and that I worked on a
>> copy.  However, in different circumstances I might well have reasoned
>> "well, installing the 9.6 packages really should be safe for 9.2, since
>> they're clearly meant to exist side-by-side."  And then have a setup that
>> no longer worked as it once did.  With an RHEL clone and PGDG packages
>> straight from the horse's mouth, I'd have higher expectations than that.
>> Only because of the great work y'all do! ;)
>>
>
> Might want to file an issue here:
>
> https://redmine.postgresql.org/projects/pgrpms/
>
> You will need a Postgres community account, which you can sign up for on
> the same page.
>
>
>>
Done, and thanks for pointing me to the tracker.

https://redmine.postgresql.org/issues/2409

Cheers,
Ken


Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Ken Tanzer
>
>
>> Workarounds:
>>
>> * You can connect to 9.2 using /usr/pgsql-9.2/bin/psql command. It knows
>> the
>> old socket directory.
>>
>
> That was where I was going until I saw this in the OP:
>
> bash-4.1$ /usr/pgsql-9.2/bin/psql -p 5432
> psql: could not connect to server: Connection refused
> Is the server running locally and accepting
> connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.
> 5432"?
>
>
>
>> * Pass -h /tmp to 9.6's psql, so that it connects to 9.2 instance.
>>
>> -HTH
>>
>> Regards,
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Thanks everyone for the replies.  Adrian is right--I did try this with the
9.2 binaries, with the same problem.  But to address Tom's question (and if
I'm using ldd properly), the 9.2 psql binary is using the 9.6 libpq.

[root@centos-new postgresql]# ldd /usr/bin/psql | grep libpq
libpq.so.5 => /usr/pgsql-9.6/lib/libpq.so.5 (0x7f2e6c99a000)
[root@centos-new postgresql]# ldd /usr/pgsql-9.2/bin/psql | grep libpq
libpq.so.5 => /usr/pgsql-9.6/lib/libpq.so.5 (0x7f52f9c67000)

Devrim--the -h /tmp option works great.

I still wanted this to just "work" though, for scripts and such.  I
specified the socket directory in the 9.2 postgresql.conf, and it seems to
be working "normally" now.

But let me ask, is there a big warning about this somewhere I missed?  Can
the 9.2 updates do something to fix this, or at least create a warning or
an RPMNEW file?  I'm happy this is a cloud server and that I worked on a
copy.  However, in different circumstances I might well have reasoned
"well, installing the 9.6 packages really should be safe for 9.2, since
they're clearly meant to exist side-by-side."  And then have a setup that
no longer worked as it once did.  With an RHEL clone and PGDG packages
straight from the horse's mouth, I'd have higher expectations than that.
Only because of the great work y'all do! ;)

Cheers,
Ken





[GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Ken Tanzer
Hi.  On a Centos 6.9 server (in the cloud with Rackspace), I'm wanting to
install PGDG 9.6 alongside the already-running 9.2.  After installing the
9.6 packages (and even before doing an initdb), I am no
longer able to make a local connection to the 9.2 server.  Instead I get
the message:

psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

That socket file does not exist on the server. (And in fact, the
/var/run/postgresql directory didn't exist before installing 9.6).  When I
configure 9.6 to use port 5433 and run it, it does create that socket for
5433.  I tried creating such a socket manually for 5432, but that didn't
seem to change anything.

Any help in getting this working and/or pointing out what I'm missing would
be great.  I'm also confused conceptually about what is happening here.
What is it that the installation (but not execution) of 9.6 does that's
blocking the local 9.2 access?  I'm guessing it's gotta be something in the
RPM install scripts.

Thanks!

Ken

bash-4.1$ whoami
postgres
bash-4.1$ psql -p 5432
psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
bash-4.1$ psql -p 5432 -h localhost
Password:
bash-4.1$ psql -p 5433
psql (9.6.3)
Type "help" for help.

postgres=# \q
bash-4.1$ /usr/pgsql-9.2/bin/psql -p 5432
psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
bash-4.1$ /usr/pgsql-9.6/bin/psql -p 5432
psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?


[root@centos-new postgresql]# cat /etc/issue
CentOS release 6.9 (Final)
Kernel \r on an \m

[root@centos-new postgresql]# yum list installed "postgresql*"
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.mirror.lstn.net
 * extras: centos.mirror.lstn.net
 * updates: centos.mirror.lstn.net
Installed Packages
postgresql92.x86_64           9.2.21-1PGDG.rhel6   @pgdg92
postgresql92-contrib.x86_64   9.2.21-1PGDG.rhel6   @pgdg92
postgresql92-devel.x86_64     9.2.21-1PGDG.rhel6   @pgdg92
postgresql92-docs.x86_64      9.2.21-1PGDG.rhel6   @pgdg92
postgresql92-libs.x86_64      9.2.21-1PGDG.rhel6   @pgdg92
postgresql92-plpython.x86_64  9.2.21-1PGDG.rhel6   @pgdg92
postgresql92-pltcl.x86_64     9.2.21-1PGDG.rhel6   @pgdg92
postgresql92-server.x86_64    9.2.21-1PGDG.rhel6   @pgdg92
postgresql96.x86_64           9.6.3-1PGDG.rhel6    @pgdg96
postgresql96-contrib.x86_64   9.6.3-1PGDG.rhel6    @pgdg96
postgresql96-devel.x86_64     9.6.3-1PGDG.rhel6    @pgdg96
postgresql96-docs.x86_64      9.6.3-1PGDG.rhel6    @pgdg96
postgresql96-libs.x86_64      9.6.3-1PGDG.rhel6    @pgdg96
postgresql96-pltcl.x86_64     9.6.3-1PGDG.rhel6    @pgdg96
postgresql96-server.x86_64    9.6.3-1PGDG.rhel6    @pgdg96
[root@centos-new postgresql]#





Re: [GENERAL] referential integrity between elements of an array and another table?

2017-04-19 Thread Ken Tanzer
For me, foreign keys for arrays would be incredibly helpful.  For simple
checkbox-type options, it's _much_ simpler to just put them in an array,
and I work with a UI that can handle the arrays well.  I do hope this makes
it into Postgresql someday!  In the meantime, I've mostly sacrificed the
referential integrity in favor of the convenience.  Just my two cents.
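
(To give a sense of the trade-off, keeping the integrity today means
maintaining a trigger along these lines; a sketch only, with made-up table
and column names:

CREATE FUNCTION check_option_codes() RETURNS trigger AS $$
BEGIN
    IF EXISTS (
        SELECT 1 FROM unnest(NEW.option_codes) AS x(code)
        WHERE NOT EXISTS (SELECT 1 FROM l_option WHERE option_code = x.code)
    ) THEN
        RAISE EXCEPTION 'option_codes contains a value not present in l_option';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER client_option_codes_fk
    BEFORE INSERT OR UPDATE ON client
    FOR EACH ROW EXECUTE PROCEDURE check_option_codes();
)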

Cheers,
Ken




[GENERAL] Postgres, apps, special characters and UTF-8 encoding

2017-03-07 Thread Ken Tanzer
Hi.  I've got a recurring problem with character encoding for a
Postgres-based web PHP app, and am hoping someone can clue me in or at
least point me in the right direction.  I'll confess upfront my
understanding of encoding issues is extremely limited.  Here goes.

The app uses a Postgres database, UTF-8 encoded.  Through their browsers,
users can add and edit records often including text.  Most of the time this
works fine.  Though sometimes this will fail with Postgres complaining, for
example, "Could query with ... , The error text was: ERROR: invalid byte
sequence for encoding "UTF8": 0xe9 0x20 0x67"

So this generally happens when people copy and paste things out of their
word documents and such.

As I understand it, those are likely encoded in something non-UTF-8, like
WIN-1251 or something.  And that one way or another, the encoding needs to
be translated before it can be placed into the database.  I'm not clear how
this is supposed to happen though.  Automatically by the browser?  Done in
the app?  Some other way?  And if in the app, how is one supposed to know
what the incoming encoding is?
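
(One piece that does happen on the database side: if the client declares the
encoding it is actually sending, the server converts to UTF-8 on the way in.
A sketch, assuming the pasted text really is Windows-1252, which is only a
guess:

SET client_encoding = 'WIN1252';
-- INSERTs/UPDATEs sent on this connection are now converted from WIN1252
-- to the database's UTF8 as they arrive
RESET client_encoding;
)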

Thanks in advance for any help or pointers.

Ken




Re: [GENERAL] isnull() function in pgAdmin3

2016-10-03 Thread Ken Tanzer
On Mon, Oct 3, 2016 at 6:39 AM, dudedoe01  wrote:

> What is the most feasible way to emulate the below MySQL function into
> postgreSQL. Since the isnull() function is no longer supported in 9.6
> version. I have tried every trick in the hat to get the desired results.
> Still 'RPG INV' doesn't show only the other two then options show up.
>
> (case
> when
> ((`s`.`Funding_Date` = '')
> and (isnull(`s`.`Actual_Close_Date`)
> or (`s`.`Actual_Close_Date` = '')))
> then
> 'RPG_INV'
> when
> ((isnull(`s`.`Funding_Date`)
> or (`s`.`Funding_Date` <> ''))
> and ((`s`.`Actual_Close_Date` = '')
> or isnull(`s`.`Actual_Close_Date`)))
> then
> 'Builder_Inventory'
> else 'Owner_Inventory'
> end) AS `Lot_Status`
>
>
>
>
Hi.  Over the years I have gotten a lot of patient assistance from this
mailing list, for which I am abundantly grateful.  In light of that, I
offer the following "translations," which are specifically in response to
your question about emulating the SQL above:

and (isnull(`s`.`Actual_Close_Date`)
--> and (`s`.`Actual_Close_Date` IS NULL)

((isnull(`s`.`Funding_Date`)
--> (`s`.`Funding_Date` IS NULL)

or isnull(`s`.`Actual_Close_Date`)))
-->or (`s`.`Actual_Close_Date` IS NULL)))

HTH, and Cheers.

Ken






> --
> View this message in context: http://postgresql.nabble.com/
> isnull-function-in-pgAdmin3-tp5923122p5924161.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-08 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 4:31 PM, John R Pierce <pie...@hogranch.com> wrote:

> On 9/7/2016 1:36 PM, Ken Tanzer wrote:
>
>> No transformation is needed, except for padding the fields out to their
>> maximum lengths.
>>
>
> without accessing metadata, how would you know what those maximum lengths
> are??
>

I agree you'd need metadata.  My OP mentioned looking this up in the
information schema


>
> and how would the calling program even know what the fields are if its not
> aware of the field lengths?
>
> Do you mean how would the function know?  I didn't see a better way than
by explicitly passing the view name.



> what if a field is type 'text' ? or an array type ?
>
>
They won't be.  This has to work for a very specific set of views I've
created, not for any view in the universe!

Cheers,
Ken


>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-08 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 4:39 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 09/07/2016 04:25 PM, Jim Nasby wrote:
>
>> On 9/7/16 6:07 PM, Ken Tanzer wrote:
>>
>>> ERROR:  PL/Python functions cannot accept type record
>>>
>>
>> Ugh, yeah... that won't work. plperl might be able to do it, but I
>> suspect you're going to be stuck pulling the size info out of
>> info_schema or the catalog.
>>
>> Actually, there is a way you could hack this via plpython; pass the row
>> in as text as well as the relation (regclass is good for that). You
>> could then do plpy.execute('SELECT (%::%).*'.format(row_text,
>> relation)); that should give you a dict just like Adrian's example did.
>>
>> It would be nice if there was a function that accepted something with a
>> row descriptor and spit out the details of the descriptor.
>> http://pgxn.org/dist/colnames/doc/colnames.html comes close; if you know
>> much about C at all it shouldn't be hard to add a function to that
>> extension that returned the full details of the row. That and converting
>> the row to JSON would make it relatively easy to accomplish what you
>> want in a plpgsql (or maybe even plsql) function.
>>
>
> Getting closer:
>
> CREATE OR REPLACE FUNCTION public.str_concat(r json)
>  RETURNS text
>  LANGUAGE plpythonu
> AS $function$
> import json
> j = json.loads(r)
> str_out = ""
> plpy.notice(type(j))
> for col in j:
> str_out += j[col]
> return str_out
> $function$
>
> production=# select str_concat(row_to_json(t)) from str_test as t;
> NOTICE:  
> CONTEXT:  PL/Python function "str_concat"
>   str_concat
> ---
>  09/07/161234  1
>
> That's great, and was more than enough to get me started.  This is what I
ended up with, which I think does the trick.  Thanks so much!

CREATE OR REPLACE FUNCTION ebh_transaction_assemble( view_name varchar,
j_rec json ) RETURNS varchar LANGUAGE plpythonu AS $$
import json
j = json.loads(j_rec)
str_out = ""
query="SELECT
column_name,character_maximum_length,ordinal_position,data_type FROM
information_schema.columns WHERE table_name ='" + view_name + "' ORDER BY
ordinal_position"
cols =  plpy.execute(query)
for col in cols:
  dtype = col["data_type"]
  dlength = col["character_maximum_length"]
  dname = col["column_name"]
  dvalue = str(j[dname])
  if ( dvalue == "None" ) : dvalue = ""
  if ( dtype == "character" ) : dvalue = dvalue.ljust(dlength)
  str_out += dvalue
return str_out
$$
;



> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 3:38 PM, Jim Nasby <jim.na...@bluetreble.com> wrote:

> On 9/7/16 5:32 PM, Ken Tanzer wrote:
>
>> SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain;
>>
>> I know TCL and probably Python and others can work with a record as a
>> trigger function.  But TCL doesn't seem to accept a record as an
>> argument.  Can any of the other languages that could also accomplish
>> this function? Or some other way? Thanks.
>>
>
> A PL that can accept composite types (such as plpythonu) should be able to
> do this.


OK, python is new to me, so I'm trying to dig into it.  I installed it, and
tried declaring a function:

CREATE OR REPLACE FUNCTION ebh_transaction_assemble( varchar, record )
RETURNS varchar AS $$...

but got the same error:

ERROR:  PL/Python functions cannot accept type record

Is there some other way I should be specifying this?  Or do I need a more
recent version of Postgres (I'm on 9.2) to do this?  Thanks.



>
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
> 855-TREBLE2 (855-873-2532)   mobile: 512-569-9461
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 3:46 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 09/07/2016 03:32 PM, Ken Tanzer wrote:
>
>>
>>
>> On Wed, Sep 7, 2016 at 3:18 PM, Adrian Klaver <adrian.kla...@aklaver.com
>> <mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 09/07/2016 01:36 PM, Ken Tanzer wrote:
>>
>>
>>
>> On Wed, Sep 7, 2016 at 1:22 PM, Adrian Klaver
>> <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>> <mailto:adrian.kla...@aklaver.com
>>
>> <mailto:adrian.kla...@aklaver.com>>> wrote:
>>
>> On 09/07/2016 01:15 PM, Ken Tanzer wrote:
>>
>> Hi.  Using version 9.2.  I'm trying to create a function
>> that
>> will take
>> a record from any view and assemble it into a string,
>> for export to
>> another system.  For example, this view:
>>
>> \d ebh_gain
>>
>> View "public.ebh_gain"
>>   Column   | Type  | Modifiers
>> ---+---+---
>>  reporting_unit_id | character(3)  |
>>  case_id   | character(10) |
>>  event_date| character(8)  |
>>  ids_score | character(1)  |
>>  eds_score | character(1)  |
>>  sds_score | character(1)  |
>>  kc_auth_number| integer   |
>>  king_county_id| integer   |
>>
>> would get converted into a string with all the fields
>> concatenated
>> together, and space-padded to their full lengths.
>>
>>
>> I think an example is needed. I was thinking you wanted the
>> field
>> values transformed, but the below seems to indicate
>> something different.
>>
>> No transformation is needed, except for padding the fields out
>> to their
>> maximum lengths.  So for example with these values
>>
>> ('AA','1243','20160801','2','1','1',37,24)
>>
>> I need a string created that looks like this:
>>
>> 'AA 1243  201608012113724'
>>
>> I have a whole bunch of views that I need to do this for, and am
>> hoping
>> to avoid coding something specific for each one.
>>
>>
>> I can do it relatively easy in plpythonu:
>>
>> production=# \d str_test
>>Table "history.str_test"
>>   Column   | Type  | Modifiers
>> ---+---+---
>>  reporting_unit_id | character(3)  |
>>  case_id   | character(10) |
>>  event_date| character(8)  |
>>
>> production=# insert into str_test values ('1', '1234', '09/07/16');
>> INSERT 0 1
>>
>>
>> DO
>> $$
>> rs =  plpy.execute("SELECT * FROM str_test", 1)
>> cols = rs.colnames()
>> plpy.notice(rs.colnames())
>> str_out = ""
>> for col in cols:
>> str_out += str(rs[0][col])
>> plpy.notice(str_out)
>> $$ LANGUAGE plpythonu;
>>
>> NOTICE:  ['reporting_unit_id', 'case_id', 'event_date']
>> CONTEXT:  PL/Python anonymous code block
>> NOTICE:  1  1234  09/07/16
>> CONTEXT:  PL/Python anonymous code block
>> DO
>>
>> Yeah, that and a trip to the information schema to pad out the fields
>> would get me the string I need.  But I was hoping to be able to do this
>>
>
> Well the above has the padding already there, though that assumes char(x)
> fields.

Oh, I didn't see how it would pick up the padding, but great!


>
>
> without having the function select the individual record.  Ideally:
>>
>> SELECT my_cat(ebh_gain) FROM ebh_gain;
>>
>
> So do you want one record to be converted at a time or many?
>
>
I would like one record converted per function call.  But of course to be
able to generate multiple ones in a select:

INSERT INTO holding_table my_cat(ebh_gain,'ebh_gain') FROM ebh_gain WHERE
needs_to_be_exported...

Thanks,
Ken




>
>> or, at least somewhat more realistically:
>>
>> SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain;
>>
>> I know TCL and probably Python and others can work with a record as a
>> trigger function.  But TCL doesn't seem to accept a record as an
>> argument.  Can any of the other languages that could also accomplish
>> this function? Or some other way? Thanks.
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 3:18 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 09/07/2016 01:36 PM, Ken Tanzer wrote:
>
>>
>>
>> On Wed, Sep 7, 2016 at 1:22 PM, Adrian Klaver <adrian.kla...@aklaver.com
>> <mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 09/07/2016 01:15 PM, Ken Tanzer wrote:
>>
>> Hi.  Using version 9.2.  I'm trying to create a function that
>> will take
>> a record from any view and assemble it into a string, for export
>> to
>> another system.  For example, this view:
>>
>> \d ebh_gain
>>
>> View "public.ebh_gain"
>>   Column   | Type  | Modifiers
>> ---+---+---
>>  reporting_unit_id | character(3)  |
>>  case_id   | character(10) |
>>  event_date| character(8)  |
>>  ids_score | character(1)  |
>>  eds_score | character(1)  |
>>  sds_score | character(1)  |
>>  kc_auth_number| integer   |
>>  king_county_id| integer   |
>>
>> would get converted into a string with all the fields concatenated
>> together, and space-padded to their full lengths.
>>
>>
>> I think an example is needed. I was thinking you wanted the field
>> values transformed, but the below seems to indicate something
>> different.
>>
>> No transformation is needed, except for padding the fields out to their
>> maximum lengths.  So for example with these values
>>
>> ('AA','1243','20160801','2','1','1',37,24)
>>
>> I need a string created that looks like this:
>>
>> 'AA 1243  201608012113724'
>>
>> I have a whole bunch of views that I need to do this for, and am hoping
>> to avoid coding something specific for each one.
>>
>
> I can do it relatively easy in plpythonu:
>
> production=# \d str_test
>Table "history.str_test"
>   Column   | Type  | Modifiers
> ---+---+---
>  reporting_unit_id | character(3)  |
>  case_id   | character(10) |
>  event_date| character(8)  |
>
> production=# insert into str_test values ('1', '1234', '09/07/16');
> INSERT 0 1
>
>
> DO
> $$
> rs =  plpy.execute("SELECT * FROM str_test", 1)
> cols = rs.colnames()
> plpy.notice(rs.colnames())
> str_out = ""
> for col in cols:
> str_out += str(rs[0][col])
> plpy.notice(str_out)
> $$ LANGUAGE plpythonu;
>
> NOTICE:  ['reporting_unit_id', 'case_id', 'event_date']
> CONTEXT:  PL/Python anonymous code block
> NOTICE:  1  1234  09/07/16
> CONTEXT:  PL/Python anonymous code block
> DO
>
Yeah, that and a trip to the information schema to pad out the fields would
get me the string I need.  But I was hoping to be able to do this without
having the function select the individual record.  Ideally:

SELECT my_cat(ebh_gain) FROM ebh_gain;

or, at least somewhat more realistically:

SELECT my_cat(ebh_gain,'ebh_gain') FROM ebh_gain;

I know TCL and probably Python and others can work with a record as a
trigger function.  But TCL doesn't seem to accept a record as an argument.
Can any of the other languages that could also accomplish this function? Or
some other way? Thanks.

>
>


Re: [GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
On Wed, Sep 7, 2016 at 1:22 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 09/07/2016 01:15 PM, Ken Tanzer wrote:
>
>> Hi.  Using version 9.2.  I'm trying to create a function that will take
>> a record from any view and assemble it into a string, for export to
>> another system.  For example, this view:
>>
>> \d ebh_gain
>>>
>> View "public.ebh_gain"
>>   Column   | Type  | Modifiers
>> ---+---+---
>>  reporting_unit_id | character(3)  |
>>  case_id   | character(10) |
>>  event_date| character(8)  |
>>  ids_score | character(1)  |
>>  eds_score | character(1)  |
>>  sds_score | character(1)  |
>>  kc_auth_number| integer   |
>>  king_county_id| integer   |
>>
>> would get converted into a string with all the fields concatenated
>> together, and space-padded to their full lengths.
>>
>
> I think an example is needed. I was thinking you wanted the field values
> transformed, but the below seems to indicate something different.
>
> No transformation is needed, except for padding the fields out to their
maximum lengths.  So for example with these values

('AA','1243','20160801','2','1','1',37,24)

I need a string created that looks like this:

'AA 1243  201608012113724'

I have a whole bunch of views that I need to do this for, and am hoping to
avoid coding something specific for each one.




>
>> My original idea was to do this in TCL by passing a record and a view
>> name.  The function would then look up the columns in the
>> information_schema, and use that info to assemble and return the
>> string.  But it looks like TCL functions won't accept a record as an
>> argument.
>>
>> Any suggestions or advice most welcome.  Thanks!
>>
>> Ken
>> --
>> AGENCY Software
>> A Free Software data system
>> By and for non-profits
>> /http://agency-software.org//
>> /https://agency-software.org/demo/client/
>> ken.tan...@agency-software.org <mailto:ken.tan...@agency-software.org>
>> (253) 245-3801
>>
>> Subscribe to the mailing list
>> <mailto:agency-general-requ...@lists.sourceforge.net?body=subscribe> to
>> learn more about AGENCY or
>> follow the discussion.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


[GENERAL] How to assemble all fields of (any) view into a string?

2016-09-07 Thread Ken Tanzer
Hi.  Using version 9.2.  I'm trying to create a function that will take a
record from any view and assemble it into a string, for export to another
system.  For example, this view:

> \d ebh_gain
View "public.ebh_gain"
  Column   | Type  | Modifiers
---+---+---
 reporting_unit_id | character(3)  |
 case_id   | character(10) |
 event_date| character(8)  |
 ids_score | character(1)  |
 eds_score | character(1)  |
 sds_score | character(1)  |
 kc_auth_number| integer   |
 king_county_id| integer   |

would get converted into a string with all the fields concatenated
together, and space-padded to their full lengths.

My original idea was to do this in TCL by passing a record and a view
name.  The function would then look up the columns in the
information_schema, and use that info to assemble and return the string.
But it looks like TCL functions won't accept a record as an argument.

Any suggestions or advice most welcome.  Thanks!

Ken
-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-15 Thread Ken Tanzer
On Fri, Jun 10, 2016 at 2:23 PM, Ken Tanzer <ken.tan...@gmail.com> wrote:

> On Fri, Jun 10, 2016 at 1:47 PM, Steve Atkins <st...@blighty.com> wrote:
>
>>

> You could look at one of the existing SQL parsers implemented in PHP, and
>> use those to parse the constraint to a tree from which you could easily
>> pull PHP.
>>
>
>> This one sounds most promising!  I took a quick Google, and it looks like
> there are lots of them, and a heavy Mysql focus.  Anyone have experience or
> suggestions about which of these might be best, and particularly for
> Postgres?
>
>
> Hi.  Just wanted to follow up one more time on this.  Best parsers in PHP,
especially for Postgres?  Anyone?  Thanks!

Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread Ken Tanzer
On Fri, Jun 10, 2016 at 1:54 PM, rob stone  wrote:

>
> Hi Ken,
>
> Would this be static or dynamic?
> For example, if you altered a column to become defined as NOT NULL,
> say, when you build the form used to maintain that table you'd like to
> have a "required" attribute against the input field for that column. So
> if it were dynamic you'd have to access the database information_schema
> each time you wanted to send the HTML down the wire as well as when
> carrying out validation.
>

Hi Rob.  I guess this is currently static, in that whenever you change the
tables you have to run an update process that rereads the tables and stores
the configuration information.


> Also, views are updateable and you'd have to check the tables and

columns making up the view.
>

Yeah.  We're not using any updateable views at the moment, so for now I'd
be happy for this to work with tables.  One step at a time! :)


> I've never used pg_meta_data but it might be helpful.
> Another thought would be to write a program that scanned all the tables
> in your database and wrote out a table in your application's schema
> where the table and column constraints were synthesised in a way
> suitable for use in PHP code.
>
> I'm not sure I'm following this one.  Wouldn't such a program have to
parse the SQL and convert it to PHP anyway?


> Cheers,
> Rob
>

Thanks!

Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread Ken Tanzer
On Fri, Jun 10, 2016 at 1:47 PM, Steve Atkins  wrote:

>
> You could name the check constraints, catch the errors and use a
> client-side mapping between constraint name and a friendly error message
> for display in the web interface.
>
> This seems plausible, but not ideal.  I could get over the aesthetics of
sequence gaps, but don't like throwing lots of database errors on
principle, and also for silting up my logs.  Seems better to catch these
before submitting.



> You could implement the checks in PHP in the database.
> https://public.commandprompt.com/projects/plphp/wiki
>
> I'm aiming as much as possible for this to "just work" with standard
Postgres, and this approach gets away from that.  Plus people would have to
know SQL AND PHP in order to create tables.


> You could look at one of the existing SQL parsers implemented in PHP, and
> use those to parse the constraint to a tree from which you could easily
> pull PHP.
>
> This one sounds most promising!  I took a quick Google, and it looks like
there are lots of them, and a heavy Mysql focus.  Anyone have experience or
suggestions about which of these might be best, and particularly for
Postgres?

Thanks for the suggestions!

Ken


> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


[GENERAL] Converting Postgres SQL constraint logic to PHP?

2016-06-10 Thread Ken Tanzer
Hi.  I was hoping this list might be able to offer some
help/advice/suggestions/opinions about feasibility for something I want to
implement, namely converting Postgres constraints into PHP logic.  Here's
the context and explanation:

I work on a PHP web app using Postgres.  When possible, we try to build as
much logic as possible directly into the DB.  The app already automatically
reads NOT NULL and foreign key constraints from the DB, and enforces them
through the UI thus preventing people from getting ugly database errors.
It doesn't do that with check constraints and table constraints though,
which means we either end up duplicating the constraint logic in PHP, or
else sometimes get lazy/expedient and only put the constraint into PHP.
Obviously neither of those is ideal.

What would be ideal is for the app to handle those constraints
automatically.  It looks like I can pull them out (as SQL) from
information_schema.check_constraints, with the remaining issue being how to
make them usable in PHP.

I'm wondering if anyone has done this already, or if there is some kind of
library available for this purpose?

If not, and absent any better suggestions, I'm looking at trying to
parse/search/replace.  This might well be imperfect, and error-prone.  But
if I can get something that at least works in a lot of cases, that would
help a lot.  So as a simple example, converting from

((sat3 >= 0) AND (sat3 <= 5))

to the needed format:

(($rec['sat3'] >=0) and ($rec['sat3'] <= 5))

seems like it would be relatively straightforward, since the structure of
the logic is neatly parallel between SQL and PHP.  Each of these below, and
many others I could pull, all have additional complications beyond that
though:

   -  (((incentive_sent_date IS NULL) OR (incentive_sent_date >=
   residence_date)))
   - (((application_date IS NOT NULL) OR (employment_date IS NOT NULL)))
   -  (xor((assessed_by IS NULL), (assessor_organization IS NULL)))
   -  (((release_to IS NULL) OR ((consent_type_code)::text = ANY
   ((ARRAY['ROI'::character varying, 'MEDIA'::character varying])::text[]

So anyway, there's my issue.  Any thoughts/comments/suggestions welcome.
Thanks in advance!

Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Suppress decimal point like digits in to_char?

2016-03-14 Thread Ken Tanzer
>
>
> Well, this may be a good enhancement request, add something like
> d=decimal point, supressed if alone.
>
> Yeah. Maybe that's all that need to be said. :)


> > In particular, one might reasonably choose a format string like
> 'FM999,999D99' and not realize it will fail on whole numbers.  Is there any
> particular reason the D is not suppressible in this case, either by default
> or as an option?  It seems to me if the trailing 0s are suppressed, the
> decimal should follow suit for whole numbers.
>
> It does not fail, it just works in a diffrent way of what you would
> like. Regarding supression, IMO it's a bad thing, it can lead to
> misleading results. Imagine it is, and you do a right aligned print (
> usual for numbers ) of prices 5.45, 1.20, 99.00, 2.40, and you end up
> with ( using x for align )
> price:
> --
> xx5.45
> xx1.20
> 99
> xx2.40
>
> It would be misleading, I prefer to have xxx99., ugly but clearer IMO
> ( of course one never supress decimals in prices, so I would use
> 990D00, but anyway ).
>
> Francisco Olarte.
>

I appreciate the comment and explanation.  But your example shows numbers
where the trailing 0s are not suppressed.   It seems to me that if you're
requesting suppression of trailing 0s, then you're accepting that your
numbers aren't going to align in the first place. And so it's hard for me
to see how, for example "99." is ever going to be desirable output if
suppression is what you're after.

And just as context on my end, the times I use to_char are generally to
merge numbers into a document or some fragment of English text.  For
anything going into a table, I'd usually just leave it as a number and case
it to the desired (fixed) number of decimals.  And as we've touched on, it
may just be different use cases colliding! :)

Cheers,
Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Suppress decimal point like digits in to_char?

2016-03-14 Thread Ken Tanzer
On Mon, Mar 14, 2016 at 8:22 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Mar 14, 2016 at 3:31 AM, Francisco Olarte <fola...@peoplecall.com>
> wrote:
>
>> Hi;
>>
>> On Mon, Mar 14, 2016 at 2:53 AM, David G. Johnston
>> <david.g.johns...@gmail.com> wrote:
>> > On Sunday, March 13, 2016, Ken Tanzer <ken.tan...@gmail.com> wrote:
>> 
>> > Typically if I'm going to format any currency amount with pennies I
>> would
>> > format all values, even those with zero pennies, to the same precision.
>> > Typically when displaying such amounts I'd right-justify the values and
>> thus
>> > cause the decimals to line up.
>>
>> But a right-aligning string output routine needs to be used.
>>
>>
>> cdrs=> select val, tc, '"'||tc||'"' as quoted,
>> '"'||regexp_replace(tc,'\.$','   ')||'"' as replaced from (select val,
>> to_char(val::decimal(6,2),'FM999,990D99') as tc from (values
>> (1234),(1.05),(0)) as v(val)) as w;
>>  val  |   tc   |  quoted  |  replaced
>> --++--+
>>  1234 | 1,234. | "1,234." | "1,234   "
>>  1.05 | 1.05   | "1.05"   | "1.05"
>> 0 | 0. | "0." | "0   "
>> (3 rows)
>>
>> Summarising, any combination can be easily done with a single round of
>> replace.
>>
>>
> ​See also:
>
> http://www.postgresql.org/docs/9.5/interactive/functions-string.html
>
> ​
> format(formatstr text [, formatarg "any" [, ...] ])
>
> ​David J.​
>
>
>
Thanks for all the info and suggestions.  I'll just observe that sure, you
can do it with a regex, but I'm still surprised that this can't be done
with to_char.

In particular, one might reasonably choose a format string
like 'FM999,999D99' and not realize it will fail on whole numbers.  Is
there any particular reason the D is not suppressible in this case, either
by default or as an option?  It seems to me if the trailing 0s are
suppressed, the decimal should follow suit for whole numbers.

Cheers,
Ken




-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Suppress decimal point like digits in to_char?

2016-03-13 Thread Ken Tanzer
On Mar 13, 2016 6:29 PM, "David G. Johnston" <david.g.johns...@gmail.com>
wrote:
>
> On Sunday, March 13, 2016, Ken Tanzer <ken.tan...@gmail.com> wrote:
>>
>> Hi.  Is there a way with to_char to suppress a decimal point, like a
leading or trailing 0, so that integers will not have them, but non-ints
will?  I'm hoping I'm missing something easy.  Thanks.
>>
>> Ken
>>
>> SELECT val,to_char(val::decimal(6,2),'FM999,999D99') FROM
>> ( SELECT 1 AS val UNION SELECT 1.05 AS val) foo;
>>
>>  val  | to_char
>> --+-
>> 1 | 1.
>>  1.05 | 1.05
>>
>>
>
> Not seeing a native way to do so - and I'd question doing so as a general
rule - though you know your domain.  If you must have this you will want to
utilize regexp_replace to identify the situation and replace it.  A simple
"\.$" check and a substring would work also.
>
> David J.

Thanks David. Just curious what part of this you would question.  The case
for numbers, currency in particular, coming out with a decimal and pennies
when present, and as whole dollars when not (and without a decimal place at
the end) seems pretty common and clear cut.  What am I missing in your
question?

Cheers,
Ken


[GENERAL] Suppress decimal point like digits in to_char?

2016-03-13 Thread Ken Tanzer
Hi.  Is there a way with to_char to suppress a decimal point, like a
leading or trailing 0, so that integers will not have them, but non-ints
will?  I'm hoping I'm missing something easy.  Thanks.

Ken

SELECT val,to_char(val::decimal(6,2),'FM999,999D99') FROM
( SELECT 1 AS val UNION SELECT 1.05 AS val) foo;

 val  | to_char
--+-
1 | 1.
 1.05 | 1.05


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Very puzzling sort behavior

2015-09-10 Thread Ken Tanzer
On Thu, Sep 10, 2015 at 2:02 PM, Alvaro Herrera <alvhe...@2ndquadrant.com>
wrote:

> Ken Tanzer wrote:
>
> > Are there any other potential solutions, pitfalls or considerations that
> > come to mind?  Any thoughts welcome.  And as I said, if there's not a
> good
> > way to do this I'll probably leave it alone.
>
> In part, it boils down to what you use the in ORDER BY clause.  If you
> concatenate the last name and first name, they will be considered as a
> single string and run afoul of funny behavior of dictionary sorting,
> which ignores non-alphanumeric chars in the first pass.  But if you keep
> them separate by using "ORDER BY last_name, first_name" then sorting
> will consider the last name separately from first name, and you'd get
> the results you want (I think).
>

Thanks, but I guess I should have been clearer.  Thanks to y'all wonderful
mailing list folks, I get it now as to why the two sorts are not the same.
I'm hoping for practical suggestions or advice about how to get C locale
sorting without having to rewrite all my existing queries.  I still might
be clinging to futile hope, but is there really no way to specify a
collation for the return value of a function?  And are there any pitfalls
associated with pg_dump-ing into a new database with a different collation?

Cheers,
Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Very puzzling sort behavior

2015-09-10 Thread Ken Tanzer
On Thu, Sep 10, 2015 at 3:03 PM, Andrew Sullivan <a...@crankycanuck.ca>
wrote:

> On Thu, Sep 10, 2015 at 02:54:31PM -0700, Ken Tanzer wrote:
> > Thanks, but I guess I should have been clearer.  Thanks to y'all
> wonderful
> > mailing list folks, I get it now as to why the two sorts are not the
> same.
> > I'm hoping for practical suggestions or advice about how to get C locale
> > sorting without having to rewrite all my existing queries.
>
> Why not change the collation for the column?
> http://www.postgresql.org/docs/9.4/interactive/sql-altertable.html
>
>
Hmm... that sounds promising.  And less drastic than rewriting the whole
database.  I'll have to ponder that one, and thanks for the suggestion!

Cheers,
Ken


[GENERAL] Very puzzling sort behavior

2015-09-10 Thread Ken Tanzer
Hi.  In a table that includes these columns:

my_db=> \d tbl_client
...
 name_last   | character varying(40)  | not null
 name_first  | character varying(30)  | not null
...

I am extremely puzzled by the sorting of the "CLARKE"s in this list:

my_db=> SELECT name_last,length(name_last),name_last='CLARK' AS
clark,name_last='CLARKE' AS clarke FROM tbl_client WHERE name_last ILIKE
'CLARK%' OR name_last ILIKE 'ADAMS%' ORDER BY name_last || ', ' ||
name_first;
 name_last | length | clark | clarke
---++---+
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMSON   |  7 | f | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARKE|  6 | f | t
 CLARKE|  6 | f | t
 CLARKE|  6 | f | t
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
(17 rows)

The ADAMS are included just to show a similar example is ordering
correctly.  I put the length and equality test columns in to try to make
sure there weren't some bizarre characters in the data.  This is only
happening on one particular database.  I did a reindex on the table just
for good measure.  If I remove the name_first piece of the ORDER BY (which
doesn't seem like it should matter), it sorts as expected:

my_db=> SELECT name_last,length(name_last),name_last='CLARK' AS
clark,name_last='CLARKE' AS clarke FROM tbl_client WHERE name_last ILIKE
'CLARK%' OR name_last ILIKE 'ADAMS%' ORDER BY name_last || ', '

;
 name_last | length | clark | clarke
---++---+
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMS |  5 | f | f
 ADAMSON   |  7 | f | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARK |  5 | t | f
 CLARKE|  6 | f | t
 CLARKE|  6 | f | t
 CLARKE|  6 | f | t
(17 rows)

I tried selecting those 17 rows from tbl_client into a new table, and get
the same odd behavior.  However, if I run with new data I get an expected
order:

CREATE TEMP TABLE test (
  name_first VARCHAR(40),
  name_last VARCHAR(30)

);

INSERT INTO test VALUES ('JOE','CLARKE'),('BILL','CLARK');

SELECT * FROM test ORDER BY name_last;
SELECT * FROM test ORDER BY name_last || ', ' || name_first;

Any thoughts about what's going on, what to do about it, or what obvious
point I missing?  Thanks in advance!

my_db=> SELECT version();
version


 PostgreSQL 9.2.13 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7
20120313 (Red Hat 4.4.7-11), 64-bit
(1 row)

Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://agency-software.org/demo/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Very puzzling sort behavior

2015-09-10 Thread Ken Tanzer
Alright never mind, I guess I see what's going on.  Thanks!

Ken

On Thu, Sep 10, 2015 at 12:51 PM, Ken Tanzer <ken.tan...@gmail.com> wrote:

>
>
> On Thu, Sep 10, 2015 at 12:47 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>
>> Peter Geoghegan <peter.geoghega...@gmail.com> writes:
>> > On Thu, Sep 10, 2015 at 12:35 PM, Ken Tanzer <ken.tan...@gmail.com>
>> wrote:
>> >> Any thoughts about what's going on, what to do about it, or what
>> obvious point I missing?  Thanks in advance!
>>
>> > This is the expected behavior.
>>
>> If you don't like it, sort in C locale ...
>>
>> regards, tom lane
>>
>
> OK, can one of you help me out in understanding this?  I would have
> thought that given "CLARK," and "CLARKE" that the comma would get compared
> against the E and come first.  End of story, before we even get to anything
> farther in the string.  What am I missing?
>
> --
> AGENCY Software
> A Free Software data system
> By and for non-profits
> *http://agency-software.org/ <http://agency-software.org/>*
> *https://agency-software.org/demo/client
> <https://agency-software.org/demo/client>*
> ken.tan...@agency-software.org
> (253) 245-3801
>
> Subscribe to the mailing list
> <agency-general-requ...@lists.sourceforge.net?body=subscribe> to
> learn more about AGENCY or
> follow the discussion.
>



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Very puzzling sort behavior

2015-09-10 Thread Ken Tanzer
On Thu, Sep 10, 2015 at 12:47 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Peter Geoghegan <peter.geoghega...@gmail.com> writes:
> > On Thu, Sep 10, 2015 at 12:35 PM, Ken Tanzer <ken.tan...@gmail.com>
> wrote:
> >> Any thoughts about what's going on, what to do about it, or what
> obvious point I missing?  Thanks in advance!
>
> > This is the expected behavior.
>
> If you don't like it, sort in C locale ...
>
> regards, tom lane
>

OK, can one of you help me out in understanding this?  I would have thought
that given "CLARK," and "CLARKE" that the comma would get compared against
the E and come first.  End of story, before we even get to anything farther
in the string.  What am I missing?

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Very puzzling sort behavior

2015-09-10 Thread Ken Tanzer
On Thu, Sep 10, 2015 at 12:56 PM, Peter Geoghegan <
peter.geoghega...@gmail.com> wrote:

> On Thu, Sep 10, 2015 at 12:51 PM, Ken Tanzer <ken.tan...@gmail.com> wrote:
> > OK, can one of you help me out in understanding this?  I would have
> thought that given "CLARK," and "CLARKE" that the comma would get compared
> against the E and come first.  End of story, before we even get to anything
> farther in the string.  What am I missing?
>
> That's only how it works with the C locale. Otherwise, there are
> complicated rules to weigh things like space and punctuation (and
> accents/diacritics) less prominently than primary alphabetical
> ordering. This is often useful. Anyway, based on what you say here, I
> think you should actually "ORDER BY name_last, name_first".
>
> --
> Regards,
> Peter Geoghegan
>


Thanks.  A little more help would be appreciated.  First a little context:

What I mailed out what a boiled down example.  In reality, what I have is a
ton of tables with a client_id in them, and a convenience function
client_name(client_id) that returns the name_last, name_first string (plus
an alias if it exists).  client_name is used all over the place in both
views and in an app that uses the database.  There is a similar, also
heavily used, staff_name function.  Eliminating the use of these functions
is a non-starter for me--I'd much rather live with the existing sort
behavior, especially as no one has ever pointed this out despite over a
decade of use.

I'm hoping to sort change this behavior with as minimal a change as
possible (e.g., minimal potential for unexpected side effects or
breakage).  I was hoping to just add a COLLATE "C" within the function:

CREATE OR REPLACE FUNCTION client_name( cid int4 ) RETURNS text AS $$
 -- client is a view that draws from tbl_client.  name_full is the
field with the name_last, name_first data in it
 SELECT name_full COLLATE "C" FROM client WHERE client_id=$1;
$$

but that seems to have no effect.  And sure enough the documentation seems
to back that up. ("The collation assigned to a function or operator's
combined input expressions is also considered to apply to the function or
operator's result, if the function or operator delivers a result of a
collatable data type.")  So this may be wishful thinking, but is there any
other way to specify the collation of a function result?  Specifying the
collation every time the function is used is likely a no-go for me too.

Alternatively, it seems I could create new databases with a C collation and
then move my data into them.  This seems a bit drastic, although possible.
I'd again be worried about the breakage/side effects.  And actually, will
this work?  (i.e., can you use pg_dump to populate a new database with a
different locale?)

Are there any other potential solutions, pitfalls or considerations that
come to mind?  Any thoughts welcome.  And as I said, if there's not a good
way to do this I'll probably leave it alone.  Thanks.

Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ <http://agency-software.org/>*
*https://agency-software.org/demo/client
<https://agency-software.org/demo/client>*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
<agency-general-requ...@lists.sourceforge.net?body=subscribe> to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Count of non-null values per table column

2015-08-14 Thread Ken Tanzer
On Fri, Aug 14, 2015 at 6:35 PM, David Nelson dnelson77...@gmail.com
wrote:

 On Fri, Aug 14, 2015 at 10:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 
  David Rowley david.row...@2ndquadrant.com writes:



 Tthat is the way I would do it for a table with a small number of columns,
 but these have several dozen so this would get tedious. Although I just
 realized I could output the list of column names from the pg_stat view to a
 file and whip up some vi find and replace to create the entire statement
 pretty quickly. I was just wondering if that was the only way or not.


You could use this to generate the SQL:

\set my_table my_real_table_name

 SELECT 'SELECT COUNT(*) AS total_rows, '||array_to_string(array(SELECT
'COUNT('||column_name::text ||') AS ' || column_name::text FROM
information_schema.columns WHERE table_name=:'my_table'),E',\n') || ' FROM
' || :'my_table' || ';';

Cheers,
Ken
-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] EXCLUDE, Gist and integers

2015-07-15 Thread Ken Tanzer
On Wed, Jul 15, 2015 at 10:03 PM, Guillaume Lelarge guilla...@lelarge.info
wrote:

 Hi,

 Le 16 juil. 2015 2:23 AM, Ken Tanzer ken.tan...@gmail.com a écrit :
 
  Hi.  I'm looking into adding daterange exclusions to some of my tables.
 Following the documentation, I can do this no problem to prevent any
 records from overlapping:
 
  CREATE TEMP TABLE foo (
client_id integer,
start_date date NOT NULL,
end_date date,
EXCLUDE using gist (daterange(start_date,end_date) with )
  );
 
  But what I really want is no overlapping records on a per-client basis.
 I optimistically tried this:
 
  CREATE TEMP TABLE foo (
client_id integer,
start_date date NOT NULL,
end_date date,
EXCLUDE using gist (daterange(start_date,end_date) with ,client_id
 with =)
  );
 
  But Postgres responds thusly:
 
  ERROR:  data type integer has no default operator class for access
 method gist
  HINT:  You must specify an operator class for the index or define a
 default operator class for the data type.
 
  Can someone tell me what's the easiest way to make this work?  Thanks in
 advance!
 

 You should install the btree_gist extension.

That sure did the trick.  Thanks!

Ken

p.s., in case it's helpful for anyone, as superuser, CREATE EXTENSION
btree_gist;




-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


[GENERAL] EXCLUDE, Gist and integers

2015-07-15 Thread Ken Tanzer
Hi.  I'm looking into adding daterange exclusions to some of my tables.
Following the documentation, I can do this no problem to prevent any
records from overlapping:

CREATE TEMP TABLE foo (
  client_id integer,
  start_date date NOT NULL,
  end_date date,
  EXCLUDE using gist (daterange(start_date,end_date) with )
);

But what I really want is no overlapping records on a per-client basis.  I
optimistically tried this:

CREATE TEMP TABLE foo (
  client_id integer,
  start_date date NOT NULL,
  end_date date,
  EXCLUDE using gist (daterange(start_date,end_date) with ,client_id with
=)
);

But Postgres responds thusly:

ERROR:  data type integer has no default operator class for access method
gist
HINT:  You must specify an operator class for the index or define a default
operator class for the data type.

Can someone tell me what's the easiest way to make this work?  Thanks in
advance!

Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [SQL] Re: [GENERAL] Link Office Word form document with data from PostgreSQL

2015-03-30 Thread Ken Tanzer
On Mon, Mar 30, 2015 at 1:51 PM, avpro avpro avprowebe...@gmail.com wrote:

 thank you all for your responses.

 I think to easiest option for me would be to use the ODBC connection. Now
 my problem is where to start with the VBA code; but that's for another list.
 cheers

 On 30 March 2015 at 19:51, Vincent Veyron vv.li...@wanadoo.fr wrote:

 On Mon, 30 Mar 2015 17:37:28 +0200
 Vincenzo Campanella vin...@gmail.com wrote:
 Stirling Newberry stirling.newbe...@gmail.com wrote:

 post the code

 Here you go :

 http://pastebin.com/eVXZiFws


 This is a mod_perl module used in the HeaderParser phase of the Apache
 request cycle; it first grabs the sql associated with the model, and passes
 parameters to it; this sql is used to retrieve the required values for the
 model, and the placeholders in the document are replaced with those values.
 The user is then redirected to the newly created document

 You can see it in action in the demo account of the site referenced in my
 signature : open any file (those are legal files/insurance claims), click
 on the 'Contributors' tab (third tab from left; add a contributor if none
 is present), and click on 'Templates'. Click on the template to generate a
 document


 --
 Salutations, Vincent Veyron

 https://legalcase.libremen.com/
 Legal case, contract and insurance claim management software



If PHP is an option, you might want to look at the Tiny But Strong template
system (http://www.tinybutstrong.com/), along with the OpenTBS plugin (
http://www.tinybutstrong.com/plugins/opentbs/demo/demo.html).

We used it to replace some homegrown code similar to what Vincent described
that directly manipulated the document.  There are enough idiosyncracies
 among the file formats that I've been much happier using the TBS code.  I
think this is especially true if you are merging multiple records into a
single, repeating template document as opposed to doing a single set of
replacements.

Cheers,
Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] What is the alternate of FILTER below Postgresql 9.4 ?

2015-02-24 Thread Ken Tanzer



 I tried, but it is not giving the output exactly like *FILTER*.

 app_development=# select CASE WHEN lower(location) = 'x' THEN COUNT(id)
 ELSE 0 END AS X_loc, CASE WHEN lower(location) != 'x' THEN COUNT(id) ELSE 0
 END AS Non_X_loc from people group by lower(location);
   x_loc | non_x_loc
 ---+---
   0 | 1
   2 | 0
 (2 rows)
 app_development=# select count(CASE WHEN lower(location) = 'x' THEN 1
 END) AS X_loc, count(CASE WHEN lower(location) != 'x' THEN 1 END) AS
 Non_X_loc from people group by lower(location);
   x_loc | non_x_loc
 ---+---
   0 | 1
   2 | 0
 (2 rows)

 It is 2 rows output.


 then don't include the group by (which forces 1 row per location)

 select sum(case WHEN lower(location) = 'x' THEN 1 else 0 end) AS x_loc,
 sum(case WHEN lower(location) != 'x' THEN 1 else 0 end) AS not_x_loc
 from people;


 Roxanne


Also, if performance is not a big concenr, you can define a little
function, which I find makes the queries easier to read:

CREATE OR REPLACE FUNCTION or_null( boolean ) RETURNS boolean AS $$

 SELECT CASE WHEN $1 THEN TRUE ELSE NULL END;

$$ LANGUAGE sql IMMUTABLE;


select count(or_null(lower(location) = 'x')) AS x_loc,
count(or_null(lower(location) != 'x'))) AS not_x_loc
from people;

Cheers,
Ken




-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Array string casts with SELECT but not SELECT DISTINCT

2015-02-21 Thread Ken Tanzer
I tried that and it does indeed work. (With, of course, the appropriate
permissions to create the cast.)

So this makes me wonder--is there any downside or unwelcome side effects to
having such a cast?  And if not, why isn't it part of the default setup?

Cheers,
Ken

On Sat, Feb 21, 2015 at 3:34 AM, sridhar bamandlapally 
sridhar@gmail.com wrote:

 Hi

 Please see below, this works, way for implicit type casting

 bns=# CREATE TEMP TABLE foo (my_array varchar[]);
 CREATE TABLE
 bns=#
 bns=# INSERT INTO foo (my_array) SELECT '{TEST}';
 INSERT 0 1
 bns=#
 bns=# SELECT my_array[1],array_length(my_array,1) FROM foo;
  my_array | array_length
 --+--
  TEST |1
 (1 row)

 bns=#
 *bns=# INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}';*
 *ERROR:  column my_array is of type character varying[] but expression
 is of type text*
 *LINE 1: INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}';*
 *   ^*
 *HINT:  You will need to rewrite or cast the expression.*
 bns=#
 bns=# CREATE CAST (text AS varchar[]) WITH INOUT AS IMPLICIT;
 CREATE CAST
 bns=#
 bns=# INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}';
 this works
 INSERT 0 1
 bns=#
 bns=#


 in previous mail, sorry for not mentioning varchar*[]*

 We did type cast implicit method to avoid application code changes for
 Oracle to PostgreSQL compatible

 Thanks
 Sridhar BN



 On Sat, Feb 21, 2015 at 9:38 AM, Ken Tanzer ken.tan...@gmail.com wrote:

 I'm not able to run this unless I'm the Postgres super user.  But if I
 run it as such, it tells me that cast already exists anyway.

 CREATE CAST (text AS varchar) WITH INOUT AS IMPLICIT;
 ERROR:  cast from type text to type character varying already exists

 Of course this will work fine:
 INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}'::varchar[];

 I was mostly surprised by having DISTINCT added to a SELECT make things
 break.  It may be too obscure an issue to be worth adding, but nothing on
 the DISTINCT documentation suggests this possibility.

 If DISTINCT is specified, all duplicate rows are removed from the result
 set...
 http://www.postgresql.org/docs/9.0/static/sql-select.html#SQL-DISTINCT

 Cheers,
 Ken

 On Fri, Feb 20, 2015 at 6:30 PM, sridhar bamandlapally 
 sridhar@gmail.com wrote:

 ERROR:  column my_array is of type character varying[] but
 expression is of type text

 please try this below, may be this should help

 CREATE CAST (text AS varchar) WITH INOUT AS IMPLICIT;

 just for info:
 actually this should be available in default


 On Fri, Feb 20, 2015 at 9:48 AM, Ken Tanzer ken.tan...@gmail.com
 wrote:

 Hi.  Here's a boiled down example of something that caught me by
 surprise:

 ag_reach_test= CREATE TEMP TABLE foo (my_array varchar[]);
 CREATE TABLE
 ag_reach_test= INSERT INTO foo (my_array) SELECT '{TEST}';
 INSERT 0 1
 ag_reach_test= SELECT my_array[1],array_length(my_array,1) FROM foo;
  my_array | array_length
 --+--
  TEST |1
 (1 row)

 ag_reach_test= INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}';
 ERROR:  column my_array is of type character varying[] but expression
 is of type text
 LINE 1: INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}';
^
 HINT:  You will need to rewrite or cast the expression.

 It's easy enough to add a cast, but I was curious if this was expected
 and desired behavior.   Thanks.

 Ken

 --
 AGENCY Software
 A Free Software data system
 By and for non-profits
 *http://agency-software.org/ http://agency-software.org/*
 *https://agency-software.org/demo/client
 https://agency-software.org/demo/client*
 ken.tan...@agency-software.org
 (253) 245-3801

 Subscribe to the mailing list
 agency-general-requ...@lists.sourceforge.net?body=subscribe to
 learn more about AGENCY or
 follow the discussion.





 --
 AGENCY Software
 A Free Software data system
 By and for non-profits
 *http://agency-software.org/ http://agency-software.org/*
 *https://agency-software.org/demo/client
 https://agency-software.org/demo/client*
 ken.tan...@agency-software.org
 (253) 245-3801

 Subscribe to the mailing list
 agency-general-requ...@lists.sourceforge.net?body=subscribe to
 learn more about AGENCY or
 follow the discussion.





-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Array string casts with SELECT but not SELECT DISTINCT

2015-02-20 Thread Ken Tanzer
I'm not able to run this unless I'm the Postgres super user.  But if I run
it as such, it tells me that cast already exists anyway.

CREATE CAST (text AS varchar) WITH INOUT AS IMPLICIT;
ERROR:  cast from type text to type character varying already exists

Of course this will work fine:
INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}'::varchar[];

I was mostly surprised that adding DISTINCT to a SELECT makes things
break.  It may be too obscure an issue to be worth documenting, but nothing in
the DISTINCT documentation suggests this possibility.

If DISTINCT is specified, all duplicate rows are removed from the result
set...
http://www.postgresql.org/docs/9.0/static/sql-select.html#SQL-DISTINCT

Cheers,
Ken

On Fri, Feb 20, 2015 at 6:30 PM, sridhar bamandlapally 
sridhar@gmail.com wrote:

 ERROR:  column my_array is of type character varying[] but expression
 is of type text

 please try this below, may be this should help

 CREATE CAST (text AS varchar) WITH INOUT AS IMPLICIT;

 just for info:
 actually this should be available in default


 On Fri, Feb 20, 2015 at 9:48 AM, Ken Tanzer ken.tan...@gmail.com wrote:

 Hi.  Here's a boiled down example of something that caught me by surprise:

 ag_reach_test= CREATE TEMP TABLE foo (my_array varchar[]);
 CREATE TABLE
 ag_reach_test= INSERT INTO foo (my_array) SELECT '{TEST}';
 INSERT 0 1
 ag_reach_test= SELECT my_array[1],array_length(my_array,1) FROM foo;
  my_array | array_length
 ----------+--------------
  TEST     |            1
 (1 row)

 ag_reach_test= INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}';
 ERROR:  column my_array is of type character varying[] but expression
 is of type text
 LINE 1: INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}';
^
 HINT:  You will need to rewrite or cast the expression.

 It's easy enough to add a cast, but I was curious if this was expected
 and desired behavior.   Thanks.

 Ken

 --
 AGENCY Software
 A Free Software data system
 By and for non-profits
 *http://agency-software.org/ http://agency-software.org/*
 *https://agency-software.org/demo/client
 https://agency-software.org/demo/client*
 ken.tan...@agency-software.org
 (253) 245-3801

 Subscribe to the mailing list
 agency-general-requ...@lists.sourceforge.net?body=subscribe to
 learn more about AGENCY or
 follow the discussion.





-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


[GENERAL] Array string casts with SELECT but not SELECT DISTINCT

2015-02-19 Thread Ken Tanzer
Hi.  Here's a boiled down example of something that caught me by surprise:

ag_reach_test= CREATE TEMP TABLE foo (my_array varchar[]);
CREATE TABLE
ag_reach_test= INSERT INTO foo (my_array) SELECT '{TEST}';
INSERT 0 1
ag_reach_test= SELECT my_array[1],array_length(my_array,1) FROM foo;
 my_array | array_length
----------+--------------
 TEST     |            1
(1 row)

ag_reach_test= INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}';
ERROR:  column my_array is of type character varying[] but expression is
of type text
LINE 1: INSERT INTO foo (my_array) SELECT DISTINCT '{TEST}';
   ^
HINT:  You will need to rewrite or cast the expression.

It's easy enough to add a cast, but I was curious if this was expected and
desired behavior.   Thanks.

Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Use of 'now' constant datatype in view to take advantage of partitioned table

2014-08-21 Thread Ken Tanzer
 EXPLAIN ANALYZE
  SELECT *
FROM food
   WHERE food.post_timestamp >= ('now'::date - interval '1 month')::date AND
  food.post_timestamp <= 'now'
   ORDER BY food.post_timestamp  DESC
  LIMIT 30;



I think the problem is that you're using 'now'::date in your first example,
which gets frozen.  You can use now() or current_timestamp or current_date
to get dynamic results.

CREATE VIEW test_now AS SELECT current_timestamp AS current_ts, now() AS
now_function, 'now'::timestamp AS now_literal;

(wait a couple of seconds)

SELECT * FROM test_now;

          current_ts           |         now_function          |        now_literal
-------------------------------+-------------------------------+----------------------------
 2014-08-21 01:25:54.147004-07 | 2014-08-21 01:25:54.147004-07 | 2014-08-21 01:18:22.207073
(1 row)

You'll see that the last column is frozen while the other two stay current.
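
So, borrowing the column names from the query you posted (just a sketch,
untested against your schema), something like this should keep both bounds
dynamic:

SELECT *
FROM food
WHERE food.post_timestamp >= (current_date - interval '1 month')
  AND food.post_timestamp <= now()
ORDER BY food.post_timestamp DESC
LIMIT 30;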

Cheers,
Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


[GENERAL] Localhost vs. Unix Domain Sockets?

2014-08-18 Thread Ken Tanzer
Hi.  I'm working with a couple of machines that have Postgres/Apache on
Linux setups.  Connections to Postgres are currently TCP/IP to localhost.
 (We're also using itk, so that the Apache connections are per-user.)  We
began looking into encrypting these connections with SSL, but now I'm
thinking of using unix domain socket connections instead.

I see two possible benefits to this:

1)  Maybe better performance or use of resources.  I didn't find a lot of
info, although this post from Bruce Momjian indicates that this is the case:
http://momjian.us/main/blogs/pgblog/2012.html#June_6_2012.

2)  Our webapp and users wouldn't need to be given a Postgres password at
all.  Authenticating as their user would be sufficient.

So I've got two questions.  One is whether there are any downsides to using
sockets, or any gotchas to be aware of.  The second is whether there is
anything we can do to increase the security of sockets (e.g., analogous to
encrypting localhost connections with SSL).  From the little I saw, it
sounds like sockets are just inherently secure, but I wanted to confirm
that or get another opinion!

Thanks in advance,

Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Localhost vs. Unix Domain Sockets?

2014-08-18 Thread Ken Tanzer
Thanks all for the input.  Sounds like there aren't downsides to sockets,
and they are at least as secure.  I do have on follow-up question though:

* peer auth (OS user == DB user name) is typically the way to go in

I used to have my db and linux usernames match, until this issue came
along:  http://www.postgresql.org/support/security/faq/2013-04-04/.  It
specifically mentions potentially increased vulnerability if the names
match.  So when I set up a new server I had them not match.  I know this
particular issue is fixed.  But are there other ways that having the names
match could potentially increase vulnerability (even if not known or
identified yet), or am I pointlessly fighting the last war by keeping the
names different?
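
If we do keep them different, I'm assuming a pg_ident.conf user map would
still let us use peer auth over the socket -- something like this sketch,
with made-up names for the OS user and the DB role:

# pg_hba.conf
local   all   all   peer map=agencymap

# pg_ident.conf  (MAPNAME  SYSTEM-USERNAME  PG-USERNAME)
agencymap   wwwrun   agency_app

Please correct me if that's not how the map is meant to be used.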

Cheers,
Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Localhost vs. Unix Domain Sockets?

2014-08-18 Thread Ken Tanzer
Thanks.  I'm not really worried about this particular vulnerability, just
wondering about the more general idea that having db user name = os user
could reduce your security, even if only slightly.  Is it just as
conceivable that a vulnerability could come along that was more exploitable
only if the two names were _different_?

To put it another way, keeping the two sets of names distinct is
incrementally more complex to manage.  Which might be worth it if there
really is any gain.  Is this a best practice, or is it really a
manifestation of its closely-related cousin, the silly practice? :)

Cheers,
Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] php password authentication failed for user ...

2014-07-10 Thread Ken Tanzer
On Wed, Jul 9, 2014 at 5:37 AM, basti ba...@unix-solution.de wrote:


  #host   all   all   0.0.0.0   0.0.0.0   md5
 did not work.


If it really starts with a # like you show it above, it's just a comment
and pretty much guaranteed not to do anything.

Cheers,
Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Questions about daterange() function

2014-06-26 Thread Ken Tanzer

  So here are my questions:
 
  1)  Is there anyway to control this behavior of daterange(), or is it
 just
  best to (for example) add 1 to the upper bound argument if I want an
  inclusive upper bound?

 See link for question #3; namely use the three-arg version of daterange
 (type,type,text)


  2)  This is purely cosmetic, but is there anyway to control the output
  formatting of a daterange to show the upper bound as inclusive?  So that
  daterange(d1,d2) would display as [d1,d2-1] rather than [d1,d2)?

 Not easily - you could write a custom type with the desired canonical form.


  3)  I couldn't find this discussed in the documentation, and specifically
  didn't find the daterange() function documented, including on this page
  where I might have expected it:
  http://www.postgresql.org/docs/9.3/static/functions-range.html.  Is it
  somewhere else where I'm not finding it?

 Yes, the documentation could maybe use some work on this topic.  The
 relevant information is provided at:

 http://www.postgresql.org/docs/9.3/interactive/rangetypes.html

 See especially: 8.17.2 and 8.17.6

 David J.


 Thanks for your help!





 --
 View this message in context:
 http://postgresql.1045698.n5.nabble.com/Questions-about-daterange-function-tp5809274p5809277.html
 Sent from the PostgreSQL - general mailing list archive at Nabble.com.


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
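
For the record (in case anyone else lands here from the archives), the
three-argument form mentioned above works like this -- a quick sketch:

SELECT daterange('2012-04-05'::date, '2013-10-21'::date, '[]');
        daterange
-------------------------
 [2012-04-05,2013-10-22)
(1 row)

The bounds string '[]' makes both ends inclusive; the canonical output still
displays a half-open upper bound, but the range itself includes 2013-10-21.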




-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


[GENERAL] Questions about daterange() function

2014-06-25 Thread Ken Tanzer
Hi.  I've got lots of tables with start and end dates in them, and I'm
trying to learn how to work with them as date ranges (which seem
fantastic!).  I've noticed that the daterange() function seems to create
ranges with an inclusive lower bound, and an exclusive upper bound.  For
example:

SELECT
 reg_spc_date,
 reg_spc_date_end,
 daterange(reg_spc_date,reg_spc_date_end)
FROM reg_spc
LIMIT 5;

 reg_spc_date | reg_spc_date_end |        daterange
--------------+------------------+-------------------------
 2012-04-05   | 2013-10-21       | [2012-04-05,2013-10-21)
 2013-10-28   |                  | [2013-10-28,)
 2013-11-01   |                  | [2013-11-01,)
 2012-10-19   | 2013-11-01       | [2012-10-19,2013-11-01)
 2005-03-29   | 2013-10-31       | [2005-03-29,2013-10-31)
(5 rows)

So here are my questions:

1)  Is there anyway to control this behavior of daterange(), or is it just
best to (for example) add 1 to the upper bound argument if I want an
inclusive upper bound?

2)  This is purely cosmetic, but is there anyway to control the output
formatting of a daterange to show the upper bound as inclusive?  So that
daterange(d1,d2) would display as [d1,d2-1] rather than [d1,d2)?

3)  I couldn't find this discussed in the documentation, and specifically
didn't find the daterange() function documented, including on this page
where I might have expected it:
http://www.postgresql.org/docs/9.3/static/functions-range.html.  Is it
somewhere else where I'm not finding it?

Thanks in advance!

Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Questions about daterange() function

2014-06-25 Thread Ken Tanzer
On Wed, Jun 25, 2014 at 6:12 PM, Adrian Klaver adrian.kla...@aklaver.com
 wrote:

 On 06/25/2014 05:53 PM, Ken Tanzer wrote:

 Hi.  I've got lots of tables with start and end dates in them, and I'm
 trying to learn how to work with them as date ranges (which seem
 fantastic!).  I've noticed that the daterange() function seems to create
 ranges with an inclusive lower bound, and an exclusive upper bound.  For
 example:

 SELECT
   reg_spc_date,
   reg_spc_date_end,
   daterange(reg_spc_date,reg_spc_date_end)
 FROM reg_spc
 LIMIT 5;

   reg_spc_date | reg_spc_date_end |        daterange
  --------------+------------------+-------------------------
   2012-04-05   | 2013-10-21       | [2012-04-05,2013-10-21)
   2013-10-28   |                  | [2013-10-28,)
   2013-11-01   |                  | [2013-11-01,)
   2012-10-19   | 2013-11-01       | [2012-10-19,2013-11-01)
   2005-03-29   | 2013-10-31       | [2005-03-29,2013-10-31)
 (5 rows)

 So here are my questions:

 1)  Is there anyway to control this behavior of daterange(), or is it
 just best to (for example) add 1 to the upper bound argument if I want
 an inclusive upper bound?

 2)  This is purely cosmetic, but is there anyway to control the output
 formatting of a daterange to show the upper bound as inclusive?  So that
 daterange(d1,d2) would display as [d1,d2-1] rather than [d1,d2)?

 3)  I couldn't find this discussed in the documentation, and
 specifically didn't find the daterange() function documented, including
 on this page where I might have expected it:
 http://www.postgresql.org/docs/9.3/static/functions-range.html.  Is it
 somewhere else where I'm not finding it?


 What version of Postgres are you using?


In this particular case 9.3, although it's also working in 9.2.  Both are the
current versions.





 Thanks in advance!

 Ken


 --
 Adrian Klaver
 adrian.kla...@aklaver.com




-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
agency-general-requ...@lists.sourceforge.net?body=subscribe to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] encrypting data stored in PostgreSQL

2014-04-09 Thread Ken Tanzer

 On 04/09/2014 02:52 PM, John R Pierce wrote:

 On 4/9/2014 1:40 PM, CS_DBA wrote:

 Hi All;

 We have a client with this requirement:

 At rest data must be encrypted with a unique client key

 Any thoughts on how to pull this off for PostgreSQL stored data?

 I looked at this a while ago because I have clients who might require this
in the future.  ISTM you should be able to have your PG data directory
stored on an encrypted filesystem.  I believe this will decrease
performance, but I have no idea by how much.

Does anyone else have experience with such a setup, or knowledge of how bad
the performance hit might be?  Or other factors to take into consideration?
 Thanks.

Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] encrypting data stored in PostgreSQL

2014-04-09 Thread Ken Tanzer
On Wed, Apr 9, 2014 at 2:32 PM, John R Pierce pie...@hogranch.com wrote:

 On 4/9/2014 2:16 PM, Ken Tanzer wrote:

 I looked at this a while ago because I have clients who might require
 this in the future.  ISTM you should be able to have your PG data directory
 stored on an encrypted filesystem.  I believe this will decrease
 performance, but I have no idea by how much.

 Does anyone else have experience with such a setup, or knowledge of how
 bad the performance hit might be?  Or other factors to take into
 consideration?  Thanks.


 whats the threat model this encryption is supposed to solve ?

 a encrypted file system has to be mounted and readable as long as the file
 system is operational, this implies that any data in it can be read by
 anyone with access to that system.

 now, if you just need a checkbox saying its encrypted, then whatever, it
 hardly matters.


 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast


Well, needing to check a box on a checklist was the starting point for
me looking into this.  I think the scenario would be what if someone stole
your hard disks? (Or stole Rackspace's hard disk, in my case.)  I didn't
dig too deep, but it seemed that there was/is a basic tradeoff--either 1) the
encryption key is accessible from the server, and thus the filesystem can be
conveniently and automatically mounted, but it provides little extra security,
or 2) the encryption key is supplied by a user at boot time, providing a good
deal of extra security but way less convenience.

Cheers,
Ken


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] pg_config problems on PG9.3/Centos?

2014-01-22 Thread Ken Tanzer
On Tue, Jan 21, 2014 at 11:49 PM, Devrim GÜNDÜZ dev...@gunduz.org wrote:


 Hi,

 On Tue, 2014-01-21 at 18:00 -0800, Ken Tanzer wrote:
  It didn't seem like a great idea to me either, but what's the better
  alternative?  Without the symlink I get lots of errors:
 
  make USE_PGXS=1
  make: pg_config: Command not found

 Sometimes exporting PG_CONFIG does the trick. Alternatively, while
 building the RPMs, we add a patch to makefiles like this:

 http://svn.pgrpms.org/repo/rpm/redhat/9.3/ip4r/F-20/Makefile-pgxs.patch


Thanks, that works.  I'm inferring that PG_CONFIG is a variable that needs
to be set?  This seemingly similar piece of makefile doesn't work:

ifdef USE_PGXS
  PGXS := $(shell /usr/pgsql-9.3/bin/pg_config --pgxs)
  include $(PGXS)
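
If I'm reading that patch right, the pattern is roughly this (just a sketch,
dropping the non-PGXS branch for brevity):

PG_CONFIG ?= pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

with the real path then supplied on the command line, which overrides the
default:

make USE_PGXS=1 PG_CONFIG=/usr/pgsql-9.3/bin/pg_config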


 Regards,
 --
 Devrim GÜNDÜZ
 Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
 PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
 Twitter: @DevrimGunduz , @DevrimGunduzTR



Cheers,
Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ http://agency-software.org/*
*https://agency-software.org/demo/client
https://agency-software.org/demo/client*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


[GENERAL] pg_config problems on PG9.3/Centos?

2014-01-21 Thread Ken Tanzer
Hi.  I'm trying to build the table_log module for Postgres 9.3, and am
wondering if there is an issue with pg_config.  I installed Postgres on a
fresh CentOS 6.5 with the pgdg packages:

 yum list installed postgres*

Installed Packages
postgresql93.x86_64   9.3.2-1PGDG.rhel6
  @pgdg93
postgresql93-contrib.x86_64   9.3.2-1PGDG.rhel6
  @pgdg93
postgresql93-devel.x86_64 9.3.2-1PGDG.rhel6
  @pgdg93
postgresql93-libs.x86_64  9.3.2-1PGDG.rhel6
  @pgdg93
postgresql93-pltcl.x86_64 9.3.2-1PGDG.rhel6
  @pgdg93
postgresql93-server.x86_649.3.2-1PGDG.rhel6
  @pgdg93

I'm using this makefile, which has the path to pg_config hard-coded in it,
with make USE_PGXS=1:

MODULES = table_log
DATA_built = table_log.sql
DOCS = README.table_log

ifdef USE_PGXS
  PGXS := $(shell /usr/pgsql-9.3/bin/pg_config --pgxs)
  include $(PGXS)
else
  subdir = contrib/table_log
  top_builddir = ../..
  include $(top_builddir)/src/Makefile.global
  include $(top_srcdir)/contrib/contrib-global.mk
endif

This works for me on a similar pg 9.2/Centos 6 setup, but fails on 9.3:

table_log.c:14:22: error: postgres.h: No such file or directory
table_log.c:15:18: error: fmgr.h: No such file or directory
table_log.c:16:71: error: executor/spi.h: No such file or directory
table_log.c:17:53: error: commands/trigger.h: No such file or directory
table_log.c:18:65: error: mb/pg_wchar.h: No such file or directory
table_log.c:21:23: error: miscadmin.h: No such file or directory
table_log.c:22:30: error: utils/formatting.h: No such file or directory
table_log.c:23:28: error: utils/builtins.h: No such file or directory
table_log.c:24:29: error: utils/lsyscache.h: No such file or directory
table_log.c:25:21: error: funcapi.h: No such file or directory

When I look at the output of pg_config, especially compared to my 9.2
output, it seems to be suspiciously lacking some 9.3 paths:

[root@new-agency table_log-0.4.4]# pg_config | more
BINDIR = /usr/bin
DOCDIR = /usr/share/doc/pgsql
HTMLDIR = /usr/share/doc/pgsql
INCLUDEDIR = /usr/include
PKGINCLUDEDIR = /usr/include
INCLUDEDIR-SERVER = /usr/include/server
LIBDIR = /usr/lib
PKGLIBDIR = /usr/lib
LOCALEDIR = /usr/share/locale
MANDIR = /usr/share/man
SHAREDIR = /usr/share
SYSCONFDIR = /etc/sysconfig/pgsql
PGXS = /usr/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--disable-rpath' '--prefix=/usr/pgsql-9.3'
'--includedir=/usr/pgsql-9.3/include' '--mandir=/
usr/pgsql-9.3/share/man' '--datadir=/usr/pgsql-9.3/share' '--with-perl'
'--with-python' '--with-tcl' '--w
ith-tclconfig=/usr/lib64' '--with-openssl' '--with-pam' '--with-krb5'
'--with-gssapi' '--with-includes=/u
sr/include' '--with-libraries=/usr/lib64' '--enable-nls' '--with-ossp-uuid'
'--with-libxml' '--with-libxs
lt' '--with-ldap' '--with-system-tzdata=/usr/share/zoneinfo'
'--sysconfdir=/etc/sysconfig/pgsql' '--docdi
r=/usr/share/doc' 'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2
-fexceptions -fstack-protector --par
am=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et' 'CPPFLAGS=
-I/usr/include/et'
CC = gcc
CPPFLAGS = -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2
-I/usr/include
CFLAGS = -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-siz
e=4 -m64 -mtune=generic -I/usr/include/et -Wall -Wmissing-prototypes
-Wpointer-arith -Wdeclaration-after-
statement -Wendif-labels -Wmissing-format-attribute -Wformat-security
-fno-strict-aliasing -fwrapv
CFLAGS_SL = -fpic
LDFLAGS = -L../../../src/common -L/usr/lib64 -Wl,--as-needed
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgport -lpgcommon -lxslt -lxml2 -lpam -lssl -lcrypto -lgssapi_krb5
-lz -lreadline -lcrypt -ldl -
lm
VERSION = PostgreSQL 9.3.2


My 9.2 pg_config:

[root@hosting table_log-0.4.4]# pg_config
BINDIR = /usr/pgsql-9.2/bin
DOCDIR = /usr/share/doc/pgsql
HTMLDIR = /usr/share/doc/pgsql
INCLUDEDIR = /usr/pgsql-9.2/include
PKGINCLUDEDIR = /usr/pgsql-9.2/include
INCLUDEDIR-SERVER = /usr/pgsql-9.2/include/server
LIBDIR = /usr/pgsql-9.2/lib
PKGLIBDIR = /usr/pgsql-9.2/lib
LOCALEDIR = /usr/pgsql-9.2/share/locale
MANDIR = /usr/pgsql-9.2/share/man
SHAREDIR = /usr/pgsql-9.2/share
SYSCONFDIR = /etc/sysconfig/pgsql
PGXS = /usr/pgsql-9.2/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--disable-rpath' '--prefix=/usr/pgsql-9.2'
'--includedir=/usr/pgsql-9.2/include' '--mandir=/usr/pgsql-9.2/share/man'
'--datadir=/usr/pgsql-9.2/share' '--with-perl' '--with-python' '--with-tcl'
'--with-tclconfig=/usr/lib64' '--with-openssl' '--with-pam' '--with-krb5'
'--with-gssapi' '--with-includes=/usr/include'
'--with-libraries=/usr/lib64' '--enable-nls' '--with-ossp-uuid'
'--with-libxml' '--with-libxslt' '--with-ldap'
'--with-system-tzdata=/usr/share/zoneinfo'
'--sysconfdir=/etc/sysconfig/pgsql' '--docdir=/usr/share/doc' 

Re: [GENERAL] pg_config problems on PG9.3/Centos?

2014-01-21 Thread Ken Tanzer
On Tue, Jan 21, 2014 at 12:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:


 Exactly where is root's path finding pg_config?

 IIRC, most of the paths shown here are actually computed relative to the
 location of the pg_config executable, so I could imagine getting this
 kind of result if you'd done something like symlinking pg_config into
 /usr/bin.


Oddly, there was a pg_config in /usr/bin that was not a symlink and not
owned by any package.  I'm really puzzled as to how it got there, but I
removed it, and symlinked the one from /usr/pgsql-9.3/bin.  It puts out
better information.

It also sounds like from your comment that symlinking to /usr/bin is
frowned upon.  What is the better way to deal with this?

So with the pg_config thing resolved, my make now gets stuck on the error
below.  I found a BSD bug report with the same message from 9.2, although I
was able to successfully compile for 9.2 previously.  I'm wondering if
anyone sees something obvious or simple that could be causing this or
could be fixed.  I know the table_log packages are kind of ancient, but
they do the trick!

table_log.c: In function ‘table_log’:
table_log.c:134: warning: implicit declaration of function
‘RelationGetNamespace’
table_log.c:140: error: dereferencing pointer to incomplete type
table_log.c: In function ‘__table_log’:
table_log.c:301: error: dereferencing pointer to incomplete type
table_log.c:310: error: dereferencing pointer to incomplete type
table_log.c:312: error: dereferencing pointer to incomplete type
table_log.c:346: error: dereferencing pointer to incomplete type
table_log.c:354: error: dereferencing pointer to incomplete type
table_log.c:373: error: dereferencing pointer to incomplete type
table_log.c:381: error: dereferencing pointer to incomplete type
table_log.c: In function ‘table_log_restore_table’:
table_log.c:794: error: ‘timestamptz_out’ undeclared (first use in this
function)
table_log.c:794: error: (Each undeclared identifier is reported only once
table_log.c:794: error: for each function it appears in.)
make: *** [table_log.o] Error 1

http://lists.freebsd.org/pipermail/freebsd-ports-bugs/2013-June/257856.html




 I would've guessed that you were invoking a pg_config shipped with the
 regular Red Hat postgres packages, except for this:

  CONFIGURE = '--disable-rpath' '--prefix=/usr/pgsql-9.3'
  '--includedir=/usr/pgsql-9.3/include' '--mandir=/
  usr/pgsql-9.3/share/man' '--datadir=/usr/pgsql-9.3/share' '--with-perl'

 which seems to prove that the package was built with the correct
 options for PGDG's file placement.

 regards, tom lane


I'm _pretty_ sure I didn't even install the CentOs postgres packages. OTOH,
where that /usr/bin/pg_config came from is a complete mystery!

Cheers,
Ken

-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] pg_config problems on PG9.3/Centos?

2014-01-21 Thread Ken Tanzer
On Tue, Jan 21, 2014 at 1:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  On Tue, Jan 21, 2014 at 12:32 PM, Tom Lane t...@sss.pgh.pa.us wrote:
  IIRC, most of the paths shown here are actually computed relative to the
  location of the pg_config executable, so I could imagine getting this
  kind of result if you'd done something like symlinking pg_config into
  /usr/bin.

  Oddly, there was a pg_config in /usr/bin that was not a symlink and not
  owned by any package.  I'm really puzzled as to how it got there, but I
  removed it, and symlinked the one from /usr/pgsql-9.3/bin.  It puts out
  better information.

 OK.

  It also sounds like from your comment that symlinking to /usr/bin is
  frowned upon.  What is the better way to deal with this?

 I had forgotten the details, but if pg_config is giving you the right
 answers then it must know about following the symlink.  So nevermind
 that worry.


I'm happy not to mind, but it seems like everything else just works out of
the box, so I wonder why not this little piece?


   So with the pg_config thing resolved, my make now gets stuck on the error
  below.  I found a BSD bug report with the same message from 9.2, although I
  was able to successfully compile for 9.2 previously.  I'm wondering if
  anyone sees something obvious or simple that could be causing this or
  could be fixed.  I know the table_log packages are kind of ancient, but
  they do the trick!

  table_log.c: In function ‘table_log’:
  table_log.c:134: warning: implicit declaration of function
  ‘RelationGetNamespace’
  table_log.c:140: error: dereferencing pointer to incomplete type
  table_log.c: In function ‘__table_log’:
  table_log.c:301: error: dereferencing pointer to incomplete type
  table_log.c:310: error: dereferencing pointer to incomplete type
  table_log.c:312: error: dereferencing pointer to incomplete type
  table_log.c:346: error: dereferencing pointer to incomplete type
  table_log.c:354: error: dereferencing pointer to incomplete type
  table_log.c:373: error: dereferencing pointer to incomplete type
  table_log.c:381: error: dereferencing pointer to incomplete type
  table_log.c: In function ‘table_log_restore_table’:
  table_log.c:794: error: ‘timestamptz_out’ undeclared (first use in this
  function)
  table_log.c:794: error: (Each undeclared identifier is reported only once
  table_log.c:794: error: for each function it appears in.)
  make: *** [table_log.o] Error 1

 It looks like the code is missing some #include's.  You at least
 need utils/rel.h for RelationGetNamespace and utils/timestamp.h
 for timestamptz_out.  Can't tell from this what typedef is missing
 but it's possible adding those will fix it; if not you'll need to
 look at the complained-of lines and then grep the Postgres include
 files to see which one provides it.

 We occasionally add or remove header inclusions of other headers,
 which probably explains why this code compiled on older versions
 but not 9.3.

 regards, tom lane


Adding those two includes did the trick--thanks!

Ken

-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] pg_config problems on PG9.3/Centos?

2014-01-21 Thread Ken Tanzer
On Tue, Jan 21, 2014 at 3:16 PM, Devrim GÜNDÜZ dev...@gunduz.org wrote:


 Hi,

 On Tue, 2014-01-21 at 13:18 -0800, Ken Tanzer wrote:

  Oddly, there was a pg_config in /usr/bin that was not a symlink and
  not owned by any package.  I'm really puzzled as to how it got there,
  but I removed it, and symlinked the one from /usr/pgsql-9.3/bin.  It
  puts out better information.

 Please don't do it. PGDG RPMs are designed for parallel installation
 (like 9.2 and 9.3 on the same machine), and then the pg_config in
 regular $PATH might be tricky.


It didn't seem like a great idea to me either, but what's the better
alternative?  Without the symlink I get lots of errors:

make USE_PGXS=1
make: pg_config: Command not found
make: pg_config: Command not found
make: pg_config: Command not found
make: pg_config: Command not found
make: pg_config: Command not found
make: pg_config: Command not found
make: pg_config: Command not found
make: pg_config: Command not found
make: pg_config: Command not found
make: pg_config: Command not found



 Anyway:

  I know the table_log packages are kind of ancient, but they do the
  trick!
 
  table_log.c: In function 'table_log':
  table_log.c:134: warning: implicit declaration of function
  'RelationGetNamespace'

 snip

 table_log is not being maintained anymore -- you can use emaj. It is
 already available in the same RPM repo:

 http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/repoview/emaj.html


I'm not opposed to newer and better, but at first glance this sounds like a
headache for no gain.  Currently table_log is doing the trick for me (I
only use it for tracking revisions, not rollbacks), and I have several
organizations running with their revision history in table_log format.  Is
the table format by any chance the same, and/or is there an easy way to
move from one to the other?



 Regards,
 --
 Devrim GÜNDÜZ
 Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
 PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
 Twitter: @DevrimGunduz , @DevrimGunduzTR


 Thanks, and Cheers,
Ken


-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


[GENERAL] Question(s) about crosstab

2013-12-17 Thread Ken Tanzer
Hi.  I've got a simple table unit_hold, with grant numbers, buildings and
counts of unit types, which I need to summarize, along with a table listing
unit types:

\d unit_hold
      Table "public.unit_hold"
        Column        |         Type          | Modifiers
----------------------+-----------------------+-----------
 grant_number_code    | character varying(10) |
 housing_project_code | character varying(10) |
 unit_type_code       | character varying(10) |
 count                | bigint                |

SELECT * FROM unit_hold limit 3;
 grant_number_code | housing_project_code | unit_type_code | count
-------------------+----------------------+----------------+-------
 1                 |                      | 4BR            |     1
 1                 |                      | 1BR            |     1
 1                 |                      | 1BR            |     1

SELECT unit_type_code,description FROM l_unit_type;
 unit_type_code | description
----------------+-------------
 5BR            | 5 Bedroom
 4BR            | 4 Bedroom
 3BR            | 3 Bedroom
 6BR            | 6 Bedroom
 UNKNOWN        | Unknown
 GROUP          | Group Home
 2BR            | 2 Bedroom
 1BR            | 1 Bedroom
 0BR            | Studio
 SRO            | SRO


I thought this would be a good candidate for crosstab.  After wrestling
with the documentation, this is the best I could come up with:

SELECT * FROM crosstab(
  'SELECT housing_project_code||''_''||grant_number_code AS
project_and_grant,grant_number_code,housing_project_code,unit_type_code,count
FROM unit_hold ORDER BY 1,2',
  'SELECT * FROM (SELECT DISTINCT unit_type_code FROM l_unit_type) foo
ORDER BY unit_type_code ~ ''^[0-9]'' DESC, unit_type_code'
) AS ct(project_and_grant varchar, grant_number_code varchar,
housing_project_code varchar, "0BR" bigint, "1BR" bigint, "2BR" bigint,
"3BR" bigint, "4BR" bigint, "5BR" bigint, "6BR" bigint, "GROUP" bigint,
"SRO" bigint, "UNKNOWN" bigint)

So here are my questions:

1)  Is there a simpler way?  I'm hoping I made this unnecessarily
cumbersome and complicated.
2)  AFAICT, if a new unit type were to be added, I'd have to rewrite this
query.  Is there any way to avoid that?
3)  It seems like everything after the first query, except for the category
field, is redundant information, and that in theory you should be able to
say crosstab('query','category_field').  Is there any inherent reason this
simpler form couldn't work, or is it just that no one has wanted to do it,
or gotten to it yet?

Thanks in advance!

Ken


-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.orghttps://mail.google.com/mail/?view=cmfs=1tf=1to=ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listhttps://mail.google.com/mail/?view=cmfs=1tf=1to=agency-general-requ...@lists.sourceforge.netbody=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Question(s) about crosstab

2013-12-17 Thread Ken Tanzer
On Tue, Dec 17, 2013 at 3:47 PM, rob stone floripa...@gmail.com wrote:


 SELECT UH.grant_number_code, UH.housing_project_code, UH. count,
 UT.description
 FROM l_unit_type UT, unit_hold UH
 WHERE UH.unit_type_code = UT.unit_type_code;


 Easier to create a view.


Thanks Rob, but that doesn't get the data into a 1 line per grant/building
format, with the unit types as columns.  That's why I was looking at
crosstab in the first place!

Cheers,
Ken

-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Question(s) about crosstab

2013-12-17 Thread Ken Tanzer
On Tue, Dec 17, 2013 at 5:11 PM, David Johnston pol...@yahoo.com wrote:

 3) Limitation of SQL - explained below:

 The function call string that you pass in is just that, a string, the SQL
 construct within which it resides has no knowledge of its contents.

 SQL has the hard requirement that at the time you submit a query all
 columns
 must be known.  If a function is polymorphic (in the sense it can output
 different columns/row-types) then when you call that function you must
 indicate which columns (and types) are going to be output by the function
 during this specific execution.


I guess crosstabs were not all that I hoped they were (basically pivot
tables), but thanks for the clear explanation.
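
For what it's worth, plain conditional aggregation looks like it would get
most of the way there for this case -- a sketch (the unit types still have to
be listed by hand, and I'm quoting "count" since it's a column name here):

SELECT grant_number_code,
       housing_project_code,
       sum(CASE WHEN unit_type_code = '0BR' THEN "count" ELSE 0 END) AS "0BR",
       sum(CASE WHEN unit_type_code = '1BR' THEN "count" ELSE 0 END) AS "1BR",
       sum(CASE WHEN unit_type_code = '2BR' THEN "count" ELSE 0 END) AS "2BR",
       -- ... one line per remaining unit type ...
       sum(CASE WHEN unit_type_code = 'SRO' THEN "count" ELSE 0 END) AS "SRO"
FROM unit_hold
GROUP BY grant_number_code, housing_project_code;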

Cheers,
Ken

-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Getting non_NULL right-side values on a non-matching join?

2013-11-24 Thread Ken Tanzer
On Sat, Nov 23, 2013 at 9:21 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  On Sat, Nov 23, 2013 at 2:20 AM, Vik Fearing vik.fear...@dalibo.com
 wrote:
  Chapter 15 of our documentation handles installing from source.
  http://www.postgresql.org/docs/current/static/installation.html

  Thanks for the link.  I really do appreciate all the documentation that
  Postgres has put together.  In this case I especially like the short
  version provided, which covers part of what I was looking for.  It would
 be
  great if there were a similar page that addressed how to set this up
  side-by-side with an existing installation, and had a cheat sheet for
   pulling in build tools and libraries.  (As in, on CentOS run "yum install
   x y z...", on Ubuntu "apt-get install a x z".)  I get that the build
  environment and libraries are outside of the scope of Postgres proper and
  maybe unfair to ask it be documented, but they're still steps people have
  to go through.  If they were included in that short version format, it
  would be fantastic!

 FWIW, I think this is outside the scope of Chapter 15, and especially
 outside the scope of the short version ;-).  If you're not wanting to
 do the /usr/local approach, you're most likely wanting to build a
 replacement for some distro-supplied packaging of Postgres.


Well yes and no.  In my case I'm actually running CentOs with the PGDG
packages straight from you folks.

My starting point was that right now I don't have a non-production spare
machine.  So I need to _assure_ myself I'm not going to screw things up
(time sink!).  I don't keep a build environment, but don't mind installing
one temporarily.

The two approaches I'd thought of were:

1) Build a new binary, stop server, move new binary into place; start server
(Not sure if this works or not!)

2) Build the whole thing, install and run side-by-side

So maybe /usr/local would work, but it seems like SRPM + patch will be the
closest to matching my current setup, and do the quickest job of pulling in
everything needed.



 There
 are too many of those, and they change too often, for us to be able
 to provide reasonable instructions for that in our formal docs.
 Moreover, 99% of what you need to know for that is not PG-specific but
 distro-specific.

 Perhaps it'd be worth setting up page(s) on our wiki about this, though?
 The question certainly comes up often enough.

 regards, tom lane


As a side thought, what about creating a source RPM configured to install
side-by-side, and on a different port?  Wouldn't that be just a few tweaks
to make?  And since you're maintaining packages anyway...

But if not, it would be great if the wiki page included a "to build a
side-by-side RPM, these are the X things you have to change after you
install the source" section, and I guess that would work for the
distro-supplied RPMs as well.

Then I could boil it down to:

rpm -qa > original.packages
Download source RPM
yum-builddep source RPM
rpm -i source RPM
Apply patch(es)
Tweak side-by-side config (if not done already)
Build  install binary RPM
(possibly copy binary, or data directory)
test / run / test /run...
rpm -e $( rpm -qa | cat - original.packages | sort | uniq -u )
and maybe remove a few files or folders (listed on the wiki page of course!)

And then it's done, with little hassle and no trace left behind.  Although
maybe I'm missing some complexities here.  And while I know it's easy to
suggest work for other people, it does seem that at least documenting it
would really simplify things for casual or occasional builders or testers.
 I can say for sure that if I'd found something like this documented I
would have tested your patch.  Of course in this case it wouldn't have
ended up doing any good... ;)

Cheers,
Ken


-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Getting non_NULL right-side values on a non-matching join?

2013-11-23 Thread Ken Tanzer
On Sat, Nov 23, 2013 at 2:20 AM, Vik Fearing vik.fear...@dalibo.com wrote:

  On 11/23/2013 07:41 AM, Ken Tanzer wrote:

 OTOH, if there were a very clear and credible page with good instructions
  on installing build environment + postgres (for say RHEL, Ubuntu and Fedora)
 that would install side by side with an existing installation (and how to
 remove it all cleanly) it would make me and maybe others more able/likely
 to test patches.  There may be such a page--I just didn't find it.  And I
 was somewhat dissuaded from building an RPM on my CentOs machine by the
 note in the Postgres wiki that the ubuntu packages allow multiple
 versions more easily than other packaging schemes.

  Just a thought.  I know all the information is out there and can be
 pieced together.  Like many computing endeavors, I'm sure the second time
 would be quick and easy, but likely not so much the first!


 Chapter 15 of our documentation handles installing from source.
 http://www.postgresql.org/docs/current/static/installation.html

 --
 Vik

  Thanks for the link.  I really do appreciate all the documentation that
Postgres has put together.  In this case I especially like the short
version provided, which covers part of what I was looking for.  It would be
great if there were a similar page that addressed how to set this up
side-by-side with an existing installation, and had a cheat sheet for
pulling in build tools and libraries.  (As in, on CentOS run "yum install
x y z...", on Ubuntu "apt-get install a x z".)  I get that the build
environment and libraries are outside of the scope of Postgres proper and
maybe unfair to ask it be documented, but they're still steps people have
to go through.  If they were included in that short version format, it
would be fantastic!

Cheers,
Ken




-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Getting non_NULL right-side values on a non-matching join?

2013-11-22 Thread Ken Tanzer
On Fri, Nov 22, 2013 at 8:04 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 I wrote:
  If you just need a work-around-it-right-now solution, I'd suggest
  introducing an OFFSET 0 optimization fence into one or another of the
  levels of view below the outer joins.  I've not experimented but I think
  that ought to fix it, at some possibly-annoying cost in query
  optimization.  Hopefully I'll have another answer tomorrow.

 I found a less nasty workaround: if you replace my_field by
 foo.my_field in the SELECT list of boo_top_view, the problem goes away.
 The bug seems to be due to wrong processing of join alias variables
 during subquery pullup.  The unqualified name my_field is a reference
 to an output alias of the unnamed LEFT JOIN in that view, but if you
 qualify it with the name of the component table, it's not an alias
 anymore so the bug doesn't trigger.

 Thanks for reporting this!  I'll try to make sure there's a real fix
 in the next update releases, which will be out PDQ because of the
 replication bug that was identified this week.

 regards, tom lane


I'm happy to be of assistance, but mostly glad it wasn't a me-bug! :)  I
put an alias on every field reference in the view, and the problem did
indeed go away.  Thank you very much for providing an easy workaround!
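
In terms of the boiled-down test case from earlier in this thread, that
change amounts to roughly this:

CREATE OR REPLACE VIEW boo_top_view AS
SELECT
    client_id,
    foo.my_field    -- qualified, so it's no longer a join-alias reference
FROM (
    SELECT a.client_id, a.my_field FROM boo_view AS a
) foo
LEFT JOIN (
    SELECT client_id FROM a_table
) a2 USING (client_id);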

Cheers,
Ken

-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Getting non_NULL right-side values on a non-matching join?

2013-11-22 Thread Ken Tanzer
Thanks Tom.  The workaround seems harmless, even good coding practice, so
it's all good on my end.  If it were useful to you I'd gladly build and
test it, but I doubt that's the case.  But just say the word!  Otherwise
it's just a question of time and priorities, and it seems likely to chew up
at least a few hours if I'm careful about it.

OTOH, if there were a very clear and credible page with good instructions
on installing build environment + postgres (for say RHEL, Ubuntu and Fedora)
that would install side by side with an existing installation (and how to
remove it all cleanly) it would make me and maybe others more able/likely
to test patches.  There may be such a page--I just didn't find it.  And I
was somewhat dissuaded from building an RPM on my CentOs machine by the
note in the Postgres wiki that the ubuntu packages allow multiple versions
more easily than other packaging schemes.

Just a thought.  I know all the information is out there and can be pieced
together.  Like many computing endeavors, I'm sure the second time would be
quick and easy, but likely not so much the first!

Cheers,
Ken


On Fri, Nov 22, 2013 at 8:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  On Fri, Nov 22, 2013 at 8:04 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  I found a less nasty workaround: if you replace my_field by
  foo.my_field in the SELECT list of boo_top_view, the problem goes
 away.

  I'm happy to be of assistance, but mostly glad it wasn't a me-bug! :)  I
  put an alias on every field reference in the view, and the problem did
  indeed go away.  Thank you very much for providing an easy workaround!

 Great, I'm glad that was good enough for you.  There's a real fix
 committed here:

 http://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=c0aa210f6ebab06ca3933c735c7c6d2b8bdd024e
 but since you expressed some discomfort about patching source before,
 it's probably best if you just work around it till we put out new
 releases.

 regards, tom lane




-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


[GENERAL] Getting non_NULL right-side values on a non-matching join?

2013-11-21 Thread Ken Tanzer
Hello.  In doing a left join with a particular view as the right table, and
non-matching join criteria, I am getting values returned in a few fields.
 All the rest are NULL.  I would expect all the right side values to be
NULL.

(The view is large and messy, but it doesn't seem like that should matter.
 I've attached the create statement for the view anyway.)

Am I missing something really really obvious about LEFT JOINs here?  This
statement seems to confirm my expectations:

This query is called a left outer join because the table mentioned on the
left of the join operator will have each of its rows in the output at least
once, whereas the table on the right will only have those rows output that
match some row of the left table. *When outputting a left-table row for
which there is no right-table match, empty (null) values are substituted
for the right-table columns.*
http://www.postgresql.org/docs/9.2/static/tutorial-join.html

Thanks in advance.

Ken

-- Nothing special about the -1 id here, just an example of a non-matching
value.

ag_spc= SELECT foo.client_id AS foo_id,rent_info.client_id AS
ri_id,move_in_type,annual_income,monthly_income_total,dependent_count from
(SELECT -1 AS client_id) foo LEFT JOIN rent_info USING (client_id);
 foo_id | ri_id | move_in_type | annual_income | monthly_income_total | dependent_count
--------+-------+--------------+---------------+----------------------+-----------------
     -1 |       | Move-in      |             0 |                    0 |               0
(1 row)


ag_spc= SELECT * FROM rent_info WHERE client_id = -1;
(wide output elided -- rent_info has dozens of columns and the wrapped header
rows are unreadable here; the relevant point is that this query returns no
rows, i.e. there is no rent_info row with client_id = -1)

Re: [GENERAL] Getting non_NULL right-side values on a non-matching join?

2013-11-21 Thread Ken Tanzer
On Thu, Nov 21, 2013 at 2:22 PM, Kevin Grittner kgri...@ymail.com wrote:

 Ken Tanzer ken.tan...@gmail.com wrote:

  In doing a left join with a particular view as the right table,
  and non-matching join criteria, I am getting values returned in a
  few fields.  All the rest are NULL.  I would expect all the right
  side values to be NULL.

 What is the output of executing?:

 SELECT version();


I think I'm current on 9.2.5 / CentOs 6.4.

ag_spc= SELECT version();
                                                    version
---------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.2.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit
(1 row)

[spc@hosting agency_code]$ rpm -qi postgresql92
Name: postgresql92 Relocations: (not relocatable)
Version : 9.2.5 Vendor: (none)
Release : 1PGDG.rhel6   Build Date: Wed 09 Oct 2013
06:02:22 AM PDT
Install Date: Tue 15 Oct 2013 06:22:39 PM PDT  Build Host:
koji-sl6-x86-64-pg92
Group   : Applications/DatabasesSource RPM:
postgresql92-9.2.5-1PGDG.rhel6.src.rpm
Size: 5279557  License: PostgreSQL
Signature   : DSA/SHA1, Wed 09 Oct 2013 06:02:31 AM PDT, Key ID
1f16d2e1442df0f8
URL : http://www.postgresql.org/
Summary : PostgreSQL client programs and libraries
...


Don't know if this is useful information, but I was surprised that the
problem continues even wrapping the view as a subquery, and then even if
the subquery has a client_id IS NOT NULL clause:

ag_spc= SELECT foo.client_id AS foo_id,boo.client_id AS
ri_id,move_in_type,annual_income,monthly_income_total,dependent_count FROM
(SELECT -1 AS client_id) foo LEFT OUTER JOIN (SELECT * FROM rent_info WHERE
client_id IS NOT NULL) boo USING (client_id);
 foo_id | ri_id | move_in_type | annual_income | monthly_income_total | dependent_count
--------+-------+--------------+---------------+----------------------+-----------------
     -1 |       | Move-in      |             0 |                    0 |               0
(1 row)

I'm happy to provide whatever additional information is helpful--just let
me know.  Thanks.

Ken



-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Getting non_NULL right-side values on a non-matching join?

2013-11-21 Thread Ken Tanzer
On Thu, Nov 21, 2013 at 5:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  Hello.  In doing a left join with a particular view as the right table,
 and
  non-matching join criteria, I am getting values returned in a few fields.
   All the rest are NULL.  I would expect all the right side values to be
  NULL.

 Hmmm ... the join conditions involving COALESCE() remind me of a bug I
 just fixed last week.  Are you in a position to try a patch?  If so,
 here's the fix against 9.2:

 http://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=51b6ae6bba75bca2374a24cf7c740da74c955ad5

 If that doesn't help, please see if you can extract a self-contained
 test case.


Getting a build environment together seemed more painful, so here's a test
case.  Just for fun, I tried this in another database on a different
machine (and with 9.0.8).  I got the same results, so it doesn't seem to
be a case of something wacky with my particular database.

Cheers,
Ken

p.s.,  Not your problem I know, but I need to deal with this somehow and
rather soon.  If the patch you mentioned does fix this, and that's the
easiest way to get this fixed on my machine, please do let me know and I'll
start googling Postgres build source.  Thanks!


DROP VIEW IF EXISTS boo_top_view;
DROP VIEW IF EXISTS boo_view;
DROP TABLE IF EXISTS boo_table;
DROP TABLE IF EXISTS a_table;

CREATE TABLE boo_table ( client_id INTEGER);
CREATE TABLE a_table ( client_id INTEGER );

CREATE OR REPLACE VIEW boo_view AS
SELECT
r1.client_id,
666 AS my_field
FROM boo_table r1;

CREATE OR REPLACE VIEW boo_top_view AS
SELECT
client_id,
my_field
FROM (
SELECT
a.client_id,
a.my_field
FROM boo_view AS a
) foo
--Problem goes away if you take out this left join
LEFT JOIN (
SELECT client_id FROM  a_table
)  a2 USING (client_id);

SELECT foo.client_id AS foo_id,boo.client_id AS ri_id,my_field from (SELECT
-1 AS client_id) foo LEFT JOIN boo_top_view boo USING (client_id);

 foo_id | ri_id | my_field
--------+-------+----------
     -1 |       |      666




 --
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Getting non_NULL right-side values on a non-matching join?

2013-11-21 Thread Ken Tanzer
The issue also seems tied to the non-NULL constant in the view.

This one yields rows
  33::int AS b_field

This one doesn't
NULL::int AS b_field

DROP VIEW IF EXISTS boo_top_view;
DROP VIEW IF EXISTS boo_view;
DROP TABLE IF EXISTS boo_table;
DROP TABLE IF EXISTS a_table;

CREATE TABLE boo_table ( client_id INTEGER,b_field INTEGER);
CREATE TABLE a_table ( client_id INTEGER );

CREATE OR REPLACE VIEW boo_view AS
SELECT
r1.client_id,
--  This one yields rows
33::int AS b_field
--  This one doesn't
--  NULL::int AS b_field

FROM boo_table r1;

CREATE OR REPLACE VIEW boo_top_view AS
SELECT
client_id,
--my_field
b_field
FROM (
SELECT
a.client_id,
--  a.my_field
a.b_field
FROM boo_view AS a
) foo
--Problem goes away if you take out this left join
LEFT JOIN (
SELECT client_id FROM  a_table
)  a2 USING (client_id);

SELECT foo.client_id AS foo_id,boo.client_id AS ri_id,b_field from (SELECT
-1 AS client_id) foo LEFT JOIN boo_top_view boo USING (client_id);





On Thu, Nov 21, 2013 at 7:49 PM, Ken Tanzer ken.tan...@gmail.com wrote:

 On Thu, Nov 21, 2013 at 5:12 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  Hello.  In doing a left join with a particular view as the right table,
 and
  non-matching join criteria, I am getting values returned in a few
 fields.
   All the rest are NULL.  I would expect all the right side values to be
  NULL.

 Hmmm ... the join conditions involving COALESCE() remind me of a bug I
 just fixed last week.  Are you in a position to try a patch?  If so,
 here's the fix against 9.2:

 http://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=51b6ae6bba75bca2374a24cf7c740da74c955ad5

 If that doesn't help, please see if you can extract a self-contained
 test case.


 Getting a build environment together seemed more painful, so here's a test
 case.  Just for fun, I tried this in another database on a different
 machine (and with 9.0.08).  I got the same results, so it doesn't seem to
 be a case of something wacky with my particular database.

 Cheers,
 Ken

 p.s.,  Not your problem I know, but I need to deal with this somehow and
 rather soon.  If the patch you mentioned does fix this, and that's the
 easiest way to get this fixed on my machine, please do let me know and I'll
 start googling Postgres build source.  Thanks!


 DROP VIEW IF EXISTS boo_top_view;
 DROP VIEW IF EXISTS boo_view;
 DROP TABLE IF EXISTS boo_table;
 DROP TABLE IF EXISTS a_table;

 CREATE TABLE boo_table ( client_id INTEGER);
 CREATE TABLE a_table ( client_id INTEGER );

 CREATE OR REPLACE VIEW boo_view AS
 SELECT
 r1.client_id,
 666 AS my_field
 FROM boo_table r1;

 CREATE OR REPLACE VIEW boo_top_view AS
 SELECT
 client_id,
 my_field
 FROM (
 SELECT
 a.client_id,
 a.my_field
 FROM boo_view AS a
 ) foo
 --Problem goes away if you take out this left join
 LEFT JOIN (
 SELECT client_id FROM  a_table
 )  a2 USING (client_id);

 SELECT foo.client_id AS foo_id,boo.client_id AS ri_id,my_field from
 (SELECT -1 AS client_id) foo LEFT JOIN boo_top_view boo USING (client_id);

  foo_id | ri_id | my_field
 +---+--
  -1 |   |  666




 --
 AGENCY Software
 A data system that puts you in control
 100% Free Software
 *http://agency-software.org/ http://agency-software.org/*
 ken.tan...@agency-software.org
 (253) 245-3801

 Subscribe to the mailing 
 listagency-general-requ...@lists.sourceforge.net?body=subscribe
  to
 learn more about AGENCY or
 follow the discussion.




-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Help : Sum 2 tables based on key from other table

2013-11-18 Thread Ken Tanzer
If the tables aren't huge, you're not concerned about optimization, and you
just want to get your numbers, I think something like this would do the
trick.  I haven't actually tried it 'cause I didn't have easy access to
your tables:

SELECT
  a.product_id,
  a.product_name,
  b.initial_stock_sum,
  c.in_out_sum,
  c.in_sum,
  c.out_sum
FROM
  a
LEFT JOIN
  (SELECT
product_id,
SUM(initial_stock) AS initial_stock_sum
  FROM b
  GROUP BY product_id
  ) b USING (product_id)
LEFT JOIN
  (SELECT
product_id,
    sum(CASE WHEN date < 'BEGINNING DATE' THEN in - out ELSE 0 END) AS
in_out_sum,
sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN in
ELSE 0 END) AS in_sum,
sum(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN out
ELSE 0 END) AS out_sum
   FROM c
   GROUP BY product_id
   ) c USING (product_id)
WHERE a.supplier_id='XXX';
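
Equivalently -- just a sketch, using the same assumed table and column
names and the same 'BEGINNING DATE'/'ENDING DATE' placeholders, and noting
that "in"/"out" may need double-quoting if those are the literal column
names -- the per-table aggregates can be pulled out into CTEs, per the
WITH suggestion quoted below:

WITH b_sums AS (
  SELECT product_id, SUM(initial_stock) AS initial_stock_sum
  FROM b
  GROUP BY product_id
), c_sums AS (
  SELECT
    product_id,
    SUM(CASE WHEN date < 'BEGINNING DATE' THEN in - out ELSE 0 END) AS in_out_sum,
    SUM(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN in ELSE 0 END) AS in_sum,
    SUM(CASE WHEN date BETWEEN 'BEGINNING DATE' AND 'ENDING DATE' THEN out ELSE 0 END) AS out_sum
  FROM c
  GROUP BY product_id
)
SELECT
  a.product_id,
  a.product_name,
  b_sums.initial_stock_sum,
  c_sums.in_out_sum,
  c_sums.in_sum,
  c_sums.out_sum
FROM a
LEFT JOIN b_sums USING (product_id)
LEFT JOIN c_sums USING (product_id)
WHERE a.supplier_id='XXX';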

Cheers,
Ken


On Mon, Nov 18, 2013 at 12:47 AM, Raymond O'Donnell r...@iol.ie wrote:

 On 18/11/2013 02:16, Hengky Liwandouw wrote:
  Dear Friends,
 
  Please help for the select command, as i had tried many times and
  always can not display the result as what i want.
 
  I am looking for the solution on google but still can not found the
  right answer to solve the problem.
 
  I have 3 tables :
 
  Table A ProductID ProductName SupplierID
 
  Table B ProductID InitialStock
 
  Table C ProductID Date In Out
 
  1. I want to select all productID from Table A where
  supplierID='XXX'.
 
  2. Based on list from Step.1 : sum the initialstock from  Table B
 
  3. Based on list from Step 1 : Sum (in-out) from Table C where date <
  'BEGINNING DATE'
 
  4. Based on list from Step 1 : Sum (in) and sum(out) from Table C
  where date between 'BEGINNING DATE' and 'ENDING DATE'
 
  So the result will look like this :
 
  ProductID | ProductName | SumofInitialStock | Sum(In-Out) < BeginningDate | SumofIN | SumofOut
  xxxx      | xxxx        |                99 |                          99 |      99 |       99
  xxxx      | xxxx        |                99 |                          99 |      99 |       99
  xxxx      | xxxx        |                99 |                          99 |      99 |       99

 You could try using common table expressions, which let you build up to
 your final result in steps. Some reading:

 http://www.postgresql.org/docs/9.3/static/queries-with.html


 http://www.chesnok.com/daily/2013/11/12/how-i-write-queries-using-psql-ctes/


 Ray.


 --
 Raymond O'Donnell :: Galway :: Ireland
 r...@iol.ie


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


[GENERAL] What does this error message mean?

2013-11-17 Thread Ken Tanzer
Hi.  I got an error message reported to me that I've never seen before, and
I'm not quite sure what it means or what would cause it.  When I re-run the
query now, it runs without complaint, so the problem seems to have gone
away.  Which of course I don't understand either!  Would be nice to know
for the future.  This is on 9.2.5.

The message says the error is at line 195 of a function.  I've attached the
function source in case it's relevant/helpful.

(And BTW, how exactly is that line number counted? Does the CREATE
FUNCTION statement count as line 1? Or does it start with the opening
quote?  And it looks like whitespace and comments are preserved internally,
so is it safe to assume the line number should match a source file
used to create the function?  My best guess is that line 195 in this case
is "pay_test.is_deleted=false;", but that doesn't help me understand this
error any better!)

CREATE FUNCTION blah blah $$
DECLARE... Line 1?

CREATE FUNCTION blah blah
$$

DECLARE... Line 1? 2?

CREATE FUNCTION blah blah
$$DECLARE... Line 1?


Here's the query:

INSERT INTO tbl_payment SELECT * FROM generate_payments
('12/1/2013','ASSIST_PRI','3852',sys_user()) WHERE NOT
ROW(client_id,payment_type_code,payment_date) IN (SELECT
client_id,payment_type_code,payment_date FROM payment_valid);

The error message was:

ERROR:  type of parameter 70 (text) does not match that when preparing the
plan (unknown) CONTEXT: PL/pgSQL function
generate_payments(date,text,integer,integer) line 195 at assignment

Thanks in advance for shedding any light on this.

Ken


-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.
CREATE OR REPLACE FUNCTION pro_rate_amount(amount float, p_start date, p_end date) returns decimal(7,2)
language plpgsql AS $$
	-- Function to calculate SPC pro-rate amounts
	-- Can handle multiple months
	-- Doesn't round--need to do it yourself
DECLARE
		result	float = 0;
		tmp_date date;
		work_start date;
		work_end date;

argcnt  int = 1;
chrcnt  int = 0;
fmtlen  int;
CHR text;
output  text = '';
BEGIN
		work_start = p_start;
		work_end = p_end;
		-- If multiple months, break into separate pieces and call recursively
		LOOP
			IF (date_part('month',work_start) <> date_part('month',work_end)) or (date_part('year',work_start) <> date_part('year',work_end)) THEN
tmp_date = date_trunc('month',work_start) + '1 month - 1 day';
result = result + pro_rate_amount(amount,work_start,tmp_date);
work_start = tmp_date + 1;
			ELSE	
EXIT;
			END IF;
		END LOOP;
		result = result + (amount / days_in_month(work_start) * (work_end - work_start + 1));
RETURN result::decimal(7,2);
END;
$$;
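
-- A quick sanity check with hypothetical values (assumes the days_in_month()
-- helper used above exists and returns 30 for November): half of November at
-- $600/month should pro-rate to 300.00.
SELECT pro_rate_amount(600, '2013-11-16', '2013-11-30');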

--CREATE OR REPLACE FUNCTION generate_payments ( date, text[], int[] ) RETURNS SETOF record AS $FUNC$
CREATE OR REPLACE FUNCTION generate_payments ( date, text, int, int ) RETURNS SETOF tbl_payment AS $FUNC$
DECLARE
mdate ALIAS FOR $1;
	mtype ALIAS FOR $2;
	mclient ALIAS FOR $3;
	by_who ALIAS FOR $4;
	mdate_text text;
	quer_assist text;
	quer_assist_pri text;
	quer_security text;
	quer_utility text;
	quer_utility_pr text;
	quer_assist_x text;
	quer_assist_pri_x text;
	quer_security_x text;
	quer_utility_x text;
	quer_utility_pr_x text;
	duplicate_clause text;
	final_query text;
	union_clause text;
	cid_clause text;
	payment record;
--	pay_test tbl_payment_test%rowtype;
	pay_test tbl_payment%rowtype;

BEGIN

-- Define ASSISTANCE query

	quer_assist = $$
	 SELECT
		ro.client_id,
		'%'::date AS payment_date,
		'ASSIST'::text AS payment_type_code,
		rent_amount_spc AS amount,
		ro.housing_project_code,
		ro.housing_unit_code,
		ri.grant_number_code,
		NULL AS comment,
		vendor_number
--	FROM residence_own ro
	FROM residence_own_current ro
		LEFT JOIN rent_info ri ON 
		(ri.client_id=ro.client_id AND 
		'%' BETWEEN effective_date AND COALESCE(effective_date_end,'%'))
	WHERE '%' BETWEEN ro.residence_date AND COALESCE(ro.residence_date_end,'%')
	AND ro.is_active_manual
	$$;

-- Define UTILITY query
	quer_utility := $$
	SELECT
		ro.client_id,
		'%'::date AS payment_date,
		'UTILITY'::text AS payment_type_code,
		0 - rent_amount_tenant AS amount,
		ro.housing_project_code,
		ro.housing_unit_code,
		grant_number_code,
		NULL AS comment,
		tenant_vendor_number AS vendor_number
--	FROM residence_own ro
	FROM residence_own_current ro
		LEFT JOIN rent_info ri ON 
		(ri.client_id=ro.client_id AND 
		'%' BETWEEN effective_date AND COALESCE(effective_date_end,'%'))
	WHERE '%' BETWEEN ro.residence_date AND COALESCE(ro.residence_date_end,'%')
	AND ro.is_active_manual
	AND rent_amount_tenant <= -5 -- Under $5 utility payments are not issued.
	$$;

-- Define SECURITY query
	quer_security := $$
	SELECT
		

Re: [GENERAL] What does this error message mean?

2013-11-17 Thread Ken Tanzer
On Sun, Nov 17, 2013 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  Hi.  I got an error message reported to me that I've never seen before,
 and
  I'm not quite sure what it means or what would cause it.

  ERROR:  type of parameter 70 (text) does not match that when preparing
 the
  plan (unknown) CONTEXT: PL/pgSQL function
  generate_payments(date,text,integer,integer) line 195 at assignment

 I think this must mean that you changed the schema of table tbl_payment
 during this session, and then re-ran the function.  plpgsql isn't as good
 as it could be about dealing with intra-session changes of composite
 types.  The reference to parameter 70 seems a bit odd though, it doesn't
 seem like you have anywhere near 70 variables in that function ...



 (thinks about it for a bit)  Actually it seems more likely that a change
 in the rowtype of payment caused this, ie some change in the output
 column set of that final_query query.  Difficult to guess more than
 that without more context.

 regards, tom lane


Thanks Tom.  If you really mean it that the schema must have changed during
the session, that seems impossible (er, highly unlikely).  The error was
generated through a web app that doesn't know how to do any schema changing.

If there's a broader window, though, schema changes do seem plausible.  The
type of that comment field hasn't changed, but on Friday I did some
dropping and recreating of both the generate_payments function and the
views it draws upon.  If the function was created before the view existed,
would that account for this error?  (Leaving aside parameter 70, of
course).  It is possible that this row was the first one generated by the
function since the schema drops/creates on Friday.  (It actually looks like
it was 9 minutes after another row, but that's based on comparing server
time to a screenshot of a client's desktop with their clock showing, so I
wouldn't put too much faith in that.)

And if this error was from the Friday schema changes, would it have
auto-corrected itself so it only happened the one time?

Cheers,
Ken

-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] What does this error message mean?

2013-11-17 Thread Ken Tanzer

 The type of that comment field hasn't changed


Oh, and I'm going to slightly eat my words, or at least elaborate.  That
comment field has been in all the views unchanged.  Until Tuesday, though,
the field wasn't being used or referenced in the function.  So that line
195 is actually new as of Tuesday.  But the function was replaced at the
time, and definitely has generated rows since then.

But thinking about it some more, the function runs one of 5 possible
queries.  4 of them select NULL as comment (no cast), while the fifth (and
the one that caused this error) selects 'a string'. That actually got me
thinking more, and I can now reproduce the error.  If I run the query  with
any of the NULL comments, and then with the string, the query consistently
fails.  And actually if I run the string query first, the other 4 then will
fail for the rest of the session.  (Example below.)

I assume this will go away if I change my lazy query and cast my NULLs, but
still wonder if this is something that should be expected to fail?
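
For anyone following along, the change in question is just making every
variant produce the same explicit type (the column name here is from my
function source; the statements are only a sketch):

-- Lazy version: resolves to "unknown" until the plan is prepared
SELECT NULL AS comment;

-- Explicitly typed version, matching the variant that selects a text string
SELECT NULL::text AS comment;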

Thanks,
Ken

--'ASSIST'=untyped NULL comment, 'ASSIST_PRI'=string comment
SELECT * FROM generate_payments ('12/1/2013','ASSIST','3852',sys_user());
(one row returned)
SELECT * FROM generate_payments
('12/1/2013','ASSIST_PRI','3852',sys_user());
ERROR:  type of parameter 70 (text) does not match that when preparing the
plan (unknown)
CONTEXT:  PL/pgSQL function generate_payments(date,text,integer,integer)
line 195 at assignment

(quit psql, start psql)

SELECT * FROM generate_payments
('12/1/2013','ASSIST_PRI','3852',sys_user());
(one row returned)
SELECT * FROM generate_payments ('12/1/2013','ASSIST','3852',sys_user());
ERROR:  type of parameter 70 (unknown) does not match that when preparing
the plan (text)
CONTEXT:  PL/pgSQL function generate_payments(date,text,integer,integer)
line 195 at assignment


On Sun, Nov 17, 2013 at 7:03 PM, Ken Tanzer ken.tan...@gmail.com wrote:


 On Sun, Nov 17, 2013 at 6:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  Hi.  I got an error message reported to me that I've never seen before,
 and
  I'm not quite sure what it means or what would cause it.

  ERROR:  type of parameter 70 (text) does not match that when preparing
 the
  plan (unknown) CONTEXT: PL/pgSQL function
  generate_payments(date,text,integer,integer) line 195 at assignment

 I think this must mean that you changed the schema of table tbl_payment
 during this session, and then re-ran the function.  plpgsql isn't as good
 as it could be about dealing with intra-session changes of composite
 types.  The reference to parameter 70 seems a bit odd though, it doesn't
 seem like you have anywhere near 70 variables in that function ...



 (thinks about it for a bit)  Actually it seems more likely that a change
 in the rowtype of payment caused this, ie some change in the output
 column set of that final_query query.  Difficult to guess more than
 that without more context.

 regards, tom lane


 Thanks Tom.  If you really mean it that the schema must have changed
 during the session, that seems impossible (er, highly unlikely).  The error
 was generated through a web app that doesn't know how to do any schema
 changing.

 If there's a broader window, though, schema changes do seem plausible.
  The type of that comment field hasn't changed, but on Friday I did some
 dropping and recreating of both the generate_payments function and the
 views it draws upon.  If the function was created before the view existed,
 would that account for this error?  (Leaving aside parameter 70, of
 course).  It is possible that this row was the first one generated by the
 function since the schema drops/creates on Friday.  (It actually looks like
 it was 9 minutes after another row, but that's based on comparing server
 time to a screenshot of a client's desktop with their clock showing, so I
 wouldn't put too much faith in that.)

 And if this error was from the Friday schema changes, would it have
 auto-corrected itself so it only happened the one time?

 Cheers,
 Ken

 --
 AGENCY Software
 A data system that puts you in control
 100% Free Software
 *http://agency-software.org/ http://agency-software.org/*
 ken.tan...@agency-software.org
 (253) 245-3801

 Subscribe to the mailing 
 listagency-general-requ...@lists.sourceforge.net?body=subscribe
  to
 learn more about AGENCY or
 follow the discussion.




-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] What does this error message mean?

2013-11-17 Thread Ken Tanzer
Agreed.  Although given that you can cast text to unknown, and NULL to
text, it's not intuitively clear why this would have to fail absent
replanning.  However, knowing nothing about Postgres internals, I'm happy
to take your word for it! Thanks again.

Ken


On Sun, Nov 17, 2013 at 7:59 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  But thinking about it some more, the function runs one of 5 possible
  queries.  4 of them select NULL as comment (no cast), while the fifth
 (and
  the one that caused this error) selects 'a string'.

 Ah.  Fixing that so all the variants produce the same (explicit) type
 should take care of this.

  I assume this will go away if I change my lazy query and cast my NULLs,
 but
  still wonder if this is something that should be expected to fail?

 Ideally it wouldn't, but it's not clear what it'd cost to fix it.
 If we just silently replanned when the query output types changed,
 then this type of situation would work but would carry a large hidden
 performance penalty.  That's not too appetizing either.

 regards, tom lane




-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/ http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


[GENERAL] pg_dump doesn't restore on newer database due to schema issues (bug or limitation?)

2013-10-21 Thread Ken Tanzer
When trying to move a database from 8.3 to 9.2.5, the plain text pg_dump
did not restore without errors.  (I used pg_dump from 9.2.5)  The problem
seems to relate to references to other schemas and the schema search paths.

First, here's the error message:

psql:/tmp/spc_test_1005_dump_with_pg_9_2_5.orig.sql:451247: ERROR:
 relation tbl_housing_unit does not exist
LINE 3:  SELECT a.housing_project_code FROM tbl_housing_unit a WHERE...
^
QUERY:

SELECT a.housing_project_code FROM tbl_housing_unit a WHERE
LOWER(a.housing_unit_code)=LOWER($1) LIMIT 1;


CONTEXT:  SQL function housing_project_from_unit during inlining
COPY tbl_unit_absence, line 1: 1 [data snipped] \...

This database has public and spc schemas.  tbl_housing_unit (table) and
housing_project_from_unit (function) are in the public schema.  The error
occurs during this operation:

COPY tbl_unit_absence (unit_absence_id, client_id, housing_project_code,
housing_unit_code, unit_absence_date, unit_absence_date_end,
absence_reason_code, comment, added_by, added_at, changed_by, changed_at,
is_deleted, deleted_at, deleted_by, deleted_comment, sys_log) FROM stdin;

tbl_unit_absence is in the spc schema.  It has a constraint that uses the
housing_project_from_unit (from public).  Prior to running the copy
command, the dump file has done a SET search_path = spc, pg_catalog; and
so it doesn't find tbl_housing_unit from public.

I was able to get my database restored by changing the SET search_path
commands to spc, public, pg_catalog and public, spc, pg_catalog so this
isn't a practical issue for me.  Even more so because those relations were
all meant to be in the public schema--things just got a bit screwy.

But I haven't seen anything that indicates this should stop a pg_dump from
working, and so wonder if this should be reported as a bug.  It might be a
known limitation, or maybe it's just tough luck if you cross schemas?

I'm happy to provide more information if it's helpful.  Thanks.

Ken






-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] pg_dump doesn't restore on newer database due to schema issues (bug or limitation?)

2013-10-21 Thread Ken Tanzer
On Mon, Oct 21, 2013 at 6:55 AM, Adrian Klaver adrian.kla...@gmail.comwrote:

 What was the pg_dump command you used to dump the database?


pg_dump -p  -h localhost -F p -U spc_ag spc_test_1005 > ~/spc_test_1005_dump_with_pg_9_2_5.sql



 So to be clear, housing_project_from_unit was not restored at all unless
 you manually changed the search_path, or did Postgres throw an error when
 it tried to restore it at a later point?


I believe the housing_project_from_unit() function was created.
tbl_unit_absence was created, and the error was caused when it tried to
populate tbl_unit_absence, because it had the constraint using
housing_project_from_unit().  That function couldn't find
tbl_housing_unit, because it was in the public schema, which was not in
the search path at that point.

I didn't actually try it, but it's a plain SQL dump, and there were no
attempts to populate tbl_unit_absence further on in the file.

Ken


-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] pg_dump doesn't restore on newer database due to schema issues (bug or limitation?)

2013-10-21 Thread Ken Tanzer
On Mon, Oct 21, 2013 at 7:47 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Ken Tanzer ken.tan...@gmail.com writes:
  When trying to move a database from 8.3 to 9.2.5, the plain text pg_dump
  did not restore without errors.  (I used pg_dump from 9.2.5)  The problem
  seems to relate to references to other schemas and the schema search
 paths.

 What this looks like to me is that you have a SQL function which isn't
 protecting itself against changes in search_path.  It could fail in any
 context where somebody's changed search_path, not just a restore run.
 You should consider fully qualifying the table reference in the function's
 source code, or adding a SET search_path clause to the function
 definition.


That all sounds about right.  It's just that my previous experience had
been that you dump a file with pg_dump and it restores OK.  These relations
were all _supposed_ to be in the same schema, so it may reflect a poor or
wacky (or accidental!) use case, but the database could run OK with the
search path set as needed, whereas the dump seems destined to fail.  There
may be no way around it, but it's helpful for me to know that a dump is not
guaranteed to restore!
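
For anyone hitting the same thing, here's a rough sketch of the two fixes
Tom suggests, using the table and function names from this thread (the
argument and return types below are guesses for illustration only):

-- Option 1: schema-qualify the table inside the function body
CREATE OR REPLACE FUNCTION housing_project_from_unit(text) RETURNS text AS $$
    SELECT a.housing_project_code FROM public.tbl_housing_unit a
    WHERE LOWER(a.housing_unit_code) = LOWER($1) LIMIT 1;
$$ LANGUAGE sql STABLE;

-- Option 2: pin a search_path onto the existing function
ALTER FUNCTION housing_project_from_unit(text)
    SET search_path = public, pg_catalog;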

Cheers,
Ken


-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] psql swallowed my BEGIN; on reset... user beware?

2013-09-30 Thread Ken Tanzer

 you can control this with on error stop directive -- add it to your
 psqlrc if you want the setting to persist.


I hear what you're saying, but wonder about the implications:

It sounds like you can't trust your BEGIN to actually start a transaction
unless on_error_stop is set

The default for on_error_stop seems to be off, so at a minimum this is a
default gotcha waiting to happen.  At the very least, perhaps the
documentation for BEGIN and ON_ERROR_STOP should mention this?
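
For reference, the setting referred to above can be turned on several ways
(the script and database names below are just placeholders).

In a psql session, or in ~/.psqlrc to make it persist:

    \set ON_ERROR_STOP on

Or per invocation, from the shell:

    psql -v ON_ERROR_STOP=1 -f myscript.sql mydb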

Should BEGIN be a special case?  The operator clearly wants the subsequent
commands to run together or not at all.  I'm having trouble imagining any
scenario under which one would want their commands to continue after a
failed BEGIN, especially if it happens at random times outside of one's
control.

Is your response based on belief that the current behavior is good, or a
concern about changing the way psql works, or undesired side effects from
changing it?  I could understand any of those--just curious what your
thinking is!

Cheers,
Ken


[GENERAL] psql swallowed my BEGIN; on reset... user beware?

2013-09-29 Thread Ken Tanzer
After restarting the server in another window, I was surprised that my
command did not run in a transaction:

spc_test_scratch=# BEGIN; DROP VIEW IF EXISTS ptest_mip ; DROP VIEW
rent_info; \i create.view.rent_info.sql
FATAL:  terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
NOTICE:  view ptest_mip does not exist, skipping
DROP VIEW
DROP VIEW
CREATE VIEW
spc_test_scratch=# commit;
WARNING:  there is no transaction in progress
COMMIT

It looks like the behavior makes perfect sense--the BEGIN; failed, the
server reset, psql continued processing commands.  It seems a little
dangerous, though.  In my case I could/should have known, but a user could
easily have no way of knowing if their server was reset by an admin.  And
of course BEGIN; is an unusually likely candidate for first command in a
series.

Is this the desired behavior, or would it be better to abort the commands
at this point?

Ken



-- 
AGENCY Software
A data system that puts you in control
100% Free Software
*http://agency-software.org/*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing
listagency-general-requ...@lists.sourceforge.net?body=subscribe
 to
learn more about AGENCY or
follow the discussion.


Re: [GENERAL] Why does this array query fail?

2013-09-19 Thread Ken Tanzer

 Is this a quarterly report because that is how long it takes to run?


It takes about 7 seconds to run.  I suppose if I optimized it I could save
a minute every couple of years.

I usually get concerned about performance issues when they're actually
causing problems.  I'm generally more concerned about how long it takes to
write queries, and how cumbersome the SQL involved is.  And since arrays
are relatively new to me, I've been trying to understand generally the best
ways to query information out of them, or when their behavior just doesn't
make sense to me.  I'll say I answer 99.99% of my own questions before they
ever make it to the list, and by the time they do I invariably have read
the documentation as best as I can.  This has been my favorite list ever to
read, as people are invariably helpful, patient and polite to each other.

I would suggest considering how to use functions to encapsulate some
of the medical
 code collecting logic.  And consider WITH/CTE constructs as well, like I
 used in my last message, to effectively create temporary named tables for
 different parts of the query.



 Might want to move the whole thing into function and pass in the various
 parameters - namely the date range - instead of hard-coding the values
 into the view.


Thanks for these constructive suggestions.  I see benefits both ways.  And
the dates are actually parsed in by an app at run-time.  (I stripped that
part out to avoid confusion--I find it hard to know when submitting a list
item how much to just dump a full real example, and how much to simplify
down to a test case that illustrates the specific issue.)

On a smaller scale I've written queries like this.  I enrolled in a
 university database design course shortly thereafter...


until someone more knowledgeable (like a future you probably) comes back and


I'm not sure what the point of either of these comments were, but perhaps
they made you feel better.  Either way, thanks for taking the time to look
my stuff over and for the other comments and explanations you made.

Ken


Re: [GENERAL] Why does this array query fail?

2013-09-17 Thread Ken Tanzer

 Can we please follow list norms (or at least my example since I was the
 first to respond) and bottom-post.


Absolutely.  Gmail did it without my realizing, but my bad and I'm all for
following the list conventions.

Thanks for taking the time to explain this stuff, which I appreciate.
 Mostly it makes sense, but a couple of things remain puzzling to me.

1)  On what exactly does PG base its decision to interpret the ANY as
scalar or not?  Or are you saying a sub-query will always be treated as
non-scalar, unless it is explicitly cast to an array?

2) Regarding:

 In this context PostgreSQL goes looking for an operator -
 e.g., equals(=) - with the right-side argument being of the type of the
 sub-query column.


Why?  In this case you have ANY (varchar[]), which as I understand it =
needs to compare with a varchar.  So why is it looking for an array?  If it
was just varchar = varchar[], I'd get that, but in this case it's
ANY(varchar[]), so does PG extract the type of the argument to ANY without
paying attention to the ANY itself?

There is quite a bit more to this that can be gleaned by reading the
 documentation for sub-queries.


 I'm not sure if there's something specific you're referring to.  I had
looked at the page on subquery expressions (
http://www.postgresql.org/docs/9.0/static/functions-subquery.html), as well
as the following page on row and array comparisons to see the two forms
of ANY, but don't see anything that covers these nuances in greater depth.
 Is there another page I should be looking at?

Note that (SELECT ARRAY[...])::text[] is only a proper solution if...


Yeah, I tried to boil down my example, but this is closer to what I was
really trying to do:

CREATE TEMP TABLE mytable ( codes varchar[] );
INSERT INTO mytable VALUES ( array[ 'pass','fail'] );
INSERT INTO mytable VALUES ( array[ 'found'] );
SELECT 'found' WHERE 'found' =ANY(
(SELECT array_agg(code) FROM (SELECT unnest(codes) AS code FROM
mytable) foo
 )
);

And for immediate purposes, found this worked just as well (as a non-scalar
subquery, I guess):

SELECT 'found' WHERE 'found' =ANY(
(SELECT unnest(codes) AS code FROM mytable)
);

Thanks again for your help and explanations!

Ken

