Re: [GENERAL] Session Identifiers

2015-12-20 Thread Melvin Davidson
PostgreSQL does not "store" the session_id per se in any system
catalogs/tables; however, you can configure log_line_prefix in
postgresql.conf to record it for each connection. It will then be stored in
the PostgreSQL log file.
Please note that, in the future, it is always helpful to provide the exact
version of PostgreSQL and the O/S you are working with.
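
For example, a postgresql.conf setup along these lines would capture the
session id (%c) on every log line; the exact format string is a matter of
taste:

```ini
# postgresql.conf -- sketch; %c = session id, %u = user, %d = database,
# %p = backend PID, %m = timestamp with milliseconds
log_line_prefix = '%m [%p] %c %u@%d '
log_connections = on
log_disconnections = on
```

With log_connections/log_disconnections on, session start and end (including
timeouts) are recorded, so invalidation can be audited from the log.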

On Sun, Dec 20, 2015 at 11:08 AM, Pavel Stehule <pavel.steh...@gmail.com>
wrote:

> Hi
>
> 2015-12-20 16:16 GMT+01:00 oleg yusim <olegyu...@gmail.com>:
>
>> Greetings!
>>
>> I'm new to PostgreSQL, working on it from the point of view of Cyber
>> Security assessment. In that regard, here are my questions:
>>
>> From the security standpoint we have to assure that database invalidates
>> session identifiers upon user logout or other session termination (timeout
>> counts too).
>>
>> Does PostgreSQL perform this type of action? If so, where are those
>> Session IDs stored, so I can verify it?
>>
>
> Postgres is process-based: a new process is created for each session when
> the user logs in, and that process is destroyed when the user logs out.
> Almost all data live in process memory only, but shared data related to
> sessions are stored in shared memory, in an array of PGPROC structures.
> Postgres invalidates these data immediately when the process is destroyed.
> Search for PGPROC in the code, and look at postmaster.c, where these
> operations are implemented.
>
> As far as I know, there are no other session data - so when the process is
> destroyed, everything is destroyed by the O.S.
>
> Things can be totally different if you use a connection pooler like pgpool
> or pgbouncer - these applications can reuse Postgres server sessions for
> multiple user sessions.
>
> Regards
>
> Pavel
>
>
>>
>> Thanks,
>>
>> Oleg
>>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] how to create dump of selected rows from a table

2015-12-18 Thread Melvin Davidson
And to get the structure of a table, you can use
pg_describe_object(catalog_id, object_id, object_sub_id)
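
The row-subset dump discussed below can be done with psql's \copy; a sketch,
where the table name, filter, and output path are all hypothetical:

```sql
-- \copy runs client-side, so the file lands on the machine running psql.
-- Only rows matching the WHERE clause are written out.
\copy (SELECT * FROM recorder WHERE created_at >= '2015-01-01') TO '/tmp/recorder_subset.csv' WITH CSV HEADER
```

The server-side COPY form works the same way but writes the file on the
database server and requires superuser privileges.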

On Fri, Dec 18, 2015 at 10:16 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 12/17/2015 09:28 AM, anj patnaik wrote:
>
>> Currently, I am running pg_dump on an entire table of a database by doing:
>>
>>   PGPASSWORD=$PGPASSWORDB /opt/data/PostgreSQL/9.4/bin/pg_dump -t
>> RECORDER -Fc $i -U pguser -Z0 | xz -9 >
>> "$backup_dir/$i-$timeslot-database"
>>
>> However, this table has a lot of rows.
>>
>> Is there a way I could do pg_dump on a subset of rows that match a
>> criteria say via a sql statement?
>>
>
> Yes it is called COPY:):
>
> http://www.postgresql.org/docs/9.4/interactive/sql-copy.html
>
> or its psql equivalent \copy:
>
> http://www.postgresql.org/docs/9.4/interactive/app-psql.html
>
>
>
>> Does anyone have any snippets on how to do that?
>>
>> Thank you
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] dblink_connect fails

2015-12-16 Thread Melvin Davidson
Perhaps the attached will help. It's a sample function that I wrote and
tested a few years ago on PG v8.3.
It worked then, so it should be a good model for you.
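
Independent of the attachment, one workaround for the PGUSER pickup problem
quoted below is to name the role explicitly in the connection string; host
and credentials here are made up:

```sql
-- Passing user= explicitly keeps dblink from falling back to the
-- operating-system account name of the server process.
SELECT dblink_connect('master',
       'host=192.168.0.10 dbname=ffm user=app_user password=secret');
```

A .pgpass file or a user mapping (via a foreign server) avoids putting the
password in the SQL text.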

On Wed, Dec 16, 2015 at 8:00 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 12/16/2015 04:53 PM, James Sewell wrote:
>
>> No it is not.
>>
>> Just in case I tried setting it to 'postgres', logged in without -U
>> (doesn't work without PGUSER set) and tried the operation again.
>>
>
> Do you mean this:
>
> DETAIL:  FATAL:  role "PRDSWIDEGRID01$" does not exist
>
> or that you could not connect for another reason?
>
> If for the original reason, does the role PRDSWIDEGRID01$ actually exist
> somewhere?
>
>
>
>> Same result.
>>
>> Cheers,
>>
>>
>> James Sewell,
>> PostgreSQL Team Lead / Solutions Architect
>> __
>>
>> Level 2, 50 Queen St, Melbourne VIC 3000
>>
>> P (+61) 3 8370 8000  W www.lisasoft.com  F (+61) 3 8370 8099
>>
>> On Wed, Dec 16, 2015 at 3:26 PM, Joe Conway <m...@joeconway.com
>> <mailto:m...@joeconway.com>> wrote:
>>
>> On 12/15/2015 06:24 PM, James Sewell wrote:
>> > I have a Windows PostgreSQL server where dblink_connect fails to
>> pick up
>> > the current user as follows:
>>
>> > ffm=# SELECT dblink_connect('master', 'dbname=ffm');
>> > ERROR:  could not establish connection
>> > DETAIL:  FATAL:  role "PRDSWIDEGRID01$" does not exist
>>
>> > Has anyone seen this before? It seems very odd to me, I have another
>> > identical machine (except for being on 9.4.0) which this works on.
>>
>>
>> Do you happen to have a PGUSER variable defined in your environment
>> (that is, in the environment as seen by the OS user the postmaster
>> runs
>> under)?
>>
>> See: http://www.postgresql.org/docs/9.4/static/libpq-envars.html
>>
>> Joe
>>
>> --
>> Crunchy Data - http://crunchydata.com
>> PostgreSQL Support for Secure Enterprises
>> Consulting, Training, & Open Source Development
>>
>>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
-- Function: test_archive(text, text, text, text, date)

-- DROP FUNCTION test_archive(text, text, text, text, date);

CREATE OR REPLACE FUNCTION test_archive(text, text, text, text, date)
  RETURNS bigint AS
$BODY$
-- CREATED BY: Melvin Davidson
-- DATE: 2008-08-11
-- 
-- Execute this function from archive db
-- General flow
-- 1. Open connection to production db 
-- 2. Insert into archive db as select from with criteria
-- 3. Delete from production db with same criteria
-- 4. Close connection.
-- 5. Return the number of TXMaster (Total) records archived.
--
-- ERROR HANDLING
-- If dblink connection fails, -1 is returned
-- If archive date is >= Current Date, -10 is returned
-- If Total number of transactions inserted does not = TXMaster inserts, 
-- rollback is done and -100 returned
-- If Total number of remote transactions deleted does not = remote TXMaster deletes,
-- rollback is done and -200 returned

DECLARE
-- Parameter(s)
p_host_ip   ALIAS FOR $1;
p_dbnameALIAS FOR $2;
p_user  ALIAS FOR $3;
p_passwdALIAS FOR $4;
p_date  ALIAS FOR $5;

-- Variable(s)
v_ctr   BIGINT DEFAULT 0;
v_txmaster_ctr  BIGINT DEFAULT 0;
v_txassetcycle_ctr  BIGINT DEFAULT 0;
v_txassetdamaged_ctrBIGINT DEFAULT 0;
v_txassetfilledemptied_ctr  BIGINT DEFAULT 0;
v_txassetfound_ctr  BIGINT DEFAULT 0;
v_txassetlost_ctr

Re: [GENERAL] Interrupted streaming replication

2015-12-07 Thread Melvin Davidson
As long as you have 00010089002C and the subsequent WALs in
your archive directory, you should
be able to simply scp them to your second slave's pg_xlog directory.
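
If you instead go the restore_command route mentioned in the quoted reply, a
recovery.conf sketch might look like this (the archive path is hypothetical):

```ini
# recovery.conf on the standby -- have the standby fetch missing
# segments from the archive instead of streaming them
restore_command = 'cp /var/lib/pgsql/wal_archive/%f "%p"'
```

On 9.4 and later, replication slots avoid the removed-segment problem
entirely; on older releases, a larger wal_keep_segments helps.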


On Mon, Dec 7, 2015 at 8:50 AM, Tony Nelson <tnel...@starpoint.com> wrote:

>
>
> > -Original Message-
> > Tony Nelson <tnel...@starpoint.com> wrote:
> >
> > > 2015-12-07 08:24:50 EST FATAL:  requested WAL segment
> > > 00010089002C has already been removed
> > >
> > >
> > >
> > > Can I simply copy the file from my archive directory back to the WAL
> > directory?
> >
> > I'm afraid that won't work, because the master knows nothing about this
> > copy and can't stream its content.
> >
> > Maybe you can create/update your recovery.conf, see restore_command.
> >
> >
> > For the future: increase wal_keep_segments, or better, use replication
> > slots (since 9.4, a really good feature to prevent such errors).
> >
> >
>
> Thank you.  I'm still running 9.1, but I'll look into 9.4.  Fortunately my
> database is small so doing a new base backup is not the end of the world.
> It's been a long time since I've set this up, so I'm going to have to go
> through a refresher course on exactly what I need to do.
>
> Thanks for your help.
>
> Tony
>
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Selecting newly added column returns empty but only when selecting with other columns in table

2015-11-26 Thread Melvin Davidson
Is it possible you have more than one row where state = 'CA'? Putting a
LIMIT 1 would then restrict to only 1 row.
Have you tried with no limit? IE: SELECT new_col FROM data;


On Thu, Nov 26, 2015 at 7:13 PM, mrtruji <mrtr...@gmail.com> wrote:

> Just added a new bytea type column to an existing psql table and populated
> the column entirely with row data. Running into some strange query results:
>
> When I select the newly added column by itself I get all the data as
> expected:
>
> SELECT new_col FROM data LIMIT 1;
> Result: \x8481e7dec3650040b
>
> When I try to filter with 'where' on another column in the table, I get
> the values from the other columns as expected but empty from my new_column:
>
> SELECT id, state, new_col FROM data WHERE state='CA' limit 1;
> Result: 123456_1; CA; EMPTY ROW
>
> The reverse is also true. If I select my new column in combination with
> other columns with no 'where' I get the correct value from my new column
> but empty for the other columns:
>
> SELECT id, state, new_col FROM data limit 1;
> Result: EMPTY ROW; EMPTY ROW; \x8481e7dec3650040b
>
> Thanks to anyone with advice!
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Selecting newly added column returns empty but only when selecting with other columns in table

2015-11-26 Thread Melvin Davidson
OK, thanks for clarifying, but just for sanity's sake, it would REALLY be
nice if you would advise us of the exact version of PostgreSQL and the O/S
you are working with.

A copy of the table structure would also be helpful.

Just one more thing, is it possible you have an index on that table that
might be corrupted?

What does the following query return?

SELECT n.nspname,
       i.relname,
       i.indexrelname,
       CASE WHEN idx.indisprimary THEN 'pkey'
            WHEN idx.indisunique  THEN 'uidx'
            ELSE 'idx'
       END AS type,
       'INVALID'
  FROM pg_stat_all_indexes i
  JOIN pg_class c ON (c.oid = i.relid)
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
  JOIN pg_index idx ON (idx.indexrelid = i.indexrelid)
 WHERE idx.indisvalid = FALSE
   AND i.relname = 'data'
 ORDER BY 1, 2, 3;

On Thu, Nov 26, 2015 at 9:10 PM, mrtruji <mrtr...@gmail.com> wrote:

> Hi,
>
> Thanks for the reply. The limit is just to simplify results for the
> examples. The same behavior occurs when each of the three queries are not
> limited. Whenever I try to filter by the original columns and select the
> new column the resultant values for the new column are empty. Conversely,
> whenever I select the new column along with original columns without any
> filtering the resultant values for the original columns return empty. It's
> as if the added column is disconnected to the table in some way causing
> problems with queries that combine original columns and the new one.
>
> I created and filled in the new column externally using psycopg2 in Python
> so I'm not sure if that could be the source of the problem...
>
> On Thu, Nov 26, 2015 at 5:39 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>> Is it possible you have more than one row where state = 'CA'? Putting a
>> LIMIT 1 would then restrict to only 1 row.
>> Have you tried with no limit? IE: SELECT new_col FROM data;
>>
>>
>> On Thu, Nov 26, 2015 at 7:13 PM, mrtruji <mrtr...@gmail.com> wrote:
>>
>>> Just added a new bytea type column to an existing psql table and
>>> populated the column entirely with row data. Running into some strange
>>> query results:
>>>
>>> When I select the newly added column by itself I get all the data as
>>> expected:
>>>
>>> SELECT new_col FROM data LIMIT 1;
>>> Result: \x8481e7dec3650040b
>>>
>>> When I try to filter with 'where' on another column in the table, I get
>>> the values from the other columns as expected but empty from my new_column:
>>>
>>> SELECT id, state, new_col FROM data WHERE state='CA' limit 1;
>>> Result: 123456_1; CA; EMPTY ROW
>>>
>>> The reverse is also true. If I select my new column in combination with
>>> other columns with no 'where' I get the correct value from my new column
>>> but empty for the other columns:
>>>
>>> SELECT id, state, new_col FROM data limit 1;
>>> Result: EMPTY ROW; EMPTY ROW; \x8481e7dec3650040b
>>>
>>> Thanks to anyone with advice!
>>>
>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Selecting newly added column returns empty but only when selecting with other columns in table

2015-11-26 Thread Melvin Davidson
Looks like another case of PEBKAC solved. Thanks Adrian & John for the
assist. Happy Thanksgiving everyone.
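
For the archives, the fix is to UPDATE the existing rows rather than INSERT
new ones; a sketch using the column and key names from the thread:

```sql
-- What apparently happened: each "populate" call added a brand-new row
-- holding only the bytea value, leaving the original rows with a NULL
-- features_bin (hence 279,096 NULL rows alongside 279,096 non-NULL ones).

-- What was intended: attach the value to the existing row by its key
UPDATE data
   SET features_bin = '\x8481e7dec3650040b'
 WHERE id = '123456_1';
```

From psycopg2, the same statement would be parameterized
(`cur.execute("UPDATE data SET features_bin = %s WHERE id = %s", …)`)
rather than an INSERT.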

On Thu, Nov 26, 2015 at 10:22 PM, John R Pierce <pie...@hogranch.com> wrote:

> On 11/26/2015 7:08 PM, mrtruji wrote:
>
>> Ok this is strange. Shouldn't it always be one or the other? Total row
>> count for the table is 279,096.
>>
>> doggies=# select count(*) from data where features_bin is null;
>>  count
>> 
>>  279096
>> (1 row)
>>
>> doggies=# select count(*) from data where features_bin is not null;
>>  count
>> 
>>  279096
>> (1 row)
>>
>
>
> you inserted 279096 additional rows with (NULL, NULL, ) to the
> 279096 existing rows that had (value,value,NULL)
>
>
>
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Selecting newly added column returns empty but only when selecting with other columns in table

2015-11-26 Thread Melvin Davidson
OK, it looks like a bug to me, but this is the Thanksgiving holiday, so
probably the developers won't be able to look at this until Monday. But just
to be sure, what happens when you create a new table with the same structure
and populate that?

And can you do a pg_dump of the table and see if the data appears?
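
A sketch of that cross-check, with a hypothetical name for the copy:

```sql
-- Clone the structure (including indexes and constraints), copy the
-- rows over, then repeat the failing query against the copy
CREATE TABLE data_check (LIKE data INCLUDING ALL);
INSERT INTO data_check SELECT * FROM data;
SELECT id, state, features_bin FROM data_check LIMIT 5;
```

If the copy behaves correctly, the original table's storage or visibility
state is suspect; if it misbehaves the same way, the data itself is the
problem.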



On Thu, Nov 26, 2015 at 9:29 PM, mrtruji <mrtr...@gmail.com> wrote:

> Sure thing. Below are the results from your query along with the version
> and table info. Not sure about the index. I queried the table quite a bit
> before adding the new column and didn't have any issues.
>
> Here is the result from your query:
>
>  nspname | relname | indexrelname | type | ?column?
> -+-+--+--+--
> (0 rows)
>
> Version:
> PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu
> 4.8.2-19ubuntu1) 4.8.2, 64-bit
>
>
> Table "public.data"
> Column| Type | Modifiers
> --+--+---
>  id   | text |
>  name | text |
>  gender   | text |
>  age  | text |
>  street   | text |
>  city | text |
>  state| text |
>  zip  | text |
>  longitude| double precision |
>  latitude | double precision |
>  geom | geometry(Point,4326) |
>  features_bin | bytea    |
> Indexes:
> "ix_data_id" btree (id)
>
>
>
>
> On Thu, Nov 26, 2015 at 6:19 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>> OK, thanks for clarifying, but just for sanity sake, it would REALLY be
>> nice if you would advise us of the exact version of PostgreSQL and the O/S
>> you are working with.
>>
>> A copy of the table structure would also be helpful.
>>
>> Just one more thing, is it possible you have an index on that table that
>> might be corrupted?
>>
>> What does the following query return?
>>
>> SELECT n.nspname,
>>i.relname,
>>i.indexrelname,
>>CASE WHEN idx.indisprimary
>> THEN 'pkey'
>> WHEN idx.indisunique
>> THEN 'uidx'
>> ELSE 'idx'
>> END AS type,
>> 'INVALID'
>>   FROM pg_stat_all_indexes i
>>   JOIN pg_class c ON (c.oid = i.relid)
>>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>>   JOIN pg_index idx   ON (idx.indexrelid =  i.indexrelid )
>>  WHERE idx.indisvalid = FALSE
>>AND i.relname = 'data'
>>  ORDER BY 1, 2,3;
>>
>> On Thu, Nov 26, 2015 at 9:10 PM, mrtruji <mrtr...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Thanks for the reply. The limit is just to simplify results for the
>>> examples. The same behavior occurs when each of the three queries are not
>>> limited. Whenever I try to filter by the original columns and select the
>>> new column the resultant values for the new column are empty. Conversely,
>>> whenever I select the new column along with original columns without any
>>> filtering the resultant values for the original columns return empty. It's
>>> as if the added column is disconnected to the table in some way causing
>>> problems with queries that combine original columns and the new one.
>>>
>>> I created and filled in the new column externally using psycopg2 in
>>> Python so I'm not sure if that could be the source of the problem...
>>>
>>> On Thu, Nov 26, 2015 at 5:39 PM, Melvin Davidson <melvin6...@gmail.com>
>>> wrote:
>>>
>>>> Is it possible you have more than one row where state = 'CA'? Putting a
>>>> LIMIT 1 would then restrict to only 1 row.
>>>> Have you tried with no limit? IE: SELECT new_col FROM data;
>>>>
>>>>
>>>> On Thu, Nov 26, 2015 at 7:13 PM, mrtruji <mrtr...@gmail.com> wrote:
>>>>
>>>>> Just added a new bytea type column to an existing psql table and
>>>>> populated the column entirely with row data. Running into some strange
>>>>> query results:
>>>>>
>>>>> When I select the newly added column by itself I get all the data as
>>>>> expected:
>>>>>
>>>>> SELECT new_col FROM data LIMIT 1;
>>>>> Result: \x8481e7dec3650040b
>>>>>
>>>>> When I try to filter with 'where' on another column in the table, I
>>>>> get the val

Re: [GENERAL] Taking lot time

2015-11-25 Thread Melvin Davidson
Adrian,

He never stated what the O/S is. I strongly suspect that he is using
PgAdmin to connect to a remote server, which adds another degree of
complication. So at the very least, we also need to know if the server is
local or remote and what the O/S is. Plus, as you have requested before,
exactly what time is "a long time"?

On Wed, Nov 25, 2015 at 3:50 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 11/25/2015 07:46 AM, Ramesh T wrote:
>
>>
>> Hi All,
>>  9.3 version using pgadmin3
>>
>> The query is like the one below, fetching 20 columns:
>>   SELECT col1,col2,col3,...col.20
>> FROM detail i,
>>  adjdetail ia,
>>WHERE i.trans_id = ia.detail_id AND
>> (i.event = ANY (ARRAY[21, 22, 3, 5]))
>> created indexes on  where clause declared columns.
>>
>> The above is taking a lot of time. Any help appreciated!
>>
>
> Realized my previous post was not really helpful. What I should have said
> is below.
>
> To get a useful answer you will need to supply a complete(from one run of
> the query) set of information for the problem you are experiencing.
>
> So:
>
> 1) The complete definitions for the tables involved in the query,
> including FKs, triggers, etc.
>
> 2) The query you are running.
>
> 3) The EXPLAIN ANALYZE for said query
>
> 4) The number of rows returned from the query.
>
> 5) The procedure you used to getting the time value and where it was
> applied(psql, pgAdmin, etc).
>
>
>> I have total 4 gb ram,i changed below in postgres.conf
>>
>> shared_buffers--1024mb
>> temp_bufffers=8mb
>> work_mem=200mb
>> maintanace_work_mem=500mb
>> seq_page_cost = 1.0
>> random_page_cost = 5.0
>> effective_cache_size = 1024MB
>>
>>
>>
>> On Wed, Nov 18, 2015 at 11:12 PM, Adrian Klaver
>> <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 11/17/2015 04:18 AM, Ramesh T wrote:
>>
>> the query is big it's selecting 20 rows from two table like i
>> mentioned
>> above exaplain analyze
>>
>> what should i do..?any help
>>
>>
>> Please do not top post.
>>
>> I must be missing a post, as I see no explanation of what the query
>> is doing.
>>
>>
>> On Wed, Nov 4, 2015 at 4:27 AM, Adrian Klaver
>> <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>> <mailto:adrian.kla...@aklaver.com
>> <mailto:adrian.kla...@aklaver.com>>> wrote:
>>
>>  On 11/03/2015 06:42 AM, Ramesh T wrote:
>>
>>  I have a Query it taking a lot of time to fetch results
>>  so,explain query gave
>>
>>  "Hash Join  (cost=55078.00..202405.95 rows=728275
>> width=418)"
>>  "  Hash Cond: (itd.tran_id = iad._adj__id)"
>>  "  ->  Seq Scan on inv_detail itd  (cost=0.00..40784.18
>> rows=731029
>>  width=95)"
>>  "Filter: (event_type = ANY
>> ('{21,22,3,5}'::integer[]))"
>>  "  ->  Hash  (cost=20590.78..20590.78 rows=610978
>> width=331)"
>>  "->  Seq Scan on inv_adj  iad
>> (cost=0.00..20590.78
>>  rows=610978
>>  width=331)"
>>
>>  Can you Please let me know wt happen in query..?wt
>> should i do..
>>
>>
>>  And the query is?
>>
>>
>>  --
>>  Adrian Klaver
>> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>> <mailto:adrian.kla...@aklaver.com
>> <mailto:adrian.kla...@aklaver.com>>
>>
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] error messages not getting logged when running script from cron

2015-11-23 Thread Melvin Davidson
You could also try tweaking the attached backup script.
Caution: I wrote it quickly from a skeleton script, and it has not been
tested.

On Mon, Nov 23, 2015 at 3:21 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> anj patnaik <patn...@gmail.com> writes:
> > My cron tab entry:
>
> > 0 20 * * * db_backup.sh 1> /dev/null 2>&1 | mail -s "backup cron"
> > myem...@comp.com
>
> > I am re-directing stderr to stdout and then sending that to email.
>
> Uh, read it again: you're redirecting stdout to /dev/null and then
> redirecting stderr to go where stdout goes.  So all output is
> going to the bit bucket, not the pipe.
>
> regards, tom lane
>
>
>
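
Tom's point can be checked in a throwaway shell session; the helper function
below is purely illustrative:

```shell
# A function that writes one line to stdout and one to stderr
demo() { echo OUT; echo ERR >&2; }

# Order as in the cron entry: stdout goes to /dev/null first, then
# stderr is pointed at stdout's *current* target (also /dev/null),
# so the pipe -- and therefore the mail -- receives nothing.
broken=$(demo 1>/dev/null 2>&1 | cat)

# Swapped order: stderr is pointed at the pipe while stdout still
# feeds it, and only then is stdout discarded; the mail gets stderr.
fixed=$(demo 2>&1 1>/dev/null | cat)

echo "broken pipe captured: '$broken'"
echo "fixed pipe captured:  '$fixed'"
```

Redirections are processed left to right, which is why the order matters.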



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


make_backup.sh
Description: Bourne shell script



Re: [GENERAL] current_query='IDLE" in pg_stat_activity

2015-11-23 Thread Melvin Davidson
You have not specified your O/S or PostgreSQL version, but if you are
running on a Linux O/S, you can run the attached current_queries.sh, which
loops every 5 seconds by default.
Note the usage options below:
usage() {
echo "Usage: $0 [-s <seconds> -d <dbname> -h <host> -U <user> -p <port> -x -i]"
echo "-x will exit after 1 iteration"
echo "-i will exclude <IDLE> queries"
exit 1
}
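
For reference, the equivalent ad-hoc queries; the column names changed in
9.2, hence the two variants:

```sql
-- 9.1 and earlier: idle backends show current_query = '<IDLE>'
SELECT procpid, usename, current_query
  FROM pg_stat_activity
 WHERE current_query NOT LIKE '<IDLE>%';

-- 9.2 and later: use the state column instead
SELECT pid, usename, query
  FROM pg_stat_activity
 WHERE state <> 'idle';
```

Note that on 9.2+ the query column keeps the *last* statement even for idle
backends, which is why filtering on state is required.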

On Mon, Nov 23, 2015 at 5:42 AM, Chris Withers <ch...@simplistix.co.uk>
wrote:

>
> On 23/11/2015 10:33, paramjib baruah wrote:
>
>> I am trying to check the currently running SQL queries in Postgres through
>> pg_stat_activity. However, in the current_query column I can see only
>> "IDLE". I am not able to see the query.
>>
> That would suggest no query is running, what suggests to you that there
> are queries you should be expecting to see?
>
> cheers,
>
> Chris
>
>
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


current_queries.sh
Description: Bourne shell script



Re: [GENERAL] Taking lot time

2015-11-18 Thread Melvin Davidson
And as always, what is the O/S and the version of
PostgreSQL??
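
From the plan quoted below, both inputs are sequential scans; a hedged
starting point, with column names taken from the posted plan:

```sql
-- Indexes matching the join and filter columns in the plan
CREATE INDEX ON inv_detail (tran_id);
CREATE INDEX ON inv_detail (event_type);
CREATE INDEX ON inv_adj (_adj__id);

-- Then re-check the plan with timing:
-- EXPLAIN ANALYZE <the original query>;
```

A caveat: the plan estimates ~728k of ~731k rows match the filter, so a
sequential scan with a hash join may already be the optimal strategy; the
indexes only help if the predicates are actually selective.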

On Tue, Nov 17, 2015 at 7:18 AM, Ramesh T <rameshparnandit...@gmail.com>
wrote:

> The query is big; it's selecting 20 columns from two tables, as I
> mentioned above with the EXPLAIN ANALYZE.
>
> What should I do? Any help is appreciated.
>
> On Wed, Nov 4, 2015 at 4:27 AM, Adrian Klaver <adrian.kla...@aklaver.com>
> wrote:
>
>> On 11/03/2015 06:42 AM, Ramesh T wrote:
>>
>>> I have a Query it taking a lot of time to fetch results
>>> so,explain query gave
>>>
>>> "Hash Join  (cost=55078.00..202405.95 rows=728275 width=418)"
>>> "  Hash Cond: (itd.tran_id = iad._adj__id)"
>>> "  ->  Seq Scan on inv_detail itd  (cost=0.00..40784.18 rows=731029
>>> width=95)"
>>> "Filter: (event_type = ANY ('{21,22,3,5}'::integer[]))"
>>> "  ->  Hash  (cost=20590.78..20590.78 rows=610978 width=331)"
>>> "->  Seq Scan on inv_adj  iad  (cost=0.00..20590.78 rows=610978
>>> width=331)"
>>>
>>> Can you please let me know what happens in the query? What should I do?
>>>
>>
>> And the query is?
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres zeroization of dead tuples ? i.e scrubbing dead tuples with sensitive data.

2015-11-18 Thread Melvin Davidson
Which begs the question, what is more important, the old/vacuumed data, or
the current valid data?
If someone can hack into the freed data, then they certainly have the
ability to hack into the current valid data.
So ultimately, the best thing to do is to secure the system from being
hacked, not zero out old data.
AFAIK, the only time you need to zero out the bytes is when you are
decommissioning the disk, in which case ALL data on the disk needs to be
wiped.

On Wed, Nov 18, 2015 at 3:13 PM, Alvaro Herrera <alvhe...@2ndquadrant.com>
wrote:

> David G. Johnston wrote:
> > On Wed, Nov 18, 2015 at 12:45 PM, Day, David <d...@redcom.com> wrote:
>
> > > I believe the concern, based on my current understanding of Postgres
> > > inner workings, is that when a dead tuple is reclaimed by vacuuming: is
> > > that reclaimed space initialized in some fashion that would shred any
> > > sensitive data that was formerly there from any inspection by the
> > > subsequent owner of that disk page? (zeroization)
>
> No.  Ultimately, space occupied by dead tuples is "freed" in
> PageRepairFragmentation(), src/backend/storage/page/bufpage.c;
> the contents of the tuples are shuffled to "defragment" the free space,
> but the free space is not zeroed.  You could certainly try to read the
> unused page and extract some data from there.
>
> --
> Álvaro Herrerahttp://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres zeroization of dead tuples ? i.e scrubbing dead tuples with sensitive data.

2015-11-18 Thread Melvin Davidson
I'm still trying to understand why you think someone can access old data but
not current/live data.
If you encrypt the live data, wouldn't that solve both concerns?
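
A sketch of column-level encryption with the pgcrypto extension; the table
and key handling here are hypothetical, and a real deployment would keep the
key out of the SQL text entirely:

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- The sensitive column is bytea; only ciphertext ever hits the heap,
-- so dead tuples left behind by UPDATE/DELETE hold only ciphertext
UPDATE customer SET ssn = pgp_sym_encrypt('123-45-6789', 'passphrase');

-- Decrypt on read
SELECT pgp_sym_decrypt(ssn, 'passphrase') FROM customer;
```

This addresses the dead-tuple concern without zeroing pages: what vacuum
leaves behind was never plaintext.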

On Wed, Nov 18, 2015 at 4:38 PM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 11/18/2015 01:34 PM, Andrew Sullivan wrote:
>
>> On Wed, Nov 18, 2015 at 03:22:44PM -0500, Tom Lane wrote:
>>
>>> It's quite unclear to me what threat model such a behavior would add
>>> useful protection against.
>>>
>>
>> If you had some sort of high-security database and deleted some data
>> from it, it's important for the threat modeller to know whether the
>> data is gone-as-in-overwritten or gone-as-in-marked-free.  This is the
>> same reason they want to know whether a deleted file is actually just
>> unlinked on the disk.
>>
>> This doesn't mean one thing is better than another; just that, if
>> you're trying to understand what data could possibly be exfiltrated,
>> you need to know the state of all of it.
>>
>> For realistic cases, I expect that deleted data is usually more
>> important than updated data.  But a threat modeller needs to
>> understand all these variables anyway.
>>
>
> Alright, I was following you up to this. Seems to me deleted data would
> represent stale/old data and would be less valuable.
>
>>
>> A
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgreSQL.conf has become zero byte file

2015-11-17 Thread Melvin Davidson
More importantly, what version of PostgreSQL and what O/S are you working
with?
If this is Ubuntu, you could simply be looking at the wrong postgresql.conf
file.

On Tue, Nov 17, 2015 at 7:33 AM, John McKown <john.archie.mck...@gmail.com>
wrote:

> I agree with Adrian. If this is on a Linux system, I'd suggest setting up
> "incrond" to monitor that file and at least record who is accessing it. In
> addition, I would suggest that said Linux system run with SELinux in
> "enforcing" mode. That can stop even "root" from updating something, if it
> doesn't have the proper SELinux credentials.
>
> On Mon, Nov 16, 2015 at 11:06 PM, Adrian Klaver <adrian.kla...@aklaver.com
> > wrote:
>
>> On 11/16/2015 06:59 PM, M Tarkeshwar Rao wrote:
>>
>>> Hi All,
>>>
>>> In our production setup we found a new issue: postgresql.conf has become
>>> a zero-byte file.
>>>
>>> We copied the file back from a backup, but after some time
>>> it again became zero bytes.
>>>
>>> Any clue what the reason for this behavior is?
>>>
>>
>> I tend to doubt that Postgres is zeroing out its own conf file. My guess
>> is that some other program/script is doing that. If you can narrow down
>> the time frame in which this is happening, I would then look for any
>> cron/scheduled jobs that are running at the same time.
>>
>>
>>> Regards
>>>
>>> Tarkeshwar
>>>
>>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>
>>
>
>
>
> --
>
> Schrodinger's backup: The condition of any backup is unknown until a
> restore is attempted.
>
> Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.
>
> He's about as useful as a wax frying pan.
>
> 10 to the 12th power microphones = 1 Megaphone
>
> Maranatha! <><
> John McKown
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Querying same lookup table with multiple columns based on another view

2015-11-17 Thread Melvin Davidson
Your problem is the use of SELECT *

You need to be specific about which columns you are selecting from
macro_lookup
and what names you want to give them. Since you have not provided us with
the full structure, I can only suggest something like

CREATE OR REPLACE VIEW sample_macro AS
SELECT sample.col1 AS viewcol1,
       ml1.col1    AS viewcol2,
       ml1.col2    AS viewcol3,
       ml2.col4    AS viewcol4,
       ml3.col7    AS viewcol5
FROM sample
LEFT JOIN macroscopic ON sample.samp_id = macroscopic.sample_id
LEFT JOIN macro_lookup AS ml1 ON ml1.macro_lookup_id = macroscopic.grain_id
LEFT JOIN macro_lookup AS ml2 ON ml2.macro_lookup_id = macroscopic.lustre_id
LEFT JOIN macro_lookup AS ml3 ON ml3.macro_lookup_id = macroscopic.translucency_id
WHERE sample.samp_id IS NOT NULL;


On Tue, Nov 17, 2015 at 10:10 AM, Killian Driscoll <
killiandrisc...@gmail.com> wrote:

> I have a view with 15 columns and want to create another view based on a
> join with another table with 15 columns that includes three columns that
> reference one lookup table.
>
> If I use the the below sql I get the error "column "macro_lookup_id"
> specified more than once". I have read that I can rename the columns (I
> renamed the tables as ml1, ml2, ml3) but can't figure out how to do this
> but also use the select * to avoid writing out all rest of the column names.
>
> CREATE OR REPLACE VIEW sample_macro AS
> SELECT *
> FROM sample
> LEFT JOIN macroscopic ON query_srpnt_sample.samp_id = macroscopic.sample_id
> LEFT JOIN macro_lookup AS ml1 ON ml1.macro_lookup_id = macroscopic.grain_id
> LEFT JOIN macro_lookup AS ml2 ON ml2.macro_lookup_id =
> macroscopic.lustre_id
> LEFT JOIN macro_lookup AS ml3 ON ml3.macro_lookup_id =
> macroscopic.translucency_id
> WHERE samp_id is not null;
>
> What is the most efficient way (in terms of typing out column names) to
> create this type of view?
>
>
> Killian DriscoIl
> IRC Postdoctoral Fellow
> UCD School of Archaeology
> University College Dublin
>
> academia.edu/KillianDriscoll <https://ucd.academia.edu/KillianDriscoll>
> www.lithicsireland.ie
> ca.linkedin.com/in/killiandriscoll
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] can postgres run well on NFS mounted partitions?

2015-11-16 Thread Melvin Davidson
Another good idea is to do a

grep 'FATAL' your_postgres_log

and see if PostgreSQL is complaining about anything relating to table,
index, or WAL access. Otherwise, usually the only "acceptable" FATALs are
related to pg_hba.conf authorization or other connection
problems.
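As a toy illustration of that filtering step (using an invented log file and messages, not real server output; real log lines depend on your log_line_prefix setting):

```shell
# Build a tiny stand-in for a postgres log. The messages below are
# invented for illustration only.
cat > /tmp/fake_postgres.log <<'EOF'
LOG:  database system is ready to accept connections
FATAL:  password authentication failed for user "app"
LOG:  checkpoint starting: time
FATAL:  could not open file "base/16384/1259": Input/output error
EOF

# Pull out only the FATAL lines for review
grep 'FATAL' /tmp/fake_postgres.log
```

In this made-up sample, the first FATAL is an "acceptable" authentication failure, while the second (an I/O error on a relation file) is exactly the kind of line that would point at storage trouble on an NFS mount.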

On Mon, Nov 16, 2015 at 3:43 AM, Albe Laurenz <laurenz.a...@wien.gv.at>
wrote:

> anj patnaik wrote:
> > How do you tell if a database is corrupted? Are there specific error
> messages/symptoms to look for?
>
> That's actually a pretty tough question.
>
> The standard test is to run "pg_dumpall", see if it finishes without error
> and if the dump can be restored without error.
> That won't detect any index corruption though.
>
> You could try:
>
> COPY (SELECT * FROM tab ORDER BY ...) TO 'file1';
> SET enable_seqscan=off;
> COPY (SELECT * FROM tab ORDER BY ...) TO 'file2';
>
> and see if "file1" and "file2" are identical. That would check the index
> used in the second COPY statement.
>
> I don't know, but maybe enabling checksums with the -k option of "initdb"
> would make such corruption more obvious.
>
> Yours,
> Laurenz Albe
>
>
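The file-comparison step of Laurenz's index check can be done with cmp. Here is a minimal sketch using two stand-in files (in the real check these would be written by the two COPY ... TO 'file' statements, which of course need a live server):

```shell
# Stand-ins for the two COPY outputs; identical contents simulate the
# healthy case where the index agrees with the heap.
printf '1\talpha\n2\tbeta\n' > /tmp/file1
printf '1\talpha\n2\tbeta\n' > /tmp/file2

if cmp -s /tmp/file1 /tmp/file2; then
    echo "files identical: index agrees with heap"
else
    echo "files differ: possible index corruption"
fi
```

cmp -s is silent and only sets the exit status, which makes it convenient for scripting the check over many tables.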



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL]

2015-11-13 Thread Melvin Davidson
It is very simple. If you have already created database icare then
DO NOT use the --create flag!

http://www.postgresql.org/docs/9.2/interactive/app-pgrestore.html

And as requested before, please provide the O/S and PostgreSQL version.

On Fri, Nov 13, 2015 at 12:38 AM, Alex Luya <alexander.l...@gmail.com>
wrote:

> Hello,
>I created a new database by
>
>create database icare;
>
>then quit off psql and  run:
>
> pg_restore --clean --create --exit-on-error --dbname=icare
> icare-test.tar
>
>  it complains:
>
>   pg_restore: [archiver (db)] Error while PROCESSING TOC:
>   pg_restore: [archiver (db)] Error from TOC entry 21; 2615 80924
> SCHEMA icare icare
>   pg_restore: [archiver (db)] could not execute query: ERROR:
>  permission denied for database icare
>    Command was: CREATE SCHEMA icare;
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] Re: [GENERAL] pg_restore tells schema “test” already exists but it isn't actually

2015-11-12 Thread Melvin Davidson
Try this instead:
SELECT current_database();
SELECT nspname FROM pg_namespace ORDER BY 1;

Also, What is the exact command you are using with pg_restore?

On Thu, Nov 12, 2015 at 9:53 PM, Alex Luya <alexander.l...@gmail.com> wrote:

> When restoring a dump like this:
>
> pg_restore --clean --create --exit-on-error --dbname=test test.tar
>
> these error messages got printed out:
>
> pg_restore: [archiver (db)] Error while PROCESSING TOC:
> pg_restore: [archiver (db)] Error from TOC entry 21; 2615 80924 SCHEMA test 
> test
> pg_restore: [archiver (db)] could not execute query: ERROR:  schema "test" 
> already exists
> Command was: CREATE SCHEMA test;
>
> but when:
>
> select schema_name from information_schema.schemata;
>
> these got printed out
>
> schema_name 
> pg_toast
> pg_temp_1
> pg_toast_temp_1
> pg_catalog
> public
> information_schema
>
> It seems like schema "test" doesn't exist yet,why do I got this kind of
> error?
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Melvin Davidson
Oops, I forgot to mention pg_createcluster, which is used to initiate the
various version/cluster combinations in Ubuntu.
It makes it easier to have several versions and clusters on the same
server, albeit that is not what most people/companies usually do.
eg: In Ubuntu, you could possibly have the following on one server:

Ver Cluster Port Status Owner    Data directory               Log file
9.1 prod    5432 online postgres /var/lib/postgresql/9.1/prod
/var/log/postgresql/postgresql-9.1-prod.log
9.1 dev     5433 online postgres /var/lib/postgresql/9.1/dev
/var/log/postgresql/postgresql-9.1-dev.log
9.4 prod    5435 online postgres /var/lib/postgresql/9.4/prod
/var/log/postgresql/postgresql-9.4-prod.log
9.4 dev     5436 online postgres /var/lib/postgresql/9.4/dev
/var/log/postgresql/postgresql-9.4-dev.log
9.4 qa      5437 online postgres /var/lib/postgresql/9.4/qa
/var/log/postgresql/postgresql-9.4-qa.log

On Thu, Nov 12, 2015 at 11:56 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 11/12/2015 08:37 AM, Melvin Davidson wrote:
>
>> To clarify.
>>
>> To see the ACTUAL clusters installed, and the LIVE config directories,
>> you use
>> pg_lsclusters.
>>
>
> Actually:
>
> aklaver@arkansas:~$ pg_lsclusters
>
> Ver Cluster Port Status Owner    Data directory               Log file
>
> 9.3 main    5432 down   postgres /var/lib/postgresql/9.3/main
> /var/log/postgresql/postgresql-9.3-main.log
>
> 9.4 main    5434 online postgres /var/lib/postgresql/9.4/main
> /var/log/postgresql/postgresql-9.4-main.log
>
> You get the $DATA directories and the log files. The config directories
> are in /etc/postgresql/*
>
>
>> pg_ctlcluster can then control each individual cluster
>>
>> So, for example, since apparently you have the 9.4 version of PostgreSQL,
>> pg_ctlcluster 9.4 main status
>> would probably show you that the 9.4 version of PostgreSQL is active and
>> will also show you the command line arguments used to start it.
>>
>>
>> On Thu, Nov 12, 2015 at 11:29 AM, Tom Lane <t...@sss.pgh.pa.us
>> <mailto:t...@sss.pgh.pa.us>> wrote:
>>
>> Adrian Klaver <adrian.kla...@aklaver.com
>> <mailto:adrian.kla...@aklaver.com>> writes:
>> > On 11/12/2015 08:07 AM, Melvin Davidson wrote:
>> >> In Ubuntu 14.10, there are three variations of the postgresql.conf
>> >> configuration file, as follows:
>> >>
>> >> /var/lib/postgresql/9.4/main/postgresql.auto.conf
>> >> /usr/lib/tmpfiles.d/postgresql.conf
>> >> /etc/postgresql/9.4/main/postgresql.conf
>>
>> FWIW, I would imagine that /usr/lib/tmpfiles.d/postgresql.conf is
>> a configuration file for the system's tmp-file-cleaner daemon
>> (probably telling it not to flush some postgres-related files),
>> not a PG config file at all.  If you look into it you'll likely
>> find that it doesn't look anything like PG configuration data.
>>
>> As already noted, postgresql.auto.conf is not for hand-editing.
>>
>>  regards, tom lane
>>
>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Melvin Davidson
Ubuntu does not use the standard pg_ctl for PostgreSQL. Instead, it uses
pg_ctlcluster.
That in turn controls the different PostgreSQL clusters. When you install a
new cluster, the Debian/Ubuntu tooling puts postgresql.conf & pg_hba.conf
into separate per-cluster dirs.
So to be specific, /usr/lib/tmpfiles.d/postgresql.conf is just a tmp-file
configuration that you can ignore, /etc/postgresql/9.4/main/postgresql.conf
is the live configuration for the 9.4 cluster that you edit to affect that
cluster, and /var/lib/postgresql/9.4/main/postgresql.auto.conf is
maintained by ALTER SYSTEM and is not meant for hand-editing. Likewise for
the pg_hba.conf.

On Thu, Nov 12, 2015 at 10:46 AM, Michael Convey <smcon...@gmail.com> wrote:

> In Ubuntu 14.10, there are three variations of the postgresql.conf
> configuration file, as follows:
>
> /var/lib/postgresql/9.4/main/postgresql.auto.conf
> /usr/lib/tmpfiles.d/postgresql.conf
> /etc/postgresql/9.4/main/postgresql.conf
>
> What is the difference between these files and which is the correct one to
> change for modifying the configurations contained therein?
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Three Variations of postgresql.conf

2015-11-12 Thread Melvin Davidson
To clarify.

To see the ACTUAL clusters installed, and the LIVE config directories, you
use
pg_lsclusters.

pg_ctlcluster can then control each individual cluster

So, for example, since apparently you have the 9.4 version of PostgreSQL,
pg_ctlcluster 9.4 main status
would probably show you that the 9.4 version of PostgreSQL is active and
will also show you the command line arguments used to start it.


On Thu, Nov 12, 2015 at 11:29 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Adrian Klaver <adrian.kla...@aklaver.com> writes:
> > On 11/12/2015 08:07 AM, Melvin Davidson wrote:
> >> In Ubuntu 14.10, there are three variations of the postgresql.conf
> >> configuration file, as follows:
> >>
> >> /var/lib/postgresql/9.4/main/postgresql.auto.conf
> >> /usr/lib/tmpfiles.d/postgresql.conf
> >> /etc/postgresql/9.4/main/postgresql.conf
>
> FWIW, I would imagine that /usr/lib/tmpfiles.d/postgresql.conf is
> a configuration file for the system's tmp-file-cleaner daemon
> (probably telling it not to flush some postgres-related files),
> not a PG config file at all.  If you look into it you'll likely
> find that it doesn't look anything like PG configuration data.
>
> As already noted, postgresql.auto.conf is not for hand-editing.
>
> regards, tom lane
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] get this warning from pgAdmin

2015-11-12 Thread Melvin Davidson
This is not necessarily a major issue. It is simply an advisory that a
significant number of rows in the table have been inserted, updated or
deleted and that in order to have reliable statistics, a vacuum analyze
should be done.

Depending on the number of rows in the table and the difference between the
actual row count and the current statistics row count, you can decide
whether to vacuum analyze now or delay.

One rule of thumb is to consider the response time of queries against that
table. IOW, are DML queries involving that table completing within a
couple of seconds? If yes, you can delay a while or wait for autovacuum to
be done on that table. If the DML is taking more than a few seconds, then
yes, it is advisable to vacuum analyze it.
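To see the deviation pgAdmin is complaining about, you can compare the planner's estimate against an actual count. The table name here is taken from the original post; note that count(*) does a full scan, so avoid it on very large tables:

```sql
-- Planner's estimated rowcount, updated by VACUUM / ANALYZE
SELECT reltuples::bigint AS estimated_rows
  FROM pg_class
 WHERE relname = 'recorder';

-- Actual rowcount (full table scan)
SELECT count(*) AS actual_rows FROM recorder;
```

A large gap between the two numbers means the statistics are stale and an ANALYZE is worthwhile.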

On Thu, Nov 12, 2015 at 1:02 PM, anj patnaik <patn...@gmail.com> wrote:

> I get this warning when trying to fetch data for a postgres db. Does this
> indicate a real issue? Thanks
>
> Running VACUUM recommended
> The estimated rowcount on the table "recorder" deviates significantly from
> the actual rowcount. You should run VACUUM ANALYZE on this table.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Slow response if pulling through pgadmin

2015-11-10 Thread Melvin Davidson
I would also like to add that I am very suspicious of a table with 80 columns.
Offhand, it sounds like poor database design where someone was trying
to put all the eggs in one basket (figuratively).

Further, what was the exact query?
Queries of the form SELECT * will always be inherently slow with
tables that have many columns. Ideally, you only want to select just
the columns that are needed.

On 11/10/15, Jim Nasby <jim.na...@bluetreble.com> wrote:
> On 11/10/15 9:39 AM, Mammarelli, Joanne T wrote:
>> Hi – same rookie user as before.
>>
>> We have one table
>>
>> 100,000 rows
>>
>> 80 columns
>>
>> When we try to retrieve the data (select * from table) using pgadmin, we
>> get a 193456 ms retrieve time.
>>
>> When I ran a query analyze in the command prompt, we get a 316ms
>> retrieve time.
>
> You mean EXPLAIN ANALYZE?
>
>> .. and finally.  When we retrieve the data from the command line, we get
>> a 5720 ms retrieve time.
>
> What was psql doing with the output?
>
> Basically, pgAdmin and psql aren't meant for users to deal with huge
> data sets, because humans don't deal well with huge data sets.
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.




Re: [GENERAL] is there any difference DROP PRIMARY KEY in oracle and postgres?

2015-11-05 Thread Melvin Davidson
In PostgreSQL, there is no “DROP PRIMARY KEY”. Instead, you drop the
primary key constraint by name with ALTER TABLE ... DROP CONSTRAINT, which
is the approach you are already using:

http://www.postgresql.org/docs/9.4/interactive/sql-altertable.html
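A minimal sketch, with hypothetical table and constraint names modelled on the ones in the question:

```sql
CREATE TABLE cdrauditpoint (
    id integer,
    CONSTRAINT cdrauditpoint_pk PRIMARY KEY (id)
);

-- PostgreSQL equivalent of Oracle's DROP PRIMARY KEY:
-- drop the primary key constraint by name.
ALTER TABLE cdrauditpoint DROP CONSTRAINT cdrauditpoint_pk;
```

Dropping the constraint also removes its underlying unique index, so the end result matches Oracle's single-statement form.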


On Thu, Nov 5, 2015 at 10:52 PM, M Tarkeshwar Rao <
m.tarkeshwar@ericsson.com> wrote:

> Hi,
>
>
>
> one thing in oracle is there any difference between “DROP PRIMARY KEY”
> used directly in oracle to drop primary key, or “DROP CONSTRAINT
> CDRAUDITPOINT_pk”, as first syntax is not available in postgres and we need
> to give primary key name as constraint to delete a key. SO right now to
> delete primary key I am using second approach, so is there any difference
> between two?
>
>
>
> Regards
>
> Tarkeshwar
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Taking lot time

2015-11-03 Thread Melvin Davidson
And in addition to providing the actual query, the structure (and indexes)
of all tables involved are needed.

On Tue, Nov 3, 2015 at 5:59 PM, Rob Sargent <robjsarg...@gmail.com> wrote:

> On 11/03/2015 07:42 AM, Ramesh T wrote:
>
> I have a Query it taking a lot of time to fetch results
> so,explain query gave
>
> "Hash Join  (cost=55078.00..202405.95 rows=728275 width=418)"
> "  Hash Cond: (itd.tran_id = iad._adj__id)"
> "  ->  Seq Scan on inv_detail itd  (cost=0.00..40784.18 rows=731029
> width=95)"
> "Filter: (event_type = ANY ('{21,22,3,5}'::integer[]))"
> "  ->  Hash  (cost=20590.78..20590.78 rows=610978 width=331)"
> "->  Seq Scan on inv_adj  iad  (cost=0.00..20590.78 rows=610978
> width=331)"
>
> Can you Please let me know wt happen in query..?wt should i do..
>
> I could be way off base (again) but I think the first thing you do is
> supply the query (sql).
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] does pg_dump get delayed if I have a loop that does continuous insertions

2015-10-29 Thread Melvin Davidson
>Probably not a good idea to put your data in the postgres database. This
>is one of the system databases created by initdb and is meant to be used
>as a default database to connect to, not really as one to store data in.
>Just did a test with pg_dumpall and it is not dumped when you backup the
>entire cluster.

This statement is not entirely accurate. Although I agree it is a good idea
to have a separate named database for each "application", pg_dumpall does
not add a "CREATE DATABASE postgres;" to the dump file because pg_restore
expects that the postgres database already exists. However, all objects in
the postgres database are dumped and will be restored. This can be verified
with a "grep '\connect postgres' your_pg_dump_file": the line will be
found, as will any objects in the postgres database.

On Wed, Oct 28, 2015 at 7:57 PM, Yves Dorfsman <y...@zioup.com> wrote:

> On 2015-10-28 14:09, anj patnaik wrote:
> >
> > Also, I want to know if anyone has found any handy cron scripts for
> automated
> > backups to run on a daily/weekly basis? i found some on google, but
> interested
> > to know if there are better ones.
>
> It does a lot more but:
>   https://github.com/wal-e/wal-e
>
> --
> http://yves.zioup.com
> gpg: 4096R/32B0F416
>
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Importing CSV File

2015-10-27 Thread Melvin Davidson
The trick is to tell postgres where the data ends with \.

From http://www.postgresql.org/docs/9.3/interactive/sql-copy.html

End of data can be represented by a single line containing just
backslash-period (\.). An end-of-data marker is not necessary when reading
from a file, since the end of file serves perfectly well; it is needed only
when copying data to or from client applications using pre-3.0 client
protocol.

So if you can somehow add the \. to the end of your data before the
extraneous \r\n's, it will solve your problem.

The following works as a test:

CREATE TABLE junk
(
pkey  integer,
jdata varchar(10),
CONSTRAINT junk_pk PRIMARY KEY (pkey)
);

TRUNCATE TABLE junk;
COPY junk FROM '/tmp/junk.data' WITH CSV;

contents of /tmp/junk.data:
---
1,'junk1'
2,'junk1'
3,'junk1'
\.
garbage data1
more garbage

blah
enough alread




On Tue, Oct 27, 2015 at 12:23 PM, Rob Sargent <robjsarg...@gmail.com> wrote:

> On 10/27/2015 10:04 AM, Adrian Klaver wrote:
>
> On 10/27/2015 08:44 AM, Jeff Janes wrote:
>
> On Mon, Oct 26, 2015 at 2:45 PM, David Blomstrom
> <david.blomst...@gmail.com <mailto:david.blomst...@gmail.com>
> <david.blomst...@gmail.com>> wrote:
>
> I tried to import a CSV file into a PostgreSQL table using pgAdmin
> III. I got an error message: "extra data after last column."
>
>
> All my spreadsheets have an "end of data" column that has /r/n in
> each cell. When I import a CSV file into a MySQL table, everything
> beyond /r/n is ignored. Is there some way to tell PostgreSQL to stop
> at /r/n?
>
>
> How does it know when to stop ignoring and start the next record?
>
>
> I wondered about that also. I did find this:
>
> http://dev.mysql.com/doc/refman/5.7/en/load-data.html
>
> LINES TERMINATED BY
>
>
> You could write a little awk or perl script to give the PROGRAM option
> of copy, but you can't do that within pgAdmin.
>
> Cheers,
>
> Jeff
>
>
>
> Whence the csv file?  If it starts out in spreadsheet, can you not export
> only the columns you want in the database?
>
> Google "postgres import tool" finds several options
>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] psql --quiet

2015-10-27 Thread Melvin Davidson
The only other option would be to redirect the output to a file.
EG:

psql -q -o some_file_to_send_output_to.log
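Note that -o only captures query output (stdout), while NOTICE messages are sent to stderr, so they still reach the terminal. A small stand-in demonstration (the function below merely fakes psql's stream behavior; it is not psql itself):

```shell
# Fake "psql": result on stdout, NOTICE on stderr, like the real client.
fake_psql() {
    echo "NOTICE:  pg_stop_backup complete, all required WAL segments have been archived" >&2
    echo "pg_stop_backup"
}

# -o style redirection captures only stdout; the NOTICE must be
# redirected separately if you want it off the terminal.
fake_psql > /tmp/result.log 2> /tmp/notices.log
cat /tmp/result.log
```

Alternatively, running `SET client_min_messages TO WARNING;` in the session suppresses NOTICE output entirely, without touching stderr.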

On Tue, Oct 27, 2015 at 10:16 AM, Leo Baltus <leo.bal...@npo.nl> wrote:

> postgresql-9.4.5
>
> When running pg_stop_backup() psql outputs on stderr:
>
> NOTICE:  pg_stop_backup complete, all required WAL segments have been
> archived
>
> --quiet does not seem to suppress it, what does?
>
> I really hate to redirect stderr to /dev/null.
>
> --
> Leo Baltus, internetbeheerder
> NPO ICT Internet Services
> Bart de Graaffweg 2, 1217 ZL Hilversum
> serviced...@omroep.nl, 035-6773555
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Recursive Arrays 101

2015-10-26 Thread Melvin Davidson
The law of O/S & databases:
For every Linux / PostgreSQL user, there is an equal and opposite Mac /
MySQL user.
However, the latter is completely useless.

On Mon, Oct 26, 2015 at 8:54 PM, David Blomstrom <david.blomst...@gmail.com>
wrote:

> That's exactly what I've been doing. I just did it again...
>
> Last login: Mon Oct 26 17:53:05 on ttys001
>
> Davids-MacBook-Pro-2:~ davidblomstrom$
> /Library/PostgreSQL/9.5/scripts/runpsql.sh; exit
>
> Server [localhost]: Server [localhost
>
> Database [postgres]: Database [postgres]:
>
> Port [5432]: Port [5432]:
>
> Username [postgres]: Username [postgres]:
>
> psql: warning: extra command-line argument "[5432]:" ignored
>
> psql: warning: extra command-line argument "[postgres]:" ignored
>
> psql: warning: extra command-line argument "Database" ignored
>
> psql: warning: extra command-line argument "[postgres]:" ignored
>
> psql: invalid port number: "Port"
>
>
> Press  to continue...
>
> On Mon, Oct 26, 2015 at 5:49 PM, Gavin Flower <
> gavinflo...@archidevsys.co.nz> wrote:
>
>> On 27/10/15 13:29, John R Pierce wrote:
>>
>>> On 10/26/2015 5:20 PM, David G. Johnston wrote:
>>>
>>>> What exactly are you showing us here?
>>>>
>>>
>>> he's demonstrating a lack of reading comprehension.   I'm done. thread
>>> on ignore.
>>>
>>>
>>>
>>> I think its proof that Apple products rot your brain!
>>
>>
>>
>>
>>
>
>
>
> --
> David Blomstrom
> Writer & Web Designer (Mac, M$ & Linux)
> www.geobop.org
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Service not starting on Ubuntu 15.04

2015-10-26 Thread Melvin Davidson
Just out of curiosity, is there anything in the postgresql.log which gives
an indication of a problem?

On Mon, Oct 26, 2015 at 11:18 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 10/26/2015 08:04 AM, Lasse Westh-Nielsen wrote:
>
>> Adrian,
>>
>> The service starts once the package is installed.
>>
>>   - It did that on Ubuntu14.04
>>   - On Ubuntu 15.04, `sudo service postgresql status` _claims_ it has
>> been started.
>>
>
> Yes, it starts but it is a go nowhere, do nothing service:
>
> $ cat /etc/systemd/system/multi-user.target.wants/postgresql.service
> # systemd service for managing all PostgreSQL clusters on the system.
> This
> # service is actually a systemd target, but we are using a service
> since
> # targets cannot be reloaded.
>
> [Unit]
> Description=PostgreSQL RDBMS
>
> [Service]
> Type=oneshot
> ExecStart=/bin/true
> ExecReload=/bin/true
> RemainAfterExit=on
>
> [Install]
> WantedBy=multi-user.target
>
> All it does is start /bin/true which is why the status shows it started,
> but Postgres does not run. My guess is the answer is in here:
>
>
> http://askubuntu.com/questions/632882/does-upstart-really-handle-all-startup-scripts
>
> Second answer
>
> " PostgreSQL comes with two systemd service unit files in version 15,
> /lib/systemd/system/postgresql@.service and
> /lib/systemd/system/postgresql.service. System 5 rc stuff is thus
> completely ignored."
>
>
> Best guess is /lib/systemd/system/postgresql@.service is the one that
> actually starts Postgres. Sort of confirmed by the comments in the service
> script above:
>
> "# systemd service for managing all PostgreSQL clusters on the system. This
> # service is actually a systemd target, but we are using a service since
> # targets cannot be reloaded."
>
> Pretty sure /lib/systemd/system/postgresql.service is just a placeholder
> script.
>
> At any rate Stuarts answer and your application of it confirms the service
> script is the issue.
>
>
>>
>>
>> On Mon, Oct 26, 2015 at 2:57 PM, Adrian Klaver
>> <adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>> wrote:
>>
>> On 10/26/2015 07:08 AM, Lasse Westh-Nielsen wrote:
>>
>> Adrian,
>>
>> Thanks. I know about the systemd change, and indeed the postgres
>> package
>> I end up with _has_ systemd integration already:
>>
>>  $ cat
>> /etc/systemd/system/multi-user.target.wants/postgresql.service
>>  # systemd service for managing all PostgreSQL clusters on the
>>  system. This
>>  # service is actually a systemd target, but we are using a
>> service since
>>  # targets cannot be reloaded.
>>
>>  [Unit]
>>  Description=PostgreSQL RDBMS
>>
>>  [Service]
>>  Type=oneshot
>>  ExecStart=/bin/true
>>  ExecReload=/bin/true
>>  RemainAfterExit=on
>>
>>  [Install]
>>  WantedBy=multi-user.target
>>
>>
>> The thing above the above is I do not see anything that starts
>> Postgres.
>>
>> Are you sure the same script is being called in cloud-init and
>> outside of it?
>>
>> Or more to the point, what script actually starts Postgres?
>>
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Where do I enter commands?

2015-10-25 Thread Melvin Davidson
Since you are just starting, you should probably familiarize yourself with
how PgAdmin works with PostgreSQL.
Therefore, it is best you refer to the documentation for PgAdmin.

PgAdmin III
http://www.pgadmin.org/docs/1.20/index.html

I also suggest you obtain a copy of one, or both, of the following two
books to help you with PostgreSQL.

Beginning Databases with PostgreSQL: From Novice to Professional
http://www.amazon.com/gp/product/1590594789

PostgreSQL Administration Essentials
http://www.amazon.com/gp/product/1783988983

You can also find a very extensive listing of other books about PostgreSQL
here:

http://www.postgresql.org/docs/books/

On Sun, Oct 25, 2015 at 5:21 AM, Joshua D. Drake <j...@commandprompt.com>
wrote:

> On 10/24/2015 09:19 PM, David Blomstrom wrote:
>
>> I'm a writer. I studied programing and MySQL so I could create websites
>> that I can publish my articles to. I don't have time to keep up with the
>> endless technology - MySQL, PDO, stored procedures, PHP, JavaScript,
>> JQuery, and on and on - especially when I have to work for a living.
>> I've been using MySQL for years, so I'm familiar with it. It therefore
>> makes sense for me to find a GUI as similar to MySQL as possible.
>>
>> With phpMyAdmin, I can easily create, modify, copy and migrate tables
>> between databases. If that can be done as easily with a
>> command-line-tool, even after surviving the learning curve, then I'm
>> interested. But it's really hard to imagine how that could be.
>>
>> Thanks for the tips.
>>
>
> I would ignore Rob, he obviously is suffering from a lack of coffee. Our
> community always tries to help new users. It is great to see you here.
>
> Sincerely,
>
> JD
>
>
> --
> Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> Announcing "I'm offended" is basically telling the world you can't
> control your own emotions, so everyone else should do it for you.
>
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-10-23 Thread Melvin Davidson
Good catch. Thank you very much!

On Fri, Oct 23, 2015 at 10:50 AM, Predrag Maric <predr...@gmail.com> wrote:

> The script is generating foreign keys which reference tables in the source
> schema.
> I had to change
>
> 219SELECT 'ALTER TABLE ' || quote_ident(dest_schema) || '.' ||
> quote_ident(rn.relname)
> 220  || ' ADD CONSTRAINT ' ||
> quote_ident(ct.conname) || ' ' || pg_get_constraintdef(ct.oid) || ';'
>
> to
>
> 219SELECT 'ALTER TABLE ' || quote_ident(dest_schema) || '.' ||
> quote_ident(rn.relname)
> 220 || ' ADD CONSTRAINT ' ||
> quote_ident(ct.conname) || ' ' || replace(pg_get_constraintdef(ct.oid),
> quote_ident(source_schema) || '.', quote_ident(dest_schema) || '.') || ';'
>
> I'm attaching the updated script.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] interperting type oid in C code

2015-10-19 Thread Melvin Davidson
SELECT typname
   FROM pg_type
 WHERE oid = Oid_x;

On Mon, Oct 19, 2015 at 2:36 PM, Ken Been <kbb...@gmail.com> wrote:

> I'm working on a foreign data wrapper and I want to switch based on the
> column type.  Specifically, if the column type in the external table is the
> same as in the (locally defined) foreign table then I can get some speedup
> for some types.
>
> Through the ForeignScanState object I can get TupleDesc and AttInMetadata
> opjects, and through there I can get the Oid of the column type, but now
> I'm stumped.  How can I programmatically check whether Oid X refers to type
> int4, or whatever?
>
> Thanks.
>
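For the C side, foreign data wrappers normally compare the column's atttypid against the fixed OID constants from catalog/pg_type.h (INT4OID, TEXTOID, and so on) instead of querying pg_type at runtime. A small Python sketch of such a dispatch table (only a few well-known built-in OIDs shown; a real wrapper would fall back to a catalog lookup for anything else):

```python
# OIDs of a few built-in types; these values are fixed and match the
# INT2OID/INT4OID/... constants in PostgreSQL's catalog/pg_type.h.
WELL_KNOWN_TYPE_OIDS = {
    16: 'bool',
    20: 'int8',
    21: 'int2',
    23: 'int4',
    25: 'text',
    701: 'float8',
    1043: 'varchar',
}

def type_name(oid):
    """Return the type name for a well-known OID, or None if unknown
    (a real FDW would fall back to a pg_type lookup or format_type())."""
    return WELL_KNOWN_TYPE_OIDS.get(oid)

print(type_name(23))  # int4
```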



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PSQL Tools

2015-10-18 Thread Melvin Davidson
PgAdmin works great on Windows and is also available for the Mac.

You can find both versions here:

http://www.pgadmin.org/download/windows.php
or
http://www.pgadmin.org/download/macosx.php

On Sun, Oct 18, 2015 at 6:37 PM, Nicolas Paris <nipari...@gmail.com> wrote:

>
>
> 2015-10-19 0:08 GMT+02:00 dinesh kumar <dineshkuma...@gmail.com>:
>
>> On Sun, Oct 18, 2015 at 7:04 AM, <jwienc...@comcast.net> wrote:
>>
>>> Hello
>>>
>>> Is anyone aware of any tools like TOAD that are available for Postgresql?
>>>
>>>
>> PgAdmin fits here.
>>
>>
>>> Regards
>>>
>>> John Wiencek
>>>
>>
>>
>>
>> --
>>
>> Regards,
>> Dinesh
>> manojadinesh.blogspot.com
>>
>
>
> I didn't know DBeaver, it looks great! (I personally have many
> bugs/freezes with pgadmin3 on Ubuntu; moreover, the interface is not
> user-friendly.)
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PSQL Tools

2015-10-18 Thread Melvin Davidson
PgAdmin seems to fit those requirements. It's free and available for
multiple platforms.

http://www.pgadmin.org/

What O/S do you wish to install on?



On Sun, Oct 18, 2015 at 10:48 AM, <jwienc...@comcast.net> wrote:

>
> Hello
>
> I am looking for a tool like TOAD for DBAs.  I would like something that
> can generate/extract DDL; report/manage tablespaces; create explain
> plans; create/manage users, synonyms, triggers, etc.
>
>
> regards
>
> John Wiencek
> --
> *From: *"Adrian Klaver" <adrian.kla...@aklaver.com>
> *To: *jwienc...@comcast.net, pgsql-general@postgresql.org
> *Sent: *Sunday, October 18, 2015 9:32:45 AM
> *Subject: *Re: [GENERAL] PSQL Tools
>
> On 10/18/2015 07:04 AM, jwienc...@comcast.net wrote:
> > Hello
> >
> > Is anyone aware of any tools like TOAD that are available for Postgresql?
>
> The universe of database management tools is large. In order to answer
> the above, what is it that you want to do?
>
> >
> > Regards
> >
> > John Wiencek
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] question

2015-10-15 Thread Melvin Davidson
In addition to stating exactly what you mean by "a long time" to pg_dump 77K
rows of your table:

What is your O/S and how much memory is on your system?
How many CPU's are in your system?
Also, what is your hard disk configuration?
What other applications are running simultaneously with pg_dump?
What is the value of shared_buffers & maintenance_work_mem in
postgresql.conf?

On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 10/14/2015 06:39 PM, anj patnaik wrote:
>
>> Hello,
>>
>> I recently downloaded postgres 9.4 and I have a client application that
>> runs in Tcl that inserts to the db and fetches records.
>>
>> For the majority of the time, the app will connect to the server to do
>> insert/fetch.
>>
>> For occasional use, we want to remove the requirement to have a server
>> db and just have the application retrieve data from a local file.
>>
>> I know I can use pg_dump to export the tables. The questions are:
>>
>> 1) is there an in-memory db instance or file based I can create that is
>> loaded with the dump file? This way the app code doesn't have to change.
>>
>
> No.
>
>
>> 2) does pg support embedded db?
>>
>
> No.
>
> 3) Or is my best option to convert the dump to sqlite and then import the
>> sqlite and have the app read that embedded db.
>>
>
> Sqlite tends to follow Postgres conventions, so you might be able to use
> the pg_dump output directly if you use --inserts or --column-inserts:
>
> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>
>
>> Finally, I am noticing pg_dump takes a lot of time to create a dump of
>> my table. right now, the table  has 77K rows. Are there any ways to
>> create automated batch files to create dumps overnight and do so quickly?
>>
>
> Define long time.
>
> What is the pg_dump command you are using?
>
> Sure use a cron job.
>
>
>> Thanks for your inputs!
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
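On the "automated batch files ... overnight" part of the question: a cron entry usually just runs pg_dump with a date-stamped output file. A minimal Python sketch of building such a command (paths and names are hypothetical; -Fc, the custom format, is recommended later in this thread):

```python
import datetime

def build_dump_command(dbname, table, outdir='/var/backups'):
    """Build a pg_dump argv producing a custom-format dump of one table
    with a timestamped file name (suitable for calling from a cron job
    via subprocess.run)."""
    stamp = datetime.datetime.now().strftime('%Y%m%d%H')
    outfile = '%s/%s-%s.dump' % (outdir, table, stamp)
    return ['pg_dump', '-Fc', '-t', table, '-f', outfile, dbname]

cmd = build_dump_command('postgres', 'recorder')
print(cmd)
```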



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] question

2015-10-15 Thread Melvin Davidson
The PostgreSQL default configuration is very conservative so as to ensure
it will work on almost any system.
However, based on your latest information, you should definitely adjust
shared_buffers = 4GB
maintenance_work_mem = 512MB

Note that you will need to restart PostgreSQL for this to take effect.

On Thu, Oct 15, 2015 at 4:35 PM, anj patnaik <patn...@gmail.com> wrote:

> Hello all,
> I will experiment with -Fc (custom). The file is already growing very
> large.
>
> I am running this:
> ./pg_dump -t RECORDER  -Fc postgres |  gzip > /tmp/dump
>
> Are there any other options for large tables to run faster and occupy less
> disk space?
>
> Below is memory info:
>
> [root@onxl5179 tmp]# cat /proc/meminfo
> MemTotal:   16333720 kB
> MemFree:  187736 kB
> Buffers:   79696 kB
> Cached: 11176616 kB
> SwapCached: 2024 kB
> Active: 11028784 kB
> Inactive:4561616 kB
> Active(anon):3839656 kB
> Inactive(anon):   642416 kB
> Active(file):7189128 kB
> Inactive(file):  3919200 kB
> Unevictable:   0 kB
> Mlocked:   0 kB
> SwapTotal:  33456120 kB
> SwapFree:   33428960 kB
> Dirty: 33892 kB
> Writeback: 0 kB
> AnonPages:   4332408 kB
> Mapped:   201388 kB
> Shmem:147980 kB
> Slab: 365380 kB
> SReclaimable: 296732 kB
> SUnreclaim:68648 kB
> KernelStack:5888 kB
> PageTables:37720 kB
> NFS_Unstable:  0 kB
> Bounce:0 kB
> WritebackTmp:  0 kB
> CommitLimit:41622980 kB
> Committed_AS:7148392 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:  179848 kB
> VmallocChunk:   34359548476 kB
> HardwareCorrupted: 0 kB
> AnonHugePages:   3950592 kB
> HugePages_Total:   0
> HugePages_Free:0
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
> DirectMap4k:   10240 kB
> DirectMap2M:16766976 kB
>
>
> # CPUs=8
> RHEL 6.5
>
> The PG shared memory info is the defaults as I've not touched the .conf
> file. I am not a DBA, just a test tools developer who needs to backup the
> table efficiently. I am fairly new to PG and not an expert at Linux.
>
> Also if there are recommended backup scripts/cron that you recommend,
> please point them to me.
>
> Thanks!!
>
> On Thu, Oct 15, 2015 at 3:59 PM, Scott Mead <sco...@openscg.com> wrote:
>
>>
>> On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge <
>> guilla...@lelarge.info> wrote:
>>
>>> 2015-10-15 20:40 GMT+02:00 anj patnaik <patn...@gmail.com>:
>>>
>>>> It's a Linux machine with 8 CPUs. I don't have the other details.
>>>>
>>>> I get archive member too large for tar format.
>>>>
>>>> Is there a recommended command/options when dealing with very large
>>>> tables, aka 150K rows and half of the rows have data being inserted with
>>>> 22MB?
>>>>
>>>>
>>> Don't use tar format? I never understood the interest on this one. You
>>> should better use the custom method.
>>>
>>
>> + 1
>>
>>  Use -F c
>>
>>
>> --
>> Scott Mead
>> Sr. Architect
>> *OpenSCG*
>> PostgreSQL, Java & Linux Experts
>>
>>
>> http://openscg.com
>>
>>
>>>
>>>
>>>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
>>>> pg_dump: [archiver (db)] connection to database "postgres" failed:
>>>> fe_sendauth: no password supplied
>>>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
>>>> Password:
>>>> pg_dump: [tar archiver] archive member too large for tar format
>>>> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>>>> -bash: pg_dumpall: command not found
>>>> -bash: tmpdb.out-2015101510.gz: Permission denied
>>>> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>>>>
>>>>
>>>> Thank you so much for replying and accepting my post to this NG.
>>>>
>>>> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson <melvin6...@gmail.com
>>>> > wrote:
>>>>
>>>>> In addition to exactly what you mean by "a long time" to pg_dump 77k
>>>>> of your table,
>>>>>
>>>>> What is your O/S and how much memory is on your system?
>>>>> How many CPU's are in your system?
>>>>&

Re: [GENERAL] question

2015-10-15 Thread Melvin Davidson
You stated you wanted to dump just one table, but your pg_dumpall command
dumps the whole cluster!

So if you truly want to dump just a single table, then change your command
to:

pg_dump postgres --format=t -t your_table_name -w  > /tmp/dump

Also, please explain why you cannot provide the other required information.
Are you not the DBA? If that is the case, then I can only encourage you to
consult with him/her.

On Thu, Oct 15, 2015 at 2:40 PM, anj patnaik <patn...@gmail.com> wrote:

> It's a Linux machine with 8 CPUs. I don't have the other details.
>
> I get archive member too large for tar format.
>
> Is there a recommended command/options when dealing with very large
> tables, aka 150K rows and half of the rows have data being inserted with
> 22MB?
>
> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
> pg_dump: [archiver (db)] connection to database "postgres" failed:
> fe_sendauth: no password supplied
> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
> Password:
> pg_dump: [tar archiver] archive member too large for tar format
> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
> -bash: pg_dumpall: command not found
> -bash: tmpdb.out-2015101510.gz: Permission denied
> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>
>
> Thank you so much for replying and accepting my post to this NG.
>
> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>> In addition to exactly what you mean by "a long time" to pg_dump 77k of
>> your table,
>>
>> What is your O/S and how much memory is on your system?
>> How many CPU's are in your system?
>> Also, what is your hard disk configuration?
>> What other applications are running simultaneously with pg_dump?
>> What is the value of shared_buffers & maintenance_work_mem in
>> postgresql.conf?
>>
>> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
>> adrian.kla...@aklaver.com> wrote:
>>
>>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>>
>>>> Hello,
>>>>
>>>> I recently downloaded postgres 9.4 and I have a client application that
>>>> runs in Tcl that inserts to the db and fetches records.
>>>>
>>>> For the majority of the time, the app will connect to the server to do
>>>> insert/fetch.
>>>>
>>>> For occasional use, we want to remove the requirement to have a server
>>>> db and just have the application retrieve data from a local file.
>>>>
>>>> I know I can use pg_dump to export the tables. The questions are:
>>>>
>>>> 1) is there an in-memory db instance or file based I can create that is
>>>> loaded with the dump file? This way the app code doesn't have to change.
>>>>
>>>
>>> No.
>>>
>>>
>>>> 2) does pg support embedded db?
>>>>
>>>
>>> No.
>>>
>>> 3) Or is my best option to convert the dump to sqlite and then import the
>>>> sqlite and have the app read that embedded db.
>>>>
>>>
>>> Sqlite tends to follow Postgres conventions, so you might be able to use
>>> the pg_dump output directly if you use --inserts or --column-inserts:
>>>
>>> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>>>
>>>
>>>> Finally, I am noticing pg_dump takes a lot of time to create a dump of
>>>> my table. right now, the table  has 77K rows. Are there any ways to
>>>> create automated batch files to create dumps overnight and do so
>>>> quickly?
>>>>
>>>
>>> Define long time.
>>>
>>> What is the pg_dump command you are using?
>>>
>>> Sure use a cron job.
>>>
>>>
>>>> Thanks for your inputs!
>>>>
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>>
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
>>>
>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Melvin Davidson
Can you connect as user postgres? IE: psql -U postgres -d 

If so, then you should have the ability to execute the commands without any
problem.

On Wed, Oct 7, 2015 at 9:53 AM, Adrian Klaver <adrian.kla...@aklaver.com>
wrote:

> On 10/07/2015 05:12 AM, Andrus wrote:
>
>> Hi!
>>
>> The objects can't be owned by nothing, so you will need to reassign
>>> ownership:
>>> REASSIGN OWNED BY old_role TO new_role;
>>> e.g.
>>> REASSIGN OWNED BY vantaa TO postgres;
>>> Then you can drop the role.
>>>
>>
>> User who deletes other users is not superuser. It is created using
>>
>> CREATE ROLE admin LOGIN
>>   NOSUPERUSER INHERIT CREATEDB CREATEROLE NOREPLICATION CONNECTION LIMIT
>> 100;
>> GRANT idd_owner TO admin;
>>
>>
>> I tried
>>
>
> So to be clear admin is doing the below, correct?
>
>
>> REASSIGN OWNED BY vantaa TO postgres;
>>
>> and
>>
>> REASSIGN OWNED BY vantaa TO idd_owner;
>>
>> but got  error
>>
>> permission denied to reassign objects .
>>
>
> Is the above a blanket error or does it mention specific objects?
>
>
>> How to fix ?
>>
>
> What you are trying to do is reverse what you did to get the present
> setup. Do you have a record/script that shows what you did to create the
> role and assign it to the objects?
>
>
>> I can add some rights to user who invokes this command if this helps.
>>
>> Andrus.
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Melvin Davidson
No. You need to be a superuser to reassign objects unless you own the
object.
You must also be a superuser to drop roles.

So:
1. First connect as user postgres.
2. REASSIGN all the objects owned by the missing user.
3. Then you can drop the missing user AFTER you have reassigned all the
objects they own. (If plain privileges still depend on the role -- e.g.
"privileges for schema public" -- DROP OWNED BY that role revokes them.)

On Wed, Oct 7, 2015 at 10:48 AM, Andrus <kobrule...@hot.ee> wrote:

> Hi!
>
> Can you connect as user postgres? IE: psql -U postgres -d 
>>
>
> Application has admin users which should be able to delete other users.
> Those users don't have superuser rights.
>
> I can connect as user postgres for testing only.
> I'm looking for a way to delete users without superuser right.
>
> If so, then you should have the ability to execute the commands without
>> any problem.
>>
>
> I tried in database ktp :
>
> reassign owned by farukkugay to postgres;
> drop user farukkugay ;
>
> This causes error
>
> ERROR:  role "farukkugay" cannot be dropped because some objects depend on
> it
> DETAIL:  privileges for schema public
>
> So even superuser cannot delete.
>
> Andrus.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to drop user if objects depend on it

2015-10-07 Thread Melvin Davidson
Andrus,

>is running by superuser but it still causes the error.

That does not sound right. Please verify you are running as user postgres
with:

SELECT current_user;

Then make sure postgres is still a superuser with:

SELECT rolname AS user,
       CASE WHEN rolcanlogin THEN 'user'
            ELSE 'group'
       END AS kind,
       CASE WHEN rolsuper THEN 'SUPERUSER'
            ELSE 'normal'
       END AS super
  FROM pg_authid
 WHERE rolname = 'postgres';

If you still get errors, then please show the exact error to us.


On Wed, Oct 7, 2015 at 11:11 AM, Andrus <kobrule...@hot.ee> wrote:

> Hi!
>
> No. You need to be a superuser to reassign objects unless you own the
>> object.
>> 1. first connect as user postgres
>> 2. REASSIGN all the tables owned by the missing user first.
>> 3. Then you can drop the missing user AFTER you have reassigned all the
>> objects they own.
>>
>
> Script
>
> reassign owned by farukkugay to postgres;
> drop user farukkugay ;
>
> is running by superuser but it still causes the error.
>
>
> You must also be a superuser to drop roles.
>>
>
> Non-superuser creates roles, assigns rights and drop users using scripts
> which I provided.
> Those scripts work OK on most cases.
>
> For some users (vantaa and farukkugan) the delete script causes the error I
> described. For farukkugan it also occurs when running as superuser.
>
> So it looks like it should be possible for non-superusers also.
>
> Andrus.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Processing data from table using awk.

2015-10-06 Thread Melvin Davidson
Your best bet is something like

#!/bin/bash

get_data ()
{
psql -h "$HOST" -p "$PORT" -U "$USER" -d "$DBNAME" <<_QUERY_
\o your_output_file
SELECT col1, col2, ..., coln
  FROM your_table
 WHERE ...;
\o
_QUERY_
}

get_data
awk -f your_script.awk your_output_file


On Tue, Oct 6, 2015 at 10:04 AM, John McKown <john.archie.mck...@gmail.com>
wrote:

> I'm wanting to do some reporting on data which I have an a PostgreSQL
> table. For lack of anything better, I've decided to see if I can do it in
> GNU awk. OK, using Perl with DBI might be a better idea, I'll grant you
> that. Or maybe Python or Ruby (which I don't know). But out of shear
> cussedness, I'm going to see what I can do in gawk. What I don't see is a
> way to get the data out of PostgreSQL and into my awk program. Does anybody
> know of a way to do this, short of "cheating" by using psql? Yes, I know
> that I could do something like (BASH on Linux/Fedora 22 x86_64):
>
> awk -f some-program.awk <(echo 'SELECT a, b, c, d FROM schema.table ORDER
> BY a' | psql dbname)
>
> What I was hoping for was a "Dynamic Extension" (dll) which would allow
> "native" use of PostgreSQL, or maybe ODBC. But I haven't found any. Have I
> missed anything? If I were to create such a beastie, would it be of any use
> to others? I guess, in this, I'm wondering what "report writer" most are
> using when psql just doesn't have sufficient capability.
>
> Yeah, sometimes I'm just plain stupid & pig headed. But then it will be a
> "learning experience" (with associated scars, I'm sure).
>
> --
>
> Schrodinger's backup: The condition of any backup is unknown until a
> restore is attempted.
>
> Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.
>
> He's about as useful as a wax frying pan.
>
> 10 to the 12th power microphones = 1 Megaphone
>
> Maranatha! <><
> John McKown
>
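Short of writing a gawk dynamic extension, the usual workaround is psql's unaligned, tuples-only mode (psql -A -t -F $'\t' ...), which emits one separator-delimited record per row for awk — or anything else — to consume. A sketch of parsing that output in Python, with the psql output simulated by a literal string:

```python
def parse_unaligned(text, sep='\t'):
    """Parse psql -A -t output: one row per line, fields separated by sep."""
    return [line.split(sep) for line in text.splitlines() if line]

# What `psql -A -t -F $'\t' -c "SELECT a, b FROM t"` might print
# (hypothetical data).
sample = "1\talpha\n2\tbeta\n"
rows = parse_unaligned(sample)
print(rows)  # [['1', 'alpha'], ['2', 'beta']]
```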



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] How to speed up delete where not in

2015-09-26 Thread Melvin Davidson
In addition to the previous recommendation, make sure you have an index on
dokumnr in table omrid.

EG:
CREATE INDEX omrid_dokumnr_fk ON omrid
  USING BTREE (dokumnr);



On Sat, Sep 26, 2015 at 7:33 AM, David Rowley <david.row...@2ndquadrant.com>
wrote:

> On 26 September 2015 at 19:53, Andrus <kobrule...@hot.ee> wrote:
>
>> Hi!
>>
>>
>> I'm looking for a way to delete records which do not have child rows on
>> big tables where lot of rows needs to be deleted. Both tables have lot of
>> other foreign key references.
>>
>>
>> Document headers are in omdok table:
>>
>>create table omdok ( dokumnr serial primary key, ... );
>>
>> Document rows are in omrid table
>>
>>CREATE TABLE omrid
>>(
>>  id serial NOT NULL,
>>  reanr serial NOT NULL,
>>  dokumnr integer NOT NULL,
>>  CONSTRAINT omrid_pkey PRIMARY KEY (id),
>>  CONSTRAINT omrid_dokumnr_fkey FOREIGN KEY (dokumnr)
>>  REFERENCES omdok (dokumnr) MATCH SIMPLE
>>  ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY
>> IMMEDIATE,
>>  
>>);
>>
>>
>> I tried
>>
>>delete from omdok where dokumnr not in  (select dokumnr from omrid)
>>
>> Query it is running currently 15 hours and is still running.
>> postgres.exe is using 50% CPU  all the time (this is 2 core CPU).
>>
>>explain delete from omdok where dokumnr not in  (select dokumnr
>> from omrid)
>>
>> returns:
>>
>>"Delete  (cost=0.00..21971079433.34 rows=220815 width=6)"
>>"  ->  Seq Scan on omdok  (cost=0.00..21971079433.34 rows=220815
>> width=6)"
>>"Filter: (NOT (SubPlan 1))"
>>"SubPlan 1"
>>"  ->  Materialize  (cost=0.00..94756.92 rows=1897261 width=4)"
>>"->  Seq Scan on omrid  (cost=0.00..77858.61
>> rows=1897261 width=4)"
>>
>> - How to delete parents without child rows fast?
>>
>
> If you write the query with a NOT EXISTS rather than a NOT IN(), it
> should complete much more quickly.
>
> It's important to know that the semantics of NOT IN are likely not at all
> what you think:
>
> For example, in the following query, would you expect it to return the row
> with a.a = 3 ?
>
> select * from (values(1),(2),(3)) a(a) WHERE a NOT IN(SELECT b FROM
> (VALUES(NULL),(1),(2)) b(b));
>
> The presence of NULL causes this to not behave the way you might think,
> yet it works exactly the way the SQL standard dictates.
>
> You could think of this as equivalent to writing:
>
> select * from (values(1),(2),(3)) a(a) WHERE a.a <> NULL AND a.a <> 1 AND
> a.a <> 2;
>
> And since a.a <> NULL is 'unknown', this causes the entire WHERE clause to
> be false, therefore nothing matches.
>
> For this reason PostgreSQL does not optimise NOT IN() the same way as it
> optimises NOT EXISTS().
>
> If you write the query as:
>
> delete from omdok where not exists(select 1 from omrid where omdok.dokumnr
> = omrid.dokumnr);
>
> then you might see it execute in a few seconds. Perhaps you should
> consider cancelling the current query, perhaps perform a VACUUM on omdoc
> after cancelling, and then run the NOT EXISTS version instead.
>
> I mocked up your tables locally, and populated them with the same number
> of records as your row estimates in the EXPLAIN you pasted and I got:
>
> test=# delete from omdok where not exists (select 1 from omrid where
> omdok.dokumnr = omrid.dokumnr);
> DELETE 0
> Time: 1698.233 ms
>
> Whereas with the NOT IN() I cancelled it after 10 minutes.
>
> Regards
>
> David Rowley
>
> --
>  David Rowley   http://www.2ndQuadrant.com/
> <http://www.2ndquadrant.com/>
>  PostgreSQL Development, 24x7 Support, Training & Services
>
>
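David's NULL trap can be reproduced with any engine that follows the standard's three-valued logic; here is a sketch using Python's bundled SQLite as a stand-in for PostgreSQL (the NOT IN semantics are the same):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    CREATE TABLE a(a INT);
    INSERT INTO a VALUES (1), (2), (3);
    CREATE TABLE b(b INT);
    INSERT INTO b VALUES (NULL), (1), (2);
""")

# NOT IN against a set containing NULL: every comparison involving the
# NULL is 'unknown', so no row can pass the filter.
not_in = con.execute(
    "SELECT a FROM a WHERE a NOT IN (SELECT b FROM b)").fetchall()

# NOT EXISTS is not tripped up by the NULL and returns the expected row.
not_exists = con.execute(
    "SELECT a FROM a WHERE NOT EXISTS "
    "(SELECT 1 FROM b WHERE b.b = a.a)").fetchall()

print(not_in)      # []
print(not_exists)  # [(3,)]
```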


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Convert number to string

2015-09-24 Thread Melvin Davidson
Postgresql has a translate function:

So you can create a pl/pgsql function that:
A: converts the numeric to text:  txtnum := 12345::text;
B: translates it, e.g.:  transnum := translate(txtnum, '123456789', 'FGHIJKLMN');
   e.g. SELECT translate('31241', '12345', 'FGHIJ'); = HFGIF

You would then need a little further processing to determine the tens,
hundreds, etc.

I'll leave that to you to work out, but these additional string functions
would seem helpful:

strpos(string, substring)
substr(string, from [, count])
length(string)

http://www.postgresql.org/docs/9.2/interactive/functions-string.html


On Thu, Sep 24, 2015 at 5:35 AM, Francisco Olarte <fola...@peoplecall.com>
wrote:

> Forgot replying to all, forwarding to the list, sorree for the potential
> dupes.
>
>
> Hi Hengky:
>
> On Thu, Sep 24, 2015 at 10:09 AM, Hengky Liwandouw
> <hengkyliwand...@gmail.com> wrote:
> >
> > I don’t have any idea how can I use postgres function to convert series
> of number (currency) to my own string.
> >
> >
> >
> > My string : F G H I J K L M N  as the replacement for number : 1
> 2 3 4 5 6 7 8 9
> >
> > Dozens = O
> >
> > Hundreds = P
> >
> > Thousands = C
> >
> > Ten thousands = B
> >
> >
> >
> > So…
> >
> > 200 = GP
> >
> > 2000 = GC
> >
> > 1150 = FFJO
> >
> > 30000 = HB
> >
> >
> >
> > Any idea ?
>
>
> Your example is severely underspecified: how do you translate 1024? And
> 1002? And 10?
>
> given the numbers in the example you can use a simple trick. 1st
> replace 0 with 'o' via translate (I'm not using capitals in the
> example for better 0-o contrast):
>
> # with data(s) as (values('200'),('2000'),('1150'),('30000')) select
> translate(s,'0123456789','ofghijklmn') from data;
>  translate
> ---
>  goo
>  gooo
>  ffjo
>  hoooo
> (4 rows)
>
> then replace sequences of 'o' starting with the longer ones:
>
> # with source(s) as (values('200'),('2000'),('1150'),('30000'))
> , step1(s) as (select translate(s,'0123456789','ofghijklmn') from source)
> , step2(s) as (select replace(s,'oooo','b') from step1)
> , step3(s) as (select replace(s,'ooo','c') from step2)
> , step4(s) as (select replace(s,'oo','p') from step3)
> select * from step4;
>   s
> --
>  gp
>  gc
>  ffjo
>  hb
> (4 rows)
>
> classical trick. But, as I said above, you need to specify it much better.
>
> Francisco Olarte.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
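Francisco's translate-then-replace trick maps directly to a few lines of Python (same caveat as his: inputs like 1024 with interior zeros are underspecified and not handled):

```python
def encode(n):
    """Encode per the thread's scheme: digits 1-9 -> F..N, then collapse
    trailing-zero runs into B (ten thousands), C (thousands), P (hundreds);
    a lone zero stays O (dozens/tens)."""
    s = str(n).translate(str.maketrans('0123456789', 'OFGHIJKLMN'))
    # Replace longer zero-runs first, exactly like the chained replace()s.
    for run, mark in (('OOOO', 'B'), ('OOO', 'C'), ('OO', 'P')):
        s = s.replace(run, mark)
    return s

for n in (200, 2000, 1150, 30000):
    print(n, encode(n))  # 200 GP / 2000 GC / 1150 FFJO / 30000 HB
```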



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres 9.3

2015-09-21 Thread Melvin Davidson
You are correct John, but in this case, he stated only one (1) database. So
changing the search_path in postgresql.conf simplifies things for all
users.
However, should he create additional databases later on, then yes, your
solution would be better.

On Mon, Sep 21, 2015 at 3:08 PM, Alvaro Herrera <alvhe...@2ndquadrant.com>
wrote:

> John R Pierce wrote:
>
> > better would be to...
> >
> > ALTER ROLE username SET SEARCH_PATH='preview,"$user", public';
> > or
> > ALTER DATABASE dbname SET...;
> >
> > and then this change just applies to that named role or database...
>
> (or
>ALTER ROLE username IN DATABASE dbname SET ..
> which applies to the specified role in the specified database)
>
> --
> Álvaro Herrerahttp://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] postgres 9.3

2015-09-21 Thread Melvin Davidson
You cannot directly "connect" one schema with another,
but you can set the default to have both in the search_path.
Uncomment the #search_path parameter in postgresql.conf
and change it to

search_path = 'preview, "$user", public'

Then make sure you reload the configuration:
sudo su - postgres
pg_ctl reload [-D datadir]

On Sat, Sep 19, 2015 at 4:36 AM, Ramesh T <rameshparnandit...@gmail.com>
wrote:

> I have one database with two schemas, public and preview, and a role
> preview. I want the preview role to use the preview schema automatically.
>
> set search_path to preview.
> show search_path;
> ..
> preview.
>
>
> When I disconnect and reconnect to the database, it shows the
> "$user",public search_path again, not preview.
>
> What is the problem? How do I resolve the issue?
>
>
> When the session is closed it shows the postgres user again, even after setting preview.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-20 Thread Melvin Davidson
Additional revision to avoid duplicating RI_Constraint triggers.


On Fri, Sep 18, 2015 at 4:09 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

> Revised to add rules after all tables are created, to avoid an error where
> a table referenced in a rule was not created yet.
>
> Added copying of column statistics with thanks to Marc Mamin  for pointing
> that out.
>
>
> On Thu, Sep 17, 2015 at 12:06 PM, Marc Mamin <m.ma...@intershop.de> wrote:
>
>>
>> --
>> *Von:* Melvin Davidson [melvin6...@gmail.com]
>> *Gesendet:* Donnerstag, 17. September 2015 17:11
>> *An:* Marc Mamin
>> *Cc:* pgsql-general@postgresql.org
>> *Betreff:* Re: [GENERAL] clone_schema function
>>
>> Thanks,
>>
>> >I'm not sure why you had trouble with the REPLACE(), as I did extensive
>> testing and it was working as coded.
>>
>> might be that my modification is required when ( and only when ?) the
>> source_schema is not part of the current search_path.
>> This is just a guess, I only gave your code a quick try ...
>>
>> >As for SET STATISTICS, I'm guessing that is a glitch in the CREATE TABLE
>>  LIKE option.
>> Yes, we can see it as an incomplete feature.
>>
>> regards,
>>
>> Marc Mamin
>>
>> On Thu, Sep 17, 2015 at 11:05 AM, Marc Mamin <m.ma...@intershop.de>
>> wrote:
>>
>>> Hello,
>>>
>>> I had to make 2 changes to get it running:
>>>
>>>
>>>
>>> line 193:
>>>
>>> -  REPLACE(column_default::text, quote_ident(source_schema) || '.',
>>> quote_ident(dest_schema || '.') )
>>>
>>> + REPLACE(column_default::text, quote_ident(source_schema) || '.',
>>> quote_ident(dest_schema) || '.' )
>>>
>>>
>>>
>>> line 319
>>>
>>> -  SELECT replace(v_def, ' ON ', ' ON ' || quote_ident(dest_schema) ||
>>> '.') INTO dest_qry;
>>>
>>> + SELECT replace(v_def, ' ON ' || quote_ident(source_schema), ' ON ' ||
>>> quote_ident(dest_schema) ) INTO dest_qry;
>>>
>>>
>>>
>>>
>>>
>>> moreover, you don't take care of the column statistic targets
>>>
>>> (i.e. ALTER TABLE t ALTER COLUMN c SET STATISTICS n;)
>>>
>>>
>>>
>>>
>>>
>>> regards,
>>>
>>>
>>>
>>> Marc Mamin
>>>
>>>
>>>
>>>
>>>
>>> *From:* pgsql-general-ow...@postgresql.org [mailto:
>>> pgsql-general-ow...@postgresql.org] *On Behalf Of *Melvin Davidson
>>> *Sent:* Donnerstag, 17. September 2015 15:48
>>> *To:* David G. Johnston
>>> *Cc:* Igor Neyman; Jim Nasby; Daniel Verite;
>>> pgsql-general@postgresql.org
>>> *Subject:* Re: [GENERAL] clone_schema function
>>>
>>>
>>>
>>> Attached is hopefully the final version of
>>>
>>>  FUNCTION clone_schema(text, text, boolean)
>>>
>>>  This function now does the following:
>>>  1. Checks that the source schema exists and the destination does not.
>>>  2. Creates the destination schema
>>>  3. Copies all sequences, tables, indexes, rules, triggers,
>>> data(optional),
>>>  views & functions from the source schema to the destination schema
>>>  4. Optionally copies records from source schema tables to destination
>>> tabled. (boolean)
>>>  5. Copies comments for source schema and all sequences, tables,
>>> functions, rules   and triggers;
>>>
>>>  If you discover a problem with this function, then kindly advise me
>>> what it is
>>>  and attach a script (SQL dump) to duplicate it. If you also have a fix,
>>> that is
>>>  even better.
>>>
>>>  However, if you "think" there is a problem that occurs when
>>>  A. The moon is full
>>>  B. You have blood type A/B negative
>>>  C. You have a table the same name as your database and schema
>>>  D. All your tables have column "id" in them
>>>  E. You've had 16 beers and 4 oxycodones
>>>  F. Your meth supplier raised the price
>>>
>>>  then do not contact me. Instead, run, do not walk,  immediately to your
>>>  psychologist, as you have serious issues in addition to database design
>>> problems
>>>  and you should not use this function under any circumstance.
>>>
>>>  CAVEAT EMPTOR!
>>>  The only known problem with this script is if functions in the source
>>> schema have a SELECT using the form of tablename.column, and tablename is
>>> the same as the source schema; then tablename will be changed to the
>>> destination schema name. However, since good developers and DBAs use the
>>> form of alias.column, this should rarely be a problem.

Re: [GENERAL] clone_schema function

2015-09-18 Thread Melvin Davidson
Revised to add rules after all tables are created, to avoid an error where a
table referenced in a rule was not yet created.

Added copying of column statistics, with thanks to Marc Mamin for pointing
that out.


On Thu, Sep 17, 2015 at 12:06 PM, Marc Mamin <m.ma...@intershop.de> wrote:

>
> --
> *Von:* Melvin Davidson [melvin6...@gmail.com]
> *Gesendet:* Donnerstag, 17. September 2015 17:11
> *An:* Marc Mamin
> *Cc:* pgsql-general@postgresql.org
> *Betreff:* Re: [GENERAL] clone_schema function
>
> Thanks,
>
> >I'm not sure why you had trouble with the REPLACE(), as I did extensive
> testing and it was working as coded.
>
> It might be that my modification is required when (and only when) the
> source_schema is not part of the current search_path.
> This is just a guess; I only gave your code a quick try ...
>
> >As for SET STATISTICS, I'm guessing that is a glitch in the CREATE TABLE
>  LIKE option.
> Yes, we can see it as an incomplete feature.
>
> regards,
>
> Marc Mamin
>
> On Thu, Sep 17, 2015 at 11:05 AM, Marc Mamin <m.ma...@intershop.de> wrote:
>
>> Hello,
>>
>> I had to make 2 changes to get it running:
>>
>>
>>
>> line 193:
>>
>> -  REPLACE(column_default::text, quote_ident(source_schema) || '.',
>> quote_ident(dest_schema || '.') )
>>
>> + REPLACE(column_default::text, quote_ident(source_schema) || '.',
>> quote_ident(dest_schema) || '.' )
>>
>>
>>
>> line 319
>>
>> -  SELECT replace(v_def, ' ON ', ' ON ' || quote_ident(dest_schema) ||
>> '.') INTO dest_qry;
>>
>> + SELECT replace(v_def, ' ON ' || quote_ident(source_schema), ' ON ' ||
>> quote_ident(dest_schema) ) INTO dest_qry;
>>
>>
>>
>>
>>
>> moreover, you don't take care of the column statistic targets
>>
>> (i.e. ALTER TABLE t ALTER COLUMN c SET STATISTICS n;)
>>
>>
>>
>>
>>
>> regards,
>>
>>
>>
>> Marc Mamin
>>
>>
>>
>>
>>
>> *From:* pgsql-general-ow...@postgresql.org [mailto:
>> pgsql-general-ow...@postgresql.org] *On Behalf Of *Melvin Davidson
>> *Sent:* Donnerstag, 17. September 2015 15:48
>> *To:* David G. Johnston
>> *Cc:* Igor Neyman; Jim Nasby; Daniel Verite; pgsql-general@postgresql.org
>> *Subject:* Re: [GENERAL] clone_schema function
>>
>>
>>
>> Attached is hopefully the final version of
>>
>>  FUNCTION clone_schema(text, text, boolean)
>>
>>  This function now does the following:
>>  1. Checks that the source schema exists and the destination does not.
>>  2. Creates the destination schema
>>  3. Copies all sequences, tables, indexes, rules, triggers,
>> data(optional),
>>  views & functions from the source schema to the destination schema
>>  4. Optionally copies records from source schema tables to destination
>> tables (boolean).
>>  5. Copies comments for source schema and all sequences, tables,
>> functions, rules   and triggers;
>>
>>  If you discover a problem with this function, then kindly advise me what
>> it is
>>  and attach a script (SQL dump) to duplicate it. If you also have a fix,
>> that is
>>  even better.
>>
>>  However, if you "think" there is a problem that occurs when
>>  A. The moon is full
>>  B. You have blood type A/B negative
>>  C. You have a table the same name as your database and schema
>>  D. All your tables have column "id" in them
>>  E. You've had 16 beers and 4 oxycodones
>>  F. Your meth supplier raised the price
>>
>>  then do not contact me. Instead, run, do not walk,  immediately to your
>>  psychologist, as you have serious issues in addition to database design
>> problems
>>  and you should not use this function under any circumstance.
>>
>>  CAVEAT EMPTOR!
>>  The only known problem with this script is if functions in the source
>> schema have a SELECT using the form of tablename.column, and tablename is
>> the same as the source schema; then tablename will be changed to the
>> destination schema name. However, since good developers and DBAs use the
>> form of alias.column, this should rarely be a problem.
>>
>>
>>
>> On Tue, Sep 15, 2015 at 12:37 PM, Melvin Davidson <melvin6...@gmail.com>
>> wrote:
>>
>> David,
>>
>> Yes, it would be nice, but
>>
>> 1. I am still working also on bringing over the comments for various
>> objects
>>
>> 2. What you request is currently beyond my capability. Not to mention that
>> there already are existing tools that do that, albeit they are not free.

Re: [GENERAL] clone_schema function

2015-09-17 Thread Melvin Davidson
Attached is hopefully the final version of
 FUNCTION clone_schema(text, text, boolean)

 This function now does the following:
 1. Checks that the source schema exists and the destination does not.
 2. Creates the destination schema
 3. Copies all sequences, tables, indexes, rules, triggers, data(optional),
 views & functions from the source schema to the destination schema
 4. Optionally copies records from source schema tables to destination
tables (boolean).
 5. Copies comments for source schema and all sequences, tables, functions,
rules   and triggers;

 If you discover a problem with this function, then kindly advise me what
it is
 and attach a script (SQL dump) to duplicate it. If you also have a fix,
that is
 even better.

 However, if you "think" there is a problem that occurs when
 A. The moon is full
 B. You have blood type A/B negative
 C. You have a table the same name as your database and schema
 D. All your tables have column "id" in them
 E. You've had 16 beers and 4 oxycodones
 F. Your meth supplier raised the price

 then do not contact me. Instead, run, do not walk,  immediately to your
 psychologist, as you have serious issues in addition to database design
problems
 and you should not use this function under any circumstance.

 CAVEAT EMPTOR!
 The only known problem with this script is if functions in the source
schema have a SELECT using the form of tablename.column, and tablename is
the same as the source schema; then tablename will be changed to the
destination schema name. However, since good developers and DBAs use the
form of alias.column, this should rarely be a problem.

On Tue, Sep 15, 2015 at 12:37 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

> David,
>
> Yes, it would be nice, but
> 1. I am still working also on bringing over the comments for various
> objects
> 2. What you request is currently beyond my capability. Not to mention that
> there already
> are existing tools that do that, albeit they are not free.
>
> On Tue, Sep 15, 2015 at 12:27 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> To make the casual user's life easier, in the face of this reality, it
>> would be nice if the routine would generate a reasonably attempted "diff"
>> between the two so that all changes can be reviewed in a structured manner
>> aided by correctly configured tools and advice.
>>
>> On Tue, Sep 15, 2015 at 12:20 PM, Melvin Davidson <melvin6...@gmail.com>
>> wrote:
>>
>>> Igor,
>>> I understand your point, however, I have spent over a week making a
>>> function
>>> that previously did very little do a lot.
>>> Naming a table the same as a schema is a very silly idea.
>>>
>>> Unless you care to take the time to provide a full
>>> schema and function that fails for a reasonable, practical design,
>>> I will ignore all further comments.
>>>
>>> On Tue, Sep 15, 2015 at 9:55 AM, Igor Neyman <iney...@perceptron.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> That is correct. But table old will NOT be converted to new because
>>>>
>>>> only the schema name is converted. And table "old" WILL exist because
>>>> it will also be copied.
>>>>
>>>> I have tested and it works properly.
>>>>
>>>> Please do not provide hypothetical examples. Give me an actual working
>>>> example that causes the problem.
>>>>
>>>> This statement:
>>>>
>>>> SELECT old.field FROM old.old;
>>>>
>>>> selects column “field” from table “old” which is in schema “old”.
>>>>
>>>> Your script converts it into:
>>>>
>>>> SELECT new.field FROM new.old
>>>>
>>>> which will try to select column “field” from table “old” in schema
>>>> “new”.
>>>>
>>>>
>>>>
>>>> Again:
>>>>
>>>> SELECT new.field
>>>>
>>>> means select column “field” from table “new”, which does not exist.
>>>>
>>>> Not sure what other example you need.
>>>>
>>>> Regards,
>>>>
>>>> Igor Neyman
>>>>
>>>>
>>>
>>>
>>> --
>>> *Melvin Davidson*
>>> I reserve the right to fantasize.  Whether or not you
>>> wish to share my fantasy is entirely up to you.
>>>
>>
>>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


clone_schema.sql
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] clone_schema function

2015-09-17 Thread Melvin Davidson
Thanks,

I'm not sure why you had trouble with the REPLACE(), as I did extensive
testing and it was working as coded.

As for SET STATISTICS, I'm guessing that is a glitch in the CREATE TABLE
 LIKE option.
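The statistics-target gap Marc describes means a clone script has to emit the ALTER statements itself. A minimal sketch in Python (the helper name and the row shape are illustrative, not part of the original script; in practice the rows would come from pg_attribute where attstattarget is set for the source schema):

```python
def stats_targets_ddl(dest_schema, rows):
    """Build ALTER TABLE ... SET STATISTICS statements for the cloned schema.

    rows: iterable of (table, column, target) tuples, e.g. gathered from
    pg_attribute for columns whose statistics target was set explicitly.
    """
    return [
        f'ALTER TABLE {dest_schema}.{tbl} ALTER COLUMN {col} SET STATISTICS {n};'
        for tbl, col, n in rows
    ]

# One explicit per-column target in the source schema becomes one statement:
print(stats_targets_ddl('new_schema', [('orders', 'customer_id', 500)]))
```

Each generated statement would then be run with EXECUTE inside the function, alongside the existing table-creation loop.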

On Thu, Sep 17, 2015 at 11:05 AM, Marc Mamin <m.ma...@intershop.de> wrote:

> Hello,
>
> I had to make 2 changes to get it running:
>
>
>
> line 193:
>
> -  REPLACE(column_default::text, quote_ident(source_schema) || '.',
> quote_ident(dest_schema || '.') )
>
> + REPLACE(column_default::text, quote_ident(source_schema) || '.',
> quote_ident(dest_schema) || '.' )
>
>
>
> line 319
>
> -  SELECT replace(v_def, ' ON ', ' ON ' || quote_ident(dest_schema) ||
> '.') INTO dest_qry;
>
> + SELECT replace(v_def, ' ON ' || quote_ident(source_schema), ' ON ' ||
> quote_ident(dest_schema) ) INTO dest_qry;
>
>
>
>
>
> moreover, you don't take care of the column statistic targets
>
> (i.e. ALTER TABLE t ALTER COLUMN c SET STATISTICS n;)
>
>
>
>
>
> regards,
>
>
>
> Marc Mamin
>
>
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] *On Behalf Of *Melvin Davidson
> *Sent:* Donnerstag, 17. September 2015 15:48
> *To:* David G. Johnston
> *Cc:* Igor Neyman; Jim Nasby; Daniel Verite; pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] clone_schema function
>
>
>
> Attached is hopefully the final version of
>
>  FUNCTION clone_schema(text, text, boolean)
>
>  This function now does the following:
>  1. Checks that the source schema exists and the destination does not.
>  2. Creates the destination schema
>  3. Copies all sequences, tables, indexes, rules, triggers,
> data(optional),
>  views & functions from the source schema to the destination schema
>  4. Optionally copies records from source schema tables to destination
> tables (boolean).
>  5. Copies comments for source schema and all sequences, tables,
> functions, rules   and triggers;
>
>  If you discover a problem with this function, then kindly advise me what
> it is
>  and attach a script (SQL dump) to duplicate it. If you also have a fix,
> that is
>  even better.
>
>  However, if you "think" there is a problem that occurs when
>  A. The moon is full
>  B. You have blood type A/B negative
>  C. You have a table the same name as your database and schema
>  D. All your tables have column "id" in them
>  E. You've had 16 beers and 4 oxycodones
>  F. Your meth supplier raised the price
>
>  then do not contact me. Instead, run, do not walk,  immediately to your
>  psychologist, as you have serious issues in addition to database design
> problems
>  and you should not use this function under any circumstance.
>
>  CAVEAT EMPTOR!
>  The only known problem with this script is if functions in the source
> schema have a SELECT using the form of tablename.column, and tablename is
> the same as the source schema; then tablename will be changed to the
> destination schema name. However, since good developers and DBAs use the
> form of alias.column, this should rarely be a problem.
>
>
>
> On Tue, Sep 15, 2015 at 12:37 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
> David,
>
> Yes, it would be nice, but
>
> 1. I am still working also on bringing over the comments for various
> objects
>
> 2. What you request is currently beyond my capability. Not to mention that
> there already
>
> are existing tools that do that, albeit they are not free.
>
>
>
> On Tue, Sep 15, 2015 at 12:27 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
> To make the casual user's life easier, in the face of this reality, it
> would be nice if the routine would generate a reasonably attempted "diff"
> between the two so that all changes can be reviewed in a structured manner
> aided by correctly configured tools and advice.
>
>
>
> On Tue, Sep 15, 2015 at 12:20 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
> Igor,
> I understand your point, however, I have spent over a week making a
> function
> that previously did very little do a lot.
>
> Naming a table the same as a schema is a very silly idea.
>
> Unless you care to take the time to provide a full
>
> schema and function that fails for a reasonable, practical design,
>
> I will ignore all further comments.
>
>
>
> On Tue, Sep 15, 2015 at 9:55 AM, Igor Neyman <iney...@perceptron.com>
> wrote:
>
>
>
> That is correct. But table old will NOT be converted to new because
>
> only the schema name is converted. And table "old" WILL e

Re: [GENERAL] clone_schema function

2015-09-15 Thread Melvin Davidson
Igor,
I understand your point, however, I have spent over a week making a
function
that previously did very little do a lot.
Naming a table the same as a schema is a very silly idea.

Unless you care to take the time to provide a full
schema and function that fails for a reasonable, practical design,
I will ignore all further comments.

On Tue, Sep 15, 2015 at 9:55 AM, Igor Neyman <iney...@perceptron.com> wrote:

>
>
> That is correct. But table old will NOT be converted to new because
>
> only the schema name is converted. And table "old" WILL exist because it
> will also be copied.
>
> I have tested and it works properly.
>
> Please do not provide hypothetical examples. Give me an actual working
> example that causes the problem.
>
> This statement:
>
> SELECT old.field FROM old.old;
>
> selects column “field” from table “old” which is in schema “old”.
>
> Your script converts it into:
>
> SELECT new.field FROM new.old
>
> which will try to select column “field” from table “old” in schema “new”.
>
>
>
> Again:
>
> SELECT new.field
>
> means select column “field” from table “new”, which does not exist.
>
> Not sure what other example you need.
>
> Regards,
>
> Igor Neyman
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-15 Thread Melvin Davidson
David,

Yes, it would be nice, but
1. I am still working also on bringing over the comments for various objects
2. What you request is currently beyond my capability. Not to mention that
there already
are existing tools that do that, albeit they are not free.

On Tue, Sep 15, 2015 at 12:27 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> To make the casual user's life easier, in the face of this reality, it
> would be nice if the routine would generate a reasonably attempted "diff"
> between the two so that all changes can be reviewed in a structured manner
> aided by correctly configured tools and advice.
>
> On Tue, Sep 15, 2015 at 12:20 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>> Igor,
>> I understand your point, however, I have spent over a week making a
>> function
>> that previously did very little do a lot.
>> Naming a table the same as a schema is a very silly idea.
>>
>> Unless you care to take the time to provide a full
>> schema and function that fails for a reasonable, practical design,
>> I will ignore all further comments.
>>
>> On Tue, Sep 15, 2015 at 9:55 AM, Igor Neyman <iney...@perceptron.com>
>> wrote:
>>
>>>
>>>
>>> That is correct. But table old will NOT be converted to new because
>>>
>>> only the schema name is converted. And table "old" WILL exist because it
>>> will also be copied.
>>>
>>> I have tested and it works properly.
>>>
>>> Please do not provide hypothetical examples. Give me an actual working
>>> example that causes the problem.
>>>
>>> This statement:
>>>
>>> SELECT old.field FROM old.old;
>>>
>>> selects column “field” from table “old” which is in schema “old”.
>>>
>>> Your script converts it into:
>>>
>>> SELECT new.field FROM new.old
>>>
>>> which will try to select column “field” from table “old” in schema
>>> “new”.
>>>
>>>
>>>
>>> Again:
>>>
>>> SELECT new.field
>>>
>>> means select column “field” from table “new”, which does not exist.
>>>
>>> Not sure what other example you need.
>>>
>>> Regards,
>>>
>>> Igor Neyman
>>>
>>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-15 Thread Melvin Davidson
That is correct. But table old will NOT be converted to new because
only the schema name is converted. And table "old" WILL exist because it
will also be copied.

I have tested and it works properly.

Please do not provide hypothetical examples. Give me an actual working
example that causes the problem.

On Tue, Sep 15, 2015 at 9:39 AM, Igor Neyman <iney...@perceptron.com> wrote:

> I still do not see any problem. The whole purpose of the function is to
> copy ALL sequences, tables and functions to "new" schema, so new.old WILL
> exist.
>
>
> I don't see how you can possibly write a function that references a schema
> that does not yet exist!
>
> Again, please provide a _working_ example of what you think the problem is.
>
>
>
> Melvin,
>
>
>
> This statement:
>
>
>
> SELECT old.field FROM old.old;
>
>
>
> selects column “field” from table “old” which is in schema “old”.
>
>
>
> Your script converts it into:
>
>
>
> SELECT new.field FROM new.old
>
>
>
> which will try to select column “field” from table “new” in schema “new”.
>
> The obvious problem is that there is no table “new” in schema “new”, the
> table will still be called “old”.
>
>
>
> Jim’s example is very similar to what I provided a few days ago.
>
>
>
> Regards,
>
> Igor Neyman
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-15 Thread Melvin Davidson
I still do not see any problem. The whole purpose of the function is to
copy ALL sequences, tables and functions to "new" schema, so new.old WILL
exist.

I don't see how you can possibly write a function that references a schema
that does not yet exist!

Again, please provide a _working_ example of what you think the problem is.



On Tue, Sep 15, 2015 at 3:22 AM, Jim Nasby <jim.na...@bluetreble.com> wrote:

> On 9/14/15 8:02 PM, Melvin Davidson wrote:
>
>> Actually, on further thought, your example shows that it works correctly
>> because we do want all references to the old schema to be changed to the
>> new schema, since all copies of functions will now reside in the new
>> schema. Otherwise, there is no point of duplicating those functions.
>>
>
> Read my example again:
>
>   SELECT old.field FROM old.old;
>
> That will end up as
>
>   SELECT new.field FROM new.old
>
> Which will give you this error:
>
> ERROR:  missing FROM-clause entry for table "new"
> LINE 1: SELECT new.field FROM new.old;
>
> Even if you could fix that, there's yet more problems you'll run into,
> like if someone has a plpgsql block with the same name as the old schema.
>
> I'm not trying to denigrate the work you and others have put into this
> script, but everyone should be aware that it's impossible to create a
> robust solution without a parser. Unfortunately, you could end up with a
> function that still compiles but does something rather different after the
> move. That makes the script potentially dangerous (granted, the odds of
> this are pretty low).
>
> One thing I think would be very interesting is a parser that preserves
> whitespace and comments. That would allow us to store a parsed version of
> (at least plpgsql and sql) functions. The same technique would also be
> handy for views. This would allow a lot (all?) other renames to propagate
> to functions instead of breaking them (as currently happens).
>
> Another option is supporting some kind of official way to specially
> designate database objects in any procedure language (ie, the @schema@
> syntax that extensions use). That would make it possible to rename properly
> written functions without adverse side effects.
>
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>
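The failure mode Jim quotes can be reproduced without a database. A small Python sketch standing in for the script's SQL replace() (this mimics, not reproduces, the actual function body): rewriting the schema qualifier `old.` also rewrites the column qualifier in `old.field`, leaving a reference to a table "new" that does not exist.

```python
# The naive qualifier rewrite: replace every "old." with "new.".
body = "SELECT old.field FROM old.old;"
rewritten = body.replace("old.", "new.")

# "old.field" (a column qualified by a table alias) is rewritten just like
# "old.old" (a schema-qualified table), which is exactly the bug:
print(rewritten)  # SELECT new.field FROM new.old;
```

Running the rewritten statement would then fail with `ERROR: missing FROM-clause entry for table "new"`, as shown in Jim's message.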



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-14 Thread Melvin Davidson
Jim,

Have you actually tried this, or is it just a theory? AFAIK, the function
will work because only the schema name is changed. So please provide
a full working example of a function that fails and I will attempt a
solution.

On Mon, Sep 14, 2015 at 6:36 PM, Jim Nasby <jim.na...@bluetreble.com> wrote:

> On 9/12/15 9:38 AM, Daniel Verite wrote:
>
>> "seriously flawed" is a bit of a stretch. Most sane developers would not
>>> >have schema names of one letter.
>>> >They usually name a schema something practical, which totally avoids
>>> your
>>> >nit picky exception.
>>>
>> That's confusing the example with the problem it shows.
>>
>> Another example could be:
>> if the source schema is "public" and the function body contains
>> GRANT SELECT on sometable to public;
>> then this statement would be wrongly altered by replace().
>>
>
> Well, the new version actually fixes that. But you could still trip this
> up, certainly in the functions. IE:
>
> CREATE FUNCTION ...
>   SELECT old.field FROM old.old;
>
> That will end up as
>
>   SELECT new.field FROM new.old
>
> which won't work.
>
> My objection is not about some corner case: it's the general
>> idea of patching the entire body of a function without a fully-fledged
>> parser that is dead on arrival.
>>
>
> ISTM that's also the biggest blocker for allowing extensions that refer to
> other schemas to be relocatable. It would be interesting if we had some way
> to handle this inside function bodies, perhaps via something equivalent to
> @extschema@.
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-14 Thread Melvin Davidson
Actually, on further thought, your example shows that it works correctly,
because we do want all references to the old schema to be changed to the
new schema, since all copies of functions will now reside in the new
schema. Otherwise, there is no point in duplicating those functions.

On Mon, Sep 14, 2015 at 8:42 PM, Melvin Davidson <melvin6...@gmail.com>
wrote:

> Jim,
>
> Have you actually tried this, or is it just a theory? AFAIK, the function
> will work because only the schema name is changed. So please provide
> a full working example of a function that fails and I will attempt a
> solution.
>
> On Mon, Sep 14, 2015 at 6:36 PM, Jim Nasby <jim.na...@bluetreble.com>
> wrote:
>
>> On 9/12/15 9:38 AM, Daniel Verite wrote:
>>
>>> "seriously flawed" is a bit of a stretch. Most sane developers would not
>>>> >have schema names of one letter.
>>>> >They usually name a schema something practical, which totally avoids
>>>> your
>>>> >nit picky exception.
>>>>
>>> That's confusing the example with the problem it shows.
>>>
>>> Another example could be:
>>> if the source schema is "public" and the function body contains
>>> GRANT SELECT on sometable to public;
>>> then this statement would be wrongly altered by replace().
>>>
>>
>> Well, the new version actually fixes that. But you could still trip this
>> up, certainly in the functions. IE:
>>
>> CREATE FUNCTION ...
>>   SELECT old.field FROM old.old;
>>
>> That will end up as
>>
>>   SELECT new.field FROM new.old
>>
>> which won't work.
>>
>> My objection is not about some corner case: it's the general
>>> idea of patching the entire body of a function without a fully-fledged
>>> parser that is dead on arrival.
>>>
>>
>> ISTM that's also the biggest blocker for allowing extensions that refer
>> to other schemas to be relocatable. It would be interesting if we had some
>> way to handle this inside function bodies, perhaps via something equivalent
>> to @extschema@.
>> --
>> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
>> Experts in Analytics, Data Architecture and PostgreSQL
>> Data in Trouble? Get it in Treble! http://BlueTreble.com
>>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-12 Thread Melvin Davidson
With thanks to a suggestion by David G. Johnston, I've attached another
revised version
of the function that hopefully eliminates the problem reported by Daniel
Verite.
This version also handles CamelCase schemas and tables better.

If anyone else finds a problem, kindly attach a dump of the schema to
duplicate the problem.


On Sat, Sep 12, 2015 at 10:38 AM, Daniel Verite <dan...@manitou-mail.org>
wrote:

>     Melvin Davidson wrote:
>
> > "seriously flawed" is a bit of a stretch. Most sane developers would not
> > have schema names of one letter.
> > They usually name a schema something practical, which totally avoids your
> > nit picky exception.
>
> That's confusing the example with the problem it shows.
>
> Another example could be:
> if the source schema is "public" and the function body contains
>GRANT SELECT on sometable to public;
> then this statement would be wrongly altered by replace().
>
> My objection is not about some corner case: it's the general
> idea of patching the entire body of a function without a fully-fledged
> parser that is dead on arrival.
>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
-- Function: clone_schema(text, text, boolean)

-- DROP FUNCTION clone_schema(text, text, boolean);

CREATE OR REPLACE FUNCTION clone_schema(
source_schema text,
dest_schema text,
include_recs boolean)
  RETURNS void AS
$BODY$

--  This function will clone all sequences, tables, data, views & functions
--  from any existing schema to a new one
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema', TRUE);

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  func_oid oid;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  column_  text;
  qry  text;
  dest_qry text;
  v_def            text;
  seqval           bigint;
  sq_last_value    bigint;
  sq_max_value bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value bigint;
  sq_cache_value   bigint;
  sq_log_cnt   bigint;
  sq_is_called boolean;
  sq_is_cycled boolean;
  sq_cycled        char(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
  IF FOUND
THEN 
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;

  -- Create sequences
  -- TODO: Find a way to make sure this sequence's owner is the correct table.
  FOR object IN
SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)
  LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

EXECUTE 'SELECT last_value, max_value, start_value, increment_by, 
min_value, cache_value, log_cnt, is_cycled, is_called 
  FROM ' || quote_ident(source_schema) || '.' || 
quote_ident(object) || ';' 
  INTO sq_last_value, sq_max_value, sq_start_value, 
sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, 
sq_is_called ; 

IF sq_is_cycled 
  THEN 
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;

EXECUTE 'ALTER SEQUENCE '   || quote_ident(dest_schema) || '.' || 
quote_ident(object) 
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value 
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH '   || sq_start_value
|| ' RESTART '  || sq_min_value 
|| ' CACHE '|| sq_cache_value 
|| sq_cycled || ' ;' ;

buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
IF include_recs 
THEN
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || 
', ' || sq_is_called || ');' ; 
ELSE
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_start_value 
|| ', ' || sq_is_called || ');' ;
END IF;

  END LOOP;

-- Create tables 
  FOR object IN
SELECT TABLE_NAME::text 
  FROM information_schema.tables 
 WHERE table_schema = quote_ident(source_schema)
   AND table_type = 'BASE TABLE'

  LOOP
buffer := quote_ident(dest_schema) || '.' || quote_ident(object);

Re: [GENERAL] clone_schema function

2015-09-11 Thread Melvin Davidson
Alvaro,

no I haven't updated the wiki (or git). To be honest, I'm retired and I
just don't want to bother learning something new,
but I do enjoy helping others from time to time. I would consider it a favor
if you would do the update for me.

TIA,
Melvin

On Fri, Sep 11, 2015 at 12:30 PM, Alvaro Herrera <alvhe...@2ndquadrant.com>
wrote:

> Melvin Davidson wrote:
> > Here is one more tweak of clone_schema.
>
> Are you updating the wiki to match?  If not (why?), I think at the very
> least you should add a link in the wiki page to this thread.
>
> --
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-11 Thread Melvin Davidson
Thank you very much Alvaro. Now I can go back to being Chief Engineer of
Sleeping Late @ retired. :)

On Fri, Sep 11, 2015 at 2:21 PM, Alvaro Herrera <alvhe...@2ndquadrant.com>
wrote:

> Melvin Davidson wrote:
> > Alvaro,
> >
> > no I haven't updated the wiki (or git). To be honest, I'm retired and I
> > just don't want to bother learning something new,
> > but I do enjoy helping others from time to time. I would consider it a
> favor
> > if you would do the update for me.
>
> I wouldn't want to prevent your enjoyment of learning something new such
> as editing the wiki -- in spite of which I added a link to the wiki.
>
> --
> Álvaro Herrera         http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-11 Thread Melvin Davidson
"seriously flawed" is a bit of a stretch. Most sane developers would not
have schema names of one letter.
They usually name a schema something practical, which totally avoids your
nit picky exception.
However, if you are that concerned about the "serious flaw", you have the
option of using the method
of dumping the schema, editing the dump and reloading. Or, I invite you to
use your great skills and
write a better method.

On Fri, Sep 11, 2015 at 4:06 PM, Daniel Verite <dan...@manitou-mail.org>
wrote:

> Melvin Davidson wrote:
>
> > I've added error checking and verified that it now copies the
> > current sequence values, table data, views and functions.
>
> The code dealing with functions is seriously flawed.
>
> Consider that part:
>  SELECT pg_get_functiondef(func_oid) INTO qry;
>  SELECT replace(qry, source_schema, dest_schema) INTO dest_qry;
>  EXECUTE dest_qry;
>
> It suggests that to duplicate a function in schema A to B,
> every letter A in the entire function definition should be replaced
> by B, garbling everything along the way.
> For example CREATE FUNCTION would become CREBTE FUNCTION,
> DECLARE would become DECLBRE and so on.
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>
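Daniel's point is easy to reproduce outside the database. A quick Python sketch (the schema names "A" and "B" and the function body are hypothetical stand-ins) shows how the blanket replace mangles keywords:

```python
# Daniel's garbling example: replace every occurrence of schema "A" with "B"
# in a reconstructed function definition (schema names are hypothetical).
qry = "CREATE FUNCTION A.f() RETURNS void AS $$ BEGIN RETURN; END; $$ LANGUAGE plpgsql;"

# Naive whole-text replace, as in: replace(qry, source_schema, dest_schema)
dest_qry = qry.replace("A", "B")
print(dest_qry)  # CREATE -> CREBTE, AS -> BS, LANGUAGE -> LBNGUBGE
```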



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-11 Thread Melvin Davidson
Igor & David,

You are correct, I am subject to criticism, However, I have spent a few
days getting this to work as it should and it now does.
Even though the chance of a collision is not zero, it is still low and the
function does work.
I don't mind criticism, but when someone finds a problem, the least they
can do is suggest a fix, as you have David.

I'll try that and test over the weekend. Or I also invite you to submit a
fixed version.

On Fri, Sep 11, 2015 at 4:39 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Fri, Sep 11, 2015 at 4:23 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>> "seriously flawed" is a bit of a stretch. Most sane developers would not
>> have schema names of one letter.
>> They usually name a schema something practical, which totally avoids your
>> nit picky exception.
>> However, if you are that concerned about the "serious flaw", you have the
>> option of using the method
>> of dumping the schema, editing the dump and reloading. Or, I invite you
>> to use your great skills and
>> write a better method.
>>
>>>
>>>  SELECT replace(qry, source_schema, dest_schema) INTO dest_qry;
>>>
>>
> Or maybe you can at least mitigate the potential problem a bit by changing
> this to read:
>
> replace(qry, source_schema || '.', dest_schema || '.') INTO dest_qry; ...
>
> Posting code for public consumption involves accepting constructive
> criticism and even if the example is contrived I'm doubting the possibility
> of collision is as close to zero as you think it may be or as close as it
> could be with a simple re-evaluation of what constraints are imposed on a
> particular sequence of characters being interpreted as a schema.  You do
> still end up with a possible false-positive when you have a
> (column.composite).composite_field expression.
>
> Regular expressions could maybe help here since the leading character is
> limited too... but less so than the trailing character.
>
> David J.
>
>
>
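A sketch of David's mitigation in Python (again with hypothetical one-letter schema names), comparing the dotted-prefix replace with a word-boundary regex:

```python
import re

# Hypothetical one-letter schemas; the function body contains text a blanket
# replace would clobber but a qualified replace leaves alone.
qry = "CREATE FUNCTION a.f() RETURNS int AS $$ SELECT max(val) FROM a.t $$ LANGUAGE sql;"

# David's mitigation: only replace the schema-qualified prefix, i.e.
# replace(qry, source_schema || '.', dest_schema || '.')
dest_qry = qry.replace("a.", "b.")

# A word-boundary regex narrows it further (still not collision-proof: a
# (column.composite).field expression can yield a false positive)
dest_qry_re = re.sub(r"\ba\.", "b.", qry)

print(dest_qry)
print(dest_qry_re)
```

Both variants leave `max(val)` and the SQL keywords untouched, which is the point of qualifying the match.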


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-11 Thread Melvin Davidson
Here is one more tweak of clone_schema.

I've added an include_recs flag.
If FALSE, then no records are copied into the tables from the old_schema
and all sequences start with the minimum value.
If TRUE, then all records are copied and sequences are set to the last
value.


On Thu, Sep 10, 2015 at 11:52 AM, Melvin Davidson <melvin6...@gmail.com>
wrote:

> Yes, however, the documentation would be a lot clearer if it said "copies
> all constraints except foreign keys". I've made this known.
>
> At any rate, I've attached a new version of the function that now does
> copy the foreign keys. Let me know if I missed anything else.
>
> On Thu, Sep 10, 2015 at 9:09 AM, Igor Neyman <iney...@perceptron.com>
> wrote:
>
>>
>>
>>
>>
>> *From:* Melvin Davidson [mailto:melvin6...@gmail.com]
>> *Sent:* Wednesday, September 09, 2015 4:48 PM
>> *To:* Igor Neyman <iney...@perceptron.com>
>> *Cc:* pgsql-general@postgresql.org
>> *Subject:* Re: [GENERAL] clone_schema function
>>
>>
>>
>> Thanks Igor,
>>
>> hmm, apparently the "INCLUDING CONSTRAINTS" option of "CREATE TABLE" has
>> a glitch and only includes the primary key.
>>
>> I also noticed that INCLUDING ALL generates an error, so I'll have to
>> report that also.
>>
>> I'll go eat some crow and work on a fix to add all constraints in the
>> meantime.
>>
>>
>>
>>
>>
>> It’s not a bug.
>>
>> According to docs:
>>
>> “Not-null constraints are always copied to the new table. CHECK
>> constraints will only be copied if INCLUDING CONSTRAINTS is specified;
>> other types of constraints will never be copied.”
>>
>> So, FK constraints are not supposed to be copied.
>>
>>
>>
>> Regards,
>>
>> Igor Neyman
>>
>
>
>
> --
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


clone_schema.sql
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Bugs with like_option in CREATE TABLE

2015-09-10 Thread Melvin Davidson
My apologies.

I made a test case but this time it worked. I must have had a brain fart
while I was working on this yesterday. Well, at least I am getting used to
the taste of eating crow. :)

On Thu, Sep 10, 2015 at 5:56 AM, hubert depesz lubaczewski <
dep...@depesz.com> wrote:

> On Wed, Sep 09, 2015 at 07:51:48PM -0400, Melvin Davidson wrote:
> > *O/S = Windows 10, PostgreSQL 9.2.10, compiled by Visual C++ build 1600,
> > 32-bit*
> > http://www.postgresql.org/docs/9.1/interactive/sql-createtable.html
> >
> > and like_option is:
> > { INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS | INDEXES | STORAGE |
> > COMMENTS | ALL }
> > 1. INCLUDING CONSTRAINTS does not bring over the Foreign Keys
>
> Of course it doesn't. It's documented in the page you linked:
>
> "Not-null constraints are always copied to the new table. CHECK
> constraints will only be copied if INCLUDING CONSTRAINTS is specified;
> other types of constraints will never be copied"
>
> > 2. INCLUDING ALL does not work and generates an ERROR;*
>
> Works for me. Please provide a test case that can be used to see the
> error.
>
> Best regards,
>
> depesz
>
> --
> The best thing about modern society is how easy it is to avoid contact
> with it.
>
> http://depesz.com/
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] clone_schema function

2015-09-10 Thread Melvin Davidson
Yes, however, the documentation would be a lot clearer if it said "copies
all constraints except foreign keys". I've made this known.

At any rate, I've attached a new version of the function that now does copy
the foreign keys. Let me know if I missed anything else.

On Thu, Sep 10, 2015 at 9:09 AM, Igor Neyman <iney...@perceptron.com> wrote:

>
>
>
>
> *From:* Melvin Davidson [mailto:melvin6...@gmail.com]
> *Sent:* Wednesday, September 09, 2015 4:48 PM
> *To:* Igor Neyman <iney...@perceptron.com>
> *Cc:* pgsql-general@postgresql.org
> *Subject:* Re: [GENERAL] clone_schema function
>
>
>
> Thanks Igor,
>
> hmm, apparently the "INCLUDING CONSTRAINTS" option of "CREATE TABLE" has a
> glitch and only includes the primary key.
>
> I also noticed that INCLUDING ALL generates an error, so I'll have to
> report that also.
>
> I'll go eat some crow and work on a fix to add all constraints in the
> meantime.
>
>
>
>
>
> It’s not a bug.
>
> According to docs:
>
> “Not-null constraints are always copied to the new table. CHECK
> constraints will only be copied if INCLUDING CONSTRAINTS is specified;
> other types of constraints will never be copied.”
>
> So, FK constraints are not supposed to be copied.
>
>
>
> Regards,
>
> Igor Neyman
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
-- Function: clone_schema(text, text)

-- DROP FUNCTION clone_schema(text, text);

CREATE OR REPLACE FUNCTION clone_schema(
source_schema text,
dest_schema text)
  RETURNS void AS
$BODY$

--  This function will clone all sequences, tables, data, views & functions 
from any existing schema to a new one
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema');

DECLARE
  src_oid  oid;
  tbl_oid  oid;
  func_oid oid;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  column_  text;
  qry  text;
  dest_qry text;
  v_def            text;
  seqval           bigint;
  sq_last_value    bigint;
  sq_max_value     bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value     bigint;
  sq_cache_value   bigint;
  sq_log_cnt       bigint;
  sq_is_called     boolean;
  sq_is_cycled     boolean;
  sq_cycled        char(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
  IF FOUND
THEN 
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;

  -- Create sequences
  -- TODO: Find a way to make each sequence's owner the correct table.
  FOR object IN
SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)
  LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

EXECUTE 'SELECT last_value, max_value, start_value, increment_by, 
min_value, cache_value, log_cnt, is_cycled, is_called 
  FROM ' || quote_ident(source_schema) || '.' || 
quote_ident(object) || ';' 
  INTO sq_last_value, sq_max_value, sq_start_value, 
sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, 
sq_is_called ; 

IF sq_is_cycled 
  THEN 
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;

EXECUTE 'ALTER SEQUENCE '   || quote_ident(dest_schema) || '.' || 
quote_ident(object) 
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value 
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH '   || sq_start_value
|| ' RESTART '  || sq_min_value 
|| ' CACHE '|| sq_cache_value 
|| sq_cycled || ' ;' ;

buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || ', ' || 
sq_is_called || ');' ;

  END LOOP;

-- Create tables 
  FOR object IN
SELECT TABLE_NAME::text 
  FROM information_schema.tables 
 WHERE table_schema = quote_ident(source_schema)
   AND table_type = 'BASE TABLE'

  LOOP
buffer := dest_schema || '.' || quote_ident(object);
EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE ' || 
quote_ident(source_schema) || '.' || quote_ident(object) 
|| ' INCLUDING ALL)';

-- Insert records from source table

[GENERAL] Bugs with like_option in CREATE TABLE

2015-09-09 Thread Melvin Davidson
*O/S = Windows 10, PostgreSQL 9.2.10, compiled by Visual C++ build 1600,
32-bit*
http://www.postgresql.org/docs/9.1/interactive/sql-createtable.html


and like_option is:

{ INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS | INDEXES | STORAGE |
COMMENTS | ALL }


*1. INCLUDING CONSTRAINTS does not bring over the Foreign Keys
2. INCLUDING ALL does not work and generates an ERROR;*

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Bugs with like_option in CREATE TABLE

2015-09-09 Thread Melvin Davidson
*>1. INCLUDING CONSTRAINTS does not bring over the Foreign Keys*

>Not a bug since the documentation states that the only additional
constraints that are brought over are check constraints.​  Not Null
constraints are always brought
>over.

Hmmm. The documentation would be a lot clearer if it simply stated that
Foreign Keys are NOT brought over.
Anyhow, I've found a workaround for that.


*>2. INCLUDING ALL does not work and generates an ERROR;*

​>For kicks does writing it out in long form work?

No. I tried INCLUDING ALL and just ALL by itself, both create a syntax
error.

On Wed, Sep 9, 2015 at 8:51 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Sep 9, 2015 at 7:51 PM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>>
>>
>>
>> *O/S = Windows 10, PostgreSQL 9.2.10, compiled by Visual C++ build 1600,
>> 32-bit*
>> http://www.postgresql.org/docs/9.1/interactive/sql-createtable.html
>>
>>
>> and like_option is:
>>
>> { INCLUDING | EXCLUDING } { DEFAULTS | CONSTRAINTS | INDEXES | STORAGE |
>> COMMENTS | ALL }
>>
>> *1. INCLUDING CONSTRAINTS does not bring over the Foreign Keys*
>>
>
> ​Not a bug since the documentation states that the only additional
> constraints that are brought over are check constraints.​  Not Null
> constraints are always brought over.
>
> ​"""
> Not-null constraints are always copied to the new table. CHECK constraints
> will only be copied if INCLUDING CONSTRAINTS is specified; other types of
> constraints will never be copied. Also, no distinction is made between
> column constraints and table constraints — when constraints are requested,
> all check constraints are copied.
> """​
>
>
>> *2. INCLUDING ALL does not work and generates an ERROR;*
>>
>
> ​For kicks does writing it out in long form work?
>
> """
> INCLUDING ALL is an abbreviated form of INCLUDING DEFAULTS INCLUDING
> CONSTRAINTS INCLUDING INDEXES INCLUDING STORAGE INCLUDING COMMENTS.
> """
>
> David J.
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] clone_schema function

2015-09-09 Thread Melvin Davidson
I noted there was an inquiry as to how to copy or clone_schema
an entire schema. The standard method for doing that is to
1. pg_dump the schema in plain format
2. edit the dump file and change all occurrences of the schema name
3. reload the dump into the new schema.
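A minimal shell sketch of step 2, using a tiny hand-written stand-in for the dump (in practice step 1 would be something like `pg_dump -n old_schema -f dump.sql mydb`, and step 3 `psql -f dump.sql mydb`; all names here are hypothetical):

```shell
# Step 1 stand-in: a tiny plain-format "dump" written by hand
cat > dump.sql <<'EOF'
CREATE SCHEMA old_schema;
CREATE TABLE old_schema.t (id integer);
EOF

# Step 2: rename the schema; anchoring on "old_schema." and the CREATE SCHEMA
# line limits collisions with unrelated text that happens to contain the name
sed -e 's/CREATE SCHEMA old_schema;/CREATE SCHEMA new_schema;/' \
    -e 's/old_schema\./new_schema./g' dump.sql > renamed.sql

cat renamed.sql
```

Note the anchored patterns: a bare `s/old_schema/new_schema/g` would also rewrite the name wherever it appears in data or comments.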

The attached function is an alternate method for doing that.
It is a revision of the clone_schema function by Emanuel '3manuek'
from https://wiki.postgresql.org/wiki/Clone_schema

Originally, it did not copy views, functions or data from
the source schema despite the claim that it "copies everything".

I've added error checking and verified that it now copies the
current sequence values, table data, views and functions.

As always, use with caution.
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
-- Function: clone_schema(text, text)

-- DROP FUNCTION clone_schema(text, text);

CREATE OR REPLACE FUNCTION clone_schema(
source_schema text,
dest_schema text)
  RETURNS void AS
$BODY$
-- by Emanuel '3manuek' 
-- Revised by Melvin Davidson
--  This function will clone all sequences, tables, data, views & functions 
from any existing schema to a new one
-- SAMPLE CALL:
-- SELECT clone_schema('public', 'new_schema');

DECLARE
  src_oid  oid;
  func_oid oid;
  object   text;
  buffer   text;
  srctbl   text;
  default_ text;
  column_  text;
  qry  text;
  dest_qry text;
  v_def            text;
  seqval           bigint;
  sq_last_value    bigint;
  sq_max_value     bigint;
  sq_start_value   bigint;
  sq_increment_by  bigint;
  sq_min_value     bigint;
  sq_cache_value   bigint;
  sq_log_cnt       bigint;
  sq_is_called     boolean;
  sq_is_cycled     boolean;
  sq_cycled        char(10);

BEGIN

-- Check that source_schema exists
  SELECT oid INTO src_oid
FROM pg_namespace
   WHERE nspname = quote_ident(source_schema);
  IF NOT FOUND
THEN 
RAISE NOTICE 'source schema % does not exist!', source_schema;
RETURN ;
  END IF;

  -- Check that dest_schema does not yet exist
  PERFORM nspname 
FROM pg_namespace
   WHERE nspname = quote_ident(dest_schema);
  IF FOUND
THEN 
RAISE NOTICE 'dest schema % already exists!', dest_schema;
RETURN ;
  END IF;

  EXECUTE 'CREATE SCHEMA ' || quote_ident(dest_schema) ;

  -- Create sequences
  -- TODO: Find a way to make each sequence's owner the correct table.
  FOR object IN
SELECT sequence_name::text 
  FROM information_schema.sequences
 WHERE sequence_schema = quote_ident(source_schema)
  LOOP
EXECUTE 'CREATE SEQUENCE ' || quote_ident(dest_schema) || '.' || 
quote_ident(object);
srctbl := quote_ident(source_schema) || '.' || quote_ident(object);

EXECUTE 'SELECT last_value, max_value, start_value, increment_by, 
min_value, cache_value, log_cnt, is_cycled, is_called 
  FROM ' || quote_ident(source_schema) || '.' || 
quote_ident(object) || ';' 
  INTO sq_last_value, sq_max_value, sq_start_value, 
sq_increment_by, sq_min_value, sq_cache_value, sq_log_cnt, sq_is_cycled, 
sq_is_called ; 

IF sq_is_cycled 
  THEN 
sq_cycled := 'CYCLE';
ELSE
sq_cycled := 'NO CYCLE';
END IF;

EXECUTE 'ALTER SEQUENCE '   || quote_ident(dest_schema) || '.' || 
quote_ident(object) 
|| ' INCREMENT BY ' || sq_increment_by
|| ' MINVALUE ' || sq_min_value 
|| ' MAXVALUE ' || sq_max_value
|| ' START WITH '   || sq_start_value
|| ' RESTART '  || sq_min_value 
|| ' CACHE '|| sq_cache_value 
|| sq_cycled || ' ;' ;

buffer := quote_ident(dest_schema) || '.' || quote_ident(object);
EXECUTE 'SELECT setval( ''' || buffer || ''', ' || sq_last_value || ', ' || 
sq_is_called || ');' ;

  END LOOP;

-- Create tables 
  FOR object IN
SELECT TABLE_NAME::text 
  FROM information_schema.tables 
 WHERE table_schema = quote_ident(source_schema)
   AND table_type = 'BASE TABLE'

  LOOP
buffer := dest_schema || '.' || quote_ident(object);
EXECUTE 'CREATE TABLE ' || buffer || ' (LIKE ' || 
quote_ident(source_schema) || '.' || quote_ident(object) || ' INCLUDING 
CONSTRAINTS INCLUDING INDEXES INCLUDING DEFAULTS)';

-- Insert records from source table
EXECUTE 'INSERT INTO ' || buffer || ' SELECT * FROM ' || 
quote_ident(source_schema) || '.' || quote_ident(object) || ';';
 
FOR column_, default_ IN
  SELECT column_name::text, 
 REPLACE(column_default::text, source_schema, dest_schema) 
FROM information_schema.COLUMNS 
   WHERE table_schema = dest_schema 
 AND TABLE_NAME = object 
 AND column_default LIKE 'nextval(%' || quote_ident(source_schema) || 
'%::regclass)'
LOOP
  EXECUTE 'ALTER TABLE ' || buffer || ' ALTER COLUMN ' || column_ || ' SET 
DEFAULT ' || default_;
END LOOP;

Re: [GENERAL] clone_schema function

2015-09-09 Thread Melvin Davidson
Thanks Igor,

hmm, apparently the "INCLUDING CONSTRAINTS" option of "CREATE TABLE" has a
glitch and only includes the primary key.
I also noticed that INCLUDING ALL generates an error, so I'll have to
report that also.

I'll go eat some crow and work on a fix to add all constraints in the
meantime.

On Wed, Sep 9, 2015 at 3:43 PM, Igor Neyman <iney...@perceptron.com> wrote:

>
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...@postgresql.org] *On Behalf Of *Melvin Davidson
> *Sent:* Wednesday, September 09, 2015 12:31 PM
> *To:* pgsql-general@postgresql.org
> *Subject:* [GENERAL] clone_schema function
>
>
>
>
> I noted there was an inquiry as to how to copy or clone_schema
> an entire schema. The standard method for doing that is to
> 1. pg_dump the schema in plain format
> 2. edit the dump file and change all occurrences of the schema name
> 3. reload the dump into the new schema.
>
> The attached function is an alternate method for doing that.
> It is a revision of the clone_schema function by Emanuel '3manuek'
> from https://wiki.postgresql.org/wiki/Clone_schema
>
> Originally, it did not copy views, functions or data from
> the source schema despite the claim that it "copies everything".
>
> I've added error checking and verified that it now copies the
> current sequence values, table data, views and functions.
>
> As always, use with caution.
> --
>
> *Melvin Davidson*
>
>
>
> I assume you are aware that this script does not produce complete copy of
> the source schema.
>
> Foregn Key constraints are not recreated along with the tables.
>
>
>
> Regards,
>
> Igor Neyman
>
>
>
>
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] view

2015-09-08 Thread Melvin Davidson
>ERROR:  permission denied for relation sub_item

So obviously, the user doing the select on the view (ie: YOU)  does not
have SELECT permission on table sub_item.

On Tue, Sep 8, 2015 at 8:19 AM, Ramesh T <rameshparnandit...@gmail.com>
wrote:

> HI ,
> I have view .when i try to select view ,it had return
>  select * from art;
> ERROR:  permission denied for relation sub_item
> SQL state: 42501
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] table dependencies

2015-09-07 Thread Melvin Davidson
All,
fwiw,  I once wrote a plpgsql function to assist in generating slony set
adds.
It grabs all the tables in pg_class and sorts them by foreign key count.
You can pull the main query logic from it and modify to suit your needs.

I've attached for your convenience.


On Mon, Sep 7, 2015 at 6:27 AM, Gavin Flower <gavinflo...@archidevsys.co.nz>
wrote:

> On 07/09/15 19:44, Raymond O'Donnell wrote:
>
>> On 06/09/2015 22:59, FarjadFarid(ChkNet) wrote:
>>
>>> No worries.
>>>
>>> I found a way.
>>>
>>> Would you share it, for the archives?
>>
>> Ray.
>>
>>
>> I think I can do it relatively simply, in a reasonable general fashion.
>
> if it is of real interest let me know, and I'll see if I can find time to
> try and implement  it.
>
>
> Cheers,
> Gavin
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
-- Function: slony_generate_set_adds(integer, text, integer)

-- DROP FUNCTION slony_generate_set_adds(integer, text, integer);

CREATE OR REPLACE FUNCTION slony_generate_set_adds(integer, text, integer)
  RETURNS void AS
$BODY$

DECLARE

p_set_id    ALIAS FOR $1;
p_schema    ALIAS FOR $2;
p_start_seq ALIAS FOR $3;

v_record    record;
v_schema    TEXT;
v_relname   TEXT;
v_seq   TEXT;
v_prefix_digits INT4;
v_ctr   INT4;

v_msg   TEXT;

BEGIN

v_prefix_digits := 3;
v_ctr   := p_start_seq;

CREATE TEMP TABLE t_slony_set_add
( slonik_command TEXT )
ON COMMIT DROP;

--SELECT p_set_id::text || LPAD(v_ctr::text, v_prefix_digits, '0') INTO v_seq;
SELECT v_ctr INTO v_seq;

-- 2008-06-12
-- Sorting by OID replaced by sort by FK count

FOR v_record IN SELECT n.nspname, t.relname , 0 as count
  FROM pg_class t
  JOIN pg_namespace n ON n.oid = t.relnamespace
 WHERE relkind = 'r'
   AND relname NOT LIKE 'pg_%'
   AND relname NOT LIKE 'sql_%'
   AND n.nspname = p_schema
   AND t.OID NOT IN 
(SELECT conrelid
   FROM pg_constraint
  WHERE contype = 'f'
AND contype <> 'p'
AND contype <> 'c')
UNION
   SELECT n.nspname, t.relname as table, count(c.conname) as count
 FROM pg_class t
 JOIN pg_constraint c ON ( c.conrelid = t.OID AND c.contype = 
'f')
 JOIN pg_namespace n ON n.oid = t.relnamespace
WHERE relkind = 'r'
  AND t.relname NOT LIKE 'pg_%'
  AND t.relname NOT LIKE 'sql_%'
  AND n.nspname != 'rollback'
  AND CASE WHEN p_schema IS NOT NULL 
THEN n.nspname = p_schema
ELSE TRUE
END
GROUP BY n.nspname, t.relname
ORDER BY 3, 2

LOOP

SELECT 'SET ADD TABLE (SET ID=' || p_set_id || ', ORIGIN=1, ID='
|| v_seq || ', FULLY QUALIFIED NAME=' || ''''
|| v_record.nspname || '.' || v_record.relname || '''' || ', comment='
|| ''''
|| v_record.relname || ' fks->' || v_record.count::text || ''');' INTO v_msg;

INSERT INTO t_slony_set_add 
( slonik_command )
VALUES
( v_msg);

RAISE NOTICE '%', v_msg;

v_ctr := v_ctr +1;
--SELECT p_set_id::text || LPAD(v_ctr::text, v_prefix_digits, '0') INTO 
v_seq;
SELECT v_ctr INTO v_seq;

END LOOP;

v_prefix_digits := v_prefix_digits + 1;
--v_ctr := 1;
v_ctr   := p_start_seq;

--SELECT p_set_id::text || LPAD(v_ctr::text, v_prefix_digits, '0') INTO v_seq;
SELECT v_ctr INTO v_seq;

FOR v_record IN SELECT n.nspname, c.relname 
  FROM pg_class c
 LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
 LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
 WHERE c.relkind = 'S'::"char"
   AND CASE WHEN p_schema IS NOT NULL 
THEN n.nspname = p_schema
ELSE TRUE
END
  ORDER BY c.oid

LOOP

SELECT 'SET ADD SEQUENCE (SET ID=' || p_set_id || ', ORIGIN=1, ID='
|| v_seq || ', FULLY QUALIFIED NAME='
|| '''' || v_record.nspname || '.' || v_record.relname || '''' || ', comment='
|| '''' || v_record.relname || ''');' INTO v_msg;

RAISE NOTICE '%', v_msg;
INSERT INTO t_slony_set_add 
( slonik_command )
VALUES
( v_msg);

v_ctr := v_ctr +1;

--SELECT p_set_id::text || LPAD(v_ctr::text, v_prefix_digits, '0') 

Re: [GENERAL] Partitioning and constraint exclusion

2015-09-07 Thread Melvin Davidson
First, what is the PostgresSQL version ??

Next, in postgresql.conf, what is the value of constraint_exclusion ?
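For anyone following along, a minimal psql sketch of the check being asked about (setting names are standard; the table and column come from the question below):

```sql
-- Show the current setting ('partition' is the usual default since 8.4)
SHOW constraint_exclusion;

-- Constraint exclusion for inheritance partitions needs 'partition' or 'on'
SET constraint_exclusion = partition;

-- Re-run the plan to see whether the large child table is still scanned
EXPLAIN SELECT * FROM parent WHERE myuid IN (123, 456, 789);
```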

On Mon, Sep 7, 2015 at 8:55 AM, Jayadevan M <maymala.jayade...@gmail.com>
wrote:

> Hello ,
>
> I have a parent table and 6 child tables (partitions). The child tables
> have check constraints defined in the form
> CHECK (myuid in (123,456,..)).
> myuid is bigint, the constraints for the 6 child tables are definitely
> mutually exclusive. The number of values in the list ranges from 2-10 for 5
> of the child tables. For the 6th child table, the list is 2500+ elements.
> When I try explain/explain analyze for even a simple query like
>
> select * from parent where myuid in (123,456,789)
>
> the child table with 2500+ elements gets always scanned. I have an index
> on the column and that does get used. But why doesn't the planner just use
> constraint exclusion and not go for the index scan? Anyone faced a similar
> issue?
>
> Thanks,
> Jayadevan
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] get first and last row in one sql as two columns

2015-09-02 Thread Melvin Davidson
Try this:

SELECT
  (SELECT <column>
     FROM <table>
    ORDER BY <column> OFFSET 0 LIMIT 1),
  (SELECT <column>
     FROM <table>
    ORDER BY <column> OFFSET (SELECT COUNT(*) - 1 FROM <table>) LIMIT 1)
   FROM <table> LIMIT 1;


On Wed, Sep 2, 2015 at 7:27 PM, Rob Sargent <robjsarg...@gmail.com> wrote:

> On 09/02/2015 05:14 PM, Tom Smith wrote:
>
> Hi:
>
> I need to get the first and last tow in one sql like below
>
> select first(col1), last(col1) from table order by col1
>
> I saw some posting in wiki with a custom function (or C extention)
> to do this.   Is it widely used and reliable?
> https://wiki.postgresql.org/wiki/First/last_(aggregate)
>
> I am wondering why these two functions are not part of postgresql built-in
> functions as it has many use cases
>
> Thanks
>
> If you're ordering by col1, does
>
> select min(col1), max(col1) from table order by col1
>
> not do the trick;
>
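A quick sanity check of Rob's suggestion, with sqlite3 standing in for Postgres:

```python
import sqlite3

# In-memory table standing in for the Postgres one
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1 INTEGER)")
conn.executemany("INSERT INTO t (col1) VALUES (?)", [(5,), (1,), (9,)])

# min/max give the first and last values of col1 in col1 order
first, last = conn.execute("SELECT min(col1), max(col1) FROM t").fetchone()
print(first, last)  # 1 9
```

This only substitutes for first/last when the ORDER BY column is the same column being selected; returning some *other* column of the first/last row still needs the aggregate (or ORDER BY ... LIMIT 1) approach.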



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] bdr admin role

2015-09-02 Thread Melvin Davidson
Oops postgres
On Sep 2, 2015 6:06 PM, "Melvin Davidson" <melvin6...@gmail.com> wrote:

> Initdb as user "postgred", not "postgresql"
> This doc specifies to initdb with the admin user "postgres,"
> http://bdr-project.org/docs/stable/quickstart-instances.html
> but if I do that the supervisor falls over with:
> $ cat bdr5598.log
> LOG:  registering background worker "bdr supervisor"
> LOG:  database system was shut down at 2015-09-02 16:04:45 EDT
> LOG:  starting up replication identifier with ckpt at 0/171EBF8
> LOG:  MultiXact member wraparound protections are now enabled
> LOG:  starting background worker process "bdr supervisor"
> LOG:  autovacuum launcher started
> LOG:  database system is ready to accept connections
> LOG:  Created database bdr_supervisordb (oid=16384) during BDR startup
> LOG:  worker process: bdr supervisor (PID 21666) exited with exit code 1
> FATAL:  role "postgresql" does not exist
> LOG:  starting background worker process "bdr supervisor"
>
> It works if I init with "-U postgresql"
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] bdr admin role

2015-09-02 Thread Melvin Davidson
Initdb as user "postgred", not "postgresql"
This doc specifies to initdb with the admin user "postgres,"
http://bdr-project.org/docs/stable/quickstart-instances.html
but if I do that the supervisor falls over with:
$ cat bdr5598.log
LOG:  registering background worker "bdr supervisor"
LOG:  database system was shut down at 2015-09-02 16:04:45 EDT
LOG:  starting up replication identifier with ckpt at 0/171EBF8
LOG:  MultiXact member wraparound protections are now enabled
LOG:  starting background worker process "bdr supervisor"
LOG:  autovacuum launcher started
LOG:  database system is ready to accept connections
LOG:  Created database bdr_supervisordb (oid=16384) during BDR startup
LOG:  worker process: bdr supervisor (PID 21666) exited with exit code 1
FATAL:  role "postgresql" does not exist
LOG:  starting background worker process "bdr supervisor"

It works if I init with "-U postgresql"




Re: [GENERAL] Detecting autoincrement columns

2015-09-01 Thread Melvin Davidson
Try this:

SELECT *
FROM information_schema.columns
WHERE UPPER(column_default) LIKE 'NEXTVAL%'
ORDER BY 1, 2, 3;

On Tue, Sep 1, 2015 at 6:33 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> "FarjadFarid\(ChkNet\)" <farjad.fa...@checknetworks.com> writes:
> > Can someone please direct me as how to detect columns (serial) with
> > autoincrement option ?
>
> They have a default that depends on a nextval() call.
>
> regards, tom lane
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-31 Thread Melvin Davidson
Thank you to all that contributed to this thread. I now have about a dozen
guidelines
I would like to share. However, since a few individuals felt it was more of
an
opportunity to prove or disprove certain theories, rather than just
contribute additional
good practices, I will not be posting them directly to this thread. Rather,
if anyone  is
interested in obtaining a copy, please just send me a request to my direct
email and I
will gladly reply back with a copy.

Know that these are general guidelines. If you disagree with any of them,
then you
are free to change them to suit your specific requirements. Just please do
not start
another thread to debate the merits or disadvantages of them. If it's more
important
to prove that you are right rather than contribute to help others, then you
are doing
neither.


On Wed, Aug 26, 2015 at 3:13 PM, Gavin Flower <gavinflo...@archidevsys.co.nz
> wrote:

> On 27/08/15 06:59, Raymond O'Donnell wrote:
>
>> On 26/08/2015 19:54, Gavin Flower wrote:
>>
>>> On 27/08/15 00:03, Vincent de Phily wrote:
>>>
>>>> On Wednesday 26 August 2015 10:27:04 Gavin Flower wrote:
>>>>
>>>>> Actually I would suggest standardising on singular names, not JUST
>>>>> because that this the standard I prefer!  :-)
>>>>>
>>>>> But (also) because:
>>>>>
>>>>>1. Singular words tend to be shorter
>>>>>
>>>>>2. plurals are more ambiguous wrt spelling
>>>>>
>>>>>3. there other good reasons, that I've forgotten for now :-(
>>>>>   (but I remember having them!!!)
>>>>>
>>>> Oh, so you name your tables 'fish' and 'crow' instead of 'school' and
>>>> 'murder' ? Such wasted opportunities :p
>>>>
>>>> ???
>>>
>> A school of fish and a murder of crows... wonderfully evocative
>> collective nouns.
>>
>> Ray.
>>
>>
>> Hmm...
>
> Reminds me about a story I was told where a programmer had used names of
> his favourite footballers as labels in an assembler program!  :-)
>
>
> -Gavin
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Need Database Backup

2015-08-30 Thread Melvin Davidson
You need to recreate the database to reload the data into, so

   from the windows command line:
   C:\PostgresDB\bin\createdb.exe -U postgres <your db name>

   Then to restore the data from the sql backup
   C:\PostgresDB\bin\pg_restore.exe -U postgres <the sql backup file>


On Sun, Aug 30, 2015 at 10:06 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 08/30/2015 06:36 AM, Scott Marlowe wrote:

 OK we're gonna need to keep this on the list, as other folks may be a
 much better choice to help you than me (I'm not a Windows guy.)

 OK so we need more info to help. I don't know how to fix the NTLDR
 problem, etc, because it's windows. However, the folks who might be able
 to help need more info.


 What I could find:

 https://support.microsoft.com/en-us/kb/816793


 Specifically, what are you doing to try and load the .sql extension?


 My guess is the OP is using pgAdmin and is trying to restore via the
 restore wizard. That only handles custom format backups and not text(plain)
 formats.

 Are you using a second machine for this?
 Have you made a backup of the files in d:\Program Files\Postgresql and
 put it in a safe place? It's important to NOT mess around with the only
 copy in case things go wrong with it.


 Also, how was Postgres initially installed?

 What exactly HAVE you done to try and move the PostgreSQL directory
 elsewhere? What EXACT errors are you seeing.

 The more information you provide the more we can help. Right now you're
 asking for help, but you're not helping yourself, because you're not
 showing us what's going wrong. More details gets you more answers.




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Need Database Backup

2015-08-30 Thread Melvin Davidson
Adrian
From Murali N Rao 5:43 AM (4 hours ago)

now i have installed new os and installed PGSQL again it working now but
no data


On Sun, Aug 30, 2015 at 10:21 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 08/30/2015 07:13 AM, Melvin Davidson wrote:

 You need to recreate the database to reload the data into so

 from the windows command line:
 C:\PostgresDB\bin\createdb.exe -U postgres your db name;


 That assumes Postgres is running, which at this point seems not to be the
 case or at least is unproven.


 Then to restore the data from the sql backup
 C:\PostgresDB\bin\pg_restore.exe -U postgres the sql backup file


 This will not work if the *.sql is a plain text file.






 --
 Adrian Klaver
 adrian.kla...@aklaver.com




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Need Database Backup

2015-08-30 Thread Melvin Davidson
Good catch Adrian,

Murali,
Please change the restore command to

C:\PostgresDB\bin\psql.exe -U postgres -d <the database name> -f <your
backup file name>


On Sun, Aug 30, 2015 at 10:40 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 08/30/2015 07:27 AM, Melvin Davidson wrote:

 Adrian
  From Murali N Rao 5:43 AM (4 hours ago)

  now i have installed new os and installed PGSQL again it working now
 but no data


 I plead lack of coffee, but where is that message and was there more in it?



 On Sun, Aug 30, 2015 at 10:21 AM, Adrian Klaver
 adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com wrote:

 On 08/30/2015 07:13 AM, Melvin Davidson wrote:

 You need to recreate the database to reload the data into so

  from the windows command line:
  C:\PostgresDB\bin\createdb.exe -U postgres your db name;


 That assumes Postgres is running, which at this point seems not to
 be the case or at least is unproven.


  Then to restore the data from the sql backup
  C:\PostgresDB\bin\pg_restore.exe -U postgres the sql
 backup file


 This will not work if the *.sql is a plain text file.






 --
 Adrian Klaver
 adrian.kla...@aklaver.com mailto:adrian.kla...@aklaver.com




 --
 *Melvin Davidson*
 I reserve the right to fantasize.  Whether or not you
 wish to share my fantasy is entirely up to you.



 --
 Adrian Klaver
 adrian.kla...@aklaver.com




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Execute DDL across multiple servers in an automated manner

2015-08-28 Thread Melvin Davidson
In Linux, you can use Terminator

http://gnometerminator.blogspot.com/p/introduction.html

to execute a script simultaneously across multiple terminals.

Alternatively, you can write a script that loops through host connections
to execute the DDL.

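A minimal sketch of the "loop through host connections" approach (host names, database, and file name are hypothetical; by default it only prints the commands it would run — set PSQL=psql in the environment to actually execute them):

```shell
#!/bin/sh
# Apply one DDL script to several servers in sequence.
# PSQL defaults to a dry-run that prints each command instead of running it.
PSQL="${PSQL:-echo psql}"
HOSTS="db1.example.com db2.example.com"   # hypothetical host list
DDL_FILE="schema_change.sql"              # hypothetical DDL script

for h in $HOSTS; do
  # ON_ERROR_STOP makes psql exit non-zero on the first failed statement,
  # so the loop stops rather than leaving later hosts half-applied.
  $PSQL -h "$h" -U postgres -d appdb -v ON_ERROR_STOP=1 -f "$DDL_FILE" || exit 1
done
```

Running the hosts strictly in sequence (rather than in parallel terminals) makes it obvious which server a failure happened on.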

On Fri, Aug 28, 2015 at 9:35 AM, Anderson Abreu andersonab...@gmail.com
wrote:

 Hi all,

 I use PostgreSQL 9.4

 I'm looking for some package/library/plugin to execute DDL across
 multiple servers in an automated manner.

 Can do this with shellscript.

 I would like to know if this is the only way?


 Thanks & Regards


 ---

 Anderson Abreu
 andersonab...@gmail.com

 The judoka is the one who has: humility to learn what you teach, patience to
 teach what they learned to their fellow man and faith to believe in what you
 don't understand. Learn a little more every day and use it every day for
 the good (Jigoro Kano)




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-25 Thread Melvin Davidson
Adrian,

Stop being so technical. When we/I speak of natural keys, we are talking
about the column
that would NATURALly lend itself as the primary key.
No one ever said a number is not natural, just that there is no need to
duplicate uniqueness
with a separate number.

IOW: If we have an account table, then the account_id or account_no
 would be the primary key. There is no need to have a separate serial
id as the primary key.

 Likewise, if we have a car table, then registration (or vehicle_id) is
preferred.

 EG: Good
 CREATE TABLE car
 (
  registration_no varchar(30) not null,
  car_make   varchar(25) not null,
  model  varchar(15) not null,
  build_year date not null,
  owner  varchar(50),
  CONSTRAINT car_pk PRIMARY KEY (registration_no)
  );

 bad
 CREATE TABLE car
 (
  id  serial not null,
  registration_no varchar(30) not null,
  car_make   varchar(25) not null,
  model  varchar(15) not null,
  build_year date not null,
  owner  varchar(50),
  CONSTRAINT car_pk PRIMARY KEY (id)
  );

The benefit in avoiding arbitrary and simple values for the key is that it
makes
the database design much more logical.

Consider:
SELECT c.registration_no,
   c.car_make,
   p.part_no
   FROM car c
   JOIN parts p ON ( p.registration_no = c.registration_no)
 WHERE c.registration_no = some_var;

 versus:
 SELECT c.registration_no,
   c.car_make,
   p.part_no
   FROM car c
   JOIN parts p ON ( p.id = c.id)
 WHERE c.registration_no = some_var;

 Why join on id when registration_no is better?


On Tue, Aug 25, 2015 at 10:17 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 08/24/2015 08:44 PM, Rob Sargent wrote:


 On Aug 24, 2015, at 6:53 PM, Melvin Davidson melvin6...@gmail.com
 wrote:

 You are right, he was probably talking about FK's. I was just so
 frustrated about people insisting that using ID as the primary key in
 every table is a good idea,
 I didn't bother to reply previously. I stand firm on my belief that the
 primary key should be something meaningful and NOT id just for the sake
 of having a unique numeric key.

 What, pray tell, is the unique natural key of person in any meaningfully
 large domain such as state? Certainly not name + birthdate.  Current
 address isn’t guaranteed. Social isn’t reliable and actually not truly
 unique.


 To add:

 1) Who determined that a number is not natural?

 2) One of the older unique natural keys (genus, species) is not so unique.
 I am a fisheries biologist by training and in my time the 'unique'
 identifier for various fishes has changed. Now that ichthyologists have
 discovered DNA testing, it can be expected there will be even more changes.
 This is even more apparent when you go back in in history. As an example:

 https://en.wikipedia.org/wiki/Rainbow_trout

 Rainbow trout

 Current

 Oncorhynchus mykiss

 Past

 Salmo mykiss Walbaum, 1792
 Parasalmo mykiss (Walbaum, 1792)
 Salmo purpuratus Pallas, 1814
 Salmo penshinensis Pallas, 1814
 Parasalmo penshinensis (Pallas, 1814)
 Salmo gairdnerii Richardson, 1836  --The one I learned.
 Fario gairdneri (Richardson, 1836)
 Oncorhynchus gairdnerii (Richardson, 1836)
 Salmo gairdnerii gairdnerii Richardson, 1836
 Salmo rivularis Ayres, 1855
 Salmo iridea Gibbons, 1855
 Salmo gairdnerii irideus Gibbons, 1855
 Salmo irideus Gibbons, 1855
 Trutta iridea (Gibbons, 1855)
 Salmo truncatus Suckley, 1859
 Salmo masoni Suckley, 1860
 Oncorhynchus kamloops Jordan, 1892
 Salmo kamloops (Jordan, 1892)
 Salmo rivularis kamloops (Jordan, 1892)
 Salmo gairdneri shasta Jordan, 1894
 Salmo gilberti Jordan, 1894
 Salmo nelsoni Evermann, 1908


 All the above point to the same fish and have appeared and appear in
 articles and reports about said fish. Lets not even get into the common
 name situation:).


 Even given that there are models which are made of entities with
 legitimate attributes which per force define a unique instance, I see no
 benefit in avoiding the convenience of an arbitrary and simple value for
 the key.  Is it the overhead of generating and storing one more value per
 tuple that you can’t abide?






 --
 Adrian Klaver
 adrian.kla...@aklaver.com




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-25 Thread Melvin Davidson
Consistency in naming convention.

Good suggestion!

On Tue, Aug 25, 2015 at 12:33 PM, Marc Munro marc.mu...@gmail.com wrote:

 On Sat, 2015-08-22 at 15:15 +, Melvin Davidson wrote:

  I've been searching for a PostgreSQL Developer Best Practices with not
  much luck,
  so I've started my own. At the risk of stirring up a storm of
 controversy,
  I would appreciate additional suggestions and feedback.
 

 You might add: Create all relation names as plurals.

 Or, if your site uses predominantly singular names, make that the
 standard.  Consistency within the site is more important than any
 dogmatic belief about whether singular or plural forms is better.  If
 you don't put it in the standard, someone will eventually create tables
 with names that don't gel with everything else.

 __
 Marc






-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-25 Thread Melvin Davidson
I think a lot of people here are missing the point. I was trying to give
examples of natural keys, but a lot of people are taking great delight
in pointing out exceptions to examples, rather than understanding the point.
So for the sake of argument, a natural key is something that in itself is
unique and the possibility of a duplicate does not exist.
Before ANYONE continues to insist that a serial id column is good, consider
the case where the number of tuples will exceed a bigint.
Don't say it cannot happen, because it can.
However, if you have an alphanumeric field, let's say varchar 50, and it's
guaranteed that it will never have a duplicate, then THAT is a natural
primary
key and beats the hell out of a generic id field.

Further to the point, since I started this thread, I am holding to it and
will not discuss natural primary keys any further.

Other suggestions for good PostgreSQL Developer database (not web app)
guidelines are still welcome.

On Tue, Aug 25, 2015 at 7:34 PM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 08/25/2015 04:23 PM, Jerry Sievers wrote:

 Adrian Klaver adrian.kla...@aklaver.com writes:

 On 08/25/2015 01:56 PM, John R Pierce wrote:

 On 8/25/2015 1:42 PM, Gavin Flower wrote:

 Account numbers are externally generated, and may potentially change.
 Management might suddenly decide that they want to start using the
 year the account started as the first 4 digits, or that the branch
 code should be reflected in it, or something else.  The database
 should be protected from these arbitrary changes.  Hence the
 account_no is not a good candidate for a primary key.



 such practices would raise total havoc on a traditional paper ledger
 accounting system as well as things like pending AR/AP where external
 companies will be referencing your account numbers.


 Agreed, but it happens. When Lowes took over a local hardware
 chain(Eagles) here in Washington state they moved very quickly on
 changing the account numbers. The company I worked for who supplied
 Eagles and then Lowes sat on a check for $22,000 that was sent to us
 in error because the account numbers got switched. We called them when
 we got the check, but it still took them six months to own up to it.


 DOH!

 Next time a screwball outfit sends you a check for $22k erroneously just
 go deposit it :-)


 Well that is what I wanted to do, the owner overruled me:(. Something
 about Lowes having more lawyers than we did. The strange part was we called
 them and told them what had happened and supplied the relevant information
 that explained the mix up. You would have thought us calling to return a
 check that was supposed to be to us would have raised a flag!


 --
 Adrian Klaver
 adrian.kla...@aklaver.com




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-24 Thread Melvin Davidson
David,

Thank you. This is exactly the type of feedback I was looking for. To
answer your question, for now it is a guide for one particular client,
however, if I get enough feedback and contributions, I will revise it and
submit to the PostgreSQL community.

On Mon, Aug 24, 2015 at 2:34 AM, Thomas Kellerer spam_ea...@gmx.net wrote:

 Melvin Davidson schrieb am 22.08.2015 um 17:15:
  I've attached a file with a few starters that although are numbered,
  are in no special order.


  2. End ALL queries with a semi-colon (;)
 EG: SELECT some_column FROM a_table;
 
 Although autocommit is on by default, it is always a good idea to
 signal the query processor that a statement is complete with the semicolon.
 Failure to do so could result in IDLE IN TRANSACTION, which will
 hold locks on the tables involved and prevent other queries from
 being processed.

 Terminating a statement with ; has nothing to do with idle in
 transaction connections.
 It is a mere syntax thing to make the SQL client (e.g. psql) recognize the
 end of the statement.
 If you don't use it, your statement won't be executed in the first place -
 at least with psql
 as it will wait indefinitely until you finish typing the statement. A GUI
 client might simply send
 the wrong statement to the backend.

 If you run with autocommit disabled, ending each statement with a
 semicolon, will not prevent your connection
 from getting into that idle in transaction state. You have to end the
 _transaction_ using commit or
 rollback to avoid that.

 I do agree with the end all queries with a semi-colon rule, but the
 explanation is wrong.

 You should have another rule that says:

End all transactions as soon as possible using commit or rollback.

 Thomas





 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-24 Thread Melvin Davidson
9.
1) What happens if someone mis-types the account-id?
 To correct that, you also need to correct the FK field in the other
dozen tables.
2) What happens when your company starts a new project (or buys a
competitor) and all the new account numbers are alpha-numeric?

I would reply that in good applications, the user DOES NOT type the key,
but rather selects from a drop down list, or the app looks it up / enters
it for them. Besides, it's just as easy to miskey an integer as it is an
alphanumeric value. The point is, do not create two primary keys when one will
do.

On Mon, Aug 24, 2015 at 9:15 AM, Ray Cote rgac...@appropriatesolutions.com
wrote:

 On Sat, Aug 22, 2015 at 11:46 AM, Karsten Hilbert karsten.hilb...@gmx.net
  wrote:

  1. Prefix ALL literals with an Escape
 EG:  SELECT E'This is a \'quoted literal \'';
  SELECT E'This is an unquoted literal';
 
 Doing so will prevent the annoying WARNING:  nonstandard use of
 escape in a string literal


 I'd be concerned that what is missing here is the bigger issue of  Best
 Practice #0: Use Bound Variables.
 The only way I've seen invalid literals show up in SQL queries is through
 the dynamic generation of SQL Statements vs. using bound variables.
 Not using bound variables is your doorway to SQL injection exploits.

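As a side note, the E'' escape syntax that triggers the warning can usually be avoided altogether; a sketch of the standard-conforming alternatives:

```sql
-- Standard-conforming alternatives to E'' escape strings:
SELECT 'This is a ''quoted literal''';    -- double the embedded quote (SQL standard)
SELECT $$This is a 'quoted literal'$$;    -- dollar quoting needs no escaping at all
```

Dollar quoting is particularly handy for function bodies and any literal that contains many quotes or backslashes.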

 9. Do NOT arbitrarily assign an id column to a table as a primary key
 when other columns
 are perfectly suited as a unique primary key.

 ...

 Good example:
 CREATE TABLE accounts
  ( account_id bigint NOT NULL ,


 I would not consider the general use of natural primary keys to be best
 practice.
 Let's assume your account_id field is used as a foreign key in a dozen
 other tables.
 1) What happens if someone mis-types the account-id?
  To correct that, you also need to correct the FK field in the other
 dozen tables.
 2) What happens when your company starts a new project (or buys a
 competitor) and all the new account numbers are alpha-numeric?
 3) Your example shows the id as a bigint, but your rule is not limited to
 integers.
 What if your table is country populations and the primary key is country
 name?
 Now, you have quite large foreign keys (and a country changing its name is
 not unheard of).
 (and let's not even get started on case-sensitivity or character
 encodings).

 --
 Raymond Cote, President
 voice: +1.603.924.6079 email: rgac...@appropriatesolutions.com skype:
 ray.cote





-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-24 Thread Melvin Davidson
What then if it is discovered that the keyed in value was mis-typed?

That is why SQL has UPDATE and DELETE statements. If a primary key is
incorrect,
it can be fixed, be it one method or another.

On Mon, Aug 24, 2015 at 10:04 AM, David G. Johnston 
david.g.johns...@gmail.com wrote:

 On Mon, Aug 24, 2015 at 9:27 AM, Melvin Davidson melvin6...@gmail.com
 wrote:

 9.
 1) What happens if someone mis-types the account-id?
  To correct that, you also need to correct the FK field in the other
 dozen tables.
 2) What happens when your company starts a new project (or buys a
 competitor) and all the new account numbers are alpha-numeric?

 I would reply that in good applications, the user DOES NOT type the key,
 but rather selects from a drop down list, or the app looks it up / enters
 it for them. Besides, it's just as easy to miskey an integer as it is an
 aplha numeric. The point is, do not create two primary pkey's when one will
 do.


  You're missing the point.  The existing Account ID that you refer to is
 apparently externally defined.  Pretend it is a social security number.
 How would one create a new user in your system, and record their
 account_id/social-security-number, without typing it in.  What then if it
 is discovered that the keyed in value was mis-typed?

 ​The point is to not introduce redundant information.  Creating your own
 surrogate identifier in order to avoid using a surrogate identifier value
 created by another system does not introduce redundancy but rather provides
 the system using the primary key control over its generation and, more
 importantly, format.  The highly situational nature of this is why data
 modelling is not something I'd incorporate in a usage document.​

 David J.
 ​




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-24 Thread Melvin Davidson
And again, I am talking about _database_ design, not Web apps. Letting Web
developers design a database to work with their app is a very, Very, VERY
bad idea.
It is far better to let DBAs and database developers design a good
database than to let those apps mold a db into a non-optimum design.

On Mon, Aug 24, 2015 at 12:26 PM, Joshua D. Drake j...@commandprompt.com
wrote:

 On 08/24/2015 08:56 AM, Melvin Davidson wrote:

  The serial key is the default primary key amongst every single web
 development environment in existence.

 Methinks thou doest take too much for granted.

  Yes, serial has its purpose, but I sincerely doubt it is the default
 primary key amongst every single web development environment in existence
 I am not sure where you get your stats from. Probably you are referring
 to Ruby on Rails. IMHO,


 Rails
 Anything that uses Hibernate (Java)
 Django
 Every PHP framework
 Pyramid
 Anything that uses sql-alchemy

 I can go on for miles with this. It is true that a lot of these support
 non-serial keys. It is also true that is not the default.


 JD


 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing I'm offended is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-24 Thread Melvin Davidson
The serial key is the default primary key amongst every single web
development environment in existence.

Methinks thou doest take too much for granted.

Yes, serial has its purpose, but I sincerely doubt it is the default
primary key amongst every single web development environment in existence.
I am not sure where you get your stats from. Probably you are referring to
Ruby on Rails. IMHO, RoR is something which has made it easier to code
Web apps, at the cost of developers not needing to use brain power. In any
case, the idea is to develop good database design, not web apps.

On Mon, Aug 24, 2015 at 11:46 AM, Joshua D. Drake j...@commandprompt.com
wrote:

 On 08/24/2015 07:58 AM, John Turner wrote:

 On Mon, 24 Aug 2015 09:15:27 -0400, Ray Cote



 Point 9 is well-intentioned, but perhaps needs to be
 clarified/rephrased:  Developers should not be creating production-grade
 tables devoid of well-defined business keys, period. That would be
 regardless of whether they're used as de facto primary keys or simply as
 unique keys.


 Although I appreciate your argument, I think we need a little foundation
 in reality. The serial key is the default primary key amongst every
 single web development environment in existence.

 We can make an argument within the doc to why that can be bad, but to
 state that it is wrong is just not going to get you anywhere.

 JD


 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing I'm offended is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-24 Thread Melvin Davidson
Thank you John R. Pierce. Finally someone who understands the purpose of
this thread. Otherwise, next thing you know, we'll have Web apps/developers
designing bra's for milk cows so they'll look better in the field. :)

On Mon, Aug 24, 2015 at 1:05 PM, John R Pierce pie...@hogranch.com wrote:

 On 8/24/2015 9:34 AM, Melvin Davidson wrote:

  And again, I am talking about _database_ design, not Web apps. Letting
  Web developers design a database to work with their app is a very, Very,
  VERY bad idea.
  It is far better to let DBAs and database developers design a good
  database than to let those apps mold a db into a non-optimum design.


 if you let the app drive the database design, you tend to end up with a
 database which is only useful to that single app, and likely breaks when
 that app changes.

 --
 john r pierce, recycling bits in santa cruz




 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-24 Thread Melvin Davidson
ON UPDATE CASCADE ?

I believe he's talking about triggers.

On Mon, Aug 24, 2015 at 8:00 PM, Berend Tober bto...@computer.org wrote:

 Melvin Davidson wrote:

 9.
  1) What happens if someone mis-types the account-id?
   To correct that, you also need to correct the FK field in the
 other dozen tables.
  2) What happens when your company starts a new project (or buys a
 I would not consider the general use of natural primary keys to be
 best practice.
 Let's assume your account_id field is used as a foreign key in a
 dozen other tables.
 1) What happens if someone mis-types the account-id?
   To correct that, you also need to correct the FK field in the
 other dozen tables.



 ... ON UPDATE CASCADE ?

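A minimal sketch of how ON UPDATE CASCADE addresses the mistyped-key concern (table and column names are hypothetical):

```sql
CREATE TABLE accounts (
    account_no varchar(30) PRIMARY KEY
);
CREATE TABLE invoices (
    invoice_id bigint PRIMARY KEY,
    account_no varchar(30) NOT NULL
        REFERENCES accounts (account_no) ON UPDATE CASCADE
);
-- Correcting a mistyped natural key in the parent propagates to every
-- referencing row automatically; no per-table fix-ups or triggers needed.
UPDATE accounts SET account_no = 'ACME-002' WHERE account_no = 'ACME-02';
```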



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-24 Thread Melvin Davidson
You are right, he was probably talking about FK's. I was just so frustrated
about people insisting that using ID as the primary key in every table is
a good idea,
I didn't bother to reply previously. I stand firm on my belief that the
primary key should be something meaningful and NOT id just for the sake
of having a unique numeric key.

On Mon, Aug 24, 2015 at 8:39 PM, CaT c...@zip.com.au wrote:

 On Mon, Aug 24, 2015 at 08:22:17PM -0400, Melvin Davidson wrote:
  On Mon, Aug 24, 2015 at 8:00 PM, Berend Tober bto...@computer.org
 wrote:
   1) What happens if someone mis-types the account-id?
 To correct that, you also need to correct the FK field in
 the
   other dozen tables.
  
   ... ON UPDATE CASCADE ?
 
  I believe he's talking about triggers.

 Huh? Why would you use a trigger when FOREIGN KEY has ON UPDATE CASCADE?

 --
   A search of his car uncovered pornography, a homemade sex aid, women's
   stockings and a Jack Russell terrier.
 -
 http://www.dailytelegraph.com.au/news/wacky/indeed/story-e6frev20-118083480




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] fastes way to retrieve segmented without using loop

2015-08-23 Thread Melvin Davidson
Yes, I agree. I was a bit confused by the term time column. Not to mention,
the O/S and PG version were not given.
If column t is truly of type time, then only 24 rows can be returned
regardless of limit, as in this Universe, there are only 24 hrs in time.
However, if t is a timestamp, that is a whole other animal and the DISTINCT
would have to be adjusted to include date & hour.
Perhaps if we were given a more accurate table structure, a more exact
solution could be provided.

On Sun, Aug 23, 2015 at 6:09 PM, John McKown john.archie.mck...@gmail.com
wrote:

 Melvin's use of DISTINCT ON (...) is superior to my use of DISTINCT(...)
 because it doesn't return the value to your program. I keep forgetting this
 way. I learned it the other way. Old dog + new trick == problem.

 On Sun, Aug 23, 2015 at 5:04 PM, John McKown john.archie.mck...@gmail.com
  wrote:

 On Sun, Aug 23, 2015 at 4:05 PM, Tom Smith tomsmith198...@gmail.com
 wrote:

 Hello:

 I have a time series table,
 using below sql, loop (psque code), I can get one row for each hour


 ​s/psque/pseudo/g;​




  for( H=1; H < 9; H++){
     select  * from table where  t >= H and t < H+1   limit 1
  }

 t (time column) is indexed).

 Is there a better way to use a issue a SINGLE SQL
 with  an array of time start/end pair
 so it will be executed once to send back result, which would be much
 faster
 than issuing sql again and again (even with prepared statement and using
 sql function).

 Thanks in Advance


 Well, I have a bit of a problem if t is a time column. Do you mean a
 time interval? Or do you really mean it is an integer of some sort. I ask
 because H sure looks like a plain old integer to me.

 In any case, if t is an int as opposed to a time interval, then you
 could start with something like:

  SELECT DISTINCT(t), <columns> FROM table WHERE t BETWEEN 1 AND 9;

 But if t really is a time interval in the PostgreSQL sense, and H is
 like the hour portion (H -- Hour, makes sense to this weirdo). And you
 want to select one row of data where the t interval is 1 hour, another
 where the t interval is 2 hours, another where the t interval is 3
 hours, and so on up to an interval of at most 99_999 hours. Then you might
 need something like:

 SELECT DISTINCT(EXTRACT(HOUR FROM t)) AS interval_truncated_to_hour, *
 FROM table WHERE t BETWEEN 1 AND 9;

 I don't know of a way to eliminate the first field from the result. But,
 in reality, I would _never_ issue a SELECT * in a normal program. Only
 ask for the columns you are actually going to need. Because, someday,
 someone is going to change the schema on the table, and your program is
 (im)pure porcine excrement at that point, with no errors returned to it.
 IMO, it is an unshakable rule to ONLY and ALWAYS specify the column
 names. The only exception is if your program actually examines the schema
 of the table before doing a SELECT and dynamically constructs it.






 --

 Schrodinger's backup: The condition of any backup is unknown until a
 restore is attempted.

 Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will
 be.

 He's about as useful as a wax frying pan.

 10 to the 12th power microphones = 1 Megaphone

 Maranatha! 
 John McKown




 --

 Schrodinger's backup: The condition of any backup is unknown until a
 restore is attempted.

 Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

 He's about as useful as a wax frying pan.

 10 to the 12th power microphones = 1 Megaphone

 Maranatha! 
 John McKown




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] fastes way to retrieve segmented without using loop

2015-08-23 Thread Melvin Davidson
Well, I would say the following might work:

SELECT  *
  FROM table
WHERE  t >= 1
LIMIT  8;

On Sun, Aug 23, 2015 at 5:05 PM, Tom Smith tomsmith198...@gmail.com wrote:

 Hello:

 I have a time series table,
 using the below sql loop (pseudo code), I can get one row for each hour

 for( H=1; H < 9; H++){
select  * from table where  t >= H and t < H+1   limit 1
 }

 t (time column) is indexed.

 Is there a better way to issue a SINGLE SQL
 with an array of time start/end pairs
 so it will be executed once to send back the result, which would be much faster
 than issuing the sql again and again (even with a prepared statement and
 using an sql function)?

 Thanks in Advance
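One way to answer the start/end-pair question with a single statement is
sketched below (the table name samples and the range 1..8 are assumptions
matching the pseudo code; CROSS JOIN LATERAL runs the inner one-row lookup
once per generated bucket):

```sql
-- Sketch: replace the client-side loop with one query.
-- generate_series(1, 8) produces the H values; the LATERAL
-- subquery fetches one row per bucket via the index on t.
SELECT b.h, s.*
  FROM generate_series(1, 8) AS b(h)
 CROSS JOIN LATERAL (
       SELECT *
         FROM samples
        WHERE t >= b.h
          AND t <  b.h + 1
        ORDER BY t
        LIMIT 1
     ) AS s;
```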




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] fastes way to retrieve segmented without using loop

2015-08-23 Thread Melvin Davidson
Correction:

SELECT  DISTINCT ON (EXTRACT(HOUR FROM your_time_column))
*
  FROM table
WHERE  t >= 1
LIMIT  8;

On Sun, Aug 23, 2015 at 5:38 PM, Melvin Davidson melvin6...@gmail.com
wrote:

 Well, I would say the following might work:

 SELECT  *
   FROM table
 WHERE  t >= 1
 LIMIT  8;

 On Sun, Aug 23, 2015 at 5:05 PM, Tom Smith tomsmith198...@gmail.com
 wrote:

 Hello:

 I have a time series table,
 using the below sql loop (pseudo code), I can get one row for each hour

 for( H=1; H < 9; H++){
select  * from table where  t >= H and t < H+1   limit 1
 }

 t (time column) is indexed.

 Is there a better way to issue a SINGLE SQL
 with an array of time start/end pairs
 so it will be executed once to send back the result, which would be much
 faster
 than issuing the sql again and again (even with a prepared statement and
 using an sql function)?

 Thanks in Advance




 --
 *Melvin Davidson*
 I reserve the right to fantasize.  Whether or not you
 wish to share my fantasy is entirely up to you.




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-22 Thread Melvin Davidson
Tom,

Thank you for pointing out "run with standard_conforming_strings = ON".
However, that is NOT the problem.
What is occurring is that the developers are sending strings like 'Mr.
M\'vey',
which, if we set standard_conforming_strings = ON, would, and does, result
in errors and the statement failing,
which is a lot less desirable than a simple warning.

Therefore, I am trying to educate the developers in the proper method of
escaping strings,
instead of loading up the error log with annoying warnings.

On Sat, Aug 22, 2015 at 1:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Melvin Davidson melvin6...@gmail.com writes:
  Best Practice would rather be something along the lines:
  Avoid coding in a way that triggers WARNING:
  nonstandard use of escape in a string literal. If you
  cannot comply with this rule document your reasons.

  Thanks for the suggestion. For the past few months I've been dealing with
  an error log that is filled with these warnings simply because
  the developers do not comprehend how to use (or the requirement to use)
  an escape clause.

 IMO best practice in this area is run with standard_conforming_strings =
 ON.
 If you're seeing this warning at all, it's because you aren't doing that,
 which means your code is unnecessarily unportable to other DBMSes.
 Adopting a coding policy of always using E'' would make that worse.

 regards, tom lane




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-22 Thread Melvin Davidson
The correct way to escape a quote is to double quote it:  'Mr. M''vey'

That is a matter of opinion. However, the real problem is the embedded
backslashes, which are
beyond our control at this point. Therefore, the best solution is to use
the E'' escape prefix.
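For reference, the two escaping styles under discussion both produce the
string Mr. M'vey (the first is portable standard SQL; the second is the
PostgreSQL E'' extension):

```sql
SELECT 'Mr. M''vey';   -- standard SQL: double the embedded quote
SELECT E'Mr. M\'vey';  -- PostgreSQL extension: E'' string with backslash escape
```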

On Sat, Aug 22, 2015 at 3:49 PM, Andy Colson a...@squeakycode.net wrote:

 On Sat, Aug 22, 2015 at 1:16 PM, Tom Lane t...@sss.pgh.pa.us mailto:
 t...@sss.pgh.pa.us wrote:

 Melvin Davidson melvin6...@gmail.com mailto:melvin6...@gmail.com
 writes:
  Best Practice would rather be something along the lines:
  Avoid coding in a way that triggers WARNING:
  nonstandard use of escape in a string literal. If you
  cannot comply with this rule document your reasons.

  Thanks for the suggestion. For the past few months I've been
 dealing with
  an error log that is filled with these warnings simply because
  the developers do not comprehend how to use ( or the requirement to
 use)
  an escape clause.

 IMO best practice in this area is run with
 standard_conforming_strings = ON.
 If you're seeing this warning at all, it's because you aren't doing
 that,
 which means your code is unnecessarily unportable to other DBMSes.
 Adopting a coding policy of always using E'' would make that worse.

  regards, tom lane


 On 08/22/2015 02:40 PM, Melvin Davidson wrote:

 Tom,

 Thank you for pointing out "run with standard_conforming_strings = ON".
 However, that is NOT the problem.
 What is occurring is that the developers are sending strings like 'Mr.
 M\'vey',
 which, if we set standard_conforming_strings = ON, would, and does,
 result in errors and the statement failing,
 which is a lot less desirable than a simple warning.

 Therefore, I am trying to educate the developers in the proper method of
 escaping strings,
 instead of loading up the error log with annoying warnings.



 Please don't top post.

 But you are not educating them correctly.  Using E'' isn't right.  The
 correct way to escape a quote is to double it:  'Mr. M''vey'

 -Andy



 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-22 Thread Melvin Davidson
John,

I believe you and I think alike. The truth is, I was brought on as a
consultant to help this client, so I do not have the authority to fire the
developers. Rather, I am trying to help them fix the worst-designed
DB and coding I have seen in 15 years of working with PostgreSQL.
So I've asked for input on additional guidelines to try to help them
understand the right way to do things.

Unfortunately, so far, people seem to fixate on item one of my guidelines,
and I've had no additional suggestions.


On Sat, Aug 22, 2015 at 4:37 PM, John R Pierce pie...@hogranch.com wrote:

 On 8/22/2015 12:40 PM, Melvin Davidson wrote:

 What is occurring is that the developers are sending strings like 'Mr.
 M\'vey',
 which, if we set standard_conforming_strings = ON, would, and does,
 result in errors and the statement failing,
 which is a lot less desirable that a simple warning.


 if your developers were writing invalid C code (or whatever other language
 they program in), would you modify the compiler to accept their invalid
 syntax?   or would you tell the developer to fix their code properly ?   if
 the developers refused, why would you not fire them on the spot for
 incompetence ?



 --
 john r pierce, recycling bits in santa cruz








-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] PostgreSQL Developer Best Practices

2015-08-22 Thread Melvin Davidson
I've been searching for a PostgreSQL Developer Best Practices document
without much luck,
so I've started my own. At the risk of stirring up a storm of controversy,
I would
appreciate additional suggestions and feedback.

As a way of clarifying, generally, a DBA is someone that is responsible for
maintaining the integrity of the database, while a developer is someone
that writes code and SQL statements to update the data.

I've attached a file with a few starters that although are numbered, are in
no special order.

Please keep in mind the attached are purely based on my years of experience
working with developers that are not familiar with PostgreSQL and are not
hard and fast rules, but general guidelines.

Hopefully this will result in something that brings about harmony between
PostgreSQL DBA's and Developers.

-- 
*Melvin Davidson*
PostgreSQL Developer Best Practices

1. Prefix ALL literals with an Escape
   EG:  SELECT E'This is a \'quoted literal \'';
SELECT E'This is an unquoted literal';

   Doing so will prevent the annoying WARNING:  nonstandard use of escape in a 
string literal

2. End ALL queries with a semi-colon (;)
   EG: SELECT some_column FROM a_table;
   
   Although autocommit is on by default, it is always a good idea to signal the 
query processor that a statement is complete with the semicolon. Failure to do 
so could result in IDLE IN TRANSACTION, which will 
hold locks on the tables involved and prevent other queries from being 
processed.

3. Avoid using SELECT * whenever possible. Always specify only the columns 
needed. Doing so will 
reduce the query execution time. 
   
4. Format ALL statements so that they are human readable.
   EG:
   SELECT column1,
          column2
     FROM a_table
    WHERE column2 = 'some_value';

   UPDATE a_table
      SET column1 = 'value1',
          column2 = 'value2'
    WHERE column3 = 'some_value';

   DELETE FROM a_table
    WHERE column1 = 'some_value';

5. Do NOT use CamelCase for object names (schema, table, column, etc.). 
PostgreSQL will 
convert all entries to lowercase by default unless quoted. So unless you 
like having to quote 
objects, it is best to use all lowercase. Users cannot and should not see 
database objects, 
so the names are transparent to them.
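A quick illustration of the case-folding behavior (the table and column
names here are made up for the example):

```sql
CREATE TABLE MyAccounts (AccountId int);  -- unquoted names fold to lowercase
SELECT accountid FROM myaccounts;         -- works: matches the folded names
SELECT "AccountId" FROM "MyAccounts";     -- fails: no such quoted objects exist
```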

6. Although it is legal to use the form column TYPE PRIMARY KEY, it is best
to specify it as a named CONSTRAINT;
   that way YOU get to choose the name, otherwise Postgres assigns a default
name which may not be to your liking.
   EG: , CONSTRAINT accounts_pk PRIMARY KEY (account_id)

7. Use COMMENTs as a way of self documentation.
 EG:
 COMMENT ON TABLE accounts IS ‘Information about customer accounts’;
 COMMENT ON COLUMN accounts.account_id IS ‘Primary key for accounts 
table.’;
 
8. Do not use id as the primary key name in every table. Make it something 
meaningful.
EG: For accounts table, use account_id.
For addresses table, use address_id 

9. Do NOT arbitrarily assign an id column to a table as a primary key when 
other columns
are perfectly suited as a unique primary key.

EG: Bad example:
CREATE TABLE accounts
( id bigint NOT NULL DEFAULT nextval('seq_id'::regclass),
  account_id bigint NOT NULL ,
  
  
  CONSTRAINT accounts_pk PRIMARY KEY (id)
);
 CREATE UNIQUE INDEX accounts_id_idx ON accounts
   USING BTREE (account_id);

Good example:
CREATE TABLE accounts
( accout_id bigint NOT NULL ,
  
  
  CONSTRAINT accounts_pk PRIMARY KEY (account_id)
);

10.  Standardize Index names with the form table_name + col(s) + “idx”
EG: For accounts table:
accounts_name_idx
accounts_city_state_idx



Re: [GENERAL] PostgreSQL Developer Best Practices

2015-08-22 Thread Melvin Davidson
 This is certainly not Best Practice as the warning is
annoying for a reason.

Best Practice would rather be something along the lines:

Avoid coding in a way that triggers WARNING:
nonstandard use of escape in a string literal. If you
cannot comply with this rule document your reasons.

Thanks for the suggestion. For the past few months I've been dealing with
an error log that is filled with these warnings simply because
the developers do not comprehend how to use (or the requirement to use)
an escape clause.

 Good example:
 CREATE TABLE accounts
 ( accout_id bigint NOT NULL ,

Typo.

So noted, I'll correct.

On Sat, Aug 22, 2015 at 11:46 AM, Karsten Hilbert karsten.hilb...@gmx.net
wrote:

 On Sat, Aug 22, 2015 at 11:15:07AM -0400, Melvin Davidson wrote:

  PostgreSQL Developer Best Practices
 
  1. Prefix ALL literals with an Escape
 EG:  SELECT E'This is a \'quoted literal \'';
  SELECT E'This is an unquoted literal';
 
 Doing so will prevent the annoying WARNING:  nonstandard use of
 escape in a string literal

 This is certainly not Best Practice as the warning is
 annoying for a reason.

 Best Practice would rather be something along the lines:

 Avoid coding in a way that triggers WARNING:
 nonstandard use of escape in a string literal. If you
 cannot comply with this rule document your reasons.

  Good example:
  CREATE TABLE accounts
  ( accout_id bigint NOT NULL ,

 Typo.

 Karsten
 --
 GPG key ID E4071346 @ eu.pool.sks-keyservers.net
 E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346






-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.

