Timescale with replication advice

2018-03-07 Thread geoff hoffman
I was wondering if any list members have experience setting up TimeScale with 
streaming replication. 

http://www.timescale.com/ 

I plan to use a collection of 2-5 EC2 instances in AWS and want to set up one 
master -> N slaves, with read-only connection pooling on the slaves and all 
writes going through the single master.

Any advice, gotchas and recommendations before I begin would be most 
appreciated.

Thanks,
Geoff

Re: Enforce primary key on every table during dev?

2018-03-01 Thread geoff hoffman
I found this thread very interesting. 

A pivot table is a perfectly valid use case where a compound unique key on two 
or more columns performs the same function as a primary key without one. 
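
For example (hypothetical posts/tags parent tables), the pivot table can either 
carry a compound primary key on its two foreign-key columns, or skip the primary 
key entirely and enforce the same guarantee with a compound unique constraint:

-- assumes hypothetical posts(id) and tags(id) parent tables
-- Option 1: compound primary key on the two foreign keys
CREATE TABLE post_tags
(
    post_id integer NOT NULL REFERENCES posts (id),
    tag_id integer NOT NULL REFERENCES tags (id),
    PRIMARY KEY (post_id, tag_id)
);

-- Option 2: no primary key; the compound unique constraint does the same job
CREATE TABLE post_tags_no_pk
(
    post_id integer NOT NULL REFERENCES posts (id),
    tag_id integer NOT NULL REFERENCES tags (id),
    UNIQUE (post_id, tag_id)
);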

I’m not nearly as familiar with Postgres as I am with MySQL (which is why I 
recently joined this list)... it may be possible to define a collection of 
columns as a primary key. But if only a unique key is specified in this case, 
everyone should still be OK with the result from a logical design standpoint.

I think Melvin, way up the thread, had the best answer: be the DBA and have a 
review process. Don’t let folks go adding tables as they like.


Re: Creating complex track changes database - challenge!

2018-02-26 Thread geoff hoffman
I would personally do that separately: write a bash script and cron job that does 
a schema dump every hour and, if there are any changes, commits them to your 
schema repository; then you can use GitHub or Bitbucket webhooks to do stuff 
with the changeset when it’s pushed.

https://stackoverflow.com/questions/3878624/how-do-i-programmatically-determine-if-there-are-uncommitted-changes
https://stackoverflow.com/questions/24772591/check-if-git-has-changes-programmatically



> On Feb 26, 2018, at 9:36 AM, Łukasz Jarych  wrote:
> 
> Thank you geoff! 
> 
> I think that I will test this one: http://www.liquibase.org/
> 
> What about setting up a trigger on metadata (a structural table) to detect, 
> for example, whether a column was added?
> 
> Best,
> Jacek
> 
> 2018-02-26 16:43 GMT+01:00 geoff hoffman <ge...@rxmg.com>:
> 
> 
> There’s https://flywaydb.org/
> and http://www.liquibase.org/
> 
> More: https://dbmstools.com/version-control-tools
> 
> Also, if you know PHP, Laravel database migrations have worked great for us!
> https://laravel.com/docs/5.6/migrations
> 
> 
> 
>> On Feb 26, 2018, at 3:44 AM, Łukasz Jarych <jarys...@gmail.com> wrote:
>> 
>> I would like to ask you for help with tracking changes to my database. 
>> I am new to PostgreSQL but I have to learn it quickly because of my boss. 
>> 
>> I have to:
>> 
>> 1. Keep all changes within table including:
>> -adding rows
>> -deleting
>> -editing
>> 
>> 2. Save a table in a specific state and recover that state (i.e., go back to 
>> previous table versions), including comparing tables.
>> 
>> 3. Track all DDL and DML changes with the possibility to go back to a 
>> previous version. 
>> 
> 
> 



Re: Creating complex track changes database - challenge!

2018-02-26 Thread geoff hoffman


There’s https://flywaydb.org/ 
and http://www.liquibase.org/ 

More: https://dbmstools.com/version-control-tools 
 

Also, if you know PHP, Laravel database migrations have worked great for us!
https://laravel.com/docs/5.6/migrations 




> On Feb 26, 2018, at 3:44 AM, Łukasz Jarych  wrote:
> 
> I would like to ask you for help with tracking changes to my database. 
> I am new to PostgreSQL but I have to learn it quickly because of my boss. 
> 
> I have to:
> 
> 1. Keep all changes within table including:
> -adding rows
> -deleting
> -editing
> 
> 2. Save a table in a specific state and recover that state (i.e., go back to 
> previous table versions), including comparing tables.
> 
> 3. Track all DDL and DML changes with the possibility to go back to a 
> previous version. 
> 



Re: Any hope for more specific error message for "value too long..."?

2018-02-16 Thread geoff hoffman
Dang. +1 for that. 

Not that you hadn’t thought of it, and not that it’s actually a viable solution 
in a jiffy, but switch that mess to JSONB and your problems are over. 


Re: postgres not starting

2018-02-15 Thread geoff hoffman
Restore a backup or reinstall Postgres. The default Postgres data directory and 
its system tables need to be there and owned by the user Postgres runs as.


> On Feb 15, 2018, at 5:21 PM, Azimuddin Mohammed  wrote:
> 
> Hello, 
> I am unable to start postgres on one of the servers. 
> I am getting the error below: "HINT: is another postmaster already running on port 
> 5432, if not wait a few seconds and retry"
> I checked the processes; nothing is running with postgres. 
> 
> I think the error was caused because I removed everything under the /data 
> directory before stopping the server. 
> Can someone help? 
> 
> 
> -- 
> 
> Regards,
> Azim
> 



Re: Ensure extension exists

2018-02-02 Thread geoff hoffman
On Feb 2, 2018, at 12:38 PM, Peter Eisentraut wrote:
> 
> On 2/1/18 19:47, Geoffrey Hoffman wrote:
>> I want to use the pgcrypto extension with a particular database. Does it 
>> exist permanently and survive a server reboot? Or if not, how do you ensure 
>> it runs when the server restarts?
> 
> Once an extension is installed, it stays installed, just like any other
> permanent database object such as tables and functions.
> 
>> I ask because in my Docker development environment I have to run it every 
>> time I restart the container, and just want to be prepared for production in 
>> AWS Aurora.
> 
> Maybe you have your development environment set up in a way that it
> blows away your database on each run.
> 

Yep, ok thanks for confirming!
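
For anyone who finds this thread later, the idempotent form is what I’d put in 
the container init script; a minimal sketch:

-- Safe to re-run on every start; does nothing if pgcrypto is already installed
-- in the current database.
CREATE EXTENSION IF NOT EXISTS pgcrypto;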




Working with JSONB data having node lists

2018-01-30 Thread geoff hoffman
JSONB fields are very attractive for our current use, particularly as straight 
key-value pairs in the JSONB data; but we are having trouble finding 
documentation on how to query lists (of scalars or objects) nested within the 
JSONB data.

~~~

I have the table as follows:

CREATE TABLE public.contacts
(
    id integer NOT NULL DEFAULT nextval('contacts_id_seq'::regclass),
    uuid uuid NOT NULL DEFAULT gen_random_uuid(),
    vertical_id integer NOT NULL,
    inboundlog_id integer NOT NULL,
    email character varying(255) COLLATE pg_catalog."default" NOT NULL,
    data jsonb NOT NULL,
    created_at timestamp(0) without time zone,
    updated_at timestamp(0) without time zone,
    CONSTRAINT contacts_pkey PRIMARY KEY (id)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;
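
(In case it matters for the answers below: I'm assuming a GIN index on the data 
column is the usual companion for containment-style queries — something like this?)

-- GIN index to support jsonb containment (@>) lookups on the data column
CREATE INDEX contacts_data_gin ON public.contacts USING gin (data);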

~~~

I have a record as follows:

INSERT INTO contacts
(uuid, vertical_id, inboundlog_id, email, data)
VALUES
(gen_random_uuid(), 1, 1, 'p...@site.com',
 '{"first":"Phil","last":"Peters","subscriptions":[101,202,303]}');


How do I craft a query to find all subscribers to program 202?
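
Would something like the jsonb containment operator (@>) be the right approach 
here? A sketch against the table above:

-- find every contact whose subscriptions array contains 202
SELECT *
FROM contacts
WHERE data @> '{"subscriptions": [202]}';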

~~~

I have another record as follows:

INSERT INTO contacts
(uuid, vertical_id, inboundlog_id, email, data)
VALUES
(gen_random_uuid(), 1, 1, 'b...@domain.com',
 '{"first":"Bob","last":"Baker","downloads":[{"date":"2018-01-01 00:00:00","pubid":123},{"date":"2018-02-02 00:00:00","pubid":456}]}');

How do I craft a query to find all contacts who have downloaded pubid 123?
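
Again, my best guess is containment against the nested objects, along these lines:

-- find every contact with a downloads entry whose pubid is 123
SELECT *
FROM contacts
WHERE data @> '{"downloads": [{"pubid": 123}]}';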

TIA -
Geoff