Re: [GENERAL] Replicate over pgbouncer?

2015-05-21 Thread Joseph Kregloh
You will want to set up your replication user to connect to PostgreSQL
directly. Going through pgBouncer is asking for trouble.

-Joseph Kregloh


Re: [GENERAL] Allowing postgresql to accept 0xff syntax for data types that it makes sense for?

2015-05-21 Thread Tom Lane
Bill Moran wmo...@potentialtech.com writes:
 My other question: is there a specific reason why PostgreSQL doesn't support
 this syntax, aside from "nobody has bothered to add such support"? Because
 I'm considering writing a patch to Postgres and submitting it, but I'm not
 going to go down that path if there's a specific reason why supporting this
 syntax would be _bad_. Personally, I feel like it would be a good thing, as
 it seems like a lot of other database systems support it, and even though
 it's not ANSI, it's pretty much the de facto standard.

How many is "a lot", and do any of the responsible vendors sit on the SQL
standards committee?

One large concern about doing anything like this is whether future
versions of the SQL standard might blindside us with some
not-terribly-compatible interpretation of that syntax.  If we do something
that is also in Oracle or DB2 or one of the other big boys, then we can
probably rely on the assumption that they'll block anything really
incompatible from becoming standardized ;-).

OTOH, if the actual meaning of "a lot" is MySQL, I'd be pretty worried
about this scenario.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread Daniel Torres
Sorry, forgot to tell you what I'm trying to do: I have climate data and want
to obtain mean temperature and total precipitation and that sort of thing per
month and year. I think date_trunc is a good solution, but any other advice
would be very welcome.
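
The date_trunc approach can be sketched as follows; the table and column names (readings, observed_at, temp_c, precip_mm) are made up for illustration:

```sql
-- Monthly climate summary, assuming a hypothetical table:
--   readings(observed_at timestamptz, temp_c numeric, precip_mm numeric)
SELECT date_trunc('month', observed_at) AS month,
       avg(temp_c)                      AS mean_temp,
       sum(precip_mm)                   AS total_precip
FROM readings
GROUP BY 1
ORDER BY 1;
```

date_trunc('month', ...) collapses every timestamp in a month to that month's first instant, so the GROUP BY buckets the rows per month and year at once.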

(I need to read more about time zones, I'm new at using postgresql)

Thank you,
Daniel

2015-05-21 12:45 GMT-05:00 Paul Jungwirth p...@illuminatedcomputing.com:

 "You really shouldn't use WITHOUT TIME ZONE."


 I'd like to know more about this. Can you say why? Are there any articles
 you'd recommend? I'm fond of normalizing all times to UTC and only
 presenting them in a time zone when I know the current perspective. I've
 written about that approach in a Rails context here:

 http://illuminatedcomputing.com/posts/2014/04/timezones/

 I find that this helps me to ignore time zones in most parts of my
 application and cut down on my timezone-related bugs.

 Thanks!

 Paul








Re: [GENERAL] Replicate over pgbouncer?

2015-05-21 Thread Andomar

I'm doubtful.  Why do you think you need such a capability?

For simplicity.  If I can replicate through pgbouncer, I'll need only 
one open port on the machine.  Postgres would just listen on localhost.


If not, I'll have to make Postgres listen on an interface on a different 
port.


-Andomar




Re: [GENERAL] Replicate over pgbouncer?

2015-05-21 Thread Peter Eisentraut
On 5/21/15 12:12 PM, Andomar wrote:
 Hi,
 
 Today I installed pgbouncer.  I added a second installation as a hot
 standby.  Before starting the standby, I configured recovery.conf to
 connect to pgbouncer.
 
 This results in an error message:
 
 Pooler Error: Unsupported startup parameter: replication
 
 Is it possible to replicate over a connection through pgbouncer?

Currently not.





Re: [GENERAL] Replicate over pgbouncer?

2015-05-21 Thread Melvin Davidson
Since you did not specify your O/S and PostgreSQL version, I'll just point
you to the documentation for 9.1.

http://www.postgresql.org/docs/9.1/interactive/different-replication-solutions.html

It looks very much to me like you are trying to use a hammer to turn a
screw. pgBouncer is not designed for replication.  I strongly suggest you
review the URL provided and pick the method that best suits your needs.

On Thu, May 21, 2015 at 2:00 PM, Peter Eisentraut pete...@gmx.net wrote:

 On 5/21/15 12:12 PM, Andomar wrote:
  Hi,
 
  Today I installed pgbouncer.  I added a second installation as a hot
  standby.  Before starting the standby, I configured recovery.conf to
  connect to pgbouncer.
 
  This results in an error message:
 
  Pooler Error: Unsupported startup parameter: replication
 
  Is it possible to replicate over a connection through pgbouncer?

 Currently not.







-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Allowing postgresql to accept 0xff syntax for data types that it makes sense for?

2015-05-21 Thread Brown, Joseph E.
Unsubscribe pgsql-general

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: Thursday, May 21, 2015 1:57 PM
To: Bill Moran
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Allowing postgresql to accept 0xff syntax for data types 
that it makes sense for?





Re: [GENERAL] date with month and year

2015-05-21 Thread Steve Crawford

On 05/21/2015 10:45 AM, Paul Jungwirth wrote:

"You really shouldn't use WITHOUT TIME ZONE."


I'd like to know more about this. Can you say why?


Start by reading about the date and time data types with special 
attention to section 8.5.3:

www.postgresql.org/docs/current/static/datatype-datetime.html

Now go back and read it again and experiment a while until it makes 
sense. As Adrian Klaver so eloquently put it: "If I have learned 
anything about dealing with dates and times, it is that it is a set of 
exceptions bound together by a few rules. Every time you think you have 
the little rascals cornered, one gets away." This is also a very good 
reason to avoid reinventing the wheel.


When you need a break, watch this:
https://www.youtube.com/watch?v=-5wpm-gesOY

His conclusion is a good one: be very happy that someone else has done 
the dirty work for you.


The Ruby article does make one good point, which is that we are talking 
about what they call an "instant", or what I like to refer to as a "point 
in time". The point in time is actually a better way of thinking of 
timestamp with time zone, since timestamp with time zone does not 
actually store any time-zone information: it stores a point in time that 
can be manipulated in the time zone of your choosing, whereas timestamp 
without time zone is not a point in time and must be combined with other 
information for proper manipulation.


The article does also display a couple attitudes that I feel are 
especially rampant in the web-development community. The first is that 
web developers shouldn't become educated about the capabilities of a 
database but rather use the database as a dumb data-store and redo 
everything themselves (often this includes an utter failure to use the 
data-integrity capabilities of the database).


The second is the assumption that they are the only users of the 
database and that nobody will ever access the data except through their 
custom-written Ruby/PHP/Perl/Python code and that no other programming 
language will ever be used. Woe be to the poor slob who has to deal with 
ad-hoc queries, analytics platforms or reporting systems that weren't so 
brilliantly reinvented or who wants to use range-types or other nice 
PostgreSQL features.


Internally PostgreSQL stores timestamp with time zone in UTC, but that 
is entirely irrelevant. What is relevant is that you can provide an 
instant/point in time in whatever time-zone representation you want 
and get it back the same way. Want to use a Unix epoch in your code? Go 
ahead:

extract(epoch from yourtstzcol)
abstime(yourepochint)

Want to assume everything is UTC? No problem:
set timezone to 'UTC';

Then you can reinvent wheels to your heart's content without wrecking 
the ability to easily use other tools.


By the way, use full timezone names to avoid ambiguity. I don't know 
what Ruby cooked up but PostgreSQL uses industry-standard names:

select * from pg_timezone_names;

Your original question had to do with month/year. You will have to 
define this for your use-case but beware that it won't necessarily get 
you away from time-zone issues as the month ticks over on a zone-by-zone 
basis.


Also note that time-intervals can be a source of interesting 
side-effects. Operator precedence is important. For example, what is one 
month? 28-days? 29? 30? 31? Every system must make a judgment call. Add 
a month to January 31 and you will get February 28. But add/subtract a 
month from February 28 and you get January 28/March 28. So you can 
create a query that takes a date, adds a month and subtracts a month and 
results in a different date. There is nothing to do here but to read the 
docs and try things.


There are similar issues when crossing DST boundaries. If I want to push 
something out a day in my time zone on the day that DST changes, I can do 
it easily and understand that PostgreSQL will handle the extra/missing 
hour. Or I can use an explicit increment of '24 hours' if that is what I 
want. There are no extra steps of converting the timestamp without time 
zone to UTC, converting that to the desired local zone, doing the 
calculations, then converting back to UTC and back to timestamp without 
time zone, all the while potentially introducing an easy error such as 
doing things in the wrong order or checking for the DST changeover in 
the wrong time zone.


Cheers,
Steve



[GENERAL] Allowing postgresql to accept 0xff syntax for data types that it makes sense for?

2015-05-21 Thread Bill Moran

I'm working on a project converting a bunch of code from another database
system to PostgreSQL.

One of the issues is that the previous system accepted integers and binary
data in the same hex format as C ... i.e. 0xff

I understand that the proper way to handle this in postgres is x'ff', but
the problem is that a large number of #define values use the 0xff syntax
and these macros are used both in C code, as well as in SQL. The simple
fact is that a LOT of code does this, and correcting it all and ensuring
that the modified code is correct is a BIG job. Just to clarify some of
the complexity: there are about 80,000 lines of PL/PGSQL code that contain
these macros; the code is then run through the C preprocessor to substitute
actual values for them before being loaded into Postgres.
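
For reference, a quick illustration of the two syntaxes in the PostgreSQL versions current at the time (the second statement's behavior is the subtle part):

```sql
-- Supported: a bit-string literal, castable to integer
SELECT x'ff'::int;        -- 255

-- Not hex in PostgreSQL: lexed as the integer 0 followed by
-- an implicit column alias "xff", so it quietly returns 0
SELECT 0xff;
```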

Obviously, there are many options for fixing this. One of those options is
modifying PostgreSQL to accept the 0xff syntax ... and evaluating that
option is the reason for my post.

So, one of my questions is: does anyone have an existing simple answer on
how to fix this?

My other question: is there a specific reason why PostgreSQL doesn't support
this syntax, aside from "nobody has bothered to add such support"? Because
I'm considering writing a patch to Postgres and submitting it, but I'm not
going to go down that path if there's a specific reason why supporting this
syntax would be _bad_. Personally, I feel like it would be a good thing, as
it seems like a lot of other database systems support it, and even though
it's not ANSI, it's pretty much the de facto standard.

-- 
Bill wmo...@potentialtech.com





Re: [GENERAL] About COPY command (and probably file fdw too)

2015-05-21 Thread Nicolas Paris
Hi,

To me this would be great. Why not add the ability to restrict lines too?
COPY stafflist (userid, username, staffid)
FROM 'myfile.txt'
WITH (FORMAT text, DELIMITER E'\t', COLUMNS (1, 2, 7),
LINES(2:1000,2000:3000), ENCODING 'windows-1250')
=> a subset of the full data.



2015-05-21 22:25 GMT+02:00 Stefan Stefanov stefanov...@abv.bg:

 Hi,

 Maybe I need to clarify a little.
 The suggested option “[SKIP] COLUMNS columnslist”  would contain
 columns' positions in the file so that only some of the columns in a text
 file would be read into a table.
 Example: copy the first, second and seventh columns from myfile.txt into
 table stafflist. myfile.txt has many columns.
 COPY stafflist (userid, username, staffid)
 FROM 'myfile.txt'
 WITH (FORMAT text, DELIMITER E'\t', COLUMNS (1, 2, 7), ENCODING
 'windows-1250')

 BR, Stefan



 -------- Original message --------
 From: Nicolas Paris nipari...@gmail.com
 Subject: Re: [GENERAL] About COPY command (and probably file fdw too)
 To: Stefan Stefanov stefanov...@abv.bg
 Sent: 20.05.2015 23:21


   2015-05-20 22:16 GMT+02:00 Stefan Stefanov stefanov...@abv.bg:

   Hi,

  I have been using COPY .. FROM a lot these days for reading in tabular
 data and it does a very good job.  Still there is an inconvenience when a
 (large) text file contains more columns than the target table or the
 columns’ order differs. I can imagine three ways round and none is really
 nice -
  - mount the file as a foreign table with all the text file’s columns then
 insert into the target table a select from the foreign table;
  - create an intermediate table with all the text file’s columns, copy
 into it from the file then insert into the target table and finally drop
 the intermediate table when no more files are expected;
  - remove the unneeded columns from the file with a text editor prior to
 COPY-ing.
  I think that this is happening often in real life and therefore have a
 suggestion to add this option “[SKIP] COLUMNS columnslist”  to the WITH
 clause of COPY .. FROM. It may be very useful in file fdw too.
  To be able to re-arrange columns’ order would come as a free bonus for
 users.

  Sincerely,
  Stefan Stefanov




   ​Hi,

  I guess it already does (from documentation):

 COPY table_name [ ( column_name [, ...] ) ]
 FROM { 'filename' | STDIN }
 [ [ WITH ] ( option [, ...] ) ]

  Then you can order the column_name as the source file has.​
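
A sketch of the documented column-list form, using the thread's example names; note that it can reorder target columns, but it cannot skip columns that exist only in the file, which is the limitation Stefan is pointing at:

```sql
-- Works when the file has exactly these three tab-separated columns,
-- in file order staffid, userid, username:
COPY stafflist (staffid, userid, username)
FROM '/tmp/myfile.txt'
WITH (FORMAT text, DELIMITER E'\t');
-- A file with extra columns still fails with
-- "extra data after last expected column".
```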




Re: [GENERAL] Unit tests and foreign key constraints

2015-05-21 Thread Dave Owens

 I know some say your unit tests shouldn't touch the DB but the more full
 stack tests I have, the better I sleep at night :-))


Unit tests really should be about testing individual bits of logic.  Does a
single method do the desired thing, and not the undesired thing...
Ideally, your data access layer should be interchangeable, ie: use a real
database record in production, but use a mock database record for unit
tests.

I would consider database access to be an integration test, something that
you run periodically but not at every commit.

I wondered if anyone else has run into this problem and found a good
 strategy to mitigate it.


Shouldn't failing to insert due to FK violation be considered an
integration test failure?  You may want to beef up your tests to make sure
the necessary rows get inserted in the right order.  Another option would
be to automate the creation of a complete set of test data... using psql to
load a dump, or possibly CREATE DATABASE testdbcopy WITH TEMPLATE testdb.

Integration tests can be costly to maintain with little ROI, tread
carefully...

-Dave


Re: [GENERAL] About COPY command (and probably file fdw too)

2015-05-21 Thread Melvin Davidson
I understand what you want with regards to skipping columns in input, but
rather than wait to see if that feature is added to a future version of
PostgreSQL, probably the best work around is to

1. CREATE an intermediate table with all columns in the input text file.
2. COPY into the intermediate table.
3. INSERT into your table:
   SELECT col1, col2, coln FROM the intermediate table.
4. TRUNCATE the intermediate table and repeat steps 2-4 as needed.
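
The steps above can be sketched as follows; staff_raw and the filler column names are hypothetical:

```sql
-- 1. Intermediate table mirroring the file's full layout
CREATE TABLE staff_raw (userid int, username text, col3 text, col4 text,
                        col5 text, col6 text, staffid int);

-- 2. Load the whole file
COPY staff_raw FROM '/tmp/myfile.txt' WITH (FORMAT text, DELIMITER E'\t');

-- 3. Keep only the wanted columns
INSERT INTO stafflist (userid, username, staffid)
SELECT userid, username, staffid FROM staff_raw;

-- 4. Reset for the next file
TRUNCATE staff_raw;
```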



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Unit tests and foreign key constraints

2015-05-21 Thread Martijn van Oosterhout
On Thu, May 21, 2015 at 01:33:46PM -0700, Dave Owens wrote:
 
  I know some say your unit tests shouldn't touch the DB but the more full
  stack tests I have, the better I sleep at night :-))
 
 
 Unit tests really should be about testing individual bits of logic.  Does a
 single method do the desired thing, and not the undesired thing...
 Ideally, your data access layer should be interchangeable, ie: use a real
 database record in production, but use a mock database record for unit
 tests.

Nice in theory. But if you use Postgres features like timestamptz
calculations and hstore, it's generally way easier to run your unit
tests on an actual PostgreSQL database.  Otherwise you're going to spend
all your time working around the fact that your mock database is not
the real thing (and running into bugs in your emulation layer).

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [GENERAL] Unit tests and foreign key constraints

2015-05-21 Thread Martijn van Oosterhout
On Thu, May 21, 2015 at 12:39:01PM -0700, Andy Chambers wrote:
 Hey All,
 
 I've started trying to use foreign key constraints in my schema but it
 seems to make it more difficult to write unit tests that touch the database
 because each test now requires more setup data to satisfy the foreign key
 constraint. (I know some say your unit tests shouldn't touch the DB but the
 more full stack tests I have, the better I sleep at night :-))
 
 I wondered if anyone else has run into this problem and found a good
 strategy to mitigate it. I thought I might be able to make these
 constraints deferred during a test run since I have automatic rollback
 after each test but even after set constraints all deferred, I still got
 a foreign key violation during my test run if the test tries to insert data
 with a non-existent foreign key.

Foreign keys aren't deferrable by default, you have to create them that
way...
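
A minimal sketch of a deferrable foreign key and deferred checking within a test transaction (table names are invented for illustration):

```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (id int PRIMARY KEY,
                     parent_id int REFERENCES parent (id)
                               DEFERRABLE INITIALLY IMMEDIATE);

BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT INTO child  VALUES (1, 42);  -- no error yet: the check waits for commit
INSERT INTO parent VALUES (42);     -- satisfies the constraint in time
COMMIT;                             -- succeeds; a missing parent would fail here
```

Without DEFERRABLE on the constraint, SET CONSTRAINTS ALL DEFERRED silently has no effect on it, which matches the behavior Andy observed.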

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 He who writes carelessly confesses thereby at the very outset that he does
 not attach much importance to his own thoughts.
   -- Arthur Schopenhauer




Re: [GENERAL] date with month and year

2015-05-21 Thread Thomas Kellerer

Brian Dunavant wrote on 21.05.2015 21:51:

It's probably worth noting that both the Ruby 'best practice' AND
Postgres have a failure case when dealing with future dates precisely
because they are storing the data as UTC with a time zone.  This is
one case where storing the data WITHOUT TIME ZONE would actually save
your bacon.


Postgres does not store the time zone. When storing a timestamp with time
zone, it is normalized to UTC based on the timezone of the client. When you
retrieve it, it is adjusted to the time zone of the client.
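
That normalization is easy to see in a session; the zone names here are arbitrary examples:

```sql
CREATE TEMP TABLE t (ts timestamptz);

SET timezone = 'Europe/Berlin';
INSERT INTO t VALUES ('2015-05-21 12:00:00');  -- read as Berlin local time

SET timezone = 'UTC';
SELECT ts FROM t;  -- same stored instant, displayed as 2015-05-21 10:00:00+00
```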










Re: [GENERAL] date with month and year

2015-05-21 Thread Karsten Hilbert
 Anyway, I agree that you have to store the time zone *somewhere*, and I 
 suppose that's the reason Joshua remarked that you really shouldn't use 
 WITHOUT TIME ZONE. And often a time has one perspective that is 
 canonical or preferred, e.g. the time zone of the user who created 
 the object. And in that case WITH TIME ZONE gives you a convenient place 
 to store that.

No it doesn't.

WITH TIME ZONE simply means that whatever the user sends to
the database is meant to represent that time zone.

As far as PostgreSQL is concerned it will convert that
to UTC, store UTC

   AND THROW AWAY THE TIMEZONE INFORMATION

While that's correct in mathematical terms (the stored instant-in-time
is *known* to be good at UTC) it does throw away information, namely
the storing-client-timezone data. If you want to retain that (rather
than convert UTC to the retrieving client's timezone) you'll have to
store that yourself in an extra field.

 I think I still prefer a more relativistic approach 
 where times have no preferred perspective,

That's meaningless. Time has, by its very definition, a perspective. It's
just that for canonical times people usually agree on storing the
perspective UTC. Other than that it would be akin to a byte string
without encoding -- you can't tell what it means.

Karsten




Re: [GENERAL] date with month and year

2015-05-21 Thread Brian Dunavant
On Thu, May 21, 2015 at 5:27 PM, Thomas Kellerer spam_ea...@gmx.net wrote:
 Postgres does not store the time zone. When storing a timestamp with time
 zone, it is normalized to UTC based on the timezone of the client. When you
 retrieve it, it is adjusted to the time zone of the client.


Sorry, I misspoke.  Thank you for correcting it.  It is storing it as
UTC time zone.  The rest of my post still applies.  You will get the
wrong wall-clock time for the future date because it is stored as UTC
and the conversion rules will have changed giving you a different time
when you convert it back to the local time zone.




Re: [GENERAL] Unit tests and foreign key constraints

2015-05-21 Thread Andy Chambers
On Thu, May 21, 2015 at 1:34 PM, Martijn van Oosterhout klep...@svana.org
wrote:

 On Thu, May 21, 2015 at 12:39:01PM -0700, Andy Chambers wrote:
  Hey All,
 
  I've started trying to use foreign key constraints in my schema but it
  seems to make it more difficult to write unit tests that touch the
 database
  because each test now requires more setup data to satisfy the foreign key
  constraint. (I know some say your unit tests shouldn't touch the DB but
 the
  more full stack tests I have, the better I sleep at night :-))
 
  I wondered if anyone else has run into this problem and found a good
  strategy to mitigate it. I thought I might be able to make these
  constraints deferred during a test run since I have automatic rollback
  after each test but even after set constraints all deferred, I still
 got
  a foreign key violation during my test run if the test tries to insert
 data
  with a non-existent foreign key.

 Foreign keys aren't deferrable by default, you have to create them that
 way...


Ah that's what I was missing. Thanks!

--
Andy


Re: [GENERAL] About COPY command (and probably file fdw too)

2015-05-21 Thread Stefan Stefanov
Hi,

Maybe I need to clarify a little.
The suggested option “[SKIP] COLUMNS columnslist” would contain columns'
positions in the file so that only some of the columns in a text file would
be read into a table.
Example: copy the first, second and seventh columns from myfile.txt into
table stafflist. myfile.txt has many columns.
COPY stafflist (userid, username, staffid)
FROM 'myfile.txt'
WITH (FORMAT text, DELIMITER E'\t', COLUMNS (1, 2, 7), ENCODING 'windows-1250')

BR,
Stefan


-------- Original message --------
From: Nicolas Paris nipari...@gmail.com
Subject: Re: [GENERAL] About COPY command (and probably file fdw too)
To: Stefan Stefanov stefanov...@abv.bg
Sent: 20.05.2015 23:21


 2015-05-20 22:16 GMT+02:00 Stefan Stefanov stefanov...@abv.bg:

  Hi,

  I have been using COPY .. FROM a lot these days for reading in tabular
 data and it does a very good job. Still there is an inconvenience when a
 (large) text file contains more columns than the target table or the
 columns’ order differs. I can imagine three ways round and none is really
 nice -
  - mount the file as a foreign table with all the text file’s columns
 then insert into the target table a select from the foreign table;
  - create an intermediate table with all the text file’s columns, copy
 into it from the file then insert into the target table and finally drop
 the intermediate table when no more files are expected;
  - remove the unneeded columns from the file with a text editor prior to
 COPY-ing.
  I think that this is happening often in real life and therefore have a
 suggestion to add this option “[SKIP] COLUMNS columnslist” to the WITH
 clause of COPY .. FROM. It may be very useful in file fdw too.
  To be able to re-arrange columns’ order would come as a free bonus for
 users.

  Sincerely,
  Stefan Stefanov


 Hi,

 I guess it already does (from documentation):

 COPY table_name [ ( column_name [, ...] ) ]
 FROM { 'filename' | STDIN }
 [ [ WITH ] ( option [, ...] ) ]

 Then you can order the column_name as the source file has.


Re: [GENERAL] date with month and year

2015-05-21 Thread Karsten Hilbert
 It's probably worth noting that both the Ruby 'best practice' AND
 Postgres have a failure case when dealing with future dates precisely
 because they are storing the data as UTC with a time zone.  This is
 one case where storing the data WITHOUT TIME ZONE would actually save
 your bacon.
 
 From the postgres docs: "For times in the future, the assumption is
 that the latest known rules for a given time zone will continue to be
 observed indefinitely far into the future."
 
 Imagine scheduling a meeting for a certain time a few years from now.
  This will be stored as UTC + time zone.   A year later, that
 government decides to change the time zone rules for their country.
 Your operating system will get the new timezone data in an update (as
 it should).  However when the meeting comes around, you're going to be
 early/late because the wall time that you get converting back from
 UTC+time zone is no longer the time that you were supposed to have
 been at the meeting.   If you had stored that future date as a
 timestamp WITHOUT time zone you would have still been on-time.
 
 This is only an issue for future dates, not past ones.
 
 -Brian Dunavant
 (time is hard, so if I'm wrong anywhere here, someone please correct me)

You are wrong (or me, but I'll try).

 Imagine scheduling a meeting for a certain time a few years from now.
  This will be stored as UTC + time zone.

No it won't. It will store as UTC but will not store any timezone information
(apart from the fact that it knows that what is stored on disk is converted
to UTC from what the client sent in for storage).

You are right in the following aspect:

- client sends in NOW at HERE
- server knows HERE = UTC+2
- hence NOW_UTC = NOW - 2
- server stores NOW_UTC
- 2 years pass
- government at HERE says that from today on HERE = UTC + 4
- 2 years pass
- client retrieves at HERE
- server knows HERE = UTC + 4
- server also knows that HERE used to mean UTC + 2
- but server can not derive what HERE meant when NOW was stored ...
- server converts stored NOW_UTC to HERE by doing NOW_UTC + 4
- client receives NOW_HERE but this became NOW - 2 + 4

IOW, the server would need to know what HERE meant when now
was stored. This can only be solved by tracking insertion/update
timestamps.

Karsten




[GENERAL] Strange replication problem - segment restored from archive but still requested from master

2015-05-21 Thread Piotr Gasidło
Got strange problem. Unable to repeat, but got logs.

Simple master-slave using streaming replication.
Master is running. Slave is down.
Segment 00044C4D0090 was successfully archived and sent
from master to slave.

Now I've started slave, and:

May 21 21:23:37 d8 postgres[50645]: [3-1] 2015-05-21 21:23:37.033 CEST
@ 50645   LOG:  database system was shut down in recovery at
2015-05-21 21:22:03 CEST
May 21 21:23:37 d8 postgres[50645]: [4-1] 2015-05-21 21:23:37.034 CEST
@ 50645   LOG:  entering standby mode
May 21 21:23:37 d8 postgres[50645]: [5-1] 2015-05-21 21:23:37.058 CEST
@ 50645   LOG:  restored log file 00044C4D0088 from
archive
May 21 21:23:37 d8 postgres[50645]: [6-1] 2015-05-21 21:23:37.120 CEST
@ 50645   LOG:  redo starts at 4C4D/88493B50
May 21 21:23:37 d8 postgres[50645]: [7-1] 2015-05-21 21:23:37.226 CEST
@ 50645   LOG:  restored log file 00044C4D0089 from
archive
May 21 21:23:37 d8 postgres[50645]: [8-1] 2015-05-21 21:23:37.426 CEST
@ 50645   LOG:  restored log file 00044C4D008A from
archive
May 21 21:23:37 d8 postgres[50645]: [9-1] 2015-05-21 21:23:37.750 CEST
@ 50645   LOG:  restored log file 00044C4D008B from
archive
May 21 21:23:38 d8 postgres[50645]: [10-1] 2015-05-21 21:23:38.376
CEST @ 50645   LOG:  restored log file 00044C4D008C from
archive
May 21 21:23:38 d8 postgres[50645]: [11-1] 2015-05-21 21:23:38.690
CEST @ 50645   LOG:  restored log file 00044C4D008D from
archive
May 21 21:23:38 d8 postgres[50645]: [12-1] 2015-05-21 21:23:38.855
CEST @ 50645   LOG:  restored log file 00044C4D008E from
archive
May 21 21:23:39 d8 postgres[50645]: [13-1] 2015-05-21 21:23:39.275
CEST @ 50645   LOG:  restored log file 00044C4D008F from
archive
May 21 21:23:39 d8 postgres[50645]: [14-1] 2015-05-21 21:23:39.654
CEST @ 50645   LOG:  restored log file 00044C4D0090 from
archive
May 21 21:23:40 d8 postgres[50645]: [15-1] 2015-05-21 21:23:40.222
CEST @ 50645   LOG:  consistent recovery state reached at
4C4D/90FFF9C8
May 21 21:23:40 d8 postgres[50644]: [3-1] 2015-05-21 21:23:40.222 CEST
@ 50644   LOG:  database system is ready to accept read only
connections
May 21 21:23:40 d8 postgres[50645]: [16-1] 2015-05-21 21:23:40.223
CEST @ 50645   LOG:  unexpected pageaddr 4C46/E00 in log segment
00044C4D0091, offset 0
May 21 21:23:40 d8 postgres[50699]: [4-1] 2015-05-21 21:23:40.232 CEST
@ 50699   LOG:  started streaming WAL from primary at 4C4D/9000 on
timeline 4
May 21 21:23:40 d8 postgres[50699]: [5-1] 2015-05-21 21:23:40.232 CEST
@ 50699   FATAL:  could not receive data from WAL stream: ERROR:
requested WAL segment 00044C4D0090 has already been
removed
May 21 21:23:40 d8 postgres[50645]: [17-1] 2015-05-21 21:23:40.255
CEST @ 50645   LOG:  restored log file 00044C4D0090 from
archive
May 21 21:23:40 d8 postgres[50703]: [4-1] 2015-05-21 21:23:40.268 CEST
@ 50703   LOG:  started streaming WAL from primary at 4C4D/9000 on
timeline 4
May 21 21:23:40 d8 postgres[50703]: [5-1] 2015-05-21 21:23:40.268 CEST
@ 50703   FATAL:  could not receive data from WAL stream: ERROR:
requested WAL segment 00044C4D0090 has already been
removed
May 21 21:23:40 d8 postgres[50703]: [5-2]
...
(and so on)

So, as I understand:
- slave was started and entered restore,
- slave restored 00044C4D0090 from archive, reached
consistent recovery state
- now, it connected to master and noticed that its
00044C4D0091 segment is incomplete
- and then started yelling about the missing segment on master (master
already deleted it after archiving)

Why?

I fixed it by stopping slave, doing some operations on master, copying
00044C4D0091 and 00044C4D0092 from master, and
starting the slave:


May 21 21:24:56 d8 postgres[50644]: [4-1] 2015-05-21 21:24:56.160 CEST
@ 50644   LOG:  received fast shutdown request
May 21 21:24:56 d8 postgres[50644]: [5-1] 2015-05-21 21:24:56.160 CEST
@ 50644   LOG:  aborting any active transactions
May 21 21:24:56 d8 postgres[50657]: [3-1] 2015-05-21 21:24:56.162 CEST
@ 50657   LOG:  shutting down
May 21 21:24:56 d8 postgres[50657]: [4-1] 2015-05-21 21:24:56.174 CEST
@ 50657   LOG:  database system is shut down
May 21 21:28:54 d8 postgres[53574]: [1-1] 2015-05-21 21:28:54.781 CEST
@ 53574   LOG:  loaded library pg_stat_statements
May 21 21:28:55 d8 postgres[53574]: [2-1] 2015-05-21 21:28:55.123 CEST
@ 53574   LOG:  ending log output to stderr
May 21 21:28:55 d8 postgres[53574]: [2-2] 2015-05-21 21:28:55.123 CEST
@ 53574   HINT:  Future log output will go to log destination
"syslog".
May 21 21:28:55 d8 postgres[53575]: [3-1] 2015-05-21 21:28:55.124 CEST
@ 53575   LOG:  database system was shut down in recovery at
2015-05-21 21:24:56 CEST
May 21 21:28:55 d8 postgres[53575]: [4-1] 2015-05-21 21:28:55.126 CEST
@ 53575   LOG:  entering standby mode
May 21 21:28:55 d8 postgres[53575]: [5-1] 2015-05-21 

Re: [GENERAL] About COPY command (and probably file fdw too)

2015-05-21 Thread David G. Johnston
On Thu, May 21, 2015 at 1:33 PM, Nicolas Paris nipari...@gmail.com wrote:

 Hi,

 To me this would be great. Why not the ability to restrict lines too
 COPY stafflist (userid, username, staffid)
 FROM 'myfile.txt'
 WITH (FORMAT text, DELIMITER E'\t', COLUMNS (1, 2, 7),
 LINES(2:1000,2000:3000), ENCODING 'windows-1250')
 = subset of full data.


At some level of complexity it is questionable whether a feature belongs
in core that can exist outside of it.

While I have not yet personally used pgloader it seems to accomplish much
of what is being requested.

http://pgloader.io/index.html

COPY (and \copy) serves its purpose extremely well but expects the user
to deal with any customization needed either before or after it has done
its thing.  I believe this is for the best since such customizations and
tools have no need to operate on the same release cycle as the core
PostgreSQL project.
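As an aside, the column and line subsetting in the proposed syntax can already be done outside COPY. A minimal sketch with awk; file names and column positions below are purely illustrative:

```shell
# Illustrative sketch: select columns 1, 2, 7 and lines 2..1000 with awk,
# then hand the subset to \copy.  File names and positions are made up.
printf 'h1\th2\tc3\tc4\tc5\tc6\th7\nu1\talice\tx\tx\tx\tx\ts9\n' > myfile.txt
awk -F'\t' 'NR >= 2 && NR <= 1000 { print $1 "\t" $2 "\t" $7 }' myfile.txt > subset.txt
cat subset.txt
# then in psql:
#   \copy stafflist (userid, username, staffid) from 'subset.txt'
```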

David J.


Re: [GENERAL] date with month and year

2015-05-21 Thread Paul Jungwirth

what you've said above is incorrect.
All WITH TIME ZONE does is tell PostgreSQL to apply timezone
conversions during various operations.  The stored data is represented
as an epoch without any concept of the source data's timezone
representation.


Oh, very interesting! Thank you for pointing that out. I'll have to 
think some more about when I'd want that behavior.


Paul


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread Paul Jungwirth

Hi Steve,

Thanks for such a thorough response! I agree that time is a lot trickier 
and trappier than one might expect, so it's good to learn how others 
grapple with it.


 Your original question had to do with month/year.

Just to clarify, that was Daniel's original question, but you're 
replying to my follow-up question.



The first is that
web developers shouldn't become educated about the capabilities of a
database but rather use the database as a dumb data-store and redo
everything themselves (often this includes an utter failure to use the
data-integrity capabilities of the database).


That's not a debate I can hope to settle, but for what it's worth, I 
mostly agree with you. That's why I've written these tools to let Rails 
users leverage more of the capabilities inside Postgres, especially 
integrity constraints:


https://github.com/pjungwir/db_leftovers
https://github.com/pjungwir/aggs_for_arrays/

also these efforts at education:

https://github.com/pjungwir/rails-and-sql-talk
http://illuminatedcomputing.com/posts/2015/02/postgres_lateral_join/
http://illuminatedcomputing.com/posts/2015/03/generate_series_for_time_series/

Anyway, I agree that you have to store the time zone *somewhere*, and I 
suppose that's the reason Joshua remarked that you really shouldn't use 
WITHOUT TIME ZONE. And often a time has one perspective that is 
canonical or preferred, e.g. the time zone of the user who created 
the object. And in that case WITH TIME ZONE gives you a convenient place 
to store that. I think I still prefer a more relativistic approach 
where times have no preferred perspective, and input strings are 
converted to a bare instant as quickly as possible (using whatever 
time zone is appropriate). For instance that avoids the failure scenario 
Brian described. I concede that storing the time zone separately as a 
string makes it trickier for other database clients, at least when the 
string is a name only meaningful to Rails. In the future I'll keep an 
eye out for when WITH might be handy. And maybe I'll do some research to 
see how well Rails would handle those columns.


Thanks again for your generosity!

Yours,
Paul



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread David G. Johnston
On Thu, May 21, 2015 at 2:10 PM, Paul Jungwirth p...@illuminatedcomputing.com
 wrote:

 Anyway, I agree that you have to store the time zone *somewhere*, and I
 suppose that's the reason Joshua remarked that you really shouldn't use
 WITHOUT TIME ZONE. And often a time has one perspective that is canonical
 or preferred, e.g. the time zone of the user who created the object. And
 in that case WITH TIME ZONE gives you a convenient place to store that. I
 think I still prefer a more relativistic approach where times have no
 preferred perspective, and input strings are converted to a bare instant
 as quickly as possible (using whatever time zone is appropriate). For
 instance that avoids the failure scenario Brian described. I concede that
 storing the time zone separately as a string makes it trickier for other
 database clients, at least when the string is a name only meaningful to
 Rails. In the future I'll keep an eye out for when WITH might be handy. And
 maybe I'll do some research to see how well Rails would handle those
 columns.


I'm not sure Brian is correct - but my head started to hurt when I
attempted to reason it out - but what you've said above is incorrect.  All
WITH TIME ZONE does is tell PostgreSQL to apply timezone conversions
during various operations.  The stored data is represented as an epoch
without any concept of the source data's timezone representation.  i.e. if
I store '2015-05-20T15:23:00-MST'::timestamptz into a table and later
retrieve it I have no way of knowing that MST was part of the original
specification.
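A quick illustration of that point (the session setting below is illustrative):

```sql
-- The input offset is not preserved; only the instant survives.
SET timezone = 'UTC';
SELECT '2015-05-20 15:23:00-07'::timestamptz;
-- 2015-05-20 22:23:00+00  -- the "-07" is gone
```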

David J.


[GENERAL] RLS policy issue

2015-05-21 Thread Ted Toth
I'm working on understanding and implementing RLS. Since I work on
systems using SELinux (MLS policy) I'm using the sepgsql module that
I've modified slightly i.e. I've added a function named
sepgsql_check_row_perm that I'm using in the policy for example I have
a 'reports' table that looks like:

                           Table "public.reports"
     Column     |      Type       |                      Modifiers                       | Storage
----------------+-----------------+------------------------------------------------------+----------
 id             | integer         | not null default nextval('reports_id_seq'::regclass) | plain
 report         | json            |                                                      | extended
 message_id     | integer         | not null                                             | plain
 location       | geometry(Point) |                                                      | main
 security_label | text            | default sepgsql_getcon()                             | extended
Policies:
POLICY check_report_delete_selinux FOR DELETE
  USING sepgsql_check_row_perm(security_label, sepgsql_getcon(),
'delete'::text)
POLICY check_report_insert_selinux FOR INSERT
  WITH CHECK sepgsql_check_row_perm(security_label,
sepgsql_getcon(), 'insert'::text)
POLICY check_report_select_selinux FOR SELECT
  USING sepgsql_check_row_perm(sepgsql_getcon(), security_label,
'select'::text)
POLICY check_report_update_selinux FOR UPDATE
  USING sepgsql_check_row_perm(security_label, sepgsql_getcon(),
'update'::text)
  WITH CHECK sepgsql_check_row_perm(security_label,
sepgsql_getcon(), 'update'::text)

When I do a select I expect sepgsql_check_row_perm to be called and at
least output the elog message I added. Here's part of the patch I apply
to add the sepgsql_check_row_perm function to the module:

 /*
+ * BOOL sepgsql_check_row_perm(TEXT, TEXT, TEXT)
+ *
+ * Check if perm allowed for tuple.
+ * This is a variant of sepgsql_avc_check_perms_label which allows the
+ * specifying of both the source and target contexts. For MLS
+ * (write up read down) dominance purposes in the case of
+ * INSERT/UPDATE/DELETE (write) the source is the tuple's context
+ * and it must dominate the peer's context, however in the case of
+ * SELECT (read) the source is the peer's context and it must dominate
+ * the tuple's context.
+ */
+PG_FUNCTION_INFO_V1(sepgsql_check_row_perm);
+Datum
+sepgsql_check_row_perm(PG_FUNCTION_ARGS)
+{
+   const char *scontext;
+   const char *tcontext;
+   const char *perm_name;
+   access_vector_t av_perm;
+
+   elog(DEBUG1, "sepgsql_check_row_perm");

I'd also expect that the rewrite would have added the POLICY SELECT
USING clause to the query but I don't see any indication of that in
the details that follow:

 2015-05-21 16:59:39.030 CDT STATEMENT:  select * from reports
 2015-05-21 16:59:39.030 CDT LOG:  rewritten parse tree:
 2015-05-21 16:59:39.030 CDT DETAIL:  (
   {QUERY
   :commandType 1
   :querySource 0
   :canSetTag true
   :utilityStmt 
   :resultRelation 0
   :hasAggs false
   :hasWindowFuncs false
   :hasSubLinks false
   :hasDistinctOn false
   :hasRecursive false
   :hasModifyingCTE false
   :hasForUpdate false
   :hasRowSecurity true
   :cteList 
   :rtable (
  {RTE
  :alias 
  :eref
 {ALIAS
 :aliasname reports
 :colnames (id report message_id location security_label)
 }
  :rtekind 0
  :relid 19116
  :relkind r
  :lateral false
  :inh true
  :inFromCl true
  :requiredPerms 2
  :checkAsUser 0
  :selectedCols (b 9 10 11 12 13)
  :modifiedCols (b)
  :securityQuals 
  }
   )
   :jointree
  {FROMEXPR
  :fromlist (
 {RANGETBLREF
 :rtindex 1
 }
  )
  :quals 
  }
   :targetList (
  {TARGETENTRY
  :expr
 {VAR
 :varno 1
 :varattno 1
 :vartype 23
 :vartypmod -1
 :varcollid 0
 :varlevelsup 0
 :varnoold 1
 :varoattno 1
 :location 7
 }
  :resno 1
  :resname id
  :ressortgroupref 0
  :resorigtbl 19116
  :resorigcol 1
  :resjunk false
  }
  {TARGETENTRY
  :expr
 {VAR
 :varno 1
 :varattno 2
 :vartype 114
 :vartypmod -1
 :varcollid 0
 :varlevelsup 0
 :varnoold 1
 :varoattno 2
 :location 7
 }
  :resno 2
  :resname report
  :ressortgroupref 0
  :resorigtbl 19116
  

Re: [GENERAL] Enum in foreign table: error and correct way to handle.

2015-05-21 Thread Ian Barwick
On 21/05/15 04:23, Peter Swartz wrote:
 I'm creating a foreign table (foo_table) in database_a; foo_table lives in
 database_b. foo_table has an enum (bar_type) as one of its columns. Because
 this enum is in database_b, the creation of the foreign table fails in
 database_a: database_a doesn't understand the column type. Running the
 following in database_a
 
 CREATE FOREIGN TABLE foo_table (id integer NOT NULL, bar bar_type) SERVER
 database_b
 
 One gets the error:
 
 ERROR: type bar_type does not exist
 
 I could just create a copy of bar_type in database_a, but this feels
 duplicative and possibly a future cause of inconsistency / trouble. Would
 anyone have thoughts on best practices for handling?

A foreign table is basically an ad-hoc remote data source for the local 
database,
so the onus is on the local database to maintain its definition of the
remote table, whether it's in another (or even the same) PostgreSQL server
or a completely different data source, especially as the local definition can
be different from the remote one.

This does mean that there's no simple way of ensuring any remote dependencies 
are
present on the local server. PostgreSQL 9.5 will provide the IMPORT FOREIGN 
SCHEMA
command, however this is limited to table/view definitions.


Regards

Ian Barwick



-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] RLS policy issue

2015-05-21 Thread Stephen Frost
Ted,

* Ted Toth (txt...@gmail.com) wrote:
 I'd also expect that the rewrite would have added the POLICY SELECT
 USING clause to the query but I don't see any indication of that in
 the details that follow:

Just running 'explain' should show the policy.

Are you running this as the owner of the table or as a superuser?  As
noted in the documentation, the owner (who controls the policies
on the table anyway) and the superuser (who can bypass all
authorization) do not have the RLS policies applied unless the
'row_security' GUC is set to 'force', like so:

SET row_security = force;
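For illustration, with row_security forced, a plain EXPLAIN on the table above should list the policy's USING expression as a filter on the scan (hypothetical output sketch):

```sql
SET row_security = force;
EXPLAIN SELECT * FROM reports;
--  Seq Scan on reports
--    Filter: sepgsql_check_row_perm(sepgsql_getcon(), security_label, 'select'::text)
```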

By the way, you might be interested in the test_rls_hooks module which I
wrote and committed recently under src/test/modules.  That's the
approach which I was thinking about using with sepgsql to provide policy
enforcement, but using regular policies should also work.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] Strange replication problem - segment restored from archive but still requested from master

2015-05-21 Thread Venkata Balaji N
On Fri, May 22, 2015 at 6:52 AM, Piotr Gasidło qua...@barbara.eu.org
wrote:

 Got strange problem. Unable to repeat, but got logs.

 Simple master-slave using streaming replication.
 Master is running. Slave is down.
 Segment 00044C4D0090 was successfully archived and sent
 from master to slave.

 Now I've started slave, and:

 May 21 21:23:37 d8 postgres[50645]: [3-1] 2015-05-21 21:23:37.033 CEST
 @ 50645   LOG:  database system was shut down in recovery at
 2015-05-21 21:22:03 CEST
 May 21 21:23:37 d8 postgres[50645]: [4-1] 2015-05-21 21:23:37.034 CEST
 @ 50645   LOG:  entering standby mode
 May 21 21:23:37 d8 postgres[50645]: [5-1] 2015-05-21 21:23:37.058 CEST
 @ 50645   LOG:  restored log file 00044C4D0088 from
 archive
 May 21 21:23:37 d8 postgres[50645]: [6-1] 2015-05-21 21:23:37.120 CEST
 @ 50645   LOG:  redo starts at 4C4D/88493B50
 May 21 21:23:37 d8 postgres[50645]: [7-1] 2015-05-21 21:23:37.226 CEST
 @ 50645   LOG:  restored log file 00044C4D0089 from
 archive
 May 21 21:23:37 d8 postgres[50645]: [8-1] 2015-05-21 21:23:37.426 CEST
 @ 50645   LOG:  restored log file 00044C4D008A from
 archive
 May 21 21:23:37 d8 postgres[50645]: [9-1] 2015-05-21 21:23:37.750 CEST
 @ 50645   LOG:  restored log file 00044C4D008B from
 archive
 May 21 21:23:38 d8 postgres[50645]: [10-1] 2015-05-21 21:23:38.376
 CEST @ 50645   LOG:  restored log file 00044C4D008C from
 archive
 May 21 21:23:38 d8 postgres[50645]: [11-1] 2015-05-21 21:23:38.690
 CEST @ 50645   LOG:  restored log file 00044C4D008D from
 archive
 May 21 21:23:38 d8 postgres[50645]: [12-1] 2015-05-21 21:23:38.855
 CEST @ 50645   LOG:  restored log file 00044C4D008E from
 archive
 May 21 21:23:39 d8 postgres[50645]: [13-1] 2015-05-21 21:23:39.275
 CEST @ 50645   LOG:  restored log file 00044C4D008F from
 archive
 May 21 21:23:39 d8 postgres[50645]: [14-1] 2015-05-21 21:23:39.654
 CEST @ 50645   LOG:  restored log file 00044C4D0090 from
 archive
 May 21 21:23:40 d8 postgres[50645]: [15-1] 2015-05-21 21:23:40.222
 CEST @ 50645   LOG:  consistent recovery state reached at
 4C4D/90FFF9C8
 May 21 21:23:40 d8 postgres[50644]: [3-1] 2015-05-21 21:23:40.222 CEST
 @ 50644   LOG:  database system is ready to accept read only
 connections
 May 21 21:23:40 d8 postgres[50645]: [16-1] 2015-05-21 21:23:40.223
 CEST @ 50645   LOG:  unexpected pageaddr 4C46/E00 in log segment
 00044C4D0091, offset 0
 May 21 21:23:40 d8 postgres[50699]: [4-1] 2015-05-21 21:23:40.232 CEST
 @ 50699   LOG:  started streaming WAL from primary at 4C4D/9000 on
 timeline 4
 May 21 21:23:40 d8 postgres[50699]: [5-1] 2015-05-21 21:23:40.232 CEST
 @ 50699   FATAL:  could not receive data from WAL stream: ERROR:
 requested WAL segment 00044C4D0090 has already been
 removed
 May 21 21:23:40 d8 postgres[50645]: [17-1] 2015-05-21 21:23:40.255
 CEST @ 50645   LOG:  restored log file 00044C4D0090 from
 archive
 May 21 21:23:40 d8 postgres[50703]: [4-1] 2015-05-21 21:23:40.268 CEST
 @ 50703   LOG:  started streaming WAL from primary at 4C4D/9000 on
 timeline 4
 May 21 21:23:40 d8 postgres[50703]: [5-1] 2015-05-21 21:23:40.268 CEST
 @ 50703   FATAL:  could not receive data from WAL stream: ERROR:
 requested WAL segment 00044C4D0090 has already been
 removed
 May 21 21:23:40 d8 postgres[50703]: [5-2]
 ...
 (and so on)

 So, as I understand:
 - slave was started and entered restore,
 - slave restored 00044C4D0090 from archive, reached
 consistent recovery state
 - now, it connected to master and noticed that its
 00044C4D0091 segment is incomplete
 - and then started yelling about the missing segment on master (master
 already deleted it after archiving)

 Why?


It might be yelling about the WAL segment due to the delay in shipping it
from master to slave.
Do you have the restore_command set up in the recovery.conf file? Do you
have any automated job shipping WAL archives from master to slave?

The slave, if it does not find the WAL segment in the master's pg_xlog
location, looks for it in the location mentioned in the restore_command. It
yells if it is unable to find it there.

Cannot give any concrete comments/suggestions until the recovery.conf
entries are known.
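For reference, a minimal recovery.conf sketch with both streaming and archive recovery configured; host and paths below are placeholders:

```
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
restore_command = 'cp /path/to/archive/%f "%p"'
```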


 I fixed it by stopping slave, doing some operations on master, copying
 00044C4D0091 and 00044C4D0092 from master, and
 starting the slave:


 May 21 21:24:56 d8 postgres[50644]: [4-1] 2015-05-21 21:24:56.160 CEST
 @ 50644   LOG:  received fast shutdown request
 May 21 21:24:56 d8 postgres[50644]: [5-1] 2015-05-21 21:24:56.160 CEST
 @ 50644   LOG:  aborting any active transactions
 May 21 21:24:56 d8 postgres[50657]: [3-1] 2015-05-21 21:24:56.162 CEST
 @ 50657   LOG:  shutting down
 May 21 21:24:56 d8 postgres[50657]: [4-1] 2015-05-21 21:24:56.174 CEST
 @ 50657   LOG:  database system is shut down
 May 21 21:28:54 d8 

Re: [GENERAL] date with month and year

2015-05-21 Thread Adrian Klaver

On 05/21/2015 11:56 AM, Steve Crawford wrote:

On 05/21/2015 10:45 AM, Paul Jungwirth wrote:

You really shouldn't use WITHOUT TIME ZONE.


I'd like to know more about this. Can you say why?


Start by reading about the date and time data types with special
attention to section 8.5.3:
www.postgresql.org/docs/current/static/datatype-datetime.html

Now go back and read it again and experiment a while until it makes
sense. As Adrian Klaver so eloquently put it, "If I have learned
anything about dealing with dates and times, is that it is a set of
exceptions bound together by a few rules. Every time you think you have
the little rascals cornered, one gets away." This is also a very good
reason to avoid reinventing the wheel.



The check is in the mail:)


Cheers,
Steve




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Allowing postgresql to accept 0xff syntax for data types that it makes sense for?

2015-05-21 Thread Bill Moran
On Thu, 21 May 2015 13:57:24 -0400
Tom Lane t...@sss.pgh.pa.us wrote:

 Bill Moran wmo...@potentialtech.com writes:
  My other question: is there a specific reason why PostgreSQL doesn't support
  this syntax, aside from nobody has bothered to add such support? Because
  I'm considering writing a patch to Postgres and submitting it, but I'm not
  going to go down that path if there's a specific reason why supporting this
  syntax would be _bad_. Personally, I feel like it would be a good thing, as
  it seems like a lot of other database systems support it, and even though
  it's not ANSI, it's pretty much the de-facto standard.
 
 How many is a lot, and do any of the responsible vendors sit on the SQL
 standards committee?

Well, I've personally worked with (in addition to PostgreSQL) Microsoft
SQL Server, MySQL, and Sybase -- PostgreSQL is the only one of those 4 that
doesn't support the 0xff syntax.

I did a little research and it appears that neither Oracle nor DB2 supports
the 0xff syntax ... so not _quite_ as common as it seemed to me.

 One large concern about doing anything like this is whether future
 versions of the SQL standard might blindside us with some
 not-terribly-compatible interpretation of that syntax.  If we do something
 that is also in Oracle or DB2 or one of the other big boys, then we can
 probably rely on the assumption that they'll block anything really
 incompatible from becoming standardized ;-).

I assume that Microsoft is big enough to prevent anything that would
hurt SQL Server's compatibility from becomming a standard?

 OTOH, if the actual meaning of a lot is MySQL, I'd be pretty worried
 about this scenario.

Well, MySQL _does_ support that syntax ... but I couldn't care less. MySQL
also throws away your data instead of giving you errors and I would never
ask PostgreSQL to start behaving like that.

With all that being said, if I were to build a patch, would it be likely
to be accepted into core?
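For reference, a few hex-input forms PostgreSQL does accept today:

```sql
SELECT x'ff';                 -- bit string:  11111111
SELECT x'ff'::int;            -- integer:     255
SELECT decode('ff', 'hex');   -- bytea:       \xff
SELECT '\xff'::bytea;         -- bytea (hex input format)
```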

-- 
Bill Moran


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread Adrian Klaver

On 05/21/2015 11:02 AM, Daniel Torres wrote:

Sorry, forgot to tell you what I'm trying to do: I have climate data and want
to obtain mean temperature and total precipitation and that sort of
thing per month and year. I think date_trunc is a good solution, but any
other advice would be very welcome.


As it turns out I am working on something similar with regards to school 
days. A quick and dirty query:


SELECT
extract (
YEAR
FROM
school_day ) AS YEAR,
extract (
MONTH
FROM
school_day ) AS MONTH,
count (
school_day )
FROM
school_calendar
GROUP BY
extract (
YEAR
FROM
school_day ),
extract (
MONTH
FROM
school_day )
ORDER BY
extract (
YEAR
FROM
school_day ),
extract (
MONTH
FROM
school_day );

Reformatting courtesy of pgFormatter (http://sqlformat.darold.net/).

Results:

 year | month | count
--+---+---
 2005 | 3 | 7
 2005 | 4 |12
 2005 | 5 |17
 2005 | 6 |14
 2005 | 7 |11
 2005 | 8 |15
 2005 | 9 |16
 2005 |10 |15
 2005 |11 |17
 2005 |12 |10
 2006 | 1 |15
 2006 | 2 |12
 2006 | 3 |18
 2006 | 4 |12
 2006 | 5 |18
 2006 | 6 |13
 2006 | 7 |11
 2006 | 8 |15
 2006 | 9 |15
 2006 |10 |18
 2006 |11 |13
 2006 |12 |10
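Since Daniel mentioned date_trunc, the same grouping can be written more compactly with it (assuming the school_calendar table above):

```sql
SELECT date_trunc('month', school_day) AS month,
       count(school_day)
FROM school_calendar
GROUP BY 1
ORDER BY 1;
```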





(I need to read more about time zones, I'm new at using postgresql)

Thank you,
Daniel

2015-05-21 12:45 GMT-05:00 Paul Jungwirth p...@illuminatedcomputing.com
mailto:p...@illuminatedcomputing.com:

You really shouldn't use WITHOUT TIME ZONE.


I'd like to know more about this. Can you say why? Are there any
articles you'd recommend? I'm fond of normalizing all times to UTC
and only presenting them in a time zone when I know the current
perspective. I've written about that approach in a Rails context here:

http://illuminatedcomputing.com/posts/2014/04/timezones/

I find that this helps me to ignore time zones in most parts of my
application and cut down on my timezone-related bugs.

Thanks!

Paul





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org
mailto:pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread Alvaro Herrera
Adrian Klaver wrote:

 SELECT
 extract (
 YEAR
 FROM
 school_day ) AS YEAR,

 Reformatting courtesy of pgFormatter (http://sqlformat.darold.net/).

FWIW I think this indenting of FROM inside an extract() call is odd and
ugly --- probably just an accident resulting from dealing with the
regular FROM clause.  It seems to me that the "YEAR FROM school_day"
part should be considered a single argument instead of breaking it into
multiple lines.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread Adrian Klaver

On 05/21/2015 09:04 PM, Alvaro Herrera wrote:

Adrian Klaver wrote:


SELECT
 extract (
 YEAR
 FROM
 school_day ) AS YEAR,



Reformatting courtesy of pgFormatter (http://sqlformat.darold.net/).


FWIW I think this indenting of FROM inside an extract() call is odd and
ugly --- probably just an accident resulting from dealing with the
regular FROM clause.  It seems to me that the "YEAR FROM school_day"
part should be considered a single argument instead of breaking it into
multiple lines.


Probably so, but the output is a lot cleaner than what I did in psql. 
The author of pgFormatter will be interested in your comments:


https://github.com/darold/pgFormatter

--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Strange replication problem - segment restored from archive but still requested from master

2015-05-21 Thread Fujii Masao
On Fri, May 22, 2015 at 5:52 AM, Piotr Gasidło qua...@barbara.eu.org wrote:
 Got strange problem. Unable to repeat, but got logs.

 Simple master-slave using streaming replication.
 Master is running. Slave is down.
 Segment 00044C4D0090 was successfully archived and sent
 from master to slave.

 Now I've started slave, and:

 May 21 21:23:37 d8 postgres[50645]: [3-1] 2015-05-21 21:23:37.033 CEST
 @ 50645   LOG:  database system was shut down in recovery at
 2015-05-21 21:22:03 CEST
 May 21 21:23:37 d8 postgres[50645]: [4-1] 2015-05-21 21:23:37.034 CEST
 @ 50645   LOG:  entering standby mode
 May 21 21:23:37 d8 postgres[50645]: [5-1] 2015-05-21 21:23:37.058 CEST
 @ 50645   LOG:  restored log file 00044C4D0088 from
 archive
 May 21 21:23:37 d8 postgres[50645]: [6-1] 2015-05-21 21:23:37.120 CEST
 @ 50645   LOG:  redo starts at 4C4D/88493B50
 May 21 21:23:37 d8 postgres[50645]: [7-1] 2015-05-21 21:23:37.226 CEST
 @ 50645   LOG:  restored log file 00044C4D0089 from
 archive
 May 21 21:23:37 d8 postgres[50645]: [8-1] 2015-05-21 21:23:37.426 CEST
 @ 50645   LOG:  restored log file 00044C4D008A from
 archive
 May 21 21:23:37 d8 postgres[50645]: [9-1] 2015-05-21 21:23:37.750 CEST
 @ 50645   LOG:  restored log file 00044C4D008B from
 archive
 May 21 21:23:38 d8 postgres[50645]: [10-1] 2015-05-21 21:23:38.376
 CEST @ 50645   LOG:  restored log file 00044C4D008C from
 archive
 May 21 21:23:38 d8 postgres[50645]: [11-1] 2015-05-21 21:23:38.690
 CEST @ 50645   LOG:  restored log file 00044C4D008D from
 archive
 May 21 21:23:38 d8 postgres[50645]: [12-1] 2015-05-21 21:23:38.855
 CEST @ 50645   LOG:  restored log file 00044C4D008E from
 archive
 May 21 21:23:39 d8 postgres[50645]: [13-1] 2015-05-21 21:23:39.275
 CEST @ 50645   LOG:  restored log file 00044C4D008F from
 archive
 May 21 21:23:39 d8 postgres[50645]: [14-1] 2015-05-21 21:23:39.654
 CEST @ 50645   LOG:  restored log file 00044C4D0090 from
 archive
 May 21 21:23:40 d8 postgres[50645]: [15-1] 2015-05-21 21:23:40.222
 CEST @ 50645   LOG:  consistent recovery state reached at
 4C4D/90FFF9C8
 May 21 21:23:40 d8 postgres[50644]: [3-1] 2015-05-21 21:23:40.222 CEST
 @ 50644   LOG:  database system is ready to accept read only
 connections
 May 21 21:23:40 d8 postgres[50645]: [16-1] 2015-05-21 21:23:40.223
 CEST @ 50645   LOG:  unexpected pageaddr 4C46/E00 in log segment
 00044C4D0091, offset 0
 May 21 21:23:40 d8 postgres[50699]: [4-1] 2015-05-21 21:23:40.232 CEST
 @ 50699   LOG:  started streaming WAL from primary at 4C4D/9000 on
 timeline 4
 May 21 21:23:40 d8 postgres[50699]: [5-1] 2015-05-21 21:23:40.232 CEST
 @ 50699   FATAL:  could not receive data from WAL stream: ERROR:
 requested WAL segment 00044C4D0090 has already been
 removed
 May 21 21:23:40 d8 postgres[50645]: [17-1] 2015-05-21 21:23:40.255
 CEST @ 50645   LOG:  restored log file 00044C4D0090 from
 archive
 May 21 21:23:40 d8 postgres[50703]: [4-1] 2015-05-21 21:23:40.268 CEST
 @ 50703   LOG:  started streaming WAL from primary at 4C4D/9000 on
 timeline 4
 May 21 21:23:40 d8 postgres[50703]: [5-1] 2015-05-21 21:23:40.268 CEST
 @ 50703   FATAL:  could not receive data from WAL stream: ERROR:
 requested WAL segment 00044C4D0090 has already been
 removed
 May 21 21:23:40 d8 postgres[50703]: [5-2]
 ...
 (and so on)

 So, as I understand it:
 - the slave was started and entered recovery,
 - the slave restored 00044C4D0090 from the archive and reached a
 consistent recovery state,
 - it then connected to the master and noticed that its copy of segment
 00044C4D0091 was incomplete,
 - and then it started complaining about the missing segment on the master
 (the master had already deleted it after archiving)

 Why?

Thanks for the report! This seems to be a bug.

This problem happens when a WAL record spans two WAL files and there is no
valid latter WAL file in the standby. In your case, the former file
is 00044C4D0090 and the latter is 00044C4D0091.

In this case, the first half of the WAL record can be read from the former WAL
file, but the remaining half cannot, because no valid latter file exists in the
standby. The standby then tries to retrieve the latter WAL file via replication.
The problem here is that the standby tries to start replication from the
starting point of the WAL record, i.e., from a location in the former WAL file.
So the already-read WAL file is requested via replication.

To address this problem, maybe we can just use the location where the remaining
half of the WAL record starts as the replication starting location. We would
need to look carefully at the side effects of that change, though.

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread Jan de Visser
On May 21, 2015 11:56:52 AM Steve Crawford wrote:
 The article does also display a couple attitudes that I feel are especially
 rampant in the web-development community. The first is that web developers
 shouldn't become educated about the capabilities of a database but rather
 use the database as a dumb data-store and redo everything themselves (often
 this includes an utter failure to use the data-integrity capabilities of
 the database).

Having been at the receiving end of web developer rants many times, the reason 
more often than not is that the database does the checking after the fact, i.e. 
after the user has spent the time providing the data. Web developers need to know 
what's allowed when they throw up the page, and frameworks offer little or no 
help in retrieving these validation rules. So the web developer is almost forced 
to roll his own.




Re: [GENERAL] date with month and year

2015-05-21 Thread Paul Jungwirth

You really shouldn't use WITHOUT TIME ZONE.


I'd like to know more about this. Can you say why? Are there any 
articles you'd recommend? I'm fond of normalizing all times to UTC and 
only presenting them in a time zone when I know the current 
perspective. I've written about that approach in a Rails context here:


http://illuminatedcomputing.com/posts/2014/04/timezones/

I find that this helps me to ignore time zones in most parts of my 
application and cut down on my timezone-related bugs.


Thanks!
Paul





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread Steve Crawford

On 05/21/2015 10:01 AM, Daniel Torres wrote:
Hi everybody, I'm new in the PostgreSQL world, and have an easy 
question: Is it possible to have date type data that only contain 
month and year?, how can I obtain that from a timestamp (without time 
zone) column?...


Others have offered good tips but if you tell us more about the problem 
you are attempting to solve you may get some better advice and/or 
warnings about pitfalls.


-Steve



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replicate over pgbouncer?

2015-05-21 Thread Andomar

Since you did not specify your O/S and PostgreSQL version, I'll just
point you to the documentation for 9.1.


We're using CentOS 6.5 with PostgreSQL 9.4.1.

Like the original post mentioned, we use a hot standby for replication.

The question is whether we can do the hot standby through pgbouncer, or 
if we need a separate open port on the database server just for replication.


Cheers,
Andomar


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread Brian Dunavant
It's probably worth noting that both the Ruby 'best practice' AND
Postgres have a failure case when dealing with future dates precisely
because they are storing the data as UTC with a time zone.  This is
one case where storing the data WITHOUT TIME ZONE would actually save
your bacon.

From the postgres docs:  For times in the future, the assumption is
that the latest known rules for a given time zone will continue to be
observed indefinitely far into the future.

Imagine scheduling a meeting for a certain time a few years from now.
This will be stored as UTC plus a time zone. A year later, the government
responsible for that zone decides to change its time zone rules. Your
operating system will get the new timezone data in an update (as it
should). However, when the meeting comes around, you're going to be
early or late, because the wall time you get converting back from
UTC plus the time zone is no longer the time at which you were supposed
to be at the meeting. If you had stored that future date as a timestamp
WITHOUT time zone, you would still have been on time.

This is only an issue for future dates, not past ones.
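
A rough sketch of this failure mode (the table and column names here are
hypothetical, and the eventual output depends on the tzdata rules in effect
when the value is read back):

```sql
-- Hypothetical schema: a meeting scheduled years ahead.
CREATE TABLE meetings (
    id        serial PRIMARY KEY,
    starts_at timestamptz  -- stored internally as a UTC instant
);

SET timezone = 'America/New_York';
INSERT INTO meetings (starts_at) VALUES ('2030-06-01 09:00');

-- The 09:00 wall time was converted to UTC using today's tzdata rules.
-- If the zone's rules change before 2030, reading the row back may no
-- longer yield 09:00 local time -- which is exactly the hazard above.
SELECT starts_at FROM meetings;
```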

-Brian Dunavant
(time is hard, so if I'm wrong anywhere here, someone please correct me)


On Thu, May 21, 2015 at 2:56 PM, Steve Crawford
scrawf...@pinpointresearch.com wrote:
 On 05/21/2015 10:45 AM, Paul Jungwirth wrote:

 You really shouldn't use WITHOUT TIME ZONE.


 I'd like to know more about this. Can you say why?


 Start by reading about the date and time data types with special attention
 to section 8.5.3:
 www.postgresql.org/docs/current/static/datatype-datetime.html

 Now go back and read it again and experiment a while until it makes sense.
 As Adrian Klaver so eloquently put it, If I have learned anything about
 dealing with dates and times, is that it is a set of exceptions bound
 together by a few rules. Every time you think you have the little rascals
 cornered, one gets away. This is also a very good reason to avoid
 reinventing the wheel.

 When you need a break, watch this:
 https://www.youtube.com/watch?v=-5wpm-gesOY

 His conclusion is a good one: be very happy that someone else has done the
 dirty work for you.

 The Ruby article does make one good point which is that we are talking about
 what they call an instant or what I like to refer to as a point in time.
 The point in time is actually a better way of thinking of timestamp with
 time zone since the timestamp with time zone does not actually store any
 timezone information - it stores a point in time that can be manipulated in
 the time-zone of your choosing whereas timestamp without time zone is not a
 point in time and must be combined with other information to do proper
 manipulation.

 The article does also display a couple attitudes that I feel are especially
 rampant in the web-development community. The first is that web developers
 shouldn't become educated about the capabilities of a database but rather
 use the database as a dumb data-store and redo everything themselves (often
 this includes an utter failure to use the data-integrity capabilities of the
 database).

 The second is the assumption that they are the only users of the database
 and that nobody will ever access the data except through their
 custom-written Ruby/PHP/Perl/Python code and that no other programming
 language will ever be used. Woe be to the poor slob who has to deal with
 ad-hoc queries, analytics platforms or reporting systems that weren't so
 brilliantly reinvented or who wants to use range-types or other nice
 PostgreSQL features.

 Internally PostgreSQL stores timestamp with time zone in UTC, but that is
 entirely irrelevant. What is relevant is that you can provide an
 instant/point in time in whatever time-zone representation you want and
 get it back the same way. Want to use a Unix epoch in your code? Go ahead:
 extract(epoch from yourtstzcol)
 abstime(yourepochint)

 Want to assume everything is UTC? No problem:
 set timezone to 'UTC';

 Then you can reinvent wheels to your heart's content without wrecking the
 ability to easily use other tools.

 By the way, use full timezone names to avoid ambiguity. I don't know what
 Ruby cooked up but PostgreSQL uses industry-standard names:
 select * from pg_timezone_names;

 Your original question had to do with month/year. You will have to define
 this for your use-case but beware that it won't necessarily get you away
 from time-zone issues as the month ticks over on a zone-by-zone basis.

 Also note that time-intervals can be a source of interesting side-effects.
 Operator precedence is important. For example, what is one month? 28-days?
 29? 30? 31? Every system must make a judgment call. Add a month to January
 31 and you will get February 28. But add/subtract a month from February 28
 and you get January 28/March 28. So you can create a query that takes a
 date, adds a month and subtracts a month and results in a different date.
 There is nothing to do here but to read the docs 
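
The add-then-subtract asymmetry described above is easy to see in psql:

```sql
-- One month forward clamps Jan 31 to the end of February...
SELECT DATE '2015-01-31' + INTERVAL '1 month';
-- 2015-02-28 00:00:00

-- ...and one month back from there lands on the 28th, not the 31st:
SELECT DATE '2015-01-31' + INTERVAL '1 month' - INTERVAL '1 month';
-- 2015-01-28 00:00:00
```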

Re: [GENERAL] Allowing postgresql to accept 0xff syntax for data types that it makes sense for?

2015-05-21 Thread Jan de Visser
On May 21, 2015 06:04:37 PM Brown, Joseph E. 
wrote:
 Unsubscribe pgsql-general

This doesn't work. See the footer of the posts to the 
mailing list.




[GENERAL] Unit tests and foreign key constraints

2015-05-21 Thread Andy Chambers
Hey All,

I've started trying to use foreign key constraints in my schema but it
seems to make it more difficult to write unit tests that touch the database
because each test now requires more setup data to satisfy the foreign key
constraint. (I know some say your unit tests shouldn't touch the DB but the
more full stack tests I have, the better I sleep at night :-))

I wondered if anyone else has run into this problem and found a good
strategy to mitigate it. I thought I might be able to make these
constraints deferred during a test run since I have automatic rollback
after each test but even after set constraints all deferred, I still got
a foreign key violation during my test run if the test tries to insert data
with a non-existent foreign key.
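
(One likely cause, sketched here with hypothetical table and constraint
names: SET CONSTRAINTS only affects constraints that were declared
DEFERRABLE, and foreign keys are NOT DEFERRABLE by default, so they must
be recreated as deferrable first.)

```sql
-- Foreign keys are NOT DEFERRABLE by default; SET CONSTRAINTS ALL
-- DEFERRED silently skips them unless they are recreated like this:
ALTER TABLE orders
    DROP CONSTRAINT orders_customer_id_fkey,
    ADD CONSTRAINT orders_customer_id_fkey
        FOREIGN KEY (customer_id) REFERENCES customers (id)
        DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS ALL DEFERRED;
-- This no longer fails immediately; the check is postponed to COMMIT,
-- and a ROLLBACK at the end of the test means it never fires at all.
INSERT INTO orders (customer_id) VALUES (42);
ROLLBACK;
```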

Cheers,
Andy


[GENERAL] Replicate over pgbouncer?

2015-05-21 Thread Andomar

Hi,

Today I installed pgbouncer.  I added a second installation as a hot 
standby.  Before starting the standby, I configured recovery.conf to 
connect to pgbouncer.


This results in an error message:

Pooler Error: Unsupported startup parameter: replication

Is it possible to replicate over a connection through pgbouncer?

Kind regards,
Andomar


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [SQL] extracting PII data and transforming it across table.

2015-05-21 Thread Suresh Raja
On Thu, May 21, 2015 at 11:33 AM, Steve Midgley scie...@misuse.org wrote:

 I would update the IDs using SQL before dumping if possible. If necessary
 clone the tables, adjust the IDs and then dump. SQL has better tools than
 most languages/scripts to adjust values in columns across multiple
 entities. Plus it should be easier to build some test queries in SQL to
 validate that your ID adjustments were valid.

 On Thu, May 21, 2015 at 9:27 AM, Suresh Raja suresh.raja...@gmail.com
 wrote:

 Hi Folks:

 I'm looking at directions or help in extracting data from production and
 alter employee id information while extracting.  But at the same time
 maintain referential integrity across tables. Is it possible to dump data
 to flat file and then run some script to change emp id data on all files.
 I'm looking for a easy solution.

 Thanks,
 -Suresh Raja


 Steve:
I too would like to update the IDs before dumping. Can I write a SQL query to
union all the tables and at the same time create a unique key valid across
tables?

Thanks for the prompt reply.


Re: [GENERAL] Replicate over pgbouncer?

2015-05-21 Thread David G. Johnston
On Thursday, May 21, 2015, Andomar ando...@aule.net wrote:

 Hi,

 Today I installed pgbouncer.  I added a second installation as a hot
 standby.  Before starting the standby, I configured recovery.conf to
 connect to pgbouncer.

 This results in an error message:

 Pooler Error: Unsupported startup parameter: replication

 Is it possible to replicate over a connection through pgbouncer?


I'm doubtful.  Why do you think you need such a capability?

David J.


[GENERAL] date with month and year

2015-05-21 Thread Daniel Torres
Hi everybody, I'm new in the PostgreSQL world, and have an easy question: is
it possible to have date type data that only contains month and year? How
can I obtain that from a timestamp (without time zone) column?

I've made this, but I think the result is a text, not a date

select extract (Year from '2001-05-01 20:21:00'::TIMESTAMP WITHOUT TIME
ZONE)||'-'|| extract(Month from '2001-05-01 20:21:00'::TIMESTAMP WITHOUT
TIME ZONE);



Any help is welcome, thanks

Daniel


Re: [GENERAL] date with month and year

2015-05-21 Thread John McKown
On Thu, May 21, 2015 at 12:01 PM, Daniel Torres nobeea...@gmail.com wrote:

 Hi everybody, I'm new in the PostgreSQL world, and have an easy question:
 Is it possible to have date type data that only contain month and year?,
 how can I obtain that from a timestamp (without time zone) column?

 I've made this, but I think the result is a text, not a date

 select extract (Year from '2001-05-01 20:21:00'::TIMESTAMP WITHOUT TIME
 ZONE)||'-'|| extract(Month from '2001-05-01 20:21:00'::TIMESTAMP WITHOUT
 TIME ZONE);



 Any help is welcome, thanks

 Daniel


I don't think so, mainly because a date, at least in PostgreSQL, is by
definition a month, day, and year. You could just arbitrarily set the day
to 01, because every month starts with day 1, I guess. Perhaps if you
said what you want to do with this type of date field? Of course, if you
really wanted to, you could create your own data type and conversions. But
that still wouldn't be a date, exactly.


-- 
My sister opened a computer store in Hawaii. She sells C shells down by the
seashore.

If someone tell you that nothing is impossible:
Ask him to dribble a football.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! 
John McKown


[GENERAL] extracting PII data and transforming it across table.

2015-05-21 Thread Suresh Raja
Hi Folks:

I'm looking at directions or help in extracting data from production and
alter employee id information while extracting.  But at the same time
maintain referential integrity across tables. Is it possible to dump data
to flat file and then run some script to change emp id data on all files.
I'm looking for a easy solution.

Thanks,
-Suresh Raja


Re: [GENERAL] [SQL] extracting PII data and transforming it across table.

2015-05-21 Thread Steve Midgley
I would update the IDs using SQL before dumping if possible. If necessary
clone the tables, adjust the IDs and then dump. SQL has better tools than
most languages/scripts to adjust values in columns across multiple
entities. Plus it should be easier to build some test queries in SQL to
validate that your ID adjustments were valid.
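
A minimal sketch of that approach, with hypothetical table and column names:
build a mapping of old to new IDs once, then rewrite every referencing table
from the same mapping so referential integrity is preserved.

```sql
-- One mapping table drives all the rewrites, so every table ends up
-- with the same substitution for a given employee ID.
CREATE TEMP TABLE emp_id_map AS
SELECT emp_id AS old_id,
       row_number() OVER (ORDER BY random()) AS new_id
FROM (SELECT DISTINCT emp_id FROM employees) AS ids;

UPDATE employees  e SET emp_id = m.new_id
FROM emp_id_map m WHERE e.emp_id = m.old_id;

UPDATE timesheets t SET emp_id = m.new_id
FROM emp_id_map m WHERE t.emp_id = m.old_id;
-- ...repeat for each referencing table (with FK checks deferred or
-- dropped for the duration), then dump the cleaned copies.
```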

On Thu, May 21, 2015 at 9:27 AM, Suresh Raja suresh.raja...@gmail.com
wrote:

 Hi Folks:

 I'm looking at directions or help in extracting data from production and
 alter employee id information while extracting.  But at the same time
 maintain referential integrity across tables. Is it possible to dump data
 to flat file and then run some script to change emp id data on all files.
 I'm looking for a easy solution.

 Thanks,
 -Suresh Raja



Re: [GENERAL] date with month and year

2015-05-21 Thread David G. Johnston
On Thursday, May 21, 2015, Daniel Torres nobeea...@gmail.com wrote:

 Hi everybody, I'm new in the PostgreSQL world, and have an easy question:
 Is it possible to have date type data that only contain month and year?,
 how can I obtain that from a timestamp (without time zone) column?

 I've made this, but I think the result is a text, not a date

 select extract (Year from '2001-05-01 20:21:00'::TIMESTAMP WITHOUT TIME
 ZONE)||'-'|| extract(Month from '2001-05-01 20:21:00'::TIMESTAMP WITHOUT
 TIME ZONE);


 You have to settle for the first of the month if you want a date type;
date_trunc('month', ...) will give you that.

I do end up having a lookup table that assigns sequential integers to
sequential year-months to make calculations easier without having to carry
around a date type for that sole purpose. For presentation I want text, not a
date.

User defined functions are nice here - I have a todo to publish my set to
PGXN...maybe someone else already has?

David J.


Re: [GENERAL] date with month and year

2015-05-21 Thread Paul Jungwirth

Is it possible to have date type data that only contain month and year?,
how can I obtain that from a timestamp (without time zone) column?


I think you want date_trunc, which will cut everything down to the first 
of the month, e.g. 2015-01-01, 2015-02-01, etc. The results will still be 
dates, so you can still use date functions to manipulate them.
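
For example (climate_data and its columns are hypothetical, matching the
climate use case described earlier in the thread):

```sql
-- Truncation keeps a real timestamp/date, so date arithmetic still works:
SELECT date_trunc('month', TIMESTAMP '2001-05-01 20:21:00');
-- 2001-05-01 00:00:00

-- Typical monthly aggregation:
SELECT date_trunc('month', observed_at)::date AS month,
       avg(temperature)   AS mean_temp,
       sum(precipitation) AS total_precip
FROM climate_data
GROUP BY 1
ORDER BY 1;
```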


Paul






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date with month and year

2015-05-21 Thread Joshua D. Drake

On 05/21/2015 10:01 AM, Daniel Torres wrote:

Hi everybody, I'm new in the PostgreSQL world, and have an easy question:
Is it possible to have date type data that only contain month and year?,
how can I obtain that from a timestamp (without time zone) column?

I've made this, but I think the result is a text, not a date

select extract (Year from '2001-05-01 20:21:00'::TIMESTAMP WITHOUT TIME
ZONE)||'-'|| extract(Month from '2001-05-01 20:21:00'::TIMESTAMP WITHOUT
TIME ZONE);


date_part will get you what you want, as will to_char. You could cast the 
above if you needed to. You really shouldn't use WITHOUT TIME ZONE.
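
For instance:

```sql
-- to_char yields text, which is often what you want for display:
SELECT to_char(TIMESTAMP '2001-05-01 20:21:00', 'YYYY-MM');
-- '2001-05'

-- date_part / extract yield numbers:
SELECT date_part('year',  TIMESTAMP '2001-05-01 20:21:00');  -- 2001
SELECT date_part('month', TIMESTAMP '2001-05-01 20:21:00');  -- 5
```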


JD


--
The most kicking donkey PostgreSQL Infrastructure company in existence.
The oldest, the most experienced, the consulting company to the stars.
Command Prompt, Inc. http://www.commandprompt.com/ +1 -503-667-4564 -
24x7 - 365 - Proactive and Managed Professional Services!


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general