[GENERAL] numeric data type

2015-09-22 Thread Juan Pablo L .
Hi, I'm writing a C module (extension). The procedure has a parameter that is
of type numeric, but inside the function I cannot read the parameter, or so it
seems. This is what I do:

float8 db_balance,in_chgval;

in_chgval = PG_GETARG_FLOAT8(2);

elog(INFO,"in_chgval = %0.2f",in_chgval);

The above elog always shows zero. Moreover, I'm also trying to read a numeric
from the database and subtract it from the in_chgval parameter, and the
operation always gives 0, so apparently I cannot read a numeric from the
database:

db_balance =
DatumGetFloat8(SPI_getbinval(SPI_tuptable->vals[0],SPI_tuptable->tupdesc,1,&_null));

elog(INFO,"db_balance(%f) - in_chgval(%f) = 
%f",db_balance,in_chgval,(db_balance - in_chgval));

It shows all 0. Please, any advice would be really appreciated. Thank you!

  

Re: [GENERAL] numeric data type

2015-09-22 Thread Tom Lane
"Juan Pablo L."  writes:
> Hi, I'm writing a C module (extension). The procedure has a parameter that
> is of type numeric, but inside the function I cannot read the parameter, or
> so it seems. This is what I do:

> float8 db_balance,in_chgval;

> in_chgval = PG_GETARG_FLOAT8(2);

> elog(INFO,"in_chgval = %0.2f",in_chgval);

If the C code is written that way, the function has to be declared to take
type float8 (a/k/a double precision), not numeric.

The parser will insert a conversion from numeric to float8 automatically,
so the function will still work with a numeric data column.  You might
lose some precision in the conversion though.
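
Concretely, the declaration would have to look something like this (a
sketch: the parameter names are taken from later in this thread, while the
return type and library path are only placeholders):

CREATE FUNCTION wtt_discount_account(
    in_phonenumber varchar(20),
    in_balanceid   integer,
    in_chgval      float8   -- double precision, matching PG_GETARG_FLOAT8(2)
) RETURNS float8
AS '/path/to/your/module', 'wtt_discount_account'
LANGUAGE C STRICT;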

regards, tom lane




Re: [GENERAL] Queuing query

2015-09-22 Thread Jeff Janes
On Mon, Sep 21, 2015 at 3:51 PM, Steve Crawford <
scrawf...@pinpointresearch.com> wrote:

> While awaiting the awesomeness of the upcoming "skip locked" feature in
> 9.5 I need to handle a work queue.
>
> Does anyone see any glaring issues or subtle nuances with the basic method
> below, which combines CTEs with queue-handling methods posted by depesz, on
> the PG wiki, and elsewhere?
>
> Note that it appears that there is the slight potential for a
> race-condition which would cause one worker to occasionally fail to get a
> record but the application code handles that issue fine.
>
> The work is sent to an externally hosted API which will ultimately reply
> to a callback API at our end so obviously there's a lot of other stuff in
> the system to update final results, recover from lost work, add to the
> queue, etc. I'm just asking about the sanity of the queue processing query
> itself:
>
> with next_up as (
>select
> the_id
> from
> queuetest
>where
> not sent_for_processing
> and pg_try_advisory_xact_lock(12345, the_id)
>order by
> the_priority
> limit 1 for update)
> update
> queuetest
> set
> sent_for_processing = true
> where
> the_id = (select the_id from next_up)
> returning
> the_work_to_do;
>

This will only be sane if the inner query can use an index to do the "order
by".  Otherwise it is going to read every row in order to sort them, and
get the advisory lock on every row, and you will run out of shared memory.
Of course, if it were sorting each time it would probably be too slow
anyway.

And it has to be a partial index:

 (the_priority) where not sent_for_processing

Because if you just have an index on the_priority, the sub select will
start getting inefficient once all the lowest numbered priority items are
marked as sent.

Also, you probably want to make sent_for_processing be some kind of token
or time stamp, to make it easier to detect lost work.  In which case NULL
would mean not yet sent, so the partial index would be "where
sent_for_processing is null".
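
Concretely, something like one of these (a sketch against the names in the
quoted query):

-- with the boolean flag as posted:
CREATE INDEX queuetest_pending_idx ON queuetest (the_priority)
    WHERE NOT sent_for_processing;

-- or, if sent_for_processing becomes a nullable "sent at" timestamp:
CREATE INDEX queuetest_pending_idx ON queuetest (the_priority)
    WHERE sent_for_processing IS NULL;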

Cheers,

Jeff


Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Adrian Klaver

On 09/22/2015 08:16 AM, Michael Zoet wrote:


- Message from Adrian Klaver -
   Date: Tue, 22 Sep 2015 07:46:24 -0700


Zone names: Time zone names ('z') cannot be parsed.


Some more digging found that DateTimeFormat can deal with Z  which is
either the offset or the timezone id, in particular as ZZZ.

http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html


Zone: 'Z' outputs offset without a colon, 'ZZ' outputs the offset with
a colon, 'ZZZ' or more outputs the zone id.


That's why I am asking "how to get the numerical offset printed in the
log files". Logstash can parse the numerical value; otherwise I will
always have a parsing error in Logstash. We could live with this, but if
it is possible I'd like to change this at the Postgres level. I never
thought that this would be so much more complicated than expected.


From the above link:

 Z   time zone offset/id   zone   -0800; -08:00; America/Los_Angeles


So DateTimeFormat does understand names, though not necessarily 
abbreviations, which is what z is for. The Logstash match is supposed to 
understand what DateTimeFormat parses.






The timezone names in Postgres are available from:

select * from pg_timezone_names ;

So in addition to Tom's suggestion, you might try setting the
log_timezone to a name. Examples: Europe/Brussels for CEST, Etc/UTC
for UTC



As far as I understand the log_timezone configuration option, it will
always print the name if I use a name for the time zone, and that is
a no-go for Logstash. So I really need a numerical value to parse
with Logstash.

Michael






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Queuing query

2015-09-22 Thread Steve Crawford
Thanks, Jeff. Like I said, this is the way-stripped-down version of the
core query with things like "the_priority" and "the_work" standing for more
complicated expressions. Lots of other stuff is going on to make sure we
get a response, clear the queue, etc. and we will index appropriately.

I'm really looking for anything like planner ordering nuances that would
make the query operate in unexpected ways.

Cheers,
Steve



On Tue, Sep 22, 2015 at 9:26 AM, Jeff Janes  wrote:

> On Mon, Sep 21, 2015 at 3:51 PM, Steve Crawford <
> scrawf...@pinpointresearch.com> wrote:
>
>> While awaiting the awesomeness of the upcoming "skip locked" feature in
>> 9.5 I need to handle a work queue.
>>
>> Does anyone see any glaring issues or subtle nuances with the basic
>> method below, which combines CTEs with queue-handling methods posted by
>> depesz, on the PG wiki, and elsewhere?
>>
>> Note that it appears that there is the slight potential for a
>> race-condition which would cause one worker to occasionally fail to get a
>> record but the application code handles that issue fine.
>>
>> The work is sent to an externally hosted API which will ultimately reply
>> to a callback API at our end so obviously there's a lot of other stuff in
>> the system to update final results, recover from lost work, add to the
>> queue, etc. I'm just asking about the sanity of the queue processing query
>> itself:
>>
>> with next_up as (
>>select
>> the_id
>> from
>> queuetest
>>where
>> not sent_for_processing
>> and pg_try_advisory_xact_lock(12345, the_id)
>>order by
>> the_priority
>> limit 1 for update)
>> update
>> queuetest
>> set
>> sent_for_processing = true
>> where
>> the_id = (select the_id from next_up)
>> returning
>> the_work_to_do;
>>
>
> This will only be sane if the inner query can use an index to do the
> "order by".  Otherwise it is going to read every row in order to sort them,
> and get the advisory lock on every row, and you will run out of shared
> memory.  Of course, if it were sorting each time it would probably be too
> slow anyway.
>
> And it has to be a partial index:
>
>  (the_priority) where not sent_for_processing
>
> Because if you just have an index on the_priority, the sub select will
> start getting inefficient once all the lowest numbered priority items are
> marked as sent.
>
> Also, you probably want to make sent_for_processing be some kind of token
> or time stamp, to make it easier to detect lost work.  In which case NULL
> would mean not yet sent, so the partial index would be "where
> sent_for_processing is null".
>
> Cheers,
>
> Jeff
>


[GENERAL] Advise on memory usage limitation by PostgreSQL on Windows

2015-09-22 Thread Venkata Balaji N
Hello,

We would like to know if there is any limitation around memory utilization
by PostgreSQL on Windows systems.

As of now, we do not have the details about the exact Windows version in
use.

The Windows server has 5 GB of RAM and 4 CPUs (4 cores). The PostgreSQL
version is 9.1.x, which will possibly be upgraded to 9.4.x.

After a detailed analysis of memory usage by the OS and other processes, is
it safe to advise configuring shared_buffers to 2 GB?

Any advice will be appreciated.

Regards,
Venkata Balaji N

Fujitsu Australia


[GENERAL] The curious case of two inserts, a shrinking xmax, and a ShareLock on transaction

2015-09-22 Thread Jeff Dik
Hi,

I've been trying to understand this curious case of a shrinking xmax.

Suppose we have two tables: foo and bar.

CREATE TABLE foo (
foo_id text PRIMARY KEY NOT NULL
);

CREATE TABLE bar (
bar_id text NOT NULL,
foo_id text NOT NULL REFERENCES foo (foo_id) ON DELETE CASCADE
);

... and we have a foo_id1:

[console] sandbox=# insert into foo (foo_id) values ('foo_id1');
INSERT 0 1
[console] sandbox=# select *, xmin, xmax from foo;
 foo_id  | xmin | xmax
-+--+--
 foo_id1 |  694 |0
(1 row)

Now we start transaction A:

[A] sandbox=# begin;
BEGIN
[A] sandbox=# insert into bar (bar_id, foo_id) values ('bar_id1',
'foo_id1');
INSERT 0 1
[A] sandbox=# select *, xmin, xmax from bar;
 bar_id  | foo_id  | xmin | xmax
-+-+--+--
 bar_id1 | foo_id1 |  695 |0
(1 row)

[A] sandbox=# select *, xmin, xmax from foo;
 foo_id  | xmin | xmax
-+--+--
 foo_id1 |  694 |  695
(1 row)

Ok, foo_id1's xmax is 695, which locks the row as explained by this
excellent blog post: http://rhaas.blogspot.com/2011/10/deadlocks.html

Now let's start transaction B:

[B] sandbox=# begin;
BEGIN
[B] sandbox=# insert into bar (bar_id, foo_id) values ('bar_id2',
'foo_id1');
INSERT 0 1
[B] sandbox=# select *, xmin, xmax from bar;
 bar_id  | foo_id  | xmin | xmax
-+-+--+--
 bar_id2 | foo_id1 |  696 |0
(1 row)

[B] sandbox=# select *, xmin, xmax from foo;
 foo_id  | xmin | xmax
-+--+--
 foo_id1 |  694 |1
(1 row)

Wait, what?  foo_id1's xmax is 1?  What does that even mean?

If I do a SELECT FOR UPDATE on foo_id1 in transaction A, it hangs waiting
for transaction B to finish:

[A] sandbox=# select * from foo where foo_id = 'foo_id1' for update;

I can see transaction A (transactionid 695, virtualtransaction 3/41)
takes a ShareLock on transaction B (transactionid 696).  How does it
know to do that since foo_id1's xmax is 1?

[console] sandbox=# select locktype, relation::regclass, page, tuple,
virtualxid, transactionid, virtualtransaction, pid, mode, granted, fastpath
from pg_locks where virtualtransaction = '3/41' or virtualtransaction =
'4/15';
    locktype   | relation | page | tuple | virtualxid | transactionid | virtualtransaction | pid |        mode         | granted | fastpath
---------------+----------+------+-------+------------+---------------+--------------------+-----+---------------------+---------+----------
 relation      | foo_pkey |      |       |            |               | 4/15               | 171 | AccessShareLock     | t       | t
 relation      | foo      |      |       |            |               | 4/15               | 171 | AccessShareLock     | t       | t
 relation      | foo      |      |       |            |               | 4/15               | 171 | RowShareLock        | t       | t
 relation      | bar      |      |       |            |               | 4/15               | 171 | AccessShareLock     | t       | t
 relation      | bar      |      |       |            |               | 4/15               | 171 | RowExclusiveLock    | t       | t
 virtualxid    |          |      |       | 4/15       |               | 4/15               | 171 | ExclusiveLock       | t       | t
 relation      | foo_pkey |      |       |            |               | 3/41               | 165 | AccessShareLock     | t       | t
 relation      | foo      |      |       |            |               | 3/41               | 165 | AccessShareLock     | t       | t
 relation      | foo      |      |       |            |               | 3/41               | 165 | RowShareLock        | t       | t
 relation      | bar      |      |       |            |               | 3/41               | 165 | AccessShareLock     | t       | t
 relation      | bar      |      |       |            |               | 3/41               | 165 | RowExclusiveLock    | t       | t
 virtualxid    |          |      |       | 3/41       |               | 3/41               | 165 | ExclusiveLock       | t       | t
 transactionid |          |      |       |            |           696 | 3/41               | 165 | ShareLock           | f       | f
 transactionid |          |      |       |            |           695 | 3/41               | 165 | ExclusiveLock       | t       | f
 transactionid |          |      |       |            |           696 | 4/15               | 171 | ExclusiveLock       | t       | f
 tuple         | foo      |    0 |     2 |            |               | 3/41               | 165 | AccessExclusiveLock | t       | f

At this point, if I commit or rollback transaction B, transaction A
can continue.  If I do a SELECT FOR UPDATE on foo_id1 for transaction
B, I'll create a deadlock and the deadlock detector will kill one of
the transactions.

I'd really love to learn:

1. Why the xmax for foo_id1 goes from 696 to 1 and what does that
   mean?
2. How does transaction A know it needs to take a ShareLock on
   transaction B?
3. What is a virtualtransaction and what do its numerator and denominator
   mean?

Re: [GENERAL] The curious case of two inserts, a shrinking xmax, and a ShareLock on transaction

2015-09-22 Thread Alvaro Herrera
Jeff Dik wrote:

> I'd really love to learn:
> 
> 1. Why the xmax for foo_id1 goes from 696 to 1 and what does that
>mean?

When two transactions want to lock the same row, the xmax field is a
multixact, no longer a bare transaction ID.  This is an object that
resolves to multiple transaction IDs.

> 2. How does transaction A know it needs to take a ShareLock on
>transaction B?

Because it reads the two transaction ID values from pg_multixact.
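
(The xmax of 1 you saw is the multixact's own ID; multixact IDs are counted
separately from transaction IDs and start at 1 in a fresh cluster.)  If you
want to see this directly, here is a sketch using the contrib pageinspect
extension:

CREATE EXTENSION pageinspect;

-- t_infomask bit 0x1000 is HEAP_XMAX_IS_MULTI: when it is set, t_xmax
-- holds a multixact ID (here: 1) rather than a plain transaction ID
SELECT t_ctid, t_xmax,
       (t_infomask & 4096) <> 0 AS xmax_is_multixact
FROM heap_page_items(get_raw_page('foo', 0));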

> 3. What is a virtualtransaction and what do its numerator and denominator
> mean?

It's not a division operation (so no numerator/denominator).  The part
before the / is a backend ID and the part after the / is a local
transaction counter.  It's just an identifier for the transaction,
useful for the time before the transaction acquires a transaction ID.
This optimization means a transaction that doesn't modify tuples never
needs to acquire a transaction ID (and thus keeps the transaction ID
consumption rate low).

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] numeric data type

2015-09-22 Thread Alvaro Herrera
Juan Pablo L. wrote:
> thank you for your answer, the function is declared as:
> 
> FUNCTION wtt_discount_account(IN in_phonenumber varchar(20),IN in_balanceid 
> integer,IN in_chgval numeric(10,2))
> 
> I chose numeric because it is supposed to be better for numbers/money
> operations, supposed to be exact. I would not want to lose precision
> because that is money. Is there any other way which does not involve
> losing precision? Thanks!!

What you need is to use PG_GETARG_NUMERIC, then, and use a Numeric *
variable rather than float8.
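
A minimal sketch of what that looks like (the function name and argument
position are from this thread; the "100 - in_chgval" arithmetic is made up
purely to illustrate numeric_sub, and error/NULL handling is omitted):

#include "postgres.h"
#include "fmgr.h"
#include "utils/builtins.h"
#include "utils/numeric.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(wtt_discount_account);

Datum
wtt_discount_account(PG_FUNCTION_ARGS)
{
    Numeric in_chgval = PG_GETARG_NUMERIC(2);   /* third SQL argument */
    Numeric hundred;
    Numeric result;

    /* there is no printf format for Numeric; convert via numeric_out */
    elog(INFO, "in_chgval = %s",
         DatumGetCString(DirectFunctionCall1(numeric_out,
                                             NumericGetDatum(in_chgval))));

    /* no C-level arithmetic on Numeric either; call the built-ins */
    hundred = DatumGetNumeric(DirectFunctionCall1(int4_numeric,
                                                  Int32GetDatum(100)));
    result  = DatumGetNumeric(DirectFunctionCall2(numeric_sub,
                                                  NumericGetDatum(hundred),
                                                  NumericGetDatum(in_chgval)));
    PG_RETURN_NUMERIC(result);
}

Reading a numeric out of a SPI result works the same way: use
DatumGetNumeric() on the SPI_getbinval() result instead of DatumGetFloat8().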

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] numeric data type

2015-09-22 Thread Alvaro Herrera
Juan Pablo L. wrote:
> Hi Alvaro, thank you for your answer. PG_GETARG_NUMERIC does not exist...
> I can't find it in the source code, and when running I get:
> undefined symbol: PG_GETARG_NUMERIC.

#include "utils/numeric.h"


-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] numeric data type

2015-09-22 Thread Juan Pablo L .
Hi Alvaro, thank you for your answer. PG_GETARG_NUMERIC does not exist...
I can't find it in the source code, and when running I get:
undefined symbol: PG_GETARG_NUMERIC.

> Date: Tue, 22 Sep 2015 18:11:26 -0300
> From: alvhe...@2ndquadrant.com
> To: jpablolorenze...@hotmail.com
> CC: t...@sss.pgh.pa.us; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] numeric data type
> 
> Juan Pablo L. wrote:
> > thank you for your answer, the function is declared as:
> > 
> > FUNCTION wtt_discount_account(IN in_phonenumber varchar(20),IN in_balanceid 
> > integer,IN in_chgval numeric(10,2))
> > 
> > I chose numeric because it is supposed to be better for numbers/money
> > operations, supposed to be exact. I would not want to lose precision
> > because that is money. Is there any other way which does not involve
> > losing precision? Thanks!!
> 
> What you need is to use PG_GETARG_NUMERIC, then, and use a Numeric *
> variable rather than float8.
> 
> -- 
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
  

Re: [GENERAL] numeric data type

2015-09-22 Thread Juan Pablo L .
thank you for your answer, the function is declared as:

FUNCTION wtt_discount_account(IN in_phonenumber varchar(20),IN in_balanceid 
integer,IN in_chgval numeric(10,2))

I chose numeric because it is supposed to be better for numbers/money
operations, supposed to be exact. I would not want to lose precision because
that is money. Is there any other way which does not involve losing
precision? Thanks!!

> From: t...@sss.pgh.pa.us
> To: jpablolorenze...@hotmail.com
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] numeric data type
> Date: Tue, 22 Sep 2015 16:07:36 -0400
> 
> "Juan Pablo L."  writes:
> > Hi, I'm writing a C module (extension). The procedure has a parameter
> > that is of type numeric, but inside the function I cannot read the
> > parameter, or so it seems. This is what I do:
> 
> > float8 db_balance,in_chgval;
> 
> > in_chgval = PG_GETARG_FLOAT8(2);
> 
> > elog(INFO,"in_chgval = %0.2f",in_chgval);
> 
> If the C code is written that way, the function has to be declared to take
> type float8 (a/k/a double precision), not numeric.
> 
> The parser will insert a conversion from numeric to float8 automatically,
> so the function will still work with a numeric data column.  You might
> lose some precision in the conversion though.
> 
>   regards, tom lane
  

Re: [GENERAL] Inject rows into query resultes

2015-09-22 Thread David G. Johnston
On Tue, Sep 22, 2015 at 6:22 PM, Alex Magnum  wrote:

> Hi,
> I am having a problem where I need to inject rows into a query result.
> That is:
>
> I have a list of ad records, let's say car listings, which I select from a
> table. The order of the results is given either by the user or by a
> default/initial ordering.
>
> I now want to inject featured car ads after every 5th record (coming
> from the same table).
>
> Right now I am doing this in PHP, just calling the query twice, but I
> am wondering if there is a way to run only one query.
>

I have no clue how you expect to "run one query" AND "so the original
listing should not be affected".


> The main problem is that
> a) I am using paging, so the original listing should not be affected
>
> b) the injected records should not be in the results returned for the
> current page.
>

I have no clue what these mean, since if you don't intend to show them, why
retrieve the featured car ads in the first place?

>
> Anyone have an idea how to solve that?
>
>
Solving the ordering problem has a simplistic brute-force algorithm (you
may need to fix off-by-one errors in the logic - but the idea holds):

SELECT *, (((row_number - 1) / 5) * 10) + ((row_number - 1) % 5) AS
"rows_0_to_4_in_the_current_decade"
UNION ALL
SELECT *, ((row_number - 1) * 10) + 5 AS "row_5_in_the_current_decade"

Now you have: [0-5] [10-15] [20-25], etc. as your "row index", where
every "5" record is your injected row.

Hopefully this helps - you have provided insufficient and/or contradictory
information regarding your actual problem.

David J.


[GENERAL] Inject rows into query resultes

2015-09-22 Thread Alex Magnum
Hi,
I am having a problem where I need to inject rows into a query result. That
is:

I have a list of ad records, let's say car listings, which I select from a
table. The order of the results is given either by the user or by a
default/initial ordering.

I now want to inject featured car ads after every 5th record (coming from
the same table).

Right now I am doing this in PHP, just calling the query twice, but I am
wondering if there is a way to run only one query.

The main problem is that
a) I am using paging, so the original listing should not be affected
b) the injected records should not be in the results returned for the
current page.

Anyone have an idea how to solve that?

Thanks
Alex


[GENERAL] Re: pg_dump - postgre 9.2 and "server closed the connection unexpectedly"

2015-09-22 Thread Maycon Oliveira
Thanks for the answer.

pg_dump is running on the same machine as the database. It is a VPS running
CentOS.

The script is this one:

https://wiki.postgresql.org/wiki/Automated_Backup_on_Linux

I'm using the custom backups.









Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Adrian Klaver

On 09/22/2015 06:31 AM, Michael Zoet wrote:

Hi Charles,

thanks for the quick response and it looked promising but did not work
as expected.

I can set the datestyle to ISO at the database level but this does not seem
to affect the way the CSV logs are written. I still get 2015-09-22
13:06:01.658 UTC (or CEST and so on) in the log files. And as I see it, this
is not only in the CSV logs, but also in the non-CSV logs I have.

Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?


I don't know of a way, but it seems Logstash can be told how to do the 
right thing:


https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-locale

See match and:

http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html

z   time zone   text   Pacific Standard Time; PST




Regards,
Michael





Hi


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Michael Zoet
Sent: Dienstag, 22. September 2015 12:07
To: pgsql-general@postgresql.org
Subject: [GENERAL] how to show time zone with numerical offset in CSV
log?

Hello everybody,

I am saving PostgreSQL log file data (CSV logs) with Logstash and
Elasticsearch. My problem with this is that the time zone value is
written with the name of the time zone, like

  2015-09-22 12:02:59.836 CEST

which Logstash can not process.

What Logstash needs are date/time stamps like

2015-09-22 12:02:59.836 +0200.

How can I setup Postgres to log with a numerical offset in the CSV
logs and not with the name of the time zone?


Not exactly the same format, but this may help:

kofadmin@kofdb.localhost=> SET datestyle TO 'ISO';
SET
kofadmin@kofdb.localhost=> select now();
             now
----------------------------
 2015-09-22 12:53:38.123+02
(1 row)

If you want the change to be persistent you can use:

ALTER DATABASE database_name SET datestyle TO 'ISO';

And then reconnect to see the change.

Bye
Charles



Any hints and links to the corresponding documentation would be
appreciated.

Regards,
Michael













- End of message from Charles Clavadetscher -








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Michael Zoet

Hi Charles,

thanks for the quick response and it looked promising but did not work  
as expected.


I can set the datestyle to ISO at the database level but this does not
seem to affect the way the CSV logs are written. I still get
2015-09-22 13:06:01.658 UTC (or CEST and so on) in the log files. And
as I see it, this is not only in the CSV logs, but also in the non-CSV
logs I have.


Is there a way to convince Postgres to write the date/time with  
numerical time zone values to the log files?


Regards,
Michael





Hi


-Original Message-
From: pgsql-general-ow...@postgresql.org  
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Michael Zoet

Sent: Dienstag, 22. September 2015 12:07
To: pgsql-general@postgresql.org
Subject: [GENERAL] how to show time zone with numerical offset in CSV log?

Hello everybody,

I am saving PostgreSQL log file data (CSV logs) with Logstash and
Elasticsearch. My problem with this is that the time zone value is
written with the name of the time zone, like

  2015-09-22 12:02:59.836 CEST

which Logstash can not process.

What Logstash needs are date/time stamps like

2015-09-22 12:02:59.836 +0200.

How can I setup Postgres to log with a numerical offset in the CSV
logs and not with the name of the time zone?


Not exactly the same format, but this may help:

kofadmin@kofdb.localhost=> SET datestyle TO 'ISO';
SET
kofadmin@kofdb.localhost=> select now();
             now
----------------------------
 2015-09-22 12:53:38.123+02
(1 row)

If you want the change to be persistent you can use:

ALTER DATABASE database_name SET datestyle TO 'ISO';

And then reconnect to see the change.

Bye
Charles



Any hints and links to the corresponding documentation would be appreciated.

Regards,
Michael













- End of message from Charles Clavadetscher -








Re: [GENERAL] pgsql-95 repo in rsync

2015-09-22 Thread Devrim GÜNDÜZ

Hi,

On Mon, 2015-09-21 at 12:46 +0000, Kjetil Nygård wrote:
> 
> 1. rsync
> Would it be possible to have PostgreSQL 9.5 in the rsync-repo?

This is now done.

Regards,
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR







[GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Michael Zoet

Hello everybody,

I am saving PostgreSQL log file data (CSV logs) with Logstash and
Elasticsearch. My problem with this is that the time zone value is
written with the name of the time zone, like


 2015-09-22 12:02:59.836 CEST

which Logstash can not process.

What Logstash needs are date/time stamps like

2015-09-22 12:02:59.836 +0200.

How can I setup Postgres to log with a numerical offset in the CSV  
logs and not with the name of the time zone?


Any hints and links to the corresponding documentation would be appreciated.

Regards,
Michael








Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Charles Clavadetscher
Hi

> -Original Message-
> From: pgsql-general-ow...@postgresql.org 
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Michael Zoet
> Sent: Dienstag, 22. September 2015 12:07
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] how to show time zone with numerical offset in CSV log?
> 
> Hello everybody,
> 
> I am saving PostgreSQL log file data (CSV logs) with Logstash and
> Elasticsearch. My problem with this is that the time zone value is
> written with the name of the time zone, like
> 
>   2015-09-22 12:02:59.836 CEST
> 
> which Logstash can not process.
> 
> What Logstash needs are date/time stamps like
> 
> 2015-09-22 12:02:59.836 +0200.
> 
> How can I setup Postgres to log with a numerical offset in the CSV
> logs and not with the name of the time zone?

Not exactly the same format, but this may help:

kofadmin@kofdb.localhost=> SET datestyle TO 'ISO';
SET
kofadmin@kofdb.localhost=> select now();
             now
----------------------------
 2015-09-22 12:53:38.123+02
(1 row)

If you want the change to be persistent you can use:

ALTER DATABASE database_name SET datestyle TO 'ISO';

And then reconnect to see the change.

Bye
Charles

> 
> Any hints and links to the corresponding documentation would be appreciated.
> 
> Regards,
> Michael
> 
> 
> 
> 
> 
> 





Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Michael Zoet

Hi Tom,


Michael Zoet  writes:

Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?


Try something like

log_timezone = '<-0400>+4'


OK, this points me in a direction I haven't read anything about in the
Postgres documentation so far.


Can you explain exactly what '<-0400>+4' means? And why the string
'<+0200>-2' prints the date & time with the correct time and +0200 for
my time zone CEST?


And how can this automatically be changed if Germany switches from  
summer time (CEST with +0200) to winter time (CET +0100)?




See the discussion of POSIX timezone names here:
http://www.postgresql.org/docs/9.4/static/datatype-datetime.html#DATATYPE-TIMEZONES



I have read this several times today but really made no connection to
my problem of setting this for the date/time output in the log files.
But I am beginning to understand ;-).


THX
Michael








[GENERAL] pg_dump - postgre 9.2 and "server closed the connection unexpectedly"

2015-09-22 Thread Maycon Oliveira
Hi there.

I've been using Postgres for 2 years now and I started to get this
*INTERMITTENT* error from pg_dump:


pg_dump: Dumping the contents of table "adm_log" failed: PQgetCopyData()
failed.
pg_dump: Error message from server: server closed the connection
unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
pg_dump: The command was: COPY xcp01.adm_log (seq_log, dth_evento,
des_mensagem, des_usuario, des_objeto) TO stdout;   



Looking at the Postgres logs I see this error:


LOG:  could not send data to client: Connection timed out
STATEMENT:  COPY xcp01.adm_log (seq_log, dth_evento, des_mensagem,
des_usuario, des_objeto) TO stdout;


Information:

1 - The database is only 18 MB.
2 - The adm_log table has a text field with relatively large content (Java
stack traces).
3 - The database has 12 schemas, generated by a script that copies the
structure from one and creates another.
4 - I never ran VACUUM, ANALYZE or REINDEX commands.
5 - The database is using the default configuration (autovacuum on).


Does anybody have a hint? I'm afraid this could lead to a bigger problem.
 







Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Adrian Klaver

On 09/22/2015 07:04 AM, Michael Zoet wrote:




On 09/22/2015 06:31 AM, Michael Zoet wrote:

Hi Charles,

thanks for the quick response and it looked promising but did not work
as expected.

I can set the datestyle to ISO at the database level but this does not seem
to affect the way the CSV logs are written. I still get 2015-09-22
13:06:01.658 UTC (or CEST and so on) in the log files. And as I see it, this
is not only in the CSV logs, but also in the non-CSV logs I have.

Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?


I don't know of a way, but it seems Logstash can be told how to do the
right thing:



Yes and no. I asked this already for Logstash ;-):

https://discuss.elastic.co/t/logstash-timestamp-error-when-cest-is-at-the-end/27843






http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html


z   time zone   text   Pacific Standard Time; PST



A little further down it says:

Zone names: Time zone names ('z') cannot be parsed.


Some more digging found that DateTimeFormat can deal with Z  which is 
either the offset or the timezone id, in particular as ZZZ.


http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html

Zone: 'Z' outputs offset without a colon, 'ZZ' outputs the offset with a 
colon, 'ZZZ' or more outputs the zone id.


The timezone names in Postgres are available from:

select * from pg_timezone_names ;

So in addition to Tom's suggestion, you might try setting the 
log_timezone to a name. Examples: Europe/Brussels for CEST, Etc/UTC for UTC
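
For example, to check what a given name resolves to right now:

select name, abbrev, utc_offset, is_dst
from pg_timezone_names
where name in ('Europe/Brussels', 'Etc/UTC');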




This means Logstash (and the Joda Java time library it uses) cannot
parse the time zone if it is provided as a name. That's why I need/want
to change it at the Postgres level. Everything else is really
complicated to do in Logstash. I would need to provide a conversion of
the time zone name to the numerical value in my Logstash configuration.
But I still hope that this is easily fixable at the Postgres level.
Otherwise it gets unnecessarily complicated...

Michael







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Charles Clavadetscher
Hi

> thanks for the quick response and it looked promising but did not work
> as expected.
> 
> I can set the datestyle to ISO at the database level but this does not
> seem to affect the way the CSV logs are written. I still get
> 2015-09-22 13:06:01.658 UTC (or CEST and so on) in the log files. And
> as I see it, this is not only in the CSV logs, but also in the non-CSV
> logs I have.

I guess this means that the datestyle affects the way the client wants the
information displayed.

> Is there a way to convince Postgres to write the date/time with
> numerical time zone values to the log files?

Unfortunately I am a bit short of time right at the moment. I would suggest 
that you look into how the tools you are using generate the CSV.

Here what I could see using psql:

db=> create table testdate (timedate timestamptz);
CREATE TABLE
db=> \d testdate
 Table "public.testdate"
  Column  |   Type   | Modifiers
--+--+---
 timedate | timestamp with time zone |

db=> insert into testdate values(clock_timestamp());
INSERT 0 1
db=> insert into testdate values(clock_timestamp());
INSERT 0 1
db=> insert into testdate values(clock_timestamp());
INSERT 0 1

kofadmin@kofdb.localhost=> SET datestyle TO "GERMAN";
SET
db=> \copy testdate to testdate.csv csv header ;
COPY 3

Content of file:

timedate
22.09.2015 15:53:48.268 CEST
22.09.2015 15:53:49.612 CEST
22.09.2015 15:53:50.44 CEST

db=> SET datestyle TO "ISO";
SET
db=> \copy testdate to testdate.csv csv header ;
COPY 3

Content of file:

timedate
2015-09-22 15:53:48.268+02
2015-09-22 15:53:49.612+02
2015-09-22 15:53:50.44+02

So it is actually a matter of instructing the client to print the date in the 
format that you wish.

Hope this helps.
Bye
Charles






Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Michael Zoet




On 09/22/2015 06:31 AM, Michael Zoet wrote:

Hi Charles,

thanks for the quick response and it looked promising but did not work
as expected.

I can set the datestyle to ISO at the database level but this does not seem
to affect the way the CSV logs are written. I still get 2015-09-22
13:06:01.658 UTC (or CEST and so on) in the log files. And as I see it, this
is not only in the CSV logs, but also in the non-CSV logs I have.

Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?


I don't know of a way, but it seems Logstash can be told how to do  
the right thing:




Yes and no. I asked this already for Logstash ;-):

https://discuss.elastic.co/t/logstash-timestamp-error-when-cest-is-at-the-end/27843





http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html

z   time zone   text   Pacific Standard Time; PST



A little further down it says:

Zone names: Time zone names ('z') cannot be parsed.

This means Logstash (and the Joda Java time library it uses) cannot
parse the time zone if it is provided as a name. That's why I
need/want to change it at the Postgres level. Everything else is
really complicated to do in Logstash. I would need to provide a
conversion of the time zone name to the numerical value in my Logstash
configuration. But I still hope that this is easily fixable at the
Postgres level. Otherwise it gets unnecessarily complicated...


Michael






[GENERAL] could not load library liblwgeom-2.1.7.so

2015-09-22 Thread Willy-Bas Loos
Hi,

We use the apt.postgresql.org repository for production (Ubuntu) servers,
and something strange just happened while I was working on a new server.

An upgrade of postgresql-9.3-postgis-2.1 just came in and since then
postgis is broken, because it cannot find liblwgeom-2.1.7.so

When I run a PostGIS function, I get this error:
=# select postgis_version();
ERROR:  could not load library "/usr/lib/postgresql/9.3/lib/postgis-2.1.so":
liblwgeom-2.1.7.so: cannot open shared object file: No such file or
directory


This could be correct considering the contents of the package
liblwgeom-2.1.7
~# dpkg -L liblwgeom-2.1.7
/.
/usr
/usr/lib
/usr/lib/liblwgeom-2.1.8.so   wrong version!?
/usr/share
/usr/share/doc
/usr/share/doc/liblwgeom-2.1.7
/usr/share/doc/liblwgeom-2.1.7/copyright
/usr/share/doc/liblwgeom-2.1.7/changelog.gz
/usr/share/doc/liblwgeom-2.1.7/changelog.Debian.gz

This looks pretty serious; what's going on?
This might not be the right place to discuss this; does anyone know where
else I should address my question?

Cheers,
-- 
Willy-Bas Loos


Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Adrian Klaver

On 09/22/2015 07:04 AM, Michael Zoet wrote:




On 09/22/2015 06:31 AM, Michael Zoet wrote:

Hi Charles,

thanks for the quick response and it looked promising but did not work
as expected.

I can set the datestyle to ISO at the database level but this does not seem
to affect the way the CSV logs are written. I still get 2015-09-22
13:06:01.658 UTC (or CEST and so on) in the log files. And as I see it, this
is not only in the CSV logs, but also in the non-CSV logs I have.

Is there a way to convince Postgres to write the date/time with
numerical time zone values to the log files?


I don't know of a way, but it seems Logstash can be told how to do the
right thing:



Yes and no. I asked this already for Logstash ;-):

https://discuss.elastic.co/t/logstash-timestamp-error-when-cest-is-at-the-end/27843






http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html


z   time zone   text   Pacific Standard Time; PST



A little further down it says:

Zone names: Time zone names ('z') cannot be parsed.


Hmm, that would be a problem. Sorry for the misdirection.



This means Logstash (and the Joda Java time library it uses) cannot
parse the time zone if it is provided as a name. That's why I need/want
to change it at the Postgres level. Everything else is really
complicated to do in Logstash. I would need to provide a conversion of
the time zone name to the numerical value in my Logstash configuration.
But I still hope that this is easily fixable at the Postgres level.
Otherwise it gets unnecessarily complicated...

Michael







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] md5(large_object_id)

2015-09-22 Thread Karsten Hilbert
> I don't think that it is possible to stream the result of a query anyway,

I was unclear. I don't expect query results to stream out to the client.
I want the "SELECT md5(OID);" to return a single md5 hash value. It is
already possible to "SELECT md5(lo_read(OID));" but that will read the
entire large object into memory. I would like the function md5(oid) to
_internally_ feed consecutive calls to lo_read() into an updating md5
function and eventually return the final hash.

> I'm not sure if there is much to be gained from storing such large
> objects in a database.

ACID. Permissions. Single access method.

> Also, the max size of a value is 1GB.

The maximum size for large objects is currently 4 terabytes.

Karsten Hilbert





Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Michael Zoet


- Message from Adrian Klaver -
  Date: Tue, 22 Sep 2015 07:46:24 -0700


Zone names: Time zone names ('z') cannot be parsed.


Some more digging found that DateTimeFormat can deal with Z  which  
is either the offset or the timezone id, in particular as ZZZ.


http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html

Zone: 'Z' outputs offset without a colon, 'ZZ' outputs the offset  
with a colon, 'ZZZ' or more outputs the zone id.


That's why I am asking "how to get the numerical offset printed in the
log files". Logstash can parse the numerical value; otherwise I will
always have a parsing error in Logstash. We could live with this, but
if it is possible I'd like to change this at the Postgres level. I
never thought that this would be so much more complicated than expected.




The timezone names in Postgres are available from:

select * from pg_timezone_names ;

So in addition to Tom's suggestion, you might try setting the  
log_timezone to a name. Examples: Europe/Brussels for CEST, Etc/UTC  
for UTC




As far as I understand the log_timezone configuration option, it will
always print the name if I use a name for the time zone, and that
is a no-go for Logstash. So I really need a numerical value to parse
with Logstash.


Michael





Re: [GENERAL] pgpass (in)flexibility

2015-09-22 Thread Peter Eisentraut
On 9/15/15 1:48 AM, Ben Chobot wrote:
> We're in a situation where we would like to take advantage of the pgpass 
> hostname field to determine which password gets used. For example:
> 
> psql -h prod-server -d foo # should use the prod password
> psql -h beta-server -d foo # should use the beta password
> 
> This would *seem* to be simple, just put "prod-server" or "beta-server" into 
> the hostname field of .pgpass. But if somebody uses the FQDN of those hosts, 
> then the line does not match. If somebody uses the IP address of those hosts, 
> again, no match. It seems that the hostname must match the hostname *exactly* 
> - or match any host ("*"), which does not work for our use case.
> 
> This seems to make the hostname field unnecessarily inflexible. Has anybody 
> else experienced - and hopefully overcome - this pain? Maybe I'm just going 
> about it all wrong.

The alternative would be to do a double host name resolution before
every connection that asks for a password, which would probably also
have some concerns.

I note, for example, that the OpenSSH configuration also goes by the
host name as you wrote it, and then has additional options to
canonicalize host names.  That might be something to look into.
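
For illustration, the exact-match behavior means hypothetical ~/.pgpass
entries like these (host:port:database:user:password) only apply when -h
uses the very same spelling:

# matches "psql -h prod-server" but not the host's FQDN or IP
prod-server:5432:foo:appuser:the-prod-password
# matches "psql -h beta-server" only
beta-server:5432:foo:appuser:the-beta-password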






Re: [GENERAL] md5(large_object_id)

2015-09-22 Thread Willy-Bas Loos
On Mon, Sep 21, 2015 at 10:17 PM, Karsten Hilbert 
wrote:

> (I am dealing with radiology
> studies aka DICOM data.) One would want an md5 function which
> streams in parts of a large object piece by piece using
> md5_update and md5_finalize or some such.
>
> It didn't look like pgcrypto offers a streaming version either.
>
> Anything I am overlooking ?
>
>
Hi,

I don't think that it is possible to stream the result of a query anyway,
or is this some neat new feature?
I'm not sure if there is much to be gained from storing such large objects
in a database. Also, the max size of a value is 1GB.
Maybe it would be better to do some file administration in the database,
but not the actual storage. Then you could use a tool that does what you
want on the bare file and maybe store the results in the database.
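
For what it's worth, a sketch of that "administer the files, store the
bytes elsewhere" idea (all names hypothetical):

CREATE TABLE dicom_file (
    file_id  bigserial PRIMARY KEY,
    path     text NOT NULL UNIQUE,  -- where the bytes live on the file server
    md5      text NOT NULL,         -- computed outside the DB, e.g. with md5sum
    added_at timestamptz NOT NULL DEFAULT now()
);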

HTH,

-- 
Willy-Bas Loos


Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Tom Lane
Michael Zoet  writes:
> Is there a way to convince Postgres to write the date/time with  
> numerical time zone values to the log files?

Try something like

log_timezone = '<-0400>+4'

See the discussion of POSIX timezone names here:
http://www.postgresql.org/docs/9.4/static/datatype-datetime.html#DATATYPE-TIMEZONES

regards, tom lane




Re: [GENERAL] pg_dump - postgre 9.2 and "server closed the connection unexpectedly"

2015-09-22 Thread Tom Lane
Maycon Oliveira  writes:
> i´m using postgre for 2 years now and i started to have this *INTERMITENT*
> error from pg_dump:
> pg_dump: Dumping the contents of table "adm_log" failed: PQgetCopyData()
> failed.
> pg_dump: Error message from server: server closed the connection
> unexpectedly

> Looking on the postgre logs i see this error:
> LOG:  could not send data to client: Connection timed out
> STATEMENT:  COPY xcp01.adm_log (seq_log, dth_evento, des_mensagem,
> des_usuario, des_objeto) TO stdout;

Evidently a connectivity problem, not a database integrity problem.

Is pg_dump running on a different machine than the server?  If so
you could be looking at network hardware issues.

Is pg_dump using an SSL-encrypted connection?  If so maybe it's a
renegotiation problem.  Try setting ssl_renegotiation_limit = 0
in the server's configuration.

regards, tom lane




Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Tom Lane
Michael Zoet  writes:
> Can you explain what '<-0400>+4' exactly means?

It's a POSIX-style zone name specifying the STD abbreviation "-0400",
UTC offset 4 hours west of Greenwich, and no DST behavior.

> And why the string  
> '<+0200>-2' prints the date & time with the correct time and +0200 for  
> my time zone CEST?

Same thing for 2 hours east of Greenwich.  Remember POSIX and ISO have
opposite sign conventions.

> And how can this automatically be changed if Germany switches from  
> summer time (CEST with +0200) to winter time (CET +0100)?

Well, you could write <+0200>-2<+0100> but I'm not sure I would recommend
it.  That would result in switching on the DST transition days specified
in the "posixrules" timezone database file, which by default will be USA
not European rules.  You could replace the posixrules file with some
suitable European zone file, but that would be more invasive than you
might want (especially if the zone database is shared with non-Postgres
applications); and even if that's OK, it's practically certain you'd
forget to re-fix it after some future software update overwrites the zone
files.

The best compromise might be to just use <+0000>+0, ie force it to
print in GMT always.
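
In postgresql.conf that would be (the printed abbreviation is exactly the
text between the angle brackets):

log_timezone = '<+0000>+0'   # logs e.g.: 2015-09-22 10:02:59.836 +0000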

regards, tom lane




Re: [GENERAL] how to show time zone with numerical offset in CSV log?

2015-09-22 Thread Michael Zoet

Hi Tom,


And how can this automatically be changed if Germany switches from
summer time (CEST with +0200) to winter time (CET +0100)?


Well, you could write <+0200>-2<+0100> but I'm not sure I would recommend
it.  That would result in switching on the DST transition days specified
in the "posixrules" timezone database file, which by default will be USA
not European rules.  You could replace the posixrules file with some
suitable European zone file, but that would be more invasive than you
might want (especially if the zone database is shared with non-Postgres
applications); and even if that's OK, it's practically certain you'd
forget to re-fix it after some future software update overwrites the zone
files.


Yes, and that's why I would like to avoid messing around with the setup
too much.




The best compromise might be to just use <+0000>+0, ie force it to
print in GMT always.



That's it! Having everything in numeric UTC +0000 seems the easiest
solution. With that I shouldn't have any parsing problems with
Logstash, and I do not need to think about the offset. Great and
obvious :-).


Michael





Re: [GENERAL] pgsql-95 repo in rsync

2015-09-22 Thread Kjetil Nygård
On Tue., 2015-09-22 at 14:33 +0300, Devrim GÜNDÜZ wrote:

Hi,

On Mon, 2015-09-21 at 12:46 +0000, Kjetil Nygård wrote:



1. rsync
Would it be possible to have PostgreSQL 9.5 in the rsync-repo?



This is now done.


Thanks a lot :-D


--
With kind regards

Kjetil Nygård
Technical Expert | Ambita AS

k...@ambita.com
Mobile: +47 41 47 43 37
Call center: +47 24 13 35 00
https://www.ambita.com