> On 10 May 2018, at 18:29, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> 
> Per my previous post a timestamp with timezone is going to be stored as UTC, 
> so there is no ambiguity there. On reflection I realized your concern may be 
> with determining the original input timezone. That information is not stored 
> by Postgres, so there can be ambiguity as to their value. Doing all 
> timestamps in UTC is one way to eliminate this. The potential issue I see 
> is that you now push the ambiguity into the app. Namely, just from looking at 
> the database values you still do not know what the original timezone the app 
> lives in is.


That’s very true, I hadn’t thought of that.

The use case I have is a schema that unites billing events from disparate 
telecoms systems run by a bunch of different carriers. The source of the 
non-specific timestamps is third-party input data files that provide local 
timestamps for systems in different time zones, but also different apps that 
submit billing events. So there are many inconsistent feeds into the database.

So when we push the ambiguity into the app, at least the transaction is 
rejected, which means the app breaks. When the app breaks, we can go in and 
fix the timestamp bug in the particular app. Often this is difficult, and we 
need to reason about the source data, but the breakage at least tells us that 
something is wrong. Otherwise we ingest the data, run complex billing queries, 
and the only time we find out about a bug is when a customer complains about a 
particular bill. When that happens, it is really difficult to determine 
whether there is a bug in the query logic or whether the input is bogus.
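
The same fail-loudly check can be done at the app boundary before data ever 
reaches the database. A minimal sketch in Python (the function name and the 
choice of validation point are mine, not from the thread): reject any 
timestamp that arrives without an offset, and normalise the rest to UTC, 
which is roughly what a strict timestamptz column does for us.

```python
from datetime import datetime, timezone

def to_utc(ts: datetime) -> datetime:
    """Reject zone-less timestamps; normalise aware ones to UTC.

    A naive datetime carries no offset, so its instant is ambiguous --
    fail at ingest time rather than letting the ambiguity surface
    later in billing queries.
    """
    if ts.tzinfo is None or ts.utcoffset() is None:
        raise ValueError(f"naive timestamp rejected: {ts!r}")
    return ts.astimezone(timezone.utc)
```

So a feed that supplies `2018-05-10 18:29:00` with no offset blows up 
immediately, while `2018-05-10 18:29:00+02` is stored as `16:29:00 UTC`.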

So in this sense, the database is linting the source data.
