Thanks. How will ZetaSQL support higher precision, given that the input will
generally be an Instant anyway? Will it rely on the "pending" standardized
logical types?

 _/
_/ Alex Van Boxel


On Mon, Aug 19, 2019 at 7:02 AM Rui Wang <ruw...@google.com> wrote:

> However, more challenges come from:
>
> 1. How to read data without losing precision. The Beam Java SDK already
> uses Joda, so it's very likely you will need to update the IOs somehow to
> support higher precision (see the sketch after this list).
> 2. How to process higher precision in BeamSQL. This means the SQL functions
> themselves have to support higher precision. If you use Beam Calcite, it
> unfortunately only supports up to millis. If you use Beam ZetaSQL (under
> review), there are opportunities to support higher precision in the SQL
> functions.
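
As a rough illustration of point 1, here is a minimal Java sketch (the
schemas, field names, and the microsecond example are made up, and it assumes
Beam's Schema/Row builder APIs): mapping a microsecond-resolution source
timestamp onto the Joda-backed DATETIME field type keeps only millis, while
carrying the raw value in an INT64 field preserves the full resolution.

    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.values.Row;
    import org.joda.time.Instant;

    public class TimestampMappings {

      // Status quo: the DATETIME field type is Joda-backed, so only epoch
      // millis survive, whatever resolution the source provides.
      static final Schema MILLIS_SCHEMA =
          Schema.builder().addDateTimeField("event_time").build();

      // One possible IO-side workaround: keep the source resolution in a
      // plain integer field (microseconds here, BigQuery-style).
      static final Schema MICROS_SCHEMA =
          Schema.builder().addInt64Field("event_time_micros").build();

      static Row lossy(long epochMicros) {
        // e.g. ...000_123_456 micros -> the Instant keeps only ...000_123 ms.
        return Row.withSchema(MILLIS_SCHEMA)
            .addValue(new Instant(epochMicros / 1000))
            .build();
      }

      static Row lossless(long epochMicros) {
        return Row.withSchema(MICROS_SCHEMA).addValue(epochMicros).build();
      }
    }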
>
>
> -Rui
>
> On Sun, Aug 18, 2019 at 9:52 PM Rui Wang <ruw...@google.com> wrote:
>
>> We have been discussing this for a long time. I think if you only want to
>> support more precision (e.g. up to nanoseconds) in BeamSQL, it's actually
>> relatively straightforward to do by defining a logical type for it.
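
A minimal sketch of what such a logical type could look like, assuming a
recent Beam release's Schema.LogicalType interface (the exact method set has
changed across Beam versions, and the class name and identifier below are
made up): java.time.Instant is the input type and a Row of epoch seconds plus
nanos is the base type, so values never have to pass through Joda millis.

    import java.time.Instant;
    import org.apache.beam.sdk.schemas.Schema;
    import org.apache.beam.sdk.schemas.Schema.FieldType;
    import org.apache.beam.sdk.values.Row;

    // Sketch of a nanosecond-precision timestamp logical type.
    public class NanosTimestampType implements Schema.LogicalType<Instant, Row> {

      // Base (storage) representation: INT64 epoch seconds + INT32 nanos.
      private static final Schema BASE_SCHEMA =
          Schema.builder().addInt64Field("seconds").addInt32Field("nanos").build();

      @Override
      public String getIdentifier() {
        return "example:nanos_timestamp";  // hypothetical identifier
      }

      @Override
      public FieldType getArgumentType() {
        return null;  // no type argument needed
      }

      @Override
      public FieldType getBaseType() {
        return FieldType.row(BASE_SCHEMA);
      }

      @Override
      public Row toBaseType(Instant input) {
        return Row.withSchema(BASE_SCHEMA)
            .addValues(input.getEpochSecond(), input.getNano())
            .build();
      }

      @Override
      public Instant toInputType(Row base) {
        return Instant.ofEpochSecond(base.getInt64("seconds"), base.getInt32("nanos"));
      }
    }

BeamSQL could then carry timestamps through this seconds/nanos representation
instead of the millisecond-only DATETIME field type.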
>>
>>
>> -Rui
>>
>> On Sat, Aug 17, 2019 at 7:21 AM Alex Van Boxel <a...@vanboxel.be> wrote:
>>
>>> I know it's probably futile, but the more I work on features related to
>>> schema awareness, the more frustrated I get about the lack of precision
>>> of the Joda Instant.
>>>
>>> As soon as we have a conversion to the DateTime I need to drop precision.
>>> This happens with the Protobuf timestamp (nanoseconds), but I also notice
>>> it with BigQuery (microseconds).
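
For concreteness, a small sketch of the truncation being described, assuming
protobuf-java-util and Joda-Time on the classpath (the timestamp value is
made up):

    import com.google.protobuf.Timestamp;
    import com.google.protobuf.util.Timestamps;
    import org.joda.time.Instant;

    public class PrecisionLossDemo {
      public static void main(String[] args) {
        // A protobuf timestamp with sub-millisecond detail (made-up value).
        Timestamp ts = Timestamp.newBuilder()
            .setSeconds(1566190000L)
            .setNanos(123_456_789)   // 123 ms + 456 us + 789 ns
            .build();

        // Joda Instant/DateTime only carry epoch millis, so the micro- and
        // nanosecond part is silently dropped by this conversion.
        Instant joda = new Instant(Timestamps.toMillis(ts));

        System.out.println(ts.getNanos());           // 123456789
        System.out.println(joda.getMillis() % 1000); // 123 -- the rest is gone
      }
    }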
>>>
>>> Suggestions?
>>>
>>>  _/
>>> _/ Alex Van Boxel
>>>
>>
