srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1160937525
Merged to master
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1159353408
I think it's spurious, we can ignore it, but let's see one more time
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1156448341
Hm, I think the doc build error is unrelated
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1154024801
I think you have to retrigger on your end - can you try re-running the jobs,
or pushing a dummy empty commit?
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1147421904
You're saying, basically, assume scale=18? That seems reasonable.
Or are you saying there needs to be an arbitrary-precision type? I don't see
how a DB would support that.
I'm
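A minimal sketch of what "assume scale=18" could look like in a Spark JDBC
dialect. The object name and the scale-equals-zero check are illustrative
assumptions, not the actual patch in this PR; DecimalType.SYSTEM_DEFAULT is
Spark's DecimalType(38, 18).

    import java.sql.Types
    import org.apache.spark.sql.jdbc.JdbcDialect
    import org.apache.spark.sql.types._

    // Illustrative sketch: when the driver reports scale 0 for a NUMERIC
    // column (as a Teradata NUMBER declared without a scale might), fall
    // back to Spark's system default decimal, DecimalType(38, 18).
    case object TeradataNumberSketch extends JdbcDialect {
      override def canHandle(url: String): Boolean =
        url.toLowerCase(java.util.Locale.ROOT).startsWith("jdbc:teradata")

      override def getCatalystType(
          sqlType: Int,
          typeName: String,
          size: Int,
          md: MetadataBuilder): Option[DataType] = {
        if (sqlType == Types.NUMERIC && md != null) {
          // Spark's JDBC reader records the driver-reported scale in the
          // field metadata before consulting the dialect.
          val meta = md.build()
          if (meta.contains("scale") && meta.getLong("scale") == 0L) {
            Some(DecimalType.SYSTEM_DEFAULT) // i.e. "assume scale=18"
          } else None
        } else None
      }
    }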
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1146838926
I see, so we should interpret this as "maximum scale" or something in Spark?
That seems OK, and if we're only confident about Teradata, this seems OK too.
Let's add a note in the release
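To make the stakes of the assumed scale concrete, a small demonstration with
Spark's Decimal type (the value here is made up): under an assumed scale of 0
the fractional digits are rounded away, while scale 18 preserves them.

    import org.apache.spark.sql.types.Decimal

    val a = Decimal(BigDecimal("123.456"))
    a.changePrecision(38, 0)   // returns true, but a is now 123: fraction lost
    val b = Decimal(BigDecimal("123.456"))
    b.changePrecision(38, 18)  // returns true, and b is still 123.456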
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1146640596
It sounds like the scale is just 'unknown' even on the Teradata side? That
doesn't sound right. But then this isn't a Spark issue, or rather, no
assumption we make in Spark is any more or less
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1140281363
So if I create a NUMBER in Teradata without a scale, then it uses a system
default scale. Do we know what that is?
I'm confused if Teradata doesn't record and return the actual scale
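One way to answer that question empirically is to ask the driver itself for
the metadata it reports; the table name and connection details below are
placeholders, not anything from this PR.

    import java.sql.DriverManager

    // Hypothetical probe: what precision/scale does the Teradata JDBC
    // driver report for a column declared simply as NUMBER?
    val url = "jdbc:teradata://host/DATABASE=mydb" // placeholder
    val user = "..."                               // placeholder
    val password = "..."                           // placeholder
    val conn = DriverManager.getConnection(url, user, password)
    try {
      val rs = conn.createStatement()
        .executeQuery("SELECT * FROM t_number_no_scale WHERE 1 = 0")
      val meta = rs.getMetaData
      for (i <- 1 to meta.getColumnCount) {
        println(s"${meta.getColumnName(i)}: ${meta.getColumnTypeName(i)}, " +
          s"precision=${meta.getPrecision(i)}, scale=${meta.getScale(i)}")
      }
    } finally {
      conn.close()
    }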
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1129130839
OK, I just wonder if this is specific to Teradata, or whether it can be
changed elsewhere, higher up in the abstraction layers.
But you're saying the scale/precision info is lost in
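The layer in question is Spark's JDBC source, which derives its schema from
the driver's ResultSetMetaData, so whatever the driver reports (or omits) is
all that surfaces at the top of the stack. A hypothetical reproduction, run
in spark-shell (where a SparkSession named spark is in scope), with
placeholder connection options:

    // Read a table whose column was declared as NUMBER with no explicit
    // scale, and inspect the schema Spark infers from the driver metadata.
    val df = spark.read
      .format("jdbc")
      .option("url", "jdbc:teradata://host/DATABASE=mydb") // placeholder
      .option("dbtable", "t_number_no_scale")              // placeholder
      .option("user", "...")
      .option("password", "...")
      .load()
    df.printSchema()
    // If the driver reports scale 0, the column appears as decimal(p, 0)
    // and fractional digits are lost on read.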
srowen commented on PR #36499:
URL: https://github.com/apache/spark/pull/36499#issuecomment-1127758808
I don't know anything about Teradata - is it documented that this should be
the result, and is it specific to Teradata?