pralabhkumar commented on pull request #33980:
URL: https://github.com/apache/spark/pull/33980#issuecomment-966254239
@BryanCutler
First of all, thank you for taking the time and effort to review the PR.
===
I think there are issues with simply checking the first element in the
first value to see if it's a timestamp.
===
Response: I completely agree with you; checking only the first value is not
correct. I'll change it.
===
Also it does not seem like much of the additions here could be used to
handle other nested types or deeper nesting, which would then all require
specialized functions to handle
===
Response: modify_timestamp_array is the method that does the actual datetime
conversion. The same code can be reused for deeper nesting; its caller just
has to invoke it recursively (see the sketch below). However, I agree with you
that this code may not be reusable for other nested types.
As per the JIRA, the immediate requirement was an array of timestamps, which
is why I only considered ArrayType.
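Here is a minimal sketch of the idea (not the PR's actual code): the function
name matches the helper mentioned above, but the per-element loop and the
assumption that input timestamps are naive UTC values are illustrative.

```python
import datetime

import pandas as pd


def modify_timestamp_array(values, timezone):
    """Hypothetical helper: localize naive UTC timestamps in a list,
    recursing into nested lists so deeper nesting reuses the same code."""
    result = []
    for v in values:
        if isinstance(v, list):
            # Handles ArrayType(ArrayType(TimestampType())) and deeper.
            result.append(modify_timestamp_array(v, timezone))
        elif isinstance(v, datetime.datetime):
            # Assumption: values arrive as naive UTC timestamps.
            result.append(pd.Timestamp(v).tz_localize("UTC").tz_convert(timezone))
        else:
            result.append(v)
    return result


# The caller would apply it per row of the pandas Series, e.g.:
# series.apply(lambda arr: modify_timestamp_array(arr, tz) if arr is not None else arr)
```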
===
I am thinking it would be better to utilize pyarrow for all type checking
and flattening of nested columns so that all conversions are done on flat
series. Although, I understand there might be some challenges with this
approach as well.
===
Response: I tried to keep the code similar to what is already written, e.g.
doing the time conversion after converting to pandas, so my approach/design
mirrors what is already in the codebase. However, I would like to understand
your approach better; a rough sketch of how I read the pyarrow idea is below.
If possible, maybe we can have a call to discuss it, and then I can implement
it that way. Please let me know if you are OK with that. If you think the
current approach is fine, I'll address all of your review comments.
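For concreteness, here is how I understand the pyarrow suggestion: flatten the
list column, do the timestamp conversion once on the flat child array, then
rebuild the list from the original offsets. The function name below is mine
and null-slot handling is omitted; since Arrow stores timestamps as UTC epoch
values, the cast only attaches timezone metadata.

```python
import pyarrow as pa


def convert_nested_timestamps(arr: pa.ListArray, tz: str) -> pa.ListArray:
    values = arr.values  # the flat child array behind the list column
    if pa.types.is_list(values.type):
        # Recurse so list<list<timestamp>> and deeper also work.
        values = convert_nested_timestamps(values, tz)
    elif pa.types.is_timestamp(values.type):
        # One flat, vectorized conversion instead of per-row Python loops.
        values = values.cast(pa.timestamp("us", tz=tz))
    # Rebuild the list structure from the original offsets
    # (null-slot handling is omitted for brevity).
    return pa.ListArray.from_arrays(arr.offsets, values)
```

The benefit would be that all type checking and conversion happen on flat
arrays, so no specialized per-nesting-level functions are needed.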
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]