lupko commented on issue #1513: URL: https://github.com/apache/arrow-adbc/issues/1513#issuecomment-1930538169
I have been poking at this some more and explored making the conversion optional. Alas, not exactly as described in #1514, because that is just too advanced for me to do in reasonable time :(. Anyway, I have created a draft PR for you to consider. It adds a new statement-level option, `adbc.postgresql.numeric_conversion`, which can take the value `to_string` or `to_double`; if one day the ADBC PostgreSQL driver supports (limited) conversion to decimal128/256, the option could also accept `to_decimal` or similar. That keeps the set of possible values small.

I think an option like this still makes sense regardless of what is proposed in #1514. My (twisted :)) reasoning goes as follows: PostgreSQL NUMERIC cannot always be mapped 1:1 to Arrow types, so the driver has to compromise and fall back to some kind of conversion when creating Arrow data. There can be several strategies for that fallback, and the option gives a simple way to configure which one is used. In other words, this is not about configuring type mapping; it is about configuring the fallback used when a 'native' mapping for some type is simply not possible. The option also would not conflict with the fancier approach in #1514: a schema with desired types always wins, and without one the driver falls back to the option.

Anyway, here is the PR draft: https://github.com/apache/arrow-adbc/pull/1521. I understand if this is not the way you want to go. With the extra field-level metadata that is already in place, I can work around this in client code (but I was hoping to avoid that).
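For illustration, here is a minimal sketch of how a client could set the proposed option through the ADBC C API, assuming the option key and values from the draft PR land as proposed. The helper name and header path below are mine, not part of the driver:

```c
#include <adbc.h>  /* ADBC C API; the installed header location may vary */

/* Hypothetical helper: ask the PostgreSQL driver to fall back to string
 * conversion for NUMERIC columns that have no exact Arrow mapping.
 * The option key and value come from draft PR #1521 and may change. */
static AdbcStatusCode SetNumericFallback(struct AdbcStatement* stmt,
                                         struct AdbcError* error) {
  return AdbcStatementSetOption(stmt,
                                "adbc.postgresql.numeric_conversion",
                                "to_string",
                                error);
}
```

Passing `"to_double"` instead would trade exactness for a native float64 column, which is the other fallback the draft proposes.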
