the-davidsn commented on issue #2177: URL: https://github.com/apache/arrow-adbc/issues/2177#issuecomment-3161429928
Unfortunately, none of the mentioned solutions helped here. Is there any chance that we get a solution as you wrote here, @paleolimbot?

> I think this is because the bulk insert feature requires that the schemas match exactly and because we don't take into account the existing column types when performing a bulk insertion (we should!). Specifically, that would mean that we should add a `PostgresType` argument here:
>
> [arrow-adbc/c/driver/postgresql/copy/writer.h](https://github.com/apache/arrow-adbc/blob/46dc748423dd4a03b227e7cd20a13898ac231dd2/c/driver/postgresql/copy/writer.h#L568-L571), lines 568 to 571 in [46dc748](/apache/arrow-adbc/commit/46dc748423dd4a03b227e7cd20a13898ac231dd2):
>
> ```c++
> static inline ArrowErrorCode MakeCopyFieldWriter(
>     struct ArrowSchema* schema, struct ArrowArrayView* array_view,
>     const PostgresTypeResolver& type_resolver,
>     std::unique_ptr<PostgresCopyFieldWriter>* out, ArrowError* error) {
> ```
>
> ...and return our field writers based on a many-to-many mapping (e.g., so that we can generate valid COPY for a numeric type, for example, based on a variety of types of arrow inputs).
>
> In the meantime, you should be able to use a parameterized `INSERT` as a workaround (i.e., `INSERT INTO some_table VALUES (?)`). (I forget exactly how to access the bulk bind via dbapi/ADBC in Python.) I believe there was another issue where somebody did a bulk insert into a temporary table and used SQL to do the type casting/insert.

Let me know if there is anything I could provide to help out here. :)
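For anyone landing here, below is a minimal sketch of the two workarounds mentioned in the quote above, using the `adbc_driver_postgresql` dbapi layer. The connection URI, the target table `some_table`, the staging table name, and the column types are illustrative only, and the `$1` placeholder syntax is assumed to be what the PostgreSQL driver expects for bound parameters; this is not a confirmed fix for the underlying issue.

```python
# Sketch of the two workarounds from the quote above:
#   (1) bulk-ingest into a staging table and CAST in SQL,
#   (2) a parameterized INSERT so the server coerces each bound value.
# Table names, column types, and the connection URI are illustrative.
import pyarrow as pa
import adbc_driver_postgresql.dbapi as dbapi

# Arrow data whose type does not exactly match the target column
# (e.g. float64 values destined for a NUMERIC column).
data = pa.table({"value": pa.array([1.5, 2.25], type=pa.float64())})

uri = "postgresql://user:password@localhost:5432/mydb"

with dbapi.connect(uri) as conn:
    with conn.cursor() as cur:
        # Workaround 1: COPY-based ingest into a staging table, then cast in SQL.
        cur.adbc_ingest("staging_table", data, mode="create")
        cur.execute(
            "INSERT INTO some_table (value) "
            "SELECT CAST(value AS numeric) FROM staging_table"
        )
        cur.execute("DROP TABLE staging_table")

        # Workaround 2: parameterized INSERT bound row by row; the server
        # coerces each value to the target column type. (Assumes the
        # PostgreSQL driver's $1-style positional placeholders.)
        cur.executemany(
            "INSERT INTO some_table (value) VALUES ($1)",
            [(1.5,), (2.25,)],
        )
    conn.commit()
```

The staging-table variant keeps the fast COPY-based ingest and pushes the type coercion into SQL, while the parameterized `INSERT` is simpler but likely slower for large batches.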