paleolimbot commented on issue #2177: URL: https://github.com/apache/arrow-adbc/issues/2177#issuecomment-2364668148
I think this is because the bulk insert feature requires that the schemas match exactly, and because we don't take the existing column types into account when performing a bulk insertion (we should!). Specifically, that would mean adding a `PostgresType` argument here: https://github.com/apache/arrow-adbc/blob/46dc748423dd4a03b227e7cd20a13898ac231dd2/c/driver/postgresql/copy/writer.h#L568-L571 ...and returning our field writers based on a many-to-many mapping (e.g., so that we can generate valid COPY for a `numeric` column from a variety of Arrow input types).

In the meantime, you should be able to use a parameterized `INSERT` as a workaround (i.e., `INSERT INTO some_table VALUES (?)`). I forget exactly how to access the bulk bind via dbapi/ADBC in Python, but there's a rough sketch below. I believe there was another issue where somebody did a bulk insert into a temporary table and used SQL to do the type casting/insert; a sketch of that approach follows as well.
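A minimal sketch of the parameterized-`INSERT` workaround using the dbapi layer. The connection URI, table name `some_table`, and its single `NUMERIC` column are assumptions for illustration; the statement is prepared by PostgreSQL itself, so I've used its native `$1` placeholder syntax here:

```python
import adbc_driver_postgresql.dbapi

# Hypothetical connection URI and target table; adjust for your setup.
uri = "postgresql://localhost:5432/postgres"

with adbc_driver_postgresql.dbapi.connect(uri) as conn:
    with conn.cursor() as cur:
        # Because PostgreSQL prepares the statement, the target column's
        # declared type (e.g., NUMERIC) drives the conversion rather than
        # the Arrow schema of the bound values.
        cur.executemany(
            "INSERT INTO some_table VALUES ($1)",
            [(1.5,), (2.25,), (3.0,)],
        )
    conn.commit()
```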

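And a sketch of the staging-table approach mentioned above: bulk-ingest into a throwaway table whose types match the Arrow data, then let PostgreSQL cast into the real table with SQL. Again, `some_table`, the staging table name, and the `val` column are hypothetical:

```python
import pyarrow as pa
import adbc_driver_postgresql.dbapi

uri = "postgresql://localhost:5432/postgres"
data = pa.table({"val": pa.array([1.5, 2.25, 3.0], type=pa.float64())})

with adbc_driver_postgresql.dbapi.connect(uri) as conn:
    with conn.cursor() as cur:
        # Bulk-ingest into a staging table whose columns mirror the Arrow schema...
        cur.adbc_ingest("some_table_staging", data, mode="create")
        # ...then cast into the real table and clean up.
        cur.execute(
            "INSERT INTO some_table SELECT val::numeric FROM some_table_staging"
        )
        cur.execute("DROP TABLE some_table_staging")
    conn.commit()
```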