[ 
https://issues.apache.org/jira/browse/ARROW-11390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272806#comment-17272806
 ] 

Uwe Korn commented on ARROW-11390:
----------------------------------

You should use both {{pyarrow}} and {{turbodbc}} from conda-forge; that is the 
most reliable way. When you install {{turbodbc}} with {{pip}}, it is built from 
source and the build is then cached on your system. If you change your 
{{pyarrow}} version, you need to uninstall {{turbodbc}}, delete the caches, and 
build it from source again. conda, on the other hand, takes care of all of that 
for you.
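The cleanup-and-reinstall steps described above can be sketched roughly as follows (the exact commands are an assumption, not from the original comment; {{pip cache purge}} requires pip >= 20.1):

{code:bash}
# If turbodbc was previously installed with pip, remove the stale build first:
pip uninstall turbodbc
pip cache purge   # clears pip's wheel cache so any rebuild starts fresh

# Then install both packages from conda-forge, so turbodbc is built
# against the same Arrow version as pyarrow:
conda install -c conda-forge pyarrow turbodbc
{code}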

{{turbodbc}} has not yet been rebuilt on conda-forge for the new Arrow version; 
it will probably be available in 3-4 hours, so wait until then before running new tests.

> [Python] pyarrow 3.0 issues with turbodbc
> -----------------------------------------
>
>                 Key: ARROW-11390
>                 URL: https://issues.apache.org/jira/browse/ARROW-11390
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 3.0.0
>         Environment: pyarrow 3.0.0
> fsspec 0.8.4
> adlfs v0.5.9
> pandas 1.2.1
> numpy 1.19.5
> turbodbc 4.1.1
>            Reporter: Lance Dacey
>            Priority: Major
>              Labels: python, turbodbc
>
> This is more of a turbodbc issue I think, but perhaps someone here would have 
> some idea of what changed to cause potential issues. 
> {code:python}
> cursor = connection.cursor()
> cursor.execute("select top 10 * from dbo.tickets")
> table = cursor.fetchallarrow()
> {code}
> I am able to run table.num_rows and it prints 10.
> However, if I call table.to_pandas() or table.schema, or try to write the table 
> to a dataset, my kernel dies with no explanation. When I reverted to pyarrow 2.0, 
> the same code worked again.
> [https://github.com/blue-yonder/turbodbc/issues/289]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
