FWIW, with Phoenix 4.7 we no longer need to spool results on the client.
Instead we rely on pacing scanners as and when needed. To utilize the
feature, though, you need to make sure that you are using HBase
versions at least as new as:
HBase 0.98.17 for the HBase 0.98 line
HBase 1.0.3 for the HBase 1.0 line
We ran into something similar; here is the ticket:
https://issues.apache.org/jira/browse/PHOENIX-2685
The workaround that mitigated this issue for us was to lower the
value of phoenix.query.spoolThresholdBytes
to 10 MB. It is counterintuitive, but, due to the way the spooling
iterator interacts
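For reference, that property is a client-side setting in hbase-site.xml; the sketch below just expresses the 10 MB value mentioned above in bytes:

```xml
<!-- client-side hbase-site.xml: lower the Phoenix spool threshold to 10 MB -->
<property>
  <name>phoenix.query.spoolThresholdBytes</name>
  <value>10485760</value>
</property>
```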
I am using an Ambari HDP distribution of the Phoenix client
(/usr/hdp/2.3.4.0-3485/phoenix/phoenix-4.4.0.2.3.4.0-3485-client.jar), and to
close database connections I am using the standard Java JDBC try-with-resources
pattern.
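For what it's worth, try-with-resources closes resources in reverse declaration order, which is what you want for Connection -> Statement -> ResultSet. A minimal sketch of that behavior with stand-in classes (the Phoenix JDBC objects all implement AutoCloseable, but the Resource class below is purely illustrative, not a Phoenix API):

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {
    static final List<String> closed = new ArrayList<>();

    // Stand-in for Connection/Statement/ResultSet; records when it is closed.
    record Resource(String name) implements AutoCloseable {
        @Override
        public void close() {
            closed.add(name);
        }
    }

    public static void main(String[] args) {
        // Resources declared left-to-right are closed right-to-left,
        // mirroring acquisition order Connection -> Statement -> ResultSet.
        try (Resource conn = new Resource("Connection");
             Resource stmt = new Resource("Statement");
             Resource rs = new Resource("ResultSet")) {
            // ... use rs here ...
        }
        System.out.println(closed); // prints [ResultSet, Statement, Connection]
    }
}
```

If any of the three close() calls throws, the remaining resources are still closed, which is why this pattern avoids leaking server-side state.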
What version of Phoenix are you using? Is the application properly closing
statements and result sets?
On Friday, April 15, 2016, wrote:
> I am running into an issue where a huge number of temporary files are being
> created in my C:\Users\myuser\AppData\Local\Temp
I am running into an issue where a huge number of temporary files are being
created in my C:\Users\myuser\AppData\Local\Temp folder. They are around 20 MB
each and never get cleaned up. These *.tmp files grew to around 200 GB before I
stopped the server.
Example file names:
Please help.
On Apr 15, 2016 2:18 AM, "Viswanathan J" wrote:
> Hi,
>
> How do I map an HBase column qualifier that is in byte form (highlighted
> below) to a view in Phoenix?
>
> eg.,
>
> \x00\x00\x00\x0Bcolumn=fact:\x05, timestamp=1460666736042,
>
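For context on what mapping to a view looks like, a Phoenix view over an existing HBase table names each mapped column as "family"."qualifier". The table and column names below are hypothetical, and whether a non-printable qualifier such as \x05 can be expressed this way is exactly the open question in the message above:

```sql
-- Hypothetical names: an existing HBase table "metrics" with family "fact"
-- and a printable qualifier "total". Each column maps as "family"."qualifier".
CREATE VIEW "metrics" (
    pk VARBINARY PRIMARY KEY,
    "fact"."total" UNSIGNED_LONG
);
```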