Re: Phoenix CSV Bulk Load fails to load a large file

2017-09-06 Thread Ted Yu
bq. hbase.bulkload.retries.retryOnIOException is disabled. Unable to recover

The above message is from HBASE-17165. See if the load can pass after enabling that config. On Wed, Sep 6, 2017 at 3:11 PM, Sriram Nookala wrote: > It finally times out with these exceptions > > ed Sep
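The setting Ted refers to is a server-side property from HBASE-17165. A sketch of enabling it in hbase-site.xml (the property name is from the log message above; the surrounding file layout is standard Hadoop configuration):

```xml
<!-- Retry bulk-load HFile moves on IOException instead of failing fast (HBASE-17165) -->
<property>
  <name>hbase.bulkload.retries.retryOnIOException</name>
  <value>true</value>
</property>
```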

Re: Phoenix CSV Bulk Load fails to load a large file

2017-09-06 Thread Sriram Nookala
It finally times out with these exceptions: Wed Sep 06 21:38:07 UTC 2017, RpcRetryingCaller{globalStartTime=1504731276347, pause=100, retries=35}, java.io.IOException: Call to ip-10-123-0-60.ec2.internal/10.123.0.60:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException:
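A CallTimeoutException from the RpcRetryingCaller during the HFile handoff can sometimes be worked around by raising the client-side RPC timeouts while the root cause is investigated. A hedged hbase-site.xml sketch (the values are illustrative, not recommendations):

```xml
<!-- Client RPC timeout; default is 60000 ms -->
<property>
  <name>hbase.rpc.timeout</name>
  <value>600000</value>
</property>
<!-- Overall operation timeout spanning all retries -->
<property>
  <name>hbase.client.operation.timeout</name>
  <value>1200000</value>
</property>
```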

Re: Phoenix CSV Bulk Load fails to load a large file

2017-09-06 Thread Sriram Nookala
Phoenix 4.11.0, HBase 1.3.1. This is what I get from jstack:

"main" #1 prio=5 os_prio=0 tid=0x7fb3d0017000 nid=0x5de7 waiting on condition [0x7fb3d75f7000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0xf588> (a

Re: Phoenix CSV Bulk Load fails to load a large file

2017-09-06 Thread Sergey Soldatov
Do you have more details on the version of Phoenix/HBase you are using, as well as how it hangs (exceptions/messages that may help in understanding the problem)? Thanks, Sergey On Wed, Sep 6, 2017 at 1:13 PM, Sriram Nookala wrote: > I'm trying to load a 3.5G file with 60

Phoenix CSV Bulk Load fails to load a large file

2017-09-06 Thread Sriram Nookala
I'm trying to load a 3.5G file with 60 million rows using CsvBulkLoadTool. It hangs while loading HFiles. The load runs successfully if I split the file in two, but I'd like to avoid doing that. This is on Amazon EMR; could this be an issue with disk space or memory? I have a single master and 2 region
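For context, a typical CsvBulkLoadTool invocation looks like the following; the jar path, table name, input path, and ZooKeeper quorum are placeholders, not taken from the thread:

```shell
# Hypothetical example; substitute your own paths, table, and ZK quorum.
hadoop jar phoenix-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table MY_TABLE \
  --input /data/myfile.csv \
  --zookeeper zk-host:2181
```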

Re: Phoenix CSV Bulk Load Tool Date format for TIMESTAMP

2017-09-06 Thread Sriram Nookala
I'm still trying to set those up in Amazon EMR. However, setting `phoenix.query.dateFormatTimeZone` wouldn't fix the issue for all files, since we could receive a different date format in some other types of files. Is there an option to write a custom mapper to transform the date? On Tue, Sep
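Absent a custom mapper hook, one workaround is to normalize dates in a preprocessing pass before handing the CSV to the bulk-load tool. A minimal Python sketch; the column index and the list of candidate input formats are assumptions for illustration, not anything from this thread:

```python
import csv
import io
from datetime import datetime

# Candidate input formats, tried in order; extend as new feeds appear.
INPUT_FORMATS = ["%m/%d/%Y %H:%M:%S", "%d-%m-%Y %H:%M:%S", "%Y-%m-%d %H:%M:%S"]
TARGET_FORMAT = "%Y-%m-%d %H:%M:%S"  # canonical format to load into Phoenix

def normalize(value):
    """Rewrite a date string into the target format; pass through if unparseable."""
    for fmt in INPUT_FORMATS:
        try:
            return datetime.strptime(value, fmt).strftime(TARGET_FORMAT)
        except ValueError:
            continue
    return value

def normalize_csv(src, dst, date_col=2):
    """Copy CSV rows from src to dst, normalizing the date column (index assumed)."""
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        if len(row) > date_col:
            row[date_col] = normalize(row[date_col])
        writer.writerow(row)
```

For a 3.5G file this streams row by row, so memory stays flat regardless of input size.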

Re: Support of OFFSET in Phoenix 4.7

2017-09-06 Thread rafa
Hi Sumanta, Here you have the answer; you already asked the same question some months ago :) https://mail-archives.apache.org/mod_mbox/phoenix-user/201705.mbox/browser From 4.8. Regards, rafa On Wed, Sep 6, 2017 at 9:19 AM, Sumanta Gh wrote: > Hi, > From which version of
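For reference, paged queries on Phoenix 4.8+ combine LIMIT and OFFSET as described at https://phoenix.apache.org/paged.html; the table and column names below are illustrative:

```sql
-- Page 3 of a result set with 10 rows per page (rows 21-30)
SELECT title, id FROM items ORDER BY id LIMIT 10 OFFSET 20;
```

Note that OFFSET still scans and discards the skipped rows, so row-value-constructor paging scales better for deep pages.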

Support of OFFSET in Phoenix 4.7

2017-09-06 Thread Sumanta Gh
Hi, From which version of Phoenix is pagination with OFFSET supported? It seems this is not supported in 4.7: https://phoenix.apache.org/paged.html Regards, Sumanta

Re: How to speed up write performance

2017-09-06 Thread James Taylor
Hi Hef, Have you had a chance to read our Tuning Guide [1] yet? There's a lot of good, general guidance there. There are some optimizations for write performance that depend on how you expect/allow your data and schema to change: 1) Is your data write-once? Make sure to declare your table with the
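The message is truncated here; the write-once advice presumably refers to Phoenix's immutable-tables option, which skips incremental index maintenance on writes. A hedged sketch (the table and columns are illustrative):

```sql
-- Declaring write-once data immutable avoids index maintenance overhead on upsert.
CREATE TABLE metrics (
    host VARCHAR NOT NULL,
    ts   TIMESTAMP NOT NULL,
    val  DOUBLE
    CONSTRAINT pk PRIMARY KEY (host, ts)
) IMMUTABLE_ROWS = true;
```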