Re: Write path blocked by MetaDataEndpoint acquiring region lock

2016-02-16 Thread Thangamani, Arun
Sorry, I had pressed Control + Enter a little earlier than I wanted to; corrections inline. Thanks. From: "Thangamani, Arun" <arun.thangam...@cdk.com> Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org> Date: Tuesday, February 16, 2016 at 8:

Re: Write path blocked by MetaDataEndpoint acquiring region lock

2016-02-16 Thread Thangamani, Arun
Hey Nick, it looks like you are failing to find your table in the metadata cache. If it isn't found in the metadata cache, we end up rebuilding the metadata from both the SYSTEM.CATALOG and SYSTEM.STATS tables. The rebuilding process is a scan on both of those tables. So we
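
A minimal client-side sketch, assuming a placeholder ZooKeeper quorum, of how one might gauge the size of the two system tables that such a rebuild has to scan. This only approximates the cost from the client; it is not the MetaDataEndpoint code path itself.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SystemTableSizes {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper quorum; adjust to your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // Row counts give a rough sense of how much data a full rebuild scan reads.
            for (String table : new String[] {"SYSTEM.CATALOG", "SYSTEM.STATS"}) {
                try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + table)) {
                    rs.next();
                    System.out.println(table + " rows: " + rs.getLong(1));
                }
            }
        }
    }
}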

Write path blocked by MetaDataEndpoint acquiring region lock

2016-02-16 Thread Nick Dimiduk
Hello, I have a high-throughput ingest pipeline that's seized up. My ingest application ultimately crashes with the following stack trace [0]. Independently, I noticed that the RPC call time of one of the machines was significantly higher than the others (95th percentile at multiple seconds vs tens of ms)

Re: TinyInt, SmallInt not supported in Pig Phoenix loader

2016-02-16 Thread Parth Sawant
Update: The same method doesn't work for writing into SMALLINT columns in a Phoenix table, i.e., a 'bytearray' field in Pig can be written into a TINYINT column in a Phoenix table but not into a SMALLINT column. On Tue, Feb 16, 2016 at 3:51 PM, Parth Sawant wrote: > Hi > We are using the Pig-Phoenix

Re: TinyInt, SmallInt not supported in Pig Phoenix loader

2016-02-16 Thread Parth Sawant
Hi, we are using the Pig-Phoenix bulk-load integration to load data into a table. We realize that Pig does not support the TINYINT and SMALLINT datatypes, so we are declaring those fields as INT in Pig but trying to write them into a Phoenix TINYINT column. I suppose we're looking to write/cast an INT

Save dataframe to Phoenix

2016-02-16 Thread Krishna
According to the Phoenix-Spark plugin docs, only SaveMode.Overwrite is supported for saving dataframes to a Phoenix table. Are there any plans to support the other save modes (append, ignore) anytime soon? Having only the overwrite option makes it useful for a small number of use cases.
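
For reference, a minimal sketch of the overwrite-only save path the docs describe, assuming Spark 1.x DataFrame APIs and placeholder table/ZooKeeper values.

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SaveMode;

public class PhoenixSave {
    // df is assumed to have columns matching the target Phoenix table.
    static void saveToPhoenix(DataFrame df) {
        df.write()
          .format("org.apache.phoenix.spark")
          .mode(SaveMode.Overwrite)           // currently the only mode the plugin accepts
          .option("table", "OUTPUT_TABLE")    // placeholder target table
          .option("zkUrl", "zk-host:2181")    // placeholder ZooKeeper quorum
          .save();
    }
}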

Re: java core dump

2016-02-16 Thread Jonathan Leech
Yeah, my fix didn't fix anything; I was barking up the wrong tree. The toObject() was the right one; I think HotSpot just optimized out the intermediate calls. Going to try upgrading to 1.8 before downgrading to u79, and will also look at the Phoenix source code with respect to concurrency issues. >

Re: Phoenix Query Server and/or Avatica Bug and/or My Misunderstanding

2016-02-16 Thread Josh Elser
Hi Steve, sorry for the delayed response. Putting the "payload" (JSON or protobuf) into the POST body instead of the header should be the 'recommended' way forward to avoid the limit you ran into [1]. I think Phoenix >=4.6 was using Calcite 1.4, but my memory might be failing me. Regarding th
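
A minimal sketch of sending the payload in the POST body rather than a header, assuming a placeholder query server endpoint; the JSON shown is only illustrative, so consult the Avatica wire API docs for the exact request shape.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class QueryServerPost {
    public static void main(String[] args) throws Exception {
        // Placeholder Phoenix Query Server endpoint.
        URL url = new URL("http://pqs-host:8765/");
        // Illustrative JSON request only; not the exact Avatica message format.
        String payload = "{\"request\":\"openConnection\",\"connectionId\":\"conn-1\"}";
        HttpURLConnection http = (HttpURLConnection) url.openConnection();
        http.setRequestMethod("POST");
        http.setDoOutput(true);
        http.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = http.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8)); // payload goes in the body, not a header
        }
        System.out.println("HTTP " + http.getResponseCode());
    }
}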

Multiple upserts via JDBC

2016-02-16 Thread Riesland, Zack
I have a handful of VERY small Phoenix tables (< 100 entries). I wrote some JavaScript to interact with the tables via servlet + JDBC. I can query the data almost instantaneously, but upserting is extremely slow: on the order of tens of seconds to several minutes. The main write operation does
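
For context, a minimal sketch of the kind of JDBC upsert path being described, assuming a hypothetical SMALL_TABLE(ID INTEGER, VAL VARCHAR) schema and a placeholder connection URL. Phoenix connections default to autoCommit=false, so upserts are buffered client-side until commit().

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SmallTableUpsert {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper quorum and hypothetical table name.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO SMALL_TABLE (ID, VAL) VALUES (?, ?)")) {
            for (int i = 0; i < 100; i++) {
                ps.setInt(1, i);
                ps.setString(2, "value-" + i);
                ps.executeUpdate();   // buffered client-side; autoCommit is off by default
            }
            conn.commit();            // single flush of all pending upserts
        }
    }
}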