Re: Phoenix Performance issue

2016-05-11 Thread Mujtaba Chohan
This is with 4.5.2-HBase-0.98 and 4.x-HBase-0.98 head; I got almost the same numbers with both. On Wed, May 11, 2016 at 12:19 AM, Naveen Nahata wrote: > Thanks Mujtaba. > > Could you tell me which version of Phoenix you are using? > > -Naveen Nahata > > On 11 May 2016 at

Apache Phoenix with HBaseTestingUtility and MiniKdc

2016-05-11 Thread Abel Fernández
Hello, I am trying to set up a unit test in a local environment for testing Apache Phoenix with Kerberos. Has anyone done something similar in the past? I am able to start the minicluster with Kerberos, but when I get a connection from the URL I always hit the same error:
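(Not from the thread: a minimal sketch of the kind of setup involved, assuming the hadoop-minikdc and hbase-testing-util artifacts are on the classpath and Phoenix's principal-and-keytab JDBC URL form; the principal name, realm handling, and paths are illustrative only.)

    // A minimal sketch, not a working recipe: start a MiniKdc, start the
    // HBase mini-cluster, then connect through Phoenix's kerberized URL.
    // Security properties (hbase.security.authentication=kerberos, etc.)
    // must also be set on the cluster Configuration; omitted here.
    import java.io.File;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.minikdc.MiniKdc;

    public class PhoenixKerberosIT {
      public static void main(String[] args) throws Exception {
        MiniKdc kdc = new MiniKdc(MiniKdc.createConf(), new File("target/kdc"));
        kdc.start();
        File keytab = new File("target/test.keytab");
        kdc.createPrincipal(keytab, "test");   // principal: test@<realm>

        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();

        // Phoenix URL form: jdbc:phoenix:<zk>:<port>:<root>:<principal>:<keytab>
        String url = "jdbc:phoenix:localhost:"
            + util.getZkCluster().getClientPort()
            + ":/hbase:test@" + kdc.getRealm()
            + ":" + keytab.getAbsolutePath();
        try (Connection conn = DriverManager.getConnection(url)) {
          System.out.println("connected: " + !conn.isClosed());
        }
        util.shutdownMiniCluster();
        kdc.stop();
      }
    }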

Re: Phoenix on HBase - Adding data to HBase reflects on Phoenix table?

2016-05-11 Thread James Taylor
Hello Emanuele, Take a look at these FAQs and hopefully they answer your questions. You can create a VIEW instead of a TABLE, and there's no need to add the empty key value (though you cannot use Phoenix APIs to change the table, only read from it):
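(Illustration, not from the linked FAQs: a sketch of mapping an existing HBase table as a read-only Phoenix VIEW over JDBC. The table name "t1", column family "cf", and qualifier "val" are hypothetical.)

    // Sketch: map an existing HBase table "t1" (family "cf", qualifier
    // "val") as a Phoenix VIEW and read from it. Names are illustrative.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ViewOverHBase {
      public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
          // The HBase row key maps to the PRIMARY KEY column; quoted
          // identifiers preserve the case-sensitive HBase names.
          stmt.execute("CREATE VIEW \"t1\" ("
              + " pk VARCHAR PRIMARY KEY,"
              + " \"cf\".\"val\" VARCHAR)");
          try (ResultSet rs =
                   stmt.executeQuery("SELECT * FROM \"t1\" LIMIT 10")) {
            while (rs.next()) {
              System.out.println(rs.getString(1) + " -> " + rs.getString(2));
            }
          }
        }
      }
    }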

Phoenix on HBase - Adding data to HBase reflects on Phoenix table?

2016-05-11 Thread Emanuele Fumeo
Hi all, firstly I am a new HBase user, so I would like to say hello to everyone and to thank in advance anyone who contributes to this mailing list. Now, my question: if I create a Phoenix table over an existing HBase table, I know that for each row Phoenix adds a new column

Re: error Loading via MapReduce

2016-05-11 Thread Gabriel Reid
Looking back at what you mentioned about your ${hadoop.tmp.dir}, could you try setting it to an unqualified path, i.e. without a protocol like file: or hdfs:. For example, as follows:

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp</value>
  </property>

- Gabriel On Wed, May 11, 2016 at 11:37 AM, kevin wrote: >

Re: error Loading via MapReduce

2016-05-11 Thread kevin
Thanks. The property in hbase-site.xml is:

  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/dcos/hbase/tmp</value>
  </property>

but the error shows file:/home/dcos/hdfs/tmp, which is what I configured in core-site.xml. 2016-05-11 16:53 GMT+08:00 Sandeep Nemuri : > There will be a temp directory property in

Re: Global Index stuck in BUILDING state

2016-05-11 Thread Ankit Singhal
Try recreating your index with ASYNC and then populating it with the IndexTool, so that you don't run into timeouts or get stuck during the initial load of huge data. https://phoenix.apache.org/secondary_indexing.html On Tue, May 10, 2016 at 7:26 AM, anupama agarwal wrote: >
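(Illustration of the ASYNC pattern the reply describes, with hypothetical table, column, and index names; the IndexTool invocation follows the form documented on the linked page.)

    // Sketch: create the index as ASYNC so the DDL returns immediately;
    // the index stays in BUILDING state until the MapReduce IndexTool
    // populates it. MY_TABLE/MY_COL/MY_IDX are hypothetical names.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AsyncIndexExample {
      public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
          stmt.execute("CREATE INDEX MY_IDX ON MY_TABLE (MY_COL) ASYNC");
          // Then populate it from the command line (per the linked docs):
          //   hbase org.apache.phoenix.mapreduce.index.IndexTool
          //     --data-table MY_TABLE --index-table MY_IDX
          //     --output-path MY_IDX_HFILES
        }
      }
    }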

Re: [Spark 1.5.2]Check Foreign Key constraint

2016-05-11 Thread Divya Gehlot
Can you please help me with an example. Thanks, Divya On 11 May 2016 at 16:55, Ankit Singhal wrote: > You can use joins as a substitute for subqueries. > > On Wed, May 11, 2016 at 1:27 PM, Divya Gehlot > wrote: > >> Hi, >> I am using Spark
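(Illustration of the suggested rewrite, not from the thread: a subquery-style foreign key check expressed as an outer join through Spark 1.5's SQLContext. The "child"/"parent" tables and columns are hypothetical and assumed to be registered as temp tables, e.g. via the phoenix-spark plugin.)

    // Sketch: find child rows whose fk has no matching parent pk, without
    // a WHERE-clause subquery, by using a LEFT OUTER JOIN and a null check.
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.sql.DataFrame;
    import org.apache.spark.sql.SQLContext;

    public class FkCheckWithJoin {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
            new SparkConf().setAppName("fk-check").setMaster("local[2]"));
        SQLContext sqlContext = new SQLContext(sc);
        // Assumes 'child' and 'parent' are registered temp tables.
        // Instead of:
        //   SELECT * FROM child WHERE fk NOT IN (SELECT pk FROM parent)
        DataFrame orphans = sqlContext.sql(
            "SELECT c.* FROM child c "
          + "LEFT OUTER JOIN parent p ON c.fk = p.pk "
          + "WHERE p.pk IS NULL");
        orphans.show();
        sc.stop();
      }
    }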

Re: error Loading via MapReduce

2016-05-11 Thread Sandeep Nemuri
There will be a temp directory property in hbase-site.xml. You may take a look at that property. Thanks, Sandeep Nemuri On Wed, May 11, 2016 at 1:49 PM, kevin wrote: > Thanks, I didn't find the fs.defaultFS property being overwritten. And I have > changed to using Pig to load

Re: error Loading via MapReduce

2016-05-11 Thread kevin
Thanks, I didn't find the fs.defaultFS property being overwritten. And I have switched to using Pig to load table data into Phoenix. 2016-05-11 14:23 GMT+08:00 Gabriel Reid : > Another idea: could you check in > /home/dcos/hbase-0.98.16.1-hadoop2/conf (or elsewhere) to see if there

[Spark 1.5.2]Check Foreign Key constraint

2016-05-11 Thread Divya Gehlot
Hi, I am using Spark 1.5.2 with Apache Phoenix 4.4. As Spark 1.5.2 doesn't support subqueries in WHERE conditions (https://issues.apache.org/jira/browse/SPARK-4226), is there any alternative way to check foreign key constraints? Would really appreciate the help. Thanks, Divya

Re: Phoenix Performance issue

2016-05-11 Thread Naveen Nahata
Thanks Mujtaba. Could you tell me which version of Phoenix you are using? -Naveen Nahata On 11 May 2016 at 04:12, Mujtaba Chohan wrote: > Tried the following in Sqlline/Phoenix and the HBase shell. Both take ~20ms for > point lookups with local HBase. > > hbase(main):015:0>
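(Illustration only: the truncated commands from the thread aren't recoverable, but a Phoenix point lookup over JDBC has the shape sketched below; the table name T, column PK, and key value are hypothetical.)

    // Sketch of a point lookup: a single-row read by full primary key.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PointLookup {
      public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement ps =
                 conn.prepareStatement("SELECT * FROM T WHERE PK = ?")) {
          ps.setString(1, "k1");   // hypothetical key value
          try (ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
              System.out.println(rs.getString("PK"));
            }
          }
        }
      }
    }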

Re: error Loading via MapReduce

2016-05-11 Thread Gabriel Reid
Another idea: could you check in /home/dcos/hbase-0.98.16.1-hadoop2/conf (or elsewhere) to see whether the fs.defaultFS property is being overwritten somewhere, for example in hbase-site.xml? On Wed, May 11, 2016 at 3:59 AM, kevin wrote: > I have tried to