Re: HBase Phoenix Integration

2016-02-28 Thread Amit Shah
help would be appreciated On Sat, Feb 27, 2016 at 8:03 AM, Amit Shah <amits...@gmail.com> wrote: > Hi Murugesan, > > What preconditions would I need on the server to execute the python > script? I have Python 2.7.5 installed on the zookeeper server. If I just > copy the sqllin

Re: HBase Phoenix Integration

2016-02-26 Thread Amit Shah
On Fri, Feb 26, 2016 at 11:26 PM, Murugesan, Rani <ranmu...@visa.com> wrote: > Did you test and confirm your phoenix shell from the zookeeper server? > > cd /etc/hbase/conf > > > phoenix-sqlline.py :2181 > > > > > > *From:* Amit Shah [mailto:amits...@gm
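The check Rani suggests — running the Phoenix shell directly from the ZooKeeper host — would look roughly like this. The host name is a placeholder (it is elided in the quoted mail); substitute your own quorum address.

```shell
cd /etc/hbase/conf
# Point the Phoenix SQL shell at the ZooKeeper quorum.
# zk-host:2181 is a placeholder for the real quorum address.
phoenix-sqlline.py zk-host:2181
```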

HBase Phoenix Integration

2016-02-26 Thread Amit Shah
Hello, I have been trying to install phoenix on my cloudera hbase cluster. Cloudera version is CDH5.5.2 while HBase version is 1.0. I copied the server & core jar (version 4.6-HBase-1.0) on the master and region servers and restarted the hbase cluster. I copied the corresponding client jar on my
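A sketch of the install steps described above, assuming a parcel-based CDH 5.5 layout; the jar name matches the 4.6-HBase-1.0 release mentioned, but the lib path is an assumption and varies by install type.

```shell
# Copy the Phoenix server jar into HBase's lib directory on every
# master and region server (path is an assumption for CDH parcels),
# then restart the HBase cluster.
cp phoenix-4.6.0-HBase-1.0-server.jar \
   /opt/cloudera/parcels/CDH/lib/hbase/lib/
```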

Re: ***UNCHECKED*** Re: HBase Phoenix Integration

2016-02-29 Thread Amit Shah
se classpath > > P.S. Please remove the 'x' from the jar extension > Hope this helps. > > > Thanks, > Divya > > On 26 February 2016 at 20:44, Amit Shah <amits...@gmail.com> wrote: > >> Hello, >> >> I have been trying to install phoenix on my c

Re: HBase Phoenix Integration

2016-02-29 Thread Amit Shah
oudera/tree/4.6-HBase-1.0-cdh5.5 > > > > Thanks, > > James > > > > > > > > On Mon, Feb 29, 2016 at 10:19 PM, Amit Shah <amits...@gmail.com> wrote: > > Hi Sergey, > > > > I get lot of compilation errors when I compile the source code

Re: RE: HBase Phoenix Integration

2016-03-01 Thread Amit Shah
; > If no way to resolve this, I would still be using the Cloudera-Labs > phoenix version from this : > > > https://blog.cloudera.com/blog/2015/11/new-apache-phoenix-4-5-2-package-from-cloudera-labs/ > > > > > Thanks, > > Sun. > > > --

Re: Re: HBase Phoenix Integration

2016-03-01 Thread Amit Shah
t. > > Best, > Sun. > > -- > -- > > > *From:* Amit Shah <amits...@gmail.com> > *Date:* 2016-03-01 17:22 > *To:* user <user@phoenix.apache.org> > *Subject:* Re: RE: HBase Phoenix Integration > Hi All, > > I got some success in deploy

Re: Re: HBase Phoenix Integration

2016-02-29 Thread Amit Shah
and issue a workaround. > > If no way to resolve this, I would still be using the Cloudera-Labs > phoenix version from this : > > https://blog.cloudera.com/blog/2015/11/new-apache-phoenix-4-5-2-package-from-cloudera-labs/ > > > Thanks, > Sun. > > --

Missing Rows In Table After Bulk Load

2016-04-08 Thread Amit Shah
Hi, I am using phoenix 4.6 and hbase 1.0. After bulk loading 10 mil records into a table using the psql.py utility, I tried querying the table using the sqlline.py utility through a select count(*) query. I see only 0.1 million records. What could be missing? The psql.py logs are python
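A minimal sketch of the load-and-verify sequence described above; the table name, CSV path, and ZooKeeper host are placeholders, not values from the original mail.

```shell
# Bulk load the CSV with psql.py, then count the rows that landed.
psql.py -t MY_TABLE zk-host:2181 /tmp/data.csv
echo "SELECT COUNT(*) FROM MY_TABLE;" > /tmp/count.sql
sqlline.py zk-host:2181 /tmp/count.sql
```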

Re: Speeding Up Group By Queries

2016-04-11 Thread Amit Shah
:55 PM, Amit Shah <amits...@gmail.com> wrote: > Hi Mujtaba, > > Could these improvements be because of region distribution across region > servers? Along with the optimizations you had suggested I had also used > hbase-region-inspector to move regions evenly across the reg

Re: Speeding Up Group By Queries

2016-04-12 Thread Amit Shah
help > you benchmark your queries under representative data sizes? > > Thanks, > James > > [1] https://phoenix.apache.org/secondary_indexing.html > [2] https://www.youtube.com/watch?v=f4Nmh5KM6gI=youtu.be > [3] https://phoenix.apache.org/pherf.html > > On Mon, Apr 11
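James's pointer to secondary indexing [1] would translate to something like the following covered index on the grouped/filtered columns; the table and column names are placeholders, since the actual schema is not shown in this thread.

```sql
-- Hypothetical covered index: REGION_ID is the indexed column,
-- and the aggregated columns are included so the query can be
-- served from the index alone.
CREATE INDEX IDX_TXN_REGION ON TRANSACTIONS (REGION_ID)
    INCLUDE (UNIT_CNT_SOLD, TOTAL_SALES);
```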

Understanding Phoenix Query Plans

2016-04-11 Thread Amit Shah
Hi, I am using hbase version 1.0 and phoenix version 4.6. For different queries that we are benchmarking, I am trying to understand the query plans. 1. If we execute a query with a where clause and group by on the primary key columns of the table, the plan looks like
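Plans like the one described can be inspected with EXPLAIN; a sketch, with placeholder table and column names. When the WHERE clause covers the leading primary key columns, one would expect a RANGE SCAN rather than a FULL SCAN in the output.

```sql
-- EXPLAIN prints the plan Phoenix would use without running the query.
-- TRANSACTIONS, PK_COL1, PK_COL2 are placeholder names.
EXPLAIN SELECT COUNT(*)
FROM TRANSACTIONS
WHERE PK_COL1 = 'x'
GROUP BY PK_COL2;
```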

Disabling HBase Block Cache

2016-03-25 Thread Amit Shah
Hi, I am using apache hbase (version 1.0.0) and phoenix (version 4.6) deployed through cloudera. Since my group by aggregation queries are slow, I want to try disabling the block cache for a particular hbase table. I tried a couple of approaches but couldn't succeed. I am verifying if the
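One way to disable the block cache is per column family from the HBase shell; a sketch, with placeholder table and column family names. Disabling the table first is a conservative assumption — some alters can be applied online.

```shell
# Turn off the block cache for one column family, then re-enable
# the table. 'MY_TABLE' and 'CF' are placeholders.
hbase shell <<'EOF'
disable 'MY_TABLE'
alter 'MY_TABLE', {NAME => 'CF', BLOCKCACHE => 'false'}
enable 'MY_TABLE'
EOF
```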

Re: Disabling HBase Block Cache

2016-03-25 Thread Amit Shah
be great if someone could throw some light on this. P.S - Though disabling the block cache didn't speed up the group by query, that seems like a separate topic of discussion. Thanks! On Fri, Mar 25, 2016 at 1:34 PM, Amit Shah <amits...@gmail.com> wrote: > Hi, > > I am using apa

Speeding Up Group By Queries

2016-03-25 Thread Amit Shah
Hi, I am trying to evaluate apache hbase (version 1.0.0) and phoenix (version 4.6) deployed through cloudera for our OLAP workload. I have a table that has 10 mil rows. I try to execute the below roll-up query and it takes around 2 mins to return 1,850 rows. SELECT SUM(UNIT_CNT_SOLD),
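The quoted query is truncated in the archive; the roll-up described would have roughly this shape. Everything past SUM(UNIT_CNT_SOLD) is a placeholder reconstruction, not the original statement.

```sql
-- Illustrative shape of the roll-up; table, second aggregate, and
-- grouping column are assumptions since the original is cut off.
SELECT SUM(UNIT_CNT_SOLD), SUM(TOTAL_SALES)
FROM TRANSACTIONS
GROUP BY REGION_ID;
```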

Re: Re: HBase Phoenix Integration

2016-03-01 Thread Amit Shah
eout=6, callDuration=69350: row 'SYSTEM.SEQUENCE,,00' > on table 'hbase:meta' at region=hbase:meta,,1.1588230740, > hostname=dev-2,60020,1456826584858, seqNum=0 > > -- > > *From:* Amit Shah <amits...@gmail.com> > *Date:*

Re: Re: HBase Phoenix Integration

2016-03-01 Thread Amit Shah
I did not find any error message from the regionserver log. That is > super weird. > > -- > -- > > > *From:* Amit Shah <amits...@gmail.com> > *Date:* 2016-03-01 19:02 > *To:* user <user@phoenix.apache.

Re: Region Server Crash On Upsert Query Execution

2016-04-01 Thread Amit Shah
Is there any useful info in the GC logs? Also 2GB > heap is on the low side, can you rerun your test with the heap set to 5 and > 10GB? > > On Thu, Mar 31, 2016 at 7:01 AM, Amit Shah <amits...@gmail.com> wrote: > >> Another such instance of the crash is described below. >> &

Re: Speeding Up Group By Queries

2016-03-29 Thread Amit Shah
g UPDATE STATISTICS TRANSACTIONS SET > "phoenix.stats.guidepost.width"=5000; > > > > > On Tue, Mar 29, 2016 at 6:45 AM, Amit Shah <amits...@gmail.com> wrote: > >> Hi Mujtaba, >> >> I did try the two optimization techniques by recreating the table and >> then l
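Written out in full, the guidepost-width tuning quoted above is a Phoenix statement of this form. The 5000-byte value is taken verbatim from the quote and is only illustrative; smaller guideposts mean more parallel scans at the cost of more statistics rows.

```sql
-- Shrink the guidepost width (in bytes) for one table so scans are
-- split into more parallel chunks.
UPDATE STATISTICS TRANSACTIONS
    SET "phoenix.stats.guidepost.width" = 5000;
```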

Re: Phoenix Upsert Query Failure - Could Not Get Page

2016-04-01 Thread Amit Shah
Any inputs here? On Thu, Mar 31, 2016 at 4:45 PM, Amit Shah <amits...@gmail.com> wrote: > Hi, > > I have been trying to execute an upsert query that selects data from a 10 > mil record table. The query fails on the sqlline client at times with Caused > by: java.lang.Ru

Re: Speeding Up Group By Queries

2016-03-29 Thread Amit Shah
from the hbase web UI >> > > You need to do *major_compact* from HBase shell. From UI it's minor. > > - mujtaba > > On Mon, Mar 28, 2016 at 12:32 AM, Amit Shah <amits...@gmail.com> wrote: > >> Thanks Mujtaba and James for replying back. >>
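Mujtaba's advice — trigger the major compaction from the HBase shell rather than the web UI — would look like this; the table name is a placeholder.

```shell
# Request a major compaction for the whole table from the HBase shell.
# 'TRANSACTIONS' is a placeholder table name.
echo "major_compact 'TRANSACTIONS'" | hbase shell
```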

Re: Region Server Crash On Upsert Query Execution

2016-04-02 Thread Amit Shah
ings optimally given the fact you know your algorithm or workload/ > goal). > > P.S. I think we know each other, right? > > Regards, > Constantin > On 1 Apr 2016, 4:16 p.m., "Amit Shah" <amits...@gmail.com> wrote: > >> I tried raising the region server he

Phoenix Upsert Query Failure - Could Not Get Page

2016-03-31 Thread Amit Shah
Hi, I have been trying to execute an upsert query that selects data from a 10 mil record table. The query fails on the sqlline client at times with Caused by: java.lang.RuntimeException: Could not get page at index: 16. The detailed exception is pasted here - http://pastebin.com/1wTCHyJM. I tried
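The failing statement would have the shape below; table and column names are placeholders, since the original query is not shown in the archive. Large UPSERT ... SELECTs buffer mutations client-side, which is one plausible source of pressure during a run like this.

```sql
-- Illustrative UPSERT ... SELECT over a large source table.
-- All names are placeholders.
UPSERT INTO TARGET_TABLE (PK, COL1)
SELECT PK, COL1 FROM SOURCE_TABLE;
```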

Phoenix Bulk Load With Column Overrides

2016-04-20 Thread Amit Shah
Hello, I am using phoenix 4.6 and trying to bulk load data into a table from a csv file using the psql.py utility. How do I map the table columns to the header values in the csv file through the "-h" argument? For example, assume my phoenix table does not match the columns in the csv. The phoenix
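A hypothetical invocation showing how -h supplies the column list (in CSV order) that the data maps to; the table name, columns, host, and path are all placeholders.

```shell
# Load a CSV whose header does not match the table: -h overrides the
# column mapping with an explicit, case-sensitive column list.
psql.py -t MY_TABLE -h COL_A,COL_B,COL_C zk-host:2181 /tmp/data.csv
```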