Re: anyone relying on hadoop1 still?

2014-09-24 Thread Flavio Pompermaier
Just a curiosity: what is the difference between HBase on hadoop1 and
hadoop2 from a functional point of view?
Does HBase on hadoop2 (Hoya?) rely on YARN features?

On Tue, Sep 23, 2014 at 8:15 PM, James Taylor jamestay...@apache.org
wrote:

 We'll definitely remove hadoop1 support from 4.x, as it's causing pom
 issues. We can't support both hadoop1 and hadoop2 and make hadoop2 the
 default (see prior discussions - basically due to how other projects'
 poms are authored).

 Sounds like a few folks are still relying on hadoop1 for 3.x, so I
 guess we can leave support there a while longer.

 Thanks,
 James

 On Tue, Sep 23, 2014 at 10:54 AM, lars hofhansl la...@apache.org wrote:
  At the very least we could make hadoop2 the default target in 4.x.
  Seems fair to remove all the cruft from the 4 branch, too.
 
  There's still a fair amount of usage of HBase 0.94 on top of Hadoop 1.
  So maybe keep it alive in 3.0? 3.0 can be retired when HBase 0.94 is
 retired (although I have no plans for 0.94 retirement, yet).
 
  -- Lars
 
 
  - Original Message -
  From: James Taylor jamestay...@apache.org
  To: d...@phoenix.apache.org d...@phoenix.apache.org; user 
 user@phoenix.apache.org
  Cc:
  Sent: Monday, September 22, 2014 10:19 PM
  Subject: anyone relying on hadoop1 still?
 
  Hello,
  We've been planning on dropping hadoop1 support for our 4.x releases
  for a while now and it looks like it'll happen in 4.2. It'd be nice if
  we could do the same for our 3.x releases, as the more similar the two
  branches are, the less time it takes to keep them in sync.
 
  Is anyone out there still relying on hadoop1 support for future 3.x
 releases?
 
  Thanks,
  James
 



Subqueries: Missing LPAREN

2014-09-24 Thread Jean-Marc Spaggiari
Hi,

Is it possible to run sub-queries with Phoenix? Something like this:

select * from metadata n where L = 1 AND R = (select max(R) from
metadata z where n.A = z.A);

The goal is to get all rows where L=1 and R is the max. Field A is the key.
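
For reference, a sketch of the table this assumes, inferred from the columns
referenced above (types are guesses):

CREATE TABLE metadata (A VARCHAR PRIMARY KEY, L INTEGER, R INTEGER);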

Thanks,

JM


View composite key?

2014-09-24 Thread Jean-Marc Spaggiari
Hi,

Is it possible to create a view on an existing HBase table and describe
the composite key?

I don't see anything about that in the doc
http://phoenix.apache.org/views.html but it also doesn't say that it's not
possible.

I'd like to do something like this:
CREATE VIEW t1 ( USER unsigned_long PRIMARY KEY,
ID unsigned_long PRIMARY KEY,
VERSION unsigned_long PRIMARY KEY,
   f1.A unsigned_long,
   f1.R unsigned_long,
   f1.L unsigned_long,
   f1.W unsigned_long,
   f1.P bigint,
   f1.N varchar,
   f1.E varchar,
   f1.S unsigned_long,
   f1.M unsigned_long,
   f1.T unsigned_int
   );

Where USER, ID and VERSION are 8-byte longs from my HBase rowkey.

Is that doable?

Thanks,

JM


Re: View composite key?

2014-09-24 Thread James Taylor
Hi JM,
Sure, you'd do that like this:

CREATE VIEW t1 ( USER unsigned_long,
ID unsigned_long,
VERSION unsigned_long,
   f1.A unsigned_long,
   f1.R unsigned_long,
   f1.L unsigned_long,
   f1.W unsigned_long,
   f1.P bigint,
   f1.N varchar,
   f1.E varchar,
   f1.S unsigned_long,
   f1.M unsigned_long,
   f1.T unsigned_int,
   CONSTRAINT pk PRIMARY KEY (USER, ID, VERSION)
   );
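
A hypothetical query against this view, reusing the column names above (the
key values are made up):

SELECT f1.N, f1.E FROM t1 WHERE USER = 123 AND ID = 456;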

Thanks,
James

On Wed, Sep 24, 2014 at 6:21 AM, Jean-Marc Spaggiari
jean-m...@spaggiari.org wrote:
 Hi,

 Is it possible to create a view on an existing HBase table and describe the
 composite key?

 I don't see anything about that in the doc
 http://phoenix.apache.org/views.html but it also doesn't say that it's not
 possible.

 I'd like to do something like this:
 CREATE VIEW t1 ( USER unsigned_long PRIMARY KEY,
 ID unsigned_long PRIMARY KEY,
 VERSION unsigned_long PRIMARY KEY,
f1.A unsigned_long,
f1.R unsigned_long,
f1.L unsigned_long,
f1.W unsigned_long,
f1.P bigint,
f1.N varchar,
f1.E varchar,
f1.S unsigned_long,
f1.M unsigned_long,
f1.T unsigned_int
);

 Where USER, ID and VERSION are 8-byte longs from my HBase rowkey.

 Is that doable?

 Thanks,

 JM


Re: View composite key?

2014-09-24 Thread Jean-Marc Spaggiari
Oh nice! Thanks for this example!

JM

2014-09-24 11:50 GMT-04:00 James Taylor jamestay...@apache.org:

 Hi JM,
 Sure, you'd do that like this:

 CREATE VIEW t1 ( USER unsigned_long,
 ID unsigned_long,
 VERSION unsigned_long,
f1.A unsigned_long,
f1.R unsigned_long,
f1.L unsigned_long,
f1.W unsigned_long,
f1.P bigint,
f1.N varchar,
f1.E varchar,
f1.S unsigned_long,
f1.M unsigned_long,
f1.T unsigned_int,
CONSTRAINT pk PRIMARY KEY (USER, ID, VERSION)
);

 Thanks,
 James

 On Wed, Sep 24, 2014 at 6:21 AM, Jean-Marc Spaggiari
 jean-m...@spaggiari.org wrote:
  Hi,
 
  Is it possible to create a view on an existing HBase table and describe
  the composite key?
 
  I don't see anything about that in the doc
  http://phoenix.apache.org/views.html but it also doesn't say that it's
 not
  possible.
 
  I'd like to do something like this:
  CREATE VIEW t1 ( USER unsigned_long PRIMARY KEY,
  ID unsigned_long PRIMARY KEY,
  VERSION unsigned_long PRIMARY KEY,
 f1.A unsigned_long,
 f1.R unsigned_long,
 f1.L unsigned_long,
 f1.W unsigned_long,
 f1.P bigint,
 f1.N varchar,
 f1.E varchar,
 f1.S unsigned_long,
 f1.M unsigned_long,
 f1.T unsigned_int
 );
 
  Where USER, ID and VERSION are 8-byte longs from my HBase rowkey.
 
  Is that doable?
 
  Thanks,
 
  JM



Re: Upper limit on SALT_BUCKETS?

2014-09-24 Thread James Taylor
The salt byte is the first byte in your row key, and 256 is the number of
possible values for a single byte (i.e. it ranges over 0-255).
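
For reference, the bucket count is set at table creation time; a minimal
sketch with made-up table and column names:

CREATE TABLE example_table (k VARCHAR PRIMARY KEY, v BIGINT) SALT_BUCKETS = 16;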

On Wed, Sep 24, 2014 at 10:12 AM, Krishna research...@gmail.com wrote:
 Hi,

 According to Phoenix documentation

 Phoenix provides a way to transparently salt the row key with a salting
 byte for a particular table. You need to specify this in table creation time
 by specifying a table property “SALT_BUCKETS” with a value from 1 to 256


 Is 256 the max value that SALT_BUCKETS can take? If yes, could someone
 explain the reason for this upper bound?

 Krishna



Re: Upper limit on SALT_BUCKETS?

2014-09-24 Thread Krishna
Thanks... any plans of raising the number of bytes used for the salt value?


On Wed, Sep 24, 2014 at 10:22 AM, James Taylor jamestay...@apache.org
wrote:

 The salt byte is the first byte in your row key, and 256 is the number of
 possible values for a single byte (i.e. it ranges over 0-255).

 On Wed, Sep 24, 2014 at 10:12 AM, Krishna research...@gmail.com wrote:
  Hi,
 
  According to Phoenix documentation
 
  Phoenix provides a way to transparently salt the row key with a salting
  byte for a particular table. You need to specify this in table creation
 time
  by specifying a table property “SALT_BUCKETS” with a value from 1 to
 256
 
 
  Is 256 the max value that SALT_BUCKETS can take? If yes, could someone
  explain the reason for this upper bound?
 
  Krishna
 



Re: Upper limit on SALT_BUCKETS?

2014-09-24 Thread James Taylor
Would you be able to talk about your use case a bit and explain why you'd
need this to be higher?
Thanks,
James

On Wednesday, September 24, 2014, Krishna research...@gmail.com wrote:

 Thanks... any plans of raising the number of bytes used for the salt value?


 On Wed, Sep 24, 2014 at 10:22 AM, James Taylor jamestay...@apache.org wrote:

 The salt byte is the first byte in your row key, and 256 is the number of
 possible values for a single byte (i.e. it ranges over 0-255).

 On Wed, Sep 24, 2014 at 10:12 AM, Krishna research...@gmail.com wrote:
  Hi,
 
  According to Phoenix documentation
 
  Phoenix provides a way to transparently salt the row key with a
 salting
  byte for a particular table. You need to specify this in table
 creation time
  by specifying a table property “SALT_BUCKETS” with a value from 1 to
 256
 
 
  Is 256 the max value that SALT_BUCKETS can take? If yes, could someone
  explain the reason for this upper bound?
 
  Krishna
 





Re: JOIN and limit

2014-09-24 Thread Maryann Xue
Hi Abe,

The expected behavior should be pushing the LIMIT to a (since it's a left
outer join) while checking the limit again against the final joined
results. Since it does not work as expected here, this should be a bug.
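
In other words, for the query below the intended semantics would be roughly
this sketch (not the actual rewritten plan):

SELECT * FROM (SELECT * FROM a WHERE a.col1 = 'Y' LIMIT 1000) a
LEFT OUTER JOIN b ON a.col2 = b.col2 AND b.col3 = 'X'
LIMIT 1000;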

Could you please verify it and report an issue with a test case attached?


Thanks,
Maryann

On Thu, Sep 18, 2014 at 8:51 PM, Abe Weinograd a...@flonet.com wrote:

 Given the following query


 select * from a left outer join b on a.col2 = b.col2 and b.col3 = 'X'
 WHERE a.col1 = 'Y' LIMIT 1000


 After playing with this for a while and getting results that didn't make
 sense, it seems the LIMIT is being pushed down onto b, or something like
 that, before the join is applied rather than after the full result set is
 computed.  I was digging around a little bit.  Is that expected behavior?

 Thanks,
 Abe




-- 
Thanks,
Maryann


Recursive queries?

2014-09-24 Thread Jean-Marc Spaggiari
Hi,

We have something like this that we want to translate into Phoenix
(snippet):


RETURN QUERY WITH RECURSIVE first_level AS (
  -- non-recursive term
  (
    SELECT a.id AS id FROM asset a
    WHERE a.parent_id = p_id AND a.type = 2
  )
  UNION
  -- recursive term
  SELECT a.id AS id FROM first_level flf, asset a
  WHERE a.parent_id = flf.id AND a.type = 2
)
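
For reference, the table this assumes, sketched from the columns the query
references (types are guesses):

CREATE TABLE asset (id BIGINT PRIMARY KEY, parent_id BIGINT, type INTEGER);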


Basically, consider that we have millions of trees stored in HBase. For
any node, we want to get all of its children recursively.

Is that something we can translate to Phoenix? If not, is it on the roadmap?

Thanks,

JM


Re: Getting InsufficientMemoryException

2014-09-24 Thread Maryann Xue
Hi Vijay,

I think here the query plan is scanning table CUSTOMER_3 while
joining the other two tables at the same time, which means the region
server memory for Phoenix should be large enough to hold the other two
tables together, and you also need to expect some memory expansion for
Java objects.
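
If it helps, running EXPLAIN on the query should show which tables get built
into the server-side hash cache, e.g. (reusing your query below):

EXPLAIN SELECT c.c_first_name, ca.ca_city, cd.cd_education_status
FROM CUSTOMER_3 c
JOIN CUSTOMER_DEMOGRAPHICS_1 cd ON c.c_current_cdemo_sk = cd.cd_demo_sk
JOIN CUSTOMER_ADDRESS_1 ca ON c.c_current_addr_sk = ca.ca_address_sk
GROUP BY ca.ca_city, cd.cd_education_status, c.c_first_name;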

Do you mean that after you had modified the parameters you mentioned, you
were still getting the same error message with exactly the same numbers
(global pool of 319507660 bytes)? Did you make sure that the parameters
actually took effect after the modification?


Thanks,
Maryann

On Tue, Sep 23, 2014 at 1:43 AM, G.S.Vijay Raajaa gsvijayraa...@gmail.com
wrote:

 Hi,

 I am trying to do a join of three tables using the following query:

 select c.c_first_name, ca.ca_city, cd.cd_education_status from
 CUSTOMER_3 c join CUSTOMER_DEMOGRAPHICS_1 cd on c.c_current_cdemo_sk =
 cd.cd_demo_sk join CUSTOMER_ADDRESS_1 ca on c.c_current_addr_sk =
 ca.ca_address_sk group by ca.ca_city, cd.cd_education_status,
 c.c_first_name;

 The size of CUSTOMER_3 is 4.1 GB with 30 million records.

 I get the following error:

 ./psql.py 10.10.5.55 test.sql
 java.sql.SQLException: Encountered exception in hash plan [0] execution.
 at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:146)
 at
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:211)
 at
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
 at
 org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
 at
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
 at
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:193)
 at
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:147)
 at
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:152)
 at
 org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:220)
 at
 org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:193)
 at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:140)
 Caused by: java.sql.SQLException: java.util.concurrent.ExecutionException:
 java.lang.reflect.UndeclaredThrowableException
 at
 org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:199)
 at
 org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:78)
 at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:119)
 at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:114)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.util.concurrent.ExecutionException:
 java.lang.reflect.UndeclaredThrowableException
 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:232)
 at java.util.concurrent.FutureTask.get(FutureTask.java:91)
 at
 org.apache.phoenix.cache.ServerCacheClient.addServerCache(ServerCacheClient.java:191)
 ... 8 more
 Caused by: java.lang.reflect.UndeclaredThrowableException
 at $Proxy10.addServerCache(Unknown Source)
 at
 org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:169)
 at
 org.apache.phoenix.cache.ServerCacheClient$1.call(ServerCacheClient.java:164)
 ... 5 more
 Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException:
 Failed after attempts=14, exceptions:
 Tue Sep 23 00:25:53 CDT 2014,
 org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
 java.io.IOException: java.io.IOException:
 org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
 446623727 bytes is larger than global pool of 319507660 bytes.
 Tue Sep 23 00:26:02 CDT 2014,
 org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
 java.io.IOException: java.io.IOException:
 org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
 446623727 bytes is larger than global pool of 319507660 bytes.
 Tue Sep 23 00:26:18 CDT 2014,
 org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
 java.io.IOException: java.io.IOException:
 org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
 446623727 bytes is larger than global pool of 319507660 bytes.
 Tue Sep 23 00:26:43 CDT 2014,
 org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
 java.io.IOException: java.io.IOException:
 org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
 446623727 bytes is larger than global pool of 319507660 bytes.
 Tue Sep 23 00:27:01 CDT 2014,
 org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@100e398,
 java.io.IOException: java.io.IOException:
 org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of
 446623727 bytes is larger than 

Re: Upper limit on SALT_BUCKETS?

2014-09-24 Thread Krishna
50 Region Servers for 100 TB such that each RS serves 10 regions (500
regions).

At this stage, we haven't evaluated the impact on query latency when
running with fewer regions, e.g., 50 RS and 250 regions.


On Wed, Sep 24, 2014 at 11:50 AM, James Taylor jamestay...@apache.org
wrote:

 Would you be able to talk about your use case a bit and explain why you'd
 need this to be higher?
 Thanks,
 James


 On Wednesday, September 24, 2014, Krishna research...@gmail.com wrote:

 Thanks... any plans of raising the number of bytes used for the salt value?


 On Wed, Sep 24, 2014 at 10:22 AM, James Taylor jamestay...@apache.org
 wrote:

 The salt byte is the first byte in your row key, and 256 is the number of
 possible values for a single byte (i.e. it ranges over 0-255).

 On Wed, Sep 24, 2014 at 10:12 AM, Krishna research...@gmail.com wrote:
  Hi,
 
  According to Phoenix documentation
 
  Phoenix provides a way to transparently salt the row key with a
 salting
  byte for a particular table. You need to specify this in table
 creation time
  by specifying a table property “SALT_BUCKETS” with a value from 1 to
 256
 
 
  Is 256 the max value that SALT_BUCKETS can take? If yes, could someone
  explain the reason for this upper bound?
 
  Krishna