Re: problem with 4.3 client and Java 1.6

2015-03-01 Thread James Taylor
Hi Noam,
Java 1.6 reached end of life more than a year ago, so Phoenix binaries no
longer support it. You can likely compile the Phoenix 4.3 source
against Java 1.6 yourself - I don't think we rely on 1.7 features
much.
Thanks,
James
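
As a rough sketch (an assumption, not a tested recipe: this only works if
the 4.3 pom does not hard-code the compiler level; if it does, the
maven-compiler-plugin source/target settings in the pom would need to be
edited first), a build along these lines may do it:

    mvn clean package -DskipTests -Dmaven.compiler.source=1.6 -Dmaven.compiler.target=1.6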

On Sun, Mar 1, 2015 at 10:43 AM, Bulvik, Noam noam.bul...@teoco.com wrote:
 Hi,



 We are using WebLogic 10.3.x, which comes with Java 1.6. We used Phoenix 4.1
 without any problem, but both 4.2 and 4.3 can't be used because we are
 getting java.lang.UnsupportedClassVersionError:
 org/apache/phoenix/jdbc/PhoenixDriver : Unsupported major.minor version 51.0



 Can you continue to compile the client jars to be Java 1.6 compatible?



 Thanks,





 Noam Bulvik

 R&D Manager



 TEOCO CORPORATION

 c: +972 54 5507984

 p: +972 3 9269145

 noam.bul...@teoco.com

 www.teoco.com





PhoenixOutputFormat in MR job

2015-03-01 Thread Krishna
Could someone comment on the following questions regarding the usage of
PhoenixOutputFormat in a standalone MR job:

   - Is there a need to compute the hash byte in the MR job?
   - Are keys and values stored in BytesWritable before doing a
   context.write(...) in the mapper?


Re: PhoenixOutputFormat in MR job

2015-03-01 Thread Ravi Kiran
Hi Krishna,

 I assume you have already taken a look at the example here
http://phoenix.apache.org/phoenix_mr.html

 Is there a need to compute the hash byte in the MR job?
   Can you please elaborate a bit more on what the hash byte is?

 Are keys and values stored in BytesWritable before doing a
context.write(...) in the mapper?
 The key-values from a mapper to a reducer are the usual
Writable/WritableComparable instances, and you can definitely write
BytesWritable.

Regards
Ravi
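
For illustration, along the lines of the linked phoenix_mr.html example,
here is a minimal sketch of the output side. All names are hypothetical
(a table EVENTS(ID BIGINT, NAME VARCHAR), an EventWritable record, a
map-only job reading CSV lines); treat it as a shape, not tested code:

    import java.io.IOException;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;

    // One row of the hypothetical table EVENTS(ID BIGINT, NAME VARCHAR).
    class EventWritable implements DBWritable {
        private long id;
        private String name;

        void set(long id, String name) {
            this.id = id;
            this.name = name;
        }

        // Binds this record to the UPSERT statement Phoenix generates;
        // no raw row-key bytes (and no salt byte) are assembled by hand.
        @Override
        public void write(PreparedStatement stmt) throws SQLException {
            stmt.setLong(1, id);
            stmt.setString(2, name);
        }

        // Only exercised when reading through PhoenixInputFormat.
        @Override
        public void readFields(ResultSet rs) throws SQLException {
            id = rs.getLong("ID");
            name = rs.getString("NAME");
        }
    }

    // Map-only job: the mapper emits the DBWritable directly, so no
    // BytesWritable conversion is required on the output path.
    class EventMapper
            extends Mapper<LongWritable, Text, NullWritable, EventWritable> {
        private final EventWritable event = new EventWritable();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            event.set(Long.parseLong(fields[0]), fields[1]);
            context.write(NullWritable.get(), event);
        }
    }

In the driver, the output would be wired up with something like
PhoenixMapReduceUtil.setOutput(job, "EVENTS", "ID,NAME") and
job.setOutputFormatClass(PhoenixOutputFormat.class), as shown on the
linked page.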

On Sun, Mar 1, 2015 at 10:04 PM, Krishna research...@gmail.com wrote:

 Could someone comment on the following questions regarding the usage of
 PhoenixOutputFormat in a standalone MR job:

    - Is there a need to compute the hash byte in the MR job?
    - Are keys and values stored in BytesWritable before doing a
    context.write(...) in the mapper?





Re: PhoenixOutputFormat in MR job

2015-03-01 Thread Krishna
Ravi, thanks.
If the target table is salted, do I need to compute the leading byte (as I
understand, it's a hash value) in the mapper?

On Sunday, March 1, 2015, Ravi Kiran maghamraviki...@gmail.com wrote:

 Hi Krishna,

  I assume you have already taken a look at the example here
 http://phoenix.apache.org/phoenix_mr.html

  Is there a need to compute the hash byte in the MR job?
    Can you please elaborate a bit more on what the hash byte is?

  Are keys and values stored in BytesWritable before doing a
 context.write(...) in the mapper?
  The key-values from a mapper to a reducer are the usual
 Writable/WritableComparable instances, and you can definitely write
 BytesWritable.

 Regards
 Ravi

 On Sun, Mar 1, 2015 at 10:04 PM, Krishna research...@gmail.com wrote:

 Could someone comment on the following questions regarding the usage of
 PhoenixOutputFormat in a standalone MR job:

    - Is there a need to compute the hash byte in the MR job?
    - Are keys and values stored in BytesWritable before doing a
    context.write(...) in the mapper?






Re: Phoenix Index Disabled

2015-03-01 Thread Jude K
Jeffrey,

Thank you for the reply.

We will upgrade to 4.2 and try to use local secondary indexes.

-J

On Sat, Feb 28, 2015 at 11:18 PM, Jeffrey Zhong jzh...@hortonworks.com
wrote:


  In Phoenix 4.0, a global secondary index will be disabled when an index
 update fails, because data integrity between the data table and the index
 has to be maintained.

  You can manually re-enable the index by using ALTER INDEX to rebuild it,
 OR upgrade to the latest version (PHOENIX-950 improves this situation a
 little bit). In addition, local secondary indexes, which in many cases are
 better than global ones, are supported since 4.2.
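
 For reference, a minimal sketch of this manual re-enable through JDBC;
 the index and table names and the ZooKeeper quorum are hypothetical:

     import java.sql.Connection;
     import java.sql.DriverManager;
     import java.sql.Statement;

     public class RebuildIndex {
         public static void main(String[] args) throws Exception {
             // Hypothetical names: index MY_IDX on table MY_TABLE.
             try (Connection conn = DriverManager.getConnection(
                      "jdbc:phoenix:localhost:2181");
                  Statement stmt = conn.createStatement()) {
                 // Re-enables a DISABLED index and rebuilds it from the
                 // data table.
                 stmt.execute("ALTER INDEX MY_IDX ON MY_TABLE REBUILD");
             }
         }
     }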

   From: Jude K j2k...@gmail.com
 Reply-To: user@phoenix.apache.org
 Date: Saturday, February 28, 2015 at 7:07 PM
 To: user@phoenix.apache.org
 Subject: Phoenix Index Disabled

   Hi,

  Been stuck on this issue for few hours. Hoping someone can shed some
 light.

  OS: CentOS 6
 Phoenix Client: phoenix-4.0.0.2.1.5.0-695-client.jar
 Phoenix Core: phoenix-core-4.0.0.2.1.5.0-695.jar
 HBase Version: 0.98.0.2.1.5.0-695-hadoop2, 6 RS
 HBase RS Java Heap: 6 GB

  So,

  1) Created a Phoenix table with 1 column family and 6 columns
 2) Have an app that is continually streaming data into the new Phoenix
 table
 3) Created a Phoenix index on two of the columns of the new Phoenix table
 4) Compare the newly created Phoenix index count to the row count in
 HBase. They agree.
 5) Wait a few minutes, do another comparison between the Phoenix index
 count and the HBase row count. The row count properly increments, but the
 Phoenix index count shows the same value.
 6) Wait some more and get exactly the same outcome as in #5.
 7) Check an RS log file, and see that the Phoenix index is DISABLED
 because it cannot write to a particular region.

  OK... what would cause the Phoenix index to become DISABLED, especially
 since there were no issues during index creation? Is there a configuration
 variable that needs to be modified? Is there a suitable workaround besides
 developing a bash script to alter the index and rebuild it?


  Thanks



problem with 4.3 client and Java 1.6

2015-03-01 Thread Bulvik, Noam
Hi,

We are using WebLogic 10.3.x, which comes with Java 1.6. We used Phoenix 4.1 
without any problem, but both 4.2 and 4.3 can't be used because we are getting 
java.lang.UnsupportedClassVersionError: org/apache/phoenix/jdbc/PhoenixDriver : 
Unsupported major.minor version 51.0

Can you continue to compile the client jars to be Java 1.6 compatible?

Thanks,


Noam Bulvik
R&D Manager

TEOCO CORPORATION
c: +972 54 5507984
p: +972 3 9269145
noam.bul...@teoco.com
www.teoco.com
