Re: Tuning MutationState size

2017-11-09 Thread Sergey Soldatov
Could you provide the version you are using? Do you have autocommit turned
on and have you changed the following properties:
phoenix.mutate.batchSize
phoenix.mutate.maxSize
phoenix.mutate.maxSizeBytes
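(For later readers: these are client-side settings, typically placed in the hbase-site.xml on the Phoenix client's classpath. A minimal sketch follows — the values are illustrative placeholders, not recommendations:)

```xml
<!-- Illustrative values only; tune to your client heap and workload. -->
<property>
  <name>phoenix.mutate.batchSize</name>
  <value>10000</value> <!-- rows sent per batch -->
</property>
<property>
  <name>phoenix.mutate.maxSize</name>
  <value>500000</value> <!-- max rows buffered on the client before commit -->
</property>
<property>
  <name>phoenix.mutate.maxSizeBytes</name>
  <value>104857600</value> <!-- max bytes buffered (100 MB here) -->
</property>
```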

Thanks,
Sergey

If you are using a more recent version, then you may consider
On Thu, Nov 9, 2017 at 5:41 AM, Marcin Januszkiewicz <
januszkiewicz.mar...@gmail.com> wrote:

> I was trying to create a global index table but it failed with:
>
> Error: ERROR 730 (LIM02): MutationState size is bigger than maximum
> allowed number of bytes (state=LIM02,code=730)
> java.sql.SQLException: ERROR 730 (LIM02): MutationState size is bigger
> than maximum allowed number of bytes
> at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
> at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at org.apache.phoenix.execute.MutationState.throwIfTooBig(MutationState.java:359)
> at org.apache.phoenix.execute.MutationState.join(MutationState.java:447)
> at org.apache.phoenix.compile.MutatingParallelIteratorFactory$1.close(MutatingParallelIteratorFactory.java:98)
> at org.apache.phoenix.iterate.RoundRobinResultIterator$RoundRobinIterator.close(RoundRobinResultIterator.java:298)
> at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:105)
> at org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:821)
> at org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
> at org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
> at org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3360)
> at org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1283)
> at org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1595)
> at org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:376)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:364)
> at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1738)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:813)
> at sqlline.SqlLine.begin(SqlLine.java:686)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:291)
>
> Is there a way to predict what max size will be sufficient, or which other
> knobs to turn?
>
>
> --
> Pozdrawiam,
> Marcin Januszkiewicz
>


Re: Cloudera parcel update

2017-11-09 Thread Pedro Boado
I'll open a Jira ticket and put together a pull request in the next few
days for Phoenix 4.11 + cdh 5.11.2 parcel.

Then I'll start working on 4.12

Sorry, I couldn't work on this in the last two weeks.
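(Context for later readers: a parcel repo is just static files — the .parcel archives plus a manifest.json roughly shaped like the sketch below. Field names are from memory and the parcel name/hash are made up, so check them against Cloudera's parcel documentation:)

```json
{
  "lastUpdated": 1510185600,
  "parcels": [
    {
      "parcelName": "APACHE_PHOENIX-4.11.0-cdh5.11.2.p0.1-el7.parcel",
      "hash": "0000000000000000000000000000000000000000",
      "components": [
        { "name": "phoenix", "version": "4.11.0", "pkg_version": "4.11.0" }
      ]
    }
  ]
}
```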



On 9 Nov 2017 15:51, "James Taylor"  wrote:

> I agree with JMS and there is interest from the PMC, but no bandwidth to
> do the work - we’d look toward others like you to do the work of putting
> together an initial pull request, regular pulls to keep things in sync,
> RMing releases, etc. These types of contributions would earn merit toward a
> committership and eventual nomination for PMC (that’s how the ASF works).
> The files would of course be hosted on Apache infrastructure (just like our
> current releases and repos).
>
> On Thu, Nov 9, 2017 at 6:15 AM Jean-Marc Spaggiari <
> jean-m...@spaggiari.org> wrote:
>
>> We "simply" need to have a place to host the file, right? From a code
>> perspective, it can be another branch in the same repo?
>>
>> 2017-11-09 8:48 GMT-05:00 Flavio Pompermaier :
>>
>>> No interest from Phoenix PMCs to provide support to the creation of
>>> official Cloudera parcels (at least from Phoenix side)...?
>>>
>>> On Tue, Oct 31, 2017 at 8:09 AM, Flavio Pompermaier <
>>> pomperma...@okkam.it> wrote:
>>>
 Anyone from Phoenix...?

 On 27 Oct 2017 16:47, "Pedro Boado"  wrote:

> For creating a CDH parcel repository the only thing needed is a web
> server where the parcels and the manifest.json is published. But we need
> one.
>
> I'm in of course. Who can help with onboarding these changes, publishing
> them, and getting users to push changes to the project? How do you do this
> in Phoenix? Via another mailing list, right?
>
> Defining regression strategy is probably the most complex bit. And
> automating it is even more complex I think. This is where more work is
> needed.
>
> On 27 Oct 2017 15:20, "Jean-Marc Spaggiari" 
> wrote:
>
>> See below.
>>
>> 2017-10-27 8:45 GMT-04:00 Flavio Pompermaier :
>>
>>> I just need someone who tells which git repository to use, the
>>> branching/tagging policy, what should be done to release a parcel (i.e.
>>> compile, test ok, update docs, etc). For example, I need someone who 
>>> says:
>>> to release a Phoenix CDH  parcel the process is this:
>>>
>>> 1. use this repo (e.g. https://github.com/pboado/phoenix-for-cloudera
>>> or https://github.com/apache/phoenix)
>>>
>>
>> Well, if Apache Phoenix maintains it, I feel this should be moved
>> under the Apache Phoenix git repository, right?
>>
>>
>>
>>> 2. create one or more branches for each supported release (i.e.
>>> 4.11-cdh-5.10 and 4.11-cdh-5.11)
>>> - this implies creating an official compatibility
>>> matrix...obviously it doesn't make sense to issue a 4.11-cdh-4.1 for
>>> example)
>>>
>>
>> Indeed.
>>
>>
>>> 3. The test that should be passed to consider a parcel ok for a
>>> release
>>>
>>
>> Ha! good idea. Don't know if this can be automated, but deploying the
>> parcel, doing rolling upgrades, minor versions and major versions 
>> upgrades
>> tests, etc. We might be able to come up with a list of things to test, and
>> increase/improve the list as we move forward...
>>
>>
>>> 4. Which documentation to write
>>>
>>
>> Most probably documenting what has changed between the Apache core
>> branch and the updated parcel branch? And how to build?
>>
>>
>>> 5. Who is responsible to update Phoenix site and announcements etc?
>>>
>>
>> You? ;)
>>
>>
>>> 6. Call for contributors when a new release is needed and coordinate
>>> them
>>>
>>
>> I'm already in! I have one CDH cluster and almost all CDH versions
>> VMs... So I can do a lot of tests, as long as it doesn't require a month
>> of my time every month ;)
>>
>> JMS
>>
>>
>>>
>>> Kind of those things :)
>>>
>>> On Fri, Oct 27, 2017 at 2:33 PM, Jean-Marc Spaggiari <
>>> jean-m...@spaggiari.org> wrote:
>>>
 FYI, you can also count on me for that. At least to perform some
 testing or gather information, communication, etc.

 Flavio, what kind of leading do you need there?

 James, I am also interested ;) So count me in... (My very personal
 contribution)

 To setup a repo we just need to have a folder on the existing file
 storage with the correct parcel structure so people can point to it. 
 That's
 not a big deal...

 JMS

 2017-10-27 5:08 GMT-04:00 Flavio Pompermaier 

Re: Enabling Tracing makes HMaster service fail to start

2017-11-09 Thread James Taylor
Please note that we're no longer doing releases for HBase 1.2 due to lack
of interest. If this is important for you, I suggest you volunteer to be RM
for this branch (4.x-HBase-1.2) and make sure to catch up the branch with
the latest bug fixes from our upcoming 4.13 release (in particular
PHOENIX-4335).

On Thu, Nov 9, 2017 at 8:46 AM, Josh Elser  wrote:

> Phoenix-4.12.0-HBase-1.2 should be compatible with HBase 1.2.x. Similarly,
> HBase 1.2.5 should be compatible with Hadoop 2.7.2.
>
> I'll leave you to dig into the code to understand exactly why you're
> seeing the error. You should be able to find the interface/abstract-class
> that the error refers to and work out the reason for it.
>
> On 11/9/17 4:57 AM, Mallieswari Dineshbabu wrote:
>
>> Hi Elser,
>>
>> Thanks for the update. I have tried with log4j.PROPERTIES as additional
>> option only. Let me remove the changes from log4j.PROPERTIES;
>>
>> Regarding version compatibility, I hope I am using a compatible version of
>> Phoenix and HBase. Please find the details below,
>> Hadoop - 2.7.2
>> HBase - 1.2.5
>> Phoenix - apache-phoenix-4.12.0-HBase-1.2/
>>
>> Query:
>> Could you please suggest the compatible version of Phoenix for Hadoop
>> 2.7.2 and HBase 1.2.5?
>>
>> Regarding the classpath, I have ensured that the required classpath entries
>> are set properly by running phoenix_utils.py; except for phoenix_classpath,
>> all other variables have proper values.
>>
>> Query:
>> Could you please tell what else I am missing here regarding the classpath?
>>
>> Regards,
>> Mallieswari D
>>
>> On Thu, Nov 9, 2017 at 12:00 AM, Josh Elser <els...@apache.org> wrote:
>>
>> Please note that there is a difference between Phoenix Tracing and
>> the TRACE log4j level.
>>
>> It appears that you're using a version of Phoenix which is
>> incompatible with the version of HBase/Hadoop that you're running.
>> The implementation of PhoenixMetricsSink is incompatible with the
>> interface/abstract-class that HBase/Hadoop is expecting.
>>
>> This may be a classpath or Phoenix version issue, or you may have
>> stumbled onto a bug.
>>
>> On 11/8/17 6:33 AM, Mallieswari Dineshbabu wrote:
>>
>> Hi All,
>>
>> I am working with HBase-Phoenix, /everything works fine/. In
>> addition trying to enable Tracing in Phoenix with the
>> following steps,
>>
>>   1. Copy ‘hadoop-metrics2-hbase.PROPERTIES’ from Phoenix
>> package to
>>  HBase conf folder.
>>   2. ‘hadoop-metrics2-phoenix.PROPERTIES’ file will be in
>> ‘Phoenix/bin’
>>  location by default. So I left it as it is.
>>   3. Add the following property to phoenix configuration,
>>
>> <property>
>>   <name>phoenix.trace.frequency</name>
>>   <value>always</value>
>> </property>
>>
>> After doing the above, HBase’s HMaster fails to start with the
>> following exception; Please tell if you have any suggestion on
>> this,
>>
>> 2017-11-08 16:46:56,118 INFO  [main] regionserver.RSRpcServices:
>> master/Selfuser-VirtualBox/172.16.203.117:6
>> server-side HConnection retries=140
>>
>> 2017-11-08 16:46:56,520 INFO  [main] ipc.SimpleRpcScheduler:
>> Using deadline as user call queue, count=3
>>
>> 2017-11-08 16:46:56,554 INFO  [main] ipc.RpcServer:
>> master/Selfuser-VirtualBox/192.16.203.117:6:
>> started 10 reader(s) listening on port=6
>>
>> *2017-11-08 16:46:56,839 INFO  [main] impl.MetricsConfig: loaded
>> properties from hadoop-metrics2-hbase.properties*
>>
>> *2017-11-08 16:46:56,926 INFO  [main] trace.PhoenixMetricsSink:
>> Writing tracing metrics to phoenix table*
>>
>> *2017-11-08 16:46:56,933 ERROR [main] master.HMasterCommandLine:
>> Master exiting*
>>
>> *java.lang.RuntimeException: Failed construction of Master:
>> class org.apache.hadoop.hbase.master.HMaster. *
>>
>> *at
>> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMast
>> er.java:2512)*
>>
>>
>> at
>> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaste
>> r(HMasterCommandLine.java:231)
>>
>> at
>> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMaste
>> rCommandLine.java:137)
>>
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>
>> at
>> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(Server
>> CommandLine.java:126)
>>
>> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2522)
>>
>> Caused by: java.lang.AbstractMethodError:
>> 

Re: Enabling Tracing makes HMaster service fail to start

2017-11-09 Thread Josh Elser
Phoenix-4.12.0-HBase-1.2 should be compatible with HBase 1.2.x. 
Similarly, HBase 1.2.5 should be compatible with Hadoop 2.7.2.


I'll leave you to dig into the code to understand exactly why you're 
seeing the error. You should be able to find the 
interface/abstract-class that the error refers to and work out the 
reason for it.


On 11/9/17 4:57 AM, Mallieswari Dineshbabu wrote:

Hi Elser,

Thanks for the update. I have tried with log4j.PROPERTIES as additional 
option only. Let me remove the changes from log4j.PROPERTIES;


Regarding version compatibility, I hope I am using a compatible version of 
Phoenix and HBase. Please find the details below,

Hadoop - 2.7.2
HBase - 1.2.5
Phoenix - apache-phoenix-4.12.0-HBase-1.2/

Query:
Could you please suggest the compatible version of Phoenix for Hadoop 
2.7.2 and HBase 1.2.5?


Regarding the classpath, I have ensured that the required classpath entries 
are set properly by running phoenix_utils.py; except for phoenix_classpath, 
all other variables have proper values.


Query:
Could you please tell what else I am missing here regarding the classpath?

Regards,
Mallieswari D

On Thu, Nov 9, 2017 at 12:00 AM, Josh Elser wrote:


Please note that there is a difference between Phoenix Tracing and
the TRACE log4j level.

It appears that you're using a version of Phoenix which is
incompatible with the version of HBase/Hadoop that you're running.
The implementation of PhoenixMetricsSink is incompatible with the
interface/abstract-class that HBase/Hadoop is expecting.

This may be a classpath or Phoenix version issue, or you may have
stumbled onto a bug.

On 11/8/17 6:33 AM, Mallieswari Dineshbabu wrote:

Hi All,

I am working with HBase-Phoenix, /everything works fine/. In
addition trying to enable Tracing in Phoenix with the
following steps,

  1. Copy ‘hadoop-metrics2-hbase.PROPERTIES’ from Phoenix package to
     HBase conf folder.
  2. ‘hadoop-metrics2-phoenix.PROPERTIES’ file will be in
‘Phoenix/bin’
     location by default. So I left it as it is.
  3. Add the following property to phoenix configuration,

<property>
  <name>phoenix.trace.frequency</name>
  <value>always</value>
</property>
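(Side note: the hadoop-metrics2-hbase.properties file copied in step 1 is what points the HBase metrics system at Phoenix's trace sink. The relevant lines look roughly like this — sink and context names are from memory, so verify them against the file shipped with your Phoenix release:)

```properties
# Sketch only - verify the names against the file shipped with Phoenix.
hbase.sink.tracing.class=org.apache.phoenix.trace.PhoenixMetricsSink
hbase.sink.tracing.period=10
hbase.sink.tracing.context=tracing
```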

After doing the above, HBase’s HMaster fails to start with the
following exception; Please tell if you have any suggestion on this,

2017-11-08 16:46:56,118 INFO  [main] regionserver.RSRpcServices:
master/Selfuser-VirtualBox/172.16.203.117:6
server-side HConnection retries=140

2017-11-08 16:46:56,520 INFO  [main] ipc.SimpleRpcScheduler:
Using deadline as user call queue, count=3

2017-11-08 16:46:56,554 INFO  [main] ipc.RpcServer:
master/Selfuser-VirtualBox/192.16.203.117:6:
started 10 reader(s) listening on port=6

*2017-11-08 16:46:56,839 INFO  [main] impl.MetricsConfig: loaded
properties from hadoop-metrics2-hbase.properties*

*2017-11-08 16:46:56,926 INFO  [main] trace.PhoenixMetricsSink:
Writing tracing metrics to phoenix table*

*2017-11-08 16:46:56,933 ERROR [main] master.HMasterCommandLine:
Master exiting*

*java.lang.RuntimeException: Failed construction of Master:
class org.apache.hadoop.hbase.master.HMaster. *

*at

org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2512)*


at

org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:231)

at

org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:137)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)

at

org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)

at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2522)

Caused by: java.lang.AbstractMethodError:

org.apache.phoenix.trace.PhoenixMetricsSink.init(Lorg/apache/commons/configuration/SubsetConfiguration;)V

at

org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)

at

org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:530)

at

org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:502)

at

org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:481)

at

org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:189)

at


Re: Cloudera parcel update

2017-11-09 Thread James Taylor
I agree with JMS and there is interest from the PMC, but no bandwidth to do
the work - we’d look toward others like you to do the work of putting
together an initial pull request, regular pulls to keep things in sync,
RMing releases, etc. These types of contributions would earn merit toward a
committership and eventual nomination for PMC (that’s how the ASF works).
The files would of course be hosted on Apache infrastructure (just like our
current releases and repos).

On Thu, Nov 9, 2017 at 6:15 AM Jean-Marc Spaggiari 
wrote:

> We "simply" need to have a place to host the file, right? From a code
> perspective, it can be another branch in the same repo?
>
> 2017-11-09 8:48 GMT-05:00 Flavio Pompermaier :
>
>> No interest from Phoenix PMCs to provide support to the creation of
>> official Cloudera parcels (at least from Phoenix side)...?
>>
>> On Tue, Oct 31, 2017 at 8:09 AM, Flavio Pompermaier wrote:
>>
>>> Anyone from Phoenix...?
>>>
>>> On 27 Oct 2017 16:47, "Pedro Boado"  wrote:
>>>
 For creating a CDH parcel repository the only thing needed is a web
 server where the parcels and the manifest.json is published. But we need
 one.

 I'm in of course. Who can help with onboarding these changes, publishing
 them, and getting users to push changes to the project? How do you do this in
 Phoenix? Via another mail list, right?

 Defining regression strategy is probably the most complex bit. And
 automating it is even more complex I think. This is where more work is
 needed.

 On 27 Oct 2017 15:20, "Jean-Marc Spaggiari" 
 wrote:

> See below.
>
> 2017-10-27 8:45 GMT-04:00 Flavio Pompermaier :
>
>> I just need someone who tells which git repository to use, the
>> branching/tagging policy, what should be done to release a parcel (i.e.
>> compile, test ok, update docs, etc). For example, I need someone who 
>> says:
>> to release a Phoenix CDH  parcel the process is this:
>>
> 1. use this repo (e.g. https://github.com/pboado/phoenix-for-cloudera
> or https://github.com/apache/phoenix)
>>
>
> Well, if Apache Phoenix maintains it, I feel this should be moved
> under the Apache Phoenix git repository, right?
>
>
>
>> 2. create one or more branches for each supported release (i.e.
>> 4.11-cdh-5.10 and 4.11-cdh-5.11)
>> - this implies creating an official compatibility
>> matrix...obviously it doesn't make sense to issue a 4.11-cdh-4.1 for
>> example)
>>
>
> Indeed.
>
>
>> 3. The test that should be passed to consider a parcel ok for a
>> release
>>
>
> Ha! good idea. Don't know if this can be automated, but deploying the
> parcel, doing rolling upgrades, minor versions and major versions upgrades
> tests, etc. We might be able to come up with a list of things to test, and
> increase/improve the list as we move forward...
>
>
>> 4. Which documentation to write
>>
>
> Most probably documenting what has changed between the Apache core
> branch and the updated parcel branch? And how to build?
>
>
>> 5. Who is responsible to update Phoenix site and announcements etc?
>>
>
> You? ;)
>
>
>> 6. Call for contributors when a new release is needed and coordinate
>> them
>>
>
> I'm already in! I have one CDH cluster and almost all CDH versions
> VMs... So I can do a lot of tests, as long as it doesn't require a month
> of my time every month ;)
>
> JMS
>
>
>>
>> Kind of those things :)
>>
>> On Fri, Oct 27, 2017 at 2:33 PM, Jean-Marc Spaggiari <
>> jean-m...@spaggiari.org> wrote:
>>
>>> FYI, you can also count on me for that. At least to perform some
>>> testing or gather information, communication, etc.
>>>
>>> Flavio, what kind of leading do you need there?
>>>
>>> James, I am also interested ;) So count me in... (My very personal
>>> contribution)
>>>
>>> To setup a repo we just need to have a folder on the existing file
>>> storage with the correct parcel structure so people can point to it. 
>>> That's
>>> not a big deal...
>>>
>>> JMS
>>>
>>> 2017-10-27 5:08 GMT-04:00 Flavio Pompermaier :
>>>
 I can give it a try... is there someone who can lead this thing?

>>>
>>>
>>
>>
>
>>
>


Re: Cloudera parcel update

2017-11-09 Thread Jean-Marc Spaggiari
We "simply" need to have a place to host the file, right? From a code
perspective, it can be another branch in the same repo?

2017-11-09 8:48 GMT-05:00 Flavio Pompermaier :

> No interest from Phoenix PMCs to provide support to the creation of
> official Cloudera parcels (at least from Phoenix side)...?
>
> On Tue, Oct 31, 2017 at 8:09 AM, Flavio Pompermaier 
> wrote:
>
>> Anyone from Phoenix...?
>>
>> On 27 Oct 2017 16:47, "Pedro Boado"  wrote:
>>
>>> For creating a CDH parcel repository the only thing needed is a web
>>> server where the parcels and the manifest.json is published. But we need
>>> one.
>>>
>>> I'm in of course. Who can help with onboarding these changes, publishing
>>> them, and getting users to push changes to the project? How do you do this in
>>> Phoenix? Via another mail list, right?
>>>
>>> Defining regression strategy is probably the most complex bit. And
>>> automating it is even more complex I think. This is where more work is
>>> needed.
>>>
>>> On 27 Oct 2017 15:20, "Jean-Marc Spaggiari" 
>>> wrote:
>>>
 See below.

 2017-10-27 8:45 GMT-04:00 Flavio Pompermaier :

> I just need someone who tells which git repository to use, the
> branching/tagging policy, what should be done to release a parcel (i.e.
> compile, test ok, update docs, etc). For example, I need someone who says:
> to release a Phoenix CDH  parcel the process is this:
>
> 1. use this repo (e.g. https://github.com/pboado/phoenix-for-cloudera
> or https://github.com/apache/phoenix)
>

 Well, if Apache Phoenix maintains it, I feel this should be moved under
 the Apache Phoenix git repository, right?



> 2. create one or more branches for each supported release (i.e.
> 4.11-cdh-5.10 and 4.11-cdh-5.11)
> - this implies creating an official compatibility
> matrix...obviously it doesn't make sense to issue a 4.11-cdh-4.1 for
> example)
>

 Indeed.


> 3. The test that should be passed to consider a parcel ok for a release
>

 Ha! good idea. Don't know if this can be automated, but deploying the
 parcel, doing rolling upgrades, minor versions and major versions upgrades
 tests, etc. We might be able to come up with a list of things to test, and
 increase/improve the list as we move forward...


> 4. Which documentation to write
>

 Most probably documenting what has changed between the Apache core
 branch and the updated parcel branch? And how to build?


> 5. Who is responsible to update Phoenix site and announcements etc?
>

 You? ;)


> 6. Call for contributors when a new release is needed and coordinate
> them
>

 I'm already in! I have one CDH cluster and almost all CDH versions
 VMs... So I can do a lot of tests, as long as it doesn't require a month
 of my time every month ;)

 JMS


>
> Kind of those things :)
>
> On Fri, Oct 27, 2017 at 2:33 PM, Jean-Marc Spaggiari <
> jean-m...@spaggiari.org> wrote:
>
>> FYI, you can also count on me for that. At least to perform some
>> testing or gather information, communication, etc.
>>
>> Flavio, what kind of leading do you need there?
>>
>> James, I am also interested ;) So count me in... (My very personal
>> contribution)
>>
>> To setup a repo we just need to have a folder on the existing file
>> storage with the correct parcel structure so people can point to it. 
>> That's
>> not a big deal...
>>
>> JMS
>>
>> 2017-10-27 5:08 GMT-04:00 Flavio Pompermaier :
>>
>>> I can give it a try... is there someone who can lead this thing?
>>>
>>
>>
>
>

>


Re: Cloudera parcel update

2017-11-09 Thread Flavio Pompermaier
No interest from Phoenix PMC members in supporting the creation of
official Cloudera parcels (at least from the Phoenix side)...?

On Tue, Oct 31, 2017 at 8:09 AM, Flavio Pompermaier 
wrote:

> Anyone from Phoenix...?
>
> On 27 Oct 2017 16:47, "Pedro Boado"  wrote:
>
>> For creating a CDH parcel repository the only thing needed is a web
>> server where the parcels and the manifest.json is published. But we need
>> one.
>>
>> I'm in of course. Who can help with onboarding these changes, publishing
>> them, and getting users to push changes to the project? How do you do this in
>> Phoenix? Via another mail list, right?
>>
>> Defining regression strategy is probably the most complex bit. And
>> automating it is even more complex I think. This is where more work is
>> needed.
>>
>> On 27 Oct 2017 15:20, "Jean-Marc Spaggiari" 
>> wrote:
>>
>>> See below.
>>>
>>> 2017-10-27 8:45 GMT-04:00 Flavio Pompermaier :
>>>
 I just need someone who tells which git repository to use, the
 branching/tagging policy, what should be done to release a parcel (i.e.
 compile, test ok, update docs, etc). For example, I need someone who says:
 to release a Phoenix CDH  parcel the process is this:

 1. use this repo (e.g. https://github.com/pboado/phoenix-for-cloudera
 or https://github.com/apache/phoenix)

>>>
>>> Well, if Apache Phoenix maintains it, I feel this should be moved under
>>> the Apache Phoenix git repository, right?
>>>
>>>
>>>
 2. create one or more branches for each supported release (i.e.
 4.11-cdh-5.10 and 4.11-cdh-5.11)
 - this implies creating an official compatibility matrix... obviously
 it doesn't make sense to issue a 4.11-cdh-4.1 for example)

>>>
>>> Indeed.
>>>
>>>
 3. The test that should be passed to consider a parcel ok for a release

>>>
>>> Ha! good idea. Don't know if this can be automated, but deploying the
>>> parcel, doing rolling upgrades, minor versions and major versions upgrades
>>> tests, etc. We might be able to come up with a list of things to test, and
>>> increase/improve the list as we move forward...
>>>
>>>
 4. Which documentation to write

>>>
>>> Most probably documenting what has changed between the Apache core
>>> branch and the updated parcel branch? And how to build?
>>>
>>>
 5. Who is responsible to update Phoenix site and announcements etc?

>>>
>>> You? ;)
>>>
>>>
 6. Call for contributors when a new release is needed and coordinate
 them

>>>
>>> I'm already in! I have one CDH cluster and almost all CDH versions
>>> VMs... So I can do a lot of tests, as long as it doesn't require a month
>>> of my time every month ;)
>>>
>>> JMS
>>>
>>>

 Kind of those things :)

 On Fri, Oct 27, 2017 at 2:33 PM, Jean-Marc Spaggiari <
 jean-m...@spaggiari.org> wrote:

> FYI, you can also count on me for that. At least to perform some
> testing or gather information, communication, etc.
>
> Flavio, what kind of leading do you need there?
>
> James, I am also interested ;) So count me in... (My very personal
> contribution)
>
> To setup a repo we just need to have a folder on the existing file
> storage with the correct parcel structure so people can point to it. 
> That's
> not a big deal...
>
> JMS
>
> 2017-10-27 5:08 GMT-04:00 Flavio Pompermaier :
>
>> I can give it a try... is there someone who can lead this thing?
>>
>
>


>>>


Tuning MutationState size

2017-11-09 Thread Marcin Januszkiewicz
I was trying to create a global index table but it failed with:

Error: ERROR 730 (LIM02): MutationState size is bigger than maximum allowed
number of bytes (state=LIM02,code=730)
java.sql.SQLException: ERROR 730 (LIM02): MutationState size is bigger than
maximum allowed number of bytes
at
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:489)
at
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at
org.apache.phoenix.execute.MutationState.throwIfTooBig(MutationState.java:359)
at
org.apache.phoenix.execute.MutationState.join(MutationState.java:447)
at
org.apache.phoenix.compile.MutatingParallelIteratorFactory$1.close(MutatingParallelIteratorFactory.java:98)
at
org.apache.phoenix.iterate.RoundRobinResultIterator$RoundRobinIterator.close(RoundRobinResultIterator.java:298)
at
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:105)
at
org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:821)
at
org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
at
org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3360)
at
org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1283)
at
org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1595)
at
org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
at
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
at
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:376)
at
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:364)
at
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1738)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)

Is there a way to predict what max size will be sufficient, or which other
knobs to turn?
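(One crude way to ballpark it — an assumption-laden sketch of my own, not Phoenix's exact accounting: the buffered MutationState grows roughly with the rows held between commits times the per-row payload, so you can estimate the ceiling you would need before raising phoenix.mutate.maxSizeBytes. All parameters below are made-up examples:)

```python
# Back-of-envelope sketch only: Phoenix's real MutationState accounting
# includes per-mutation overheads this ignores. All inputs are assumptions.
def estimate_mutation_bytes(rows, row_key_bytes, cells_per_row,
                            cell_value_bytes, per_cell_overhead=32):
    """Rough bytes buffered client-side for `rows` uncommitted upserts."""
    per_row = row_key_bytes + cells_per_row * (cell_value_bytes + per_cell_overhead)
    return rows * per_row

# e.g. an index build buffering 500k rows, 24-byte keys, 10 cells of ~40 bytes:
print(estimate_mutation_bytes(500_000, 24, 10, 40))  # 372000000, i.e. ~355 MB
```

If the estimate dwarfs what the client can afford to buffer, committing more frequently is the usual alternative to simply raising the byte limit.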


-- 
Pozdrawiam,
Marcin Januszkiewicz


Re: Enabling Tracing makes HMaster service fail to start

2017-11-09 Thread Mallieswari Dineshbabu
Hi Elser,

Thanks for the update. I have tried the log4j.PROPERTIES changes only as an
additional option; let me remove them from log4j.PROPERTIES.

Regarding version compatibility, I hope I am using a compatible version of
Phoenix and HBase. Please find the details below,
Hadoop - 2.7.2
HBase - 1.2.5
Phoenix - apache-phoenix-4.12.0-HBase-1.2/

Query:
Could you please suggest the compatible version of Phoenix for Hadoop 2.7.2
and HBase 1.2.5?

Regarding the classpath, I have ensured that the required classpath entries
are set properly by running phoenix_utils.py; except for phoenix_classpath,
all other variables have proper values.

Query:
Could you please tell what else I am missing here regarding the classpath?

Regards,
Mallieswari D

On Thu, Nov 9, 2017 at 12:00 AM, Josh Elser  wrote:

> Please note that there is a difference between Phoenix Tracing and the
> TRACE log4j level.
>
> It appears that you're using a version of Phoenix which is incompatible
> with the version of HBase/Hadoop that you're running. The implementation of
> PhoenixMetricsSink is incompatible with the interface/abstract-class that
> HBase/Hadoop is expecting.
>
> This may be a classpath or Phoenix version issue, or you may have stumbled
> onto a bug.
>
> On 11/8/17 6:33 AM, Mallieswari Dineshbabu wrote:
>
>> Hi All,
>>
>> I am working with HBase-Phoenix, /everything works fine/. In addition
>> trying to enable Tracing in
>> Phoenix with the following steps,
>>
>>  1. Copy ‘hadoop-metrics2-hbase.PROPERTIES’ from Phoenix package to
>> HBase conf folder.
>>  2. ‘hadoop-metrics2-phoenix.PROPERTIES’ file will be in ‘Phoenix/bin’
>> location by default. So I left it as it is.
>>  3. Add the following property to phoenix configuration,
>>
>> <property>
>>   <name>phoenix.trace.frequency</name>
>>   <value>always</value>
>> </property>
>>
>> After doing the above, HBase’s HMaster fails to start with the following
>> exception; Please tell if you have any suggestion on this,
>>
>> 2017-11-08 16:46:56,118 INFO  [main] regionserver.RSRpcServices:
>> master/Selfuser-VirtualBox/172.16.203.117:6
>> server-side HConnection retries=140
>>
>> 2017-11-08 16:46:56,520 INFO  [main] ipc.SimpleRpcScheduler: Using
>> deadline as user call queue, count=3
>>
>> 2017-11-08 16:46:56,554 INFO  [main] ipc.RpcServer:
>> master/Selfuser-VirtualBox/192.16.203.117:6: started 10 reader(s)
>> listening on port=6
>>
>> *2017-11-08 16:46:56,839 INFO  [main] impl.MetricsConfig: loaded
>> properties from hadoop-metrics2-hbase.properties*
>>
>> *2017-11-08 16:46:56,926 INFO  [main] trace.PhoenixMetricsSink: Writing
>> tracing metrics to phoenix table*
>>
>> *2017-11-08 16:46:56,933 ERROR [main] master.HMasterCommandLine: Master
>> exiting*
>>
>> *java.lang.RuntimeException: Failed construction of Master: class
>> org.apache.hadoop.hbase.master.HMaster. *
>>
>> *at org.apache.hadoop.hbase.master.HMaster.constructMaster(
>> HMaster.java:2512)*
>>
>>
>> at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaste
>> r(HMasterCommandLine.java:231)
>>
>> at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMaste
>> rCommandLine.java:137)
>>
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>
>> at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(Server
>> CommandLine.java:126)
>>
>> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2522)
>>
>> Caused by: java.lang.AbstractMethodError: org.apache.phoenix.trace.PhoenixMetricsSink.init(Lorg/apache/commons/configuration/SubsetConfiguration;)V
>>
>> at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(Metr
>> icsConfig.java:199)
>>
>> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(
>> MetricsSystemImpl.java:530)
>>
>> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:502)
>>
>> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(
>> MetricsSystemImpl.java:481)
>>
>> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(Metr
>> icsSystemImpl.java:189)
>>
>> at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(Metri
>> csSystemImpl.java:164)
>>
>> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(Def
>> aultMetricsSystem.java:54)
>>
>> at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initiali
>> ze(DefaultMetricsSystem.java:50)
>>
>> at org.apache.hadoop.hbase.metrics.BaseSourceImpl$DefaultMetric
>> sSystemInitializer.init(BaseSourceImpl.java:49)
>>
>> at org.apache.hadoop.hbase.metrics.BaseSourceImpl.<init>(BaseSourceImpl.java:72)
>>
>> at org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceImpl.<init>(MetricsHBaseServerSourceImpl.java:66)
>>
>> at org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceFactoryI
>> mpl.getSource(MetricsHBaseServerSourceFactoryImpl.java:48)
>>
>> at org.apache.hadoop.hbase.ipc.MetricsHBaseServerSourceFactoryI
>> mpl.create(MetricsHBaseServerSourceFactoryImpl.java:38)
>>
>> at