[Dev] Apache Phoenix installation

2016-04-29 Thread Pranavan Theivendiram
Hi All,

I have successfully built Apache Phoenix from source. Can someone provide
the necessary steps that I have to follow to install/run Phoenix?

Thanks
*T. Pranavan*
*BSc Eng Undergraduate| Department of Computer Science & Engineering
,University of Moratuwa*
*Mobile| *0775136836


[jira] [Commented] (PHOENIX-2840) Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test

2016-04-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265159#comment-15265159
 ] 

James Taylor commented on PHOENIX-2840:
---

Not a big deal. Three tests are ignored now - not sure how many were
before, but at least one.


> Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test
> ---
>
> Key: PHOENIX-2840
> URL: https://issues.apache.org/jira/browse/PHOENIX-2840
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: churro morales
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2840.patch
>
>
> Looks like MemoryManagerTest.testWaitForMemoryAvailable is flapping.
> {code}
> https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/443/testReport/junit/org.apache.phoenix.memory/MemoryManagerTest/testWaitForMemoryAvailable/
> {code}
> I wonder if perhaps we should change reuseForks to false for our fast unit 
> tests here in the pom.xml:
> {code}
>   <plugin>
>     <groupId>org.apache.maven.plugins</groupId>
>     <artifactId>maven-surefire-plugin</artifactId>
>     <version>${maven-surefire-plugin.version}</version>
>     <configuration>
>       <forkCount>${numForkedUT}</forkCount>
>       <reuseForks>true</reuseForks>
>       <argLine>-enableassertions -Xmx2250m -XX:MaxPermSize=128m
> -Djava.security.egd=file:/dev/./urandom
> "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}"</argLine>
>       <redirectTestOutputToFile>${test.output.tofile}</redirectTestOutputToFile>
>       <shutdown>kill</shutdown>
>     </configuration>
>   </plugin>
> {code}
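A minimal sketch of the change under discussion, assuming the standard Surefire `reuseForks` parameter: flipping a single element of the plugin configuration so forked JVMs are not reused across unit test classes.

```xml
<!-- Hedged sketch: stop reusing forked JVMs for unit tests, so state
     leaked by one test class cannot affect the next. Everything else
     in the surefire plugin block stays the same. -->
<reuseForks>false</reuseForks>
```

The trade-off is slower builds (one JVM startup per test class) in exchange for better isolation.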



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2840) Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test

2016-04-29 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265154#comment-15265154
 ] 

churro morales commented on PHOENIX-2840:
-

[~jamestaylor] let me take a look at why this test is still failing. I ran it 
in isolation many times and it passed, even through the Maven target. 
I can revert it, but I believe everything was ignored before. Let me check 
and get back to you. 





[jira] [Commented] (PHOENIX-2840) Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test

2016-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265138#comment-15265138
 ] 

Hudson commented on PHOENIX-2840:
-

FAILURE: Integrated in Phoenix-master #1210 (See 
[https://builds.apache.org/job/Phoenix-master/1210/])
PHOENIX-2840 Fix flapping MemoryManagerTest.testWaitForMemoryAvailable 
(jamestaylor: rev 109d8329a65442da139d101b03422497ec33c14e)
* phoenix-core/src/test/java/org/apache/phoenix/memory/MemoryManagerTest.java




[jira] [Commented] (PHOENIX-2840) Fix flapping MemoryManagerTest.testWaitForMemoryAvailable unit test

2016-04-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265067#comment-15265067
 ] 

James Taylor commented on PHOENIX-2840:
---

Ignoring MemoryManagerTest.testWaitUntilResize() as it's flapping as well: 
https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/8/console

[~churromorales] - maybe put MemoryManagerTest back the way it was or up the 
failure tolerance?
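One common way to raise the failure tolerance of a timing-dependent test is to poll for the condition up to a deadline instead of asserting after a fixed sleep. A minimal standalone sketch (hypothetical helper, not the actual MemoryManagerTest code):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class WaitForCondition {
    // Poll until the condition holds or the deadline passes; a single
    // unlucky scheduling delay no longer fails the test outright.
    static boolean waitFor(BooleanSupplier cond, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (cond.getAsBoolean()) {
                return true;
            }
            TimeUnit.MILLISECONDS.sleep(10);
        }
        return cond.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true ~50ms in; a 1s budget absorbs CI jitter.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start >= 50, 1000);
        System.out.println(ok);
    }
}
```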



[jira] [Commented] (PHOENIX-2869) Pig-Phoenix Integration throws errors when we try to write to SmallInt and TinyInt columns

2016-04-29 Thread Anil Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265008#comment-15265008
 ] 

Anil Gupta commented on PHOENIX-2869:
-

Unfortunately, we can't try it ourselves since we are using the HDP package. 
I can probably ask HDP for a backport of that patch. If it's already fixed, 
we can close this ticket. Thanks.

> Pig-Phoenix Integration throws errors when we try to write to SmallInt and 
> TinyInt columns
> --
>
> Key: PHOENIX-2869
> URL: https://issues.apache.org/jira/browse/PHOENIX-2869
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
>Reporter: Anil Gupta
>  Labels: Pig
>
> Pig-Phoenix integration does not work for SmallInt and TinyInt. We get this 
> kind of error when we run a Pig script that loads data into a SmallInt or 
> TinyInt column:
> Caused by: java.lang.RuntimeException: Unable to process column 
> TINYINT:"L"."ELIGIBLESALE", innerMessage=java.lang.Integer cannot be coerced 
> to TINYINT
> at 
> org.apache.phoenix.pig.writable.PhoenixPigDBWritable.write(PhoenixPigDBWritable.java:66)
> at 
> org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:78)
> at 
> org.apache.phoenix.mapreduce.PhoenixRecordWriter.write(PhoenixRecordWriter.java:39)
> at 
> org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:184)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:136)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:95)
> at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
> at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:281)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:274)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> ... 3 more
> --
> Some more relevant discussion on mailing list: 
> http://search-hadoop.com/m/9UY0h2HRUMW1WYQEH1
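The failure above is a type-coercion gap: Pig hands the writer a java.lang.Integer, while a Phoenix TINYINT column expects a Byte. A hedged, standalone sketch of the kind of range-checked narrowing conversion that resolves it (hypothetical helper, not the actual PhoenixPigDBWritable code):

```java
public class TinyIntCoercion {
    // Accept any Number and narrow it to a byte, failing loudly when the
    // value is out of TINYINT range rather than silently truncating.
    static byte toTinyInt(Object value) {
        if (value instanceof Byte) {
            return (Byte) value;
        }
        if (value instanceof Number) {
            long v = ((Number) value).longValue();
            if (v < Byte.MIN_VALUE || v > Byte.MAX_VALUE) {
                throw new RuntimeException(value + " cannot be coerced to TINYINT");
            }
            return (byte) v;
        }
        throw new RuntimeException(
            value.getClass().getName() + " cannot be coerced to TINYINT");
    }

    public static void main(String[] args) {
        // An Integer from Pig now coerces instead of throwing.
        System.out.println(toTinyInt(Integer.valueOf(5)));
    }
}
```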



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2869) Pig-Phoenix Integration throws errors when we try to write to SmallInt and TinyInt columns

2016-04-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15265006#comment-15265006
 ] 

James Taylor commented on PHOENIX-2869:
---

Would you mind trying this on 4.7, as I believe it's been fixed?



[jira] [Updated] (PHOENIX-2869) Pig-Phoenix Integration throws errors when we try to write to SmallInt and TinyInt columns

2016-04-29 Thread Anil Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Gupta updated PHOENIX-2869:

Labels: Pig  (was: )



[jira] [Updated] (PHOENIX-2869) Pig-Phoenix Integration throws errors when we try to write to SmallInt and TinyInt columns

2016-04-29 Thread Anil Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anil Gupta updated PHOENIX-2869:

Description: updated to append a link to the related mailing-list discussion 
(http://search-hadoop.com/m/9UY0h2HRUMW1WYQEH1); the rest of the description 
(the TINYINT coercion error and stack trace quoted above) is unchanged.

[jira] [Created] (PHOENIX-2869) Pig-Phoenix Integration throws errors when we try to write to SmallInt and TinyInt columns

2016-04-29 Thread Anil Gupta (JIRA)
Anil Gupta created PHOENIX-2869:
---

 Summary: Pig-Phoenix Integration throws errors when we try to 
write to SmallInt and TinyInt columns
 Key: PHOENIX-2869
 URL: https://issues.apache.org/jira/browse/PHOENIX-2869
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.4.0
Reporter: Anil Gupta




[jira] [Assigned] (PHOENIX-2868) QueryServerBasicsIT.testSchemas is failing

2016-04-29 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov reassigned PHOENIX-2868:


Assignee: Sergey Soldatov

> QueryServerBasicsIT.testSchemas is failing
> --
>
> Key: PHOENIX-2868
> URL: https://issues.apache.org/jira/browse/PHOENIX-2868
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
>
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 65.464 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.QueryServerBasicsIT
> testSchemas(org.apache.phoenix.end2end.QueryServerBasicsIT)  Time elapsed: 
> 0.212 sec  <<< FAILURE!
> java.lang.AssertionError: unexpected empty resultset
>   at 
> org.apache.phoenix.end2end.QueryServerBasicsIT.testSchemas(QueryServerBasicsIT.java:103)
> See https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/456/console for more 
> info.





[jira] [Updated] (PHOENIX-2868) QueryServerBasicsIT.testSchemas is failing

2016-04-29 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-2868:
-
Assignee: Ankit Singhal  (was: Sergey Soldatov)



[jira] [Commented] (PHOENIX-2868) QueryServerBasicsIT.testSchemas is failing

2016-04-29 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264924#comment-15264924
 ] 

Sergey Soldatov commented on PHOENIX-2868:
--

It's a side effect of PHOENIX-1311. I didn't dig deep; I just checked that it 
started failing after that commit. [~ankit.singhal] could you please take a look?



[jira] [Resolved] (PHOENIX-2795) Support auto partition for views

2016-04-29 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2795.
-
Resolution: Fixed

> Support auto partition for views
> 
>
> Key: PHOENIX-2795
> URL: https://issues.apache.org/jira/browse/PHOENIX-2795
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: argus
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2795-v2.patch, PHOENIX-2795-v3.patch, 
> PHOENIX-2795.patch
>
>
> When a view or base table is created, we should have a string 
> AUTO_PARTITION_SEQ parameter on CREATE TABLE which uses a sequence, based on 
> the argument on the server side, to generate a WHERE clause with the first PK 
> column and the unique identifier from the sequence.
> For example:
> {code}
> CREATE SEQUENCE metric_id_seq;
> CREATE TABLE metric_table (metric_id INTEGER, val DOUBLE) 
> AUTO_PARTITION_SEQ=metric_id_seq;
> CREATE VIEW my_view1 AS SELECT * FROM metric_table;
> {code}
> would tack on a WHERE clause based on the next value in a sequence, logically 
> like this:
> {code}
> WHERE partition_id =  NEXT VALUE FROM metric_id_seq
> {code}
> It's important that the sequence be generated *after* the check for the 
> existence of the view so that we don't burn sequence values needlessly if the 
> view already exists.





[jira] [Commented] (PHOENIX-2868) QueryServerBasicsIT.testSchemas is failing

2016-04-29 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264863#comment-15264863
 ] 

Josh Elser commented on PHOENIX-2868:
-

Curious. I'll try to take a look this weekend.



[jira] [Commented] (PHOENIX-2795) Support auto partition for views

2016-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264817#comment-15264817
 ] 

Hudson commented on PHOENIX-2795:
-

FAILURE: Integrated in Phoenix-master #1208 (See 
[https://builds.apache.org/job/Phoenix-master/1208/])
PHOENIX-2795 Support auto partition for views (tdsilva: rev 
13f38ca9c1170289fcbcf0a7d8caeeaf5fdfe873)
* phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
* 
phoenix-core/src/main/java/org/apache/phoenix/compile/TupleProjectionCompiler.java
* phoenix-protocol/src/main/MetaDataService.proto
* phoenix-core/src/test/java/org/apache/phoenix/execute/CorrelatePlanTest.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/TableProperty.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/MetaDataProtos.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
* phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* 
phoenix-core/src/test/java/org/apache/phoenix/execute/LiteralResultIteratorPlanTest.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/UnionCompiler.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/AutoPartitionViewsIT.java
* phoenix-core/src/main/java/org/apache/phoenix/util/MetaDataUtil.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
* phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/DelegateTable.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
* 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
* phoenix-protocol/src/main/PTable.proto
* phoenix-core/src/main/java/org/apache/phoenix/util/QueryUtil.java


> Support auto partition for views
> 
>
> Key: PHOENIX-2795
> URL: https://issues.apache.org/jira/browse/PHOENIX-2795
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: argus
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2795-v2.patch, PHOENIX-2795-v3.patch, 
> PHOENIX-2795.patch
>
>
> When a view or base table is created, we should have an string 
> AUTO_PARTITION_SEQ parameter on CREATE TABLE which uses a sequence based on 
> the argument on the server side to generate a WHERE clause with the first PK 
> column and the unique identifier from the sequence.
> For example:
> {code}
> CREATE SEQUENCE metric_id_seq;
> CREATE TABLE metric_table (metric_id INTEGER, val DOUBLE) 
> AUTO_PARTITION_SEQ=metric_id_seq;
> CREATE VIEW my_view1 AS SELECT * FROM metric_table;
> {code}
> would tack on a WHERE clause based on the next value in a sequence, logically 
> like this:
> {code}
> WHERE partition_id =  NEXT VALUE FROM metric_id_seq
> {code}
> It's important that the sequence be generated *after* the check for the 
> existence of the view so that we don't burn sequence values needlessly if the 
> view already exists.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
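The sequencing behavior described in PHOENIX-2795 - consume a sequence value only after the existence check, then AND the partition predicate onto any existing view WHERE clause - can be sketched as follows. This is an illustrative sketch, not Phoenix's actual implementation: the class name, the addPartitionPredicate method, the AtomicLong stand-in for a Phoenix sequence, and the predicate format are all assumptions.

```java
import java.util.concurrent.atomic.AtomicLong;

public class AutoPartitionSketch {
    // Hypothetical stand-in for the Phoenix sequence metric_id_seq.
    static final AtomicLong METRIC_ID_SEQ = new AtomicLong(1);

    // Builds the effective WHERE clause for a new view. The sequence value is
    // consumed only after the existence check, so no values are burned when
    // the view already exists.
    public static String addPartitionPredicate(String viewWhere, boolean viewExists) {
        if (viewExists) {
            return viewWhere;                       // no sequence value consumed
        }
        long id = METRIC_ID_SEQ.getAndIncrement();  // like NEXT VALUE FOR metric_id_seq
        String predicate = "partition_id = " + id;
        // AND the predicate onto an existing WHERE clause, if any.
        return viewWhere == null ? predicate : "(" + viewWhere + ") AND " + predicate;
    }

    public static void main(String[] args) {
        System.out.println(addPartitionPredicate(null, false));
        System.out.println(addPartitionPredicate("val > 0", false));
        System.out.println(addPartitionPredicate("val > 0", true));
    }
}
```

The existing-view call returns its WHERE clause unchanged and the counter does not advance, which mirrors the "don't burn sequence values needlessly" requirement.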


Re: Jenkins build failures?

2016-04-29 Thread Sergey Soldatov
By the way, we need to do something about those OOMs and "unable to
create new native thread" failures in the ITs. It's quite strange to
see that kind of failure in a ten-line test, especially when queries
against a table with fewer than 10 rows generate over 2500 threads.
Does anybody know whether it's a ZK-related issue?

On Fri, Apr 29, 2016 at 7:51 AM, James Taylor  wrote:
> A patch would be much appreciated, Sergey.
>
> On Fri, Apr 29, 2016 at 3:26 AM, Sergey Soldatov 
> wrote:
>
>> As for the flume module - flume-ng comes with commons-io 2.1, while
>> hadoop & hbase require org.apache.commons.io.Charsets, which was
>> introduced in 2.3. The easy fix is to move the flume-ng dependency
>> after the hbase/hadoop dependencies.
>>
>> The last thing about ConcurrentHashMap - it definitely means that the
>> code was compiled with 1.8, since keySet() returns a plain Set on 1.7
>> while 1.8 returns a KeySetView.
>>
>>
>>
>> On Thu, Apr 28, 2016 at 4:08 PM, Josh Elser  wrote:
>> > *tl;dr*
>> >
>> > * I'm removing ubuntu-us1 from all pools
>> > * Phoenix-Flume ITs look busted
>> > * UpsertValuesIT looks busted
>> > * Something is weirdly wrong with Phoenix-4.x-HBase-1.1 in its entirety.
>> >
>> > Details below...
>> >
>> > It looks like we have a bunch of different reasons for the failures.
>> > Starting with Phoenix-master:
>> >
>> 
>> > org.apache.phoenix.schema.NewerTableAlreadyExistsException: ERROR 1013
>> > (42M04): Table already exists. tableName=T
>> > at
>> >
>> org.apache.phoenix.end2end.UpsertValuesIT.testBatchedUpsert(UpsertValuesIT.java:476)
>> > <<<
>> >
>> > I've seen this coming out of a few different tests (I think I've also run
>> > into it on my own, but that's another thing)
>> >
>> > Some of them look like the Jenkins build host is just over-taxed:
>> >
>> 
>> > Java HotSpot(TM) 64-Bit Server VM warning: INFO:
>> > os::commit_memory(0x0007e760, 331350016, 0) failed; error='Cannot
>> > allocate memory' (errno=12)
>> > #
>> > # There is insufficient memory for the Java Runtime Environment to
>> continue.
>> > # Native memory allocation (malloc) failed to allocate 331350016 bytes
>> for
>> > committing reserved memory.
>> > # An error report file with more information is saved as:
>> > #
>> >
>> /home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-core/hs_err_pid26454.log
>> > Java HotSpot(TM) 64-Bit Server VM warning: INFO:
>> > os::commit_memory(0x0007ea60, 273678336, 0) failed; error='Cannot
>> > allocate memory' (errno=12)
>> > #
>> > <<<
>> >
>> > and
>> >
>> 
>> > ---
>> >  T E S T S
>> > ---
>> > Build step 'Invoke top-level Maven targets' marked build as failure
>> > <<<
>> >
>> > Both of these issues are limited to the host "ubuntu-us1". Let me just
>> > remove him from the pool (on Phoenix-master) and see if that helps at
>> all.
>> >
>> > I also see some sporadic failures of some Flume tests
>> >
>> 
>> > Running org.apache.phoenix.flume.PhoenixSinkIT
>> > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec
>> > <<< FAILURE! - in org.apache.phoenix.flume.PhoenixSinkIT
>> > org.apache.phoenix.flume.PhoenixSinkIT  Time elapsed: 0.004 sec  <<<
>> ERROR!
>> > java.lang.RuntimeException: java.io.IOException: Failed to save in any
>> > storage directories while saving namespace.
>> > Caused by: java.io.IOException: Failed to save in any storage directories
>> > while saving namespace.
>> >
>> > Running org.apache.phoenix.flume.RegexEventSerializerIT
>> > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.005 sec
>> > <<< FAILURE! - in org.apache.phoenix.flume.RegexEventSerializerIT
>> > org.apache.phoenix.flume.RegexEventSerializerIT  Time elapsed: 0.004 sec
>> > <<< ERROR!
>> > java.lang.RuntimeException: java.io.IOException: Failed to save in any
>> > storage directories while saving namespace.
>> > Caused by: java.io.IOException: Failed to save in any storage directories
>> > while saving namespace.
>> > <<<
>> >
>> > I'm not sure what the error message means at a glance.
>> >
>> > For Phoenix-HBase-1.1:
>> >
>> 
>> > org.apache.hadoop.hbase.DoNotRetryIOException:
>> java.lang.NoSuchMethodError:
>> >
>> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
>> > at
>> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2156)
>> > at
>> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
>> > at
>> >
>> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>> > at
>> > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>> > at java.lang.Thread.run(Thread.java:745)
>> > Caused by: java.lang.NoSuchMethodError:
>> >
>> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
>> > at

[jira] [Updated] (PHOENIX-2868) QueryServerBasicsIT.testSchemas is failing

2016-04-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2868:
--
Fix Version/s: 4.8.0

> QueryServerBasicsIT.testSchemas is failing
> --
>
> Key: PHOENIX-2868
> URL: https://issues.apache.org/jira/browse/PHOENIX-2868
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
>
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 65.464 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.QueryServerBasicsIT
> testSchemas(org.apache.phoenix.end2end.QueryServerBasicsIT)  Time elapsed: 
> 0.212 sec  <<< FAILURE!
> java.lang.AssertionError: unexpected empty resultset
>   at 
> org.apache.phoenix.end2end.QueryServerBasicsIT.testSchemas(QueryServerBasicsIT.java:103)
> See https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/456/console for more 
> info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2868) QueryServerBasicsIT.testSchemas is failing

2016-04-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264675#comment-15264675
 ] 

James Taylor commented on PHOENIX-2868:
---

[~ankit.singhal] - maybe related to namespace support? Or maybe something else, 
[~elserj]?

> QueryServerBasicsIT.testSchemas is failing
> --
>
> Key: PHOENIX-2868
> URL: https://issues.apache.org/jira/browse/PHOENIX-2868
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
>
> Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 65.464 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.QueryServerBasicsIT
> testSchemas(org.apache.phoenix.end2end.QueryServerBasicsIT)  Time elapsed: 
> 0.212 sec  <<< FAILURE!
> java.lang.AssertionError: unexpected empty resultset
>   at 
> org.apache.phoenix.end2end.QueryServerBasicsIT.testSchemas(QueryServerBasicsIT.java:103)
> See https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/456/console for more 
> info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2868) QueryServerBasicsIT.testSchemas is failing

2016-04-29 Thread James Taylor (JIRA)
James Taylor created PHOENIX-2868:
-

 Summary: QueryServerBasicsIT.testSchemas is failing
 Key: PHOENIX-2868
 URL: https://issues.apache.org/jira/browse/PHOENIX-2868
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 65.464 sec <<< 
FAILURE! - in org.apache.phoenix.end2end.QueryServerBasicsIT
testSchemas(org.apache.phoenix.end2end.QueryServerBasicsIT)  Time elapsed: 
0.212 sec  <<< FAILURE!
java.lang.AssertionError: unexpected empty resultset
at 
org.apache.phoenix.end2end.QueryServerBasicsIT.testSchemas(QueryServerBasicsIT.java:103)

See https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/456/console for more 
info.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2867) IT tests for phoenix-flume fails with ClassNotFound Exception

2016-04-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264646#comment-15264646
 ] 

Hudson commented on PHOENIX-2867:
-

FAILURE: Integrated in Phoenix-master #1207 (See 
[https://builds.apache.org/job/Phoenix-master/1207/])
PHOENIX-2867 IT tests for phoenix-flume fails with ClassNotFound (samarth: rev 
7b7f3f64288e0c3a3847f0f94f206dd119739db7)
* phoenix-flume/pom.xml


> IT tests for phoenix-flume fails with ClassNotFound Exception
> -
>
> Key: PHOENIX-2867
> URL: https://issues.apache.org/jira/browse/PHOENIX-2867
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2867-1.patch
>
>
> When the IT tests run, the phoenix-flume tests fail with the following 
> exception:
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/commons/io/Charsets
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageUtil.<clinit>(FSImageUtil.java:36)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.<init>(FSImageFormatProtobuf.java:357)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:986)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:1039)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: org.apache.commons.io.Charsets
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 5 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2795) Support auto partition for views

2016-04-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264604#comment-15264604
 ] 

James Taylor commented on PHOENIX-2795:
---

+1. Looks great - thanks, [~tdsilva].

> Support auto partition for views
> 
>
> Key: PHOENIX-2795
> URL: https://issues.apache.org/jira/browse/PHOENIX-2795
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: argus
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2795-v2.patch, PHOENIX-2795-v3.patch, 
> PHOENIX-2795.patch
>
>
> When a view or base table is created, we should have a string 
> AUTO_PARTITION_SEQ parameter on CREATE TABLE which uses a sequence based on 
> the argument on the server side to generate a WHERE clause with the first PK 
> column and the unique identifier from the sequence.
> For example:
> {code}
> CREATE SEQUENCE metric_id_seq;
> CREATE TABLE metric_table (metric_id INTEGER, val DOUBLE) 
> AUTO_PARTITION_SEQ=metric_id_seq;
> CREATE VIEW my_view1 AS SELECT * FROM metric_table;
> {code}
> would tack on a WHERE clause based on the next value in a sequence, logically 
> like this:
> {code}
> WHERE partition_id =  NEXT VALUE FROM metric_id_seq
> {code}
> It's important that the sequence be generated *after* the check for the 
> existence of the view so that we don't burn sequence values needlessly if the 
> view already exists.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2795) Support auto partition for views

2016-04-29 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2795:

Attachment: PHOENIX-2795-v3.patch

[~jamestaylor]

I have added a test for a view statement with a complex where clause. The 
AndParseNode and OrParseNode toSql methods generate surrounding parens, so we 
should be OK without adding parens ourselves.

For tables that have the autoPartition property set, if a view's view 
statement is null I set it to QueryConstants.EMPTY_COLUMN_VALUE_BYTES (so that 
viewWhere is never null and we don't generate a Delete). So I just need to AND 
in the autoPartition where clause when viewWhere is not 
QueryConstants.EMPTY_COLUMN_VALUE_BYTES.

> Support auto partition for views
> 
>
> Key: PHOENIX-2795
> URL: https://issues.apache.org/jira/browse/PHOENIX-2795
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>  Labels: argus
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2795-v2.patch, PHOENIX-2795-v3.patch, 
> PHOENIX-2795.patch
>
>
> When a view or base table is created, we should have a string 
> AUTO_PARTITION_SEQ parameter on CREATE TABLE which uses a sequence based on 
> the argument on the server side to generate a WHERE clause with the first PK 
> column and the unique identifier from the sequence.
> For example:
> {code}
> CREATE SEQUENCE metric_id_seq;
> CREATE TABLE metric_table (metric_id INTEGER, val DOUBLE) 
> AUTO_PARTITION_SEQ=metric_id_seq;
> CREATE VIEW my_view1 AS SELECT * FROM metric_table;
> {code}
> would tack on a WHERE clause based on the next value in a sequence, logically 
> like this:
> {code}
> WHERE partition_id =  NEXT VALUE FROM metric_id_seq
> {code}
> It's important that the sequence be generated *after* the check for the 
> existence of the view so that we don't burn sequence values needlessly if the 
> view already exists.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2867) IT tests for phoenix-flume fails with ClassNotFound Exception

2016-04-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15264390#comment-15264390
 ] 

James Taylor commented on PHOENIX-2867:
---

Awesome, thanks, [~sergey.soldatov]. Would you mind committing, [~samarthjain]?

> IT tests for phoenix-flume fails with ClassNotFound Exception
> -
>
> Key: PHOENIX-2867
> URL: https://issues.apache.org/jira/browse/PHOENIX-2867
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2867-1.patch
>
>
> When the IT tests run, the phoenix-flume tests fail with the following 
> exception:
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/commons/io/Charsets
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageUtil.<clinit>(FSImageUtil.java:36)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.<init>(FSImageFormatProtobuf.java:357)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:986)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:1039)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: org.apache.commons.io.Charsets
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 5 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2867) IT tests for phoenix-flume fails with ClassNotFound Exception

2016-04-29 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-2867:
-
Attachment: PHOENIX-2867-1.patch

Moved the flume-ng dependency after the hbase/hadoop ones, so that flume-ng's 
older commons-io no longer shadows the more recent version used by hadoop. 
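At equal depth, Maven's dependency mediation picks the first-declared version, so declaration order decides which commons-io lands on the test classpath. A hedged sketch of the ordering (the exact artifactIds and the rest of phoenix-flume's pom may differ):

```xml
<!-- Hypothetical excerpt: declare hadoop/hbase before flume-ng so that
     Maven's "first declared wins at equal depth" mediation keeps their
     commons-io (>= 2.3, which has org.apache.commons.io.Charsets)
     instead of the commons-io 2.1 pulled in transitively by flume-ng. -->
<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
  </dependency>
  <!-- declared last: its older transitive commons-io loses mediation -->
  <dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
  </dependency>
</dependencies>
```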

> IT tests for phoenix-flume fails with ClassNotFound Exception
> -
>
> Key: PHOENIX-2867
> URL: https://issues.apache.org/jira/browse/PHOENIX-2867
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Attachments: PHOENIX-2867-1.patch
>
>
> When the IT tests run, the phoenix-flume tests fail with the following 
> exception:
> {noformat}
> java.lang.NoClassDefFoundError: org/apache/commons/io/Charsets
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageUtil.<clinit>(FSImageUtil.java:36)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.<init>(FSImageFormatProtobuf.java:357)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:986)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:1039)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: org.apache.commons.io.Charsets
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 5 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2867) IT tests for phoenix-flume fails with ClassNotFound Exception

2016-04-29 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-2867:


 Summary: IT tests for phoenix-flume fails with ClassNotFound 
Exception
 Key: PHOENIX-2867
 URL: https://issues.apache.org/jira/browse/PHOENIX-2867
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov


When the IT tests run, the phoenix-flume tests fail with the following 
exception:
{noformat}
java.lang.NoClassDefFoundError: org/apache/commons/io/Charsets
at 
org.apache.hadoop.hdfs.server.namenode.FSImageUtil.<clinit>(FSImageUtil.java:36)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Saver.<init>(FSImageFormatProtobuf.java:357)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImage(FSImage.java:986)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage$FSImageSaver.run(FSImage.java:1039)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.io.Charsets
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 5 more
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Jenkins build failures?

2016-04-29 Thread James Taylor
A patch would be much appreciated, Sergey.

On Fri, Apr 29, 2016 at 3:26 AM, Sergey Soldatov 
wrote:

> As for the flume module - flume-ng comes with commons-io 2.1, while
> hadoop & hbase require org.apache.commons.io.Charsets, which was
> introduced in 2.3. The easy fix is to move the flume-ng dependency
> after the hbase/hadoop dependencies.
>
> The last thing about ConcurrentHashMap - it definitely means that the
> code was compiled with 1.8, since keySet() returns a plain Set on 1.7
> while 1.8 returns a KeySetView.
>
>
>
> On Thu, Apr 28, 2016 at 4:08 PM, Josh Elser  wrote:
> > *tl;dr*
> >
> > * I'm removing ubuntu-us1 from all pools
> > * Phoenix-Flume ITs look busted
> > * UpsertValuesIT looks busted
> > * Something is weirdly wrong with Phoenix-4.x-HBase-1.1 in its entirety.
> >
> > Details below...
> >
> > It looks like we have a bunch of different reasons for the failures.
> > Starting with Phoenix-master:
> >
> 
> > org.apache.phoenix.schema.NewerTableAlreadyExistsException: ERROR 1013
> > (42M04): Table already exists. tableName=T
> > at
> >
> org.apache.phoenix.end2end.UpsertValuesIT.testBatchedUpsert(UpsertValuesIT.java:476)
> > <<<
> >
> > I've seen this coming out of a few different tests (I think I've also run
> > into it on my own, but that's another thing)
> >
> > Some of them look like the Jenkins build host is just over-taxed:
> >
> 
> > Java HotSpot(TM) 64-Bit Server VM warning: INFO:
> > os::commit_memory(0x0007e760, 331350016, 0) failed; error='Cannot
> > allocate memory' (errno=12)
> > #
> > # There is insufficient memory for the Java Runtime Environment to
> continue.
> > # Native memory allocation (malloc) failed to allocate 331350016 bytes
> for
> > committing reserved memory.
> > # An error report file with more information is saved as:
> > #
> >
> /home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-core/hs_err_pid26454.log
> > Java HotSpot(TM) 64-Bit Server VM warning: INFO:
> > os::commit_memory(0x0007ea60, 273678336, 0) failed; error='Cannot
> > allocate memory' (errno=12)
> > #
> > <<<
> >
> > and
> >
> 
> > ---
> >  T E S T S
> > ---
> > Build step 'Invoke top-level Maven targets' marked build as failure
> > <<<
> >
> > Both of these issues are limited to the host "ubuntu-us1". Let me just
> > remove him from the pool (on Phoenix-master) and see if that helps at
> all.
> >
> > I also see some sporadic failures of some Flume tests
> >
> 
> > Running org.apache.phoenix.flume.PhoenixSinkIT
> > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec
> > <<< FAILURE! - in org.apache.phoenix.flume.PhoenixSinkIT
> > org.apache.phoenix.flume.PhoenixSinkIT  Time elapsed: 0.004 sec  <<<
> ERROR!
> > java.lang.RuntimeException: java.io.IOException: Failed to save in any
> > storage directories while saving namespace.
> > Caused by: java.io.IOException: Failed to save in any storage directories
> > while saving namespace.
> >
> > Running org.apache.phoenix.flume.RegexEventSerializerIT
> > Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.005 sec
> > <<< FAILURE! - in org.apache.phoenix.flume.RegexEventSerializerIT
> > org.apache.phoenix.flume.RegexEventSerializerIT  Time elapsed: 0.004 sec
> > <<< ERROR!
> > java.lang.RuntimeException: java.io.IOException: Failed to save in any
> > storage directories while saving namespace.
> > Caused by: java.io.IOException: Failed to save in any storage directories
> > while saving namespace.
> > <<<
> >
> > I'm not sure what the error message means at a glance.
> >
> > For Phoenix-HBase-1.1:
> >
> 
> > org.apache.hadoop.hbase.DoNotRetryIOException:
> java.lang.NoSuchMethodError:
> >
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
> > at
> org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2156)
> > at
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> > at
> >
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> > at
> > org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> > at java.lang.Thread.run(Thread.java:745)
> > Caused by: java.lang.NoSuchMethodError:
> >
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
> > at
> >
> org.apache.hadoop.hbase.master.ServerManager.findServerWithSameHostnamePortWithLock(ServerManager.java:432)
> > at
> >
> org.apache.hadoop.hbase.master.ServerManager.checkAndRecordNewServer(ServerManager.java:346)
> > at
> >
> org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:264)
> > at
> >
> org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:318)
> > at
> >
> 

Re: Jenkins build failures?

2016-04-29 Thread Sergey Soldatov
As for the flume module - flume-ng comes with commons-io 2.1, while
hadoop & hbase require org.apache.commons.io.Charsets, which was
introduced in 2.3. The easy fix is to move the flume-ng dependency
after the hbase/hadoop dependencies.

The last thing about ConcurrentHashMap - it definitely means that the
code was compiled with 1.8, since keySet() returns a plain Set on 1.7
while 1.8 returns a KeySetView.
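This diagnosis matches a known Java 7/8 binary-compatibility pitfall: JDK 8 gave ConcurrentHashMap.keySet() the covariant return type ConcurrentHashMap.KeySetView, so code compiled against JDK 8 class files records that descriptor in its bytecode and throws NoSuchMethodError on a Java 7 runtime. A minimal sketch of the pitfall and a source-level workaround (calling through the Map interface, which pins the descriptor to Map.keySet()Ljava/util/Set;); the class name here is illustrative:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KeySetCompat {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> chm = new ConcurrentHashMap<>();
        chm.put("a", 1);

        // Through a ConcurrentHashMap-typed reference, javac on JDK 8 emits
        // keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView; - a
        // method that does not exist on a Java 7 runtime, hence the
        // NoSuchMethodError when 8-compiled bytecode runs on 7.
        Set<String> risky = chm.keySet();

        // Through the Map interface the call site is pinned to
        // Map.keySet()Ljava/util/Set;, which exists on both runtimes.
        Map<String, Integer> asMap = chm;
        Set<String> safe = asMap.keySet();

        System.out.println(risky.contains("a") && safe.contains("a"));
    }
}
```

On a Java 8+ runtime both calls succeed and this prints true; the first call is the one that breaks when 8-compiled bytecode runs on 7, which is why building with the target JDK (or its bootclasspath) is the real fix.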



On Thu, Apr 28, 2016 at 4:08 PM, Josh Elser  wrote:
> *tl;dr*
>
> * I'm removing ubuntu-us1 from all pools
> * Phoenix-Flume ITs look busted
> * UpsertValuesIT looks busted
> * Something is weirdly wrong with Phoenix-4.x-HBase-1.1 in its entirety.
>
> Details below...
>
> It looks like we have a bunch of different reasons for the failures.
> Starting with Phoenix-master:
>

> org.apache.phoenix.schema.NewerTableAlreadyExistsException: ERROR 1013
> (42M04): Table already exists. tableName=T
> at
> org.apache.phoenix.end2end.UpsertValuesIT.testBatchedUpsert(UpsertValuesIT.java:476)
> <<<
>
> I've seen this coming out of a few different tests (I think I've also run
> into it on my own, but that's another thing)
>
> Some of them look like the Jenkins build host is just over-taxed:
>

> Java HotSpot(TM) 64-Bit Server VM warning: INFO:
> os::commit_memory(0x0007e760, 331350016, 0) failed; error='Cannot
> allocate memory' (errno=12)
> #
> # There is insufficient memory for the Java Runtime Environment to continue.
> # Native memory allocation (malloc) failed to allocate 331350016 bytes for
> committing reserved memory.
> # An error report file with more information is saved as:
> #
> /home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-core/hs_err_pid26454.log
> Java HotSpot(TM) 64-Bit Server VM warning: INFO:
> os::commit_memory(0x0007ea60, 273678336, 0) failed; error='Cannot
> allocate memory' (errno=12)
> #
> <<<
>
> and
>

> ---
>  T E S T S
> ---
> Build step 'Invoke top-level Maven targets' marked build as failure
> <<<
>
> Both of these issues are limited to the host "ubuntu-us1". Let me just
> remove him from the pool (on Phoenix-master) and see if that helps at all.
>
> I also see some sporadic failures of some Flume tests
>

> Running org.apache.phoenix.flume.PhoenixSinkIT
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.004 sec
> <<< FAILURE! - in org.apache.phoenix.flume.PhoenixSinkIT
> org.apache.phoenix.flume.PhoenixSinkIT  Time elapsed: 0.004 sec  <<< ERROR!
> java.lang.RuntimeException: java.io.IOException: Failed to save in any
> storage directories while saving namespace.
> Caused by: java.io.IOException: Failed to save in any storage directories
> while saving namespace.
>
> Running org.apache.phoenix.flume.RegexEventSerializerIT
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.005 sec
> <<< FAILURE! - in org.apache.phoenix.flume.RegexEventSerializerIT
> org.apache.phoenix.flume.RegexEventSerializerIT  Time elapsed: 0.004 sec
> <<< ERROR!
> java.lang.RuntimeException: java.io.IOException: Failed to save in any
> storage directories while saving namespace.
> Caused by: java.io.IOException: Failed to save in any storage directories
> while saving namespace.
> <<<
>
> I'm not sure what the error message means at a glance.
>
> For Phoenix-HBase-1.1:
>

> org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.NoSuchMethodError:
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2156)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoSuchMethodError:
> java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
> at
> org.apache.hadoop.hbase.master.ServerManager.findServerWithSameHostnamePortWithLock(ServerManager.java:432)
> at
> org.apache.hadoop.hbase.master.ServerManager.checkAndRecordNewServer(ServerManager.java:346)
> at
> org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:264)
> at
> org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:318)
> at
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8615)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2117)
> ... 4 more
> 2016-04-28 22:54:35,497 WARN  [RS:0;hemera:41302]
> org.apache.hadoop.hbase.regionserver.HRegionServer(2279): error telling
> master we are up
>