[ https://issues.apache.org/jira/browse/YARN-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520599#comment-14520599 ]

Hadoop QA commented on YARN-3134:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 47s | Pre-patch YARN-2928 compilation is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 2 new or modified test files. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:red}-1{color} | javac |   7m 58s | The applied patch generated  8  additional warning messages. |
| {color:green}+1{color} | javadoc |   9m 41s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   4m  5s | The applied patch generated  2 additional checkstyle issues. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 38s | The patch built with eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   0m 41s | The patch appears to introduce 10 new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | yarn tests |   0m 23s | Tests passed in hadoop-yarn-server-timelineservice. |
| | |  40m 18s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-timelineservice |
|  |  Found reliance on default encoding in org.apache.hadoop.yarn.server.timelineservice.storage.FileSystemTimelineWriterImpl.write(String, String, String, String, long, String, TimelineEntity, TimelineWriteResponse): new java.io.FileWriter(String, boolean)  At FileSystemTimelineWriterImpl.java:[line 86] |
|  |  org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl.tryInitTable() may fail to clean up java.sql.Statement on checked exception  Obligation to clean up resource created at PhoenixTimelineWriterImpl.java:[line 227] is not discharged |
|  |  org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl.executeQuery(String) may fail to close Statement  At PhoenixTimelineWriterImpl.java:[line 492] |
|  |  A prepared statement is generated from a nonconstant String in org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl.storeEntityVariableLengthFields(TimelineEntity, TimelineCollectorContext, Connection)  At PhoenixTimelineWriterImpl.java:[line 389] |
|  |  A prepared statement is generated from a nonconstant String in org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl.storeEvents(TimelineEntity, TimelineCollectorContext, Connection)  At PhoenixTimelineWriterImpl.java:[line 476] |
|  |  A prepared statement is generated from a nonconstant String in org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl.storeMetrics(TimelineEntity, TimelineCollectorContext, Connection)  At PhoenixTimelineWriterImpl.java:[line 433] |
|  |  A prepared statement is generated from a nonconstant String in org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl.write(String, String, String, String, long, String, TimelineEntities)  At PhoenixTimelineWriterImpl.java:[line 167] |
|  |  org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl.setBytesForColumnFamily(PreparedStatement, Map, int) makes inefficient use of keySet iterator instead of entrySet iterator  At PhoenixTimelineWriterImpl.java:[line 319] |
|  |  org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl.setStringsForColumnFamily(PreparedStatement, Map, int) makes inefficient use of keySet iterator instead of entrySet iterator  At PhoenixTimelineWriterImpl.java:[line 303] |
|  |  Should org.apache.hadoop.yarn.server.timelineservice.storage.PhoenixTimelineWriterImpl$ColumnFamilyInfo be a _static_ inner class?  At PhoenixTimelineWriterImpl.java:[lines 277-280] |
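
The "reliance on default encoding" warning above points at the {{new java.io.FileWriter(String, boolean)}} call in FileSystemTimelineWriterImpl. A minimal sketch of the usual remedy, wrapping a FileOutputStream with an explicit charset; the method name and file handling below are illustrative and not taken from the patch:

{code}
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class ExplicitCharsetWrite {
  // Append a line to a file using an explicit charset instead of the
  // platform default encoding that FileWriter silently relies on.
  static void appendLine(String path, String line) throws IOException {
    try (Writer out = new BufferedWriter(new OutputStreamWriter(
        new FileOutputStream(path, true), StandardCharsets.UTF_8))) {
      out.write(line);
      out.write(System.lineSeparator());
    }
  }
}
{code}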
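The two resource warnings ("may fail to clean up java.sql.Statement" and "may fail to close Statement") generally mean a Statement is created but not closed on every exception path. A hedged sketch of the try-with-resources shape FindBugs expects, with an illustrative DDL string rather than the actual SQL in PhoenixTimelineWriterImpl:

{code}
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class StatementCleanup {
  // try-with-resources closes the Statement even when executeUpdate()
  // throws, which discharges the "obligation to clean up resource"
  // that FindBugs tracks.
  static void tryInitTable(Connection conn, String createTableSql)
      throws SQLException {
    try (Statement stmt = conn.createStatement()) {
      stmt.executeUpdate(createTableSql);
    }
  }
}
{code}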
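The four "prepared statement is generated from a nonconstant String" warnings flag SQL that is assembled by string concatenation before being prepared. Values should go through {{?}} bind parameters; only identifiers that genuinely cannot be parameterized (such as Phoenix dynamic column names) need to be concatenated. A rough sketch with a made-up table and columns, not the schema in the patch:

{code}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ParameterizedUpsert {
  // All values are bound through parameters; nothing caller-supplied is
  // concatenated into the SQL text itself.
  static void storeEvent(Connection conn, String entityId, String eventId,
      long timestamp) throws SQLException {
    String sql = "UPSERT INTO TIMELINE_EVENT (ENTITY_ID, EVENT_ID, TS) "
        + "VALUES (?, ?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setString(1, entityId);
      ps.setString(2, eventId);
      ps.setLong(3, timestamp);
      ps.executeUpdate();
    }
  }
}
{code}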
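The two "inefficient use of keySet iterator" warnings are about looping over {{map.keySet()}} and calling {{map.get(key)}} inside the loop; iterating {{entrySet()}} gives key and value in one pass. A small sketch with hypothetical parameter binding, not the actual setBytesForColumnFamily/setStringsForColumnFamily code:

{code}
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

public class EntrySetIteration {
  // entrySet() iteration avoids the extra get() lookup per key that a
  // keySet() loop would perform.
  static int bindStrings(PreparedStatement ps, Map<String, String> values,
      int startIndex) throws SQLException {
    int idx = startIndex;
    for (Map.Entry<String, String> entry : values.entrySet()) {
      ps.setString(idx++, entry.getKey());
      ps.setString(idx++, entry.getValue());
    }
    return idx;
  }
}
{code}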
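The last warning ("Should ... $ColumnFamilyInfo be a _static_ inner class?") notes that an inner class which never touches its enclosing instance can be declared {{static}}, dropping the implicit reference to the outer object. A trivial sketch with invented fields:

{code}
public class PhoenixWriterSketch {
  // Declared static because it never uses the enclosing instance, so the
  // hidden reference to the outer class is removed.
  private static class ColumnFamilyInfo {
    private final String tableName;
    private final String columnFamilyPrefix;

    ColumnFamilyInfo(String tableName, String columnFamilyPrefix) {
      this.tableName = tableName;
      this.columnFamilyPrefix = columnFamilyPrefix;
    }
  }
}
{code}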
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12729345/YARN-3134-YARN-2928.001.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | YARN-2928 / b689f5d |
| javac | https://builds.apache.org/job/PreCommit-YARN-Build/7547/artifact/patchprocess/diffJavacWarnings.txt |
| checkstyle | https://builds.apache.org/job/PreCommit-YARN-Build/7547/artifact/patchprocess/checkstyle-result-diff.txt |
| Findbugs warnings | https://builds.apache.org/job/PreCommit-YARN-Build/7547/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-timelineservice.html |
| hadoop-yarn-server-timelineservice test log | https://builds.apache.org/job/PreCommit-YARN-Build/7547/artifact/patchprocess/testrun_hadoop-yarn-server-timelineservice.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/7547/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/7547/console |


This message was automatically generated.

> [Storage implementation] Exploiting the option of using Phoenix to access 
> HBase backend
> ---------------------------------------------------------------------------------------
>
>                 Key: YARN-3134
>                 URL: https://issues.apache.org/jira/browse/YARN-3134
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Zhijie Shen
>            Assignee: Li Lu
>         Attachments: SettingupPhoenixstorageforatimelinev2end-to-endtest.pdf, 
> YARN-3134-040915_poc.patch, YARN-3134-041015_poc.patch, 
> YARN-3134-041415_poc.patch, YARN-3134-042115.patch, YARN-3134-042715.patch, 
> YARN-3134-YARN-2928.001.patch, YARN-3134DataSchema.pdf
>
>
> Quote the introduction on Phoenix web page:
> {code}
> Apache Phoenix is a relational database layer over HBase delivered as a 
> client-embedded JDBC driver targeting low latency queries over HBase data. 
> Apache Phoenix takes your SQL query, compiles it into a series of HBase 
> scans, and orchestrates the running of those scans to produce regular JDBC 
> result sets. The table metadata is stored in an HBase table and versioned, 
> such that snapshot queries over prior versions will automatically use the 
> correct schema. Direct use of the HBase API, along with coprocessors and 
> custom filters, results in performance on the order of milliseconds for small 
> queries, or seconds for tens of millions of rows.
> {code}
> It may simplify how our implementation reads/writes data from/to HBase, and make it easier to build indexes and compose complex queries.
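
Since Phoenix is exposed as a client-embedded JDBC driver, a writer implementation can talk to it through plain JDBC. A minimal sketch, assuming a local Phoenix/HBase setup reachable at {{jdbc:phoenix:localhost}} and an illustrative table name; none of the connection details or schema below are taken from the attached patches:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PhoenixJdbcSketch {
  public static void main(String[] args) throws SQLException {
    // The Phoenix thick client hides everything behind a JDBC URL;
    // "localhost" stands in for the real ZooKeeper quorum.
    try (Connection conn =
        DriverManager.getConnection("jdbc:phoenix:localhost")) {
      try (PreparedStatement create = conn.prepareStatement(
          "CREATE TABLE IF NOT EXISTS TIMELINE_DEMO "
              + "(ENTITY_ID VARCHAR PRIMARY KEY, CREATED_TIME BIGINT)")) {
        create.execute();
      }
      try (PreparedStatement upsert = conn.prepareStatement(
          "UPSERT INTO TIMELINE_DEMO VALUES (?, ?)")) {
        upsert.setString(1, "entity_1");
        upsert.setLong(2, System.currentTimeMillis());
        upsert.executeUpdate();
      }
      conn.commit();  // Phoenix connections do not autocommit by default
      try (PreparedStatement query = conn.prepareStatement(
              "SELECT ENTITY_ID, CREATED_TIME FROM TIMELINE_DEMO");
           ResultSet rs = query.executeQuery()) {
        while (rs.next()) {
          System.out.println(rs.getString(1) + " @ " + rs.getLong(2));
        }
      }
    }
  }
}
{code}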



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
