[ https://issues.apache.org/jira/browse/YARN-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508025#comment-14508025 ]

Li Lu commented on YARN-3134:
-----------------------------

Hi [~vrushalic], sure, I will add isRelatedTo and relatesTo since YARN-3431 is 
close to being finished. For the metrics, my thought is that we may need some 
time-based aggregations, such as taking the average (or max) of time series 
data points and storing the results in an aggregated table. The "precision" 
table for now serves as the raw data table; users can query the aggregation 
table(s) for data points per hour, per day, and so on. Timestamp information is 
split into two parts: the epoch information, marked by the startTime and 
endTime of the metric object, and the actual time of each point in the time 
series. The epoch start and end times are used as primary keys in the Phoenix 
storage for better indexing, while the detailed time of each point is stored 
with the time series itself. We can certainly discuss this design further, 
though...
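
To make the idea concrete, here is a rough sketch of what the two-level layout 
could look like; the table and column names below are made up for illustration 
and are not the actual schema in the patch or the attached PDF:

{code}
// Hypothetical Phoenix DDL for a raw "precision" table plus an hourly
// aggregation table, issued through the standard JDBC API.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MetricSchemaSketch {
  public static void main(String[] args) throws Exception {
    // "localhost" stands in for the ZooKeeper quorum of the HBase cluster.
    try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
         Statement stmt = conn.createStatement()) {

      // Raw time series ("precision") table: one row per data point; the epoch
      // start/end of the metric object sit in the primary key for indexing.
      stmt.executeUpdate(
          "CREATE TABLE IF NOT EXISTS METRIC_PRECISION ("
        + " METRIC_ID VARCHAR NOT NULL,"
        + " START_TIME UNSIGNED_LONG NOT NULL,"    // epoch start of the metric
        + " END_TIME UNSIGNED_LONG NOT NULL,"      // epoch end of the metric
        + " POINT_TIME UNSIGNED_LONG NOT NULL,"    // timestamp of this data point
        + " POINT_VALUE DOUBLE"
        + " CONSTRAINT pk PRIMARY KEY (METRIC_ID, START_TIME, END_TIME, POINT_TIME))");

      // Hourly aggregation table: one row per metric per hour bucket, holding
      // the rolled-up average and max of the raw points in that hour.
      stmt.executeUpdate(
          "CREATE TABLE IF NOT EXISTS METRIC_HOURLY ("
        + " METRIC_ID VARCHAR NOT NULL,"
        + " HOUR_BUCKET UNSIGNED_LONG NOT NULL,"   // hour-aligned epoch millis
        + " AVG_VALUE DOUBLE,"
        + " MAX_VALUE DOUBLE"
        + " CONSTRAINT pk PRIMARY KEY (METRIC_ID, HOUR_BUCKET))");
    }
  }
}
{code}

An aggregation job would then periodically UPSERT the rolled-up avg/max rows 
into the hourly table, so readers asking for per-hour (or per-day) data points 
can hit the small aggregation table instead of scanning the raw precision 
table.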

> [Storage implementation] Exploiting the option of using Phoenix to access 
> HBase backend
> ---------------------------------------------------------------------------------------
>
>                 Key: YARN-3134
>                 URL: https://issues.apache.org/jira/browse/YARN-3134
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: timelineserver
>            Reporter: Zhijie Shen
>            Assignee: Li Lu
>         Attachments: YARN-3134-040915_poc.patch, YARN-3134-041015_poc.patch, 
> YARN-3134-041415_poc.patch, YARN-3134-042115.patch, YARN-3134DataSchema.pdf
>
>
> Quote the introduction on Phoenix web page:
> {code}
> Apache Phoenix is a relational database layer over HBase delivered as a 
> client-embedded JDBC driver targeting low latency queries over HBase data. 
> Apache Phoenix takes your SQL query, compiles it into a series of HBase 
> scans, and orchestrates the running of those scans to produce regular JDBC 
> result sets. The table metadata is stored in an HBase table and versioned, 
> such that snapshot queries over prior versions will automatically use the 
> correct schema. Direct use of the HBase API, along with coprocessors and 
> custom filters, results in performance on the order of milliseconds for small 
> queries, or seconds for tens of millions of rows.
> {code}
> It may simplify our implementation of reading/writing data from/to HBase, and 
> make it easy to build indexes and compose complex queries.
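
Since Phoenix is delivered as a client-embedded JDBC driver, reading the data 
back is plain JDBC; a minimal sketch, where the connection string and the 
METRIC_HOURLY table/columns are hypothetical placeholders rather than anything 
defined by this patch:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PhoenixReadSketch {
  public static void main(String[] args) throws Exception {
    // "localhost" stands in for the ZooKeeper quorum of the HBase cluster.
    try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
         PreparedStatement ps = conn.prepareStatement(
             "SELECT HOUR_BUCKET, AVG_VALUE, MAX_VALUE FROM METRIC_HOURLY"
           + " WHERE METRIC_ID = ? AND HOUR_BUCKET BETWEEN ? AND ?"
           + " ORDER BY HOUR_BUCKET")) {
      ps.setString(1, "container.memory");   // hypothetical metric id
      ps.setLong(2, 1429920000000L);         // sample hour-aligned epoch millis
      ps.setLong(3, 1430006400000L);
      // Phoenix compiles the SQL into HBase scans and returns a regular ResultSet.
      try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
          System.out.printf("%d avg=%.2f max=%.2f%n",
              rs.getLong(1), rs.getDouble(2), rs.getDouble(3));
        }
      }
    }
  }
}
{code}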



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
