[
https://issues.apache.org/jira/browse/YARN-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Li Lu updated YARN-3134:
------------------------
Attachment: YARN-3134-YARN-2928.003.patch
Updated my patch according to the latest comments. I've rebased the patch onto
the latest YARN-2928 branch, which now includes YARN-3551. In this version we are
no longer swallowing exceptions. I have not changed the Phoenix connection
string since, per our previous discussion, we plan to address that after we
have decided which implementation to pursue going forward.
A special note to [~zjshen]: I'm not sure my current way of accessing the
"singleData" section of a TimelineMetric is correct (since the field no longer
exists). It would be great if you could take a look at it. Thanks!
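For reference, a minimal sketch of what the "no longer swallowing exceptions" behavior could look like in the Phoenix write path; the class, method, and table names below are hypothetical and are not the exact ones in the patch:
{code}
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class PhoenixWriteSketch {
  // Propagate a Phoenix write failure to the caller instead of only logging it.
  void storeEntity(Connection conn, String entityId) throws IOException {
    String sql = "UPSERT INTO TIMELINE_ENTITY (ENTITY_ID) VALUES (?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setString(1, entityId);
      ps.executeUpdate();
      conn.commit();
    } catch (SQLException se) {
      // Before: the exception was caught and only logged, so callers never saw it.
      // After: surface the failure so the collector can handle or report it.
      throw new IOException("Phoenix write failed for entity " + entityId, se);
    }
  }
}
{code}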
> [Storage implementation] Exploiting the option of using Phoenix to access
> HBase backend
> ---------------------------------------------------------------------------------------
>
> Key: YARN-3134
> URL: https://issues.apache.org/jira/browse/YARN-3134
> Project: Hadoop YARN
> Issue Type: Sub-task
> Components: timelineserver
> Reporter: Zhijie Shen
> Assignee: Li Lu
> Attachments: SettingupPhoenixstorageforatimelinev2end-to-endtest.pdf,
> YARN-3134-040915_poc.patch, YARN-3134-041015_poc.patch,
> YARN-3134-041415_poc.patch, YARN-3134-042115.patch, YARN-3134-042715.patch,
> YARN-3134-YARN-2928.001.patch, YARN-3134-YARN-2928.002.patch,
> YARN-3134-YARN-2928.003.patch, YARN-3134DataSchema.pdf
>
>
> Quote the introduction on Phoenix web page:
> {code}
> Apache Phoenix is a relational database layer over HBase delivered as a
> client-embedded JDBC driver targeting low latency queries over HBase data.
> Apache Phoenix takes your SQL query, compiles it into a series of HBase
> scans, and orchestrates the running of those scans to produce regular JDBC
> result sets. The table metadata is stored in an HBase table and versioned,
> such that snapshot queries over prior versions will automatically use the
> correct schema. Direct use of the HBase API, along with coprocessors and
> custom filters, results in performance on the order of milliseconds for small
> queries, or seconds for tens of millions of rows.
> {code}
> It may simplify how our implementation reads/writes data from/to HBase, and
> makes it easy to build indexes and compose complex queries.
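As a rough illustration of the JDBC-style access described above, here is a minimal, self-contained sketch of upserting and querying HBase-backed data through the Phoenix driver; the TIMELINE_ENTITY table, its columns, and the "localhost" quorum are placeholders rather than the schema proposed in this JIRA:
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixJdbcSketch {
  public static void main(String[] args) throws Exception {
    // "localhost" is a placeholder ZooKeeper quorum for the HBase cluster.
    try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
      conn.setAutoCommit(false);

      // Create a placeholder table; Phoenix stores its metadata in an HBase table.
      try (Statement st = conn.createStatement()) {
        st.execute("CREATE TABLE IF NOT EXISTS TIMELINE_ENTITY ("
            + "ENTITY_ID VARCHAR NOT NULL, ENTITY_TYPE VARCHAR, CREATED_TIME BIGINT "
            + "CONSTRAINT pk PRIMARY KEY (ENTITY_ID))");
      }

      // Phoenix uses UPSERT rather than INSERT; this compiles into HBase puts.
      try (PreparedStatement ps = conn.prepareStatement(
          "UPSERT INTO TIMELINE_ENTITY (ENTITY_ID, ENTITY_TYPE, CREATED_TIME) "
              + "VALUES (?, ?, ?)")) {
        ps.setString(1, "app_0001");
        ps.setString(2, "YARN_APPLICATION");
        ps.setLong(3, System.currentTimeMillis());
        ps.executeUpdate();
      }
      conn.commit();

      // The SQL query is compiled into HBase scans and returned as a JDBC result set.
      try (PreparedStatement ps = conn.prepareStatement(
          "SELECT ENTITY_ID, CREATED_TIME FROM TIMELINE_ENTITY WHERE ENTITY_TYPE = ?")) {
        ps.setString(1, "YARN_APPLICATION");
        try (ResultSet rs = ps.executeQuery()) {
          while (rs.next()) {
            System.out.println(rs.getString(1) + " " + rs.getLong(2));
          }
        }
      }
    }
  }
}
{code}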
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)