Github user alexliu68 commented on the pull request:
https://github.com/apache/spark/pull/3848#issuecomment-68409413
I rebased the commit to simply fix the parsing issue and concatenate the
cluster name, database name, and table name into a single dot-separated
string. Now it can parse a [clusterName].[databaseName].[tableName] style
full table name and pass it as tableName to the catalog.
The following example query works for the Cassandra integration,
e.g. `SELECT table.column FROM cluster.database.table`
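The concatenation approach can be sketched as follows. This is an illustrative sketch only, not the actual Spark SQL parser code; the function name and its contract are assumptions for demonstration:

```python
# Illustrative sketch (not the actual Spark SQL code): after the parser
# splits a qualified name into its parts, join them back into one
# dot-separated string, which is then passed as tableName to the catalog.
def to_catalog_name(parts):
    """Accepts 1 to 3 parts: [table], [db, table], or [cluster, db, table]."""
    if not 1 <= len(parts) <= 3:
        raise ValueError(f"expected 1 to 3 name parts, got {len(parts)}")
    return ".".join(parts)

# A catalog that understands full table names can then split the string
# back apart on the dots when resolving the table.
```

Usage: `to_catalog_name(["cluster", "database", "table"])` yields the single string `"cluster.database.table"` that the catalog receives.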
I leave the refactoring needed to properly support full table names to
future work. Ideally, Spark SQL should be able to join data across
catalogs, clusters, databases, and tables, i.e. support joins at four
levels: catalog, cluster, database, and table.