[ https://issues.apache.org/jira/browse/PHOENIX-3532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15752860#comment-15752860 ]
Nico Pappagianis commented on PHOENIX-3532:
-------------------------------------------

The tests in PhoenixSparkITTenant are not being run here. They pass when run in IntelliJ, so I'm not sure what `mvn verify` is doing differently. You can see below that only the 30 tests in PhoenixSparkIT are being run.

- Can create schema RDD and execute query
- Can create schema RDD and execute query on case sensitive table (no config)
- Can create schema RDD and execute constrained query
- Can create schema RDD with predicate that will never match
- Can create schema RDD with complex predicate
- Can query an array table
- Can read a table as an RDD
- Can save to phoenix table
- Can save Java and Joda dates to Phoenix (no config)
- Can infer schema without defining columns
- Spark SQL can use Phoenix as a data source with no schema specified
- Spark SQL can use Phoenix as a data source with PrunedFilteredScan
- Can persist a dataframe using 'DataFrame.saveToPhoenix'
- Can persist a dataframe using 'DataFrame.save()'
- Can save arrays back to phoenix
- Can read from table with schema and escaped table name
- Ensure DataFrame field normalization (PHOENIX-2196)
- Ensure Dataframe supports LIKE and IN filters (PHOENIX-2328)
- Can load decimal types with accurate precision and scale (PHOENIX-2288)
- Can load small and tiny integeger types (PHOENIX-2426)
- Can save arrays from custom dataframes back to phoenix
- Can save arrays of AnyVal type back to phoenix
- Can save arrays of Byte type back to phoenix
- Can save binary types back to phoenix
- Can load Phoenix DATE columns through DataFrame API
- Filter operation doesn't work for column names containing a white space (PHOENIX-2547)
- Spark Phoenix cannot recognize Phoenix view fields (PHOENIX-2290)
- Queries with small case column-names return empty result-set when working with Spark Datasource Plugin (PHOENIX-2336)
- Can coerce Phoenix DATE columns to TIMESTAMP through DataFrame API

Run completed in 2 minutes, 22 seconds.
Total number of tests run: 30
Suites: completed 3, aborted 1
Tests: succeeded 30, failed 0, canceled 0, ignored 0, pending 0

> Enable DataFrames and RDDs to read from a tenant-specific table
> ---------------------------------------------------------------
>
>                 Key: PHOENIX-3532
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3532
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Nico Pappagianis
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the methods phoenixTableAsDataFrame in SparkSqlContextFunctions and phoenixTableAsRDD in SparkContextFunctions do not pass the tenantId parameter along to the PhoenixRDD constructor. The tenantId parameter was added as part of PHOENIX-3427 but was not properly implemented (by me). This JIRA will fix that and add tests around reading tenant-specific tables as both DataFrames and RDDs.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
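The bug described above is a parameter-threading mistake: a wrapper method accepts tenantId but drops it when constructing the underlying PhoenixRDD. A minimal sketch of that pattern in plain Scala, with no Phoenix or Spark dependencies; PhoenixRDD and the method names here are simplified stand-ins mirroring the JIRA description, not the actual Phoenix classes:

```scala
// Hypothetical stand-in for org.apache.phoenix.spark.PhoenixRDD; the real
// constructor takes many more arguments (SparkContext, table, columns, etc.).
case class PhoenixRDD(table: String, tenantId: Option[String])

object SparkContextFunctionsSketch {
  // Buggy shape (pre-fix): tenantId is accepted but never forwarded,
  // so the query always runs against the global (non-tenant) view.
  def phoenixTableAsRDDBuggy(table: String,
                             tenantId: Option[String] = None): PhoenixRDD =
    PhoenixRDD(table, tenantId = None) // bug: parameter silently dropped

  // Fixed shape: tenantId is threaded through to the constructor,
  // so tenant-specific tables can actually be read.
  def phoenixTableAsRDD(table: String,
                        tenantId: Option[String] = None): PhoenixRDD =
    PhoenixRDD(table, tenantId)

  def main(args: Array[String]): Unit = {
    val buggy = phoenixTableAsRDDBuggy("MY_TABLE", Some("tenant1"))
    val fixed = phoenixTableAsRDD("MY_TABLE", Some("tenant1"))
    println(s"buggy tenantId: ${buggy.tenantId}, fixed tenantId: ${fixed.tenantId}")
  }
}
```

The fix itself is just forwarding one argument; the harder part, as the comment above notes, is getting the new PhoenixSparkITTenant suite to actually execute under `mvn verify` rather than only in the IDE.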