mayankshriv commented on a change in pull request #5787:
URL: https://github.com/apache/incubator-pinot/pull/5787#discussion_r464474155



##########
File path: pinot-connectors/pinot-spark-connector/src/test/resources/schema/pinot-schema.json
##########
@@ -0,0 +1,57 @@
+{

Review comment:
       Perhaps we should have a PinotSchema <-> SparkSchema converter?
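
   A rough one-way sketch of what I have in mind, assuming the Pinot SPI classes (`org.apache.pinot.spi.data.Schema` and `FieldSpec`) are on the classpath; the Spark -> Pinot direction would be the mirror image:

```scala
import scala.collection.JavaConverters._

import org.apache.pinot.spi.data.{FieldSpec, Schema => PinotSchema}
import org.apache.spark.sql.types._

object PinotSchemaConverter {

  // Map a Pinot column type to the closest Spark SQL type.
  private def toSparkType(dataType: FieldSpec.DataType): DataType =
    dataType match {
      case FieldSpec.DataType.INT    => IntegerType
      case FieldSpec.DataType.LONG   => LongType
      case FieldSpec.DataType.FLOAT  => FloatType
      case FieldSpec.DataType.DOUBLE => DoubleType
      case FieldSpec.DataType.STRING => StringType
      case FieldSpec.DataType.BYTES  => BinaryType
      case other =>
        throw new IllegalArgumentException(s"Unsupported Pinot data type: $other")
    }

  // Pinot -> Spark: multi-value Pinot columns map to Spark array columns.
  def toSparkSchema(pinotSchema: PinotSchema): StructType =
    StructType(
      pinotSchema.getAllFieldSpecs.asScala.toSeq.map { fieldSpec =>
        val baseType = toSparkType(fieldSpec.getDataType)
        val sparkType =
          if (fieldSpec.isSingleValueField) baseType else ArrayType(baseType)
        StructField(fieldSpec.getName, sparkType, nullable = false)
      }
    )
}
```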

##########
File path: pinot-connectors/pinot-spark-connector/src/test/scala/org/apache/pinot/connector/spark/connector/PinotSplitterTest.scala
##########
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.connector.spark.connector
+
+import org.apache.pinot.connector.spark.BaseTest
+import org.apache.pinot.connector.spark.connector.Constants.PinotTableTypes
+import org.apache.pinot.connector.spark.connector.query.GeneratedSQLs
+import org.apache.pinot.connector.spark.exceptions.PinotException
+
+class PinotSplitterTest extends BaseTest {

Review comment:
       Could we have docs on all classes?
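
   Even a one-liner per class would help, e.g. something along these lines (the description is only my guess from the class name):

```scala
/**
 * Tests for [[PinotSplitter]]: verifies that segments from the broker's
 * routing table are grouped into the expected Spark read splits, and that
 * invalid inputs are rejected with [[PinotException]].
 */
class PinotSplitterTest extends BaseTest {
  // ...
}
```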

##########
File path: pinot-connectors/pinot-spark-connector/documentation/read_model.md
##########
@@ -0,0 +1,145 @@
+<!--
+
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+
+-->
+# Read Model

Review comment:
       Should this also go into https://docs.pinot.apache.org, after the PR is committed?

##########
File path: pinot-connectors/pinot-spark-connector/src/main/scala/org/apache/pinot/connector/spark/datasource/PinotDataSourceV2.scala
##########
@@ -0,0 +1,36 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.connector.spark.datasource
+
+import org.apache.spark.sql.sources.DataSourceRegister
+import org.apache.spark.sql.sources.v2.reader.DataSourceReader
+import org.apache.spark.sql.sources.v2.{DataSourceOptions, DataSourceV2, ReadSupport}
+import org.apache.spark.sql.types.StructType
+
+class PinotDataSourceV2 extends DataSourceV2 with ReadSupport with DataSourceRegister {

Review comment:
       Adding docs will help, especially for folks not yet hands-on with Scala.
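
   A hedged usage sketch for the docs (the `pinot` short name and the option keys below are my assumptions about how this connector registers itself through `DataSourceRegister`):

```scala
import org.apache.spark.sql.SparkSession

object PinotReadExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("pinot-read-example")
      .getOrCreate()

    // DataSourceRegister lets callers use a short name instead of the
    // fully qualified class name of PinotDataSourceV2.
    val df = spark.read
      .format("pinot")                        // assumed shortName()
      .option("table", "airlineStats")        // illustrative option keys
      .option("controller", "localhost:9000")
      .option("broker", "localhost:8000")
      .load()

    df.printSchema()
    df.show(10)

    spark.stop()
  }
}
```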

##########
File path: pinot-connectors/pinot-spark-connector/src/main/scala/org/apache/pinot/connector/spark/datasource/PinotDataSourceReader.scala
##########
@@ -0,0 +1,124 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.connector.spark.datasource
+
+import java.util.{List => JList}
+
+import org.apache.pinot.connector.spark.connector.query.SQLSelectionQueryGenerator
+import org.apache.pinot.connector.spark.connector.{
+  FilterPushDown,
+  PinotClusterClient,
+  PinotSplitter,
+  PinotUtils
+}
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.sources._
+import org.apache.spark.sql.sources.v2.DataSourceOptions
+import org.apache.spark.sql.sources.v2.reader.{
+  DataSourceReader,
+  InputPartition,
+  SupportsPushDownFilters,
+  SupportsPushDownRequiredColumns
+}
+import org.apache.spark.sql.types._
+
+import scala.collection.JavaConverters._
+
+class PinotDataSourceReader(options: DataSourceOptions, userSchema: Option[StructType] = None)
+  extends DataSourceReader
+  with SupportsPushDownFilters
+  with SupportsPushDownRequiredColumns {
+
+  private val pinotDataSourceOptions = PinotDataSourceReadOptions.from(options)
+  private var acceptedFilters: Array[Filter] = Array.empty
+  private var currentSchema: StructType = _
+
+  override def readSchema(): StructType = {
+    if (currentSchema == null) {
+      currentSchema = userSchema.getOrElse {
+        val pinotTableSchema = PinotClusterClient.getTableSchema(
+          pinotDataSourceOptions.controller,
+          pinotDataSourceOptions.tableName
+        )
+        PinotUtils.pinotSchemaToSparkSchema(pinotTableSchema)
+      }
+    }
+    currentSchema
+  }
+
+  override def planInputPartitions(): JList[InputPartition[InternalRow]] = {
+    val schema = readSchema()
+    val tableType = PinotUtils.getTableType(pinotDataSourceOptions.tableName)
+
+    // The time boundary is used when the table is hybrid, to ensure that the
+    // overlap between real-time and offline segment data is queried exactly once
+    val timeBoundaryInfo =
+      if (tableType.isDefined) {
+        None
+      } else {
+        PinotClusterClient.getTimeBoundaryInfo(
+          pinotDataSourceOptions.broker,
+          pinotDataSourceOptions.tableName
+        )
+      }
+
+    val whereCondition = FilterPushDown.compileFiltersToSqlWhereClause(this.acceptedFilters)
+    val generatedSQLs = SQLSelectionQueryGenerator.generate(
+      pinotDataSourceOptions.tableName,
+      timeBoundaryInfo,
+      schema.fieldNames,
+      whereCondition
+    )
+
+    val routingTable =
+      PinotClusterClient.getRoutingTable(pinotDataSourceOptions.broker, generatedSQLs)

Review comment:
       Connecting to the Pinot servers directly means the connector has to fetch the routing table and time boundary itself, work that the broker normally does. Is there a plan to connect via the broker instead, to avoid this? That would have the following advantages:
   
   * No need to query the routing table / time boundary, unlike in this approach.
   * Filter push-down.
   
   One issue I see, though: it may not be feasible to stream data out of the broker with the current code. I am mainly trying to understand the general direction/approach with these connectors.
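
   For reference, the broker path I am imagining looks roughly like the sketch below with pinot-java-client (hedged; whether the broker can stream a large result set is exactly the open question above):

```scala
import org.apache.pinot.client.ConnectionFactory

object BrokerReadSketch {
  def main(args: Array[String]): Unit = {
    // The broker resolves segment routing and the hybrid time boundary
    // itself, so the client never has to fetch them.
    val connection = ConnectionFactory.fromHostList("localhost:8000")
    val resultSetGroup = connection.execute(
      "SELECT Carrier, ArrDelay FROM airlineStats LIMIT 10")

    val resultSet = resultSetGroup.getResultSet(0)
    for (row <- 0 until resultSet.getRowCount) {
      println(s"${resultSet.getString(row, 0)}, ${resultSet.getString(row, 1)}")
    }
    connection.close()
  }
}
```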



