abhishekagarwal87 commented on code in PR #14510:
URL: https://github.com/apache/druid/pull/14510#discussion_r1250590016


##########
docs/api-reference/sql-jdbc.md:
##########
@@ -27,23 +27,44 @@ sidebar_label: SQL JDBC driver
 > This document describes the SQL language.
 
 
-You can make [Druid SQL](../querying/sql.md) queries using the [Avatica JDBC driver](https://calcite.apache.org/avatica/downloads/). We recommend using Avatica JDBC driver version 1.17.0 or later. Note that as of the time of this writing, Avatica 1.17.0, the latest version, does not support passing connection string parameters from the URL to Druid, so you must pass them using a `Properties` object. Once you've downloaded the Avatica client jar, add it to your classpath and use the connect string `jdbc:avatica:remote:url=http://BROKER:8082/druid/v2/sql/avatica/`.
+You can make [Druid SQL](./sql.md) queries using the [Avatica JDBC driver](https://calcite.apache.org/avatica/downloads/).
+We recommend using Avatica JDBC driver version 1.22.0 or later.
+Once you've downloaded the Avatica client jar, add it to your classpath.
 
-When using the JDBC connector for the [examples](#examples) or in general, it's helpful to understand the parts of the connect string stored in the `url` variable:
+Example connection string:
 
-  - `jdbc:avatica:remote:url=` is prepended to the hostname and port.
-  - The hostname and port number for your Druid deployment depends on whether you want to connect to the Router or a specific Broker. For more information, see [Connection stickiness](#connection-stickiness). In the case of the quickstart deployment, the hostname and port are `http://localhost:8888`, which connects to the Router running on your local machine.
-  - The SQL endpoint in Druid for the Avatica driver is `/druid/v2/sql/avatica/`.
+```
+jdbc:avatica:remote:url=http://localhost:8888/druid/v2/sql/avatica/;transparent_reconnect=true
+```
+
+Or, to use the protobuf protocol instead of JSON:
+
+```
+jdbc:avatica:remote:url=http://localhost:8888/druid/v2/sql/avatica-protobuf/;transparent_reconnect=true;serialization=protobuf
+```
 
-Example code:
+The `url` is the `/druid/v2/sql/avatica/` endpoint on the Router, which routes JDBC connections to a consistent Broker.
+For more information, see [Connection stickiness](#connection-stickiness).
+
+Set `transparent_reconnect` to `true` so your connection is not interrupted if the pool of Brokers changes membership,
+or if a Broker is restarted.
+
+Set `serialization` to `protobuf` if using the protobuf endpoint.
+
+Note that as of the time of this writing, Avatica 1.22.0, the latest version, does not support passing

Review Comment:
   it's 1.23.0
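For reference, the connection strings documented in this hunk plug into the standard JDBC API. A minimal sketch (the quickstart Router URL and the `SELECT 1` query are illustrative; running it needs the Avatica client jar on the classpath and a live Druid cluster):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class AvaticaConnectExample
{
  public static void main(String[] args) throws Exception
  {
    // Quickstart Router endpoint; transparent_reconnect rides out Broker restarts.
    final String url =
        "jdbc:avatica:remote:url=http://localhost:8888/druid/v2/sql/avatica/;transparent_reconnect=true";

    // Connection context parameters are passed via a Properties object.
    final Properties props = new Properties();

    try (Connection conn = DriverManager.getConnection(url, props);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT 1")) {
      while (rs.next()) {
        System.out.println(rs.getLong(1));
      }
    }
  }
}
```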



##########
processing/src/main/java/org/apache/druid/query/FilterDataSource.java:
##########
@@ -0,0 +1,178 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.query;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.collect.ImmutableList;
+import org.apache.druid.java.util.common.IAE;
+import org.apache.druid.query.filter.DimFilter;
+import org.apache.druid.query.planning.DataSourceAnalysis;
+import org.apache.druid.segment.FilterSegmentReference;
+import org.apache.druid.segment.SegmentReference;
+import org.apache.druid.utils.JvmUtils;
+
+import java.util.List;
+import java.util.Objects;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+
+public class FilterDataSource implements DataSource

Review Comment:
   please add javadocs. 



##########
processing/src/main/java/org/apache/druid/segment/SelectProjectStorageAdapter.java:
##########
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import org.apache.druid.java.util.common.granularity.Granularity;
+import org.apache.druid.java.util.common.guava.Sequence;
+import org.apache.druid.query.QueryMetrics;
+import org.apache.druid.query.filter.Filter;
+import org.apache.druid.segment.column.ColumnCapabilities;
+import org.apache.druid.segment.data.Indexed;
+import org.joda.time.DateTime;
+import org.joda.time.Interval;
+
+import javax.annotation.Nullable;
+import java.util.ArrayList;
+import java.util.List;
+
+public class SelectProjectStorageAdapter implements StorageAdapter
+{
+  private final VirtualColumns baseVirtualColumns;
+  private final StorageAdapter baseStorageAdapter;
+
+  public SelectProjectStorageAdapter(final StorageAdapter adapter, final VirtualColumns vc)
+  {
+    this.baseStorageAdapter = adapter;
+    this.baseVirtualColumns = vc;
+  }
+
+  @Override
+  public Sequence<Cursor> makeCursors(
+      @Nullable Filter filter,
+      Interval interval,
+      VirtualColumns virtualColumns,
+      Granularity gran,
+      boolean descending,
+      @Nullable QueryMetrics<?> queryMetrics
+  )
+  {
+    final VirtualColumns updatedVirtualColumns;
+    if (virtualColumns.getVirtualColumns().length == 0) {
+      updatedVirtualColumns = baseVirtualColumns;
+    } else {
+      List<VirtualColumn> vcs = new ArrayList<>();
+      for (VirtualColumn vc : virtualColumns.getVirtualColumns()) {
+        vcs.add(vc);
+      }
+      for (VirtualColumn vc : baseVirtualColumns.getVirtualColumns()) {
+        vcs.add(vc);
+      }
+      updatedVirtualColumns = VirtualColumns.create(vcs);
+    }
+    return baseStorageAdapter.makeCursors(filter, interval, updatedVirtualColumns, gran, descending, queryMetrics);
+  }
+
+  @Override
+  public Interval getInterval()
+  {
+    return baseStorageAdapter.getInterval();
+  }
+
+  @Override
+  public Indexed<String> getAvailableDimensions()
+  {
+    return baseStorageAdapter.getAvailableDimensions();

Review Comment:
   would this include names of virtual columns? 



##########
processing/src/main/java/org/apache/druid/segment/FilterStorageAdapter.java:
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.google.common.collect.ImmutableList;
+import org.apache.druid.java.util.common.granularity.Granularity;
+import org.apache.druid.java.util.common.guava.Sequence;
+import org.apache.druid.query.QueryMetrics;
+import org.apache.druid.query.filter.DimFilter;
+import org.apache.druid.query.filter.Filter;
+import org.apache.druid.segment.column.ColumnCapabilities;
+import org.apache.druid.segment.data.Indexed;
+import org.apache.druid.segment.filter.AndFilter;
+import org.joda.time.DateTime;
+import org.joda.time.Interval;
+
+import javax.annotation.Nullable;
+
+public class FilterStorageAdapter implements StorageAdapter
+{
+  private final DimFilter filterOnDataSource;
+  private final StorageAdapter baseStorageAdapter;
+
+  public FilterStorageAdapter(final StorageAdapter adapter, final DimFilter filter)
+  {
+    this.baseStorageAdapter = adapter;
+    this.filterOnDataSource = filter;
+  }
+
+  @Override
+  public Sequence<Cursor> makeCursors(
+      @Nullable Filter filter,
+      Interval interval,
+      VirtualColumns virtualColumns,
+      Granularity gran,
+      boolean descending,
+      @Nullable QueryMetrics<?> queryMetrics
+  )
+  {
+    final Filter andFilter;
+    if (filter == null) {
+      andFilter = filterOnDataSource.toFilter();
+    } else {
+      andFilter = new AndFilter(ImmutableList.of(filter, filterOnDataSource.toFilter()));
+    }
+    return baseStorageAdapter.makeCursors(andFilter, interval, virtualColumns, gran, descending, queryMetrics);
+  }
+
+  @Override
+  public Interval getInterval()
+  {
+    return baseStorageAdapter.getInterval();
+  }
+
+  @Override
+  public Indexed<String> getAvailableDimensions()
+  {
+    return baseStorageAdapter.getAvailableDimensions();
+  }
+
+  @Override
+  public Iterable<String> getAvailableMetrics()
+  {
+    return baseStorageAdapter.getAvailableMetrics();
+  }
+
+  @Override
+  public int getDimensionCardinality(String column)
+  {
+    return baseStorageAdapter.getDimensionCardinality(column);
+  }
+
+  @Override
+  public DateTime getMinTime()
+  {
+    return baseStorageAdapter.getMinTime();
+  }
+
+  @Override
+  public DateTime getMaxTime()
+  {
+    return baseStorageAdapter.getMaxTime();
+  }
+
+  @Nullable
+  @Override
+  public Comparable getMinValue(String column)
+  {
+    return baseStorageAdapter.getMinValue(column);
+  }
+
+  @Nullable
+  @Override
+  public Comparable getMaxValue(String column)
+  {
+    return baseStorageAdapter.getMaxValue(column);

Review Comment:
   same comment. 



##########
extensions-core/datasketches/src/main/java/org/apache/druid/query/aggregation/datasketches/hll/sql/HllSketchObjectSqlAggregator.java:
##########
@@ -35,7 +35,7 @@ public class HllSketchObjectSqlAggregator extends HllSketchBaseSqlAggregator imp
 {
   private static final String NAME = "DS_HLL";
   private static final SqlAggFunction FUNCTION_INSTANCE =
-      OperatorConversions.aggregatorBuilder(NAME)
+      (SqlAggFunction) OperatorConversions.aggregatorBuilder(NAME)

Review Comment:
   why do we need this cast? 



##########
processing/src/main/java/org/apache/druid/segment/FilterStorageAdapter.java:
##########
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.segment;
+
+import com.google.common.collect.ImmutableList;
+import org.apache.druid.java.util.common.granularity.Granularity;
+import org.apache.druid.java.util.common.guava.Sequence;
+import org.apache.druid.query.QueryMetrics;
+import org.apache.druid.query.filter.DimFilter;
+import org.apache.druid.query.filter.Filter;
+import org.apache.druid.segment.column.ColumnCapabilities;
+import org.apache.druid.segment.data.Indexed;
+import org.apache.druid.segment.filter.AndFilter;
+import org.joda.time.DateTime;
+import org.joda.time.Interval;
+
+import javax.annotation.Nullable;
+
+public class FilterStorageAdapter implements StorageAdapter
+{
+  private final DimFilter filterOnDataSource;
+  private final StorageAdapter baseStorageAdapter;
+
+  public FilterStorageAdapter(final StorageAdapter adapter, final DimFilter filter)
+  {
+    this.baseStorageAdapter = adapter;
+    this.filterOnDataSource = filter;
+  }
+
+  @Override
+  public Sequence<Cursor> makeCursors(
+      @Nullable Filter filter,
+      Interval interval,
+      VirtualColumns virtualColumns,
+      Granularity gran,
+      boolean descending,
+      @Nullable QueryMetrics<?> queryMetrics
+  )
+  {
+    final Filter andFilter;
+    if (filter == null) {
+      andFilter = filterOnDataSource.toFilter();
+    } else {
+      andFilter = new AndFilter(ImmutableList.of(filter, filterOnDataSource.toFilter()));
+    }
+    return baseStorageAdapter.makeCursors(andFilter, interval, virtualColumns, gran, descending, queryMetrics);
+  }
+
+  @Override
+  public Interval getInterval()
+  {
+    return baseStorageAdapter.getInterval();
+  }
+
+  @Override
+  public Indexed<String> getAvailableDimensions()
+  {
+    return baseStorageAdapter.getAvailableDimensions();
+  }
+
+  @Override
+  public Iterable<String> getAvailableMetrics()
+  {
+    return baseStorageAdapter.getAvailableMetrics();
+  }
+
+  @Override
+  public int getDimensionCardinality(String column)
+  {
+    return baseStorageAdapter.getDimensionCardinality(column);
+  }
+
+  @Override
+  public DateTime getMinTime()
+  {
+    return baseStorageAdapter.getMinTime();
+  }
+
+  @Override
+  public DateTime getMaxTime()
+  {
+    return baseStorageAdapter.getMaxTime();
+  }
+
+  @Nullable
+  @Override
+  public Comparable getMinValue(String column)
+  {
+    return baseStorageAdapter.getMinValue(column);

Review Comment:
   you should return null here since filter can change this min value. 
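To illustrate the point with a toy example (plain Java, not Druid code): the base adapter's min is computed over unfiltered rows, so once a pushed-down filter removes rows, delegating can report a value that no longer exists in the filtered data.

```java
import java.util.List;

public class FilteredMinExample
{
  public static void main(String[] args)
  {
    final List<Integer> rows = List.of(1, 5, 9);

    // Min over all rows, as the base StorageAdapter would report it.
    final int baseMin = rows.stream().min(Integer::compare).orElseThrow();

    // Min after a pushed-down filter (value > 3) has removed the old minimum.
    final int filteredMin = rows.stream()
        .filter(v -> v > 3)
        .min(Integer::compare)
        .orElseThrow();

    System.out.println(baseMin);     // 1 -- what pure delegation would claim
    System.out.println(filteredMin); // 5 -- the true min of the filtered rows
  }
}
```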



##########
processing/src/main/java/org/apache/druid/query/SelectProjectDataSource.java:
##########
@@ -0,0 +1,177 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.query;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.google.common.collect.ImmutableList;
+import org.apache.druid.java.util.common.IAE;
+import org.apache.druid.query.planning.DataSourceAnalysis;
+import org.apache.druid.segment.SegmentReference;
+import org.apache.druid.segment.SelectProjectSegmentReference;
+import org.apache.druid.segment.VirtualColumns;
+import org.apache.druid.utils.JvmUtils;
+
+import java.util.List;
+import java.util.Objects;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+
+public class SelectProjectDataSource implements DataSource

Review Comment:
   please add javadocs



##########
sql/src/test/java/org/apache/druid/sql/avatica/DruidAvaticaHandlerTest.java:
##########
@@ -1591,12 +1597,12 @@ public void testArrayStuff() throws SQLException
         Assert.assertArrayEquals(new Object[]{"a", null, "", "a", "abc", null}, (Object[]) rows.get(0).get("arr1"));
         Assert.assertArrayEquals(new Object[]{7L, 325323L, 0L, null, null, null}, (Object[]) rows.get(0).get("arr2"));
         Assert.assertArrayEquals(new Object[]{1.0, 1.7, 0.0, null, null, null}, (Object[]) rows.get(0).get("arr3"));
-        Assert.assertArrayEquals(new Object[]{1.0f, 0.1f, 0.0f, null, null, null}, (Object[]) rows.get(0).get("arr4"));
+        Assert.assertArrayEquals(new Object[]{1.0, 0.10000000149011612, 0.0, null, null, null}, (Object[]) rows.get(0).get("arr4"));

Review Comment:
   what caused the result to change? 
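For context, the new expected values are consistent with float-to-double widening (an assumption about the cause, based on the diff): `0.1f` is not exactly 0.1, and converting it to `double` preserves, and therefore exposes, the float rounding error.

```java
public class FloatWideningExample
{
  public static void main(String[] args)
  {
    final float f = 0.1f;

    // Widening float -> double keeps the float's exact binary value,
    // which is the closest float to 0.1, not 0.1 itself.
    final double d = f;

    System.out.println(f); // 0.1
    System.out.println(d); // 0.10000000149011612
  }
}
```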



##########
pom.xml:
##########
@@ -80,13 +80,15 @@
         <apache.ranger.version>2.0.0</apache.ranger.version>
         <apache.ranger.gson.version>2.2.4</apache.ranger.gson.version>
         <scala.library.version>2.13.9</scala.library.version>
-        <avatica.version>1.17.0</avatica.version>
+        <avatica.version>1.23.0</avatica.version>
         <avro.version>1.11.1</avro.version>
-        <!-- sql/src/main/codegen/config.fmpp is based on a file from calcite-core, and needs to be
-          updated when upgrading Calcite. Refer to the top-level comments in that file for details.
-          Also, CalcitePlanner is a clone of Calcite's PlannerImpl and may require updates when
-          Calcite is upgrade. -->
-        <calcite.version>1.21.0</calcite.version>
+        <!-- When updating Calcite, also propagate updates to these files which we've copied and modified:
+             * sql/src/main/codegen/config.fmpp
+               - original: core/src/main/codegen/config.fmpp

Review Comment:
   is it still true? I think that we need not copy the whole config.fmpp now



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

