[
https://issues.apache.org/jira/browse/HIVE-26774?focusedWorklogId=836813&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-836813
]
ASF GitHub Bot logged work on HIVE-26774:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 04/Jan/23 06:39
Start Date: 04/Jan/23 06:39
Worklog Time Spent: 10m
Work Description: tarak271 commented on code in PR #3893:
URL: https://github.com/apache/hive/pull/3893#discussion_r1061172657
##########
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFArraySlice.java:
##########
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.udf.generic;
+
+import org.apache.hadoop.hive.ql.exec.Description;
+import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+import org.apache.hadoop.hive.serde2.objectinspector.ListObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector;
+import org.apache.hadoop.hive.serde2.objectinspector.primitive.IntObjectInspector;
+
+import java.util.Collections;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/**
+ * GenericUDFArraySlice.
+ */
+@Description(name = "array_slice", value = "_FUNC_(array, start, length) - Returns the subset or range of elements from"
+    + " an array (subarray).", extended = "Example:\n" + " > SELECT _FUNC_(array(1, 2, 3,4), 2,2) FROM src LIMIT 1;\n"
+    + " 3,4")
+public class GenericUDFArraySlice extends AbstractGenericUDFArrayBase {
+ private static final String FUNC_NAME = "ARRAY_SLICE";
+ private static final int START_IDX = 1;
+ private static final int LENGTH_IDX = 2;
+
+ public GenericUDFArraySlice() {
+ super(FUNC_NAME, 3, 3, ObjectInspector.Category.LIST);
+ }
+
+  @Override public ObjectInspector initialize(ObjectInspector[] arguments) throws UDFArgumentException {
+ ObjectInspector defaultOI = super.initialize(arguments);
+ // Check whether start and length inputs are of integer type
+    checkArgIntPrimitiveCategory((PrimitiveObjectInspector) arguments[START_IDX], FUNC_NAME, START_IDX);
+    checkArgIntPrimitiveCategory((PrimitiveObjectInspector) arguments[LENGTH_IDX], FUNC_NAME, LENGTH_IDX);
+ return defaultOI;
+ }
+
+  @Override public Object evaluate(DeferredObject[] arguments) throws HiveException {
+
+ Object array = arguments[ARRAY_IDX].get();
+ if (arrayOI.getListLength(array) == 0) {
+ return Collections.emptyList();
+ } else if (arrayOI.getListLength(array) < 0) {
+ return null;
+ }
+
+    List<?> retArray = ((ListObjectInspector) argumentOIs[ARRAY_IDX]).getList(array);
+    int start = ((IntObjectInspector) argumentOIs[START_IDX]).get(arguments[START_IDX].get());
+    int length = ((IntObjectInspector) argumentOIs[LENGTH_IDX]).get(arguments[LENGTH_IDX].get());
+ // return empty list if start/length are out of range of the array
+ if (start + length > retArray.size()) {
+ return Collections.emptyList();
Review Comment:
The implementation is kept close to Spark's slice function, which returns an empty array for an out-of-range request:
```
scala> val arrayStructureData = Seq(
| Row(List("aa","bb","cc","dd")),
| Row(List("aa"))
| )
arrayStructureData: Seq[org.apache.spark.sql.Row] = List([List(aa, bb, cc, dd)], [List(aa)])
scala> val df = spark.createDataFrame(spark.sparkContext.parallelize(arrayStructureData), new StructType().add("str", ArrayType(StringType)))
df: org.apache.spark.sql.DataFrame = [str: array<string>]
scala> val sliceDF = df.withColumn("Sliced_str",slice(col("str"),2,3))
sliceDF: org.apache.spark.sql.DataFrame = [str: array<string>, Sliced_str: array<string>]
scala> sliceDF.show(false)
+----------------+------------+
|str |Sliced_str |
+----------------+------------+
|[aa, bb, cc, dd]|[bb, cc, dd]|
|[aa] |[] |
+----------------+------------+
```
This way, users who are familiar with Spark's slice will see no difference in Hive. Returning an empty list also lets the remaining rows still produce values, which would not be the case if an exception were thrown.
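As an aside, the out-of-range rule under discussion can be sketched in plain Java, independent of the UDF plumbing. The class and method names below are hypothetical, and the negative-offset guard is extra safety not present in the patch; the slice itself mirrors the patch's 0-based subList semantics:

```java
import java.util.Collections;
import java.util.List;

// Standalone sketch of the slicing rule discussed above (not the actual
// Hive UDF): 0-based start, and an empty list whenever the requested
// range does not fit inside the array, rather than an exception.
public class ArraySliceSketch {
    static <T> List<T> slice(List<T> array, int start, int length) {
        if (array.isEmpty()) {
            return Collections.emptyList();
        }
        // The patch checks only start + length > size; the negative checks
        // here are an added guard so subList cannot throw in this sketch.
        if (start < 0 || length < 0 || start + length > array.size()) {
            return Collections.emptyList();
        }
        return array.subList(start, start + length);
    }

    public static void main(String[] args) {
        System.out.println(slice(List.of(1, 2, 3, 4), 2, 2)); // [3, 4]
        System.out.println(slice(List.of(1), 2, 2));          // []
    }
}
```

With this rule, an out-of-range slice on one row degrades to an empty list while the rest of the rows are still evaluated.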
Issue Time Tracking
-------------------
Worklog Id: (was: 836813)
Time Spent: 1h 20m (was: 1h 10m)
> Implement array_slice UDF to get the subset of elements from an array
> (subarray)
> --------------------------------------------------------------------------------
>
> Key: HIVE-26774
> URL: https://issues.apache.org/jira/browse/HIVE-26774
> Project: Hive
> Issue Type: Sub-task
> Components: Hive
> Reporter: Taraka Rama Rao Lethavadla
> Assignee: Taraka Rama Rao Lethavadla
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> *array_slice(array, start, length)* - Returns the subset or range of elements
> from an array (subarray).
> Example:
>
> {noformat}
> > SELECT array_slice(array(1, 2, 3,4), 2,2) FROM src LIMIT 1;
> 3,4{noformat}
> Returns empty list if start/length are out of range of the array
--
This message was sent by Atlassian Jira
(v8.20.10#820010)