This is an automated email from the ASF dual-hosted git repository.
tuglu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git
The following commit(s) were added to refs/heads/master by this push:
     new 407aba5d358 Allow failing on residual for Iceberg filters on non-partition cols (#18953)
407aba5d358 is described below
commit 407aba5d358214c5de83f1d000f5c41f9e7a3a72
Author: jtuglu1 <[email protected]>
AuthorDate: Thu Jan 29 00:56:47 2026 -0800
Allow failing on residual for Iceberg filters on non-partition cols (#18953)
Currently, the Iceberg ingest extension may ingest more data than necessary
because an Iceberg filter on non-partition columns can leave residual data.
This adds an option to either ignore the residual and log a warning, or fail
on filters that produce a residual, so users are aware of the extra data and
can act on it.
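For illustration, a minimal sketch (not part of this change) of how a residual
surfaces in the Iceberg scan API; the `Table` handle and the `user_id` column
here are hypothetical:

```java
import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.Table;
import org.apache.iceberg.expressions.Expression;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.io.CloseableIterable;

class ResidualSketch
{
  // Planning a scan with a filter on a non-partition column prunes files using
  // column statistics only; each surviving file reports a residual expression
  // that still has to be applied row by row.
  static void printResiduals(Table table) throws Exception
  {
    Expression filter = Expressions.equal("user_id", "42"); // hypothetical column
    try (CloseableIterable<FileScanTask> tasks = table.newScan().filter(filter).planFiles()) {
      for (FileScanTask task : tasks) {
        if (!task.residual().equals(Expressions.alwaysTrue())) {
          System.out.println("File " + task.file().path() + " has residual: " + task.residual());
        }
      }
    }
  }
}
```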
---
docs/development/extensions-contrib/iceberg.md | 32 ++++-
docs/ingestion/input-sources.md | 1 +
.../apache/druid/iceberg/input/IcebergCatalog.java | 51 +++++++-
.../druid/iceberg/input/IcebergInputSource.java | 17 ++-
.../druid/iceberg/input/ResidualFilterMode.java | 71 ++++++++++
.../iceberg/input/IcebergInputSourceTest.java | 143 ++++++++++++++++++++-
.../iceberg/input/ResidualFilterModeTest.java | 68 ++++++++++
website/.spelling | 1 +
8 files changed, 374 insertions(+), 10 deletions(-)
diff --git a/docs/development/extensions-contrib/iceberg.md b/docs/development/extensions-contrib/iceberg.md
index e2a5a06cb9e..c2652cc7858 100644
--- a/docs/development/extensions-contrib/iceberg.md
+++ b/docs/development/extensions-contrib/iceberg.md
@@ -139,6 +139,36 @@ java \
See [Loading community extensions](../../configuration/extensions.md#loading-community-extensions) for more information.
+## Residual filter handling
+
+When an Iceberg filter is applied on a non-partition column, the filtering happens at the file metadata level only (using column statistics). Files that might contain matching rows are returned, but these files may include "residual" rows that don't actually match the filter. These residual rows would be ingested unless filtered by a `transformSpec` filter on the Druid side.
+
+To control this behavior, you can set the `residualFilterMode` property on the Iceberg input source:
+
+| Mode | Description |
+|------|-------------|
+| `ignore` | Default. A warning is logged and residual rows are ingested unless filtered by `transformSpec`. |
+| `fail` | Fail the ingestion job when residual filters are detected. Use this to ensure that filters only target partition columns. |
+
+Example:
+```json
+{
+ "type": "iceberg",
+ "tableName": "events",
+ "namespace": "analytics",
+ "icebergCatalog": { ... },
+ "icebergFilter": {
+ "type": "timeWindow",
+ "filterColumn": "event_time",
+ "lookbackDuration": "P1D"
+ },
+ "residualFilterMode": "fail",
+ "warehouseSource": { ... }
+}
+```
+
+When `residualFilterMode` is set to `fail` and a residual filter is detected, the job will fail with an error message indicating which filter expression produced the residual. This helps ensure data quality by preventing unintended rows from being ingested.
+
## Known limitations
This section lists the known limitations that apply to the Iceberg extension.
@@ -146,4 +176,4 @@ This section lists the known limitations that apply to the Iceberg extension.
- This extension does not fully utilize the Iceberg features such as snapshotting or schema evolution.
- The Iceberg input source reads every single live file on the Iceberg table up to the latest snapshot, which makes the table scan less performant. It is recommended to use Iceberg filters on partition columns in the ingestion spec in order to limit the number of data files being retrieved. Since Druid doesn't store the last ingested Iceberg snapshot ID, it cannot identify the files created between that snapshot and the latest snapshot on Iceberg.
- It does not handle Iceberg [schema evolution](https://iceberg.apache.org/docs/latest/evolution/) yet. In cases where an existing Iceberg table column is deleted and recreated with the same name, ingesting this table into Druid may bring the data for this column before it was deleted.
-- The Hive catalog has not been tested on Hadoop 2.x.x and is not guaranteed to work with Hadoop 2.
\ No newline at end of file
+- The Hive catalog has not been tested on Hadoop 2.x.x and is not guaranteed to work with Hadoop 2.
diff --git a/docs/ingestion/input-sources.md b/docs/ingestion/input-sources.md
index 49cf90cdbf5..cf6aecfe8c5 100644
--- a/docs/ingestion/input-sources.md
+++ b/docs/ingestion/input-sources.md
@@ -1063,6 +1063,7 @@ The following is a sample spec for a S3 warehouse source:
|icebergCatalog|The JSON Object used to define the catalog that manages the configured Iceberg table.|yes|
|warehouseSource|The JSON Object that defines the native input source for reading the data files from the warehouse.|yes|
|snapshotTime|Timestamp in ISO8601 DateTime format that will be used to fetch the most recent snapshot as of this time.|no|
+|residualFilterMode|Controls how filters that produce a residual are handled. This typically happens when an Iceberg filter targets a non-partition column: files may contain rows that don't match the filter (residual rows). Valid values are: `ignore` (default; ingest all rows and log a warning) and `fail` (fail the ingestion job). Use `fail` to ensure no excess data is ingested if you don't have filters in `transformSpec`.|no|
### Catalog Object
diff --git a/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/IcebergCatalog.java b/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/IcebergCatalog.java
index 5dc5aa85a9a..d4bfe4f53ba 100644
--- a/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/IcebergCatalog.java
+++ b/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/IcebergCatalog.java
@@ -21,15 +21,19 @@ package org.apache.druid.iceberg.input;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import org.apache.druid.data.input.InputFormat;
+import org.apache.druid.error.DruidException;
import org.apache.druid.iceberg.filter.IcebergFilter;
import org.apache.druid.java.util.common.IAE;
import org.apache.druid.java.util.common.RE;
+import org.apache.druid.java.util.common.StringUtils;
import org.apache.druid.java.util.common.logger.Logger;
import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.TableScan;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.Namespace;
import org.apache.iceberg.catalog.TableIdentifier;
+import org.apache.iceberg.expressions.Expression;
+import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.io.CloseableIterable;
import org.joda.time.DateTime;
@@ -58,17 +62,20 @@ public abstract class IcebergCatalog
/**
   * Extract the iceberg data files up to the latest snapshot associated with the table
*
-   * @param tableNamespace The catalog namespace under which the table is defined
-   * @param tableName The iceberg table name
+   * @param tableNamespace     The catalog namespace under which the table is defined
+   * @param tableName          The iceberg table name
   * @param icebergFilter      The iceberg filter that needs to be applied before reading the files
-   * @param snapshotTime Datetime that will be used to fetch the most recent snapshot as of this time
+   * @param snapshotTime       Datetime that will be used to fetch the most recent snapshot as of this time
+   * @param residualFilterMode Controls how residual filters are handled. When filtering on non-partition
+   *                           columns, residual rows may be returned that need row-level filtering.
* @return a list of data file paths
*/
public List<String> extractSnapshotDataFiles(
String tableNamespace,
String tableName,
IcebergFilter icebergFilter,
- DateTime snapshotTime
+ DateTime snapshotTime,
+ ResidualFilterMode residualFilterMode
)
{
Catalog catalog = retrieveCatalog();
@@ -100,12 +107,44 @@ public abstract class IcebergCatalog
tableScan = tableScan.caseSensitive(isCaseSensitive());
CloseableIterable<FileScanTask> tasks = tableScan.planFiles();
- CloseableIterable.transform(tasks, FileScanTask::file)
-                       .forEach(dataFile -> dataFilePaths.add(dataFile.path().toString()));
+
+ Expression detectedResidual = null;
+ for (FileScanTask task : tasks) {
+ dataFilePaths.add(task.file().path().toString());
+
+ // Check for residual filters
+ if (detectedResidual == null) {
+ Expression residual = task.residual();
+ if (residual != null && !residual.equals(Expressions.alwaysTrue())) {
+ detectedResidual = residual;
+ }
+ }
+ }
+
+ // Handle residual filter based on mode
+ if (detectedResidual != null) {
+        String message = StringUtils.format(
+            "Iceberg filter produced residual expression that requires row-level filtering. "
+            + "This typically means the filter is on a non-partition column. "
+            + "Residual rows may be ingested unless filtered by transformSpec. "
+            + "Residual filter: [%s]",
+            detectedResidual
+        );
+
+ if (residualFilterMode == ResidualFilterMode.FAIL) {
+ throw DruidException.forPersona(DruidException.Persona.DEVELOPER)
+                              .ofCategory(DruidException.Category.RUNTIME_FAILURE)
+ .build(message);
+ }
+ log.warn(message);
+ }
long duration = System.currentTimeMillis() - start;
      log.info("Data file scan and fetch took [%d ms] time for [%d] paths", duration, dataFilePaths.size());
}
+ catch (DruidException e) {
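+      // Rethrow as-is so a FAIL-mode DruidException is not wrapped into a generic RE by the catch below.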
+ throw e;
+ }
catch (Exception e) {
      throw new RE(e, "Failed to load iceberg table with identifier [%s]", tableIdentifier);
}
diff --git a/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/IcebergInputSource.java b/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/IcebergInputSource.java
index 44df3e31861..ccbb10af14d 100644
--- a/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/IcebergInputSource.java
+++ b/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/IcebergInputSource.java
@@ -22,6 +22,7 @@ package org.apache.druid.iceberg.input;
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.google.common.base.Preconditions;
+import org.apache.druid.common.config.Configs;
import org.apache.druid.data.input.InputFormat;
import org.apache.druid.data.input.InputRow;
import org.apache.druid.data.input.InputRowListPlusRawValues;
@@ -72,6 +73,9 @@ public class IcebergInputSource implements SplittableInputSource<List<String>>
@JsonProperty
private final DateTime snapshotTime;
+ @JsonProperty
+ private final ResidualFilterMode residualFilterMode;
+
private boolean isLoaded = false;
private SplittableInputSource delegateInputSource;
@@ -83,7 +87,8 @@ public class IcebergInputSource implements SplittableInputSource<List<String>>
@JsonProperty("icebergFilter") @Nullable IcebergFilter icebergFilter,
@JsonProperty("icebergCatalog") IcebergCatalog icebergCatalog,
@JsonProperty("warehouseSource") InputSourceFactory warehouseSource,
- @JsonProperty("snapshotTime") @Nullable DateTime snapshotTime
+ @JsonProperty("snapshotTime") @Nullable DateTime snapshotTime,
+ @JsonProperty("residualFilterMode") @Nullable ResidualFilterMode
residualFilterMode
)
{
    this.tableName = Preconditions.checkNotNull(tableName, "tableName cannot be null");
@@ -92,6 +97,7 @@ public class IcebergInputSource implements SplittableInputSource<List<String>>
this.icebergFilter = icebergFilter;
    this.warehouseSource = Preconditions.checkNotNull(warehouseSource, "warehouseSource cannot be null");
this.snapshotTime = snapshotTime;
+    this.residualFilterMode = Configs.valueOrDefault(residualFilterMode, ResidualFilterMode.IGNORE);
}
@Override
@@ -177,6 +183,12 @@ public class IcebergInputSource implements SplittableInputSource<List<String>>
return snapshotTime;
}
+ @JsonProperty
+ public ResidualFilterMode getResidualFilterMode()
+ {
+ return residualFilterMode;
+ }
+
public SplittableInputSource getDelegateInputSource()
{
return delegateInputSource;
@@ -188,7 +200,8 @@ public class IcebergInputSource implements SplittableInputSource<List<String>>
getNamespace(),
getTableName(),
getIcebergFilter(),
- getSnapshotTime()
+ getSnapshotTime(),
+ getResidualFilterMode()
);
if (snapshotDataFiles.isEmpty()) {
delegateInputSource = new EmptyInputSource();
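Note: the constructor above defaults the new property with `Configs.valueOrDefault`, so existing specs that omit `residualFilterMode` keep the current behavior. A minimal sketch of the assumed fallback semantics:

```java
// Assumed semantics of Configs.valueOrDefault: return the value unless it is null.
static <T> T valueOrDefault(T value, T defaultValue)
{
  return value != null ? value : defaultValue;
}

// Mirrors the constructor above:
// residualFilterMode == null -> ResidualFilterMode.IGNORE (backward compatible)
// residualFilterMode == FAIL -> ResidualFilterMode.FAIL
```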
diff --git a/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/ResidualFilterMode.java b/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/ResidualFilterMode.java
new file mode 100644
index 00000000000..dd3a56bdb42
--- /dev/null
+++ b/extensions-contrib/druid-iceberg-extensions/src/main/java/org/apache/druid/iceberg/input/ResidualFilterMode.java
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.iceberg.input;
+
+import com.fasterxml.jackson.annotation.JsonValue;
+
+/**
+ * Controls how residual filters are handled during Iceberg table scanning.
+ *
+ * When an Iceberg filter is applied on a non-partition column, the filtering happens at the
+ * file metadata level only. Files that might contain matching rows are returned, but these
+ * files may include "residual" rows that don't actually match the filter. These residual rows
+ * would need to be filtered on the Druid side using a filter in transformSpec.
+ */
+public enum ResidualFilterMode
+{
+ /**
+   * Ignore residual filters. This is the default behavior for backward compatibility.
+ * Residual rows will be ingested unless filtered by transformSpec.
+ */
+ IGNORE("ignore"),
+
+ /**
+ * Fail the ingestion job when residual filters are detected.
+ * Use this mode to ensure that only partition-column filters are used,
+ * preventing unintended residual rows from being ingested.
+ */
+ FAIL("fail");
+
+ private final String value;
+
+ ResidualFilterMode(String value)
+ {
+ this.value = value;
+ }
+
+ @JsonValue
+ public String getValue()
+ {
+ return value;
+ }
+
+ public static ResidualFilterMode fromString(String value)
+ {
+ for (ResidualFilterMode mode : values()) {
+ if (mode.value.equalsIgnoreCase(value)) {
+ return mode;
+ }
+ }
+ throw new IllegalArgumentException(
+ "Unknown residualFilterMode: " + value + ". Valid values are: ignore,
fail"
+ );
+ }
+}
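A note on parsing: Jackson binds this enum through `@JsonValue`, which with default mapper settings matches the string case-sensitively (our reading of Jackson's behavior), while the `fromString` helper is explicitly case-insensitive. A small sketch of the difference, assuming a plain `ObjectMapper`:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

class ModeParsingSketch
{
  public static void main(String[] args) throws Exception
  {
    ObjectMapper mapper = new ObjectMapper();
    // Matches the @JsonValue string exactly.
    System.out.println(mapper.readValue("\"fail\"", ResidualFilterMode.class)); // FAIL
    // The helper accepts any casing.
    System.out.println(ResidualFilterMode.fromString("FAIL")); // FAIL
    // mapper.readValue("\"FAIL\"", ResidualFilterMode.class) is expected to throw
    // with default settings, since @JsonValue matching is case-sensitive.
  }
}
```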
diff --git a/extensions-contrib/druid-iceberg-extensions/src/test/java/org/apache/druid/iceberg/input/IcebergInputSourceTest.java b/extensions-contrib/druid-iceberg-extensions/src/test/java/org/apache/druid/iceberg/input/IcebergInputSourceTest.java
index 5a2429d6c7c..b42b3eaa030 100644
--- a/extensions-contrib/druid-iceberg-extensions/src/test/java/org/apache/druid/iceberg/input/IcebergInputSourceTest.java
+++ b/extensions-contrib/druid-iceberg-extensions/src/test/java/org/apache/druid/iceberg/input/IcebergInputSourceTest.java
@@ -25,11 +25,13 @@ import org.apache.druid.data.input.InputSplit;
import org.apache.druid.data.input.MaxSizeSplitHintSpec;
import org.apache.druid.data.input.impl.LocalInputSource;
import org.apache.druid.data.input.impl.LocalInputSourceFactory;
+import org.apache.druid.error.DruidException;
import org.apache.druid.iceberg.filter.IcebergEqualsFilter;
import org.apache.druid.java.util.common.DateTimes;
import org.apache.druid.java.util.common.FileUtils;
import org.apache.iceberg.DataFile;
import org.apache.iceberg.Files;
+import org.apache.iceberg.PartitionKey;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Table;
@@ -97,6 +99,7 @@ public class IcebergInputSourceTest
null,
testCatalog,
new LocalInputSourceFactory(),
+ null,
null
);
    Stream<InputSplit<List<String>>> splits = inputSource.createSplits(null, new MaxSizeSplitHintSpec(null, null));
@@ -132,6 +135,7 @@ public class IcebergInputSourceTest
new IcebergEqualsFilter("id", "0000"),
testCatalog,
new LocalInputSourceFactory(),
+ null,
null
);
    Stream<InputSplit<List<String>>> splits = inputSource.createSplits(null, new MaxSizeSplitHintSpec(null, null));
@@ -147,6 +151,7 @@ public class IcebergInputSourceTest
new IcebergEqualsFilter("id", "123988"),
testCatalog,
new LocalInputSourceFactory(),
+ null,
null
);
    Stream<InputSplit<List<String>>> splits = inputSource.createSplits(null, new MaxSizeSplitHintSpec(null, null));
@@ -182,7 +187,8 @@ public class IcebergInputSourceTest
null,
testCatalog,
new LocalInputSourceFactory(),
- DateTimes.nowUtc()
+ DateTimes.nowUtc(),
+ null
);
    Stream<InputSplit<List<String>>> splits = inputSource.createSplits(null, new MaxSizeSplitHintSpec(null, null));
Assert.assertEquals(1, splits.count());
@@ -201,6 +207,7 @@ public class IcebergInputSourceTest
new IcebergEqualsFilter("name", "Foo"),
caseInsensitiveCatalog,
new LocalInputSourceFactory(),
+ null,
null
);
@@ -215,6 +222,97 @@ public class IcebergInputSourceTest
Assert.assertEquals(1, localInputSourceList.size());
}
+ @Test
+ public void testResidualFilterModeIgnore() throws IOException
+ {
+ // Filter on non-partition column with IGNORE mode should succeed
+ IcebergInputSource inputSource = new IcebergInputSource(
+ TABLENAME,
+ NAMESPACE,
+ new IcebergEqualsFilter("id", "123988"),
+ testCatalog,
+ new LocalInputSourceFactory(),
+ null,
+ ResidualFilterMode.IGNORE
+ );
+    Stream<InputSplit<List<String>>> splits = inputSource.createSplits(null, new MaxSizeSplitHintSpec(null, null));
+ Assert.assertEquals(1, splits.count());
+ }
+
+ @Test
+ public void testResidualFilterModeFail() throws IOException
+ {
+ // Filter on non-partition column with FAIL mode should throw exception
+ IcebergInputSource inputSource = new IcebergInputSource(
+ TABLENAME,
+ NAMESPACE,
+ new IcebergEqualsFilter("id", "123988"),
+ testCatalog,
+ new LocalInputSourceFactory(),
+ null,
+ ResidualFilterMode.FAIL
+ );
+ DruidException exception = Assert.assertThrows(
+ DruidException.class,
+        () -> inputSource.createSplits(null, new MaxSizeSplitHintSpec(null, null))
+ );
+ Assert.assertTrue(
+ "Expect residual error to be thrown",
+ exception.getMessage().contains("residual")
+ );
+ }
+
+ @Test
+  public void testResidualFilterModeFailWithPartitionedTable() throws IOException
+ {
+ // Cleanup default table first
+ tearDown();
+ // Create a partitioned table and filter on the partition column
+    tableIdentifier = TableIdentifier.of(Namespace.of(NAMESPACE), "partitionedTable");
+ createAndLoadPartitionedTable(tableIdentifier);
+
+ IcebergInputSource inputSource = new IcebergInputSource(
+ "partitionedTable",
+ NAMESPACE,
+ new IcebergEqualsFilter("id", "123988"),
+ testCatalog,
+ new LocalInputSourceFactory(),
+ null,
+ ResidualFilterMode.FAIL
+ );
+    Stream<InputSplit<List<String>>> splits = inputSource.createSplits(null, new MaxSizeSplitHintSpec(null, null));
+ Assert.assertEquals(1, splits.count());
+ }
+
+ @Test
+  public void testResidualFilterModeFailWithPartitionedTableNonPartitionColumn() throws IOException
+ {
+ // Cleanup default table first
+ tearDown();
+ // Create a partitioned table and filter on a non-partition column
+    tableIdentifier = TableIdentifier.of(Namespace.of(NAMESPACE), "partitionedTable2");
+ createAndLoadPartitionedTable(tableIdentifier);
+
+ // Filter on non-partition column with FAIL mode should throw exception
+ IcebergInputSource inputSource = new IcebergInputSource(
+ "partitionedTable2",
+ NAMESPACE,
+ new IcebergEqualsFilter("name", "Foo"),
+ testCatalog,
+ new LocalInputSourceFactory(),
+ null,
+ ResidualFilterMode.FAIL
+ );
+ DruidException exception = Assert.assertThrows(
+ DruidException.class,
+        () -> inputSource.createSplits(null, new MaxSizeSplitHintSpec(null, null))
+ );
+ Assert.assertTrue(
+ "Expect residual error to be thrown",
+ exception.getMessage().contains("residual")
+ );
+ }
+
@After
public void tearDown()
{
@@ -255,6 +353,49 @@ public class IcebergInputSourceTest
}
+  private void createAndLoadPartitionedTable(TableIdentifier tableIdentifier) throws IOException
+ {
+ // Create a partitioned table with 'id' as the partition column
+ PartitionSpec partitionSpec = PartitionSpec.builderFor(tableSchema)
+ .identity("id")
+ .build();
+    Table icebergTable = testCatalog.retrieveCatalog().createTable(tableIdentifier, tableSchema, partitionSpec);
+
+ // Generate an iceberg record and write it to a file
+ GenericRecord record = GenericRecord.create(tableSchema);
+ ImmutableList.Builder<GenericRecord> builder = ImmutableList.builder();
+
+ builder.add(record.copy(tableData));
+    String filepath = icebergTable.location() + "/data/id=123988/" + UUID.randomUUID() + ".parquet";
+ OutputFile file = icebergTable.io().newOutputFile(filepath);
+
+ // Create a partition key for the partition spec
+ PartitionKey partitionKey = new PartitionKey(partitionSpec, tableSchema);
+ partitionKey.partition(record.copy(tableData));
+
+ DataWriter<GenericRecord> dataWriter =
+ Parquet.writeData(file)
+ .schema(tableSchema)
+ .createWriterFunc(GenericParquetWriter::buildWriter)
+ .overwrite()
+ .withSpec(partitionSpec)
+ .withPartition(partitionKey)
+ .build();
+
+ try {
+ for (GenericRecord genRecord : builder.build()) {
+ dataWriter.write(genRecord);
+ }
+ }
+ finally {
+ dataWriter.close();
+ }
+ DataFile dataFile = dataWriter.toDataFile();
+
+ // Add the data file to the iceberg table
+ icebergTable.newAppend().appendFile(dataFile).commit();
+ }
+
private void dropTableFromCatalog(TableIdentifier tableIdentifier)
{
testCatalog.retrieveCatalog().dropTable(tableIdentifier);
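For context on the two partitioned-table tests above: with identity partitioning on `id`, a filter on `id` is answered entirely by partition pruning, so the per-file residual collapses to `alwaysTrue()` and `fail` mode passes; a filter on the non-partition `name` column leaves a residual and `fail` mode throws. A hedged sketch of that contrast against a `Table` partitioned as in `createAndLoadPartitionedTable`:

```java
import org.apache.iceberg.FileScanTask;
import org.apache.iceberg.Table;
import org.apache.iceberg.expressions.Expression;
import org.apache.iceberg.expressions.Expressions;
import org.apache.iceberg.io.CloseableIterable;

class PartitionPruningContrast
{
  static boolean hasResidual(Table table, Expression filter) throws Exception
  {
    try (CloseableIterable<FileScanTask> tasks = table.newScan().filter(filter).planFiles()) {
      for (FileScanTask task : tasks) {
        if (!task.residual().equals(Expressions.alwaysTrue())) {
          return true;
        }
      }
    }
    return false;
  }

  static void contrast(Table table) throws Exception
  {
    // Identity partition column: fully handled by pruning, expect false.
    System.out.println(hasResidual(table, Expressions.equal("id", "123988")));
    // Non-partition column: stats keep the file but rows still need filtering, expect true.
    System.out.println(hasResidual(table, Expressions.equal("name", "Foo")));
  }
}
```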
diff --git a/extensions-contrib/druid-iceberg-extensions/src/test/java/org/apache/druid/iceberg/input/ResidualFilterModeTest.java b/extensions-contrib/druid-iceberg-extensions/src/test/java/org/apache/druid/iceberg/input/ResidualFilterModeTest.java
new file mode 100644
index 00000000000..f7a9891f7ac
--- /dev/null
+++ b/extensions-contrib/druid-iceberg-extensions/src/test/java/org/apache/druid/iceberg/input/ResidualFilterModeTest.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.druid.iceberg.input;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class ResidualFilterModeTest
+{
+ @Test
+ public void testFromString()
+ {
+    Assert.assertEquals(ResidualFilterMode.IGNORE, ResidualFilterMode.fromString("ignore"));
+    Assert.assertEquals(ResidualFilterMode.FAIL, ResidualFilterMode.fromString("fail"));
+
+    // Test case insensitivity
+    Assert.assertEquals(ResidualFilterMode.IGNORE, ResidualFilterMode.fromString("IGNORE"));
+    Assert.assertEquals(ResidualFilterMode.FAIL, ResidualFilterMode.fromString("FAIL"));
+ }
+
+ @Test
+ public void testFromStringInvalid()
+ {
+ Assert.assertThrows(
+ IllegalArgumentException.class,
+ () -> ResidualFilterMode.fromString("invalid")
+ );
+ }
+
+ @Test
+ public void testGetValue()
+ {
+ Assert.assertEquals("ignore", ResidualFilterMode.IGNORE.getValue());
+ Assert.assertEquals("fail", ResidualFilterMode.FAIL.getValue());
+ }
+
+ @Test
+ public void testJsonSerialization() throws Exception
+ {
+ ObjectMapper mapper = new ObjectMapper();
+
+ // Test serialization
+ Assert.assertEquals("\"ignore\"",
mapper.writeValueAsString(ResidualFilterMode.IGNORE));
+ Assert.assertEquals("\"fail\"",
mapper.writeValueAsString(ResidualFilterMode.FAIL));
+
+ // Test deserialization
+ Assert.assertEquals(ResidualFilterMode.IGNORE,
mapper.readValue("\"ignore\"", ResidualFilterMode.class));
+ Assert.assertEquals(ResidualFilterMode.FAIL, mapper.readValue("\"fail\"",
ResidualFilterMode.class));
+ }
+}
diff --git a/website/.spelling b/website/.spelling
index 3486f8a8d88..905a4ae8aa0 100644
--- a/website/.spelling
+++ b/website/.spelling
@@ -224,6 +224,7 @@ ROUTINE_SCHEMA
ROUTINE_TYPE
Rackspace
Redis
+residualFilterMode
S3
SAS
SDK
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]