[
https://issues.apache.org/jira/browse/DRILL-7716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099370#comment-17099370
]
ASF GitHub Bot commented on DRILL-7716:
---------------------------------------
paul-rogers commented on a change in pull request #2067:
URL: https://github.com/apache/drill/pull/2067#discussion_r419742326
##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,83 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: (https://en.wikipedia.org/wiki/SPSS)
+ ***
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
+ ***
+
+## Configuration
+To configure Drill to read SPSS files, simply add the following code to the formats section of your file-based storage plugin. This should happen automatically for the default
Review comment:
Do we want to add all the format plugins at bootstrap? Creates a rather
intimidating-looking hunk of JSON for newbies. Of course, it would be good for
format plugins to be independent of the storage plugin, but that will come
later.
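For context, the kind of formats-section entry being discussed looks roughly like the sketch below. The `type` name and the `sav` extension are assumptions based on the plugin name; the actual bootstrap entry lives in the PR, not in this excerpt.

```json
"formats": {
  "spss": {
    "type": "spss",
    "extensions": ["sav"]
  }
}
```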
##########
File path: contrib/format-spss/README.md
##########
@@ -0,0 +1,83 @@
+# Format Plugin for SPSS (SAV) Files
+This format plugin enables Apache Drill to read and query Statistical Package for the Social Sciences (SPSS) (or Statistical Product and Service Solutions) data files. According
+ to Wikipedia: (https://en.wikipedia.org/wiki/SPSS)
+ ***
+ SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping, creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software.
Review comment:
Nit: for ease of editing, it is handy to break lines at around 80 chars.
MD will combine them to form a paragraph as if they were one long line.
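To illustrate the point, these two source lines:

```
SPSS is a widely used program for statistical analysis in social science.
It is also used by market researchers, health researchers, and others.
```

render as a single paragraph, so hard-wrapping near 80 characters costs nothing in the generated output.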
##########
File path: contrib/format-spss/pom.xml
##########
@@ -0,0 +1,88 @@
+<?xml version="1.0"?>
+<!--
+
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements. See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership. The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+
+ <parent>
+ <artifactId>drill-contrib-parent</artifactId>
+ <groupId>org.apache.drill.contrib</groupId>
+ <version>1.18.0-SNAPSHOT</version>
+ </parent>
+
+ <artifactId>drill-format-spss</artifactId>
+ <name>contrib/format-spss</name>
+
+ <dependencies>
+ <dependency>
+ <groupId>org.apache.drill.exec</groupId>
+ <artifactId>drill-java-exec</artifactId>
+ <version>${project.version}</version>
+ </dependency>
+ <dependency>
+ <groupId>com.bedatadriven.spss</groupId>
+ <artifactId>spss-reader</artifactId>
+ <version>1.3</version>
+ </dependency>
+
+ <!-- Test dependencies -->
+ <dependency>
+ <groupId>org.apache.drill.exec</groupId>
+ <artifactId>drill-java-exec</artifactId>
+ <classifier>tests</classifier>
+ <version>${project.version}</version>
+ <scope>test</scope>
+ </dependency>
+
+ <dependency>
+ <groupId>org.apache.drill</groupId>
+ <artifactId>drill-common</artifactId>
+ <classifier>tests</classifier>
+ <version>${project.version}</version>
+ <scope>test</scope>
+ </dependency>
+ </dependencies>
+ <build>
+ <plugins>
+ <plugin>
+ <artifactId>maven-resources-plugin</artifactId>
+ <executions>
+ <execution>
+ <id>copy-java-sources</id>
+ <phase>process-sources</phase>
+ <goals>
+ <goal>copy-resources</goal>
+ </goals>
+ <configuration>
+              <outputDirectory>${basedir}/target/classes/org/apache/drill/exec/store/syslog
Review comment:
`syslog`?
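The reviewer appears to be flagging a copy/paste left over from the syslog plugin's pom. If so, the corrected element would presumably point at the plugin's own package (an assumption; the excerpt is cut off before the closing tag):

```xml
<outputDirectory>${basedir}/target/classes/org/apache/drill/exec/store/spss</outputDirectory>
```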
##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+ private static final String VALUE_LABEL = "_value";
+
+ private FileSplit split;
+
+ private InputStream fsStream;
+
+ private SpssDataFileReader spssReader;
+
+ private RowSetLoader rowWriter;
+
+ private List<SpssVariable> variableList;
+
+ private List<SpssColumnWriter> writerList;
+
+ private CustomErrorContext errorContext;
+
+
+ public static class SpssReaderConfig {
+
+ protected final SpssFormatPlugin plugin;
+
+ public SpssReaderConfig(SpssFormatPlugin plugin) {
+ this.plugin = plugin;
+ }
+ }
+
+ @Override
+ public boolean open(FileSchemaNegotiator negotiator) {
+ split = negotiator.split();
+ openFile(negotiator);
+ negotiator.tableSchema(buildSchema(), true);
+ errorContext = negotiator.parentErrorContext();
+ ResultSetLoader loader = negotiator.build();
+ rowWriter = loader.writer();
+ buildReaderList();
+
+ return true;
+ }
+
+ @Override
+ public boolean next() {
+ while (!rowWriter.isFull()) {
+ if (!processNextRow()) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ @Override
+ public void close() {
+ if (fsStream != null) {
+ try {
+ fsStream.close();
Review comment:
I think `Closeables.closeSilently(fsStream)` is the preferred approach
these days.
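For readers unfamiliar with the idiom, below is a standalone sketch of a close-silently helper. The name `closeSilently` matches the reviewer's suggestion; the rest (varargs signature, null tolerance) is an assumption about how such helpers are typically shaped, not Drill's actual implementation.

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseSilently {

  // Close each resource, swallowing any IOException so that cleanup in a
  // close() method cannot mask an earlier, more interesting failure.
  // Null entries are skipped, which removes null checks at every call site.
  public static void closeSilently(Closeable... resources) {
    for (Closeable resource : resources) {
      if (resource == null) {
        continue;
      }
      try {
        resource.close();
      } catch (IOException e) {
        // Intentionally ignored; a production helper would log at WARN here.
      }
    }
  }

  public static void main(String[] args) {
    // A Closeable that always fails: closeSilently absorbs the exception.
    Closeable failing = () -> { throw new IOException("boom"); };
    closeSilently(failing, null);
    System.out.println("ok");
  }
}
```

With a helper like this, the whole try/catch in `close()` collapses to a single call.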
##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+ private static final String VALUE_LABEL = "_value";
+
+ private FileSplit split;
+
+ private InputStream fsStream;
+
+ private SpssDataFileReader spssReader;
+
+ private RowSetLoader rowWriter;
+
+ private List<SpssVariable> variableList;
+
+ private List<SpssColumnWriter> writerList;
+
+ private CustomErrorContext errorContext;
+
+
+ public static class SpssReaderConfig {
+
+ protected final SpssFormatPlugin plugin;
+
+ public SpssReaderConfig(SpssFormatPlugin plugin) {
+ this.plugin = plugin;
+ }
+ }
+
+ @Override
+ public boolean open(FileSchemaNegotiator negotiator) {
+ split = negotiator.split();
+ openFile(negotiator);
+ negotiator.tableSchema(buildSchema(), true);
+ errorContext = negotiator.parentErrorContext();
+ ResultSetLoader loader = negotiator.build();
+ rowWriter = loader.writer();
+ buildReaderList();
+
+ return true;
+ }
+
+ @Override
+ public boolean next() {
+ while (!rowWriter.isFull()) {
+ if (!processNextRow()) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ @Override
+ public void close() {
+ if (fsStream != null) {
+ try {
+ fsStream.close();
+ } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+ }
+ fsStream = null;
+ }
+ }
+
+ private void openFile(FileSchemaNegotiator negotiator) {
+ try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+ spssReader = new SpssDataFileReader(fsStream);
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Unable to open SPSS File %s", split.getPath())
+ .addContext(e.getMessage())
+ .addContext(errorContext)
+ .build(logger);
+ }
+ }
+
+ private boolean processNextRow() {
+ try {
+ // Stop reading when you run out of data
+ if (!spssReader.readNextCase()) {
+ return false;
+ }
+
+ rowWriter.start();
+ for (SpssColumnWriter spssColumnWriter : writerList) {
+ spssColumnWriter.load(spssReader);
+ }
+ rowWriter.save();
+
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Error reading SPSS File.")
+ .addContext(e.getMessage())
+ .addContext(errorContext)
+ .build(logger);
+ }
+ return true;
+ }
+
+ private TupleMetadata buildSchema() {
+ SchemaBuilder builder = new SchemaBuilder();
+ variableList = spssReader.getVariables();
+
+ for (SpssVariable variable : variableList) {
+ String varName = variable.getVariableName();
+
+ if (variable.isNumeric()) {
+ builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+ // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+ }
+
+ } else {
+ builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+ }
+ }
+ return builder.buildSchema();
+ }
+
+ private void buildReaderList() {
+ writerList = new ArrayList<>();
+
+ for(SpssVariable variable : variableList) {
Review comment:
Nit: `for (`
##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+ private static final String VALUE_LABEL = "_value";
+
+ private FileSplit split;
+
+ private InputStream fsStream;
+
+ private SpssDataFileReader spssReader;
+
+ private RowSetLoader rowWriter;
+
+ private List<SpssVariable> variableList;
+
+ private List<SpssColumnWriter> writerList;
+
+ private CustomErrorContext errorContext;
+
+
+ public static class SpssReaderConfig {
+
+ protected final SpssFormatPlugin plugin;
+
+ public SpssReaderConfig(SpssFormatPlugin plugin) {
+ this.plugin = plugin;
+ }
+ }
+
+ @Override
+ public boolean open(FileSchemaNegotiator negotiator) {
+ split = negotiator.split();
+ openFile(negotiator);
+ negotiator.tableSchema(buildSchema(), true);
+ errorContext = negotiator.parentErrorContext();
+ ResultSetLoader loader = negotiator.build();
+ rowWriter = loader.writer();
+ buildReaderList();
+
+ return true;
+ }
+
+ @Override
+ public boolean next() {
+ while (!rowWriter.isFull()) {
+ if (!processNextRow()) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ @Override
+ public void close() {
+ if (fsStream != null) {
+ try {
+ fsStream.close();
+ } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+ }
+ fsStream = null;
+ }
+ }
+
+ private void openFile(FileSchemaNegotiator negotiator) {
+ try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+ spssReader = new SpssDataFileReader(fsStream);
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Unable to open SPSS File %s", split.getPath())
+ .addContext(e.getMessage())
+ .addContext(errorContext)
+ .build(logger);
+ }
+ }
+
+ private boolean processNextRow() {
+ try {
+ // Stop reading when you run out of data
+ if (!spssReader.readNextCase()) {
+ return false;
+ }
+
+ rowWriter.start();
+ for (SpssColumnWriter spssColumnWriter : writerList) {
+ spssColumnWriter.load(spssReader);
+ }
+ rowWriter.save();
+
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Error reading SPSS File.")
+ .addContext(e.getMessage())
Review comment:
Not necessary. Better is:
```
.context("Error reading SPSS File.")
```
And let the message be the underlying error.
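Applied to the catch block above, the reviewer's suggestion would make the handler look roughly like this sketch. The `context(...)` name is taken verbatim from the reviewer's snippet; exactly which builder calls it replaces is an assumption.

```java
} catch (IOException e) {
  throw UserException
      .dataReadError(e)                    // the IOException supplies the message
      .context("Error reading SPSS file")  // per the reviewer's snippet
      .addContext(errorContext)
      .build(logger);
}
```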
##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+ private static final String VALUE_LABEL = "_value";
+
+ private FileSplit split;
+
+ private InputStream fsStream;
+
+ private SpssDataFileReader spssReader;
+
+ private RowSetLoader rowWriter;
+
+ private List<SpssVariable> variableList;
+
+ private List<SpssColumnWriter> writerList;
+
+ private CustomErrorContext errorContext;
+
+
+ public static class SpssReaderConfig {
+
+ protected final SpssFormatPlugin plugin;
+
+ public SpssReaderConfig(SpssFormatPlugin plugin) {
+ this.plugin = plugin;
+ }
+ }
+
+ @Override
+ public boolean open(FileSchemaNegotiator negotiator) {
+ split = negotiator.split();
+ openFile(negotiator);
+ negotiator.tableSchema(buildSchema(), true);
+ errorContext = negotiator.parentErrorContext();
+ ResultSetLoader loader = negotiator.build();
+ rowWriter = loader.writer();
+ buildReaderList();
+
+ return true;
+ }
+
+ @Override
+ public boolean next() {
+ while (!rowWriter.isFull()) {
+ if (!processNextRow()) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ @Override
+ public void close() {
+ if (fsStream != null) {
+ try {
+ fsStream.close();
+ } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+ }
+ fsStream = null;
+ }
+ }
+
+ private void openFile(FileSchemaNegotiator negotiator) {
+ try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+ spssReader = new SpssDataFileReader(fsStream);
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Unable to open SPSS File %s", split.getPath())
+ .addContext(e.getMessage())
+ .addContext(errorContext)
+ .build(logger);
+ }
+ }
+
+ private boolean processNextRow() {
+ try {
+ // Stop reading when you run out of data
+ if (!spssReader.readNextCase()) {
+ return false;
+ }
+
+ rowWriter.start();
+ for (SpssColumnWriter spssColumnWriter : writerList) {
+ spssColumnWriter.load(spssReader);
+ }
+ rowWriter.save();
+
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Error reading SPSS File.")
+ .addContext(e.getMessage())
+ .addContext(errorContext)
+ .build(logger);
+ }
+ return true;
+ }
+
+ private TupleMetadata buildSchema() {
+ SchemaBuilder builder = new SchemaBuilder();
+ variableList = spssReader.getVariables();
+
+ for (SpssVariable variable : variableList) {
+ String varName = variable.getVariableName();
+
+ if (variable.isNumeric()) {
+ builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+ // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+ }
+
+ } else {
+ builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+ }
+ }
+ return builder.buildSchema();
+ }
+
+ private void buildReaderList() {
+ writerList = new ArrayList<>();
+
+ for(SpssVariable variable : variableList) {
+ if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+ }
+ }
+ }
+
+ public abstract static class SpssColumnWriter {
+ final String columnName;
+
+ final ScalarWriter writer;
Review comment:
Nit: no need to double-space fields. Single-spacing is more compact.
##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+ private static final String VALUE_LABEL = "_value";
+
+ private FileSplit split;
+
+ private InputStream fsStream;
+
+ private SpssDataFileReader spssReader;
+
+ private RowSetLoader rowWriter;
+
+ private List<SpssVariable> variableList;
+
+ private List<SpssColumnWriter> writerList;
+
+ private CustomErrorContext errorContext;
+
+
+ public static class SpssReaderConfig {
+
+ protected final SpssFormatPlugin plugin;
+
+ public SpssReaderConfig(SpssFormatPlugin plugin) {
+ this.plugin = plugin;
+ }
+ }
+
+ @Override
+ public boolean open(FileSchemaNegotiator negotiator) {
+ split = negotiator.split();
+ openFile(negotiator);
+ negotiator.tableSchema(buildSchema(), true);
+ errorContext = negotiator.parentErrorContext();
+ ResultSetLoader loader = negotiator.build();
+ rowWriter = loader.writer();
+ buildReaderList();
+
+ return true;
+ }
+
+ @Override
+ public boolean next() {
+ while (!rowWriter.isFull()) {
+ if (!processNextRow()) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ @Override
+ public void close() {
+ if (fsStream != null) {
+ try {
+ fsStream.close();
+ } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+ }
+ fsStream = null;
+ }
+ }
+
+ private void openFile(FileSchemaNegotiator negotiator) {
+ try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+ spssReader = new SpssDataFileReader(fsStream);
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Unable to open SPSS File %s", split.getPath())
+ .addContext(e.getMessage())
+ .addContext(errorContext)
+ .build(logger);
+ }
+ }
+
+ private boolean processNextRow() {
+ try {
+ // Stop reading when you run out of data
+ if (!spssReader.readNextCase()) {
+ return false;
+ }
+
+ rowWriter.start();
+ for (SpssColumnWriter spssColumnWriter : writerList) {
+ spssColumnWriter.load(spssReader);
+ }
+ rowWriter.save();
+
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Error reading SPSS File.")
+ .addContext(e.getMessage())
+ .addContext(errorContext)
+ .build(logger);
+ }
+ return true;
+ }
+
+ private TupleMetadata buildSchema() {
+ SchemaBuilder builder = new SchemaBuilder();
+ variableList = spssReader.getVariables();
+
+ for (SpssVariable variable : variableList) {
+ String varName = variable.getVariableName();
+
+ if (variable.isNumeric()) {
+ builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+ // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+ }
+
+ } else {
+ builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+ }
+ }
+ return builder.buildSchema();
+ }
+
+ private void buildReaderList() {
+ writerList = new ArrayList<>();
+
+ for(SpssVariable variable : variableList) {
+ if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+ }
+ }
+ }
+
+ public abstract static class SpssColumnWriter {
+ final String columnName;
+
+ final ScalarWriter writer;
+
+ public SpssColumnWriter(String columnName, ScalarWriter writer) {
+ this.columnName = columnName;
+ this.writer = writer;
+ }
+
+
Review comment:
Nit: extra newlines.
##########
File path: contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.drill.exec.store.spss;
+
+import com.bedatadriven.spss.SpssDataFileReader;
+import com.bedatadriven.spss.SpssVariable;
+import org.apache.drill.common.exceptions.CustomErrorContext;
+import org.apache.drill.common.exceptions.UserException;
+import org.apache.drill.common.types.TypeProtos;
+import org.apache.drill.exec.physical.impl.scan.file.FileScanFramework.FileSchemaNegotiator;
+import org.apache.drill.exec.physical.impl.scan.framework.ManagedReader;
+import org.apache.drill.exec.physical.resultSet.ResultSetLoader;
+import org.apache.drill.exec.physical.resultSet.RowSetLoader;
+import org.apache.drill.exec.record.metadata.SchemaBuilder;
+import org.apache.drill.exec.record.metadata.TupleMetadata;
+import org.apache.drill.exec.vector.accessor.ScalarWriter;
+import org.apache.hadoop.mapred.FileSplit;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+public class SpssBatchReader implements ManagedReader<FileSchemaNegotiator> {
+
+  private static final Logger logger = LoggerFactory.getLogger(SpssBatchReader.class);
+
+ private static final String VALUE_LABEL = "_value";
+
+ private FileSplit split;
+
+ private InputStream fsStream;
+
+ private SpssDataFileReader spssReader;
+
+ private RowSetLoader rowWriter;
+
+ private List<SpssVariable> variableList;
+
+ private List<SpssColumnWriter> writerList;
+
+ private CustomErrorContext errorContext;
+
+
+ public static class SpssReaderConfig {
+
+ protected final SpssFormatPlugin plugin;
+
+ public SpssReaderConfig(SpssFormatPlugin plugin) {
+ this.plugin = plugin;
+ }
+ }
+
+ @Override
+ public boolean open(FileSchemaNegotiator negotiator) {
+ split = negotiator.split();
+ openFile(negotiator);
+ negotiator.tableSchema(buildSchema(), true);
+ errorContext = negotiator.parentErrorContext();
+ ResultSetLoader loader = negotiator.build();
+ rowWriter = loader.writer();
+ buildReaderList();
+
+ return true;
+ }
+
+ @Override
+ public boolean next() {
+ while (!rowWriter.isFull()) {
+ if (!processNextRow()) {
+ return false;
+ }
+ }
+ return true;
+ }
+
+ @Override
+ public void close() {
+ if (fsStream != null) {
+ try {
+ fsStream.close();
+ } catch (IOException e) {
+        logger.warn("Error when closing SPSS File Stream resource: {}", e.getMessage());
+ }
+ fsStream = null;
+ }
+ }
+
+ private void openFile(FileSchemaNegotiator negotiator) {
+ try {
+      fsStream = negotiator.fileSystem().openPossiblyCompressedStream(split.getPath());
+ spssReader = new SpssDataFileReader(fsStream);
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Unable to open SPSS File %s", split.getPath())
+ .addContext(e.getMessage())
+ .addContext(errorContext)
+ .build(logger);
+ }
+ }
+
+ private boolean processNextRow() {
+ try {
+ // Stop reading when you run out of data
+ if (!spssReader.readNextCase()) {
+ return false;
+ }
+
+ rowWriter.start();
+ for (SpssColumnWriter spssColumnWriter : writerList) {
+ spssColumnWriter.load(spssReader);
+ }
+ rowWriter.save();
+
+ } catch (IOException e) {
+ throw UserException
+ .dataReadError(e)
+ .message("Error reading SPSS File.")
+ .addContext(e.getMessage())
+ .addContext(errorContext)
+ .build(logger);
+ }
+ return true;
+ }
+
+ private TupleMetadata buildSchema() {
+ SchemaBuilder builder = new SchemaBuilder();
+ variableList = spssReader.getVariables();
+
+ for (SpssVariable variable : variableList) {
+ String varName = variable.getVariableName();
+
+ if (variable.isNumeric()) {
+ builder.addNullable(varName, TypeProtos.MinorType.FLOAT8);
+
+ // Check if the column has lookups associated with it
+        if (variable.getValueLabels() != null && variable.getValueLabels().size() > 0) {
+          builder.addNullable(varName + VALUE_LABEL, TypeProtos.MinorType.VARCHAR);
+ }
+
+ } else {
+ builder.addNullable(varName, TypeProtos.MinorType.VARCHAR);
+ }
+ }
+ return builder.buildSchema();
+ }
+
+ private void buildReaderList() {
+ writerList = new ArrayList<>();
+
+ for(SpssVariable variable : variableList) {
+ if (variable.isNumeric()) {
+        writerList.add(new NumericSpssColumnWriter(variable.getVariableName(), rowWriter, spssReader));
+      } else {
+        writerList.add(new StringSpssColumnWriter(variable.getVariableName(), rowWriter));
+ }
+ }
+ }
+
+ public abstract static class SpssColumnWriter {
+ final String columnName;
+
+ final ScalarWriter writer;
+
+ public SpssColumnWriter(String columnName, ScalarWriter writer) {
+ this.columnName = columnName;
+ this.writer = writer;
+ }
+
+
+ public abstract void load (SpssDataFileReader reader);
+ }
+
+ public static class StringSpssColumnWriter extends SpssColumnWriter {
+
+ StringSpssColumnWriter (String columnName, RowSetLoader rowWriter) {
+ super(columnName, rowWriter.scalar(columnName));
+ }
+
+ @Override
+ public void load(SpssDataFileReader reader) {
+ writer.setString(reader.getStringValue(columnName));
+ }
+ }
+
+ public static class NumericSpssColumnWriter extends SpssColumnWriter {
+
+ ScalarWriter labelWriter;
+
+ Map<Double, String> labels;
+
+ boolean hasLabels;
+
+    NumericSpssColumnWriter(String columnName, RowSetLoader rowWriter, SpssDataFileReader reader) {
+      super(columnName, rowWriter.scalar(columnName));
+
+      if (reader.getValueLabels(columnName) != null && reader.getValueLabels(columnName).size() != 0) {
+ labelWriter = rowWriter.scalar(columnName + VALUE_LABEL);
+ labels = reader.getValueLabels(columnName);
+ hasLabels = true;
+ }
+ }
+
+ @Override
+ public void load(SpssDataFileReader reader) {
+
+ double value = reader.getDoubleValue(columnName);
Review comment:
Nice use of your `SpssColumnWriter` class to avoid a name lookup for
each column write. I wonder, does SPSS provide an indexed way to get values? Do
the values form a row (tuple) in addition to a map?
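The pattern being praised — resolving each column's writer once at open time, then iterating the resolved list for every row instead of doing a name lookup per column per row — can be shown as a generic, self-contained sketch. The names below are hypothetical stand-ins, not the actual Drill or SPSS APIs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: bind each column name to its writer once, then loop over the
// pre-resolved bindings per row, avoiding repeated per-row name lookups.
class ColumnBinding {
  interface ScalarWriter { void set(Object v); }

  static final class BoundColumn {
    final String name;
    final ScalarWriter writer;
    BoundColumn(String name, ScalarWriter writer) {
      this.name = name;
      this.writer = writer;
    }
    void load(Map<String, Object> row) {
      writer.set(row.get(name));  // no writer lookup in the per-row path
    }
  }

  public static void main(String[] args) {
    List<Object> sink = new ArrayList<>();
    // Bind once, at "open" time.
    List<BoundColumn> bound = List.of(
        new BoundColumn("Survey", sink::add),
        new BoundColumn("Age", sink::add));
    // Per-row loop touches only the pre-resolved writers.
    for (Map<String, Object> row : List.of(
        Map.<String, Object>of("Survey", 1, "Age", 34))) {
      for (BoundColumn c : bound) {
        c.load(row);
      }
    }
    System.out.println(sink); // [1, 34]
  }
}
```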
##########
File path:
contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+ public static class NumericSpssColumnWriter extends SpssColumnWriter {
+
+ ScalarWriter labelWriter;
+
+ Map<Double, String> labels;
+
+ boolean hasLabels;
+
+    NumericSpssColumnWriter(String columnName, RowSetLoader rowWriter, SpssDataFileReader reader) {
+ super(columnName, rowWriter.scalar(columnName));
+
+      if (reader.getValueLabels(columnName) != null && reader.getValueLabels(columnName).size() != 0) {
+ labelWriter = rowWriter.scalar(columnName + VALUE_LABEL);
+ labels = reader.getValueLabels(columnName);
+ hasLabels = true;
Review comment:
Nit: `hasLabels` is redundant: can check if `labelWriter` is `null`
below.
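A minimal sketch of that nit, using simplified hypothetical classes rather than the actual Drill writer API: the nullability of the label map (or `labelWriter`) already encodes what `hasLabels` records, so the flag can be dropped.

```java
import java.util.Map;

// Sketch of the suggested simplification: instead of keeping a separate
// hasLabels flag, treat a null labels reference as "column has no labels".
// Stand-in class for illustration only.
class LabelLookupSketch {
  private final Map<Double, String> labels;  // null => no value labels

  LabelLookupSketch(Map<Double, String> valueLabels) {
    this.labels = (valueLabels == null || valueLabels.isEmpty())
        ? null : valueLabels;
  }

  // The per-row check tests the reference directly, as the review suggests.
  String labelFor(double value) {
    return (labels == null) ? null : labels.get(value);
  }

  public static void main(String[] args) {
    LabelLookupSketch survey =
        new LabelLookupSketch(Map.of(1.0, "Yes", 2.0, "No", 99.0, "No Answer"));
    LabelLookupSketch age = new LabelLookupSketch(null);
    System.out.println(survey.labelFor(99.0)); // No Answer
    System.out.println(age.labelFor(34.0));    // null
  }
}
```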
##########
File path:
contrib/format-spss/src/main/java/org/apache/drill/exec/store/spss/SpssBatchReader.java
##########
@@ -0,0 +1,240 @@
+ public static class NumericSpssColumnWriter extends SpssColumnWriter {
+
+ ScalarWriter labelWriter;
+
+ Map<Double, String> labels;
Review comment:
This makes me nervous: `double` is a fragile thing to map from. Does
SPSS require that indexed columns have integer values? If so, map from an
`Integer`, which is more reliable.
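One way to act on this suggestion, assuming labeled SPSS values are in practice whole numbers (an assumption about the data, not something the SAV format is known to guarantee), is to convert the map once at bind time. A hypothetical sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the Integer-keyed alternative: convert the Double-keyed label
// map once, so per-row lookups compare exact integers rather than relying
// on Double equality. Illustration only; assumes whole-number labels.
class IntegerKeyedLabels {
  private final Map<Integer, String> labels = new HashMap<>();

  IntegerKeyedLabels(Map<Double, String> doubleKeyed) {
    for (Map.Entry<Double, String> e : doubleKeyed.entrySet()) {
      labels.put((int) Math.round(e.getKey()), e.getValue());
    }
  }

  String lookup(double cellValue) {
    // Round once here; tolerates floating-point noise picked up in I/O.
    return labels.get((int) Math.round(cellValue));
  }

  public static void main(String[] args) {
    IntegerKeyedLabels m = new IntegerKeyedLabels(
        Map.of(1.0, "Yes", 2.0, "No", 99.0, "No Answer"));
    System.out.println(m.lookup(99.0));               // No Answer
    System.out.println(m.lookup(1.0000000000000002)); // Yes
  }
}
```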
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Create Format Plugin for SPSS Files
> -----------------------------------
>
> Key: DRILL-7716
> URL: https://issues.apache.org/jira/browse/DRILL-7716
> Project: Apache Drill
> Issue Type: Improvement
> Components: Storage - Text & CSV
> Affects Versions: 1.17.0
> Reporter: Charles Givre
> Assignee: Charles Givre
> Priority: Major
> Labels: enhancement, ready-to-commit
> Fix For: 1.18.0
>
>
> # Format Plugin for SPSS (SAV) Files
> This format plugin enables Apache Drill to read and query Statistical Package
> for the Social Sciences (SPSS) (or Statistical Product and Service Solutions)
> data files. According
> to Wikipedia: [1]
>
> SPSS is a widely used program for statistical analysis in social science. It
> is also used by market researchers, health researchers, survey companies,
> government, education researchers, marketing organizations, data miners, and
> others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described
> as one of "sociology's most influential books" for allowing ordinary
> researchers to do their own statistical analysis. In addition to statistical
> analysis, data management (case selection, file reshaping, creating derived
> data) and data documentation (a metadata dictionary is stored in the
> datafile) are features of the base software.
>
>
> ## Configuration
> To configure Drill to read SPSS files, simply add the following code to the
> formats section of your file-based storage plugin. This should happen
> automatically for the default
> `cp`, `dfs`, and `S3` storage plugins.
>
> Other than the file extensions, there are no variables to configure.
>
> ```json
> "spss": {
> "type": "spss",
> "extensions": [
> "sav"
> ]
> }
> ```
> ## Data Model
> SPSS supports only two data types: numeric and string. Drill maps these to
> `DOUBLE` and `VARCHAR`, respectively. However, some numeric columns carry
> value labels that map numbers to text, similar to an `enum` field in Java.
>
> For instance, a field called `Survey` might have labels as shown below:
>
> <table>
> <tr>
> <th>Value</th>
> <th>Text</th>
> </tr>
> <tr>
> <td>1</td>
> <td>Yes</td>
> </tr>
> <tr>
> <td>2</td>
> <td>No</td>
> </tr>
> <tr>
> <td>99</td>
> <td>No Answer</td>
> </tr>
> </table>
> For columns like this, Drill will create two output columns. In the example
> above you would get a column called `Survey`, which holds the numeric value
> (1, 2, or 99), as well as a column called `Survey_value`, which maps the
> number to its label. Thus, the results would look something like this:
>
> <table>
> <tr>
> <th>`Survey`</th>
> <th>`Survey_value`</th>
> </tr>
> <tr>
> <td>1</td>
> <td>Yes</td>
> </tr>
> <tr>
> <td>1</td>
> <td>Yes</td>
> </tr>
> <tr>
> <td>1</td>
> <td>Yes</td>
> </tr>
> <tr>
> <td>2</td>
> <td>No</td>
> </tr>
> <tr>
> <td>1</td>
> <td>Yes</td>
> </tr>
> <tr>
> <td>2</td>
> <td>No</td>
> </tr>
> <tr>
> <td>99</td>
> <td>No Answer</td>
> </tr>
> </table>
> [1]: https://en.wikipedia.org/wiki/SPSS
--
This message was sent by Atlassian Jira
(v8.3.4#803005)