[https://issues.apache.org/jira/browse/BEAM-5309?focusedWorklogId=157567&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-157567]
ASF GitHub Bot logged work on BEAM-5309:
----------------------------------------
Author: ASF GitHub Bot
Created on: 23/Oct/18 13:56
Start Date: 23/Oct/18 13:56
Worklog Time Spent: 10m
Work Description: b923 commented on a change in pull request #6691:
WIP:[BEAM-5309] Add streaming support for HadoopFormatIO
URL: https://github.com/apache/beam/pull/6691#discussion_r227402756
##########
File path:
sdks/java/io/hadoop-format/src/main/java/org/apache/beam/sdk/io/hadoop/format/HadoopFormatIO.java
##########
@@ -0,0 +1,1247 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.beam.sdk.io.hadoop.format;
+
+import static com.google.common.base.Preconditions.checkArgument;
+import static java.util.Objects.requireNonNull;
+
+import com.google.auto.value.AutoValue;
+import com.google.common.collect.Iterables;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Objects;
+import java.util.Random;
+import javax.annotation.Nullable;
+import org.apache.beam.sdk.annotations.Experimental;
+import org.apache.beam.sdk.coders.AtomicCoder;
+import org.apache.beam.sdk.options.PipelineOptions;
+import org.apache.beam.sdk.transforms.Combine;
+import org.apache.beam.sdk.transforms.CombineFnBase;
+import org.apache.beam.sdk.transforms.Create;
+import org.apache.beam.sdk.transforms.DoFn;
+import org.apache.beam.sdk.transforms.GroupByKey;
+import org.apache.beam.sdk.transforms.PTransform;
+import org.apache.beam.sdk.transforms.ParDo;
+import org.apache.beam.sdk.transforms.View;
+import org.apache.beam.sdk.transforms.display.DisplayData;
+import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
+import org.apache.beam.sdk.transforms.windowing.DefaultTrigger;
+import org.apache.beam.sdk.values.KV;
+import org.apache.beam.sdk.values.PCollection;
+import org.apache.beam.sdk.values.PCollectionView;
+import org.apache.beam.sdk.values.PDone;
+import org.apache.beam.sdk.values.TypeDescriptor;
+import org.apache.beam.sdk.values.TypeDescriptors;
+import org.apache.beam.sdk.values.WindowingStrategy;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapreduce.JobID;
+import org.apache.hadoop.mapreduce.MRJobConfig;
+import org.apache.hadoop.mapreduce.OutputCommitter;
+import org.apache.hadoop.mapreduce.OutputFormat;
+import org.apache.hadoop.mapreduce.Partitioner;
+import org.apache.hadoop.mapreduce.RecordWriter;
+import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.mapreduce.TaskID;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+import org.apache.hadoop.mapreduce.task.JobContextImpl;
+import org.joda.time.Duration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A {@link HadoopFormatIO} is a Transform for writing data to any sink which implements Hadoop
+ * {@link OutputFormat}. For example: Cassandra, Elasticsearch, HBase, Redis, Postgres, etc. {@link
+ * HadoopFormatIO} has to make several performance trade-offs in connecting to {@link OutputFormat},
+ * so if there is another Beam IO Transform specifically for connecting to your data sink of choice,
+ * we would recommend using that one, but this IO Transform allows you to connect to many data sinks
+ * that do not yet have a Beam IO Transform.
+ *
+ * <p>You will need to pass a Hadoop {@link Configuration} with parameters specifying how the write
+ * will occur. Many properties of the Configuration are optional, and some are required for certain
+ * {@link OutputFormat} classes, but the following properties must be set for all OutputFormats:
+ *
+ * <ul>
+ *   <li>{@code mapreduce.job.outputformat.class}: The {@link OutputFormat} class used to connect to
+ *       your data sink of choice.
+ *   <li>{@code mapreduce.job.output.key.class}: The key class passed to the {@link OutputFormat} in
+ *       {@code mapreduce.job.outputformat.class}.
+ *   <li>{@code mapreduce.job.output.value.class}: The value class passed to the {@link
+ *       OutputFormat} in {@code mapreduce.job.outputformat.class}.
+ *   <li>{@code mapreduce.job.reduces}: The number of reduce tasks, which is equal to the number of
+ *       write tasks that will be generated. This property is not required for a {@link
+ *       Write.Builder#withConfigurationWithoutPartitioning(Configuration)} write.
+ *   <li>{@code mapreduce.job.partitioner.class}: The Hadoop partitioner class that will be used for
+ *       distributing records among partitions. This property is not required for a {@link
+ *       Write.Builder#withConfigurationWithoutPartitioning(Configuration)} write.
+ * </ul>
+ *
+ * <b>Note:</b> Each of the properties above has a corresponding constant, e.g. {@link
+ * #OUTPUT_FORMAT_CLASS_ATTR}.
+ *
+ * <p>For example:
+ *
+ * <pre>{@code
+ * Configuration myHadoopConfiguration = new Configuration(false);
+ * // Set Hadoop OutputFormat, key, value and partitioner classes in the configuration
+ * myHadoopConfiguration.setClass("mapreduce.job.outputformat.class",
+ *    MyDbOutputFormatClass, OutputFormat.class);
+ * myHadoopConfiguration.setClass("mapreduce.job.output.key.class",
+ *    MyDbOutputFormatKeyClass, Object.class);
+ * myHadoopConfiguration.setClass("mapreduce.job.output.value.class",
+ *    MyDbOutputFormatValueClass, Object.class);
+ * myHadoopConfiguration.setClass("mapreduce.job.partitioner.class",
+ *    MyPartitionerClass, Partitioner.class);
+ * myHadoopConfiguration.setInt("mapreduce.job.reduces", 2);
+ * }</pre>
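+ *
+ * <p>For instance, the first property above can also be set through the provided constant rather
+ * than the raw property name:
+ *
+ * <pre>{@code
+ * myHadoopConfiguration.setClass(HadoopFormatIO.OUTPUT_FORMAT_CLASS_ATTR,
+ *    MyDbOutputFormatClass, OutputFormat.class);
+ * }</pre>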
+ *
+ * <p>You will need to set the OutputFormat key and value classes (i.e.
+ * "mapreduce.job.output.key.class" and "mapreduce.job.output.value.class") in the Hadoop {@link
+ * Configuration}, and they must be equal to {@code KeyT} and {@code ValueT}. If the key or value
+ * class set in the configuration differs from the OutputFormat's actual key or value class, an
+ * {@link IllegalArgumentException} is thrown.
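+ *
+ * <p>As a minimal sketch of this constraint, assume Hadoop's {@code SequenceFileOutputFormat}
+ * (chosen here only for illustration; as a {@code FileOutputFormat} it would additionally need an
+ * output directory configured), which writes {@code Text} keys and {@code LongWritable} values:
+ *
+ * <pre>{@code
+ * Configuration conf = new Configuration(false);
+ * conf.setClass("mapreduce.job.outputformat.class",
+ *    SequenceFileOutputFormat.class, OutputFormat.class);
+ * // KeyT and ValueT of the resulting write must be Text and LongWritable,
+ * // matching the declared key and value classes exactly:
+ * conf.setClass("mapreduce.job.output.key.class", Text.class, Object.class);
+ * conf.setClass("mapreduce.job.output.value.class", LongWritable.class, Object.class);
+ * }</pre>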
+ *
+ * <h3>Writing using {@link HadoopFormatIO}</h3>
+ *
+ * <pre>{@code
+ * Pipeline p = ...; // Create pipeline.
+ * // Write data using only a Hadoop configuration.
+ * p.apply("write",
+ *    HadoopFormatIO.<OutputFormatKeyClass, OutputFormatValueClass>write()
+ *        .withConfiguration(myHadoopConfiguration));
+ * }</pre>
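+ *
+ * <p>A slightly fuller sketch of the same write, applied to a {@code PCollection} of key/value
+ * pairs ({@code readSomeData} is a hypothetical upstream transform producing elements of type
+ * {@code KV<OutputFormatKeyClass, OutputFormatValueClass>}):
+ *
+ * <pre>{@code
+ * PCollection<KV<OutputFormatKeyClass, OutputFormatValueClass>> data =
+ *     p.apply("readSomeData", readSomeData);
+ * // The KV element type must match the key/value classes declared in the configuration.
+ * data.apply("write",
+ *     HadoopFormatIO.<OutputFormatKeyClass, OutputFormatValueClass>write()
+ *         .withConfiguration(myHadoopConfiguration));
+ * p.run().waitUntilFinish();
+ * }</pre>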
Review comment:
I added the example. Thanks for the note.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 157567)
Time Spent: 4h 50m (was: 4h 40m)
> Add streaming support for HadoopOutputFormatIO
> ----------------------------------------------
>
> Key: BEAM-5309
> URL: https://issues.apache.org/jira/browse/BEAM-5309
> Project: Beam
> Issue Type: Sub-task
> Components: io-java-hadoop
> Reporter: Alexey Romanenko
> Assignee: David Hrbacek
> Priority: Minor
> Time Spent: 4h 50m
> Remaining Estimate: 0h
>
> design doc: https://s.apache.org/beam-streaming-hofio
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)