Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/566#discussion_r12035701
--- Diff: external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala ---
@@ -36,7 +36,25 @@ object FlumeUtils {
       port: Int,
       storageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK_SER_2
     ): ReceiverInputDStream[SparkFlumeEvent] = {
-    val inputStream = new FlumeInputDStream[SparkFlumeEvent](ssc, hostname, port, storageLevel)
+    createStream(ssc, hostname, port, storageLevel, false)
+  }
+
+  /**
+   * Create a input stream from a Flume source.
+   * @param ssc StreamingContext object
+   * @param hostname Hostname of the slave machine to which the flume data will be sent
+   * @param port Port of the slave machine to which the flume data will be sent
+   * @param storageLevel Storage level to use for storing the received objects
+   * @param enableCompression Should Netty Server decode input stream from client
--- End diff --
Also, shouldn't this parameter be `enableDecompression`, since from Spark
Streaming's point of view it will be decompressing the data?
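
Just to make the suggestion concrete, here is a rough sketch of how the new
overload in FlumeUtils.scala might read after the rename. The extra
FlumeInputDStream constructor argument is assumed from the rest of this PR and
is not shown in the quoted hunk:

      /**
       * Create an input stream from a Flume source.
       * @param ssc StreamingContext object
       * @param hostname Hostname of the slave machine to which the flume data will be sent
       * @param port Port of the slave machine to which the flume data will be sent
       * @param storageLevel Storage level to use for storing the received objects
       * @param enableDecompression Should the Netty server decompress data received from the client
       */
      def createStream(
          ssc: StreamingContext,
          hostname: String,
          port: Int,
          storageLevel: StorageLevel,
          enableDecompression: Boolean
        ): ReceiverInputDStream[SparkFlumeEvent] = {
        // Forward the flag to the receiver, which would set up Netty decompression when true.
        new FlumeInputDStream[SparkFlumeEvent](ssc, hostname, port, storageLevel, enableDecompression)
      }

      // Hypothetical call site once the parameter is renamed:
      // val events = FlumeUtils.createStream(ssc, "localhost", 41414,
      //   StorageLevel.MEMORY_AND_DISK_SER_2, enableDecompression = true)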