Github user hmcl commented on a diff in the pull request:

    https://github.com/apache/storm/pull/2637#discussion_r186253597
  
    --- Diff: docs/storm-kafka-client.md ---
    @@ -313,4 +313,39 @@ KafkaSpoutConfig<String, String> kafkaConf = KafkaSpoutConfig
       .setTupleTrackingEnforced(true)
     ```
     
    -Note: This setting has no effect with AT_LEAST_ONCE processing guarantee, where tuple tracking is required and therefore always enabled.
    \ No newline at end of file
    +Note: This setting has no effect with AT_LEAST_ONCE processing guarantee, where tuple tracking is required and therefore always enabled.
    +
    +# Translation from `storm-kafka` to `storm-kafka-client` spout properties
    +
    +This may not be an exhaustive list because the `storm-kafka` configs were taken from Storm 0.9.6
    +[SpoutConfig](https://svn.apache.org/repos/asf/storm/site/releases/0.9.6/javadocs/storm/kafka/SpoutConfig.html) and
    +[KafkaConfig](https://svn.apache.org/repos/asf/storm/site/releases/0.9.6/javadocs/storm/kafka/KafkaConfig.html).
    +`storm-kafka-client` spout configurations were taken from Storm 1.0.6
    +[KafkaSpoutConfig](https://storm.apache.org/releases/1.0.6/javadocs/org/apache/storm/kafka/spout/KafkaSpoutConfig.html)
    +and Kafka 0.10.1.0 [ConsumerConfig](https://kafka.apache.org/0101/javadoc/index.html?org/apache/kafka/clients/consumer/ConsumerConfig.html).
    +
    +| SpoutConfig   | KafkaSpoutConfig/ConsumerConfig Name | KafkaSpoutConfig Usage |
    +| ------------- | ------------------------------------ | --------------------------- |
    +| **Setting:** `startOffsetTime`<br><br> **Default:** `EarliestTime`<br>________________________________________________ <br> **Setting:** `forceFromStart` <br><br> **Default:** `false` <br><br> `startOffsetTime` & `forceFromStart` together determine the starting offset. `forceFromStart` determines whether the Zookeeper offset is ignored. `startOffsetTime` sets the timestamp that determines the beginning offset, in case there is no offset in Zookeeper, or the Zookeeper offset is ignored | **Setting:** [`FirstPollOffsetStrategy`](javadocs/org/apache/storm/kafka/spout/KafkaSpoutConfig.FirstPollOffsetStrategy.html)<br><br> **Default:** `UNCOMMITTED_EARLIEST` <br><br> [Refer to the helper table](#helper-table-for-setting-firstpolloffsetstrategy) for picking `FirstPollOffsetStrategy` based on your `startOffsetTime` & `forceFromStart` settings | [`<KafkaSpoutConfig-Builder>.setFirstPollOffsetStrategy(<strategy-name>)`](javadocs/org/apache/storm/kafka/spout/KafkaSpoutConfig.Builder.html#setFirstPollOffsetStrategy-org.apache.storm.kafka.spout.KafkaSpoutConfig.FirstPollOffsetStrategy-)|
    +| **Setting:** `scheme`<br><br> The interface that specifies how a `ByteBuffer` from a Kafka topic is transformed into Storm tuple <br>**Default:** `RawMultiScheme` | **Setting:** [`Deserializers`](https://kafka.apache.org/11/javadoc/org/apache/kafka/common/serialization/Deserializer.html)| [`<KafkaSpoutConfig-Builder>.setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, <deserializer-class>)`](javadocs/org/apache/storm/kafka/spout/KafkaSpoutConfig.Builder.html#setProp-java.lang.String-java.lang.Object-)<br><br> [`<KafkaSpoutConfig-Builder>.setProp(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, <deserializer-class>)`](javadocs/org/apache/storm/kafka/spout/KafkaSpoutConfig.Builder.html#setProp-java.lang.String-java.lang.Object-)|
    +| **Setting:** `fetchSizeBytes`<br><br> Message fetch size -- the number of bytes to attempt to fetch in one request to a Kafka server <br> **Default:** `1MB` | **Setting:** [`max.partition.fetch.bytes`](https://kafka.apache.org/11/javadoc/org/apache/kafka/clients/consumer/ConsumerConfig.html#MAX_PARTITION_FETCH_BYTES_CONFIG)<br><br> **Default:** `1MB`| [`<KafkaSpoutConfig-Builder>.setProp(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, <int-value>)`](javadocs/org/apache/storm/kafka/spout/KafkaSpoutConfig.Builder.html#setProp-java.lang.String-java.lang.Object-)|
    +| **Setting:** `bufferSizeBytes`<br><br> Buffer size (in bytes) for network requests. The buffer size which consumer has for pulling data from producer <br> **Default:** `1MB`| **Setting:** [`receive.buffer.bytes`](https://kafka.apache.org/11/javadoc/org/apache/kafka/clients/consumer/ConsumerConfig.html#RECEIVE_BUFFER_CONFIG) <br><br> The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used | [`<KafkaSpoutConfig-Builder>.setProp(ConsumerConfig.RECEIVE_BUFFER_CONFIG, <int-value>)`](javadocs/org/apache/storm/kafka/spout/KafkaSpoutConfig.Builder.html#setProp-java.lang.String-java.lang.Object-)|
    +| **Setting:** `socketTimeoutMs`<br><br> **Default:** `10000` | **N/A** ||
    +| **Setting:** `useStartOffsetTimeIfOffsetOutOfRange`<br><br> **Default:** `true` | **Setting:** [`auto.offset.reset`](https://kafka.apache.org/11/javadoc/org/apache/kafka/clients/consumer/ConsumerConfig.html#AUTO_OFFSET_RESET_CONFIG)<br><br> **Possible values:** `"latest"`, `"earliest"`, `"none"`<br> **Default:** `latest`. Exception: `earliest` if [`ProcessingGuarantee`](javadocs/org/apache/storm/kafka/spout/KafkaSpoutConfig.ProcessingGuarantee.html) is set to `AT_LEAST_ONCE` | [`<KafkaSpoutConfig-Builder>.setProp(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, <String>)`](javadocs/org/apache/storm/kafka/spout/KafkaSpoutConfig.Builder.html#setProp-java.lang.String-java.lang.Object-)|
    --- End diff ---
    
    I am having a difficult time understanding what you mean by: "_Exception: earliest if ProcessingGuarantee is set to AT_LEAST_ONCE_"
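    
    To make sure I am reading that cell correctly, here is a minimal sketch of the configuration it seems to describe. This is only an illustration: the broker address and topic name are hypothetical, and I am assuming the `Builder` exposes a `setProcessingGuarantee` setter for the `ProcessingGuarantee` documented above; the `setProp` call is the one already linked in the table.
    
    ```java
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.storm.kafka.spout.KafkaSpoutConfig;
    
    // Hypothetical broker address and topic, purely for illustration.
    KafkaSpoutConfig<String, String> kafkaConf = KafkaSpoutConfig
        .builder("127.0.0.1:9092", "my-topic")
        // The processing guarantee the "Exception" refers to (assumed setter name).
        .setProcessingGuarantee(KafkaSpoutConfig.ProcessingGuarantee.AT_LEAST_ONCE)
        // Explicitly setting auto.offset.reset instead of relying on any implicit
        // change to the Kafka default of "latest".
        .setProp(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
        .build();
    ```
    
    Is the table saying that the spout overrides `auto.offset.reset` to `earliest` on the user's behalf when the guarantee is `AT_LEAST_ONCE`, or only that `earliest` is the value users should set themselves in that case?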

