Repository: kafka
Updated Branches:
  refs/heads/trunk 34a594472 -> e79d9af3c


KAFKA-3461: Fix typos in Kafka web documentation.

This PR fixes 8 typos in the HTML files of the `docs` module. I listed them
explicitly here since GitHub sometimes does not highlight the corrections on
long lines correctly.
- docs/api.html: compatability => compatibility
- docs/connect.html: simultaneoulsy => simultaneously
- docs/implementation.html: LATIEST_TIME => LATEST_TIME, nPartions => 
nPartitions
- docs/migration.html: Decomission => Decommission
- docs/ops.html: stoping => stopping, ConumserGroupCommand => 
ConsumerGroupCommand, youre => you're

Author: Dongjoon Hyun <[email protected]>

Reviewers: Ismael Juma

Closes #1138 from dongjoon-hyun/KAFKA-3461


Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/e79d9af3
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/e79d9af3
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/e79d9af3

Branch: refs/heads/trunk
Commit: e79d9af3cfbb8884e00424f84f3c687114497998
Parents: 34a5944
Author: Dongjoon Hyun <[email protected]>
Authored: Tue Apr 12 13:48:18 2016 -0700
Committer: Gwen Shapira <[email protected]>
Committed: Tue Apr 12 13:48:18 2016 -0700

----------------------------------------------------------------------
 docs/api.html            |  2 +-
 docs/connect.html        | 28 ++++++++++++++--------------
 docs/implementation.html |  4 ++--
 docs/migration.html      |  2 +-
 docs/ops.html            |  6 +++---
 5 files changed, 21 insertions(+), 21 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kafka/blob/e79d9af3/docs/api.html
----------------------------------------------------------------------
diff --git a/docs/api.html b/docs/api.html
index d303244..8d5be9b 100644
--- a/docs/api.html
+++ b/docs/api.html
@@ -15,7 +15,7 @@
  limitations under the License.
 -->
 
-Apache Kafka includes new java clients (in the org.apache.kafka.clients 
package). These are meant to supplant the older Scala clients, but for 
compatability they will co-exist for some time. These clients are available in 
a separate jar with minimal dependencies, while the old Scala clients remain 
packaged with the server.
+Apache Kafka includes new java clients (in the org.apache.kafka.clients 
package). These are meant to supplant the older Scala clients, but for 
compatibility they will co-exist for some time. These clients are available in 
a separate jar with minimal dependencies, while the old Scala clients remain 
packaged with the server.
 
 <h3><a id="producerapi" href="#producerapi">2.1 Producer API</a></h3>
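
As an aside, a minimal sketch of the new Java producer referenced in the
corrected paragraph above (not part of this commit; the broker address and
topic name are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; point this at a real cluster.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        // Send a single record to a hypothetical topic and shut down cleanly.
        producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        producer.close();
    }
}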
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/e79d9af3/docs/connect.html
----------------------------------------------------------------------
diff --git a/docs/connect.html b/docs/connect.html
index dc6ad6e..88b8c2b 100644
--- a/docs/connect.html
+++ b/docs/connect.html
@@ -108,7 +108,7 @@ This guide describes how developers can write new 
connectors for Kafka Connect t
 
 To copy data between Kafka and another system, users create a 
<code>Connector</code> for the system they want to pull data from or push data 
to. Connectors come in two flavors: <code>SourceConnectors</code> import data 
from another system (e.g. <code>JDBCSourceConnector</code> would import a 
relational database into Kafka) and <code>SinkConnectors</code> export data 
(e.g. <code>HDFSSinkConnector</code> would export the contents of a Kafka topic 
to an HDFS file).
 
-<code>Connectors</code> do not perform any data copying themselves: their 
configuration describes the data to be copied, and the <code>Connector</code> 
is responsible for breaking that job into a set of <code>Tasks</code> that can 
be distributed to workers. These <code>Tasks</code> also come in two 
corresponding flavors: <code>SourceTask</code>and <code>SinkTask</code>.
+<code>Connectors</code> do not perform any data copying themselves: their 
configuration describes the data to be copied, and the <code>Connector</code> 
is responsible for breaking that job into a set of <code>Tasks</code> that can 
be distributed to workers. These <code>Tasks</code> also come in two 
corresponding flavors: <code>SourceTask</code> and <code>SinkTask</code>.
 
 With an assignment in hand, each <code>Task</code> must copy its subset of the 
data to or from Kafka. In Kafka Connect, it should always be possible to frame 
these assignments as a set of input and output streams consisting of records 
with consistent schemas. Sometimes this mapping is obvious: each file in a set 
of log files can be considered a stream with each parsed line forming a record 
using the same schema and offsets stored as byte offsets in the file. In other 
cases it may require more effort to map to this model: a JDBC connector can map 
each table to a stream, but the offset is less clear. One possible mapping uses 
a timestamp column to generate queries incrementally returning new data, and 
the last queried timestamp can be used as the offset.
 
@@ -242,11 +242,11 @@ public List&lt;SourceRecord&gt; poll() throws 
InterruptedException {
 
 Again, we've omitted some details, but we can see the important steps: the 
<code>poll()</code> method is going to be called repeatedly, and for each call 
it will loop trying to read records from the file. For each line it reads, it 
also tracks the file offset. It uses this information to create an output 
<code>SourceRecord</code> with four pieces of information: the source partition 
(there is only one, the single file being read), source offset (byte offset in 
the file), output topic name, and output value (the line, and we include a 
schema indicating this value will always be a string). Other variants of the 
<code>SourceRecord</code> constructor can also include a specific output 
partition and a key.
 
-Note that this implementation uses the normal Java 
<code>InputStream</code>interface and may sleep if data is not available. This 
is acceptable because Kafka Connect provides each task with a dedicated thread. 
While task implementations have to conform to the basic 
<code>poll()</code>interface, they have a lot of flexibility in how they are 
implemented. In this case, an NIO-based implementation would be more efficient, 
but this simple approach works, is quick to implement, and is compatible with 
older versions of Java.
+Note that this implementation uses the normal Java <code>InputStream</code> 
interface and may sleep if data is not available. This is acceptable because 
Kafka Connect provides each task with a dedicated thread. While task 
implementations have to conform to the basic <code>poll()</code> interface, 
they have a lot of flexibility in how they are implemented. In this case, an 
NIO-based implementation would be more efficient, but this simple approach 
works, is quick to implement, and is compatible with older versions of Java.
 
 <h5><a id="connect_sinktasks" href="#connect_sinktasks">Sink Tasks</a></h5>
 
-The previous section described how to implement a simple 
<code>SourceTask</code>. Unlike <code>SourceConnector</code>and 
<code>SinkConnector</code>, <code>SourceTask</code>and 
<code>SinkTask</code>have very different interfaces because 
<code>SourceTask</code>uses a pull interface and <code>SinkTask</code>uses a 
push interface. Both share the common lifecycle methods, but the 
<code>SinkTask</code>interface is quite different:
+The previous section described how to implement a simple 
<code>SourceTask</code>. Unlike <code>SourceConnector</code> and 
<code>SinkConnector</code>, <code>SourceTask</code> and <code>SinkTask</code> 
have very different interfaces because <code>SourceTask</code> uses a pull 
interface and <code>SinkTask</code> uses a push interface. Both share the 
common lifecycle methods, but the <code>SinkTask</code> interface is quite 
different:
 
 <pre>
 public abstract class SinkTask implements Task {
@@ -257,17 +257,17 @@ public abstract void put(Collection&lt;SinkRecord&gt; 
records);
 public abstract void flush(Map&lt;TopicPartition, Long&gt; offsets);
 </pre>
 
-The <code>SinkTask</code> documentation contains full details, but this 
interface is nearly as simple as the the <code>SourceTask</code>. The 
<code>put()</code>method should contain most of the implementation, accepting 
sets of <code>SinkRecords</code>, performing any required translation, and 
storing them in the destination system. This method does not need to ensure the 
data has been fully written to the destination system before returning. In 
fact, in many cases internal buffering will be useful so an entire batch of 
records can be sent at once, reducing the overhead of inserting events into the 
downstream data store. The <code>SinkRecords</code>contain essentially the same 
information as <code>SourceRecords</code>: Kafka topic, partition, offset and 
the event key and value.
+The <code>SinkTask</code> documentation contains full details, but this 
interface is nearly as simple as the the <code>SourceTask</code>. The 
<code>put()</code> method should contain most of the implementation, accepting 
sets of <code>SinkRecords</code>, performing any required translation, and 
storing them in the destination system. This method does not need to ensure the 
data has been fully written to the destination system before returning. In 
fact, in many cases internal buffering will be useful so an entire batch of 
records can be sent at once, reducing the overhead of inserting events into the 
downstream data store. The <code>SinkRecords</code> contain essentially the 
same information as <code>SourceRecords</code>: Kafka topic, partition, offset 
and the event key and value.
 
-The <code>flush()</code>method is used during the offset commit process, which 
allows tasks to recover from failures and resume from a safe point such that no 
events will be missed. The method should push any outstanding data to the 
destination system and then block until the write has been acknowledged. The 
<code>offsets</code>parameter can often be ignored, but is useful in some cases 
where implementations want to store offset information in the destination store 
to provide exactly-once
-delivery. For example, an HDFS connector could do this and use atomic move 
operations to make sure the <code>flush()</code>operation atomically commits 
the data and offsets to a final location in HDFS.
+The <code>flush()</code> method is used during the offset commit process, 
which allows tasks to recover from failures and resume from a safe point such 
that no events will be missed. The method should push any outstanding data to 
the destination system and then block until the write has been acknowledged. 
The <code>offsets</code> parameter can often be ignored, but is useful in some 
cases where implementations want to store offset information in the destination 
store to provide exactly-once
+delivery. For example, an HDFS connector could do this and use atomic move 
operations to make sure the <code>flush()</code> operation atomically commits 
the data and offsets to a final location in HDFS.
 
 
 <h5><a id="connect_resuming" href="#connect_resuming">Resuming from Previous 
Offsets</a></h5>
 
-The <code>SourceTask</code>implementation included a stream ID (the input 
filename) and offset (position in the file) with each record. The framework 
uses this to commit offsets periodically so that in the case of a failure, the 
task can recover and minimize the number of events that are reprocessed and 
possibly duplicated (or to resume from the most recent offset if Kafka Connect 
was stopped gracefully, e.g. in standalone mode or due to a job 
reconfiguration). This commit process is completely automated by the framework, 
but only the connector knows how to seek back to the right position in the 
input stream to resume from that location.
+The <code>SourceTask</code> implementation included a stream ID (the input 
filename) and offset (position in the file) with each record. The framework 
uses this to commit offsets periodically so that in the case of a failure, the 
task can recover and minimize the number of events that are reprocessed and 
possibly duplicated (or to resume from the most recent offset if Kafka Connect 
was stopped gracefully, e.g. in standalone mode or due to a job 
reconfiguration). This commit process is completely automated by the framework, 
but only the connector knows how to seek back to the right position in the 
input stream to resume from that location.
 
-To correctly resume upon startup, the task can use the 
<code>SourceContext</code>passed into its <code>initialize()</code>method to 
access the offset data. In <code>initialize()</code>, we would add a bit more 
code to read the offset (if it exists) and seek to that position:
+To correctly resume upon startup, the task can use the 
<code>SourceContext</code> passed into its <code>initialize()</code> method to 
access the offset data. In <code>initialize()</code>, we would add a bit more 
code to read the offset (if it exists) and seek to that position:
 
 <pre>
     stream = new FileInputStream(filename);
@@ -285,7 +285,7 @@ Of course, you might need to read many keys for each of the 
input streams. The <
 
 Kafka Connect is intended to define bulk data copying jobs, such as copying an 
entire database rather than creating many jobs to copy each table individually. 
One consequence of this design is that the set of input or output streams for a 
connector can vary over time.
 
-Source connectors need to monitor the source system for changes, e.g. table 
additions/deletions in a database. When they pick up changes, they should 
notify the framework via the <code>ConnectorContext</code>object that 
reconfiguration is necessary. For example, in a <code>SourceConnector</code>:
+Source connectors need to monitor the source system for changes, e.g. table 
additions/deletions in a database. When they pick up changes, they should 
notify the framework via the <code>ConnectorContext</code> object that 
reconfiguration is necessary. For example, in a <code>SourceConnector</code>:
 
 
 <pre>
@@ -293,11 +293,11 @@ if (inputsChanged())
     this.context.requestTaskReconfiguration();
 </pre>
 
-The framework will promptly request new configuration information and update 
the tasks, allowing them to gracefully commit their progress before 
reconfiguring them. Note that in the <code>SourceConnector</code>this 
monitoring is currently left up to the connector implementation. If an extra 
thread is required to perform this monitoring, the connector must allocate it 
itself.
+The framework will promptly request new configuration information and update 
the tasks, allowing them to gracefully commit their progress before 
reconfiguring them. Note that in the <code>SourceConnector</code> this 
monitoring is currently left up to the connector implementation. If an extra 
thread is required to perform this monitoring, the connector must allocate it 
itself.
 
-Ideally this code for monitoring changes would be isolated to the 
<code>Connector</code>and tasks would not need to worry about them. However, 
changes can also affect tasks, most commonly when one of their input streams is 
destroyed in the input system, e.g. if a table is dropped from a database. If 
the <code>Task</code>encounters the issue before the <code>Connector</code>, 
which will be common if the <code>Connector</code>needs to poll for changes, 
the <code>Task</code>will need to handle the subsequent error. Thankfully, this 
can usually be handled simply by catching and handling the appropriate 
exception.
+Ideally this code for monitoring changes would be isolated to the 
<code>Connector</code> and tasks would not need to worry about them. However, 
changes can also affect tasks, most commonly when one of their input streams is 
destroyed in the input system, e.g. if a table is dropped from a database. If 
the <code>Task</code> encounters the issue before the <code>Connector</code>, 
which will be common if the <code>Connector</code> needs to poll for changes, 
the <code>Task</code> will need to handle the subsequent error. Thankfully, 
this can usually be handled simply by catching and handling the appropriate 
exception.
 
-<code>SinkConnectors</code> usually only have to handle the addition of 
streams, which may translate to new entries in their outputs (e.g., a new 
database table). The framework manages any changes to the Kafka input, such as 
when the set of input topics changes because of a regex subscription. 
<code>SinkTasks</code>should expect new input streams, which may require 
creating new resources in the downstream system, such as a new table in a 
database. The trickiest situation to handle in these cases may be conflicts 
between multiple <code>SinkTasks</code>seeing a new input stream for the first 
time and simultaneoulsy trying to create the new resource. 
<code>SinkConnectors</code>, on the other hand, will generally require no 
special code for handling a dynamic set of streams.
+<code>SinkConnectors</code> usually only have to handle the addition of 
streams, which may translate to new entries in their outputs (e.g., a new 
database table). The framework manages any changes to the Kafka input, such as 
when the set of input topics changes because of a regex subscription. 
<code>SinkTasks</code> should expect new input streams, which may require 
creating new resources in the downstream system, such as a new table in a 
database. The trickiest situation to handle in these cases may be conflicts 
between multiple <code>SinkTasks</code> seeing a new input stream for the first 
time and simultaneously trying to create the new resource. 
<code>SinkConnectors</code>, on the other hand, will generally require no 
special code for handling a dynamic set of streams.
 
 <h4><a id="connect_schemas" href="#connect_schemas">Working with 
Schemas</a></h4>
 
@@ -305,7 +305,7 @@ The FileStream connectors are good examples because they 
are simple, but they al
 
 To create more complex data, you'll need to work with the Kafka Connect 
<code>data</code> API. Most structured records will need to interact with two 
classes in addition to primitive types: <code>Schema</code> and 
<code>Struct</code>.
 
-The API documentation provides a complete reference, but here is a simple 
example creating a <code>Schema</code>and <code>Struct</code>:
+The API documentation provides a complete reference, but here is a simple 
example creating a <code>Schema</code> and <code>Struct</code>:
 
 <pre>
 Schema schema = SchemaBuilder.struct().name(NAME)
@@ -322,7 +322,7 @@ Struct struct = new Struct(schema)
 
 If you are implementing a source connector, you'll need to decide when and how 
to create schemas. Where possible, you should avoid recomputing them as much as 
possible. For example, if your connector is guaranteed to have a fixed schema, 
create it statically and reuse a single instance.
 
-However, many connectors will have dynamic schemas. One simple example of this 
is a database connector. Considering even just a single table, the schema will 
not be predefined for the entire connector (as it varies from table to table). 
But it also may not be fixed for a single table over the lifetime of the 
connector since the user may execute an <code>ALTER TABLE</code>command. The 
connector must be able to detect these changes and react appropriately.
+However, many connectors will have dynamic schemas. One simple example of this 
is a database connector. Considering even just a single table, the schema will 
not be predefined for the entire connector (as it varies from table to table). 
But it also may not be fixed for a single table over the lifetime of the 
connector since the user may execute an <code>ALTER TABLE</code> command. The 
connector must be able to detect these changes and react appropriately.
 
 Sink connectors are usually simpler because they are consuming data and 
therefore do not need to create schemas. However, they should take just as much 
care to validate that the schemas they receive have the expected format. When 
the schema does not match -- usually indicating the upstream producer is 
generating invalid data that cannot be correctly translated to the destination 
system -- sink connectors should throw an exception to indicate this error to 
the system.
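
As an aside, two illustrative sketches of the Kafka Connect APIs touched by the
connect.html corrections above (neither is part of this commit; class, field,
and key names are hypothetical).

First, a fleshed-out version of the Schema/Struct snippet that the hunk above
only shows the opening lines of, using the Kafka Connect data API:

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;

// Build a simple struct schema with two hypothetical fields.
Schema schema = SchemaBuilder.struct().name("com.example.User")
    .field("name", Schema.STRING_SCHEMA)
    .field("age", Schema.INT32_SCHEMA)
    .build();

// Populate and validate a Struct conforming to that schema.
Struct struct = new Struct(schema)
    .put("name", "Alice")
    .put("age", 42);
struct.validate();

Second, a rough sketch of the "Resuming from Previous Offsets" pattern described
above, reading the last committed offset from the context inside a SourceTask
(FILENAME_FIELD, POSITION_FIELD, filename, and stream are hypothetical members
of the task; exception handling omitted):

Map<String, Object> offset =
    context.offsetStorageReader().offset(Collections.singletonMap(FILENAME_FIELD, filename));
stream = new FileInputStream(filename);
if (offset != null) {
    Long lastRecordedOffset = (Long) offset.get(POSITION_FIELD);
    if (lastRecordedOffset != null)
        stream.skip(lastRecordedOffset);  // seek to the last committed position in the file
}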
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/e79d9af3/docs/implementation.html
----------------------------------------------------------------------
diff --git a/docs/implementation.html b/docs/implementation.html
index ecd99e7..be81227 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -90,7 +90,7 @@ class SimpleConsumer {
    * Get a list of valid offsets (up to maxSize) before the given time.
    * The result is a list of offsets, in descending order.
    * @param time: time in millisecs,
-   *              if set to OffsetRequest$.MODULE$.LATIEST_TIME(), get from 
the latest offset available.
+   *              if set to OffsetRequest$.MODULE$.LATEST_TIME(), get from the 
latest offset available.
    *              if set to OffsetRequest$.MODULE$.EARLIEST_TIME(), get from 
the earliest offset available.
    */
   public long[] getOffsetsBefore(String topic, int partition, long time, int 
maxNumOffsets);
@@ -292,7 +292,7 @@ Since the broker registers itself in ZooKeeper using 
ephemeral znodes, this regi
 </p>
 <h4><a id="impl_zktopic" href="#impl_zktopic">Broker Topic Registry</a></h4>
 <pre>
-/brokers/topics/[topic]/[0...N] --> nPartions (ephemeral node)
+/brokers/topics/[topic]/[0...N] --> nPartitions (ephemeral node)
 </pre>
 
 <p>

http://git-wip-us.apache.org/repos/asf/kafka/blob/e79d9af3/docs/migration.html
----------------------------------------------------------------------
diff --git a/docs/migration.html b/docs/migration.html
index 2da6a7e..5240d86 100644
--- a/docs/migration.html
+++ b/docs/migration.html
@@ -27,7 +27,7 @@
     <li>Use the 0.7 to 0.8 <a href="tools.html">migration tool</a> to mirror 
data from the 0.7 cluster into the 0.8 cluster.
     <li>When the 0.8 cluster is fully caught up, redeploy all data 
<i>consumers</i> running the 0.8 client and reading from the 0.8 cluster.
     <li>Finally migrate all 0.7 producers to 0.8 client publishing data to the 
0.8 cluster.
-    <li>Decomission the 0.7 cluster.
+    <li>Decommission the 0.7 cluster.
     <li>Drink.
 </ol>
 

http://git-wip-us.apache.org/repos/asf/kafka/blob/e79d9af3/docs/ops.html
----------------------------------------------------------------------
diff --git a/docs/ops.html b/docs/ops.html
index b239a0e..8b1cc23 100644
--- a/docs/ops.html
+++ b/docs/ops.html
@@ -70,7 +70,7 @@ Instructions for changing the replication factor of a topic 
can be found <a href
 
 <h4><a id="basic_ops_restarting" href="#basic_ops_restarting">Graceful 
shutdown</a></h4>
 
-The Kafka cluster will automatically detect any broker shutdown or failure and 
elect new leaders for the partitions on that machine. This will occur whether a 
server fails or it is brought down intentionally for maintenance or 
configuration changes. For the latter cases Kafka supports a more graceful 
mechanism for stoping a server than just killing it.
+The Kafka cluster will automatically detect any broker shutdown or failure and 
elect new leaders for the partitions on that machine. This will occur whether a 
server fails or it is brought down intentionally for maintenance or 
configuration changes. For the latter cases Kafka supports a more graceful 
mechanism for stopping a server than just killing it.
 
 When a server is stopped gracefully it has two optimizations it will take 
advantage of:
 <ol>
@@ -138,7 +138,7 @@ Note, however, after 0.9.0, the 
kafka.tools.ConsumerOffsetChecker tool is deprec
 
 <h4><a id="basic_ops_consumer_group" href="#basic_ops_consumer_group">Managing 
Consumer Groups</a></h4>
 
-With the ConumserGroupCommand tool, we can list, delete, or describe consumer 
groups. For example, to list all consumer groups across all topics:
+With the ConsumerGroupCommand tool, we can list, delete, or describe consumer 
groups. For example, to list all consumer groups across all topics:
 
 <pre>
  &gt; bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
@@ -156,7 +156,7 @@ test-consumer-group            test-foo                     
  0          1
 </pre>
 
 
-When youre using the <a 
href="https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design";>new
 consumer-groups API</a> where the broker handles coordination of partition 
handling and rebalance, you can manage the groups with the "--new-consumer" 
flags:
+When you're using the <a 
href="https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design";>new
 consumer-groups API</a> where the broker handles coordination of partition 
handling and rebalance, you can manage the groups with the "--new-consumer" 
flags:
 
 <pre>
  &gt; bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server 
broker1:9092 --list
