This is an automated email from the ASF dual-hosted git repository.

rabbah pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/openwhisk-package-kafka.git


The following commit(s) were added to refs/heads/master by this push:
     new 37773d4  chore: fix spelling and grammar (#377)
37773d4 is described below

commit 37773d47be757759c2f7e72be4e59cd8635b975f
Author: John Bampton <[email protected]>
AuthorDate: Wed Mar 10 02:15:52 2021 +1000

    chore: fix spelling and grammar (#377)
---
 CONTRIBUTING.md                                              |  4 ++--
 README.md                                                    |  8 ++++----
 docs/arch/README.md                                          | 12 ++++++------
 docs/dev/README.md                                           |  2 +-
 provider/consumer.py                                         |  6 +++---
 .../scala/system/packages/MessageHubMultiWorkersTest.scala   |  2 +-
 6 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 5906a2b..5ddc2e0 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -20,7 +20,7 @@
 
 # Contributing to Apache OpenWhisk
 
-Anyone can contribute to the OpenWhisk project and we welcome your contributions.
+Anyone can contribute to the OpenWhisk project, and we welcome your contributions.
 
 There are multiple ways to contribute: report bugs, improve the docs, and
 contribute code, but you must follow these prerequisites and guidelines:
@@ -49,7 +49,7 @@ Please raise any bug reports on the respective project repository's GitHub issue
 list to see if your issue has already been raised.
 
 A good bug report is one that make it easy for us to understand what you were trying to do and what went wrong.
-Provide as much context as possible so we can try to recreate the issue.
+Provide as much context as possible, so we can try to recreate the issue.
 
 ### Discussion
 
diff --git a/README.md b/README.md
index 785e211..75d4182 100644
--- a/README.md
+++ b/README.md
@@ -49,7 +49,7 @@ While this list of parameters may seem daunting, they can be automatically set f
 
 1. Create an instance of Message Hub service under your current organization and space that you are using for OpenWhisk.
 
-2. Verify that the the topic you want to listen to already exists in Message Hub or create a new topic, for example `mytopic`.
+2. Verify that the topic you want to listen to already exists in Message Hub or create a new topic, for example `mytopic`.
 
 3. Refresh the packages in your namespace. The refresh automatically creates a package binding for the Message Hub service instance that you created.
 
@@ -121,7 +121,7 @@ The payload of that trigger will contain a `messages` field which is an array of
 
 In Kafka terms, these fields should be self-evident. However, `key` has an optional feature `isBinaryKey` that allows the `key` to transmit binary data. Additionally, the `value` requires special consideration. Optional fields `isJSONData` and `isBinaryValue` are available to handle JSON and binary messages. These fields, `isJSONData` and `isBinaryValue`, cannot be used in conjunction with each other.
 
-As an example, if `isBinaryKey` was set to `true` when the trigger was created, the `key` will be encoded as a Base64 string when returned from they payload of a fired trigger.
+As an example, if `isBinaryKey` was set to `true` when the trigger was created, the `key` will be encoded as a Base64 string when returned from the payload of a fired trigger.
 
 For example, if a `key` of `Some key` is posted with `isBinaryKey` set to `true`, the trigger payload will resemble the below:
 
@@ -288,7 +288,7 @@ e.g.
 }
 ```
 
- Triggers may become inactive when certain exceptional behavior occurs. For example, there was an error firing the trigger or it was not possible to connect to the kafka brokers. When a trigger becomes inactive the status object will contain additional information as to the cause.
+ Triggers may become inactive when certain exceptional behavior occurs. For example, there was an error firing the trigger, or it was not possible to connect to the kafka brokers. When a trigger becomes inactive the status object will contain additional information as to the cause.
 
  e.g
 
@@ -380,7 +380,7 @@ The action caller (you, or your code) must first Base64 encode the data, for exa
 Example that integrates OpenWhisk with IBM Message Hub, Node Red, IBM Watson IoT, IBM Object Storage, IBM Data Science Experience (Spark) service can be [found here](https://medium.com/openwhisk/transit-flexible-pipeline-for-iot-data-with-bluemix-and-openwhisk-4824cf20f1e0).
 
 ## Architecture
-Archtecture documentation and diagrams, please refer to the [Architecture Docs](docs/arch/README.md)
+Architecture documentation and diagrams, please refer to the [Architecture Docs](docs/arch/README.md)
 
 ## Development and Testing
 If you wish to deploy the feed service yourself, please refer to the [Development Guide](docs/dev/README.md).
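
To make the `isBinaryKey` fix above concrete: a minimal sketch, assuming the `messages` payload shape the README describes (the topic name and message value here are illustrative, not taken from the commit):

```python
import base64
import json

# A key posted as binary comes back Base64-encoded in the fired trigger's
# payload when isBinaryKey=true.
key = base64.b64encode(b"Some key").decode("ascii")  # -> "U29tZSBrZXk="

payload = {
    "messages": [
        {
            "key": key,               # Base64 string because isBinaryKey=true
            "topic": "mytopic",       # hypothetical topic name
            "value": "some message",  # hypothetical message body
        }
    ]
}
print(json.dumps(payload, indent=2))
```
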
diff --git a/docs/arch/README.md b/docs/arch/README.md
index 0eb6b0b..71cf12b 100644
--- a/docs/arch/README.md
+++ b/docs/arch/README.md
@@ -30,7 +30,7 @@
 4. Developer creates trigger `trigger1` on OpenWhisk, the trigger stores the annotation `feed` with the feedAction name from system package or binded package.(`/whisk.system/messagingWeb/messageHubFeed`).
 5. Developer invokes action feedAction to create trigger feed passing input parameters (lifeCycle:`CREATE`, `trigger1`, Credentials1, Options:`topic1`)
 6. The feedAction invokes feedWebAction forwarding input parameter.
-7. The feedWebAction inserts trigger feed doc into DB for worker group 0 (feedWebAction protects DB credentials)
+7. The feedWebAction inserts trigger feed doc into the DB for worker group 0 (feedWebAction protects DB credentials)
 8. DB insertion notifies workers group 0 via Cloudant/CouchDB changes API, workers listen on DB view with a filter for their group `worker0` and gets the DB doc.
 9. Kafka Consumer is created on each worker in a consumer group and starts polling for messages on `topic1` from `instance1` using `Credentials-1`.
 10. Developer creates `rule1` indicating that when `trigger1` fires invoke `action1`.
@@ -50,7 +50,7 @@
 2. Developer gets the annotation `feed` from trigger `trigger1`.
 3. Developer invokes feedAction to update trigger feed passing input parameters (lifeCycle:`UPDATE`, `trigger1`, Options:`topic2`).
 4. The feedAction invokes feedWebAction forwarding input parameter.
-5. The feedWebAction inserts trigger feed doc into DB for worker group 0 (feedWebAction protects DB credentials).
+5. The feedWebAction inserts trigger feed doc into the DB for worker group 0 (feedWebAction protects DB credentials).
 6. DB insertion notifies workers group 0 via Cloudant/CouchDB changes API, workers listen on DB view with a filter for their group `worker0` and gets the DB doc.
 7. Kafka Consumer is re-created on each worker in a consumer group and starts polling for messages on `topic2` from `instance1` using `Credentials-1`.
 8. Event source produces messages on `topic2`.
@@ -60,12 +60,12 @@
 ### Read Trigger Feed
 ![MessageHub Trigger Read](images/Arch-Provider-MHV1-Read.png)
 
-**Scenario:** User wants to read the configuration and staus for trigger `trigger1`.
+**Scenario:** User wants to read the configuration and status for trigger `trigger1`.
 
 1. Developer gets the annotation `feed` from trigger `trigger1`.
 2. Developer invokes feedAction to read the trigger feed passing input parameters (lifeCycle:`READ`, `trigger1`).
 3. The feedAction invokes feedWebAction forwarding input parameter.
-4. The feedWebAction gets the trigger feed doc from DB (feedWebAction protects DB credentials).
+4. The feedWebAction gets the trigger feed doc from the DB (feedWebAction protects DB credentials).
 5. The DB returns the trigger feed doc for `trigger1`.
 6. The feedWebAction returns a response to feedAction.
 7. The feedAction returns response (config, status) to Developer.
@@ -79,7 +79,7 @@
 2. Developer gets the annotation `feed` from trigger `trigger1`.
 3. Developer invokes feedAction to delete the trigger feed passing input parameters (lifeCycle:`DELETE`, `trigger1`).
 4. The feedAction invokes feedWebAction forwarding input parameter.
-5. The feedWebAction updates the trigger feed doc into DB with a field `delete:true`(feedWebAction protects DB credentials).
+5. The feedWebAction updates the trigger feed doc into the DB with a field `delete:true`(feedWebAction protects DB credentials).
 6. DB update notifies workers group 0 via Cloudant/CouchDB changes API, workers listen on DB view with a filter for their group `worker0` and gets the DB doc. The Kafka consumers for `trigger1/topic2` get destroyed.
-7. The feedWebAction deletes the trigger feed doc from DB.
+7. The feedWebAction deletes the trigger feed doc from the DB.
 8. The Developer deletes trigger `trigger1`
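
As a rough sketch of the lifecycle these docs walk through, the trigger feed doc that feedWebAction manages might look like the following (the field names are assumptions for illustration, not the provider's actual schema):

```python
# Hypothetical trigger feed doc written to the DB for worker group 0.
feed_doc = {
    "_id": "/guest/trigger1",
    "worker": "worker0",   # workers filter the changes feed on their group
    "topic": "topic1",
    "status": {"active": True},
}

# UPDATE: rewriting the doc (e.g. with a new topic) re-notifies the workers,
# which re-create their Kafka consumers against the new configuration.
feed_doc["topic"] = "topic2"

# DELETE: the doc is first flagged so workers destroy their consumers,
# then feedWebAction deletes the doc itself.
feed_doc["delete"] = True
```
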
diff --git a/docs/dev/README.md b/docs/dev/README.md
index d34f05c..e81bca6 100644
--- a/docs/dev/README.md
+++ b/docs/dev/README.md
@@ -40,7 +40,7 @@ Now we need to start the provider service. This is also a simple matter of runni
 |---|---|---|
 |INSTANCE|String|A unique identifier for this service. This is useful to differentiate log messages if you run multiple instances of the service|
 |LOCAL_DEV|Boolean|If you are using a locally-deployed OpenWhisk core system, it likely has a self-signed certificate. Set `LOCAL_DEV` to `true` to allow firing triggers without checking the certificate validity. *Do not use this for production systems!*|
-|PAYLOAD_LIMIT|Integer (default=900000)|The maxmimum payload size, in bytes, allowed during message batching. This value should be less than your OpenWhisk deployment's payload limit.|
+|PAYLOAD_LIMIT|Integer (default=900000)|The maximum payload size, in bytes, allowed during message batching. This value should be less than your OpenWhisk deployment's payload limit.|
 |WORKER|String|The ID of this running instances. Useful when running multiple instances. This should be of the form `workerX`. e.g. `worker0`.
 
 With that in mind, starting the feed service might look something like:
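
The concrete startup command falls outside this hunk; purely as an illustration of the table above, a provider process might read those variables like so (the defaults for `INSTANCE` and `WORKER` are assumptions):

```python
import os

# Illustrative only: consume the configuration documented in the table.
instance = os.environ.get("INSTANCE", "kafka-provider-0")       # assumed default
local_dev = os.environ.get("LOCAL_DEV", "false").lower() == "true"
payload_limit = int(os.environ.get("PAYLOAD_LIMIT", "900000"))  # default per the table
worker = os.environ.get("WORKER", "worker0")                    # of the form workerX

print(instance, local_dev, payload_limit, worker)
```
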
diff --git a/provider/consumer.py b/provider/consumer.py
index d525c84..efbc737 100644
--- a/provider/consumer.py
+++ b/provider/consumer.py
@@ -284,7 +284,7 @@ class ConsumerProcess (Process):
                 self.consumer.close()
                 logging.info('[{}] Successfully closed KafkaConsumer'.format(self.trigger))

-                logging.debug('[{}] Dellocating KafkaConsumer'.format(self.trigger))
+                logging.debug('[{}] Deallocating KafkaConsumer'.format(self.trigger))
                 self.consumer = None
                 logging.info('[{}] Successfully cleaned up consumer'.format(self.trigger))
         except Exception as e:
@@ -395,7 +395,7 @@ class ConsumerProcess (Process):
                 try:
                     response = requests.post(self.triggerURL, json=payload, auth=self.authHandler, timeout=10.0, verify=check_ssl)
                     status_code = response.status_code
-                    logging.info("[{}] Repsonse status code {}".format(self.trigger, status_code))
+                    logging.info("[{}] Response status code {}".format(self.trigger, status_code))

                     # Manually commit offset if the trigger was fired successfully. Retry firing the trigger
                     # for a select set of status codes
@@ -559,7 +559,7 @@ class ConsumerProcess (Process):
         logging.info('[{}] Completed partition assignment. Connected to broker(s)'.format(self.trigger))

         if self.currentState() == Consumer.State.Initializing and self.__shouldRun():
-            logging.info('[{}] Setting consumer state to runnning.'.format(self.trigger))
+            logging.info('[{}] Setting consumer state to running.'.format(self.trigger))
             self.__recordState(Consumer.State.Running)
 
     def __on_revoke(self, consumer, partitions):
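
The consumer.py hunks above sit on the trigger-firing path: the consumer POSTs a message batch to the trigger URL and only commits the Kafka offset on success. A condensed, hypothetical sketch of that pattern (not the provider's exact code):

```python
import requests

def fire_trigger(trigger_url, payload, auth, check_ssl=True):
    """Hypothetical condensation of the fire-and-commit pattern above."""
    response = requests.post(trigger_url, json=payload, auth=auth,
                             timeout=10.0, verify=check_ssl)
    # Commit the Kafka offset only on success; selected error status
    # codes are retried by the caller instead of committed.
    return 200 <= response.status_code < 300
```
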
diff --git a/tests/src/test/scala/system/packages/MessageHubMultiWorkersTest.scala b/tests/src/test/scala/system/packages/MessageHubMultiWorkersTest.scala
index a73bf49..6f8f06f 100644
--- a/tests/src/test/scala/system/packages/MessageHubMultiWorkersTest.scala
+++ b/tests/src/test/scala/system/packages/MessageHubMultiWorkersTest.scala
@@ -67,7 +67,7 @@ class MessageHubMultiWorkersTest extends FlatSpec
   val dbName = s"${dbPrefix}ow_kafka_triggers"
   val client = new ExtendedCouchDbRestClient(dbProtocol, dbHost, dbPort, dbUsername, dbPassword, dbName)
 
-  behavior of "Mussage Hub Feed"
+  behavior of "Message Hub Feed"
 
   ignore should "assign two triggers to same worker when only worker0 is available" in withAssetCleaner(wskprops) {
 
