This is an automated email from the ASF dual-hosted git repository.
hulee pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/helix.git
The following commit(s) were added to refs/heads/master by this push:
new 0d28d72 Improve helix tutorial and code formatting (#1931) (#1932)
0d28d72 is described below
commit 0d28d726c829d816cc232b6852d07d5a50394b27
Author: Qi (Quincy) Qu <[email protected]>
AuthorDate: Tue Jan 11 11:48:54 2022 -0800
Improve helix tutorial and code formatting (#1931) (#1932)
Improve helix tutorial and code formatting
---
.../apache/helix/webapp/resources/ConfigResource.java | 3 ++-
.../apache/helix/controller/GenericHelixController.java | 4 ++--
.../helix/controller/rebalancer/topology/Topology.java | 6 +++---
.../0.9.8/src/site/markdown/tutorial_task_framework.md | 16 ++++++++--------
.../0.9.9/src/site/markdown/tutorial_task_framework.md | 16 ++++++++--------
.../1.0.1/src/site/markdown/tutorial_task_framework.md | 16 ++++++++--------
.../1.0.2/src/site/markdown/tutorial_task_framework.md | 17 ++++++++---------
7 files changed, 39 insertions(+), 39 deletions(-)
diff --git
a/helix-admin-webapp/src/main/java/org/apache/helix/webapp/resources/ConfigResource.java
b/helix-admin-webapp/src/main/java/org/apache/helix/webapp/resources/ConfigResource.java
index a28ad09..a1148de 100644
---
a/helix-admin-webapp/src/main/java/org/apache/helix/webapp/resources/ConfigResource.java
+++
b/helix-admin-webapp/src/main/java/org/apache/helix/webapp/resources/ConfigResource.java
@@ -190,7 +190,8 @@ public class ConfigResource extends ServerResource {
/**
* set or remove configs depends on "command" field of jsonParameters in
POST body
* @param entity
- * @param scopeStr
+ * @param type
+ * @param scopeArgs
* @throws Exception
*/
void setConfigs(Representation entity, ConfigScopeProperty type, String
scopeArgs)
diff --git
a/helix-core/src/main/java/org/apache/helix/controller/GenericHelixController.java
b/helix-core/src/main/java/org/apache/helix/controller/GenericHelixController.java
index 95416d4..b7e9fa0 100644
---
a/helix-core/src/main/java/org/apache/helix/controller/GenericHelixController.java
+++
b/helix-core/src/main/java/org/apache/helix/controller/GenericHelixController.java
@@ -119,8 +119,8 @@ import static org.apache.helix.HelixConstants.ChangeType;
/**
* Cluster Controllers main goal is to keep the cluster state as close as
possible to Ideal State.
* It does this by listening to changes in cluster state and scheduling new
tasks to get cluster
- * state to best possible ideal state. Every instance of this class can
control can control only one
- * cluster Get all the partitions use IdealState, CurrentState and Messages
<br>
+ * state to the best possible ideal state. Every instance of this class can
control only one cluster.
+ * Get all the partitions using IdealState, CurrentState and Messages <br>
* foreach partition <br>
* 1. get the (instance,state) from IdealState, CurrentState and
PendingMessages <br>
* 2. compute best possible state (instance,state) pair. This needs previous
step data and state
diff --git
a/helix-core/src/main/java/org/apache/helix/controller/rebalancer/topology/Topology.java
b/helix-core/src/main/java/org/apache/helix/controller/rebalancer/topology/Topology.java
index 3bc2e3a..3d2a878 100644
---
a/helix-core/src/main/java/org/apache/helix/controller/rebalancer/topology/Topology.java
+++
b/helix-core/src/main/java/org/apache/helix/controller/rebalancer/topology/Topology.java
@@ -148,8 +148,8 @@ public class Topology {
List<Node> children = root.getChildren();
if (children != null) {
- for (int i = 0; i < children.size(); i++) {
- Node newChild = cloneTree(children.get(i), newNodeWeight, failedNodes);
+ for (Node child : children) {
+ Node newChild = cloneTree(child, newNodeWeight, failedNodes);
newChild.setParent(root);
newRoot.addChild(newChild);
}
@@ -166,7 +166,7 @@ public class Topology {
root.setType(Types.ROOT.name());
// TODO: Currently we add disabled instance to the topology tree. Since
they are not considered
- // TODO: in relabalnce, maybe we should skip adding them to the tree for
consistence.
+ // TODO: in rebalance, maybe we should skip adding them to the tree for
consistency.
for (String instanceName : _allInstances) {
InstanceConfig insConfig = _instanceConfigMap.get(instanceName);
try {
diff --git a/website/0.9.8/src/site/markdown/tutorial_task_framework.md
b/website/0.9.8/src/site/markdown/tutorial_task_framework.md
index d348544..f6513e7 100644
--- a/website/0.9.8/src/site/markdown/tutorial_task_framework.md
+++ b/website/0.9.8/src/site/markdown/tutorial_task_framework.md
@@ -29,9 +29,9 @@ Task framework not only can abstract three layers task logics
but also helps doi

### Key Concepts
-* Task is the basic unit in Helix task framework. It can represents the a
single runnable logics that user prefer to execute for each partition
(distributed units).
+* Task is the smallest unit of work in Helix Task Framework. It represents a
single piece of runnable logic that the user prefers to execute for each
partition (the distributed unit).
* Job defines one time operation across all the partitions. It contains
multiple Tasks and configuration of tasks, such as how many tasks, timeout per
task and so on.
-* Workflow is directed acyclic graph represents the relationships and running
orders of Jobs. In addition, a workflow can also provide customized
configuration, for example, Job dependencies.
+* Workflow is a directed acyclic graph that represents the relationships and
running orders of Jobs. In addition, a workflow can also provide customized
configuration, for example, Job dependencies.
* JobQueue is another type of Workflow. Different from normal one, JobQueue is
not terminated until user kill it. Also JobQueue can keep accepting newly
coming jobs.
### Implement Your Task
@@ -71,7 +71,7 @@ For these four fields:
#### Share Content Across Tasks and Jobs
-Task framework also provides a feature that user can store the key-value data
per task, job and workflow. The content stored at workflow layer can shared by
different jobs belong to this workflow. Similarly content persisted at job
layer can shared by different tasks nested in this job. Currently, user can
extend the abstract class
[UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java)
and use two methods [...]
+Task framework also provides a feature that lets users store key-value data
per task, job and workflow. The content stored at the workflow layer can be
shared by different jobs belonging to this workflow. Similarly, content
persisted at the job layer can be shared by different tasks nested in this
job. Currently, users can extend the abstract class
[UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java)
and use two methods [...]
```
public class MyTask extends UserContentStore implements Task {
@@ -99,7 +99,7 @@ TaskResult run() {
#### Task Retry and Abort
-Helix provides retry logics to users. User can specify the how many times
allowed to tolerant failure of tasks under a job. It is a method will be
introduced in Following Job Section. Another choice offered to user that if
user thinks a task is very critical and do not want to do the retry once it is
failed, user can return a TaskResult stated above with FATAL_FAILED status.
Then Helix will not do the retry for that task.
+Helix provides retry logic to users. Users can specify the number of task
failures to tolerate under a job, via a method that will be introduced in the
following Job section. Alternatively, if a user considers a task critical and
does not want it retried once it fails, the task can return a TaskResult, as
stated above, with the FATAL_FAILED status. Helix will then not retry that
task.
```
return new TaskResult(TaskResult.Status.FATAL_FAILED, "DO NOT WANT TO RETRY,
ERROR MESSAGE");
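The abort-versus-retry behavior described above can be sketched outside Helix as a plain retry loop. The `Status` enum and `runWithRetry` below are hypothetical stand-ins for `TaskResult.Status` and the framework's scheduling, not actual Helix code:

```java
// Illustrative sketch, not the Helix scheduler: a retry loop that honors a
// non-retryable failure, mirroring the FATAL_FAILED semantics described above.
import java.util.function.Supplier;

public class RetrySketch {
  // Hypothetical stand-in for TaskResult.Status.
  enum Status { COMPLETED, FAILED, FATAL_FAILED }

  // Retries a task up to maxAttempts on FAILED, but gives up immediately
  // on FATAL_FAILED, just as Helix skips retries for that status.
  static Status runWithRetry(Supplier<Status> task, int maxAttempts) {
    Status last = Status.FAILED;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      last = task.get();
      if (last == Status.COMPLETED || last == Status.FATAL_FAILED) {
        return last;
      }
    }
    return last; // still FAILED after exhausting the allowed attempts
  }
}
```

A task that keeps returning FAILED is attempted `maxAttempts` times, while one returning FATAL_FAILED is attempted exactly once.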
@@ -194,7 +194,7 @@ taskDriver.delete(myWorkflow);
#### Add a Job
-WARNING: Job can only be added to WorkflowConfig.Builder. Once WorkflowConfig
built, no job can be added! For creating a Job, please refering following
section (Create a Job)
+WARNING: A Job can only be added to a WorkflowConfig.Builder. Once the
WorkflowConfig is built, no job can be added! To create a Job, please refer to
the following section (Create a Job).
```
myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
@@ -202,7 +202,7 @@ myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
#### Add a Job dependency
-Jobs can have dependencies. If one job2 depends job1, job2 will not be
scheduled until job1 finished.
+Jobs can have dependencies. If job2 depends on job1, job2 will not be
scheduled until job1 has finished.
```
myWorkflowBuilder.addParentChildDependency(ParentJobName, ChildJobName);
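The dependency rule above (a child job is not scheduled until its parent finishes) amounts to running jobs in topological order of the workflow DAG. A minimal, self-contained sketch of that ordering (hypothetical helper names, not Helix internals):

```java
// Illustrative sketch, not Helix internals: jobs run in dependency order,
// so a child job never starts before its parent job has finished.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DependencyOrderSketch {
  // children maps parent -> child jobs, as added via addParentChildDependency.
  static List<String> topologicalOrder(Map<String, List<String>> children,
                                       List<String> jobs) {
    Map<String, Integer> inDegree = new HashMap<>();
    for (String job : jobs) inDegree.put(job, 0);
    for (List<String> cs : children.values())
      for (String c : cs) inDegree.merge(c, 1, Integer::sum);

    Deque<String> ready = new ArrayDeque<>();
    for (String job : jobs) if (inDegree.get(job) == 0) ready.add(job);

    List<String> order = new ArrayList<>();
    while (!ready.isEmpty()) {
      String job = ready.poll();
      order.add(job); // "finishing" this job unblocks its children
      for (String child : children.getOrDefault(job, List.of()))
        if (inDegree.merge(child, -1, Integer::sum) == 0) ready.add(child);
    }
    return order;
  }

  public static void main(String[] args) {
    // job2 depends on job1, so job1 must appear first.
    Map<String, List<String>> deps = Map.of("job1", List.of("job2"));
    System.out.println(topologicalOrder(deps, List.of("job2", "job1")));
    // prints [job1, job2]
  }
}
```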
@@ -224,7 +224,7 @@
myWorkflowBuilder.setScheduleConfig(ScheduleConfig.oneTimeDelayedStart(new Date(
| _setExpiry(long v, TimeUnit unit)_ | Set the expiration time for this
workflow. |
| _setFailureThreshold(int failureThreshold)_ | Set the failure threshold for
this workflow, once job failures reach this number, the workflow will be
failed. |
| _setWorkflowType(String workflowType)_ | Set the user defined workflowType
for this workflow. |
-| _setTerminable(boolean isTerminable)_ | Set the whether this workflow is
terminable or not. |
+| _setTerminable(boolean isTerminable)_ | Specify whether this workflow is
terminable or not. |
| _setCapacity(int capacity)_ | Set the number of jobs that workflow can hold
before reject further jobs. Only used when workflow is not terminable. |
| _setTargetState(TargetState v)_ | Set the final state of this workflow. |
@@ -255,7 +255,7 @@ jobQueueBuilder.enqueueJob("JobName", jobConfigBuilder);
####Delete Job from Queue
-Helix allowed user to delete a job from existing queue. We offers delete API
in TaskDriver to do this. Delete job from queue and this queue has to be
stopped. Then user can resume the job once delete success.
+Helix allows users to delete a job from an existing queue via the delete API
in TaskDriver. The queue has to be stopped before a job can be deleted; the
user can resume the queue once the deletion succeeds.
```
taskDriver.stop("QueueName");
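The stop-then-delete sequence above can be sketched as a small state check. `QueueSketch` below is a hypothetical illustration of the rule that deletion is only legal while the queue is stopped, not the actual TaskDriver implementation:

```java
// Hypothetical illustration, not TaskDriver: deleting a job is only legal
// while the queue is stopped; the queue can be resumed afterwards.
import java.util.ArrayList;
import java.util.List;

public class QueueSketch {
  private final List<String> jobs = new ArrayList<>();
  private boolean stopped = false;

  void enqueue(String job) { jobs.add(job); }
  void stop()   { stopped = true; }
  void resume() { stopped = false; }

  // Mirrors the rule above: stop the queue before deleting a job from it.
  void deleteJob(String job) {
    if (!stopped) {
      throw new IllegalStateException("stop the queue before deleting a job");
    }
    jobs.remove(job);
  }

  boolean contains(String job) { return jobs.contains(job); }
}
```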
diff --git a/website/0.9.9/src/site/markdown/tutorial_task_framework.md
b/website/0.9.9/src/site/markdown/tutorial_task_framework.md
index d348544..f6513e7 100644
--- a/website/0.9.9/src/site/markdown/tutorial_task_framework.md
+++ b/website/0.9.9/src/site/markdown/tutorial_task_framework.md
@@ -29,9 +29,9 @@ Task framework not only can abstract three layers task logics
but also helps doi

### Key Concepts
-* Task is the basic unit in Helix task framework. It can represents the a
single runnable logics that user prefer to execute for each partition
(distributed units).
+* Task is the smallest unit of work in Helix Task Framework. It represents a
single piece of runnable logic that the user prefers to execute for each
partition (the distributed unit).
* Job defines one time operation across all the partitions. It contains
multiple Tasks and configuration of tasks, such as how many tasks, timeout per
task and so on.
-* Workflow is directed acyclic graph represents the relationships and running
orders of Jobs. In addition, a workflow can also provide customized
configuration, for example, Job dependencies.
+* Workflow is a directed acyclic graph that represents the relationships and
running orders of Jobs. In addition, a workflow can also provide customized
configuration, for example, Job dependencies.
* JobQueue is another type of Workflow. Different from normal one, JobQueue is
not terminated until user kill it. Also JobQueue can keep accepting newly
coming jobs.
### Implement Your Task
@@ -71,7 +71,7 @@ For these four fields:
#### Share Content Across Tasks and Jobs
-Task framework also provides a feature that user can store the key-value data
per task, job and workflow. The content stored at workflow layer can shared by
different jobs belong to this workflow. Similarly content persisted at job
layer can shared by different tasks nested in this job. Currently, user can
extend the abstract class
[UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java)
and use two methods [...]
+Task framework also provides a feature that lets users store key-value data
per task, job and workflow. The content stored at the workflow layer can be
shared by different jobs belonging to this workflow. Similarly, content
persisted at the job layer can be shared by different tasks nested in this
job. Currently, users can extend the abstract class
[UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java)
and use two methods [...]
```
public class MyTask extends UserContentStore implements Task {
@@ -99,7 +99,7 @@ TaskResult run() {
#### Task Retry and Abort
-Helix provides retry logics to users. User can specify the how many times
allowed to tolerant failure of tasks under a job. It is a method will be
introduced in Following Job Section. Another choice offered to user that if
user thinks a task is very critical and do not want to do the retry once it is
failed, user can return a TaskResult stated above with FATAL_FAILED status.
Then Helix will not do the retry for that task.
+Helix provides retry logic to users. Users can specify the number of task
failures to tolerate under a job, via a method that will be introduced in the
following Job section. Alternatively, if a user considers a task critical and
does not want it retried once it fails, the task can return a TaskResult, as
stated above, with the FATAL_FAILED status. Helix will then not retry that
task.
```
return new TaskResult(TaskResult.Status.FATAL_FAILED, "DO NOT WANT TO RETRY,
ERROR MESSAGE");
@@ -194,7 +194,7 @@ taskDriver.delete(myWorkflow);
#### Add a Job
-WARNING: Job can only be added to WorkflowConfig.Builder. Once WorkflowConfig
built, no job can be added! For creating a Job, please refering following
section (Create a Job)
+WARNING: A Job can only be added to a WorkflowConfig.Builder. Once the
WorkflowConfig is built, no job can be added! To create a Job, please refer to
the following section (Create a Job).
```
myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
@@ -202,7 +202,7 @@ myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
#### Add a Job dependency
-Jobs can have dependencies. If one job2 depends job1, job2 will not be
scheduled until job1 finished.
+Jobs can have dependencies. If job2 depends on job1, job2 will not be
scheduled until job1 has finished.
```
myWorkflowBuilder.addParentChildDependency(ParentJobName, ChildJobName);
@@ -224,7 +224,7 @@
myWorkflowBuilder.setScheduleConfig(ScheduleConfig.oneTimeDelayedStart(new Date(
| _setExpiry(long v, TimeUnit unit)_ | Set the expiration time for this
workflow. |
| _setFailureThreshold(int failureThreshold)_ | Set the failure threshold for
this workflow, once job failures reach this number, the workflow will be
failed. |
| _setWorkflowType(String workflowType)_ | Set the user defined workflowType
for this workflow. |
-| _setTerminable(boolean isTerminable)_ | Set the whether this workflow is
terminable or not. |
+| _setTerminable(boolean isTerminable)_ | Specify whether this workflow is
terminable or not. |
| _setCapacity(int capacity)_ | Set the number of jobs that workflow can hold
before reject further jobs. Only used when workflow is not terminable. |
| _setTargetState(TargetState v)_ | Set the final state of this workflow. |
@@ -255,7 +255,7 @@ jobQueueBuilder.enqueueJob("JobName", jobConfigBuilder);
####Delete Job from Queue
-Helix allowed user to delete a job from existing queue. We offers delete API
in TaskDriver to do this. Delete job from queue and this queue has to be
stopped. Then user can resume the job once delete success.
+Helix allows users to delete a job from an existing queue via the delete API
in TaskDriver. The queue has to be stopped before a job can be deleted; the
user can resume the queue once the deletion succeeds.
```
taskDriver.stop("QueueName");
diff --git a/website/1.0.1/src/site/markdown/tutorial_task_framework.md
b/website/1.0.1/src/site/markdown/tutorial_task_framework.md
index d348544..f6513e7 100644
--- a/website/1.0.1/src/site/markdown/tutorial_task_framework.md
+++ b/website/1.0.1/src/site/markdown/tutorial_task_framework.md
@@ -29,9 +29,9 @@ Task framework not only can abstract three layers task logics
but also helps doi

### Key Concepts
-* Task is the basic unit in Helix task framework. It can represents the a
single runnable logics that user prefer to execute for each partition
(distributed units).
+* Task is the smallest unit of work in Helix Task Framework. It represents a
single piece of runnable logic that the user prefers to execute for each
partition (the distributed unit).
* Job defines one time operation across all the partitions. It contains
multiple Tasks and configuration of tasks, such as how many tasks, timeout per
task and so on.
-* Workflow is directed acyclic graph represents the relationships and running
orders of Jobs. In addition, a workflow can also provide customized
configuration, for example, Job dependencies.
+* Workflow is a directed acyclic graph that represents the relationships and
running orders of Jobs. In addition, a workflow can also provide customized
configuration, for example, Job dependencies.
* JobQueue is another type of Workflow. Different from normal one, JobQueue is
not terminated until user kill it. Also JobQueue can keep accepting newly
coming jobs.
### Implement Your Task
@@ -71,7 +71,7 @@ For these four fields:
#### Share Content Across Tasks and Jobs
-Task framework also provides a feature that user can store the key-value data
per task, job and workflow. The content stored at workflow layer can shared by
different jobs belong to this workflow. Similarly content persisted at job
layer can shared by different tasks nested in this job. Currently, user can
extend the abstract class
[UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java)
and use two methods [...]
+Task framework also provides a feature that lets users store key-value data
per task, job and workflow. The content stored at the workflow layer can be
shared by different jobs belonging to this workflow. Similarly, content
persisted at the job layer can be shared by different tasks nested in this
job. Currently, users can extend the abstract class
[UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java)
and use two methods [...]
```
public class MyTask extends UserContentStore implements Task {
@@ -99,7 +99,7 @@ TaskResult run() {
#### Task Retry and Abort
-Helix provides retry logics to users. User can specify the how many times
allowed to tolerant failure of tasks under a job. It is a method will be
introduced in Following Job Section. Another choice offered to user that if
user thinks a task is very critical and do not want to do the retry once it is
failed, user can return a TaskResult stated above with FATAL_FAILED status.
Then Helix will not do the retry for that task.
+Helix provides retry logic to users. Users can specify the number of task
failures to tolerate under a job, via a method that will be introduced in the
following Job section. Alternatively, if a user considers a task critical and
does not want it retried once it fails, the task can return a TaskResult, as
stated above, with the FATAL_FAILED status. Helix will then not retry that
task.
```
return new TaskResult(TaskResult.Status.FATAL_FAILED, "DO NOT WANT TO RETRY,
ERROR MESSAGE");
@@ -194,7 +194,7 @@ taskDriver.delete(myWorkflow);
#### Add a Job
-WARNING: Job can only be added to WorkflowConfig.Builder. Once WorkflowConfig
built, no job can be added! For creating a Job, please refering following
section (Create a Job)
+WARNING: A Job can only be added to a WorkflowConfig.Builder. Once the
WorkflowConfig is built, no job can be added! To create a Job, please refer to
the following section (Create a Job).
```
myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
@@ -202,7 +202,7 @@ myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
#### Add a Job dependency
-Jobs can have dependencies. If one job2 depends job1, job2 will not be
scheduled until job1 finished.
+Jobs can have dependencies. If job2 depends on job1, job2 will not be
scheduled until job1 has finished.
```
myWorkflowBuilder.addParentChildDependency(ParentJobName, ChildJobName);
@@ -224,7 +224,7 @@
myWorkflowBuilder.setScheduleConfig(ScheduleConfig.oneTimeDelayedStart(new Date(
| _setExpiry(long v, TimeUnit unit)_ | Set the expiration time for this
workflow. |
| _setFailureThreshold(int failureThreshold)_ | Set the failure threshold for
this workflow, once job failures reach this number, the workflow will be
failed. |
| _setWorkflowType(String workflowType)_ | Set the user defined workflowType
for this workflow. |
-| _setTerminable(boolean isTerminable)_ | Set the whether this workflow is
terminable or not. |
+| _setTerminable(boolean isTerminable)_ | Specify whether this workflow is
terminable or not. |
| _setCapacity(int capacity)_ | Set the number of jobs that workflow can hold
before reject further jobs. Only used when workflow is not terminable. |
| _setTargetState(TargetState v)_ | Set the final state of this workflow. |
@@ -255,7 +255,7 @@ jobQueueBuilder.enqueueJob("JobName", jobConfigBuilder);
####Delete Job from Queue
-Helix allowed user to delete a job from existing queue. We offers delete API
in TaskDriver to do this. Delete job from queue and this queue has to be
stopped. Then user can resume the job once delete success.
+Helix allows users to delete a job from an existing queue via the delete API
in TaskDriver. The queue has to be stopped before a job can be deleted; the
user can resume the queue once the deletion succeeds.
```
taskDriver.stop("QueueName");
diff --git a/website/1.0.2/src/site/markdown/tutorial_task_framework.md
b/website/1.0.2/src/site/markdown/tutorial_task_framework.md
index d348544..4721276 100644
--- a/website/1.0.2/src/site/markdown/tutorial_task_framework.md
+++ b/website/1.0.2/src/site/markdown/tutorial_task_framework.md
@@ -29,9 +29,9 @@ Task framework not only can abstract three layers task logics
but also helps doi

### Key Concepts
-* Task is the basic unit in Helix task framework. It can represents the a
single runnable logics that user prefer to execute for each partition
(distributed units).
+* Task is the smallest unit of work in Helix Task Framework. It represents a
single piece of runnable logic that the user prefers to execute for each
partition (the distributed unit).
* Job defines one time operation across all the partitions. It contains
multiple Tasks and configuration of tasks, such as how many tasks, timeout per
task and so on.
-* Workflow is directed acyclic graph represents the relationships and running
orders of Jobs. In addition, a workflow can also provide customized
configuration, for example, Job dependencies.
+* Workflow is a directed acyclic graph that represents the relationships and
running orders of Jobs. In addition, a workflow can also provide customized
configuration, for example, Job dependencies.
* JobQueue is another type of Workflow. Different from normal one, JobQueue is
not terminated until user kill it. Also JobQueue can keep accepting newly
coming jobs.
### Implement Your Task
@@ -71,8 +71,7 @@ For these four fields:
#### Share Content Across Tasks and Jobs
-Task framework also provides a feature that user can store the key-value data
per task, job and workflow. The content stored at workflow layer can shared by
different jobs belong to this workflow. Similarly content persisted at job
layer can shared by different tasks nested in this job. Currently, user can
extend the abstract class
[UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java)
and use two methods [...]
-
+Task framework also provides a feature that lets users store key-value data
per task, job and workflow. The content stored at the workflow layer can be
shared by different jobs belonging to this workflow. Similarly, content
persisted at the job layer can be shared by different tasks nested in this
job. Currently, users can extend the abstract class
[UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java)
and use two methods [...]
```
public class MyTask extends UserContentStore implements Task {
@Override
@@ -99,7 +98,7 @@ TaskResult run() {
#### Task Retry and Abort
-Helix provides retry logics to users. User can specify the how many times
allowed to tolerant failure of tasks under a job. It is a method will be
introduced in Following Job Section. Another choice offered to user that if
user thinks a task is very critical and do not want to do the retry once it is
failed, user can return a TaskResult stated above with FATAL_FAILED status.
Then Helix will not do the retry for that task.
+Helix provides retry logic to users. Users can specify the number of task
failures to tolerate under a job, via a method that will be introduced in the
following Job section. Alternatively, if a user considers a task critical and
does not want it retried once it fails, the task can return a TaskResult, as
stated above, with the FATAL_FAILED status. Helix will then not retry that
task.
```
return new TaskResult(TaskResult.Status.FATAL_FAILED, "DO NOT WANT TO RETRY,
ERROR MESSAGE");
@@ -194,7 +193,7 @@ taskDriver.delete(myWorkflow);
#### Add a Job
-WARNING: Job can only be added to WorkflowConfig.Builder. Once WorkflowConfig
built, no job can be added! For creating a Job, please refering following
section (Create a Job)
+WARNING: A Job can only be added to a WorkflowConfig.Builder. Once the
WorkflowConfig is built, no job can be added! To create a Job, please refer to
the following section (Create a Job).
```
myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
@@ -202,7 +201,7 @@ myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
#### Add a Job dependency
-Jobs can have dependencies. If one job2 depends job1, job2 will not be
scheduled until job1 finished.
+Jobs can have dependencies. If job2 depends on job1, job2 will not be
scheduled until job1 has finished.
```
myWorkflowBuilder.addParentChildDependency(ParentJobName, ChildJobName);
@@ -224,7 +223,7 @@
myWorkflowBuilder.setScheduleConfig(ScheduleConfig.oneTimeDelayedStart(new Date(
| _setExpiry(long v, TimeUnit unit)_ | Set the expiration time for this
workflow. |
| _setFailureThreshold(int failureThreshold)_ | Set the failure threshold for
this workflow, once job failures reach this number, the workflow will be
failed. |
| _setWorkflowType(String workflowType)_ | Set the user defined workflowType
for this workflow. |
-| _setTerminable(boolean isTerminable)_ | Set the whether this workflow is
terminable or not. |
+| _setTerminable(boolean isTerminable)_ | Specify whether this workflow is
terminable or not. |
| _setCapacity(int capacity)_ | Set the number of jobs that workflow can hold
before reject further jobs. Only used when workflow is not terminable. |
| _setTargetState(TargetState v)_ | Set the final state of this workflow. |
@@ -255,7 +254,7 @@ jobQueueBuilder.enqueueJob("JobName", jobConfigBuilder);
####Delete Job from Queue
-Helix allowed user to delete a job from existing queue. We offers delete API
in TaskDriver to do this. Delete job from queue and this queue has to be
stopped. Then user can resume the job once delete success.
+Helix allows users to delete a job from an existing queue via the delete API
in TaskDriver. The queue has to be stopped before a job can be deleted; the
user can resume the queue once the deletion succeeds.
```
taskDriver.stop("QueueName");