This is an automated email from the ASF dual-hosted git repository.
hansva pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hop.git
The following commit(s) were added to refs/heads/master by this push:
new 48f31a3042 Merge pull request #1666 from cvdenzen/patch-1
new 8780e58ae5 Merge pull request #1667 from hansva/master
48f31a3042 is described below
commit 48f31a3042d1c92ddb7b66426222dc14137c743e
Author: Hans Van Akelyen <[email protected]>
AuthorDate: Wed Aug 31 15:11:26 2022 +0200
Merge pull request #1666 from cvdenzen/patch-1
Typo: "through" must be "true"
---
docs/hop-user-manual/modules/ROOT/pages/pipeline/pipelines.adoc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/hop-user-manual/modules/ROOT/pages/pipeline/pipelines.adoc b/docs/hop-user-manual/modules/ROOT/pages/pipeline/pipelines.adoc
index abc46d879c..0ce83d4c85 100644
--- a/docs/hop-user-manual/modules/ROOT/pages/pipeline/pipelines.adoc
+++ b/docs/hop-user-manual/modules/ROOT/pages/pipeline/pipelines.adoc
@@ -41,7 +41,7 @@ The example below shows a very basic pipeline. This is what happens when we run
* the pipeline has 7 transforms. All 7 of these transforms become active when we start the pipeline.
* the "read-25M-records" transform starts reading data from a file, and pushes that data down the stream to "perform-calculations" and the following transforms. Since reading 25 million records takes a while, some data may already have finished processing while we're still reading records from the file.
* the "lookup-sql-data" matches data we read from the file with data we retrieved from the "read-sql-data" transform. The xref:pipeline/transforms/streamlookup.adoc[Stream Lookup] accepts input from the "read-sql-data", which is shown with the information icon image:icons/info.svg[] on the hop.
-* once the data from the file and sql query are matched, we check a condition with the xref:pipeline/transforms/filterrows.adoc[Filter Rows] transform in "condition?". The output of this data is passed to "write-to-table" or "write-to-file", depending on whether the condition outcome was through or false.
+* once the data from the file and sql query are matched, we check a condition with the xref:pipeline/transforms/filterrows.adoc[Filter Rows] transform in "condition?". The output of this data is passed to "write-to-table" or "write-to-file", depending on whether the condition outcome was true or false.
image:hop-gui/pipeline/basic-pipeline.png[Pipelines - basic pipeline, width="65%"]