vibhatha commented on a change in pull request #12033:
URL: https://github.com/apache/arrow/pull/12033#discussion_r778143970
##########
File path: docs/source/cpp/streaming_execution.rst
##########
@@ -305,3 +305,601 @@ Datasets may be scanned multiple times; just make multiple scan
nodes from that dataset. (Useful for a self-join, for example.)
Note that producing two scan nodes like this will perform all
reads and decodes twice.
+
+Constructing ``ExecNode`` using Options
+=======================================
+
+Using an execution plan we can construct various queries.
+To construct such queries, a set of building blocks referred to as
+:class:`ExecNode` s is provided. These nodes make it possible to
+express operations like filtering, projection, joins, etc.
+
+The following :class:`ExecNode` s are exposed:
+
+1. :class:`SourceNode`
+2. :class:`FilterNode`
+3. :class:`ProjectNode`
+4. :class:`ScalarAggregateNode`
+5. :class:`SinkNode`
+6. :class:`ConsumingSinkNode`
+7. :struct:`OrderBySinkNode`
+8. SelectKSinkNode
+9. ScanNode
+10. :class:`HashJoinNode`
+11. WriteNode
+12. :class:`UnionNode`
+
+These :class:`ExecNode` s provide the various operations required
+when designing a streaming execution plan.
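+
+Each of the nodes below is attached to an :class:`arrow::compute::ExecPlan`.
+As a minimal sketch (assuming the default execution context is sufficient),
+such a plan can be created as follows; the ``plan`` variable is reused in the
+snippets below::
+
+  // create an execution plan using the default exec context
+  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::compute::ExecPlan> plan,
+                        arrow::compute::ExecPlan::Make(arrow::compute::default_exec_context()));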
+
+``SourceNode``
+--------------
+
+:class:`arrow::compute::SourceNode` can be considered the entry point of a
+streaming execution plan. A source node can be constructed as follows.
+
+:class:`arrow::compute::SourceNodeOptions` is used to create the
+:class:`arrow::compute::SourceNode`. Creating these options requires the
+:class:`Schema` of the data passing through and a function that generates the
+data, of type
+``std::function<arrow::Future<arrow::util::optional<arrow::compute::ExecBatch>>()>``::
+
+  // data generator yielding the batches to feed into the plan
+  arrow::AsyncGenerator<arrow::util::optional<arrow::compute::ExecBatch>> gen = ...;
+
+  // data schema
+  auto schema = arrow::schema({...});
+
+  // source node options
+  auto source_node_options = arrow::compute::SourceNodeOptions{schema, gen};
+
+  // create a source node
+  ARROW_ASSIGN_OR_RAISE(arrow::compute::ExecNode * source,
+                        arrow::compute::MakeExecNode("source", plan.get(), {},
+                                                     source_node_options));
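+
+For illustration, one possible way to obtain such a generator (a sketch,
+assuming the batches are already available in memory) is to wrap a vector of
+``arrow::util::optional<arrow::compute::ExecBatch>`` with
+``arrow::MakeVectorGenerator`` from ``arrow/util/async_generator.h``::
+
+  // assumption: `batches` already holds the data to feed into the plan
+  std::vector<arrow::util::optional<arrow::compute::ExecBatch>> batches = ...;
+
+  // wrap the in-memory batches into an asynchronous generator
+  arrow::AsyncGenerator<arrow::util::optional<arrow::compute::ExecBatch>> gen =
+      arrow::MakeVectorGenerator(std::move(batches));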
+
+``FilterNode``
+--------------
+
+:class:`FilterNode`, as the name suggests, provides a way to define a data
+filtering criterion. The filter is written as an
+:class:`arrow::compute::Expression` and passed to the node via
+:class:`arrow::compute::FilterNodeOptions`. For instance, a filter that keeps
+only the rows where the value of column ``a`` is greater than 3 can be written
+as follows::
+
+  // a > 3
+  arrow::compute::Expression filter_expr = arrow::compute::greater(
+      arrow::compute::field_ref("a"),
+      arrow::compute::literal(3));
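+
+The expression is then wrapped into :class:`arrow::compute::FilterNodeOptions`
+and used to create the filter node. A minimal sketch, assuming ``source`` is
+the source node created above and ``plan`` is the plan it belongs to::
+
+  // wrap the filter expression into filter node options
+  auto filter_node_options = arrow::compute::FilterNodeOptions{filter_expr};
+
+  // create a filter node consuming the source node's output
+  ARROW_ASSIGN_OR_RAISE(arrow::compute::ExecNode * filter,
+                        arrow::compute::MakeExecNode("filter", plan.get(),
+                                                     {source},
+                                                     filter_node_options));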
Review comment:
I think I added an additional tab or two; I mistakenly thought the tabs
were required to highlight the code.
I fixed it throughout the rest of the code snippets.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]