twalthr commented on a change in pull request #7070: [FLINK-10625] Documentation for MATCH_RECOGNIZE clause
URL: https://github.com/apache/flink/pull/7070#discussion_r232604689
 
 

 ##########
 File path: docs/dev/table/streaming/match_recognize.md
 ##########
 @@ -0,0 +1,654 @@
+---
+title: 'Detecting event patterns <span class="label label-danger" style="font-size:50%">Experimental</span>'
+nav-parent_id: streaming_tableapi
+nav-title: 'Detecting event patterns'
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+It is a common use case to search for a set of event patterns, especially in the case of data streams. Apache Flink
+comes with a [CEP library]({{ site.baseurl }}/dev/libs/cep.html) which allows for pattern detection in event streams. On the other hand, Flink's
+Table API & SQL provides a relational way to express queries that comes with multiple functions and
+optimizations that can be used out of the box. In December 2016, the ISO released a new version of the
+international SQL standard (ISO/IEC 9075:2016) which includes Row Pattern Recognition for complex event processing.
+This allows consolidating those two APIs using the MATCH_RECOGNIZE clause.
+
+* This will be replaced by the TOC
+{:toc}
+
+Example query
+-------------
+
+Row Pattern Recognition in SQL is performed using the MATCH_RECOGNIZE clause. MATCH_RECOGNIZE enables you to do the following tasks:
+* Logically partition and order the data that is used in the MATCH_RECOGNIZE clause with its PARTITION BY and ORDER BY clauses.
+* Define patterns of rows to seek using the PATTERN clause of the MATCH_RECOGNIZE clause.
+  These patterns use regular expression syntax, a powerful and expressive feature, applied to the pattern variables you define.
+* Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause.
+* Define measures, which are expressions usable in other parts of the SQL query, in the MEASURES clause.
+
+For example, to find periods of a constantly decreasing price of a ticker, one could write a query like this:
+
+{% highlight sql %}
+SELECT *
+FROM Ticker 
+MATCH_RECOGNIZE (
+    PARTITION BY symbol
+    ORDER BY rowtime
+    MEASURES  
+       STRT_ROW.rowtime AS start_tstamp,
+       LAST(PRICE_DOWN.rowtime) AS bottom_tstamp,
+       LAST(PRICE_UP.rowtime) AS end_tstamp
+    ONE ROW PER MATCH
+    AFTER MATCH SKIP TO LAST UP
+    PATTERN (STRT_ROW PRICE_DOWN+ PRICE_UP+)
+    DEFINE
+       PRICE_DOWN AS PRICE_DOWN.price < LAST(PRICE_DOWN.price, 1) OR
+               (LAST(PRICE_DOWN.price, 1) IS NULL AND PRICE_DOWN.price < STRT_ROW.price),
+       PRICE_UP AS PRICE_UP.price > LAST(PRICE_UP.price, 1) OR LAST(PRICE_UP.price, 1) IS NULL
+    ) MR;
+{% endhighlight %}
+
+This query, given the following input data:
+
+{% highlight text %}
+SYMBOL         ROWTIME         PRICE
+======  ====================  =======
+'ACME'  '01-Apr-11 10:00:00'   12
+'ACME'  '01-Apr-11 10:00:01'   17
+'ACME'  '01-Apr-11 10:00:02'   19
+'ACME'  '01-Apr-11 10:00:03'   21
+'ACME'  '01-Apr-11 10:00:04'   25
+'ACME'  '01-Apr-11 10:00:05'   12
+'ACME'  '01-Apr-11 10:00:06'   15
+'ACME'  '01-Apr-11 10:00:07'   20
+'ACME'  '01-Apr-11 10:00:08'   24
+'ACME'  '01-Apr-11 10:00:09'   25
+'ACME'  '01-Apr-11 10:00:10'   19
+{% endhighlight %}
+
+will produce a summary row for each period in which the price was constantly decreasing.
+
+{% highlight text %}
+SYMBOL     START_TSTAMP        BOTTOM_TSTAMP       END_TSTAMP
+=========  ==================  ==================  ==================
+ACME       01-APR-11 10:00:04  01-APR-11 10:00:05  01-APR-11 10:00:09
+{% endhighlight %}
+
+The aforementioned query consists of the following clauses (a structural sketch follows the list):
+
+* [PARTITION BY](#partitioning) - defines the logical partitioning of the stream, similar to `GROUP BY` operations.
+* [ORDER BY](#order-of-events) - specifies how the incoming events should be ordered; this is essential as patterns depend on order.
+* [MEASURES](#define--measures) - defines the output of the clause, similar to a `SELECT` clause.
+* [ONE ROW PER MATCH](#output-mode) - an output mode which defines how many rows per match will be produced.
+* [AFTER MATCH SKIP](#after-match-skip) - specifies where the next match should start; this is also a way to control how many distinct matches a single event can belong to.
+* [PATTERN](#defining-pattern) - allows constructing patterns that will be searched for, specified using a regular-expression-like syntax.
+* [DEFINE](#define--measures) - defines the conditions that events must meet in order to qualify for the corresponding pattern variable.
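+
+The following is a minimal sketch (not the full grammar) of how those clauses are arranged within `MATCH_RECOGNIZE`:
+
+{% highlight sql %}
+SELECT ...
+FROM input_table
+MATCH_RECOGNIZE (
+    PARTITION BY ...
+    ORDER BY ...
+    MEASURES ...
+    ONE ROW PER MATCH
+    AFTER MATCH SKIP ...
+    PATTERN ( ... )
+    DEFINE ...
+)
+{% endhighlight %}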
+
+
+Installation guide
+------------------
+
+MATCH_RECOGNIZE uses Apache Flink's CEP library internally. In order to use this clause, you have to add
+this library as a dependency, either by including it in your uber-jar with the following dependency:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-cep{{ site.scala_version_suffix }}</artifactId>
+  <version>{{ site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+or by adding it to the cluster classpath (see [here]({{ site.baseurl }}/dev/linking.html)). If you want to use
+MATCH_RECOGNIZE from the [SQL Client]({{ site.baseurl }}/dev/table/sqlClient.html), you don't have to do anything as all the dependencies are included by default.
+
+Partitioning
+------------
+It is possible to look for patterns in partitioned data, e.g., trends for a single ticker. This can be expressed using the `PARTITION BY` clause, as sketched below. It is equivalent to applying
+the [`keyBy`]({{ site.baseurl }}/dev/stream/operators/index.html#datastream-transformations) transformation to a `DataStream`, or similar to using `GROUP BY` for aggregations.
+
+<span class="label label-danger" style="font-size:75%">Attention:</span> It is highly advised to apply partitioning, because otherwise `MATCH_RECOGNIZE` will be translated
+into a non-parallel operator to ensure global ordering.
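+
+For example, the following sketch (reusing the `Ticker` table from the query above, with the remaining clauses omitted for brevity) searches for patterns separately for every `symbol`:
+
+{% highlight sql %}
+SELECT *
+FROM Ticker
+MATCH_RECOGNIZE (
+    PARTITION BY symbol
+    ORDER BY rowtime
+    ...
+) MR;
+{% endhighlight %}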
+
+Order of events
+---------------
+
+Apache Flink allows searching for patterns based on time, either [processing time or event time](time_attributes.html). This assumption
+is very important, because it allows sorting events before they are fed into the pattern state machine. Because of that, one can be sure
+that the produced output will be correct with regard to the order in which those events happened.
+
+As a consequence, one has to provide a time attribute as the first argument to the `ORDER BY` clause.
+
+That means for a table:
+
+{% highlight text %}
+Ticker
+     |-- symbol: Long
+     |-- price: Long
+     |-- tax: Long
+     |-- rowTime: TimeIndicatorTypeInfo(rowtime)
+{% endhighlight %}
+
+a definition like the one sketched below, which orders primarily by the time attribute, would be valid:
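+
+{% highlight sql %}
+ORDER BY rowTime, price
+{% endhighlight %}
+
+whereas an `ORDER BY` clause that does not start with the time attribute (e.g. `ORDER BY price, rowTime`) would not be supported.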
 
 Review comment:
   I would not list all things that are not supported. Only show supported 
things and mention the unsupported things in one or two sentences afterwards.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
