twalthr commented on a change in pull request #7070: [FLINK-10625] 
Documentation for MATCH_RECOGNIZE clause
URL: https://github.com/apache/flink/pull/7070#discussion_r232600187
 
 

 ##########
 File path: docs/dev/table/streaming/match_recognize.md
 ##########
 @@ -0,0 +1,654 @@
+---
+title: 'Detecting event patterns <span class="label label-danger" style="font-size:50%">Experimental</span>'
+nav-parent_id: streaming_tableapi
+nav-title: 'Detecting event patterns'
+nav-pos: 5
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+It is a common use case to search for a set of event patterns, especially in the case of data streams.
+Apache Flink comes with a [CEP library]({{ site.baseurl }}/dev/libs/cep.html) which allows for pattern
+detection in event streams. Flink's Table API & SQL, on the other hand, provides a relational way to
+express queries and comes with multiple functions and optimizations that can be used out of the box.
+In December 2016, ISO released a new version of the international SQL standard (ISO/IEC 9075:2016),
+including Row Pattern Recognition for complex event processing, which allows consolidating those two
+APIs using the MATCH_RECOGNIZE clause.
+
+* This will be replaced by the TOC
+{:toc}
+
+Example query
+-------------
+
+Row Pattern Recognition in SQL is performed using the MATCH_RECOGNIZE clause. MATCH_RECOGNIZE enables
+you to do the following tasks:
+* Logically partition and order the data that is used in the MATCH_RECOGNIZE clause with its
+  PARTITION BY and ORDER BY clauses.
+* Define patterns of rows to seek using the PATTERN clause of the MATCH_RECOGNIZE clause.
+  These patterns use regular expression syntax, a powerful and expressive feature, applied to the
+  pattern variables you define.
+* Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause.
+* Define measures, which are expressions usable in other parts of the SQL query, in the MEASURES clause.
+
+For example, to find periods of a constantly decreasing price of a ticker, one could write a query
+like this:
+
+{% highlight sql %}
+SELECT *
+FROM Ticker 
+MATCH_RECOGNIZE (
+    PARTITION BY symbol
+    ORDER BY rowtime
+    MEASURES  
+       STRT_ROW.rowtime AS start_tstamp,
+       LAST(PRICE_DOWN.rowtime) AS bottom_tstamp,
+       LAST(PRICE_UP.rowtime) AS end_tstamp
+    ONE ROW PER MATCH
+    AFTER MATCH SKIP TO LAST UP
+    PATTERN (STRT_ROW PRICE_DOWN+ PRICE_UP+)
+    DEFINE
+       PRICE_DOWN AS PRICE_DOWN.price < LAST(PRICE_DOWN.price, 1) OR
+               (LAST(PRICE_DOWN.price, 1) IS NULL AND PRICE_DOWN.price < STRT_ROW.price),
+       PRICE_UP AS PRICE_UP.price > LAST(PRICE_UP.price, 1) OR LAST(PRICE_UP.price, 1) IS NULL
+    ) MR;
+{% endhighlight %}
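+
+As a sketch of how the pattern applies, consider a hypothetical input (all values below are
+invented purely for illustration), where `rowtime` is assumed to be a time attribute used for
+ordering:
+
+{% highlight text %}
+symbol   rowtime               price
+======   ===================   =====
+ACME     2018-11-01 10:00:00   12
+ACME     2018-11-01 10:00:01   11
+ACME     2018-11-01 10:00:02   10
+ACME     2018-11-01 10:00:03   13
+{% endhighlight %}
+
+With this data, the pattern should map the first row to STRT_ROW, the two falling rows to
+PRICE_DOWN, and the final rising row to PRICE_UP, so the MEASURES clause would report
+10:00:00 as start_tstamp, 10:00:02 as bottom_tstamp, and 10:00:03 as end_tstamp.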
+
+This query, given the following input data:
 
 Review comment:
   Introduce the input data first and explain each column. Esp. that rowtime is 
a time attribute. Then show the query with explanation. Then the output data 
with explanation.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
