Airblader commented on a change in pull request #460:
URL: https://github.com/apache/flink-web/pull/460#discussion_r690950672



##########
File path: _posts/2021-08-16-connector-table-sql-api-part1.md
##########
@@ -0,0 +1,234 @@
+---
+layout: post
+title:  "Implementing a custom source connector for Table API and SQL - Part One"
+date: 2021-08-18T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+excerpt: 
+---
+
+{% toc %}
+
+# Introduction
+
+Apache Flink is a data processing engine that keeps [state](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/) locally in order to do computations, but it does not store the data itself. This means that it does not include its own fault-tolerant storage component by default and relies on external systems to ingest and persist data. Connecting to external data inputs (**sources**) and external data storage (**sinks**) is achieved with interfaces called **connectors**.
+
+Since connectors are such important components, Flink ships with [connectors for some popular systems](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/). But sometimes you may need to read data in an uncommon format that the built-in connectors do not cover. For this reason, Flink also provides [APIs](#) for building custom connectors to systems that are not supported by an existing connector.
+
+Once you have a source and a sink defined for Flink, you can use its declarative APIs (in the form of the [Table API and SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/overview/)) to execute queries for data analysis without modifying the underlying data.
+
+The **Table API** offers the same operations as **SQL**, but it extends and improves on SQL's functionality. It is named the Table API because of its relational functions on tables: how to obtain a table, how to output a table, and how to perform query operations on a table.
+
+In this two-part tutorial, you will explore some of these APIs and concepts by implementing your own custom source connector for reading in data from an email inbox. You will use Flink to process the inbox through the IMAP protocol and write the emails out to a sink, sorted by subject.

Review comment:
       Having read the entirety now, we never actually achieved this goal? Did I misunderstand the intention here?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

