Airblader commented on a change in pull request #460:
URL: https://github.com/apache/flink-web/pull/460#discussion_r690950672



##########
File path: _posts/2021-08-16-connector-table-sql-api-part1.md
##########
@@ -0,0 +1,234 @@
+---
+layout: post
+title:  "Implementing a custom source connector for Table API and SQL - Part 
One "
+date: 2021-08-18T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+excerpt: 
+---
+
+{% toc %}
+
+# Introduction
+
+Apache Flink is a data processing engine that keeps 
[state](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/)
 locally in order to do computations but does not store data. This means that 
it does not include its own fault-tolerant storage component by default and 
relies on external systems to ingest and persist data. Connecting to external 
data input (**sources**) and external data storage (**sinks**) is achieved with 
interfaces called **connectors**.   
+
+Since connectors are such important components, Flink ships with [connectors 
for some popular 
systems](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/).
 But sometimes you may need to read in an uncommon data format and what Flink 
provides is not enough. This is why Flink also provides [APIs](#) for building 
custom connectors if you want to connect to a system that is not supported by 
an existing connector.   
+
+Once you have a source and a sink defined for Flink, you can use its 
declarative APIs (in the form of the [Table API and 
SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/overview/))
 to execute queries for data analysis without modification to the underlying 
data.  
+
+The **Table API** offers the same operations as **SQL**, but extends and improves on SQL's functionality. It is named Table API because of its relational functions on tables: how to obtain a table, how to output a table, and how to perform query operations on a table.
+
+In this two-part tutorial, you will explore some of these APIs and concepts by implementing your own custom source connector for reading in data from a mailbox. You will use Flink to process an email inbox through the IMAP protocol and sort the emails by subject into a sink. 

Review comment:
       Having read the entirety now, we never actually achieved this goal? Did 
I misunderstand the intention here?

##########
File path: _posts/2021-08-16-connector-table-sql-api-part2.md
##########
@@ -0,0 +1,434 @@
+---
+layout: post
+title:  "Implementing a custom source connector for Table API and SQL - Part 
Two "
+date: 2021-08-18T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+excerpt: 
+---
+
+{% toc %}
+
+# Introduction
+
+In [part one](#) of this tutorial, you learned how to build a custom source 
connector for Flink. In part two, you will learn how to integrate the connector 
with a test email inbox through the IMAP protocol, filter out emails, and 
execute [Flink SQL on the Ververica 
Platform](https://www.ververica.com/apache-flink-sql-on-ververica-platform). 
+
+# Goals
+
+Part two of the tutorial will teach you how to: 
+
+- integrate a source connector which connects to a mailbox using the IMAP 
protocol
+- use [Jakarta Mail](https://eclipse-ee4j.github.io/mail/), a Java library 
that can send and receive email via the IMAP protocol  
+- write [Flink 
SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/overview/)
 and execute the queries in the Ververica Platform
+
+You are encouraged to follow along with the code in this 
[repository](https://github.com/Airblader/blog-imap). It provides a boilerplate project 
that also comes with a bundled 
[docker-compose](https://docs.docker.com/compose/) setup that lets you easily 
run the connector. You can then try it out with Flink’s SQL client.
+
+
+# Prerequisites
+
+This tutorial assumes that you have:
+
+- followed the steps outlined in [part one](#) of this tutorial
+- some familiarity with Java and object-oriented programming
+
+
+# Understand how to fetch emails via the IMAP protocol
+
+Now that you have a working source connector that can run on Flink, it is time 
to connect to an email server via IMAP (an Internet protocol that allows email 
clients to retrieve messages from a mail server) so that Flink can process 
emails instead of static test data.  
+
+You will use [Jakarta Mail](https://eclipse-ee4j.github.io/mail/), a Java 
library that can be used to send and receive email via IMAP. For simplicity, 
authentication will use a plain username and password.
+
+This tutorial will focus more on how to implement a connector for Flink. If 
you want to learn more about the details of how IMAP or Jakarta Mail work, you 
are encouraged to explore a more extensive implementation at this 
[repository](https://github.com/Airblader/flink-connector-email). 
+
+In order to fetch emails, you will need to connect to the email server, 
register a listener for new emails and collect them whenever they arrive, and 
enter a loop to keep the connector running. 
+
+
+# Add configuration options - server information and credentials
+
+In order to connect to your IMAP server, you will need at least the following:
+
+- hostname (of the mail server)
+- port number
+- username
+- password
+
+You will start by creating a class to encapsulate the configuration options. 
You will make use of [Lombok](https://projectlombok.org/setup/maven) to help 
with some boilerplate code. By adding the `@Data` and `@Builder` annotations, Lombok generates getters, `equals()`/`hashCode()`/`toString()`, and a builder for all the fields of the immutable class. 
+
+```java
+@Data
+@Builder
+public class ImapSourceOptions implements Serializable {
+    private static final long serialVersionUID = 1L;
+
+    private final String host;
+    private final Integer port;
+    private final String user;
+    private final String password;
+}
+```
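+
+As a quick illustration of what Lombok generates here, the options object can then be built fluently through the generated builder. This is only a sketch; the values shown are the ones used for the GreenMail test setup later in this post:
+
+```java
+ImapSourceOptions options = ImapSourceOptions.builder()
+    .host("greenmail")     // values taken from the docker-compose test setup below
+    .port(3143)
+    .user("alice")
+    .password("alice")
+    .build();
+```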
+
+Now you can add an instance of this class to the `ImapSourceFunction` and `ImapTableSource` classes so it can be used there. Also pass along the column names with which the table has been created, i.e. the columns declared in the `CREATE TABLE` statement (such as `subject` and `content` later in this post); the source will need them later to produce rows that match the declared schema.
+
+```java
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+    private final ImapSourceOptions options;
+    private final List<String> columnNames;
+
+    public ImapSourceFunction(
+        ImapSourceOptions options, 
+        List<String> columnNames
+    ) {
+        this.options = options;
+        this.columnNames = columnNames.stream()
+            .map(String::toUpperCase)
+            .collect(Collectors.toList());
+    }
+
+    // ...
+}
+```
+
+```java
+public class ImapTableSource implements ScanTableSource {
+
+    private final ImapSourceOptions options;
+    private final List<String> columnNames;
+
+    public ImapTableSource(
+        ImapSourceOptions options,
+        List<String> columnNames
+    ) {
+        this.options = options;
+        this.columnNames = columnNames;
+    }
+
+    // …
+
+    @Override
+    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext ctx) {
+        final ImapSourceFunction sourceFunction = new 
ImapSourceFunction(options, columnNames);
+        return SourceFunctionProvider.of(sourceFunction, true);
+    }
+
+    @Override
+    public DynamicTableSource copy() {
+        return new ImapTableSource(options, columnNames);
+    }
+
+    // …
+}
+```
+
+Finally, in the `ImapTableSourceFactory` class, you need to create a `ConfigOption` for each of the hostname, port number, username, and password, and then report these options to Flink. Since all of the current options are mandatory, you can add them to the `requiredOptions()` method in order to do this. 
+
+```java
+public class ImapTableSourceFactory implements DynamicTableSourceFactory {
+
+    public static final ConfigOption<String> HOST = 
ConfigOptions.key("host").stringType().noDefaultValue();
+    public static final ConfigOption<Integer> PORT = 
ConfigOptions.key("port").intType().noDefaultValue();
+    public static final ConfigOption<String> USER = 
ConfigOptions.key("user").stringType().noDefaultValue();
+    public static final ConfigOption<String> PASSWORD = 
ConfigOptions.key("password").stringType().noDefaultValue();
+
+    // …
+
+    @Override
+    public Set<ConfigOption<?>> requiredOptions() {
+        final Set<ConfigOption<?>> options = new HashSet<>();
+        options.add(HOST);
+        options.add(PORT);
+        options.add(USER);
+        options.add(PASSWORD);
+        return options;
+    }
+
+    // …
+}
+```
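+
+If you later add options that are not mandatory, they would be reported through `optionalOptions()` instead, typically together with a default value. The following is only a sketch with a hypothetical `ssl` option that is not part of this tutorial's connector:
+
+```java
+public static final ConfigOption<Boolean> SSL =
+    ConfigOptions.key("ssl").booleanType().defaultValue(false);
+
+@Override
+public Set<ConfigOption<?>> optionalOptions() {
+    final Set<ConfigOption<?>> options = new HashSet<>();
+    options.add(SSL);  // optional: falls back to the default value if not set
+    return options;
+}
+```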
+
+Now take a look at the `createDynamicTableSource()` function in the `ImapTableSourceFactory` class. Recall that previously (in part one) you used the [TableFactoryHelper](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/factories/FactoryUtil.TableFactoryHelper.html), a small helper utility that Flink offers which ensures that required options are set and that no unknown options are provided. You can now use it to automatically make sure that the required options of hostname, port number, username, and password are all provided when creating a table with this connector. The helper will throw a validation exception if a required option is missing. You can also use it to access the provided options (`getOptions()`), convert them into an instance of the `ImapSourceOptions` class created earlier, and pass that instance to a new `ImapTableSource`:
+
+```java
+public class ImapTableSourceFactory implements DynamicTableSourceFactory {
+
+    // ...
+
+    @Override
+    public DynamicTableSource createDynamicTableSource(Context ctx) {
+        final FactoryUtil.TableFactoryHelper factoryHelper = 
FactoryUtil.createTableFactoryHelper(this, ctx);
+        factoryHelper.validate();
+
+        final ImapSourceOptions options = ImapSourceOptions.builder()
+            .host(factoryHelper.getOptions().get(HOST))
+            .port(factoryHelper.getOptions().get(PORT))
+            .user(factoryHelper.getOptions().get(USER))
+            .password(factoryHelper.getOptions().get(PASSWORD))
+            .build();
+        final List<String> columnNames = 
ctx.getCatalogTable().getResolvedSchema().getColumnNames();
+        return new ImapTableSource(options, columnNames);
+    }
+}
+```
+
+To test these new configuration options, run:
+
+```sh
+$ cd testing/
+$ ./build_and_run.sh
+```
+
+Once you see the Flink SQL client start up, execute the following statements 
to create a table with your connector:
+
+```sql
+CREATE TABLE T (subject STRING, content STRING) WITH ('connector' = 'imap');
+
+SELECT * FROM T;
+```
+
+This time it will fail because the required options are not provided.  
+
+```
+[ERROR] Could not execute SQL statement. Reason:
+org.apache.flink.table.api.ValidationException: One or more required options 
are missing.
+
+Missing required options are:
+
+host
+password
+user
+``` 
+
+
+#  Connect to the source email server
+
+Now that you have configured the required options to connect to the email 
server, it is time to actually connect to the server. 
+
+Going back to the `ImapSourceFunction` class, you first need to convert the options given to the table source into a `Properties` object, which is what you can pass to the Jakarta Mail library. The `Properties` object simply holds the mail session settings as key/value pairs, such as the `mail.store.protocol` and `mail.imap.host` entries below, and various other `mail.imap.*` properties (e.g. for enabling SSL) can be set here as well.
+
+```java
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+   // …
+
+   private Properties getSessionProperties() {
+        Properties props = new Properties();
+        props.put("mail.store.protocol", "imap");
+        props.put("mail.imap.auth", true);
+        props.put("mail.imap.host", options.getHost());
+        if (options.getPort() != null) {
+            props.put("mail.imap.port", options.getPort());
+        }
+
+        return props;
+    }
+}
+```
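+
+If the connector were extended with the hypothetical `ssl` option sketched earlier, enabling SSL for the session would roughly amount to setting one more property. Note that `options.isSsl()` is a hypothetical getter that does not exist in this tutorial's `ImapSourceOptions`:
+
+```java
+// Hypothetical: only applicable if an "ssl" option were added to ImapSourceOptions
+if (options.isSsl()) {
+    props.put("mail.imap.ssl.enable", true);
+}
+```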
+
+Now create a method (`connect()`) which sets up the connection:
+
+```java 
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+    // …
+
+    private transient Store store;
+    private transient IMAPFolder folder;
+
+    private void connect() throws Exception {
+        var session = Session.getInstance(getSessionProperties(), null);
+        store = session.getStore();
+        store.connect(options.getUser(), options.getPassword());
+
+        var genericFolder = store.getFolder("INBOX");
+        folder = (IMAPFolder) genericFolder;
+
+        if (!folder.isOpen()) {
+            folder.open(Folder.READ_ONLY);
+        }
+    }
+}
+```
+
+You can now use this method to connect to the mail server when the source is created. Create a loop that keeps the source running and periodically requests the message count, so that the server keeps sending notifications. Lastly, implement methods to cancel and close the connection:
+
+```java
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+    private transient volatile boolean running = false;
+
+    // …
+
+    @Override
+    public void run(SourceFunction.SourceContext<RowData> ctx) throws 
Exception {
+        connect();
+        running = true;
+
+        // TODO: Listen for new messages
+
+        while (running) {
+            // Trigger some IMAP request to force the server to send a 
notification
+            folder.getMessageCount();
+            Thread.sleep(250);
+        }
+    }
+
+    @Override
+    public void cancel() {
+        running = false;
+    }
+
+    @Override
+    public void close() throws Exception {
+        if (folder != null) {
+            folder.close();
+        }
+
+        if (store != null) {
+            store.close();
+        }
+    }
+}
+```
+
+A request is sent to the server in every loop iteration. This is crucial as it ensures that the server will keep sending notifications. A more sophisticated approach would be to make use of the IMAP IDLE protocol. 
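+
+As a rough idea of the IDLE-based variant: Jakarta Mail's `IMAPFolder` exposes an `idle()` call that blocks until the server pushes a notification, so the loop no longer needs a fixed sleep interval. This is only a sketch of the idea, not a drop-in replacement for the polling loop above:
+
+```java
+while (running) {
+    // idle() blocks until the server sends a notification (e.g. a new message arrived),
+    // which then triggers the registered MessageCountListener.
+    folder.idle();
+}
+```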
+
+
+## Collect incoming emails
+
+Now you need to listen for new emails arriving in the inbox folder and collect 
them. To begin, hardcode the schema and only return the email’s subject. 
Fortunately, Jakarta provides a simple hook to get notified when new messages 
arrive on the server. You can use this in place of the “TODO” comment above:
+
+```java
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+    @Override
+    public void run(SourceFunction.SourceContext<RowData> ctx) throws 
Exception {
+        // …
+
+        folder.addMessageCountListener(new MessageCountAdapter() {
+            @Override
+            public void messagesAdded(MessageCountEvent e) {
+                collectMessages(ctx, e.getMessages());
+            }
+        });
+
+        // …
+    }
+
+    private void collectMessages(SourceFunction.SourceContext<RowData> ctx, 
Message[] messages) {
+        for (Message message : messages) {
+            try {
+                
ctx.collect(GenericRowData.of(StringData.fromString(message.getSubject())));
+            } catch (MessagingException ignored) {}
+        }
+    }
+}
+```
+
+We can now once again run `build_and_run.sh` to build the project and drop into the SQL client. This time, we’ll be connecting to a GreenMail server which is started as part of the setup:
+
+```sql
+CREATE TABLE T (
+    subject STRING
+) WITH (
+    'connector' = 'imap', 
+    'host' = 'greenmail',
+    'port' = '3143', 
+    'user' = 'alice', 
+    'password' = 'alice'
+);
+
+SELECT * FROM T;
+```
+
+The query should now run continuously, but of course no rows will be produced. 
For that, we need to actually send an email to the server. If you have 
mailutils’ mailx installed, you can do so using
+
+```sh
+$ echo "This is the email body" | mailx -Sv15-compat \
+        -s"Test Subject" \
+        -Smta=smtp://bob:bob@localhost:3025 \
+        [email protected]
+
+```
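+
+If mailx is not available, the same test email can also be sent programmatically with Jakarta Mail classes, which the project already depends on. This is only a sketch: the recipient address is a hypothetical placeholder for the inbox the connector reads, while the `bob`/`bob` credentials and port `3025` are GreenMail's SMTP settings from the command above:
+
+```java
+Properties props = new Properties();
+props.put("mail.smtp.host", "localhost");
+props.put("mail.smtp.port", "3025");
+
+Session session = Session.getInstance(props, null);
+MimeMessage msg = new MimeMessage(session);
+msg.setFrom(new InternetAddress("bob@localhost"));  // illustrative sender address
+msg.setRecipients(Message.RecipientType.TO,
+    InternetAddress.parse("alice@example.org"));    // hypothetical recipient inbox
+msg.setSubject("Test Subject");
+msg.setText("This is the email body");
+Transport.send(msg, "bob", "bob");                  // authenticate against GreenMail
+```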
+
+The subject “Test Subject” should now have appeared as a row in your output. Our source is working!
+
+However, we’re still hard-coding the schema produced by the source, so defining the table with a different schema, for example, will produce errors. We want to be able to define which fields of an email interest us and then produce the data accordingly. For this, we’ll use the list of column names we held onto earlier and simply look at it when we collect the emails. For brevity, we’ll only include a few of the possible fields here:
+
+```java
+    private void collectMessages(SourceFunction.SourceContext<RowData> ctx, Message[] messages) {
+        for (Message message : messages) {
+            try {
+                collectMessage(ctx, message);
+            } catch (MessagingException ignored) {}
+        }
+    }
+
+    private void collectMessage(SourceFunction.SourceContext<RowData> ctx, 
Message message)
+        throws MessagingException {
+        var row = new GenericRowData(columnNames.size());
+
+        for (int i = 0; i < columnNames.size(); i++) {
+            switch (columnNames.get(i)) {
+                case "SUBJECT":
+                    row.setField(i, 
StringData.fromString(message.getSubject()));
+                    break;
+                case "SENT":
+                    row.setField(i, 
TimestampData.fromInstant(message.getSentDate().toInstant()));
+                    break;
+                case "RECEIVED":
+                    row.setField(i, 
TimestampData.fromInstant(message.getReceivedDate().toInstant()));
+                    break;
+                // ...
+            }
+        }
+
+        ctx.collect(row);
+    }
+```
+
+You should now have a working source from which we can select any of the supported columns. We can try it out once again, but this time specifying all of the columns supported above:
+
+```sql
+CREATE TABLE T (
+    subject STRING,
+    sent TIMESTAMP(3),
+    received TIMESTAMP(3)
+) WITH (
+    'connector' = 'imap', 
+    'host' = 'greenmail',
+    'port' = '3143', 
+    'user' = 'alice', 
+    'password' = 'alice'
+);
+
+SELECT * FROM T;
+```
+
+Use the command from earlier to send emails to the GreenMail server and you should see them appear. You can also try selecting only some of the columns, or writing more complex queries. Note, however, that there are quite a few more things we haven’t covered here, such as advancing watermarks.
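+
+For example, a purely illustrative query that projects and filters on the columns defined above could look like this:
+
+```sql
+SELECT subject, received
+FROM T
+WHERE subject LIKE '%Flink%';
+```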
+
+
+# Test the connector with a real mail server on the Ververica Platform 
+
+If you want to test the connector with a real mail server, you can import it 
into [Ververica Platform Community 
Edition](https://www.ververica.com/getting-started). 
+
+Since our example connector in this blog post is still a bit limited, we’ll 
actually use github.com/Airblader/flink-connector-imap instead this time. We’ll 
also assume you already have Ververica Platform up and running (see the link 
above).

Review comment:
       Rename this to flink-connector-email and convert it into a link. The 
correct link has also changed now to `github.com/TNG/flink-connector-email`.

##########
File path: _posts/2021-08-16-connector-table-sql-api-part2.md
##########
@@ -0,0 +1,434 @@
+---
+layout: post
+title:  "Implementing a custom source connector for Table API and SQL - Part 
Two "
+date: 2021-08-18T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+excerpt: 
+---
+
+{% toc %}
+
+# Introduction
+
+In [part one](#) of this tutorial, you learned how to build a custom source 
connector for Flink. In part two, you will learn how to integrate the connector 
with a test email inbox through the IMAP protocol, filter out emails, and 
execute [Flink SQL on the Ververica 
Platform](https://www.ververica.com/apache-flink-sql-on-ververica-platform). 
+
+# Goals
+
+Part two of the tutorial will teach you how to: 
+
+- integrate a source connector which connects to a mailbox using the IMAP 
protocol
+- use [Jakarta Mail](https://eclipse-ee4j.github.io/mail/), a Java library 
that can send and receive email via the IMAP protocol  
+- write [Flink 
SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/overview/)
 and execute the queries in the Ververica Platform
+
+You are encouraged to follow along with the code in this 
[repository](https://github.com/Airblader/blog-imap). It provides a boilerplate project 
that also comes with a bundled 
[docker-compose](https://docs.docker.com/compose/) setup that lets you easily 
run the connector. You can then try it out with Flink’s SQL client.
+
+
+# Prerequisites
+
+This tutorial assumes that you have:
+
+- followed the steps outlined in [part one](#) of this tutorial
+- some familiarity with Java and object-oriented programming
+
+
+# Understand how to fetch emails via the IMAP protocol
+
+Now that you have a working source connector that can run on Flink, it is time 
to connect to an email server via IMAP (an Internet protocol that allows email 
clients to retrieve messages from a mail server) so that Flink can process 
emails instead of static test data.  
+
+You will use [Jakarta Mail](https://eclipse-ee4j.github.io/mail/), a Java 
library that can be used to send and receive email via IMAP. For simplicity, 
authentication will use a plain username and password.
+
+This tutorial will focus more on how to implement a connector for Flink. If 
you want to learn more about the details of how IMAP or Jakarta Mail work, you 
are encouraged to explore a more extensive implementation at this 
[repository](https://github.com/Airblader/flink-connector-email). 

Review comment:
       The correct link has changed to `github.com/TNG/flink-connector-email` 
now.

##########
File path: _posts/2021-08-16-connector-table-sql-api-part1.md
##########
@@ -0,0 +1,234 @@
+---
+layout: post
+title:  "Implementing a custom source connector for Table API and SQL - Part 
One "
+date: 2021-08-18T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+excerpt: 
+---
+
+{% toc %}
+
+# Introduction
+
+Apache Flink is a data processing engine that keeps 
[state](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/)
 locally in order to do computations but does not store data. This means that 
it does not include its own fault-tolerant storage component by default and 
relies on external systems to ingest and persist data. Connecting to external 
data input (**sources**) and external data storage (**sinks**) is achieved with 
interfaces called **connectors**.   
+
+Since connectors are such important components, Flink ships with [connectors 
for some popular 
systems](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/).
 But sometimes you may need to read in an uncommon data format and what Flink 
provides is not enough. This is why Flink also provides [APIs](#) for building 
custom connectors if you want to connect to a system that is not supported by 
an existing connector.   
+
+Once you have a source and a sink defined for Flink, you can use its 
declarative APIs (in the form of the [Table API and 
SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/overview/))
 to execute queries for data analysis without modification to the underlying 
data.  
+
+The **Table API** offers the same operations as **SQL**, but extends and improves on SQL's functionality. It is named Table API because of its relational functions on tables: how to obtain a table, how to output a table, and how to perform query operations on a table.
+
+In this two-part tutorial, you will explore some of these APIs and concepts by implementing your own custom source connector for reading in data from a mailbox. You will use Flink to process an email inbox through the IMAP protocol and sort the emails by subject into a sink. 
+
+Part one will focus on building a custom source connector and [part two](#) 
will focus on integrating it. 
+
+# Goals
+
+Part one of this tutorial will teach you how to build and run a custom source 
connector to be used with Table API and SQL, two high-level abstractions in 
Flink.
+
+You are encouraged to follow along with the code in this 
[repository](https://github.com/Airblader/blog-imap). It provides a boilerplate project 
that also comes with a bundled 
[docker-compose](https://docs.docker.com/compose/) setup that lets you easily 
run the connector. You can then try it out with Flink’s SQL client.
+
+
+# Prerequisites
+
+This tutorial assumes that you have some familiarity with Java and 
object-oriented programming. 
+
+It would also be useful to have 
[docker-compose](https://docs.docker.com/compose/install/) installed on your 
system in order to use the script included in the repository that builds and 
runs the connector. 
+
+
+# Understand the infrastructure required for a connector
+
+In order to create a connector which works with Flink, you need:
+
+1. A _factory class_ (a blueprint for creating other objects) that tells Flink 
with which identifier (in this case, “imap”) our connector can be addressed, 
which configuration options it exposes, and how the connector can be 
instantiated. Since Flink uses the Java Service Provider Interface (SPI) to 
discover factories located in different modules, you will also need to add some 
configuration details.
+
+2. The _table source_ object as a specific instance of the connector during 
the planning stage. It tells Flink some information about this instance and how 
it can create the connector runtime implementation. There are also more 
advanced features, such as 
[abilities](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/connector/source/abilities/package-summary.html),
 that can be implemented to improve connector performance.
+
+3. A _runtime implementation_ from the connector obtained during the planning 
stage. The runtime logic is implemented in Flink's core connector interfaces 
and does the actual work of producing rows of dynamic table data. The runtime 
instances are shipped to the Flink cluster. 
+
+Let us look at this sequence (factory class → table source → runtime implementation) in reverse order.
+
+# Establish the runtime implementation of the connector
+
+You first need to have a source connector which can be used in Flink's runtime 
system, defining how data goes in and how it can be executed in the cluster. 
There are a few different interfaces available for implementing the actual 
source of the data and making it discoverable in Flink.  
+
+For complex connectors, you may want to implement the [Source 
interface](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/api/connector/source/Source.html)
 which gives you a lot of control. For simpler use cases, you can use the 
[SourceFunction 
interface](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/SourceFunction.html),
 which is the base interface for all stream data sources in Flink. There are 
already a few different implementations of the SourceFunction interface for common use cases, such as the 
[FromElementsFunction](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/FromElementsFunction.html)
 class and the 
[RichSourceFunction](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/RichSourceFunction.html)
 class. You will use the latter.  
+
+`RichSourceFunction` is a base class for implementing a parallel data source 
that has access to context information and some lifecycle methods. There is a 
`run()` method inherited from the `SourceFunction` interface that you need to 
implement. It is invoked once and can be used to produce the data either once 
for a bounded result or within a loop for an unbounded stream.

Review comment:
       I'm not sure about the description of bounded / unbounded here. Bounded 
really just means that the number of produced records is known to be finite, 
while unbounded sources can produce infinitely many records. This means bounded 
sources finish, while unbounded ones do not. There are more implications of 
this, but the point is that bounded sources can certainly run "in a loop" and 
for long times as well, it's just that they will eventually finish.

##########
File path: _posts/2021-08-16-connector-table-sql-api-part2.md
##########
@@ -0,0 +1,434 @@
+---
+layout: post
+title:  "Implementing a custom source connector for Table API and SQL - Part 
Two "
+date: 2021-08-18T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+excerpt: 
+---
+
+{% toc %}
+
+# Introduction
+
+In [part one](#) of this tutorial, you learned how to build a custom source 
connector for Flink. In part two, you will learn how to integrate the connector 
with a test email inbox through the IMAP protocol, filter out emails, and 
execute [Flink SQL on the Ververica 
Platform](https://www.ververica.com/apache-flink-sql-on-ververica-platform). 
+
+# Goals
+
+Part two of the tutorial will teach you how to: 
+
+- integrate a source connector which connects to a mailbox using the IMAP 
protocol
+- use [Jakarta Mail](https://eclipse-ee4j.github.io/mail/), a Java library 
that can send and receive email via the IMAP protocol  
+- write [Flink 
SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/sql/overview/)
 and execute the queries in the Ververica Platform
+
+You are encouraged to follow along with the code in this 
[repository](https://github.com/Airblader/blog-imap). It provides a boilerplate project 
that also comes with a bundled 
[docker-compose](https://docs.docker.com/compose/) setup that lets you easily 
run the connector. You can then try it out with Flink’s SQL client.
+
+
+# Prerequisites
+
+This tutorial assumes that you have:
+
+- followed the steps outlined in [part one](#) of this tutorial
+- some familiarity with Java and object-oriented programming
+
+
+# Understand how to fetch emails via the IMAP protocol
+
+Now that you have a working source connector that can run on Flink, it is time 
to connect to an email server via IMAP (an Internet protocol that allows email 
clients to retrieve messages from a mail server) so that Flink can process 
emails instead of static test data.  
+
+You will use [Jakarta Mail](https://eclipse-ee4j.github.io/mail/), a Java 
library that can be used to send and receive email via IMAP. For simplicity, 
authentication will use a plain username and password.
+
+This tutorial will focus more on how to implement a connector for Flink. If 
you want to learn more about the details of how IMAP or Jakarta Mail work, you 
are encouraged to explore a more extensive implementation at this 
[repository](https://github.com/Airblader/flink-connector-email). 
+
+In order to fetch emails, you will need to connect to the email server, 
register a listener for new emails and collect them whenever they arrive, and 
enter a loop to keep the connector running. 
+
+
+# Add configuration options - server information and credentials
+
+In order to connect to your IMAP server, you will need at least the following:
+
+- hostname (of the mail server)
+- port number
+- username
+- password
+
+You will start by creating a class to encapsulate the configuration options. 
You will make use of [Lombok](https://projectlombok.org/setup/maven) to help 
with some boilerplate code. By adding the `@Data` and `@Builder` annotations, Lombok generates getters, `equals()`/`hashCode()`/`toString()`, and a builder for all the fields of the immutable class. 
+
+```java
+@Data
+@Builder
+public class ImapSourceOptions implements Serializable {
+    private static final long serialVersionUID = 1L;
+
+    private final String host;
+    private final Integer port;
+    private final String user;
+    private final String password;
+}
+```
+
+Now you can add an instance of this class to the `ImapSourceFunction` and `ImapTableSource` classes so it can be used there. Also pass along the column names with which the table has been created, i.e. the columns declared in the `CREATE TABLE` statement (such as `subject` and `content` later in this post); the source will need them later to produce rows that match the declared schema.
+
+```java
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+    private final ImapSourceOptions options;
+    private final List<String> columnNames;
+
+    public ImapSourceFunction(
+        ImapSourceOptions options, 
+        List<String> columnNames
+    ) {
+        this.options = options;
+        this.columnNames = columnNames.stream()
+            .map(String::toUpperCase)
+            .collect(Collectors.toList());
+    }
+
+    // ...
+}
+```
+
+```java
+public class ImapTableSource implements ScanTableSource {
+
+    private final ImapSourceOptions options;
+    private final List<String> columnNames;
+
+    public ImapTableSource(
+        ImapSourceOptions options,
+        List<String> columnNames
+    ) {
+        this.options = options;
+        this.columnNames = columnNames;
+    }
+
+    // …
+
+    @Override
+    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext ctx) {
+        final ImapSourceFunction sourceFunction = new 
ImapSourceFunction(options, columnNames);
+        return SourceFunctionProvider.of(sourceFunction, true);
+    }
+
+    @Override
+    public DynamicTableSource copy() {
+        return new ImapTableSource(options, columnNames);
+    }
+
+    // …
+}
+```
+
+Finally, in the `ImapTableSourceFactory` class, you need to create a `ConfigOption` for each of the hostname, port number, username, and password, and then report these options to Flink. Since all of the current options are mandatory, you can add them to the `requiredOptions()` method in order to do this. 
+
+```java
+public class ImapTableSourceFactory implements DynamicTableSourceFactory {
+
+    public static final ConfigOption<String> HOST = 
ConfigOptions.key("host").stringType().noDefaultValue();
+    public static final ConfigOption<Integer> PORT = 
ConfigOptions.key("port").intType().noDefaultValue();
+    public static final ConfigOption<String> USER = 
ConfigOptions.key("user").stringType().noDefaultValue();
+    public static final ConfigOption<String> PASSWORD = 
ConfigOptions.key("password").stringType().noDefaultValue();
+
+    // …
+
+    @Override
+    public Set<ConfigOption<?>> requiredOptions() {
+        final Set<ConfigOption<?>> options = new HashSet<>();
+        options.add(HOST);
+        options.add(PORT);
+        options.add(USER);
+        options.add(PASSWORD);
+        return options;
+    }
+
+    // …
+}
+```
+
+Now take a look at the `createDynamicTableSource()` function in the `ImapTableSourceFactory` class. Recall that previously (in part one) you used the [TableFactoryHelper](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/factories/FactoryUtil.TableFactoryHelper.html), a small helper utility that Flink offers which ensures that required options are set and that no unknown options are provided. You can now use it to automatically make sure that the required options of hostname, port number, username, and password are all provided when creating a table with this connector. The helper will throw a validation exception if a required option is missing. You can also use it to access the provided options (`getOptions()`), convert them into an instance of the `ImapSourceOptions` class created earlier, and pass that instance to a new `ImapTableSource`:
+
+```java
+public class ImapTableSourceFactory implements DynamicTableSourceFactory {
+
+    // ...
+
+    @Override
+    public DynamicTableSource createDynamicTableSource(Context ctx) {
+        final FactoryUtil.TableFactoryHelper factoryHelper = 
FactoryUtil.createTableFactoryHelper(this, ctx);
+        factoryHelper.validate();
+
+        final ImapSourceOptions options = ImapSourceOptions.builder()
+            .host(factoryHelper.getOptions().get(HOST))
+            .port(factoryHelper.getOptions().get(PORT))
+            .user(factoryHelper.getOptions().get(USER))
+            .password(factoryHelper.getOptions().get(PASSWORD))
+            .build();
+        final List<String> columnNames = 
ctx.getCatalogTable().getResolvedSchema().getColumnNames();

Review comment:
       I discussed this offline with @twalthr. In the interest of finding a 
balance between "correctness" and brevity, let's change this to this:
   
   ```
           final List<String> columnNames = 
ctx.getCatalogTable().getResolvedSchema().getColumns().stream()
               .filter(Column::isPhysical)
               .map(Column::getName)
               .collect(Collectors.toList());
   ```
   
   Additionally we should add an explanation that ideally we'd be making use of 
connector metadata instead (with a link), but that we'll keep it simple here 
instead and maybe refer again to github.com/TNG/flink-connector-email which 
does implement this using metadata now.

##########
File path: _posts/2021-08-16-connector-table-sql-api-part1.md
##########
@@ -0,0 +1,234 @@
+---
+layout: post
+title:  "Implementing a custom source connector for Table API and SQL - Part 
One "
+date: 2021-08-18T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+excerpt: 
+---
+
+{% toc %}
+
+# Introduction
+
+Apache Flink is a data processing engine that keeps 
[state](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/)
 locally in order to do computations but does not store data. This means that 
it does not include its own fault-tolerant storage component by default and 
relies on external systems to ingest and persist data. Connecting to external 
data input (**sources**) and external data storage (**sinks**) is achieved with 
interfaces called **connectors**.   
+
+Since connectors are such important components, Flink ships with [connectors 
for some popular 
systems](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/).
 But sometimes you may need to read in an uncommon data format and what Flink 
provides is not enough. This is why Flink also provides [APIs](#) for building 
custom connectors if you want to connect to a system that is not supported by 
an existing connector.   
+
+Once you have a source and a sink defined for Flink, you can use its 
declarative APIs (in the form of the [Table API and 
SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/overview/))
 to execute queries for data analysis without modification to the underlying 
data.  
+
+The **Table API** offers the same operations as **SQL**, but extends and improves on SQL's functionality. It is named Table API because of its relational functions on tables: how to obtain a table, how to output a table, and how to perform query operations on a table.
+
+In this two-part tutorial, you will explore some of these APIs and concepts by implementing your own custom source connector for reading in data from a mailbox. You will use Flink to process an email inbox through the IMAP protocol and sort the emails by subject into a sink. 
+
+Part one will focus on building a custom source connector and [part two](#) 
will focus on integrating it. 
+
+# Goals
+
+Part one of this tutorial will teach you how to build and run a custom source 
connector to be used with Table API and SQL, two high-level abstractions in 
Flink.
+
+You are encouraged to follow along with the code in this 
[repository](https://github.com/Airblader/blog-imap). It provides a boilerplate project 
that also comes with a bundled 
[docker-compose](https://docs.docker.com/compose/) setup that lets you easily 
run the connector. You can then try it out with Flink’s SQL client.
+
+
+# Prerequisites
+
+This tutorial assumes that you have some familiarity with Java and 
object-oriented programming. 
+
+It would also be useful to have 
[docker-compose](https://docs.docker.com/compose/install/) installed on your 
system in order to use the script included in the repository that builds and 
runs the connector. 
+
+
+# Understand the infrastructure required for a connector
+
+In order to create a connector which works with Flink, you need:
+
+1. A _factory class_ (a blueprint for creating other objects) that tells Flink with which identifier (in this case, “imap”) our connector can be addressed, which configuration options it exposes, and how the connector can be instantiated. Since Flink uses the Java Service Provider Interface (SPI) to discover factories located in different modules, you will also need to add some configuration details (see the registration file sketch after this list).
+
+2. The _table source_ object as a specific instance of the connector during 
the planning stage. It tells Flink some information about this instance and how 
it can create the connector runtime implementation. There are also more 
advanced features, such as 
[abilities](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/connector/source/abilities/package-summary.html),
 that can be implemented to improve connector performance.
+
+3. A _runtime implementation_ from the connector obtained during the planning 
stage. The runtime logic is implemented in Flink's core connector interfaces 
and does the actual work of producing rows of dynamic table data. The runtime 
instances are shipped to the Flink cluster. 
+
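+As mentioned in the first point, registering the factory for SPI discovery boils down to adding a provider configuration file to the connector's resources. A minimal sketch of what this registration looks like is shown below; the package `org.apache.flink.connector.imap` is a hypothetical placeholder, and the file name must exactly match Flink's factory interface:
+
+```
+# File: src/main/resources/META-INF/services/org.apache.flink.table.factories.Factory
+org.apache.flink.connector.imap.ImapTableSourceFactory
+```
+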
+Let us look at this sequence (factory class → table source → runtime implementation) in reverse order.
+
+# Establish the runtime implementation of the connector
+
+You first need to have a source connector which can be used in Flink's runtime 
system, defining how data goes in and how it can be executed in the cluster. 
There are a few different interfaces available for implementing the actual 
source of the data and making it discoverable in Flink.  
+
+For complex connectors, you may want to implement the [Source 
interface](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/api/connector/source/Source.html)
 which gives you a lot of control. For simpler use cases, you can use the 
[SourceFunction 
interface](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/SourceFunction.html),
 which is the base interface for all stream data sources in Flink. There are 
already a few different implementations of the SourceFunction interface for common use cases, such as the 
[FromElementsFunction](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/FromElementsFunction.html)
 class and the 
[RichSourceFunction](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/RichSourceFunction.html)
 class. You will use the latter.  
+
+`RichSourceFunction` is a base class for implementing a parallel data source 
that has access to context information and some lifecycle methods. There is a 
`run()` method inherited from the `SourceFunction` interface that you need to 
implement. It is invoked once and can be used to produce the data either once 
for a bounded result or within a loop for an unbounded stream.
+
+For example, to create a bounded data source, you could implement this method 
so that it reads all existing emails and then closes. To create an unbounded 
source, you could only look at new emails coming in while the source is active. 
You can also combine these behaviors and expose them through configuration 
options.
+
+When you first create the class and implement the interface, it should look 
something like this:
+
+```java
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.table.data.RowData;
+
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+  @Override
+  public void run(SourceContext<RowData> ctx) throws Exception {}
+
+  @Override
+  public void cancel() {}
+}
+```
+
+In the `run()` method, you get access to a 
[context](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/SourceFunction.SourceContext.html)
 object inherited from the SourceFunction interface, which is a bridge to Flink 
and allows you to output data. Since the source does not produce any data yet, 
the next step is to make it produce some static data in order to test that the 
data flows correctly: 
+
+```java
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+  @Override
+  public void run(SourceContext<RowData> ctx) throws Exception {
+      ctx.collect(GenericRowData.of(
+          StringData.fromString("Subject 1"), 
+          StringData.fromString("Hello, World!")
+      ));
+  }
+
+  @Override
+  public void cancel() {}
+}
+```
+
+The `collect()` method of the source context is how the source emits data: each call hands exactly one record to Flink, which forwards it to the downstream operators. Here, a single `GenericRowData` row with two string fields is emitted.
+
+You do not need to implement the `cancel()` method yet because the source 
finishes instantly. 
+
+
+# Create and configure a dynamic table source for the data stream
+
+[Dynamic 
tables](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/dynamic_tables/)
 are the core concept of Flink’s Table API and SQL support for streaming data 
and, as their name suggests, change over time. You can imagine a data stream 
being logically converted into a table that is constantly changing. For this 
tutorial, the emails that will be read in will be interpreted as a (source) 
table that is queryable. It can be viewed as a specific instance of a connector 
class. 
+
+You will now implement a 
[DynamicTableSource](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/connector/source/DynamicTableSource.html)
 interface. There are two types of dynamic table sources: 
[ScanTableSource](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/connector/source/ScanTableSource.html)
 and 
[LookupTableSource](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/connector/source/LookupTableSource.html).
 Scan sources read the entire table on the external system while lookup sources 
look for specific rows based on keys. The former will fit the use case of this 
tutorial. 
+
+This is what a scan table source implementation would look like:
+
+```java
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.connector.source.SourceFunctionProvider;
+
+public class ImapTableSource implements ScanTableSource {
+  @Override
+  public ChangelogMode getChangelogMode() {
+    return ChangelogMode.insertOnly();
+  }
+
+  @Override
+  public ScanRuntimeProvider getScanRuntimeProvider(ScanContext ctx) {
+    boolean bounded = true;

Review comment:
       Hm, I guess you extracted this `true` to give it a name, but as a 
developer, this local variable looks useless and a code review would probably 
point that out. IDEs will typically name the argument for you anyway, i.e. 
`of(…, true)` will be shown as `of(…, bounded: true)`.

##########
File path: _posts/2021-08-16-connector-table-sql-api-part1.md
##########
@@ -0,0 +1,234 @@
+---
+layout: post
+title:  "Implementing a custom source connector for Table API and SQL - Part 
One "
+date: 2021-08-18T00:00:00.000Z
+authors:
+- Ingo Buerk:
+  name: "Ingo Buerk"
+excerpt: 
+---
+
+{% toc %}
+
+# Introduction
+
+Apache Flink is a data processing engine that keeps 
[state](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/)
 locally in order to do computations but does not store data. This means that 
it does not include its own fault-tolerant storage component by default and 
relies on external systems to ingest and persist data. Connecting to external 
data input (**sources**) and external data storage (**sinks**) is achieved with 
interfaces called **connectors**.   
+
+Since connectors are such important components, Flink ships with [connectors 
for some popular 
systems](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/overview/).
 But sometimes you may need to read in an uncommon data format and what Flink 
provides is not enough. This is why Flink also provides [APIs](#) for building 
custom connectors if you want to connect to a system that is not supported by 
an existing connector.   
+
+Once you have a source and a sink defined for Flink, you can use its 
declarative APIs (in the form of the [Table API and 
SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/overview/))
 to execute queries for data analysis without modification to the underlying 
data.  
+
+The **Table API** offers the same operations as **SQL**, but extends and improves on SQL's functionality. It is named Table API because of its relational functions on tables: how to obtain a table, how to output a table, and how to perform query operations on a table.
+
+In this two-part tutorial, you will explore some of these APIs and concepts by implementing your own custom source connector for reading in data from a mailbox. You will use Flink to process an email inbox through the IMAP protocol and sort the emails by subject into a sink. 
+
+Part one will focus on building a custom source connector and [part two](#) 
will focus on integrating it. 
+
+# Goals
+
+Part one of this tutorial will teach you how to build and run a custom source 
connector to be used with Table API and SQL, two high-level abstractions in 
Flink.
+
+You are encouraged to follow along with the code in this 
[repository](https://github.com/Airblader/blog-imap). It provides a boilerplate project 
that also comes with a bundled 
[docker-compose](https://docs.docker.com/compose/) setup that lets you easily 
run the connector. You can then try it out with Flink’s SQL client.
+
+
+# Prerequisites
+
+This tutorial assumes that you have some familiarity with Java and 
object-oriented programming. 
+
+It would also be useful to have 
[docker-compose](https://docs.docker.com/compose/install/) installed on your 
system in order to use the script included in the repository that builds and 
runs the connector. 
+
+
+# Understand the infrastructure required for a connector
+
+In order to create a connector which works with Flink, you need:
+
+1. A _factory class_ (a blueprint for creating other objects) that tells Flink 
with which identifier (in this case, “imap”) our connector can be addressed, 
which configuration options it exposes, and how the connector can be 
instantiated. Since Flink uses the Java Service Provider Interface (SPI) to 
discover factories located in different modules, you will also need to add some 
configuration details.
+
+2. The _table source_ object as a specific instance of the connector during 
the planning stage. It tells Flink some information about this instance and how 
it can create the connector runtime implementation. There are also more 
advanced features, such as 
[abilities](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/connector/source/abilities/package-summary.html),
 that can be implemented to improve connector performance.
+
+3. A _runtime implementation_ from the connector obtained during the planning 
stage. The runtime logic is implemented in Flink's core connector interfaces 
and does the actual work of producing rows of dynamic table data. The runtime 
instances are shipped to the Flink cluster. 
+
+Let us look at this sequence (factory class → table source → runtime implementation) in reverse order.
+
+# Establish the runtime implementation of the connector
+
+You first need to have a source connector which can be used in Flink's runtime 
system, defining how data goes in and how it can be executed in the cluster. 
There are a few different interfaces available for implementing the actual 
source of the data and making it discoverable in Flink.  
+
+For complex connectors, you may want to implement the [Source 
interface](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/api/connector/source/Source.html)
 which gives you a lot of control. For simpler use cases, you can use the 
[SourceFunction 
interface](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/SourceFunction.html),
 which is the base interface for all stream data sources in Flink. There are 
already a few different implementations of the SourceFunction interface for common use cases, such as the 
[FromElementsFunction](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/FromElementsFunction.html)
 class and the 
[RichSourceFunction](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/RichSourceFunction.html)
 class. You will use the latter.  
+
+`RichSourceFunction` is a base class for implementing a parallel data source 
that has access to context information and some lifecycle methods. There is a 
`run()` method inherited from the `SourceFunction` interface that you need to 
implement. It is invoked once and can be used to produce the data either once 
for a bounded result or within a loop for an unbounded stream.
+
+For example, to create a bounded data source, you could implement this method 
so that it reads all existing emails and then closes. To create an unbounded 
source, you could only look at new emails coming in while the source is active. 
You can also combine these behaviors and expose them through configuration 
options.
+
+When you first create the class and implement the interface, it should look 
something like this:
+
+```java
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.table.data.RowData;
+
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+  @Override
+  public void run(SourceContext<RowData> ctx) throws Exception {}
+
+  @Override
+  public void cancel() {}
+}
+```
+
+In the `run()` method, you get access to a 
[context](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streaming/api/functions/source/SourceFunction.SourceContext.html)
 object inherited from the SourceFunction interface, which is a bridge to Flink 
and allows you to output data. Since the source does not produce any data yet, 
the next step is to make it produce some static data in order to test that the 
data flows correctly: 
+
+```java
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.table.data.GenericRowData;
+import org.apache.flink.table.data.RowData;
+import org.apache.flink.table.data.StringData;
+
+public class ImapSourceFunction extends RichSourceFunction<RowData> {
+  @Override
+  public void run(SourceContext<RowData> ctx) throws Exception {
+      ctx.collect(GenericRowData.of(
+          StringData.fromString("Subject 1"), 
+          StringData.fromString("Hello, World!")
+      ));
+  }
+
+  @Override
+  public void cancel() {}
+}
+```
+
+The `collect()` method of the source context is how the source emits data: each call hands exactly one record to Flink, which forwards it to the downstream operators. Here, a single `GenericRowData` row with two string fields is emitted.
+
+You do not need to implement the `cancel()` method yet because the source 
finishes instantly. 
+
+
+# Create and configure a dynamic table source for the data stream
+
+[Dynamic 
tables](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/dynamic_tables/)
 are the core concept of Flink’s Table API and SQL support for streaming data 
and, as their name suggests, change over time. You can imagine a data stream 
being logically converted into a table that is constantly changing. For this 
tutorial, the emails that will be read in will be interpreted as a (source) 
table that is queryable. It can be viewed as a specific instance of a connector 
class. 
+
+You will now implement a 
[DynamicTableSource](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/connector/source/DynamicTableSource.html)
 interface. There are two types of dynamic table sources: 
[ScanTableSource](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/connector/source/ScanTableSource.html)
 and 
[LookupTableSource](https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/table/connector/source/LookupTableSource.html).
 Scan sources read the entire table on the external system while lookup sources 
look for specific rows based on keys. The former will fit the use case of this 
tutorial. 
+
+This is what a scan table source implementation would look like:
+
+```java
+import org.apache.flink.table.connector.ChangelogMode;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.ScanTableSource;
+import org.apache.flink.table.connector.source.SourceFunctionProvider;
+
+public class ImapTableSource implements ScanTableSource {
+  @Override
+  public ChangelogMode getChangelogMode() {
+    return ChangelogMode.insertOnly();
+  }
+
+  @Override
+  public ScanRuntimeProvider getScanRuntimeProvider(ScanContext ctx) {
+    boolean bounded = true;

Review comment:
       Fair enough. I meant to delete this thread but apparently that doesn't 
work, so my original comment is gone now. We can just ignore this thread. :-)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

