[jira] [Closed] (FLINK-20446) NoMatchingTableFactoryException

2020-12-01 Thread Ke Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Li closed FLINK-20446.
-
Resolution: Duplicate

> NoMatchingTableFactoryException
> ---
>
> Key: FLINK-20446
> URL: https://issues.apache.org/jira/browse/FLINK-20446
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.11.2
> Environment: * Version:1.11.2
>Reporter: Ke Li
>Priority: Major
>
> When I start the SQL Client with the configuration below, an error is reported. The command is:
> {code:bash}
> ./sql-client.sh embedded -e /root/flink-sql-client/sql-client-demo.yml
> {code}
> sql-client-demo.yml:
> {code:yaml}
> tables:
>   - name: SourceTable
>     type: source-table
>     update-mode: append
>     connector:
>       type: datagen
>       rows-per-second: 5
>       fields:
>         f_sequence:
>           kind: sequence
>           start: 1
>           end: 1000
>         f_random:
>           min: 1
>           max: 1000
>         f_random_str:
>           length: 10
>     schema:
>       - name: f_sequence
>         data-type: INT
>       - name: f_random
>         data-type: INT
>       - name: f_random_str
>         data-type: STRING
> {code}
> The error is as follows:
> {code}
> No default environment specified.
> No default environment specified.
> Searching for '/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml'...found.
> Reading default environment from: file:/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml
> Reading session environment from: file:/root/flink-sql-client/sql-client-demo.yml
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
> Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: Could not create execution context.
>     at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:870)
>     at org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:227)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:108)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
> Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.
> Reason: Required context properties mismatch.
> The matching candidates:
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> Mismatched properties:
> 'connector.type' expects 'filesystem', but is 'datagen'
> 'format.type' expects 'csv', but is 'json'
> The following properties are requested:
> connector.fields.f_random.max=1000
> connector.fields.f_random.min=1
> connector.fields.f_random_str.length=10
> connector.fields.f_sequence.end=1000
> connector.fields.f_sequence.kind=sequence
> connector.fields.f_sequence.start=1
> connector.rows-per-second=5
> connector.type=datagen
> format.type=json
> schema.0.data-type=INT
> schema.0.name=f_sequence
> schema.1.data-type=INT
> schema.1.name=f_random
> schema.2.data-type=STRING
> schema.2.name=f_random_str
> update-mode=append
> The following factories have been considered:
> org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
> org.apache.flink.connector.jdbc.table.JdbcTableSourceSinkFactory
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.filesystem.FileSystemTableFactory
>     at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
>     at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
>     at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
>     at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:113)
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:384)
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$7(ExecutionContext.java:638)
>     at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:636)
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:523)
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:183)
>     at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:136)
>     at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:859)
>     ... 3 more
> {code}

[jira] [Created] (FLINK-20446) NoMatchingTableFactoryException

2020-12-01 Thread Ke Li (Jira)
Ke Li created FLINK-20446:
-

 Summary: NoMatchingTableFactoryException
 Key: FLINK-20446
 URL: https://issues.apache.org/jira/browse/FLINK-20446
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.11.2
 Environment: * Version:1.11.2
Reporter: Ke Li


When I start the SQL Client with the configuration below, an error is reported. The command is:
{code:bash}
./sql-client.sh embedded -e /root/flink-sql-client/sql-client-demo.yml
{code}
sql-client-demo.yml:
{code:yaml}
tables:
  - name: SourceTable
    type: source-table
    update-mode: append
    connector:
      type: datagen
      rows-per-second: 5
      fields:
        f_sequence:
          kind: sequence
          start: 1
          end: 1000
        f_random:
          min: 1
          max: 1000
        f_random_str:
          length: 10
    schema:
      - name: f_sequence
        data-type: INT
      - name: f_random
        data-type: INT
      - name: f_random_str
        data-type: STRING
{code}
The error is as follows:
{code}
No default environment specified.
No default environment specified.
Searching for '/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml'...found.
Reading default environment from: file:/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml
Reading session environment from: file:/root/flink-sql-client/sql-client-demo.yml
Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
    at org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: Could not create execution context.
    at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:870)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:227)
    at org.apache.flink.table.client.SqlClient.start(SqlClient.java:108)
    at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.
Reason: Required context properties mismatch.
The matching candidates:
org.apache.flink.table.sources.CsvAppendTableSourceFactory
Mismatched properties:
'connector.type' expects 'filesystem', but is 'datagen'
'format.type' expects 'csv', but is 'json'
The following properties are requested:
connector.fields.f_random.max=1000
connector.fields.f_random.min=1
connector.fields.f_random_str.length=10
connector.fields.f_sequence.end=1000
connector.fields.f_sequence.kind=sequence
connector.fields.f_sequence.start=1
connector.rows-per-second=5
connector.type=datagen
format.type=json
schema.0.data-type=INT
schema.0.name=f_sequence
schema.1.data-type=INT
schema.1.name=f_random
schema.2.data-type=STRING
schema.2.name=f_random_str
update-mode=append
The following factories have been considered:
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
org.apache.flink.connector.jdbc.table.JdbcTableSourceSinkFactory
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.filesystem.FileSystemTableFactory
    at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
    at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
    at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
    at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:113)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:384)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$7(ExecutionContext.java:638)
    at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:636)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:523)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:183)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:136)
    at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:859)
    ... 3 more
{code}
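For context on the failure above: in Flink 1.11 the {{datagen}} connector is only registered with the new factory stack (DDL / `DynamicTableSourceFactory`), not with the legacy `TableSourceFactory` that the SQL Client's YAML `tables:` section resolves against, so no matching factory can be found. A sketch of the same source declared via DDL inside the SQL Client instead (the field bounds mirror the YAML above; this is an illustrative rewrite, not the reporter's configuration):

{code:sql}
-- Equivalent datagen source via DDL (Flink 1.11 new connector stack)
CREATE TABLE SourceTable (
  f_sequence INT,
  f_random INT,
  f_random_str STRING
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '5',
  'fields.f_sequence.kind' = 'sequence',
  'fields.f_sequence.start' = '1',
  'fields.f_sequence.end' = '1000',
  'fields.f_random.min' = '1',
  'fields.f_random.max' = '1000',
  'fields.f_random_str.length' = '10'
);
{code}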



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20445) NoMatchingTableFactoryException

2020-12-01 Thread Ke Li (Jira)
Ke Li created FLINK-20445:
-

 Summary: NoMatchingTableFactoryException
 Key: FLINK-20445
 URL: https://issues.apache.org/jira/browse/FLINK-20445
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.11.2
 Environment: * Version:1.11.2
Reporter: Ke Li


When I start the SQL Client with the configuration below, an error is reported. The command is:
{code:bash}
./sql-client.sh embedded -e /root/flink-sql-client/sql-client-demo.yml
{code}
sql-client-demo.yml:
{code:yaml}
tables:
  - name: SourceTable
    type: source-table
    update-mode: append
    connector:
      type: datagen
      rows-per-second: 5
      fields:
        f_sequence:
          kind: sequence
          start: 1
          end: 1000
        f_random:
          min: 1
          max: 1000
        f_random_str:
          length: 10
    schema:
      - name: f_sequence
        data-type: INT
      - name: f_random
        data-type: INT
      - name: f_random_str
        data-type: STRING
{code}
The error is as follows:
{code}
No default environment specified.
No default environment specified.
Searching for '/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml'...found.
Reading default environment from: file:/data/data_gas/flink/flink-1.11.2/conf/sql-client-defaults.yaml
Reading session environment from: file:/root/flink-sql-client/sql-client-demo.yml
Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
    at org.apache.flink.table.client.SqlClient.main(SqlClient.java:213)
Caused by: org.apache.flink.table.client.gateway.SqlExecutionException: Could not create execution context.
    at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:870)
    at org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:227)
    at org.apache.flink.table.client.SqlClient.start(SqlClient.java:108)
    at org.apache.flink.table.client.SqlClient.main(SqlClient.java:201)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.TableSourceFactory' in the classpath.
Reason: Required context properties mismatch.
The matching candidates:
org.apache.flink.table.sources.CsvAppendTableSourceFactory
Mismatched properties:
'connector.type' expects 'filesystem', but is 'datagen'
'format.type' expects 'csv', but is 'json'
The following properties are requested:
connector.fields.f_random.max=1000
connector.fields.f_random.min=1
connector.fields.f_random_str.length=10
connector.fields.f_sequence.end=1000
connector.fields.f_sequence.kind=sequence
connector.fields.f_sequence.start=1
connector.rows-per-second=5
connector.type=datagen
format.type=json
schema.0.data-type=INT
schema.0.name=f_sequence
schema.1.data-type=INT
schema.1.name=f_random
schema.2.data-type=STRING
schema.2.name=f_random_str
update-mode=append
The following factories have been considered:
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceSinkFactory
org.apache.flink.connector.jdbc.table.JdbcTableSourceSinkFactory
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.filesystem.FileSystemTableFactory
    at org.apache.flink.table.factories.TableFactoryService.filterByContext(TableFactoryService.java:322)
    at org.apache.flink.table.factories.TableFactoryService.filter(TableFactoryService.java:190)
    at org.apache.flink.table.factories.TableFactoryService.findSingleInternal(TableFactoryService.java:143)
    at org.apache.flink.table.factories.TableFactoryService.find(TableFactoryService.java:113)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.createTableSource(ExecutionContext.java:384)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.lambda$initializeCatalogs$7(ExecutionContext.java:638)
    at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeCatalogs(ExecutionContext.java:636)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.initializeTableEnvironment(ExecutionContext.java:523)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:183)
    at org.apache.flink.table.client.gateway.local.ExecutionContext.<init>(ExecutionContext.java:136)
    at org.apache.flink.table.client.gateway.local.ExecutionContext$Builder.build(ExecutionContext.java:859)
    ... 3 more
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-13395) Add source and sink connector for Alibaba Log Service

2019-09-15 Thread Ke Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Li updated FLINK-13395:
--
Description: 
Alibaba Log Service is a big data service that is widely used within Alibaba Group and by thousands of Alibaba Cloud customers. The core storage engine of Log Service, named Loghub, is a large-scale distributed storage system that provides producer and consumer APIs for pushing and pulling data, as Kafka, AWS Kinesis, and Azure Event Hubs do.

Log Service provides a complete solution that helps users collect data from both on-premises and cloud data sources. More than 10 PB of data is sent to and consumed from Loghub every day, and hundreds of thousands of users have built their DevOps and big data systems on Log Service.

Log Service together with Flink/Blink has become the de facto standard big data architecture for unified data processing in Alibaba Group and for a growing number of Alibaba Cloud users.

 

  was:
Alibaba Log Service is a big data service which has been widely used in Alibaba 
Group and thousands of customers of Alibaba Cloud. The core storage engine of 
Log Service is named Loghub which is a large scale distributed storage system 
which provides producer and consumer to push and pull data like Kafka, AWS 
Kinesis and Azure Eventhub does. 

There are a lot of users are using Log Service to collect and analysis data 
from both on premise and cloud data sources, and consuming data stored in Log 
Service from Flink or Blink for streaming computing. 


> Add source and sink connector for Alibaba Log Service
> -
>
> Key: FLINK-13395
> URL: https://issues.apache.org/jira/browse/FLINK-13395
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Ke Li
>Priority: Major
>
> Alibaba Log Service is a big data service that is widely used within Alibaba Group and by thousands of Alibaba Cloud customers. The core storage engine of Log Service, named Loghub, is a large-scale distributed storage system that provides producer and consumer APIs for pushing and pulling data, as Kafka, AWS Kinesis, and Azure Event Hubs do.
> Log Service provides a complete solution that helps users collect data from both on-premises and cloud data sources. More than 10 PB of data is sent to and consumed from Loghub every day, and hundreds of thousands of users have built their DevOps and big data systems on Log Service.
> Log Service together with Flink/Blink has become the de facto standard big data architecture for unified data processing in Alibaba Group and for a growing number of Alibaba Cloud users.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (FLINK-13395) Add source and sink connector for Alibaba Log Service

2019-09-02 Thread Ke Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Li updated FLINK-13395:
--
Summary: Add source and sink connector for Alibaba Log Service  (was: Add 
source and sink connector for Aliyun Log Service)

> Add source and sink connector for Alibaba Log Service
> -
>
> Key: FLINK-13395
> URL: https://issues.apache.org/jira/browse/FLINK-13395
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Ke Li
>Priority: Major
>
> Alibaba Log Service is a big data service that is widely used within Alibaba Group and by thousands of Alibaba Cloud customers. The core storage engine of Log Service, named Loghub, is a large-scale distributed storage system that provides producer and consumer APIs for pushing and pulling data, as Kafka, AWS Kinesis, and Azure Event Hubs do.
> Many users use Log Service to collect and analyze data from both on-premises and cloud data sources, and consume data stored in Log Service from Flink or Blink for stream processing.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (FLINK-13395) Add source and sink connector for Aliyun Log Service

2019-08-31 Thread Ke Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Li updated FLINK-13395:
--
Description: 
Alibaba Log Service is a big data service that is widely used within Alibaba Group and by thousands of Alibaba Cloud customers. The core storage engine of Log Service, named Loghub, is a large-scale distributed storage system that provides producer and consumer APIs for pushing and pulling data, as Kafka, AWS Kinesis, and Azure Event Hubs do.

Many users use Log Service to collect and analyze data from both on-premises and cloud data sources, and consume data stored in Log Service from Flink or Blink for stream processing.

  was:
Aliyun Log Service is a big data service which has been widely used in Alibaba 
Group and thousands of customers of Alibaba Cloud. The core storage engine of 
Log Service is named Loghub which is a large scale distributed storage system 
which provides producer and consumer to push and pull data like Kafka, AWS 
Kinesis and Azure Eventhub does. 

There are a lot of users are using Log Service to collect and analysis data 
from both on premise and cloud data sources, and consuming data stored in Log 
Service from Flink or Blink for streaming computing. 


> Add source and sink connector for Aliyun Log Service
> 
>
> Key: FLINK-13395
> URL: https://issues.apache.org/jira/browse/FLINK-13395
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Ke Li
>Priority: Major
>
> Alibaba Log Service is a big data service that is widely used within Alibaba Group and by thousands of Alibaba Cloud customers. The core storage engine of Log Service, named Loghub, is a large-scale distributed storage system that provides producer and consumer APIs for pushing and pulling data, as Kafka, AWS Kinesis, and Azure Event Hubs do.
> Many users use Log Service to collect and analyze data from both on-premises and cloud data sources, and consume data stored in Log Service from Flink or Blink for stream processing.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (FLINK-13395) Add source and sink connector for Aliyun Log Service

2019-08-24 Thread Ke Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Li updated FLINK-13395:
--
Description: 
Aliyun Log Service is a big data service that is widely used within Alibaba Group and by thousands of Alibaba Cloud customers. The core storage engine of Log Service, named Loghub, is a large-scale distributed storage system that provides producer and consumer APIs for pushing and pulling data, as Kafka, AWS Kinesis, and Azure Event Hubs do.

Many users use Log Service to collect and analyze data from both on-premises and cloud data sources, and consume data stored in Log Service from Flink or Blink for stream processing.

  was:
 Aliyun Log Service is a big data service which has been widely used in Alibaba 
Group and thousands of companies on Alibaba Cloud. The core storage engine of 
Log Service is called Loghub which is a large scale distributed storage system 
and provides producer/consumer API like Kafka or AWS Kinesis. 

There are a lot of users of Flink are using Log Service to collect and analysis 
data from both on premise and cloud data sources, and consuming data stored in 
Log Service from Flink or Blink for streaming compute. 


> Add source and sink connector for Aliyun Log Service
> 
>
> Key: FLINK-13395
> URL: https://issues.apache.org/jira/browse/FLINK-13395
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Ke Li
>Priority: Major
>
> Aliyun Log Service is a big data service that is widely used within Alibaba Group and by thousands of Alibaba Cloud customers. The core storage engine of Log Service, named Loghub, is a large-scale distributed storage system that provides producer and consumer APIs for pushing and pulling data, as Kafka, AWS Kinesis, and Azure Event Hubs do.
> Many users use Log Service to collect and analyze data from both on-premises and cloud data sources, and consume data stored in Log Service from Flink or Blink for stream processing.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (FLINK-13444) Translate English content of FLINK-13396 into Chinese

2019-07-27 Thread Ke Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16894347#comment-16894347
 ] 

Ke Li commented on FLINK-13444:
---

Hi [~jark], can I work on this issue?

> Translate English content of FLINK-13396 into Chinese
> -
>
> Key: FLINK-13444
> URL: https://issues.apache.org/jira/browse/FLINK-13444
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> Translate the newly added English content of FLINK-13396 into Chinese.
> The markdown file is located at {{docs/dev/connectors/filesystem_sink.zh.md}}.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13395) Add source and sink connector for Aliyun Log Service

2019-07-23 Thread Ke Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Li updated FLINK-13395:
--
Description: 
Aliyun Log Service is a big data service that is widely used within Alibaba Group and by thousands of companies on Alibaba Cloud. The core storage engine of Log Service, called Loghub, is a large-scale distributed storage system that provides a producer/consumer API similar to Kafka or AWS Kinesis.

Many Flink users use Log Service to collect and analyze data from both on-premises and cloud data sources, and consume data stored in Log Service from Flink or Blink for stream processing.

  was:
 Aliyun Log Service is a big data service which has been widely used in Alibaba 
Group and thousand of companies on Alibaba Cloud. The core storage engine of 
Log Service is called Loghub which is a large scale distributed storage system 
and provides producer/consumer API like Kafka or AWS Kinesis. 

There are a lot of users of Flink are using Log Service to collect and analysis 
data from both on premise and cloud data sources, and consuming data stored in 
Log Service from Flink or Blink for streaming compute. 


> Add source and sink connector for Aliyun Log Service
> 
>
> Key: FLINK-13395
> URL: https://issues.apache.org/jira/browse/FLINK-13395
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Ke Li
>Priority: Major
>
> Aliyun Log Service is a big data service that is widely used within Alibaba Group and by thousands of companies on Alibaba Cloud. The core storage engine of Log Service, called Loghub, is a large-scale distributed storage system that provides a producer/consumer API similar to Kafka or AWS Kinesis.
> Many Flink users use Log Service to collect and analyze data from both on-premises and cloud data sources, and consume data stored in Log Service from Flink or Blink for stream processing.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13395) Add source and sink connector for Aliyun Log Service

2019-07-23 Thread Ke Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ke Li updated FLINK-13395:
--
Description: 
Aliyun Log Service is a big data service that is widely used within Alibaba Group and by thousands of companies on Alibaba Cloud. The core storage engine of Log Service, called Loghub, is a large-scale distributed storage system that provides a producer/consumer API similar to Kafka or AWS Kinesis.

Many Flink users use Log Service to collect and analyze data from both on-premises and cloud data sources, and consume data stored in Log Service from Flink or Blink for stream processing.

  was:
 Aliyun Log Service is a storage service which has been widely used in Alibaba 
Group and a lot of customers on Alibaba Cloud. The core storage engine is call 
Loghub which is a large scale distributed storage system and provides 
producer/consumer API as Kafka/Kinesis does. 

There are a lot of users are using Log Service to collect data from on premise 
and cloud and consuming from Flink or Blink for streaming compute. 


> Add source and sink connector for Aliyun Log Service
> 
>
> Key: FLINK-13395
> URL: https://issues.apache.org/jira/browse/FLINK-13395
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Common
>Reporter: Ke Li
>Priority: Major
>
> Aliyun Log Service is a big data service that is widely used within Alibaba Group and by thousands of companies on Alibaba Cloud. The core storage engine of Log Service, called Loghub, is a large-scale distributed storage system that provides a producer/consumer API similar to Kafka or AWS Kinesis.
> Many Flink users use Log Service to collect and analyze data from both on-premises and cloud data sources, and consume data stored in Log Service from Flink or Blink for stream processing.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13395) Add source and sink connector for Aliyun Log Service

2019-07-23 Thread Ke Li (JIRA)
Ke Li created FLINK-13395:
-

 Summary: Add source and sink connector for Aliyun Log Service
 Key: FLINK-13395
 URL: https://issues.apache.org/jira/browse/FLINK-13395
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Common
Reporter: Ke Li


Aliyun Log Service is a storage service that is widely used within Alibaba Group and by many customers on Alibaba Cloud. The core storage engine, called Loghub, is a large-scale distributed storage system that provides a producer/consumer API as Kafka and Kinesis do.

Many users use Log Service to collect data from on-premises and cloud sources, and consume it from Flink or Blink for stream processing.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)