[jira] [Updated] (FLINK-11959) Introduce window operator for blink streaming runtime

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11959:
---
Labels: pull-request-available  (was: )

> Introduce window operator for blink streaming runtime
> -
>
> Key: FLINK-11959
> URL: https://issues.apache.org/jira/browse/FLINK-11959
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Runtime
>Reporter: Kurt Young
>Assignee: Kurt Young
>Priority: Major
>  Labels: pull-request-available
>
> We introduced a new window operator in the blink streaming runtime. The 
> differences between blink's window operator and the one used in the 
> DataStream API are:
>  # Blink's window operator is mainly used for window aggregates. It works 
> closely with SQL's aggregate functions, so we do not provide the flexibility 
> to apply an arbitrary `WindowFunction` as the DataStream API does. Instead, 
> we only need to save the intermediate accumulator state for the aggregate 
> functions; there is no need to save the original input rows into state, 
> which is much more efficient.
>  # The new window operator can deal with retract messages.
>  # We did some pane-based optimization within the sliding window operator, 
> similar to [FLINK-7001|https://issues.apache.org/jira/browse/FLINK-7001]. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot commented on issue #8058: [FLINK-11959][table-runtime-blink] Introduce window operator for blink streaming runtime

2019-03-26 Thread GitBox
flinkbot commented on issue #8058: [FLINK-11959][table-runtime-blink] Introduce 
window operator for blink streaming runtime
URL: https://github.com/apache/flink/pull/8058#issuecomment-476999774
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung opened a new pull request #8058: [FLINK-11959][table-runtime-blink] Introduce window operator for blink streaming runtime

2019-03-26 Thread GitBox
KurtYoung opened a new pull request #8058: [FLINK-11959][table-runtime-blink] 
Introduce window operator for blink streaming runtime
URL: https://github.com/apache/flink/pull/8058
 
 
   
   
   ## What is the purpose of the change
   
   We introduced a new window operator in the blink streaming runtime. The 
differences between blink's window operator and the one used in the DataStream 
API are:
   
 - Blink's window operator is mainly used for window aggregates. It works 
closely with SQL's aggregate functions, so we do not provide the flexibility 
to apply an arbitrary `WindowFunction` as the DataStream API does. Instead, we 
only need to save the intermediate accumulator state for the aggregate 
functions; there is no need to save the original input rows into state, which 
is much more efficient.
 - The new window operator can deal with retract messages.
 - We did some pane-based optimization within the sliding window operator, 
similar to FLINK-7001. 
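   The accumulator-only design described above can be sketched as follows. 
This is a minimal illustration with made-up class and method names, not the 
actual Flink implementation: it keeps one running accumulator per window 
instead of buffering input rows, and applies a retract message by un-applying 
the value.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a per-window SUM aggregate that stores only an
// accumulator (the running sum) per window instead of buffering the input
// rows, and that supports retract messages. All names are illustrative.
class WindowSumSketch {
    // windowStart -> running sum (the only state kept per window)
    private final Map<Long, Long> accumulators = new HashMap<>();

    // accumulate one value into the window's accumulator
    void accumulate(long windowStart, long value) {
        accumulators.merge(windowStart, value, Long::sum);
    }

    // retract a previously accumulated value (e.g. an upstream update)
    void retract(long windowStart, long value) {
        accumulators.merge(windowStart, -value, Long::sum);
    }

    // emit the aggregate result when the window fires
    long result(long windowStart) {
        return accumulators.getOrDefault(windowStart, 0L);
    }
}
```

   A real aggregate function would use a generic accumulator type rather than 
a plain sum, but the state footprint argument is the same: one accumulator per 
window instead of all input rows.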
   
   ## Brief change log
   
 - Introduced `WindowOperator` in table-runtime-blink
   
   ## Verifying this change
   
   This change added some tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   




[GitHub] [flink] zhijiangW commented on issue #7713: [FLINK-10995][network] Copy intermediate serialization results only once for broadcast mode

2019-03-26 Thread GitBox
zhijiangW commented on issue #7713: [FLINK-10995][network] Copy intermediate 
serialization results only once for broadcast mode
URL: https://github.com/apache/flink/pull/7713#issuecomment-476993783
 
 
   @pnowojski , thanks for the review again! 
   I rebased onto the latest code and squashed the commits to address the 
above comments.




[GitHub] [flink] maqingxiang commented on a change in pull request #8008: [FLINK-11963][History Server]Add time-based cleanup mechanism in history server

2019-03-26 Thread GitBox
maqingxiang commented on a change in pull request #8008: [FLINK-11963][History 
Server]Add time-based cleanup mechanism in history server
URL: https://github.com/apache/flink/pull/8008#discussion_r268164156
 
 

 ##
 File path: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java
 ##
 @@ -123,7 +125,9 @@ void stop() {
 
private static final String JSON_FILE_ENDING = ".json";
 
-   JobArchiveFetcherTask(List 
refreshDirs, File webDir, CountDownLatch numFinishedPolls) {
+   JobArchiveFetcherTask(long retainedApplicationsMillis, 
List refreshDirs, File webDir,
+   CountDownLatch 
numFinishedPolls) {
+   this.retainedApplicationsMillis = 
retainedApplicationsMillis;
 
 Review comment:
   The test logic has been added by introducing the `isCleanupEnabled` variable. 
Thanks a lot for reviewing it again, @klion26!
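   As a rough illustration of the time-based retention check under discussion 
(the threshold name follows the `retainedApplicationsMillis` field in the diff 
above; the helper itself is hypothetical, not the actual HistoryServer code):

```java
// Hypothetical helper showing the shape of a time-based cleanup decision:
// an archived job expires once its archive is older than the configured
// retention period (retainedApplicationsMillis, as in the diff above).
class RetentionSketch {
    static boolean shouldExpire(long archiveModifiedMillis,
                                long nowMillis,
                                long retainedApplicationsMillis) {
        return nowMillis - archiveModifiedMillis > retainedApplicationsMillis;
    }
}
```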




[GitHub] [flink] yanghua commented on issue #8055: [FLINK-12024] Bump universal Kafka connector to Kafka dependency to 2.2.0

2019-03-26 Thread GitBox
yanghua commented on issue #8055: [FLINK-12024] Bump universal Kafka connector 
to Kafka dependency to 2.2.0
URL: https://github.com/apache/flink/pull/8055#issuecomment-476986958
 
 
   cc @aljoscha and @pnowojski 




[GitHub] [flink] yanghua commented on issue #8048: [FLINK-12009] Fix wrong check message about heartbeat interval for HeartbeatServices

2019-03-26 Thread GitBox
yanghua commented on issue #8048: [FLINK-12009] Fix wrong check message about 
heartbeat interval for HeartbeatServices
URL: https://github.com/apache/flink/pull/8048#issuecomment-476986088
 
 
   @sunjincheng121 Can you review this PR?




[jira] [Updated] (FLINK-12028) Add Column Operators(add/rename/drop)

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12028:
---
Labels: pull-request-available  (was: )

> Add Column Operators(add/rename/drop)
> -
>
> Key: FLINK-12028
> URL: https://issues.apache.org/jira/browse/FLINK-12028
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>  Labels: pull-request-available
>
> In this Jira we will add column operators/operations as follows:
> 1)   Table(schema) operators
>  * Add columns
>  * Replace columns
>  * Drop columns
>  * Rename columns
> See [google 
> doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
>  and I have also done a 
> [prototype|https://github.com/sunjincheng121/flink/pull/94/files]





[GitHub] [flink] flinkbot commented on issue #8057: [FLINK-12028][table] Add `addColumns`,`renameColumns`, `dropColumns` …

2019-03-26 Thread GitBox
flinkbot commented on issue #8057: [FLINK-12028][table] Add 
`addColumns`,`renameColumns`, `dropColumns` …
URL: https://github.com/apache/flink/pull/8057#issuecomment-476975009
 
 


[GitHub] [flink] sunjincheng121 opened a new pull request #8057: [FLINK-12028][table] Add `addColumns`,`renameColumns`, `dropColumns` …

2019-03-26 Thread GitBox
sunjincheng121 opened a new pull request #8057: [FLINK-12028][table] Add 
`addColumns`,`renameColumns`, `dropColumns` …
URL: https://github.com/apache/flink/pull/8057
 
 
   ## What is the purpose of the change
   In this PR we will add the following column operators:
   
   - Add columns
   - Replace columns
   - Drop columns
   - Rename columns
   
   See [google 
doc](https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit#)
   
   ## Brief change log
   
 - Add `addColumns`,`renameColumns` and  `dropColumns` interfaces in Table.
 - Add the implementation of `addColumns`, `renameColumns` and 
`dropColumns` in `TableImpl`.
 - Add docs for `addColumns`,`renameColumns` and  `dropColumns`.
   
   ## Verifying this change
   This change added tests and can be verified as follows:
 - Added integration tests `addColumns`, `renameColumns` and `dropColumns`
 - Added validation tests `addColumns`, `renameColumns` and `dropColumns`
 - Added plan check tests `addColumns`, `renameColumns` and `dropColumns`
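   To make the semantics of the three operators concrete, here is a toy model 
over a plain list of column names. This is purely illustrative: the real 
operators take expressions and work on a Table's schema, and the class here is 
made up.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the column operations in this PR, applied to an ordered list
// of column names. Illustrative only; the actual operators take expressions
// and operate on a Table's schema.
class ColumnOpsSketch {
    static List<String> addColumn(List<String> schema, String name) {
        List<String> out = new ArrayList<>(schema);
        out.add(name); // new column is appended at the end
        return out;
    }

    static List<String> renameColumn(List<String> schema, String from, String to) {
        List<String> out = new ArrayList<>(schema);
        out.replaceAll(c -> c.equals(from) ? to : c); // position is preserved
        return out;
    }

    static List<String> dropColumn(List<String> schema, String name) {
        List<String> out = new ArrayList<>(schema);
        out.remove(name);
        return out;
    }
}
```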
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (docs)
   




[GitHub] [flink] sunjincheng121 commented on issue #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on issue #8007: [FLINK-11474][table] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#issuecomment-476966857
 
 
   @flinkbot attention @aljoscha @twalthr 




[jira] [Assigned] (FLINK-12029) Add Column selections

2019-03-26 Thread Hequn Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng reassigned FLINK-12029:
---

Assignee: Hequn Cheng

> Add Column selections
> -
>
> Key: FLINK-12029
> URL: https://issues.apache.org/jira/browse/FLINK-12029
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>
> In this Jira we will add column operators/operations as follows:
> Fine-grained column/row operations
>  * Column selection
>  * Row package and flatten
> See [google 
> doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
>  and I have also done a 
> [prototype|https://github.com/sunjincheng121/flink/pull/94/files]





[jira] [Updated] (FLINK-12029) Add Column selections

2019-03-26 Thread Hequn Cheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hequn Cheng updated FLINK-12029:

Description: 
In this Jira we will add column operators/operations as follows:

Fine-grained column operations
 * Column selection

See [google 
doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
 and I have also done a 
[prototype|https://github.com/sunjincheng121/flink/pull/94/files]

  was:
In this Jira will add column operators/operations as follows:

Fine-grained column/row operations
 * Column selection
 * Row package and flatten

See [google 
doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
 And I also have done some 
[prototype|https://github.com/sunjincheng121/flink/pull/94/files]


> Add Column selections
> -
>
> Key: FLINK-12029
> URL: https://issues.apache.org/jira/browse/FLINK-12029
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: Hequn Cheng
>Priority: Major
>
> In this Jira we will add column operators/operations as follows:
> Fine-grained column operations
>  * Column selection
> See [google 
> doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
>  and I have also done a 
> [prototype|https://github.com/sunjincheng121/flink/pull/94/files]





[jira] [Assigned] (FLINK-11967) Add Column Operators/Operations

2019-03-26 Thread sunjincheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng reassigned FLINK-11967:
---

Assignee: sunjincheng

> Add Column Operators/Operations
> ---
>
> Key: FLINK-11967
> URL: https://issues.apache.org/jira/browse/FLINK-11967
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>
> In this Jira we will add column operators/operations as follows:
> 1)   Table(schema) operators
>  * Add columns
>  * Replace columns
>  * Drop columns
>  * Rename columns
> 2)Fine-grained column/row operations
>  * Column selection
>  * Row package and flatten
> See [google 
> doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
>  and I have also done a 
> [prototype|https://github.com/sunjincheng121/flink/pull/94/files]





[GitHub] [flink] flinkbot commented on issue #8056: [FLINK-12013][table-planner-blink] Support calc and correlate in blink batch

2019-03-26 Thread GitBox
flinkbot commented on issue #8056: [FLINK-12013][table-planner-blink] Support 
calc and correlate in blink batch
URL: https://github.com/apache/flink/pull/8056#issuecomment-476952996
 
 


[jira] [Updated] (FLINK-12013) Support calc and correlate in blink

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12013:
---
Labels: pull-request-available  (was: )

> Support calc and correlate in blink
> ---
>
> Key: FLINK-12013
> URL: https://issues.apache.org/jira/browse/FLINK-12013
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
>
> 1. Support filter and projection in calc, e.g.: select a + b, c from T 
> where a > 5;
> 2. Support udf in calc.
> 3. Support udtf in correlate.





[GitHub] [flink] JingsongLi opened a new pull request #8056: [FLINK-12013][table-planner-blink] Support calc and correlate in blink batch

2019-03-26 Thread GitBox
JingsongLi opened a new pull request #8056: [FLINK-12013][table-planner-blink] 
Support calc and correlate in blink batch
URL: https://github.com/apache/flink/pull/8056
 
 
   ## What is the purpose of the change
   
   Introduce blink batch sql calc and correlate.
   
   ## Brief change log
   
   1. Support source and sink conversion between internal format and external 
format.
   2. Support filter and projection in calc, e.g.: select a + b, c from T 
where a > 5;
   3. Support udf in calc.
   4. Support udtf in correlate.
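   For the example query above (`select a + b, c from T where a > 5`), a calc 
node fuses the filter and the projection into one pass over the rows. A 
minimal sketch follows; the `long[] {a, b, c}` row layout and the class name 
are assumptions for the illustration, not the planner's actual row format.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of what a calc node computes: filter and projection fused into a
// single pass, for `select a + b, c from T where a > 5`. Rows are modeled
// as long[] {a, b, c}; this layout is an assumption for the illustration.
class CalcSketch {
    static List<long[]> calc(List<long[]> rows) {
        List<long[]> out = new ArrayList<>();
        for (long[] r : rows) {
            if (r[0] > 5) {                             // where a > 5
                out.add(new long[]{r[0] + r[1], r[2]}); // select a + b, c
            }
        }
        return out;
    }
}
```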
   
   ## Verifying this change
   
   Unit tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes)
 - If yes, how is the feature documented? (JavaDocs)
   




[jira] [Created] (FLINK-12029) Add Column selections

2019-03-26 Thread sunjincheng (JIRA)
sunjincheng created FLINK-12029:
---

 Summary: Add Column selections
 Key: FLINK-12029
 URL: https://issues.apache.org/jira/browse/FLINK-12029
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: sunjincheng


In this Jira we will add column operators/operations as follows:

Fine-grained column/row operations
 * Column selection
 * Row package and flatten

See [google 
doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
 and I have also done a 
[prototype|https://github.com/sunjincheng121/flink/pull/94/files]





[jira] [Created] (FLINK-12028) Add Column Operators(add/rename/drop)

2019-03-26 Thread sunjincheng (JIRA)
sunjincheng created FLINK-12028:
---

 Summary: Add Column Operators(add/rename/drop)
 Key: FLINK-12028
 URL: https://issues.apache.org/jira/browse/FLINK-12028
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: sunjincheng
Assignee: sunjincheng


In this Jira we will add column operators/operations as follows:

1)   Table(schema) operators
 * Add columns
 * Replace columns
 * Drop columns
 * Rename columns

See [google 
doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
 and I have also done a 
[prototype|https://github.com/sunjincheng121/flink/pull/94/files]





[jira] [Commented] (FLINK-11967) Add Column Operators/Operations

2019-03-26 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802388#comment-16802388
 ] 

sunjincheng commented on FLINK-11967:
-

 Thanks a lot for your quick reply, [~dian.fu]! I'll open the sub-JIRAs and 
start the PRs. :)

> Add Column Operators/Operations
> ---
>
> Key: FLINK-11967
> URL: https://issues.apache.org/jira/browse/FLINK-11967
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: sunjincheng
>Priority: Major
>
> In this Jira we will add column operators/operations as follows:
> 1)   Table(schema) operators
>  * Add columns
>  * Replace columns
>  * Drop columns
>  * Rename columns
> 2)Fine-grained column/row operations
>  * Column selection
>  * Row package and flatten
> See [google 
> doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
>  and I have also done a 
> [prototype|https://github.com/sunjincheng121/flink/pull/94/files]





[jira] [Updated] (FLINK-12027) Flink Web UI is not accessible in local mode

2019-03-26 Thread Jeff Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Zhang updated FLINK-12027:
---
Description: 
When I use Flink in local mode (LocalEnvironment), I cannot access the Flink 
web UI. That means all I can depend on is the log, but sometimes the log 
cannot tell me everything; I still need to access the web UI, so it would be 
nice to have the Flink web UI accessible in local mode.

My scenario is in IntelliJ: the Flink web UI is still not accessible even if 
I add flink-runtime-web as a dependency. 

  was:When I use flink in local mode (LocalEnvironment), I can not access flink 
web ui. That means what all I can depend on is log, but sometimes log can not 
tell me everything, I still need to access web ui, so it would be nice to have 
flink web ui accessible in local mode


> Flink Web UI is not accessible in local mode
> 
>
> Key: FLINK-12027
> URL: https://issues.apache.org/jira/browse/FLINK-12027
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jeff Zhang
>Priority: Major
>
> When I use Flink in local mode (LocalEnvironment), I cannot access the Flink 
> web UI. That means all I can depend on is the log, but sometimes the log 
> cannot tell me everything; I still need to access the web UI, so it would be 
> nice to have the Flink web UI accessible in local mode.
> My scenario is in IntelliJ: the Flink web UI is still not accessible even if 
> I add flink-runtime-web as a dependency. 





[jira] [Updated] (FLINK-12026) Remove the `xxxInternal` method from TableImpl

2019-03-26 Thread sunjincheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-12026:

Description: 
At present, each operator of TableImpl has an internal `xxxInternal` method, 
and `xxxInternal` is a temporary method. I think it can be removed now to 
further simplify the code. For example:

From:
{code:java}
override def select(fields: String): Table = {
  select(ExpressionParser.parseExpressionList(fields): _*)
}

override def select(fields: Expression*): Table = {
  selectInternal(fields.map(tableImpl.expressionBridge.bridge))
}

private def selectInternal(fields: Seq[PlannerExpression]): Table = {
...
// implementation logic
...
}{code}
To:
{code:java}
override def select(fields: String): Table = { 
select(ExpressionParser.parseExpressionList(fields): _*) 
} 

override def select(fields: Expression*): Table = {
... 
// implementation logic 
... 
}{code}
 

I think the implementation logic can move into `select(fields: Expression*)`. 
What do you think? [~dawidwys] [~hequn8128]

  was:
At present, each operator of TableImpl has an internal method of `xxxInternal`, 
and `xxxInternal` is a temp method. I think it can be removed at present to 
further simplify the code.

What do you think? [~dawidwys] [~hequn8128]


> Remove the `xxxInternal` method from TableImpl
> --
>
> Key: FLINK-12026
> URL: https://issues.apache.org/jira/browse/FLINK-12026
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>Priority: Major
>
> At present, each operator of TableImpl has an internal `xxxInternal` method, 
> and `xxxInternal` is a temporary method. I think it can be removed now to 
> further simplify the code. For example:
> From:
> {code:java}
> override def select(fields: String): Table = {
>   select(ExpressionParser.parseExpressionList(fields): _*)
> }
> override def select(fields: Expression*): Table = {
>   selectInternal(fields.map(tableImpl.expressionBridge.bridge))
> }
> private def selectInternal(fields: Seq[PlannerExpression]): Table = {
> ...
> // implementation logic
> ...
> }{code}
> To:
> {code:java}
> override def select(fields: String): Table = { 
> select(ExpressionParser.parseExpressionList(fields): _*) 
> } 
> override def select(fields: Expression*): Table = {
> ... 
> // implementation logic 
> ... 
> }{code}
>  
> I think the implementation logic can move into `select(fields: 
> Expression*)`.  What do you think? [~dawidwys] [~hequn8128]





[jira] [Created] (FLINK-12027) Flink Web UI is not accessible in local mode

2019-03-26 Thread Jeff Zhang (JIRA)
Jeff Zhang created FLINK-12027:
--

 Summary: Flink Web UI is not accessible in local mode
 Key: FLINK-12027
 URL: https://issues.apache.org/jira/browse/FLINK-12027
 Project: Flink
  Issue Type: Improvement
Reporter: Jeff Zhang


When I use Flink in local mode (LocalEnvironment), I cannot access the Flink 
web UI. That means all I can depend on is the log, but sometimes the log 
cannot tell me everything; I still need to access the web UI, so it would be 
nice to have the Flink web UI accessible in local mode.





[jira] [Created] (FLINK-12026) Remove the `xxxInternal` method from TableImpl

2019-03-26 Thread sunjincheng (JIRA)
sunjincheng created FLINK-12026:
---

 Summary: Remove the `xxxInternal` method from TableImpl
 Key: FLINK-12026
 URL: https://issues.apache.org/jira/browse/FLINK-12026
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: sunjincheng
Assignee: sunjincheng


At present, each operator of TableImpl has an internal `xxxInternal` method, 
and `xxxInternal` is a temporary method. I think it can be removed now to 
further simplify the code.

What do you think? [~dawidwys] [~hequn8128]





[jira] [Updated] (FLINK-12024) Bump universal Kafka connector to Kafka dependency to 2.2.0

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12024:
---
Labels: pull-request-available  (was: )

> Bump universal Kafka connector to Kafka dependency to 2.2.0
> ---
>
> Key: FLINK-12024
> URL: https://issues.apache.org/jira/browse/FLINK-12024
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.7.2
>Reporter: Elias Levy
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> Update the Kafka client dependency to version 2.2.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot commented on issue #8055: [FLINK-12024] Bump universal Kafka connector to Kafka dependency to 2.2.0

2019-03-26 Thread GitBox
flinkbot commented on issue #8055: [FLINK-12024] Bump universal Kafka connector 
to Kafka dependency to 2.2.0
URL: https://github.com/apache/flink/pull/8055#issuecomment-476945512
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] yanghua opened a new pull request #8055: [FLINK-12024] Bump universal Kafka connector to Kafka dependency to 2.2.0

2019-03-26 Thread GitBox
yanghua opened a new pull request #8055: [FLINK-12024] Bump universal Kafka 
connector to Kafka dependency to 2.2.0
URL: https://github.com/apache/flink/pull/8055
 
 
   ## What is the purpose of the change
   
   *This pull request bumps universal Kafka connector to Kafka dependency to 
2.2.0*
   
   ## Brief change log
   
 - *Bump universal Kafka connector to Kafka dependency to 2.2.0*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (**yes** / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-11982) BatchTableSourceFactory support Json Format File

2019-03-26 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802361#comment-16802361
 ] 

frank wang commented on FLINK-11982:


Can you provide your test.json file and let me try it? Thanks.
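Since the descriptor above declares the schema (id: STRING, name: STRING, age: INT), a minimal line-delimited test.json matching it might look like the following (the sample values are invented for illustration):

```json
{"id": "1", "name": "alice", "age": 25}
{"id": "2", "name": "bob", "age": 30}
```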

> BatchTableSourceFactory support Json Format File
> 
>
> Key: FLINK-11982
> URL: https://issues.apache.org/jira/browse/FLINK-11982
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.6.4, 1.7.2
>Reporter: pingle wang
>Assignee: frank wang
>Priority: Major
>
> java code :
> {code:java}
> val connector = FileSystem().path("data/in/test.json")
> val desc = tEnv.connect(connector)
> .withFormat(
> new Json()
> .schema(
> Types.ROW(
> Array[String]("id", "name", "age"),
> Array[TypeInformation[_]](Types.STRING, Types.STRING, Types.INT))
> )
> .failOnMissingField(true)
> ).registerTableSource("person")
> val sql = "select * from person"
> val result = tEnv.sqlQuery(sql)
> {code}
> Exception info :
> {code:java}
> Exception in thread "main" 
> org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
> suitable table factory for 
> 'org.apache.flink.table.factories.BatchTableSourceFactory' in
> the classpath.
> Reason: No context matches.
> The following properties are requested:
> connector.path=file:///Users/batch/test.json
> connector.property-version=1
> connector.type=filesystem
> format.derive-schema=true
> format.fail-on-missing-field=true
> format.property-version=1
> format.type=json
> The following factories have been considered:
> org.apache.flink.table.sources.CsvBatchTableSourceFactory
> org.apache.flink.table.sources.CsvAppendTableSourceFactory
> org.apache.flink.table.sinks.CsvBatchTableSinkFactory
> org.apache.flink.table.sinks.CsvAppendTableSinkFactory
> org.apache.flink.formats.avro.AvroRowFormatFactory
> org.apache.flink.formats.json.JsonRowFormatFactory
> org.apache.flink.streaming.connectors.kafka.Kafka010TableSourceSinkFactory
> org.apache.flink.streaming.connectors.kafka.Kafka09TableSourceSinkFactory
> at 
> org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
> at 
> org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
> at 
> org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
> at 
> org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:44)
> at 
> org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
> at com.meitu.mlink.sql.batch.JsonExample.main(JsonExample.java:36){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-12025) Support FOR SYSTEM_TIME AS OF in temporal table

2019-03-26 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802357#comment-16802357
 ] 

vinoyang commented on FLINK-12025:
--

[~x1q1j1] yes, you are right. So I pinged [~twalthr] in FLINK-11921.

> Support FOR SYSTEM_TIME AS OF in temporal table
> ---
>
> Key: FLINK-12025
> URL: https://issues.apache.org/jira/browse/FLINK-12025
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> Since Calcite 1.19 has been released and "FOR SYSTEM_TIME AS OF" is now 
> supported, I suggest we support it in the temporal table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-12025) Support FOR SYSTEM_TIME AS OF in temporal table

2019-03-26 Thread Forward Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802355#comment-16802355
 ] 

Forward Xu commented on FLINK-12025:


Cool, this is the syntax for dimension tables. First, the Calcite version in 
Flink needs to be upgraded to 1.19.0.

> Support FOR SYSTEM_TIME AS OF in temporal table
> ---
>
> Key: FLINK-12025
> URL: https://issues.apache.org/jira/browse/FLINK-12025
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> Since Calcite 1.19 has been released and "FOR SYSTEM_TIME AS OF" is now 
> supported, I suggest we support it in the temporal table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-12003) Revert the config option about mapreduce.output.basename in HadoopOutputFormatBase

2019-03-26 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802334#comment-16802334
 ] 

vinoyang edited comment on FLINK-12003 at 3/27/19 1:59 AM:
---

[~twalthr] It does not matter. What is your opinion? [~fhueske] If we revert 
it, at least within Flink I don't see any compatibility issues it would cause. 
But perhaps documenting the change is the safer solution.


was (Author: yanghua):
[~twalthr] It does not matter. What is your opinion? [~fhueske] If we revoke 
it, at least in Flink I didn't see it would cause compatibility issues.

> Revert the config option about mapreduce.output.basename in 
> HadoopOutputFormatBase
> --
>
> Key: FLINK-12003
> URL: https://issues.apache.org/jira/browse/FLINK-12003
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> In the {{HadoopOutputFormatBase}} open method, the config option 
> {{mapreduce.output.basename}} was changed to "tmp", and no documentation 
> states this change.
> By default, HDFS will use the format "part-x-y" to name its files, where x 
> and y mean: 
>  * {{x}} is either 'm' or 'r', depending on whether the job was a map-only 
> job or a reduce job
>  * {{y}} is the mapper or reducer task number (zero-based)
>  
> The keyword "part" is used in many places in users' business logic to match 
> the HDFS file name. So I suggest reverting this config option or documenting 
> it.
>  
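The "part-x-y" convention described above can be sketched in plain Java (the helper name is hypothetical, not part of Hadoop's API):

```java
public class PartFileNaming {
    // Reconstructs Hadoop's default output file name "part-x-y":
    // x is 'm' (map-only job) or 'r' (reduce job), y is the zero-based task number.
    static String partFileName(boolean mapOnly, int taskNumber) {
        return String.format("part-%c-%05d", mapOnly ? 'm' : 'r', taskNumber);
    }

    public static void main(String[] args) {
        System.out.println(partFileName(true, 0));  // part-m-00000
        System.out.println(partFileName(false, 3)); // part-r-00003
    }
}
```

Business logic that pattern-matches on the "part" prefix would break if the basename is silently changed to "tmp", which is the concern raised above.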



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11921) Upgrade Calcite dependency to 1.19

2019-03-26 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802349#comment-16802349
 ] 

vinoyang commented on FLINK-11921:
--

Hi [~twalthr] Calcite officially announced that 1.19 has been released: 
[https://calcite.apache.org/news/2019/03/25/release-1.19.0/]

> Upgrade Calcite dependency to 1.19
> --
>
> Key: FLINK-11921
> URL: https://issues.apache.org/jira/browse/FLINK-11921
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Timo Walther
>Priority: Major
>
> Umbrella issue for all tasks related to the next Calcite upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12025) Support FOR SYSTEM_TIME AS OF in temporal table

2019-03-26 Thread vinoyang (JIRA)
vinoyang created FLINK-12025:


 Summary: Support FOR SYSTEM_TIME AS OF in temporal table
 Key: FLINK-12025
 URL: https://issues.apache.org/jira/browse/FLINK-12025
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: vinoyang
Assignee: vinoyang


Since Calcite 1.19 has been released and "FOR SYSTEM_TIME AS OF" is now 
supported, I suggest we support it in the temporal table.
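For reference, the SQL:2011 temporal syntax in question looks roughly like the following (table and column names are made up for illustration):

```sql
-- Join each order against the version of the Rates table
-- that was valid at the order's time attribute.
SELECT o.amount * r.rate
FROM Orders AS o
JOIN Rates FOR SYSTEM_TIME AS OF o.order_time AS r
  ON o.currency = r.currency;
```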



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11967) Add Column Operators/Operations

2019-03-26 Thread Dian Fu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802348#comment-16802348
 ] 

Dian Fu commented on FLINK-11967:
-

Hi [~sunjincheng121], I have not started the work yet. Feel free to take it. :)

> Add Column Operators/Operations
> ---
>
> Key: FLINK-11967
> URL: https://issues.apache.org/jira/browse/FLINK-11967
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: sunjincheng
>Priority: Major
>
> This Jira will add column operators/operations as follows:
> 1)   Table(schema) operators
>  * Add columns
>  * Replace columns
>  * Drop columns
>  * Rename columns
> 2)Fine-grained column/row operations
>  * Column selection
>  * Row package and flatten
> See [google 
> doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
>  And I also have done some 
> [prototype|https://github.com/sunjincheng121/flink/pull/94/files]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11967) Add Column Operators/Operations

2019-03-26 Thread Dian Fu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-11967:
---

Assignee: (was: Dian Fu)

> Add Column Operators/Operations
> ---
>
> Key: FLINK-11967
> URL: https://issues.apache.org/jira/browse/FLINK-11967
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: sunjincheng
>Priority: Major
>
> This Jira will add column operators/operations as follows:
> 1)   Table(schema) operators
>  * Add columns
>  * Replace columns
>  * Drop columns
>  * Rename columns
> 2)Fine-grained column/row operations
>  * Column selection
>  * Row package and flatten
> See [google 
> doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
>  And I also have done some 
> [prototype|https://github.com/sunjincheng121/flink/pull/94/files]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-12023) Override TaskManager memory when submitting a Flink job

2019-03-26 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802342#comment-16802342
 ] 

vinoyang commented on FLINK-12023:
--

Hi [~RustedBones] Which deployment mode do you want this feature for?

> Override TaskManager memory when submitting a Flink job
> ---
>
> Key: FLINK-12023
> URL: https://issues.apache.org/jira/browse/FLINK-12023
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Configuration
>Reporter: Michel Davit
>Assignee: vinoyang
>Priority: Minor
>
> Currently a Flink session can only run TaskManagers of the same size.
> However, depending on the jar or even the program arguments, some jobs can be 
> more intensive than others.
> To improve memory usage and avoid wasting resources, it would be useful 
> to have an option that overrides the default TaskManager memory setting when 
> submitting a new job.
>  
>  
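As a rough illustration of the wish above, YARN per-job deployments already come close with per-submission memory flags (the flags shown here, such as -ytm, exist in Flink 1.7-era CLIs but may differ across versions; the jar path is just an example):

```shell
# Submit a single job with its own TaskManager memory size (YARN per-job mode).
# -ytm is shorthand for --yarntaskManagerMemory, -yn for the container count.
./bin/flink run -m yarn-cluster -ytm 4096 -yn 2 ./examples/batch/WordCount.jar
```

The request is essentially to have an equivalent override for jobs submitted into an already-running session.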



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-12023) Override TaskManager memory when submitting a Flink job

2019-03-26 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-12023:


Assignee: vinoyang

> Override TaskManager memory when submitting a Flink job
> ---
>
> Key: FLINK-12023
> URL: https://issues.apache.org/jira/browse/FLINK-12023
> Project: Flink
>  Issue Type: Wish
>  Components: Runtime / Configuration
>Reporter: Michel Davit
>Assignee: vinoyang
>Priority: Minor
>
> Currently a Flink session can only run TaskManagers of the same size.
> However, depending on the jar or even the program arguments, some jobs can be 
> more intensive than others.
> To improve memory usage and avoid wasting resources, it would be useful 
> to have an option that overrides the default TaskManager memory setting when 
> submitting a new job.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-12024) Bump universal Kafka connector to Kafka dependency to 2.2.0

2019-03-26 Thread vinoyang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

vinoyang reassigned FLINK-12024:


Assignee: vinoyang

> Bump universal Kafka connector to Kafka dependency to 2.2.0
> ---
>
> Key: FLINK-12024
> URL: https://issues.apache.org/jira/browse/FLINK-12024
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.7.2
>Reporter: Elias Levy
>Assignee: vinoyang
>Priority: Major
>
> Update the Kafka client dependency to version 2.2.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-12003) Revert the config option about mapreduce.output.basename in HadoopOutputFormatBase

2019-03-26 Thread vinoyang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802334#comment-16802334
 ] 

vinoyang commented on FLINK-12003:
--

[~twalthr] It does not matter. What is your opinion? [~fhueske] If we revert 
it, at least within Flink I don't see any compatibility issues it would cause.

> Revert the config option about mapreduce.output.basename in 
> HadoopOutputFormatBase
> --
>
> Key: FLINK-12003
> URL: https://issues.apache.org/jira/browse/FLINK-12003
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> In the {{HadoopOutputFormatBase}} open method, the config option 
> {{mapreduce.output.basename}} was changed to "tmp", and no documentation 
> states this change.
> By default, HDFS will use the format "part-x-y" to name its files, where x 
> and y mean: 
>  * {{x}} is either 'm' or 'r', depending on whether the job was a map-only 
> job or a reduce job
>  * {{y}} is the mapper or reducer task number (zero-based)
>  
> The keyword "part" is used in many places in users' business logic to match 
> the HDFS file name. So I suggest reverting this config option or documenting 
> it.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11967) Add Column Operators/Operations

2019-03-26 Thread sunjincheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802329#comment-16802329
 ] 

sunjincheng commented on FLINK-11967:
-

Hi [~dian.fu] Thanks for taking this ticket. Have you started the work? If you 
have already started, I suggest splitting it into multiple commits, such as one 
for column operators, one for column selection, and one for row packaging and 
flattening. If you have not started yet, I would like to split the JIRA into 
three sub-JIRAs. What do you think? :-) 

> Add Column Operators/Operations
> ---
>
> Key: FLINK-11967
> URL: https://issues.apache.org/jira/browse/FLINK-11967
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: sunjincheng
>Assignee: Dian Fu
>Priority: Major
>
> This Jira will add column operators/operations as follows:
> 1)   Table(schema) operators
>  * Add columns
>  * Replace columns
>  * Drop columns
>  * Rename columns
> 2)Fine-grained column/row operations
>  * Column selection
>  * Row package and flatten
> See [google 
> doc|https://docs.google.com/document/d/1tryl6swt1K1pw7yvv5pdvFXSxfrBZ3_OkOObymis2ck/edit],
>  And I also have done some 
> [prototype|https://github.com/sunjincheng121/flink/pull/94/files]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] xuefuz edited a comment on issue #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
xuefuz edited a comment on issue #8007: [FLINK-11474][table] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#issuecomment-476812342
 
 
   > Thanks for the PR, I think catalog is very important for Table/SQL. Great 
job @xuefuz!
   > 
   > I quickly browsed the PR and left some comments.
   > There are two other suggestions as follows:
   > 
   > * The overall PR does not have tests; it is recommended to add 
corresponding test cases.
   > * This PR just adds some interfaces, and there is no specific 
implementation or corresponding feature. This will confuse reviewers who have 
no background context. IMHO, each PR should preferably have a specific 
function; if the function is large, it can be divided into multiple commits in 
one PR.
   > 
   > I am not sure whether my suggestion is reasonable or not, maybe @aljoscha 
and @twalthr can give us more advice.
   > 
   > What do you think?
   > 
   > Best,
   > Jincheng
   
   Thanks for the review, Jincheng! You have some good points.
   First of all, for those missing the context, please check 
[FLIP-30.](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827366)
   
   Secondly, this one does not have tests because it mostly adds new 
interfaces, for which tests are not really applicable.
   
   Lastly, I generally agree that it's better to complete a feature or 
piece of functionality in a single PR. However, for larger efforts such as this 
one, incremental commits are preferable in my opinion because they make the 
review lighter and easier, especially when the incremental code doesn't break 
existing stuff.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269288171
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/ReadableWritableCatalog.java
 ##
 @@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import 
org.apache.flink.table.api.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.api.catalog.exceptions.DatabaseNotExistException;
+import 
org.apache.flink.table.api.catalog.exceptions.TableAlreadyExistException;
+import org.apache.flink.table.api.catalog.exceptions.TableNotExistException;
+
+/**
+ * An interface responsible for manipulating catalog metadata.
+ */
+public interface ReadableWritableCatalog extends ReadableCatalog {
+
+   // -- databases --
+
+   /**
+* Create a database.
+*
+* @param name   Name of the database to be created
+* @param database   The database definition
+* @param ignoreIfExists Flag to specify behavior when a database with 
the given name already exists:
+*   if set to false, throw a 
DatabaseAlreadyExistException,
+*   if set to true, do nothing.
+* @throws DatabaseAlreadyExistException if the given database already 
exists and ignoreIfExists is false
+*/
+   void createDatabase(String name, CatalogDatabase database, boolean 
ignoreIfExists)
+   throws DatabaseAlreadyExistException;
+
+   /**
+* Drop a database.
+*
+* @param name  Name of the database to be dropped.
+* @param ignoreIfNotExists Flag to specify behavior when the database 
does not exist:
+*  if set to false, throw an exception,
+*  if set to true, do nothing.
+* @throws DatabaseNotExistException if the given database does not 
exist
+*/
+   void dropDatabase(String name, boolean ignoreIfNotExists) throws 
DatabaseNotExistException;
+
+   /**
+* Modify an existing database.
+*
+* @param nameName of the database to be modified
+* @param newDatabaseThe new database definition
+* @param ignoreIfNotExists Flag to specify behavior when the given 
database does not exist:
+*  if set to false, throw an exception,
+*  if set to true, do nothing.
+* @throws DatabaseNotExistException if the given database does not 
exist
+*/
+   void alterDatabase(String name, CatalogDatabase newDatabase, boolean 
ignoreIfNotExists)
+   throws DatabaseNotExistException;
+
+   /**
+* Rename an existing database.
+*
+* @param nameName of the database to be renamed
+* @param newName the new name of the database
+* @param ignoreIfNotExists Flag to specify behavior when the given 
database does not exist:
+*  if set to false, throw an exception,
+*  if set to true, do nothing.
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   void renameDatabase(String name, String newName, boolean 
ignoreIfNotExists)
+   throws DatabaseNotExistException;
+
+   // -- tables and views --
+
+   /**
+* Drop a table or view.
+*
+* @param tablePath Path of the table or view to be dropped
+* @param ignoreIfNotExists Flag to specify behavior when the table or 
view does not exist:
+*  if set to false, throw an exception,
+*  if set to true, do nothing.
+* @throws TableNotExistException if the table or view does not exist
+*/
+   void dropTable(ObjectPath tablePath, boolean ignoreIfNotExists) throws 
TableNotExistException;
+
+   // -- tables --
+
+   
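The ignoreIfExists contract documented in the createDatabase javadoc above can be illustrated with a toy in-memory sketch (the class and field names are invented for illustration; this is not the proposed implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory catalog demonstrating the ignoreIfExists flag's contract.
public class IgnoreIfExistsSketch {

    static class DatabaseAlreadyExistException extends Exception {
        DatabaseAlreadyExistException(String name) {
            super("Database already exists: " + name);
        }
    }

    private final Map<String, String> databases = new HashMap<>();

    void createDatabase(String name, String definition, boolean ignoreIfExists)
            throws DatabaseAlreadyExistException {
        if (databases.containsKey(name)) {
            if (ignoreIfExists) {
                return; // silently do nothing, as the javadoc specifies
            }
            throw new DatabaseAlreadyExistException(name);
        }
        databases.put(name, definition);
    }

    public static void main(String[] args) throws Exception {
        IgnoreIfExistsSketch catalog = new IgnoreIfExistsSketch();
        catalog.createDatabase("default", "comment=demo", false);
        catalog.createDatabase("default", "comment=demo", true); // no-op
        System.out.println("second create with ignoreIfExists=true was a no-op");
    }
}
```

The same pattern applies symmetrically to the ignoreIfNotExists flag on dropDatabase, alterDatabase, and renameDatabase.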

[GitHub] [flink] xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269286117
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/ReadableCatalog.java
 ##
 @@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import org.apache.flink.table.api.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.api.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.plan.stats.TableStats;
+
+import java.util.List;
+
+/**
+ * This interface is responsible for reading database/table/views/UDFs from a 
registered catalog.
+ * It connects a registered catalog and Flink's Table API.
+ */
+public interface ReadableCatalog {
+
+   /**
+* Open the catalog. Used for any required preparation in 
initialization phase.
+*/
+   void open();
+
+   /**
+* Close the catalog when it is no longer needed and release any 
resource that it might be holding.
+*/
+   void close();
+
+   /**
+* Get the default database of this type of catalog. This is used when 
a user refers to an object in the catalog
+* without specifying a database. For example, the default db in a Hive 
Metastore is 'default' by default.
+*
+* @return the name of the default database
+*/
+   String getDefaultDatabaseName();
+
+   /**
+* Set the default database. A default database is used when a user 
refers to an object in the catalog
+* without specifying a database.
+*
+* @param databaseName  the name of the database
+*/
+   void setDefaultDatabaseName(String databaseName);
+
+   // -- databases --
+   /**
+* Get the names of all databases in this catalog.
+*
+* @return The list of the names of all databases
+*/
+   List listDatabases();
+
+   /**
+* Get a database from this catalog.
+*
+* @param dbNameName of the database
+* @return The requested database
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   CatalogDatabase getDatabase(String dbName) throws 
DatabaseNotExistException;
+
+   /**
+* Check if a database exists in this catalog.
+*
+* @param dbNameName of the database
+*/
+   boolean databaseExists(String dbName);
+
+   /**
+* Gets paths of all tables and views under this database. An empty 
list is returned if none exists.
+*
+* @return A list of the names of all tables and views in this database
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   List listTables(String dbName) throws DatabaseNotExistException;
+
+   /**
+* Gets a CatalogTable or CatalogView identified by objectPath.
+*
+* @param objectPathPath of the table or view
+* @throws TableNotExistException if the target does not exist
+* @return The requested table or view
+*/
+   CommonTable getTable(ObjectPath objectPath) throws 
TableNotExistException;
+
+   /**
+* Checks if a table or view exists in this catalog.
+*
+* @param objectPathPath of the table or view
+*/
+   boolean tableExists(ObjectPath objectPath);
+
+   /**
+* Gets paths of all views under this database. An empty list is 
returned if none exists.
+*
+* @param databaseName the name of the given database
+* @return the list of the names of all views in the given database
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   List listViews(String databaseName) throws 
DatabaseNotExistException;
+
+
+   /**
+* Gets the statistics of a table.
+*
+* @param tablePath Path of the table
+* @return The statistics of the given table
+* @throws TableNotEx

[GitHub] [flink] xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269285533
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/ReadableCatalog.java
 ##
 @@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import org.apache.flink.table.api.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.api.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.plan.stats.TableStats;
+
+import java.util.List;
+
+/**
+ * This interface is responsible for reading databases, tables, views, and UDFs
+ * from a registered catalog. It connects a registered catalog and Flink's Table API.
+ */
+public interface ReadableCatalog {
+
+   /**
+* Open the catalog. Used for any required preparation in 
initialization phase.
+*/
+   void open();
+
+   /**
+* Close the catalog when it is no longer needed and release any 
resource that it might be holding.
+*/
+   void close();
+
+   /**
+* Get the default database of this type of catalog. This is used when a user
+* refers to an object in the catalog without specifying a database. For example,
+* the default database in a Hive Metastore is named 'default' by default.
+*
+* @return the name of the default database
+*/
+   String getDefaultDatabaseName();
+
+   /**
+* Set the default database. A default database is used when a user refers
+* to an object in the catalog without specifying a database.
+*
+* @param databaseName  the name of the database
+*/
+   void setDefaultDatabaseName(String databaseName);
+
+   // -- databases --
+   /**
+* Get the names of all databases in this catalog.
+*
+* @return The list of the names of all databases
+*/
+   List<String> listDatabases();
+
+   /**
+* Get a database from this catalog.
+*
+* @param dbName Name of the database
+* @return The requested database
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   CatalogDatabase getDatabase(String dbName) throws DatabaseNotExistException;
+
+   /**
+* Check if a database exists in this catalog.
+*
+* @param dbName Name of the database
+*/
+   boolean databaseExists(String dbName);
+
+   /**
+* Gets paths of all tables and views under this database. An empty 
list is returned if none exists.
+*
+* @return A list of the names of all tables and views in this database
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   List<String> listTables(String dbName) throws DatabaseNotExistException;
 
 Review comment:
   Different catalogs may have specific ways to get only tables or only views. 
However, it is common practice that listTables() returns both, while 
listViews() returns only the views.
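The convention described above — listTables() returning both tables and views while listViews() returns only the views — can be sketched with a hypothetical in-memory registry (the class and field names below are illustrative, not part of this PR):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class Main {
    // Hypothetical in-memory object registry: name -> kind ("TABLE" or "VIEW").
    static final Map<String, String> objects = new LinkedHashMap<>();

    static {
        objects.put("t1", "TABLE");
        objects.put("v1", "VIEW");
    }

    // Common practice: listTables() returns both tables and views.
    static List<String> listTables() {
        return new ArrayList<>(objects.keySet());
    }

    // listViews() returns only the views.
    static List<String> listViews() {
        List<String> views = new ArrayList<>();
        for (Map.Entry<String, String> e : objects.entrySet()) {
            if ("VIEW".equals(e.getValue())) {
                views.add(e.getKey());
            }
        }
        return views;
    }

    public static void main(String[] args) {
        System.out.println(listTables()); // [t1, v1]
        System.out.println(listViews());  // [v1]
    }
}
```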


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269284622
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/CatalogView.java
 ##
 @@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+/**
+ * Represents a view in a catalog.
+ */
+public interface CatalogView extends CommonTable {
+
+   /**
+* Original text of the view definition.
+*
+* @return the original string literal provided by the user.
+*/
+   String getOriginalQuery();
+
+   /**
+* Expanded text of the original view definition.
+* This is needed because the context, such as the current database, is
+* lost once the session in which the view was defined is gone.
+* The expanded query text preserves that context.
+*
+* For example, for a view that is defined in the context of 
"default" database with a query "select * from test1",
+* the expanded query text might become "select `test1`.`name`, 
`test1`.`value` from `default`.`test1`", where
 
 Review comment:
   Good idea!
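The original-vs-expanded distinction from the javadoc above can be illustrated with a minimal sketch; the holder class is hypothetical, and only the two query strings are taken from the javadoc example:

```java
public class Main {
    // Minimal hypothetical view holder storing both query forms.
    static final class SimpleView {
        final String originalQuery; // as typed by the user; depends on session context
        final String expandedQuery; // fully qualified; survives loss of that context

        SimpleView(String originalQuery, String expandedQuery) {
            this.originalQuery = originalQuery;
            this.expandedQuery = expandedQuery;
        }
    }

    public static void main(String[] args) {
        // Javadoc example: a view defined in the context of the "default" database.
        SimpleView v = new SimpleView(
            "select * from test1",
            "select `test1`.`name`, `test1`.`value` from `default`.`test1`");
        System.out.println(v.originalQuery);
        System.out.println(v.expandedQuery);
    }
}
```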




[GitHub] [flink] xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269282987
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/CatalogDatabase.java
 ##
 @@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Represents a database object in a catalog.
+ */
+public class CatalogDatabase {
+   // Property map for the database.
+   private final Map<String, String> properties;
+
+   public CatalogDatabase() {
 
 Review comment:
   While every meta object has a name, the name only matters within its 
enclosing parent. Thus, our design chose not to include the name in the object 
definition. (Please see FLIP-30.)
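That design point — the name lives in the enclosing parent, not in the object itself — can be sketched as follows (class and map names are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    // The database object carries no name, only its properties.
    static final class CatalogDatabase {
        final Map<String, String> properties;

        CatalogDatabase(Map<String, String> properties) {
            this.properties = new HashMap<>(properties);
        }
    }

    // The enclosing catalog owns the names: name -> database object.
    static final Map<String, CatalogDatabase> databases = new HashMap<>();

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("owner", "alice");
        databases.put("default", new CatalogDatabase(props));
        // Renaming is then a catalog-level operation; the object is untouched.
        databases.put("dflt", databases.remove("default"));
        System.out.println(databases.get("dflt").properties.get("owner")); // alice
    }
}
```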




[GitHub] [flink] xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
xuefuz commented on a change in pull request #8007: [FLINK-11474][table] Add 
ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269282239
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/CatalogDatabase.java
 ##
 @@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Represents a database object in a catalog.
+ */
+public class CatalogDatabase {
+   // Property map for the database.
 
 Review comment:
   Interface is fine to me.




[GitHub] [flink] xuefuz commented on issue #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
xuefuz commented on issue #8007: [FLINK-11474][table] Add ReadableCatalog, 
ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#issuecomment-476812342
 
 
   > Thanks for the PR, I think catalog is very important for Table/SQL. Great 
job @xuefuz!
   > 
   > I quickly browsed the PR and left some comments.
   > There are two other suggestions as follows:
   > 
   > * The overall PR does not have tests; it is recommended to add the 
corresponding test cases.
   > * This PR just adds some interfaces; there is no specific 
implementation or corresponding feature. This will confuse reviewers who have 
no context. IMHO, each PR should preferably deliver a specific function; if the 
function is large, it can be divided into multiple commits in one PR.
   > 
   > I am not sure whether my suggestion is reasonable or not, maybe @aljoscha 
and @twalthr can give us more advice.
   > 
   > What do you think?
   > 
   > Best,
   > Jincheng
   
   Thanks for the review, Jincheng! You have some good points.
   First of all, for those missing the context, please check 
[FLIP-30](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827366).
   
   Secondly, this one doesn't have tests, and the reason is that it consists 
mostly of new interfaces, for which tests are not really applicable.
   
   Lastly, I don't believe that every merge needs to complete a feature or 
piece of functionality. Incremental commits are preferable in my opinion because 
they make reviews lighter and easier.
   




[GitHub] [flink] flinkbot commented on issue #8054: [hotfix][docs] fix description error of checkpoint directory

2019-03-26 Thread GitBox
flinkbot commented on issue #8054: [hotfix][docs] fix description error of 
checkpoint directory
URL: https://github.com/apache/flink/pull/8054#issuecomment-476798688
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] aloyszhang opened a new pull request #8054: [hotfix][docs] fix description error of checkpoint directory

2019-03-26 Thread GitBox
aloyszhang opened a new pull request #8054: [hotfix][docs] fix description 
error of checkpoint directory
URL: https://github.com/apache/flink/pull/8054
 
 
   




[jira] [Commented] (FLINK-9477) Support SQL 2016 JSON functions in Flink SQL

2019-03-26 Thread Shuyi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16802008#comment-16802008
 ] 

Shuyi Chen commented on FLINK-9477:
---

That sounds like a good idea. I will split the tasks according to 
[CALCITE-2867|https://issues.apache.org/jira/browse/CALCITE-2867].

> Support SQL 2016 JSON functions in Flink SQL
> 
>
> Key: FLINK-9477
> URL: https://issues.apache.org/jira/browse/FLINK-9477
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-12024) Bump universal Kafka connector to Kafka dependency to 2.2.0

2019-03-26 Thread Elias Levy (JIRA)
Elias Levy created FLINK-12024:
--

 Summary: Bump universal Kafka connector to Kafka dependency to 
2.2.0
 Key: FLINK-12024
 URL: https://issues.apache.org/jira/browse/FLINK-12024
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Kafka
Affects Versions: 1.7.2
Reporter: Elias Levy


Update the Kafka client dependency to version 2.2.0.





[jira] [Updated] (FLINK-12017) Support translation from Rank/FirstLastRow to StreamTransformation

2019-03-26 Thread aloyszhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aloyszhang updated FLINK-12017:
---
Description: 
Support translation from Rank/FirstLastRow to StreamTransformation, so the 
following SQL can be run:
 1. 
 SELECT a, b, c
 FROM (
 SELECT *,
 ROW_NUMBER() OVER (PARTITION BY b ORDER BY proc DESC) as rowNum
 FROM T
 )
 WHERE rowNum = 1
 2. 
 SELECT *
 FROM (
 SELECT category, shopId, num,
 ROW_NUMBER() OVER (PARTITION BY category ORDER BY num DESC) as rank_num
 FROM T)
 WHERE rank_num <= 2

 

  was:
Support translation from Rank/FirstLastRow to StreamTransformation, So 
following sql can be run:
1. 
SELECT a, b, c
FROM (
  SELECT *,
ROW_NUMBER() OVER (PARTITION BY b ORDER BY proc DESC) as rowNum
  FROM T
)
WHERE rowNum = 1
2. 
SELECT *
FROM (
  SELECT category, shopId, num,
  ROW_NUMBER() OVER (PARTITION BY category ORDER BY num DESC) as rank_num
  FROM T)
WHERE rank_num <= 2


> Support translation from Rank/FirstLastRow to StreamTransformation
> --
>
> Key: FLINK-12017
> URL: https://issues.apache.org/jira/browse/FLINK-12017
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Priority: Major
>
> Support translation from Rank/FirstLastRow to StreamTransformation, so the 
> following SQL can be run:
>  1. 
>  SELECT a, b, c
>  FROM (
>  SELECT *,
>  ROW_NUMBER() OVER (PARTITION BY b ORDER BY proc DESC) as rowNum
>  FROM T
>  )
>  WHERE rowNum = 1
>  2. 
>  SELECT *
>  FROM (
>  SELECT category, shopId, num,
>  ROW_NUMBER() OVER (PARTITION BY category ORDER BY num DESC) as rank_num
>  FROM T)
>  WHERE rank_num <= 2
>  





[GitHub] [flink] flinkbot commented on issue #8053: [FLINK-12015] Fix TaskManagerRunnerTest instability

2019-03-26 Thread GitBox
flinkbot commented on issue #8053: [FLINK-12015] Fix TaskManagerRunnerTest 
instability
URL: https://github.com/apache/flink/pull/8053#issuecomment-476743730
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-12015) TaskManagerRunnerTest is unstable

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12015:
---
Labels: pull-request-available test-stability  (was: test-stability)

> TaskManagerRunnerTest is unstable
> -
>
> Key: FLINK-12015
> URL: https://issues.apache.org/jira/browse/FLINK-12015
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> I saw this failure:
> {code:java}
> 17:34:16.833 [INFO] Running 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunnerTest
> 17:34:19.872 [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 3.036 s <<< FAILURE! - in 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunnerTest
> 17:34:19.880 [ERROR] 
> testShouldShutdownOnFatalError(org.apache.flink.runtime.taskexecutor.TaskManagerRunnerTest)
>   Time elapsed: 0.353 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: is <1>
>  but: was <0>
>   at 
> org.apache.flink.runtime.taskexecutor.TaskManagerRunnerTest.testShouldShutdownOnFatalError(TaskManagerRunnerTest.java:59)
> {code}
> Travis log: https://travis-ci.org/apache/flink/jobs/511042156





[GitHub] [flink] aljoscha opened a new pull request #8053: [FLINK-12015] Fix TaskManagerRunnerTest instability

2019-03-26 Thread GitBox
aljoscha opened a new pull request #8053: [FLINK-12015] Fix 
TaskManagerRunnerTest instability
URL: https://github.com/apache/flink/pull/8053
 
 
   Before, there was a race condition between the termination future in
   TaskManagerRunner completing and the asynchronous shutdown part here:
   
https://github.com/apache/flink/blob/70107c4647ecac3df9b2b8c7920e7cb99ad550f1/flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskManagerRunner.java#L258
   
   The test would exit the block that was waiting on the future, but
   the shutdown code that is executed after the future completes runs
   asynchronously, so it is not guaranteed to have run at that point.
   
   This also refactors the code a bit to make it more obvious what is
   happening and removes the SecurityManagerContext because it was
   obscuring the problem.
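   The fix described above — composing the shutdown actions onto the termination
   future and waiting on the composed future — can be sketched with plain
   CompletableFutures (names below are illustrative, not the actual
   TaskManagerRunner code):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public class Main {
    static final AtomicBoolean shutdownRan = new AtomicBoolean(false);

    // Racy pattern: wait on terminationFuture directly; the callback may not have run yet.
    // Fixed pattern: wait on the *composed* future, which completes only after shutdown.
    static CompletableFuture<Void> composeShutdown(CompletableFuture<Void> terminationFuture) {
        return terminationFuture.thenRun(() -> shutdownRan.set(true));
    }

    public static void main(String[] args) {
        CompletableFuture<Void> terminationFuture = new CompletableFuture<>();
        CompletableFuture<Void> shutdownComplete = composeShutdown(terminationFuture);
        terminationFuture.complete(null);
        shutdownComplete.join(); // shutdown actions are guaranteed to have run past this point
        System.out.println(shutdownRan.get()); // true
    }
}
```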
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.




[jira] [Created] (FLINK-12023) Override TaskManager memory when submitting a Flink job

2019-03-26 Thread Michel Davit (JIRA)
Michel Davit created FLINK-12023:


 Summary: Override TaskManager memory when submitting a Flink job
 Key: FLINK-12023
 URL: https://issues.apache.org/jira/browse/FLINK-12023
 Project: Flink
  Issue Type: Wish
  Components: Runtime / Configuration
Reporter: Michel Davit


Currently a Flink session can only run TaskManagers of the same size.

However, depending on the jar or even the program arguments, some jobs can be 
more resource-intensive than others.

In order to improve memory usage and avoid wasting resources, it would be useful 
to have an option that overrides the default TaskManager memory setting when 
submitting a new job.

 

 





[jira] [Commented] (FLINK-9477) Support SQL 2016 JSON functions in Flink SQL

2019-03-26 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801808#comment-16801808
 ] 

Hequn Cheng commented on FLINK-9477:


+1 to have a little design document and split the methods into subtasks. 
As for the document, it would be better to have a list of functions with 
examples and results. Furthermore, we probably should take the Table API into 
consideration for each function (not only in the document but also in each PR)? 

> Support SQL 2016 JSON functions in Flink SQL
> 
>
> Key: FLINK-9477
> URL: https://issues.apache.org/jira/browse/FLINK-9477
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>






[jira] [Commented] (FLINK-12019) ZooKeeperHaServicesTest.testCloseAndCleanupAllDataWithUncle is unstable

2019-03-26 Thread TisonKun (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801807#comment-16801807
 ] 

TisonKun commented on FLINK-12019:
--

It looks like the same root issue as FLINK-12006

> ZooKeeperHaServicesTest.testCloseAndCleanupAllDataWithUncle is unstable
> ---
>
> Key: FLINK-12019
> URL: https://issues.apache.org/jira/browse/FLINK-12019
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.8.0
>Reporter: Aljoscha Krettek
>Assignee: TisonKun
>Priority: Critical
>  Labels: test-stability
>
> This is the relevant part of the log:
> {code}
> 08:26:51.737 [INFO] Running 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest
> 08:26:52.862 [INFO] Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 8.01 s - in org.apache.flink.runtime.jobmaster.JobMasterTest
> 08:26:53.612 [INFO] Running 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperRegistryTest
> 08:26:54.722 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 2.983 s <<< FAILURE! - in 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest
> 08:26:54.722 [ERROR] 
> testCloseAndCleanupAllDataWithUncle(org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest)
>   Time elapsed: 0.067 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: is null
>  but: was <75,75,1553588814702,1553588814702,0,0,0,0,0,0,75
> >
>   at 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest.testCloseAndCleanupAllDataWithUncle(ZooKeeperHaServicesTest.java:160)
> {code}
> Travis run: https://travis-ci.org/apache/flink/jobs/511335500





[jira] [Assigned] (FLINK-12019) ZooKeeperHaServicesTest.testCloseAndCleanupAllDataWithUncle is unstable

2019-03-26 Thread TisonKun (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

TisonKun reassigned FLINK-12019:


Assignee: TisonKun

> ZooKeeperHaServicesTest.testCloseAndCleanupAllDataWithUncle is unstable
> ---
>
> Key: FLINK-12019
> URL: https://issues.apache.org/jira/browse/FLINK-12019
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.8.0
>Reporter: Aljoscha Krettek
>Assignee: TisonKun
>Priority: Critical
>  Labels: test-stability
>
> This is the relevant part of the log:
> {code}
> 08:26:51.737 [INFO] Running 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest
> 08:26:52.862 [INFO] Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 8.01 s - in org.apache.flink.runtime.jobmaster.JobMasterTest
> 08:26:53.612 [INFO] Running 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperRegistryTest
> 08:26:54.722 [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time 
> elapsed: 2.983 s <<< FAILURE! - in 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest
> 08:26:54.722 [ERROR] 
> testCloseAndCleanupAllDataWithUncle(org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest)
>   Time elapsed: 0.067 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: is null
>  but: was <75,75,1553588814702,1553588814702,0,0,0,0,0,0,75
> >
>   at 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest.testCloseAndCleanupAllDataWithUncle(ZooKeeperHaServicesTest.java:160)
> {code}
> Travis run: https://travis-ci.org/apache/flink/jobs/511335500





[GitHub] [flink] flinkbot edited a comment on issue #8041: [FLINK-11153][dataset] Remove UdfAnalyzer

2019-03-26 Thread GitBox
flinkbot edited a comment on issue #8041: [FLINK-11153][dataset] Remove 
UdfAnalyzer 
URL: https://github.com/apache/flink/pull/8041#issuecomment-475944853
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @twalthr [PMC]
   * ✅ 2. There is [consensus] that the contribution should go into Flink.
   - Approved by @twalthr [PMC]
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] twalthr commented on issue #8041: [FLINK-11153][dataset] Remove UdfAnalyzer

2019-03-26 Thread GitBox
twalthr commented on issue #8041: [FLINK-11153][dataset] Remove UdfAnalyzer 
URL: https://github.com/apache/flink/pull/8041#issuecomment-476650537
 
 
   @flinkbot approve-until consensus




[jira] [Commented] (FLINK-12003) Revert the config option about mapreduce.output.basename in HadoopOutputFormatBase

2019-03-26 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801737#comment-16801737
 ] 

Timo Walther commented on FLINK-12003:
--

Sorry, I did this a long time ago. Cannot remember the reason anymore.

> Revert the config option about mapreduce.output.basename in 
> HadoopOutputFormatBase
> --
>
> Key: FLINK-12003
> URL: https://issues.apache.org/jira/browse/FLINK-12003
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hadoop Compatibility
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>
> In {{HadoopOutputFormatBase}} open method, the config option 
> {{mapreduce.output.basename}} was changed to "tmp" and there is not any 
> documentation state this change.
> By default, HDFS will use this format "part-x-y" to name its file, the x 
> and y means : 
>  * {{x}} is either 'm' or 'r', depending on whether the job was a map only 
> job, or reduce
>  * {{y}} is the mapper or reducer task number (zero based)
>  
> The keyword "part" has used in many place in user's business logic to match 
> the hdfs's file name. So I suggest to revert this config option or document 
> it.
>  





[GitHub] [flink] flinkbot edited a comment on issue #8050: [FLINK-11067][table] Convert TableEnvironments to interfaces

2019-03-26 Thread GitBox
flinkbot edited a comment on issue #8050: [FLINK-11067][table] Convert 
TableEnvironments to interfaces
URL: https://github.com/apache/flink/pull/8050#issuecomment-476498188
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ✅ 1. The [description] looks good.
   - Approved by @twalthr [PMC]
   * ✅ 2. There is [consensus] that the contribution should go into to Flink.
   - Approved by @twalthr [PMC]
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] twalthr commented on issue #8050: [FLINK-11067][table] Convert TableEnvironments to interfaces

2019-03-26 Thread GitBox
twalthr commented on issue #8050: [FLINK-11067][table] Convert 
TableEnvironments to interfaces
URL: https://github.com/apache/flink/pull/8050#issuecomment-476648401
 
 
   @flinkbot approve-until consensus


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11669) Add Synchronous Checkpoint Triggering RPCs.

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11669:
---
Labels: pull-request-available  (was: )

> Add Synchronous Checkpoint Triggering RPCs.
> ---
>
> Key: FLINK-11669
> URL: https://issues.apache.org/jira/browse/FLINK-11669
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> Wires the triggering of the Synchronous {{Checkpoint/Savepoint}} from the 
> {{JobMaster}} to the {{TaskExecutor}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9477) Support SQL 2016 JSON functions in Flink SQL

2019-03-26 Thread Timo Walther (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801718#comment-16801718
 ] 

Timo Walther commented on FLINK-9477:
-

Thanks for the suggestion [~yanghua]. That makes sense. It would be great if 
somebody could summarize the features that we actually want to support first in 
a little design document. Does this part of the standard introduce new types, 
functions, syntax changes, etc.?

> Support SQL 2016 JSON functions in Flink SQL
> 
>
> Key: FLINK-9477
> URL: https://issues.apache.org/jira/browse/FLINK-9477
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11671) Expose SUSPEND/TERMINATE to CLI

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11671:
---
Labels: pull-request-available  (was: )

> Expose SUSPEND/TERMINATE to CLI
> ---
>
> Key: FLINK-11671
> URL: https://issues.apache.org/jira/browse/FLINK-11671
> Project: Flink
>  Issue Type: Sub-task
>  Components: Command Line Client
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> Expose the SUSPEND/TERMINATE functionality to the user through the command 
> line.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11670) Add SUSPEND/TERMINATE calls to REST API

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11670:
---
Labels: pull-request-available  (was: )

> Add SUSPEND/TERMINATE calls to REST API
> ---
>
> Key: FLINK-11670
> URL: https://issues.apache.org/jira/browse/FLINK-11670
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> Exposes the SUSPEND/TERMINATE functionality to the user through the REST API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11458) Add TERMINATE/SUSPEND Job with Savepoint (FLIP-34)

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11458:
---
Affects Version/s: 1.8.0

> Add TERMINATE/SUSPEND Job with Savepoint (FLIP-34)
> --
>
> Key: FLINK-11458
> URL: https://issues.apache.org/jira/browse/FLINK-11458
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / State Backends
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> See FLIP-34: 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103090212



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11667) Add Synchronous Checkpoint handling in StreamTask

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11667:
---
Labels: pull-request-available  (was: )

> Add Synchronous Checkpoint handling in StreamTask
> -
>
> Key: FLINK-11667
> URL: https://issues.apache.org/jira/browse/FLINK-11667
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> This is the basic building block for the SUSPEND/TERMINATE functionality. 
> In case of a synchronous checkpoint barrier, the checkpointing thread will 
> block (without holding the checkpoint lock) until 
> {{notifyCheckpointComplete}} is executed successfully. This will allow the 
> checkpoint to be considered successful ONLY after 
> {{notifyCheckpointComplete}} has also executed successfully.
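The blocking behaviour described in this ticket can be sketched with plain Java concurrency primitives. The class and method names below are hypothetical simplifications inspired by the ticket, not the actual Flink implementation:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a synchronous-savepoint latch: the checkpointing
// thread blocks (without holding the checkpoint lock) until the
// notifyCheckpointComplete callback has run successfully.
public class SyncSavepointLatchSketch {
    private final CountDownLatch completed = new CountDownLatch(1);

    // Called by the checkpointing thread after the barrier is processed;
    // returns true once the checkpoint has been acknowledged.
    public boolean blockUntilCheckpointIsAcknowledged(long timeout, TimeUnit unit)
            throws InterruptedException {
        return completed.await(timeout, unit);
    }

    // Called when notifyCheckpointComplete has executed successfully.
    public void acknowledgeCheckpoint() {
        completed.countDown();
    }

    public static void main(String[] args) throws Exception {
        SyncSavepointLatchSketch latch = new SyncSavepointLatchSketch();
        Thread notifier = new Thread(latch::acknowledgeCheckpoint);
        notifier.start();
        // The checkpoint counts as successful only once the latch is released.
        boolean acknowledged = latch.blockUntilCheckpointIsAcknowledged(5, TimeUnit.SECONDS);
        System.out.println(acknowledged); // true
        notifier.join();
    }
}
```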



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11668) Allow sources to advance time to max watermark on checkpoint.

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11668:
---
Labels: pull-request-available  (was: )

> Allow sources to advance time to max watermark on checkpoint.
> -
>
> Key: FLINK-11668
> URL: https://issues.apache.org/jira/browse/FLINK-11668
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> This is needed for the TERMINATE case. It will allow the sources to inject 
> the {{MAX_WATERMARK}} before the barrier that will trigger the last 
> savepoint. This will fire any registered event-time timers and flush any 
> state associated with these timers, e.g. windows.
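The effect of injecting {{MAX_WATERMARK}} can be illustrated with a minimal event-time timer queue. This is a hypothetical sketch, not Flink's actual timer service:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch of why MAX_WATERMARK flushes all event-time timers:
// advancing the watermark to Long.MAX_VALUE makes every registered timer
// eligible to fire, so windowed state can be emitted before the final savepoint.
public class MaxWatermarkSketch {
    static final long MAX_WATERMARK = Long.MAX_VALUE;

    private final PriorityQueue<Long> timers = new PriorityQueue<>();

    void registerEventTimeTimer(long timestamp) {
        timers.add(timestamp);
    }

    // Fire every timer whose timestamp is <= the new watermark.
    List<Long> advanceWatermark(long watermark) {
        List<Long> fired = new ArrayList<>();
        while (!timers.isEmpty() && timers.peek() <= watermark) {
            fired.add(timers.poll());
        }
        return fired;
    }

    public static void main(String[] args) {
        MaxWatermarkSketch sketch = new MaxWatermarkSketch();
        sketch.registerEventTimeTimer(1_000L);
        sketch.registerEventTimeTimer(2_000L);
        // A normal watermark only fires timers up to its timestamp...
        System.out.println(sketch.advanceWatermark(1_500L));        // [1000]
        // ...while MAX_WATERMARK fires everything still registered.
        System.out.println(sketch.advanceWatermark(MAX_WATERMARK)); // [2000]
    }
}
```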



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11670) Add SUSPEND/TERMINATE calls to REST API

2019-03-26 Thread Kostas Kloudas (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801714#comment-16801714
 ] 

Kostas Kloudas commented on FLINK-11670:


Open PR https://github.com/apache/flink/pull/8052

> Add SUSPEND/TERMINATE calls to REST API
> ---
>
> Key: FLINK-11670
> URL: https://issues.apache.org/jira/browse/FLINK-11670
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> Exposes the SUSPEND/TERMINATE functionality to the user through the REST API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11670) Add SUSPEND/TERMINATE calls to REST API

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11670:
---
Affects Version/s: 1.8.0

> Add SUSPEND/TERMINATE calls to REST API
> ---
>
> Key: FLINK-11670
> URL: https://issues.apache.org/jira/browse/FLINK-11670
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> Exposes the SUSPEND/TERMINATE functionality to the user through the REST API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11669) Add Synchronous Checkpoint Triggering RPCs.

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11669:
---
Affects Version/s: 1.8.0

> Add Synchronous Checkpoint Triggering RPCs.
> ---
>
> Key: FLINK-11669
> URL: https://issues.apache.org/jira/browse/FLINK-11669
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> Wires the triggering of the Synchronous {{Checkpoint/Savepoint}} from the 
> {{JobMaster}} to the {{TaskExecutor}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11671) Expose SUSPEND/TERMINATE to CLI

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11671:
---
Affects Version/s: 1.8.0

> Expose SUSPEND/TERMINATE to CLI
> ---
>
> Key: FLINK-11671
> URL: https://issues.apache.org/jira/browse/FLINK-11671
> Project: Flink
>  Issue Type: Sub-task
>  Components: Command Line Client
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> Expose the SUSPEND/TERMINATE functionality to the user through the command 
> line.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11669) Add Synchronous Checkpoint Triggering RPCs.

2019-03-26 Thread Kostas Kloudas (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801711#comment-16801711
 ] 

Kostas Kloudas commented on FLINK-11669:


Open PR https://github.com/apache/flink/pull/8052

> Add Synchronous Checkpoint Triggering RPCs.
> ---
>
> Key: FLINK-11669
> URL: https://issues.apache.org/jira/browse/FLINK-11669
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> Wires the triggering of the Synchronous {{Checkpoint/Savepoint}} from the 
> {{JobMaster}} to the {{TaskExecutor}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11668) Allow sources to advance time to max watermark on checkpoint.

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11668:
---
Affects Version/s: 1.8.0

> Allow sources to advance time to max watermark on checkpoint.
> -
>
> Key: FLINK-11668
> URL: https://issues.apache.org/jira/browse/FLINK-11668
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> This is needed for the TERMINATE case. It will allow the sources to inject 
> the {{MAX_WATERMARK}} before the barrier that will trigger the last 
> savepoint. This will fire any registered event-time timers and flush any 
> state associated with these timers, e.g. windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11671) Expose SUSPEND/TERMINATE to CLI

2019-03-26 Thread Kostas Kloudas (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801712#comment-16801712
 ] 

Kostas Kloudas commented on FLINK-11671:


Open PR https://github.com/apache/flink/pull/8052

> Expose SUSPEND/TERMINATE to CLI
> ---
>
> Key: FLINK-11671
> URL: https://issues.apache.org/jira/browse/FLINK-11671
> Project: Flink
>  Issue Type: Sub-task
>  Components: Command Line Client
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> Expose the SUSPEND/TERMINATE functionality to the user through the command 
> line.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11668) Allow sources to advance time to max watermark on checkpoint.

2019-03-26 Thread Kostas Kloudas (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801710#comment-16801710
 ] 

Kostas Kloudas commented on FLINK-11668:


Open PR https://github.com/apache/flink/pull/8052

> Allow sources to advance time to max watermark on checkpoint.
> -
>
> Key: FLINK-11668
> URL: https://issues.apache.org/jira/browse/FLINK-11668
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> This is needed for the TERMINATE case. It will allow the sources to inject 
> the {{MAX_WATERMARK}} before the barrier that will trigger the last 
> savepoint. This will fire any registered event-time timers and flush any 
> state associated with these timers, e.g. windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11667) Add Synchronous Checkpoint handling in StreamTask

2019-03-26 Thread Kostas Kloudas (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas updated FLINK-11667:
---
Affects Version/s: 1.8.0

> Add Synchronous Checkpoint handling in StreamTask
> -
>
> Key: FLINK-11667
> URL: https://issues.apache.org/jira/browse/FLINK-11667
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Affects Versions: 1.8.0
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> This is the basic building block for the SUSPEND/TERMINATE functionality. 
> In case of a synchronous checkpoint barrier, the checkpointing thread will 
> block (without holding the checkpoint lock) until 
> {{notifyCheckpointComplete}} is executed successfully. This will allow the 
> checkpoint to be considered successful ONLY after 
> {{notifyCheckpointComplete}} has also executed successfully.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11667) Add Synchronous Checkpoint handling in StreamTask

2019-03-26 Thread Kostas Kloudas (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16801708#comment-16801708
 ] 

Kostas Kloudas commented on FLINK-11667:


Open PR https://github.com/apache/flink/pull/8052

> Add Synchronous Checkpoint handling in StreamTask
> -
>
> Key: FLINK-11667
> URL: https://issues.apache.org/jira/browse/FLINK-11667
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>
> This is the basic building block for the SUSPEND/TERMINATE functionality. 
> In case of a synchronous checkpoint barrier, the checkpointing thread will 
> block (without holding the checkpoint lock) until 
> {{notifyCheckpointComplete}} is executed successfully. This will allow the 
> checkpoint to be considered successful ONLY after 
> {{notifyCheckpointComplete}} has also executed successfully.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] klion26 commented on a change in pull request #8045: [FLINK-12004][TypeInformation] analyze pojo using method chaining

2019-03-26 Thread GitBox
klion26 commented on a change in pull request #8045: 
[FLINK-12004][TypeInformation] analyze pojo using method chaining
URL: https://github.com/apache/flink/pull/8045#discussion_r269097538
 
 

 ##
 File path: 
flink-core/src/test/java/org/apache/flink/api/java/typeutils/TypeExtractorTest.java
 ##
 @@ -348,6 +348,18 @@ public CustomType cross(CustomType first, Integer second) 
throws Exception {

Assert.assertFalse(TypeExtractor.getForClass(PojoWithNonPublicDefaultCtor.class)
 instanceof PojoTypeInfo);
}
 
+   @Test
+   public void testMethdChainingPojo() {
 
 Review comment:
   do we need to extend this test as `testPojo`?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-11458) Add TERMINATE/SUSPEND Job with Savepoint (FLIP-34)

2019-03-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11458:
---
Labels: pull-request-available  (was: )

> Add TERMINATE/SUSPEND Job with Savepoint (FLIP-34)
> --
>
> Key: FLINK-11458
> URL: https://issues.apache.org/jira/browse/FLINK-11458
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / State Backends
>Reporter: Kostas Kloudas
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> See FLIP-34: 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103090212



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] flinkbot commented on issue #8052: [FLINK-11458][checkpointing, rest] Add TERMINATE/SUSPEND Job with Savepoint (FLIP-34)

2019-03-26 Thread GitBox
flinkbot commented on issue #8052: [FLINK-11458][checkpointing, rest] Add 
TERMINATE/SUSPEND Job with Savepoint (FLIP-34)
URL: https://github.com/apache/flink/pull/8052#issuecomment-476624630
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/reviewing-prs.html) for a full explanation of 
the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] kl0u opened a new pull request #8052: [FLINK-11458][checkpointing, rest] Add TERMINATE/SUSPEND Job with Savepoint (FLIP-34)

2019-03-26 Thread GitBox
kl0u opened a new pull request #8052: [FLINK-11458][checkpointing, rest] Add 
TERMINATE/SUSPEND Job with Savepoint (FLIP-34)
URL: https://github.com/apache/flink/pull/8052
 
 
   ## What is the purpose of the change
   
   This is a first (almost complete) implementation of the FLIP-34 effort. 
FLIP-34 can be found here 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103090212 and 
proposes the addition of the `suspend` and `drain` commands to the `CLI` and 
the `REST` API in general.
   
   With this addition, the user will be able to write to the CLI:
   `flink stop -s  JOB_ID` for suspending a job with a 
savepoint, and
   `flink stop -s  -d JOB_ID` for draining the pipeline.
   
   ## Brief change log
   
   Not all changes/additions can be described here. A list of the most 
important changes follows:
   * Adds the `CheckpointType.SYNC_SAVEPOINT` for synchronous savepoints
   * Adds the `SynchronousSavepointLatch` which blocks the checkpointing thread 
of the `Task` until also the `notifyCheckpointComplete` is successfully 
executed. This will guarantee that in the case of a synchronous checkpoint, the 
task will wait for the notify callback to be completed, before the task can 
resume execution.
   * Adds the `advanceToEndOfTime` and propagates it from the JM and the 
`CheckpointCoordinator` to the TM.
   * Adds the adequate REST commands and handlers to expose this functionality 
through the REST API.
   
   ## Verifying this change
   
   This change added multiple tests and ITCases. 
   In addition, this addition was manually verified locally.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**yes** / no)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (**yes** / no / don't know) 
(adds the new `suspend` and `terminate` commands).
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented yet**)
   
   
   ## NOTE TO REVIEWERS
   
   These changes have not been tested yet with:
- The `AsyncIO` operations.
- The `FileMonitoringSource`.
- Any other operator/function that does not strictly respect the contract 
of the exposed APIs, e.g. any sink that does asynchronous work in the 
`notifyOnCheckpointComplete` and does not wait for its completion.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-12022) Enable StreamWriter to update file length on sync flush

2019-03-26 Thread Paul Lin (JIRA)
Paul Lin created FLINK-12022:


 Summary: Enable StreamWriter to update file length on sync flush
 Key: FLINK-12022
 URL: https://issues.apache.org/jira/browse/FLINK-12022
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / FileSystem
Affects Versions: 1.7.2, 1.6.4
Reporter: Paul Lin
Assignee: Paul Lin


Currently, users of file systems that do not support truncating have to 
struggle with BucketingSink and use its valid-length file to indicate the 
checkpointed data position. The problem is that, by default, the file length is 
only updated when a block is full or the file is closed. When the job crashes 
and the file is not closed properly, the visible file length lags behind both 
its actual value and the checkpointed file length. When the job restarts, this 
looks like data loss, because the valid length is bigger than the visible file. 
The situation lasts until the NameNode notices the change in the file's block 
size, which can take half an hour or more.

So I propose adding an option to StreamWriterBase to update the file length on 
each flush. This can be expensive because it involves the NameNode, and it 
should only be used when strong consistency is needed.
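The proposed option can be sketched as a flag-guarded hook on flush. The `updateLength` callback below is a hypothetical stand-in for an HDFS length-updating sync (on real HDFS this would involve a NameNode round trip); none of this is the actual BucketingSink or StreamWriterBase code:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.function.LongConsumer;

// Hypothetical sketch of the proposed StreamWriterBase option: when
// syncOnFlush is enabled, every flush also reports the current file length,
// so the visible length never lags behind the checkpointed valid length.
public class LengthTrackingWriter {
    private final OutputStream out;
    private final boolean syncOnFlush;       // the proposed opt-in flag
    private final LongConsumer updateLength; // stand-in for an HDFS length sync
    private long bytesWritten;

    LengthTrackingWriter(OutputStream out, boolean syncOnFlush, LongConsumer updateLength) {
        this.out = out;
        this.syncOnFlush = syncOnFlush;
        this.updateLength = updateLength;
    }

    void write(byte[] data) throws IOException {
        out.write(data);
        bytesWritten += data.length;
    }

    void flush() throws IOException {
        out.flush();
        if (syncOnFlush) {
            // Expensive on real HDFS (NameNode involvement), hence opt-in.
            updateLength.accept(bytesWritten);
        }
    }

    public static void main(String[] args) throws IOException {
        long[] visibleLength = {0};
        LengthTrackingWriter writer = new LengthTrackingWriter(
                new ByteArrayOutputStream(), true, len -> visibleLength[0] = len);
        writer.write(new byte[42]);
        writer.flush();
        System.out.println(visibleLength[0]); // 42
    }
}
```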



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11975) Support to run a sql query : 'SELECT * FROM (VALUES (1, 2, 3)) T(a, b, c)'

2019-03-26 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-11975.
--
   Resolution: Implemented
Fix Version/s: 1.9.0

fixed in 6333fc734f730e64c5f96390f345e7e0b1352662

> Support to run a sql query :  'SELECT * FROM (VALUES (1, 2, 3)) T(a, b, c)'
> ---
>
> Key: FLINK-11975
> URL: https://issues.apache.org/jira/browse/FLINK-11975
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Support to run a sql query : 'SELECT * FROM (VALUES (1, 2, 3)) T(a, b, c)', 
> including:
> 1. add writeToSink, translateNodeDag in batch and stream tableEnv
> 2. introduce SinkRules for batch and stream
> 3. introduce subclasses of TableSink, including BaseUpsertStreamTableSink, 
> BatchTableSink, CollectTableSink, DataStreamTableSink
> 4. StreamExecSink/BatchExecSink implement the ExecNode interface
> 5. StreamExecValues/BatchExecValues implement the ExecNode interface; add 
> CodeGen for Values
> 6. add ITCase test infrastructure; add an ITCase to run 'SELECT * FROM (VALUES 
> (1, 2, 3)) T(a, b, c)' for batch and stream



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11975) Support to run a sql query : 'SELECT * FROM (VALUES (1, 2, 3)) T(a, b, c)'

2019-03-26 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young reassigned FLINK-11975:
--

Assignee: Jing Zhang

> Support to run a sql query :  'SELECT * FROM (VALUES (1, 2, 3)) T(a, b, c)'
> ---
>
> Key: FLINK-11975
> URL: https://issues.apache.org/jira/browse/FLINK-11975
> Project: Flink
>  Issue Type: Task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Jing Zhang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Support to run a sql query : 'SELECT * FROM (VALUES (1, 2, 3)) T(a, b, c)', 
> including:
> 1. add writeToSink, translateNodeDag in batch and stream tableEnv
> 2. introduce SinkRules for batch and stream
> 3. introduce subclasses of TableSink, including BaseUpsertStreamTableSink, 
> BatchTableSink, CollectTableSink, DataStreamTableSink
> 4. StreamExecSink/BatchExecSink implement the ExecNode interface
> 5. StreamExecValues/BatchExecValues implement the ExecNode interface; add 
> CodeGen for Values
> 6. add ITCase test infrastructure; add an ITCase to run 'SELECT * FROM (VALUES 
> (1, 2, 3)) T(a, b, c)' for batch and stream



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [flink] KurtYoung merged pull request #8035: [FLINK-11975][table-planner-blink] Support to run a sql query : 'SELECT * FROM (VALUES (1, 2, 3)) T(a, b, c)'

2019-03-26 Thread GitBox
KurtYoung merged pull request #8035: [FLINK-11975][table-planner-blink] Support 
to run a sql query : 'SELECT * FROM (VALUES (1, 2, 3)) T(a, b, c)'
URL: https://github.com/apache/flink/pull/8035
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 commented on issue #8047: [FLINK-12001][tests] fix the external jar path config error for AvroE…

2019-03-26 Thread GitBox
sunjincheng121 commented on issue #8047: [FLINK-12001][tests] fix the external 
jar path config error for AvroE…
URL: https://github.com/apache/flink/pull/8047#issuecomment-476599660
 
 
   @flinkbot attention  @zentol  @aljoscha 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] sunjincheng121 commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on a change in pull request #8007: 
[FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269040649
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/ReadableCatalog.java
 ##
 @@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import org.apache.flink.table.api.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.api.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.plan.stats.TableStats;
+
+import java.util.List;
+
+/**
+ * This interface is responsible for reading databases/tables/views/UDFs from a registered catalog.
+ * It connects a registered catalog to Flink's Table API.
+ */
+public interface ReadableCatalog {
+
+   /**
+* Open the catalog. Used for any required preparation in 
initialization phase.
+*/
+   void open();
+
+   /**
+* Close the catalog when it is no longer needed and release any 
resource that it might be holding.
+*/
+   void close();
+
+   /**
+* Get the default database of this type of catalog. This is used when users refer to an object in the catalog
+* without specifying a database. For example, the default db in a Hive Metastore is 'default' by default.
+*
+* @return the name of the default database
+*/
+   String getDefaultDatabaseName();
+
+   /**
+* Set the default database. A default database is used when users refer to an object in the catalog
+* without specifying a database.
+*
+* @param databaseName  the name of the database
+*/
+   void setDefaultDatabaseName(String databaseName);
+
+   // -- databases --
 
 Review comment:
   Do we need this comment? If so, I suggest moving it to the `getDefaultDatabaseName` part. What do you think?




[GitHub] [flink] sunjincheng121 commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on a change in pull request #8007: 
[FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269038738
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/ObjectPath.java
 ##
 @@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import java.util.Objects;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * A database name and object (table/view/function) name combo in a catalog.
+ */
+public class ObjectPath {
+   private final String databaseName;
+   private final String objectName;
+
+   public ObjectPath(String databaseName, String objectName) {
+   checkNotNull(databaseName, "databaseName cannot be null");
 
 Review comment:
   What are the format restrictions for `databaseName` and `objectName`, such as special characters (e.g. '.')? I ask because we use `fullName.split("\\.")`, right?




[GitHub] [flink] sunjincheng121 commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on a change in pull request #8007: 
[FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269034814
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/CatalogDatabase.java
 ##
 @@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Represents a database object in a catalog.
+ */
+public class CatalogDatabase {
+   // Property map for the database.
 
 Review comment:
   How about using an interface? We may have `HiveCatalogDatabase`, `MySqlCatalogDatabase`, etc.
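The suggestion could look roughly like this (a sketch only; `HiveCatalogDatabase` and its `location` field are hypothetical, not part of the PR):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Shared contract: every catalog database exposes a property map.
interface CatalogDatabase {
    Map<String, String> getProperties();
}

// A catalog-specific variant carrying extra, Hive-only metadata.
class HiveCatalogDatabase implements CatalogDatabase {
    private final Map<String, String> properties = new HashMap<>();
    private final String location; // hypothetical Hive-specific field

    HiveCatalogDatabase(String location) {
        this.location = location;
    }

    @Override
    public Map<String, String> getProperties() {
        return Collections.unmodifiableMap(properties);
    }

    String getLocation() {
        return location;
    }
}
```

The interface keeps `ReadableCatalog` return types generic while letting each connector attach its own metadata.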




[GitHub] [flink] sunjincheng121 commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on a change in pull request #8007: 
[FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269044922
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/ReadableWritableCatalog.java
 ##
 @@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import org.apache.flink.table.api.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.api.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.api.catalog.exceptions.TableAlreadyExistException;
+import org.apache.flink.table.api.catalog.exceptions.TableNotExistException;
+
+/**
+ * An interface responsible for manipulating catalog metadata.
+ */
+public interface ReadableWritableCatalog extends ReadableCatalog {
+
+   // -- databases --
 
 Review comment:
   Same as above.




[GitHub] [flink] sunjincheng121 commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on a change in pull request #8007: 
[FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269036820
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/CommonTable.java
 ##
 @@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import org.apache.flink.table.api.TableSchema;
+
+import java.util.Map;
+
+/**
+ * CommonTable is the common parent of table and view. It has a map of
+ * key-value pairs defining the properties of the table.
+ */
+public interface CommonTable {
+   /**
 
 Review comment:
   Do we need to add a `getName` method?




[GitHub] [flink] sunjincheng121 commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on a change in pull request #8007: 
[FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269044748
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/ReadableCatalog.java
 ##
 @@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import org.apache.flink.table.api.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.api.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.plan.stats.TableStats;
+
+import java.util.List;
+
+/**
+ * This interface is responsible for reading databases/tables/views/UDFs from a registered catalog.
+ * It connects a registered catalog to Flink's Table API.
+ */
+public interface ReadableCatalog {
+
+   /**
+* Open the catalog. Used for any required preparation in 
initialization phase.
+*/
+   void open();
+
+   /**
+* Close the catalog when it is no longer needed and release any 
resource that it might be holding.
+*/
+   void close();
+
+   /**
+* Get the default database of this type of catalog. This is used when users refer to an object in the catalog
+* without specifying a database. For example, the default db in a Hive Metastore is 'default' by default.
+*
+* @return the name of the default database
+*/
+   String getDefaultDatabaseName();
+
+   /**
+* Set the default database. A default database is used when users refer to an object in the catalog
+* without specifying a database.
+*
+* @param databaseName  the name of the database
+*/
+   void setDefaultDatabaseName(String databaseName);
+
+   // -- databases --
+   /**
+* Get the names of all databases in this catalog.
+*
+* @return The list of the names of all databases
+*/
+   List<String> listDatabases();
+
+   /**
+* Get a database from this catalog.
+*
+* @param dbName Name of the database
+* @return The requested database
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   CatalogDatabase getDatabase(String dbName) throws DatabaseNotExistException;
+
+   /**
+* Check if a database exists in this catalog.
+*
+* @param dbName Name of the database
+*/
+   boolean databaseExists(String dbName);
+
+   /**
+* Gets paths of all tables and views under this database. An empty 
list is returned if none exists.
+*
+* @return A list of the names of all tables and views in this database
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   List<String> listTables(String dbName) throws DatabaseNotExistException;
+
+   /**
+* Gets a CatalogTable or CatalogView identified by objectPath.
+*
+* @param objectPath Path of the table or view
+* @throws TableNotExistException if the target does not exist
+* @return The requested table or view
+*/
+   CommonTable getTable(ObjectPath objectPath) throws TableNotExistException;
+
+   /**
+* Checks if a table or view exists in this catalog.
+*
+* @param objectPath Path of the table or view
+*/
+   boolean tableExists(ObjectPath objectPath);
+
+   /**
+* Gets paths of all views under this database. An empty list is 
returned if none exists.
+*
+* @param databaseName the name of the given database
+* @return the list of the names of all views in the given database
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   List<String> listViews(String databaseName) throws DatabaseNotExistException;
+
+
+   /**
+* Gets the statistics of a table.
+*
+* @param tablePath Path of the table
+* @return The statistics of the given table
+* @throws Ta
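The lookup contract quoted above can be exercised with a minimal in-memory sketch (local stand-in types for this example only, not the actual Flink classes):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Local stand-in for the catalog's DatabaseNotExistException.
class DatabaseNotExistException extends Exception {
    DatabaseNotExistException(String dbName) {
        super("Database does not exist: " + dbName);
    }
}

// A toy catalog mirroring the databaseExists/getDatabase semantics
// described in the javadocs.
class InMemoryCatalog {
    private final Map<String, Map<String, String>> databases = new HashMap<>();
    private final String defaultDatabase = "default"; // as in a Hive Metastore

    String getDefaultDatabaseName() {
        return defaultDatabase;
    }

    boolean databaseExists(String dbName) {
        return databases.containsKey(dbName);
    }

    List<String> listDatabases() {
        return new ArrayList<>(databases.keySet());
    }

    Map<String, String> getDatabase(String dbName) throws DatabaseNotExistException {
        Map<String, String> db = databases.get(dbName);
        if (db == null) {
            // the javadoc contract: missing database -> exception
            throw new DatabaseNotExistException(dbName);
        }
        return db;
    }

    void addDatabase(String dbName, Map<String, String> properties) {
        databases.put(dbName, properties);
    }
}
```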

[GitHub] [flink] sunjincheng121 commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on a change in pull request #8007: 
[FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269035396
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/CatalogDatabase.java
 ##
 @@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import java.util.HashMap;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * Represents a database object in a catalog.
+ */
+public class CatalogDatabase {
+   // Property map for the database.
+   private final Map<String, String> properties;
+
+   public CatalogDatabase() {
 
 Review comment:
   Do we need a `name` member, something like `databaseName`?




[GitHub] [flink] sunjincheng121 commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on a change in pull request #8007: 
[FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269045279
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/ReadableWritableCatalog.java
 ##
 @@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import org.apache.flink.table.api.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.api.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.api.catalog.exceptions.TableAlreadyExistException;
+import org.apache.flink.table.api.catalog.exceptions.TableNotExistException;
+
+/**
+ * An interface responsible for manipulating catalog metadata.
+ */
+public interface ReadableWritableCatalog extends ReadableCatalog {
+
+   // -- databases --
+
+   /**
+* Create a database.
+*
+* @param name   Name of the database to be created
+* @param database   The database definition
+* @param ignoreIfExists Flag to specify behavior when a database with 
the given name already exists:
+*   if set to false, throw a 
DatabaseAlreadyExistException,
+*   if set to true, do nothing.
+* @throws DatabaseAlreadyExistException if the given database already 
exists and ignoreIfExists is false
+*/
+   void createDatabase(String name, CatalogDatabase database, boolean ignoreIfExists)
+   throws DatabaseAlreadyExistException;
+
+   /**
+* Drop a database.
+*
+* @param name  Name of the database to be dropped.
+* @param ignoreIfNotExists Flag to specify behavior when the database 
does not exist:
+*  if set to false, throw an exception,
+*  if set to true, do nothing.
+* @throws DatabaseNotExistException if the given database does not 
exist
+*/
+   void dropDatabase(String name, boolean ignoreIfNotExists) throws DatabaseNotExistException;
+
+   /**
+* Modify an existing database.
+*
+* @param name Name of the database to be modified
+* @param newDatabase The new database definition
+* @param ignoreIfNotExists Flag to specify behavior when the given 
database does not exist:
+*  if set to false, throw an exception,
+*  if set to true, do nothing.
+* @throws DatabaseNotExistException if the given database does not 
exist
+*/
+   void alterDatabase(String name, CatalogDatabase newDatabase, boolean ignoreIfNotExists)
+   throws DatabaseNotExistException;
+
+   /**
+* Rename an existing database.
+*
+* @param name Name of the database to be renamed
+* @param newName the new name of the database
+* @param ignoreIfNotExists Flag to specify behavior when the given 
database does not exist:
+*  if set to false, throw an exception,
+*  if set to true, do nothing.
+* @throws DatabaseNotExistException if the database does not exist
+*/
+   void renameDatabase(String name, String newName, boolean ignoreIfNotExists)
+   throws DatabaseNotExistException;
+
+   // -- tables and views --
+
+   /**
+* Drop a table or view.
+*
+* @param tablePath Path of the table or view to be dropped
+* @param ignoreIfNotExists Flag to specify behavior when the table or 
view does not exist:
+*  if set to false, throw an exception,
+*  if set to true, do nothing.
+* @throws TableNotExistException if the table or view does not exist
+*/
+   void dropTable(ObjectPath tablePath, boolean ignoreIfNotExists) throws TableNotExistException;
+
+   // -- tables --
+
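The `ignoreIfExists` / `ignoreIfNotExists` flags described in the javadocs above amount to the following behavior (a sketch with local stand-in types, not the actual Flink classes):

```java
import java.util.HashMap;
import java.util.Map;

// Local stand-in for the catalog's DatabaseAlreadyExistException.
class DatabaseAlreadyExistException extends Exception {
    DatabaseAlreadyExistException(String dbName) {
        super("Database already exists: " + dbName);
    }
}

class WritableCatalogSketch {
    private final Map<String, Map<String, String>> databases = new HashMap<>();

    void createDatabase(String name, Map<String, String> properties, boolean ignoreIfExists)
            throws DatabaseAlreadyExistException {
        if (databases.containsKey(name)) {
            if (ignoreIfExists) {
                return; // flag set: keep the existing database, do nothing
            }
            throw new DatabaseAlreadyExistException(name);
        }
        databases.put(name, properties);
    }

    boolean databaseExists(String name) {
        return databases.containsKey(name);
    }
}
```

The same pattern, inverted, applies to the drop/alter/rename methods and their `ignoreIfNotExists` flag.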

[GitHub] [flink] sunjincheng121 commented on a change in pull request #8007: [FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …

2019-03-26 Thread GitBox
sunjincheng121 commented on a change in pull request #8007: 
[FLINK-11474][table] Add ReadableCatalog, ReadableWritableCatalog, and other …
URL: https://github.com/apache/flink/pull/8007#discussion_r269039472
 
 

 ##
 File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/api/catalog/ObjectPath.java
 ##
 @@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.api.catalog;
+
+import java.util.Objects;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * A database name and object (table/view/function) name combo in a catalog.
+ */
+public class ObjectPath {
+   private final String databaseName;
+   private final String objectName;
+
+   public ObjectPath(String databaseName, String objectName) {
+   checkNotNull(databaseName, "databaseName cannot be null");
+   checkNotNull(objectName, "objectName cannot be null");
+
+   this.databaseName = databaseName;
+   this.objectName = objectName;
+   }
+
+   public String getDatabaseName() {
+   return databaseName;
+   }
+
+   public String getObjectName() {
+   return objectName;
+   }
+
+   public String getFullName() {
+   return String.format("%s.%s", databaseName, objectName);
+   }
+
+   public static ObjectPath fromString(String fullName) {
+   String[] paths = fullName.split("\\.");
 
 Review comment:
   Please check the null case for `fullName`.
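A defensive variant addressing this comment could look like the following (a sketch with a local stand-in class; the actual PR may choose different validation):

```java
// Stand-in for ObjectPath.fromString with the null check the reviewer asks
// for, plus a shape check on the split result.
class ObjectPathSketch {
    final String databaseName;
    final String objectName;

    ObjectPathSketch(String databaseName, String objectName) {
        this.databaseName = databaseName;
        this.objectName = objectName;
    }

    static ObjectPathSketch fromString(String fullName) {
        if (fullName == null) {
            throw new IllegalArgumentException("fullName cannot be null");
        }
        String[] paths = fullName.split("\\.");
        if (paths.length != 2) {
            throw new IllegalArgumentException(
                "fullName must have the form 'databaseName.objectName', got: " + fullName);
        }
        return new ObjectPathSketch(paths[0], paths[1]);
    }
}
```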




  1   2   >