[jira] [Commented] (FLINK-13740) TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis

2019-08-15 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908756#comment-16908756
 ] 

Hequn Cheng commented on FLINK-13740:
-

[~jark] You are right. Furthermore, we may also have to call duplicate() in 
the materialize() method of BinaryGeneric.
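
A minimal sketch of that idea, assuming a free-standing helper (the method and 
parameter names are illustrative, not the actual BinaryGeneric internals):
{code:java}
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.util.InstantiationUtil;

// Hypothetical helper: serialize with a thread-confined duplicate of the
// serializer, so materialize() never shares mutable Kryo state across threads.
static <T> byte[] materializeWithDuplicate(T javaObject, TypeSerializer<T> serializer)
        throws java.io.IOException {
    // duplicate() returns an independent instance for stateful serializers
    // (e.g. KryoSerializer); stateless serializers may return themselves.
    return InstantiationUtil.serializeToByteArray(serializer.duplicate(), javaObject);
}
{code}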

Good to hear that it is not a blocker. 

> TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis
> ---
>
> Key: FLINK-13740
> URL: https://issues.apache.org/jira/browse/FLINK-13740
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The {{TableAggregateITCase.testNonkeyedFlatAggregate}} failed on Travis with 
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testNonkeyedFlatAggregate(TableAggregateITCase.scala:93)
> Caused by: java.lang.Exception: Artificial Failure
> {code}
> https://api.travis-ci.com/v3/job/225551182/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-13740) TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis

2019-08-15 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908734#comment-16908734
 ] 

Jark Wu edited comment on FLINK-13740 at 8/16/19 5:43 AM:
--

One possible fix is to duplicate the objSerializer in BinaryGeneric when 
{{BinaryGeneric.copy}} is called. 
https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/BinaryGeneric.java#L73
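
As a rough, hypothetical sketch of that idea (the class, field, and constructor 
names are placeholders, not the real BinaryGeneric code):
{code:java}
import org.apache.flink.api.common.typeutils.TypeSerializer;

// Hypothetical simplification of a lazily-serialized generic value; only the
// copy() logic matters here: the copy gets its own serializer instance.
public class GenericValue<T> {
    private final T javaObject;
    private final TypeSerializer<T> serializer;

    public GenericValue(T javaObject, TypeSerializer<T> serializer) {
        this.javaObject = javaObject;
        this.serializer = serializer;
    }

    public GenericValue<T> copy() {
        // serializer.duplicate() yields an independent instance for stateful
        // serializers such as KryoSerializer, so the copy can later be
        // materialized from another thread without sharing Kryo state.
        return new GenericValue<>(serializer.copy(javaObject), serializer.duplicate());
    }
}
{code}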


was (Author: jark):
One possible fix is to duplicate the serializer of BinaryGeneric when 
{{BinaryGeneric.copy}} is called. 
https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/BinaryGeneric.java#L73

> TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis
> ---
>
> Key: FLINK-13740
> URL: https://issues.apache.org/jira/browse/FLINK-13740
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The {{TableAggregateITCase.testNonkeyedFlatAggregate}} failed on Travis with 
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testNonkeyedFlatAggregate(TableAggregateITCase.scala:93)
> Caused by: java.lang.Exception: Artificial Failure
> {code}
> https://api.travis-ci.com/v3/job/225551182/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13740) TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis

2019-08-15 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908734#comment-16908734
 ] 

Jark Wu commented on FLINK-13740:
-

One possible fix is to duplicate the serializer of BinaryGeneric when 
{{BinaryGeneric.copy}} is called. 
https://github.com/apache/flink/blob/master/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/dataformat/BinaryGeneric.java#L73

> TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis
> ---
>
> Key: FLINK-13740
> URL: https://issues.apache.org/jira/browse/FLINK-13740
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The {{TableAggregateITCase.testNonkeyedFlatAggregate}} failed on Travis with 
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testNonkeyedFlatAggregate(TableAggregateITCase.scala:93)
> Caused by: java.lang.Exception: Artificial Failure
> {code}
> https://api.travis-ci.com/v3/job/225551182/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13740) TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis

2019-08-15 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908729#comment-16908729
 ] 

Jark Wu commented on FLINK-13740:
-

Thanks [~hequn8128] for looking into this problem. 

It seems that this happens when the async heap snapshot is enabled and there is 
a generic type element in the accumulator, so the KryoSerializer ends up being 
used by two threads. Is that right?

I will not block release-1.9 on this because the blink planner is an 
experimental feature, and we can mention this problem in the release notes. 



> TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis
> ---
>
> Key: FLINK-13740
> URL: https://issues.apache.org/jira/browse/FLINK-13740
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The {{TableAggregateITCase.testNonkeyedFlatAggregate}} failed on Travis with 
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testNonkeyedFlatAggregate(TableAggregateITCase.scala:93)
> Caused by: java.lang.Exception: Artificial Failure
> {code}
> https://api.travis-ci.com/v3/job/225551182/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9458: [FLINK-13651][table-planner-blink] Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to corresponding datatype

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9458: [FLINK-13651][table-planner-blink] 
Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to 
corresponding datatype
URL: https://github.com/apache/flink/pull/9458#issuecomment-521876994
 
 
   ## CI report:
   
   * b55b26ed7078de81e6c2d94167d56dc60ef3f14a : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123458996)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (FLINK-13742) Fix code generation when aggregation contains both distinct aggregate with and without filter

2019-08-15 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13742:
---

Assignee: Shuo Cheng

> Fix code generation when aggregation contains both distinct aggregate with 
> and without filter
> -
>
> Key: FLINK-13742
> URL: https://issues.apache.org/jira/browse/FLINK-13742
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Shuo Cheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following test will fail when the aggregation contains {{COUNT(DISTINCT 
> c)}} and {{COUNT(DISTINCT c) filter ...}}.
> {code:java}
> @Test
>   def testDistinctWithMultiFilter(): Unit = {
> val sqlQuery =
>   "SELECT b, " +
> "  SUM(DISTINCT (a * 3)), " +
> "  COUNT(DISTINCT SUBSTRING(c FROM 1 FOR 2))," +
> "  COUNT(DISTINCT c)," +
> "  COUNT(DISTINCT c) filter (where MOD(a, 3) = 0)," +
> "  COUNT(DISTINCT c) filter (where MOD(a, 3) = 1) " +
> "FROM MyTable " +
> "GROUP BY b"
> val t = 
> failingDataSource(StreamTestData.get3TupleData).toTable(tEnv).as('a, 'b, 'c)
> tEnv.registerTable("MyTable", t)
> val result = tEnv.sqlQuery(sqlQuery).toRetractStream[Row]
> val sink = new TestingRetractSink
> result.addSink(sink)
> env.execute()
> val expected = List(
>   "1,3,1,1,0,1",
>   "2,15,1,2,1,0",
>   "3,45,3,3,1,1",
>   "4,102,1,4,1,2",
>   "5,195,1,5,2,1",
>   "6,333,1,6,2,2")
> assertEquals(expected.sorted, sink.getRetractResults.sorted)
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13741) "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread Rui Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908727#comment-16908727
 ] 

Rui Li commented on FLINK-13741:


+1 to include built-in functions for {{SHOW FUNCTIONS}}. On the other hand, 
maybe it also makes sense to provide a method for users to only list 
user-defined functions?
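
For illustration only, such a split could look roughly like the following 
hypothetical interface (not an existing Flink API):
{code:java}
// Hypothetical API sketch: separate "everything usable in a query" from
// "only what the user registered", so SHOW FUNCTIONS can use the former.
public interface FunctionLister {

    /** All functions available in queries: built-in, catalog, and temporary. */
    String[] listFunctions();

    /** Only user-registered functions, excluding Flink built-ins. */
    String[] listUserDefinedFunctions();
}
{code}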

> "SHOW FUNCTIONS" should include Flink built-in functions' names
> ---
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently "SHOW FUNCTIONS;" only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions 
> for use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
> includes built-in functions naturally. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it doesn't feel right to not display functions that can be 
> successfully resolved.
> It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" 
> has been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which 
> further calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure 
> what's the intention of those two APIs. Are they dedicated to getting all 
> functions, or just user defined functions excluding built-in ones?
> In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
> achieve that, we either need to modify and/or rename existing APIs mentioned 
> above, or add new APIs to return all functions from FunctionCatalog.
> cc [~xuefuz] [~lirui] [~twalthr]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9459: [FLINK-13742][table-planner-blink] Fix code generation when aggregation contains both distinct aggregate with and without filter.

2019-08-15 Thread GitBox
flinkbot commented on issue #9459: [FLINK-13742][table-planner-blink] Fix code 
generation when aggregation contains both distinct aggregate with and without 
filter.
URL: https://github.com/apache/flink/pull/9459#issuecomment-521889212
 
 
   ## CI report:
   
   * abc42708808010af34c6a72ace664f4ed3926817 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123462807)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation for DDL introduction

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9366: [FLINK-13359][docs] Add documentation 
for DDL introduction
URL: https://github.com/apache/flink/pull/9366#issuecomment-518524777
 
 
   ## CI report:
   
   * f99e66ffb4356f8132b48d352b27686a6ad958f5 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/122058269)
   * 6e8dbc06ec17458f96c429e1c01a06afdf916c94 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122379544)
   * 0f7f9a1e9388b136ce42c0b6ea407808b7a91b5d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123462820)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 edited a comment on issue #9457: [FLINK-13741][table] "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread GitBox
bowenli86 edited a comment on issue #9457: [FLINK-13741][table] "SHOW 
FUNCTIONS" should include Flink built-in functions' names
URL: https://github.com/apache/flink/pull/9457#issuecomment-521880817
 
 
   > Thanks for your effort!
   > Should we return Flink built-in functions in the `getUserDefinedFunctions` 
method?
   > IMO, a user-defined function means an external function registered in the 
FunctionCatalog.
   > Of course, it also makes sense to regard the function catalog as an empty 
container after first initialization and to treat every function as a UDF. If 
so, we'd better add a notice in the method comments.
   > Just minor advice.
   
   Hi @zjuwangg , we can discuss it in the JIRA ticket. 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #9459: [FLINK-13742][table-planner-blink] Fix code generation when aggregation contains both distinct aggregate with and without filter.

2019-08-15 Thread GitBox
flinkbot commented on issue #9459: [FLINK-13742][table-planner-blink] Fix code 
generation when aggregation contains both distinct aggregate with and without 
filter.
URL: https://github.com/apache/flink/pull/9459#issuecomment-521888056
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit abc42708808010af34c6a72ace664f4ed3926817 (Fri Aug 16 
05:23:35 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13742).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] cshuo opened a new pull request #9459: [FLINK-13742][table-planner-blink] Fix code generation when aggregation contains both distinct aggregate with and without filter.

2019-08-15 Thread GitBox
cshuo opened a new pull request #9459: [FLINK-13742][table-planner-blink] Fix 
code generation when aggregation contains both distinct aggregate with and 
without filter.
URL: https://github.com/apache/flink/pull/9459
 
 
   ## What is the purpose of the change
   
   Fix a bug in distinct aggregation code generation that occurs when there are 
distinct aggregations on the same column and some of them have a filter 
condition.
   
   ## Verifying this change
   Added a new IT case in 'AggregateITCase' to cover the change.
   
   ## Does this pull request potentially affect one of the following parts:
   * Dependencies (does it add or upgrade a dependency): no
   * The public API, i.e., is any changed class annotated with 
@Public(Evolving): no
   * The serializers: no
   * The runtime per-record code paths (performance sensitive): no
   * Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
   * The S3 file system connector: no
   
   ## Documentation
   * Does this pull request introduce a new feature? no


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13742) Fix code generation when aggregation contains both distinct aggregate with and without filter

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13742:
---
Labels: pull-request-available  (was: )

> Fix code generation when aggregation contains both distinct aggregate with 
> and without filter
> -
>
> Key: FLINK-13742
> URL: https://issues.apache.org/jira/browse/FLINK-13742
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.1
>
>
> The following test will fail when the aggregation contains {{COUNT(DISTINCT 
> c)}} and {{COUNT(DISTINCT c) filter ...}}.
> {code:java}
> @Test
>   def testDistinctWithMultiFilter(): Unit = {
> val sqlQuery =
>   "SELECT b, " +
> "  SUM(DISTINCT (a * 3)), " +
> "  COUNT(DISTINCT SUBSTRING(c FROM 1 FOR 2))," +
> "  COUNT(DISTINCT c)," +
> "  COUNT(DISTINCT c) filter (where MOD(a, 3) = 0)," +
> "  COUNT(DISTINCT c) filter (where MOD(a, 3) = 1) " +
> "FROM MyTable " +
> "GROUP BY b"
> val t = 
> failingDataSource(StreamTestData.get3TupleData).toTable(tEnv).as('a, 'b, 'c)
> tEnv.registerTable("MyTable", t)
> val result = tEnv.sqlQuery(sqlQuery).toRetractStream[Row]
> val sink = new TestingRetractSink
> result.addSink(sink)
> env.execute()
> val expected = List(
>   "1,3,1,1,0,1",
>   "2,15,1,2,1,0",
>   "3,45,3,3,1,1",
>   "4,102,1,4,1,2",
>   "5,195,1,5,2,1",
>   "6,333,1,6,2,2")
> assertEquals(expected.sorted, sink.getRetractResults.sorted)
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-13740) TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis

2019-08-15 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908715#comment-16908715
 ] 

Hequn Cheng edited comment on FLINK-13740 at 8/16/19 5:14 AM:
--

[~till.rohrmann] Thanks a lot for pointing out the failure.

The job should have been restarted after the `Artificial Failure`, as the 
restart strategy is set with restartAttempts = 1. The test failed because there 
is another exception, shown below:
{code:java}
Caused by: java.lang.IllegalStateException: Concurrent access to 
KryoSerializer. Thread 1: GroupTableAggregate -> Calc(select=[b AS category, f0 
AS v1, f1 AS v2]) (1/4) , Thread 2: AsyncOperations-thread-1
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.enterExclusiveThread(KryoSerializer.java:630)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.serialize(KryoSerializer.java:285)
at 
org.apache.flink.util.InstantiationUtil.serializeToByteArray(InstantiationUtil.java:526)
at 
org.apache.flink.table.dataformat.BinaryGeneric.materialize(BinaryGeneric.java:60)
at 
org.apache.flink.table.dataformat.LazyBinaryFormat.ensureMaterialized(LazyBinaryFormat.java:92)
at 
org.apache.flink.table.dataformat.BinaryGeneric.copy(BinaryGeneric.java:68)
at 
org.apache.flink.table.runtime.typeutils.BinaryGenericSerializer.copy(BinaryGenericSerializer.java:63)
at 
org.apache.flink.table.runtime.typeutils.BinaryGenericSerializer.copy(BinaryGenericSerializer.java:40)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copyBaseRow(BaseRowSerializer.java:150)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copy(BaseRowSerializer.java:117)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copy(BaseRowSerializer.java:50)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copyBaseRow(BaseRowSerializer.java:150)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copy(BaseRowSerializer.java:117)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copy(BaseRowSerializer.java:50)
at 
org.apache.flink.runtime.state.heap.CopyOnWriteStateMap.get(CopyOnWriteStateMap.java:296)
at 
org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:244)
at 
org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:138)
at 
org.apache.flink.runtime.state.heap.HeapValueState.value(HeapValueState.java:73)
at 
org.apache.flink.table.runtime.operators.aggregate.GroupTableAggFunction.processElement(GroupTableAggFunction.java:117)
{code}

This exception is thrown because the same KryoSerializer object is used by two 
threads: one is the table aggregate thread, the other is the async operator 
thread. The TypeSerializer is not thread-safe; to avoid unpredictable side 
effects, it is recommended to call the duplicate() method and use one 
serializer instance per thread. 
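
A minimal sketch of that per-thread pattern (illustrative only, not the actual 
proposed patch):
{code:java}
import org.apache.flink.api.common.typeutils.TypeSerializer;

// Each thread lazily obtains its own serializer via duplicate(), so a stateful
// serializer like KryoSerializer is never entered by two threads at once.
class PerThreadSerializer<T> {
    private final ThreadLocal<TypeSerializer<T>> perThread;

    PerThreadSerializer(TypeSerializer<T> prototype) {
        this.perThread = ThreadLocal.withInitial(prototype::duplicate);
    }

    TypeSerializer<T> get() {
        return perThread.get();
    }
}
{code}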

One option to fix the problem is to call the duplicate() method when creating 
the {{BinaryGeneric}}. Other options, such as making the two threads 
independent, could also be considered but may need further discussion. 

This looks like a common problem for the blink planner. Not sure whether it is 
a blocker for release-1.9? [~jark] [~lzljs3620320]

Best, Hequn

 


was (Author: hequn8128):
[~till.rohrmann] Thanks a lot for pointing out the failure.

The job should have been restarted after the `Artificial Failure`, as the 
restart strategy is set with restartAttempts = 1. The test failed because there 
is another exception, shown below:
{code:java}
Caused by: java.lang.IllegalStateException: Concurrent access to 
KryoSerializer. Thread 1: GroupTableAggregate -> Calc(select=[b AS category, f0 
AS v1, f1 AS v2]) (1/4) , Thread 2: AsyncOperations-thread-1
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.enterExclusiveThread(KryoSerializer.java:630)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.serialize(KryoSerializer.java:285)
at 
org.apache.flink.util.InstantiationUtil.serializeToByteArray(InstantiationUtil.java:526)
at 
org.apache.flink.table.dataformat.BinaryGeneric.materialize(BinaryGeneric.java:60)
at 
org.apache.flink.table.dataformat.LazyBinaryFormat.ensureMaterialized(LazyBinaryFormat.java:92)
at 
org.apache.flink.table.dataformat.BinaryGeneric.copy(BinaryGeneric.java:68)
at 
org.apache.flink.table.runtime.typeutils.BinaryGenericSerializer.copy(BinaryGenericSerializer.java:63)
at 
org.apache.flink.table.runtime.typeutils.BinaryGenericSerializer.copy(BinaryGenericSerializer.java:40)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copyBaseRow(BaseRowSerializer.java:150)
at 

[jira] [Commented] (FLINK-13740) TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis

2019-08-15 Thread Hequn Cheng (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908715#comment-16908715
 ] 

Hequn Cheng commented on FLINK-13740:
-

[~till.rohrmann] Thanks a lot for pointing out the failure.

The job should have been restarted after the `Artificial Failure`, as the 
restart strategy is set with restartAttempts = 1. The test failed because there 
is another exception, shown below:
{code:java}
Caused by: java.lang.IllegalStateException: Concurrent access to 
KryoSerializer. Thread 1: GroupTableAggregate -> Calc(select=[b AS category, f0 
AS v1, f1 AS v2]) (1/4) , Thread 2: AsyncOperations-thread-1
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.enterExclusiveThread(KryoSerializer.java:630)
at 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.serialize(KryoSerializer.java:285)
at 
org.apache.flink.util.InstantiationUtil.serializeToByteArray(InstantiationUtil.java:526)
at 
org.apache.flink.table.dataformat.BinaryGeneric.materialize(BinaryGeneric.java:60)
at 
org.apache.flink.table.dataformat.LazyBinaryFormat.ensureMaterialized(LazyBinaryFormat.java:92)
at 
org.apache.flink.table.dataformat.BinaryGeneric.copy(BinaryGeneric.java:68)
at 
org.apache.flink.table.runtime.typeutils.BinaryGenericSerializer.copy(BinaryGenericSerializer.java:63)
at 
org.apache.flink.table.runtime.typeutils.BinaryGenericSerializer.copy(BinaryGenericSerializer.java:40)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copyBaseRow(BaseRowSerializer.java:150)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copy(BaseRowSerializer.java:117)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copy(BaseRowSerializer.java:50)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copyBaseRow(BaseRowSerializer.java:150)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copy(BaseRowSerializer.java:117)
at 
org.apache.flink.table.runtime.typeutils.BaseRowSerializer.copy(BaseRowSerializer.java:50)
at 
org.apache.flink.runtime.state.heap.CopyOnWriteStateMap.get(CopyOnWriteStateMap.java:296)
at 
org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:244)
at 
org.apache.flink.runtime.state.heap.StateTable.get(StateTable.java:138)
at 
org.apache.flink.runtime.state.heap.HeapValueState.value(HeapValueState.java:73)
at 
org.apache.flink.table.runtime.operators.aggregate.GroupTableAggFunction.processElement(GroupTableAggFunction.java:117)
{code}

This exception is thrown because the same KryoSerializer object is used by two 
threads: one is the table aggregate thread, the other is the async operator 
thread. The TypeSerializer is not thread-safe; to avoid unpredictable side 
effects, it is recommended to call the duplicate() method and use one 
serializer instance per thread. 

One option to fix the problem is to call the duplicate() method when creating 
the {{BinaryGeneric}}. Other options, such as making the two threads 
independent, could also be considered but may need further discussion. 

Not sure whether it is a blocker for release-1.9? [~jark] [~lzljs3620320]

Best, Hequn

 

> TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis
> ---
>
> Key: FLINK-13740
> URL: https://issues.apache.org/jira/browse/FLINK-13740
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The {{TableAggregateITCase.testNonkeyedFlatAggregate}} failed on Travis with 
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testNonkeyedFlatAggregate(TableAggregateITCase.scala:93)
> Caused by: java.lang.Exception: Artificial Failure
> {code}
> https://api.travis-ci.com/v3/job/225551182/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13741) "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: 
Currently "SHOW FUNCTIONS;" only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions for 
use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it doesn't feel right to not display functions that can be successfully 
resolved.

It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" has 
been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which further 
calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure what's 
the intention of those two APIs. Are they dedicated to getting all functions, 
or just user defined functions excluding built-in ones?

In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
achieve that, we either need to modify and/or rename existing APIs mentioned 
above, or add new APIs to return all functions from FunctionCatalog.

cc [~xuefuz] [~lirui] [~twalthr]

  was:
Currently "SHOW FUNCTIONS;" only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions for 
use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it doesn't feel right to not display functions that can be successfully 
resolved.

It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" has 
been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which further 
calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure what's 
the intention of those two APIs. Are they dedicated to getting all functions, 
or just user defined functions excluding built-in ones?

In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
achieve that, we either need to modify and/or rename existing APIs mentioned 
above, or add new APIs.

cc [~xuefuz] [~lirui] [~twalthr]


> "SHOW FUNCTIONS" should include Flink built-in functions' names
> ---
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently "SHOW FUNCTIONS;" only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions 
> for use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
> includes built-in functions naturally. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it doesn't feel right to not display functions that can be 
> successfully resolved.
> It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" 
> has been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which 
> further calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure 
> what's the intention of those two APIs. Are they dedicated to getting all 
> functions, or just user defined functions excluding built-in ones?
> In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
> achieve that, we either need to modify and/or rename existing APIs mentioned 
> above, or add new APIs to return all functions from FunctionCatalog.
> cc [~xuefuz] [~lirui] [~twalthr]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13741) "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: 
Currently "SHOW FUNCTIONS;" only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions for 
use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it doesn't feel right to not display functions that can be successfully 
resolved.

It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" has 
been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which further 
calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure what's 
the intention of those two APIs. Are they dedicated to getting all functions, 
or just user defined functions excluding built-in ones?

In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
achieve that, we either need to modify and/or rename existing APIs mentioned 
above, or add new APIs.

cc [~xuefuz] [~lirui] [~twalthr]

  was:
Currently "SHOW FUNCTIONS;" only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions for 
use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it doesn't feel right to not display functions that can be successfully 
resolved.

It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" has 
been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which further 
calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure what's 
the intention of those two APIs. Are they dedicated to all functions, or just 
user defined functions excluding built-in ones?

In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
achieve that, we either need to modify and/or rename existing APIs mentioned 
above, or add new APIs.


> "SHOW FUNCTIONS" should include Flink built-in functions' names
> ---
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently "SHOW FUNCTIONS;" only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions 
> for use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
> includes built-in functions naturally. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it doesn't feel right to not display functions that can be 
> successfully resolved.
> It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" 
> has been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which 
> further calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure 
> what's the intention of those two APIs. Are they dedicated to getting all 
> functions, or just user defined functions excluding built-in ones?
> In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
> achieve that, we either need to modify and/or rename existing APIs mentioned 
> above, or add new APIs.
> cc [~xuefuz] [~lirui] [~twalthr]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13741) "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: 
Currently "SHOW FUNCTIONS;" only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions for 
use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it doesn't feel right to not display functions that can be successfully 
resolved.

It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" has 
been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which further 
calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure what's 
the intention of those two APIs. Are they dedicated to all functions, or just 
user defined functions excluding built-in ones?

In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
achieve that, we either need to modify and/or rename existing APIs mentioned 
above, or add new APIs.

  was:
Currently "SHOW FUNCTIONS;" only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions for 
use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it doesn't feel right to not display functions that can be successfully 
resolved.

It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" has 
been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which further 
calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure what's 
the 


> "SHOW FUNCTIONS" should include Flink built-in functions' names
> ---
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently "SHOW FUNCTIONS;" only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions 
> for use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
> includes built-in functions naturally. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it doesn't feel right to not display functions that can be 
> successfully resolved.
> It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" 
> has been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which 
> further calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure 
> what's the intention of those two APIs. Are they dedicated to all functions, 
> or just user defined functions excluding built-in ones?
> In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
> achieve that, we either need to modify and/or rename existing APIs mentioned 
> above, or add new APIs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13741) "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908703#comment-16908703
 ] 

Bowen Li commented on FLINK-13741:
--

Hi [~Terry1897], I have incorporated your comment into the JIRA description.

> "SHOW FUNCTIONS" should include Flink built-in functions' names
> ---
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently "SHOW FUNCTIONS;" only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions 
> for use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
> includes built-in functions naturally. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it doesn't feel right to not display functions that can be 
> successfully resolved.
> It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" 
> has been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which 
> further calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure 
> what's the intention of those two APIs. Are they dedicated to all functions, 
> or just user defined functions excluding built-in ones?
> In the end, I believe "SHOW FUNCTIONS;" should display built-in functions. To 
> achieve that, we either need to modify and/or rename existing APIs mentioned 
> above, or add new APIs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13741) "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: 
Currently "SHOW FUNCTIONS;" only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions for 
use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it doesn't feel right to not display functions that can be successfully 
resolved.

It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" has 
been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which further 
calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure what's 
the 

  was:
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names. It means currently if users call 
{{tEnv.listUserDefinedFunctions()}} in Table API or {{show functions;}} thru 
SQL, they would not be able to see Flink's built-in functions.

AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions for 
use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it doesn't feel right to not display functions that can be successfully 
resolved.

Thus, I propose {{FunctionCatalog.getUserDefinedFunctions()}} should be fixed 
to include Flink built-in functions' names.


> "SHOW FUNCTIONS" should include Flink built-in functions' names
> ---
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently "SHOW FUNCTIONS;" only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions 
> for use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
> includes built-in functions naturally. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it doesn't feel right to not display functions that can be 
> successfully resolved.
> It seems to me that the root cause is the call stack for "SHOW FUNCTIONS;" 
> has been a bit messy - it calls {{tEnv.listUserDefinedFunctions()}} which 
> further calls {{FunctionCatalog.getUserDefinedFunctions()}}, and I'm not sure 
> what's the 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] bowenli86 edited a comment on issue #9457: [FLINK-13741][table] "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread GitBox
bowenli86 edited a comment on issue #9457: [FLINK-13741][table] "SHOW 
FUNCTIONS" should include Flink built-in functions' names
URL: https://github.com/apache/flink/pull/9457#issuecomment-521828517
 
 
   cc @xuefuz @lirui-apache @zjuwangg


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on issue #9457: [FLINK-13741][table] "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread GitBox
bowenli86 commented on issue #9457: [FLINK-13741][table] "SHOW FUNCTIONS" 
should include Flink built-in functions' names
URL: https://github.com/apache/flink/pull/9457#issuecomment-521880817
 
 
   > Thanks for your effort!
   > Should we return Flink built-in functions in the `getUserDefinedFunctions` 
method?
   > IMO, a user-defined function means an external function registered in the 
FunctionCatalog.
   > Of course, it also makes sense to regard the function catalog as an empty 
container after first initialization and to treat every function as a UDF. If 
so, we'd better add a notice in the method comments.
   > Just minor advice.
   
   Hi @zjuwangg , we can discuss it in the JIRA ticket. 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13741) "SHOW FUNCTIONS" should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Summary: "SHOW FUNCTIONS" should include Flink built-in functions' names  
(was: FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
functions' names)

> "SHOW FUNCTIONS" should include Flink built-in functions' names
> ---
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names. It means currently if users call 
> {{tEnv.listUserDefinedFunctions()}} in Table API or {{show functions;}} thru 
> SQL, they would not be able to see Flink's built-in functions.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions 
> for use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
> includes built-in functions naturally. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it doesn't feel right to not display functions that can be 
> successfully resolved.
> Thus, I propose {{FunctionCatalog.getUserDefinedFunctions()}} should be fixed 
> to include Flink built-in functions' names.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9450: [FLINK-13711][sql-client] Hive array values not properly displayed in…

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9450: [FLINK-13711][sql-client] Hive array 
values not properly displayed in…
URL: https://github.com/apache/flink/pull/9450#issuecomment-521552936
 
 
   ## CI report:
   
   * c9d99f2866f281298f4217e9ce7543732bece2f8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123334919)
   * 671aa2687e3758d16646c6fbf58e4cc486328a38 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123456040)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13738) NegativeArraySizeException in LongHybridHashTable

2019-08-15 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young updated FLINK-13738:
---
Fix Version/s: 1.10.0

> NegativeArraySizeException in LongHybridHashTable
> -
>
> Key: FLINK-13738
> URL: https://issues.apache.org/jira/browse/FLINK-13738
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.0
>Reporter: Robert Metzger
>Priority: Major
> Fix For: 1.10.0
>
>
> Executing this (meaningless) query:
> {code:java}
> INSERT INTO sinkTable ( SELECT CONCAT( CAST( id AS VARCHAR), CAST( COUNT(*) 
> AS VARCHAR)) as something, 'const' FROM CsvTable, table1  WHERE sometxt LIKE 
> 'a%' AND id = key GROUP BY id ) {code}
> leads to the following exception:
> {code:java}
> Caused by: java.lang.NegativeArraySizeException
>  at 
> org.apache.flink.table.runtime.hashtable.LongHybridHashTable.tryDenseMode(LongHybridHashTable.java:216)
>  at 
> org.apache.flink.table.runtime.hashtable.LongHybridHashTable.endBuild(LongHybridHashTable.java:105)
>  at LongHashJoinOperator$36.endInput1$(Unknown Source)
>  at LongHashJoinOperator$36.endInput(Unknown Source)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.endInput(OperatorChain.java:256)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTwoInputSelectableProcessor.checkFinished(StreamTwoInputSelectableProcessor.java:359)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTwoInputSelectableProcessor.processInput(StreamTwoInputSelectableProcessor.java:193)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performDefaultAction(StreamTask.java:276)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:298)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:403)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:687)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:517)
>  at java.lang.Thread.run(Thread.java:748){code}
> This is the plan:
>  
> {code:java}
> == Abstract Syntax Tree ==
> LogicalSink(name=[sinkTable], fields=[f0, f1])
> +- LogicalProject(something=[CONCAT(CAST($0):VARCHAR(2147483647) CHARACTER 
> SET "UTF-16LE", CAST($1):VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT 
> NULL)], EXPR$1=[_UTF-16LE'const'])
>+- LogicalAggregate(group=[{0}], agg#0=[COUNT()])
>   +- LogicalProject(id=[$1])
>  +- LogicalFilter(condition=[AND(LIKE($0, _UTF-16LE'a%'), =($1, 
> CAST($2):BIGINT))])
> +- LogicalJoin(condition=[true], joinType=[inner])
>:- LogicalTableScan(table=[[default_catalog, default_database, 
> CsvTable, source: [CsvTableSource(read fields: sometxt, id)]]])
>+- LogicalTableScan(table=[[default_catalog, default_database, 
> table1, source: [GeneratorTableSource(key, rowtime, payload)]]])
> == Optimized Logical Plan ==
> Sink(name=[sinkTable], fields=[f0, f1]): rowcount = 1498810.6659336376, 
> cumulative cost = {4.459964319978008E8 rows, 1.879799762133187E10 cpu, 4.8E9 
> io, 8.4E8 network, 1.799524266373455E8 memory}
> +- Calc(select=[CONCAT(CAST(id), CAST($f1)) AS something, _UTF-16LE'const' AS 
> EXPR$1]): rowcount = 1498810.6659336376, cumulative cost = 
> {4.444976213318672E8 rows, 1.8796498810665936E10 cpu, 4.8E9 io, 8.4E8 
> network, 1.799524266373455E8 memory}
>+- HashAggregate(isMerge=[false], groupBy=[id], select=[id, COUNT(*) AS 
> $f1]): rowcount = 1498810.6659336376, cumulative cost = {4.429988106659336E8 
> rows, 1.8795E10 cpu, 4.8E9 io, 8.4E8 network, 1.799524266373455E8 memory}
>   +- Calc(select=[id]): rowcount = 1.575E7, cumulative cost = {4.415E8 
> rows, 1.848E10 cpu, 4.8E9 io, 8.4E8 network, 1.2E8 memory}
>  +- HashJoin(joinType=[InnerJoin], where=[=(id, key0)], select=[id, 
> key0], build=[left]): rowcount = 1.575E7, cumulative cost = {4.2575E8 rows, 
> 1.848E10 cpu, 4.8E9 io, 8.4E8 network, 1.2E8 memory}
> :- Exchange(distribution=[hash[id]]): rowcount = 500.0, 
> cumulative cost = {1.1E8 rows, 8.4E8 cpu, 2.0E9 io, 4.0E7 network, 0.0 memory}
> :  +- Calc(select=[id], where=[LIKE(sometxt, _UTF-16LE'a%')]): 
> rowcount = 500.0, cumulative cost = {1.05E8 rows, 0.0 cpu, 2.0E9 io, 0.0 
> network, 0.0 memory}
> : +- TableSourceScan(table=[[default_catalog, 
> default_database, CsvTable, source: [CsvTableSource(read fields: sometxt, 
> id)]]], fields=[sometxt, id]): rowcount = 1.0E8, cumulative cost = {1.0E8 
> rows, 0.0 cpu, 2.0E9 io, 0.0 network, 0.0 memory}
> +- Exchange(distribution=[hash[key0]]): rowcount = 1.0E8, 
> cumulative cost = {3.0E8 rows, 1.68E10 cpu, 2.8E9 io, 8.0E8 network, 0.0 
> memory}
>+- Calc(select=[CAST(key) AS key0]): rowcount = 1.0E8, 
> cumulative cost = {2.0E8 rows, 0.0 

[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: 
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names. It means currently if users call 
{{tEnv.listUserDefinedFunctions()}} in Table API or {{show functions;}} thru 
SQL, they would not be able to see Flink's built-in functions.

AFAIK, it's standard for "SHOW FUNCTIONS;" to also show all available functions 
for use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it doesn't feel right to not display functions that can be successfully 
resolved.

Thus, I propose {{FunctionCatalog.getUserDefinedFunctions()}} should be fixed 
to include Flink built-in functions' names.

  was:
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names. It means currently if users call 
{{tEnv.listUserDefinedFunctions()}} in Table API or {{show functions;}} thru 
SQL would not be able to see Flink's built-in functions.

AFAIK, it's standard for "SHOW FUNCTIONS;" to also show built-in functions in 
SQL systems like Hive, Presto, Teradata, etc. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions, 
it's not feeling right to not displaying functions but can successfully resolve 
to them.

Thus, I propose {{FunctionCatalog.getUserDefinedFunctions()}} should be fixed 
to include Flink built-in functions' names.


> FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
> functions' names
> 
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names. It means that currently, if users call 
> {{tEnv.listUserDefinedFunctions()}} in the Table API or {{show functions;}} 
> through SQL, they would not be able to see Flink's built-in functions.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to also show all available 
> functions for use in queries in SQL systems like Hive, Presto, Teradata, 
> etc., so built-in functions are naturally included. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it does not feel right to hide functions that can nevertheless be 
> resolved successfully.
> Thus, I propose that {{FunctionCatalog.getUserDefinedFunctions()}} be fixed 
> to include Flink built-in functions' names.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
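A minimal sketch of the proposed behavior, with hypothetical field names (the 
actual FunctionCatalog internals may differ): the returned array becomes the 
union of the registered function names and the built-in function names.

{code:java}
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only, not the actual Flink implementation.
public class FunctionCatalogSketch {

    // Hypothetical stand-in for catalog functions and in-memory FunctionDefinitions.
    private final Set<String> registeredFunctions = new HashSet<>();
    // Hypothetical stand-in for the names of Flink's built-in functions.
    private final Set<String> builtInFunctionNames = new HashSet<>();

    public String[] getUserDefinedFunctions() {
        Set<String> all = new HashSet<>(registeredFunctions);
        all.addAll(builtInFunctionNames); // the proposed change: also expose built-ins
        return all.toArray(new String[0]);
    }
}
{code}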


[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: 
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names. It means that currently, if users call 
{{tEnv.listUserDefinedFunctions()}} in the Table API or {{show functions;}} 
through SQL, they would not be able to see Flink's built-in functions.

AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions for 
use in queries in SQL systems like Hive, Presto, Teradata, etc., so built-in 
functions are naturally included. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it does not feel right to hide functions that can nevertheless be resolved 
successfully.

Thus, I propose that {{FunctionCatalog.getUserDefinedFunctions()}} be fixed to 
include Flink built-in functions' names.

  was:
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names. It means currently if users call 
{{tEnv.listUserDefinedFunctions()}} in Table API or {{show functions;}} thru 
SQL, they would not be able to see Flink's built-in functions.

AFAIK, it's standard for "SHOW FUNCTIONS;" to also show all available functions 
for use in queries in SQL systems like Hive, Presto, Teradata, etc, thus it 
includes built-in functions naturally. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions, 
it's not feeling right to not displaying functions but can successfully resolve 
to them.

Thus, I propose {{FunctionCatalog.getUserDefinedFunctions()}} should be fixed 
to include Flink built-in functions' names.


> FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
> functions' names
> 
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names. It means that currently, if users call 
> {{tEnv.listUserDefinedFunctions()}} in the Table API or {{show functions;}} 
> through SQL, they would not be able to see Flink's built-in functions.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to show all available functions 
> for use in queries in SQL systems like Hive, Presto, Teradata, etc., so 
> built-in functions are naturally included. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it does not feel right to hide functions that can nevertheless be 
> resolved successfully.
> Thus, I propose that {{FunctionCatalog.getUserDefinedFunctions()}} be fixed 
> to include Flink built-in functions' names.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: 
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names. It means that currently, if users call 
{{tEnv.listUserDefinedFunctions()}} in the Table API or {{show functions;}} 
through SQL, they would not be able to see Flink's built-in functions.

AFAIK, it's standard for "SHOW FUNCTIONS;" to also show built-in functions in 
SQL systems like Hive, Presto, Teradata, etc. Besides, 
{{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in functions; 
it does not feel right to hide functions that can nevertheless be resolved 
successfully.

Thus, I propose that {{FunctionCatalog.getUserDefinedFunctions()}} be fixed to 
include Flink built-in functions' names.

  was:
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

It means currently if users call {{tEnv.listUserDefinedFunctions()}} in Table 
API or {{show functions;}} thru SQL would not be able to see Flink's built-in 
functions.

Should be fixed to include Flink built-in functions' names


> FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
> functions' names
> 
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names. It means that currently, if users call 
> {{tEnv.listUserDefinedFunctions()}} in the Table API or {{show functions;}} 
> through SQL, they would not be able to see Flink's built-in functions.
> AFAIK, it's standard for "SHOW FUNCTIONS;" to also show built-in functions in 
> SQL systems like Hive, Presto, Teradata, etc. Besides, 
> {{FunctionCatalog.lookupFunction(name)}} resolves calls to built-in 
> functions; it does not feel right to hide functions that can nevertheless be 
> resolved successfully.
> Thus, I propose that {{FunctionCatalog.getUserDefinedFunctions()}} be fixed 
> to include Flink built-in functions' names.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9458: [FLINK-13651][table-planner-blink] Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to corresponding datatype

2019-08-15 Thread GitBox
flinkbot commented on issue #9458: [FLINK-13651][table-planner-blink] Blink 
planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to 
corresponding datatype
URL: https://github.com/apache/flink/pull/9458#issuecomment-521876994
 
 
   ## CI report:
   
   * b55b26ed7078de81e6c2d94167d56dc60ef3f14a : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123458996)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation 
of Hive source/sink
URL: https://github.com/apache/flink/pull/9217#issuecomment-514589043
 
 
   ## CI report:
   
   * 516e655f7f0853d6585ae5de2fbecc438d57e474 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120432519)
   * fee6f2df235f113b7757ce436ee127711b0094e6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121184693)
   * 61c360e0902ded2939ba3c8b9662a1b58074e4d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121348454)
   * 7dafc731904fb3ae9dcee24f851803fddf87b551 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122371437)
   * b6348c4433292e5b0bccf5a04e3446e0dbff718b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123187148)
   * 604e8feebba2b98b9264ad82e8fae9ddda066246 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123318475)
   * a18d7032cbcd03d5585c8937b257eb9ad352df29 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123455139)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Fix Version/s: (was: 1.9.0)
   1.10.0

> FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
> functions' names
> 
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> It means that currently, if users call {{tEnv.listUserDefinedFunctions()}} in 
> the Table API or {{show functions;}} through SQL, they would not be able to 
> see Flink's built-in functions.
> This should be fixed to include Flink built-in functions' names.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Priority: Critical  (was: Blocker)

> FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
> functions' names
> 
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> It means that currently, if users call {{tEnv.listUserDefinedFunctions()}} in 
> the Table API or {{show functions;}} through SQL, they would not be able to 
> see Flink's built-in functions.
> This should be fixed to include Flink built-in functions' names.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9458: [FLINK-13651][table-planner-blink] Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to corresponding datatype

2019-08-15 Thread GitBox
flinkbot commented on issue #9458: [FLINK-13651][table-planner-blink] Blink 
planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to 
corresponding datatype
URL: https://github.com/apache/flink/pull/9458#issuecomment-521876250
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b55b26ed7078de81e6c2d94167d56dc60ef3f14a (Fri Aug 16 
04:04:29 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-13651).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

## Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] docete commented on issue #9458: [FLINK-13651][table-planner-blink] Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to corresponding datatype

2019-08-15 Thread GitBox
docete commented on issue #9458: [FLINK-13651][table-planner-blink] Blink 
planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to 
corresponding datatype
URL: https://github.com/apache/flink/pull/9458#issuecomment-521876150
 
 
   @JingsongLi @wuchong Could you take a look at this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13651) Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to corresponding datatype

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13651:
---
Labels: pull-request-available  (was: )

> Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string 
> to corresponding datatype
> -
>
> Key: FLINK-13651
> URL: https://issues.apache.org/jira/browse/FLINK-13651
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
>
> Can be reproduced in ScalarFunctionsTest:
> `testAllApis(
>   'f31.cast(DataTypes.DECIMAL(38, 18)).truncate(2),
>   "f31.cast(DECIMAL(38, 18)).truncate(2)",
>   "truncate(cast(f31 as decimal(38, 18)), 2)",
>   "-0.12")`
>
> A possible reason is that LookupCallResolver treats decimal(38, 18) as a 
> function call.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
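For context, the problem is that a type string such as "DECIMAL(38, 18)" inside 
an expression string is handed to LookupCallResolver as if it were a function 
call instead of being parsed as a data type. A hedged sketch of the affected 
expression-string path (the table name "T" and the field "f31" are 
assumptions):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class DecimalCastSketch {
    public static void main(String[] args) {
        // Blink planner, as in the affected code path (Flink 1.9-style API).
        EnvironmentSettings settings =
            EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // Assumes a table "T" with a numeric field "f31" is registered beforehand.
        // With the fix, "DECIMAL(38, 18)" in the string below parses to the
        // DECIMAL(38, 18) data type rather than a call to a function named DECIMAL.
        Table result = tEnv.scan("T").select("f31.cast(DECIMAL(38, 18)).truncate(2)");
    }
}
{code}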


[GitHub] [flink] docete opened a new pull request #9458: [FLINK-13651][table-planner-blink] Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to corresponding datatype

2019-08-15 Thread GitBox
docete opened a new pull request #9458: [FLINK-13651][table-planner-blink] 
Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to 
corresponding datatype
URL: https://github.com/apache/flink/pull/9458
 
 
   ## What is the purpose of the change
   
   Blink planner should parse decimal(p,s)/char(n)/varchar(n) inside a string 
to corresponding datatype
   
   ## Brief change log
   
   - Refactor PlannerExpressionParserImpl for not using TypeInformation
   - Support parse decimal(p, s) inside a string to datatype
   - Support parse char(n) inside a string to datatype
   - Support parse varchar(n) inside a string to datatype
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
*(ScalarFunctionsTest)*.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13742) Fix code generation when aggregation contains both distinct aggregate with and without filter

2019-08-15 Thread Jark Wu (JIRA)
Jark Wu created FLINK-13742:
---

 Summary: Fix code generation when aggregation contains both 
distinct aggregate with and without filter
 Key: FLINK-13742
 URL: https://issues.apache.org/jira/browse/FLINK-13742
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Jark Wu
 Fix For: 1.9.1


The following test will fail when the aggregation contains {{COUNT(DISTINCT 
c)}} and {{COUNT(DISTINCT c) filter ...}}.

{code:java}
  @Test
  def testDistinctWithMultiFilter(): Unit = {
    val sqlQuery =
      "SELECT b, " +
        "  SUM(DISTINCT (a * 3)), " +
        "  COUNT(DISTINCT SUBSTRING(c FROM 1 FOR 2))," +
        "  COUNT(DISTINCT c)," +
        "  COUNT(DISTINCT c) filter (where MOD(a, 3) = 0)," +
        "  COUNT(DISTINCT c) filter (where MOD(a, 3) = 1) " +
        "FROM MyTable " +
        "GROUP BY b"

    val t = failingDataSource(StreamTestData.get3TupleData).toTable(tEnv).as('a, 'b, 'c)
    tEnv.registerTable("MyTable", t)

    val result = tEnv.sqlQuery(sqlQuery).toRetractStream[Row]
    val sink = new TestingRetractSink
    result.addSink(sink)
    env.execute()

    val expected = List(
      "1,3,1,1,0,1",
      "2,15,1,2,1,0",
      "3,45,3,3,1,1",
      "4,102,1,4,1,2",
      "5,195,1,5,2,1",
      "6,333,1,6,2,2")
    assertEquals(expected.sorted, sink.getRetractResults.sorted)
  }
{code}
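Distilled, the trigger is one distinct aggregate on the same argument appearing 
both with and without a FILTER clause in a single GROUP BY query. A minimal 
hypothetical reproduction in Java (table and column names as in the test 
above):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class DistinctFilterRepro {
    public static void main(String[] args) {
        EnvironmentSettings settings =
            EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
        TableEnvironment tEnv = TableEnvironment.create(settings);

        // Assumes "MyTable" with columns a (INT), b (BIGINT), c (STRING) is registered.
        String sql =
            "SELECT b, " +
            "  COUNT(DISTINCT c), " +
            "  COUNT(DISTINCT c) FILTER (WHERE MOD(a, 3) = 0) " +
            "FROM MyTable GROUP BY b";

        // Planning this query exercises the faulty code generation described above.
        Table result = tEnv.sqlQuery(sql);
    }
}
{code}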




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-13651) Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to corresponding datatype

2019-08-15 Thread Zhenghua Gao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907971#comment-16907971
 ] 

Zhenghua Gao edited comment on FLINK-13651 at 8/16/19 3:40 AM:
---

PlannerExpressionParserImpl of the Blink planner does not support 
char(n)/varchar(n)/decimal(p, s) patterns.


was (Author: docete):
PlannerExpressionParserImpl of the Blink planner does not support 
char(n)/varchar(n)/decimal(p, s)/timestamp(p) patterns.

> Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string 
> to corresponding datatype
> -
>
> Key: FLINK-13651
> URL: https://issues.apache.org/jira/browse/FLINK-13651
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
>
> Can be reproduced in ScalarFunctionsTest:
> `testAllApis(
>   'f31.cast(DataTypes.DECIMAL(38, 18)).truncate(2),
>   "f31.cast(DECIMAL(38, 18)).truncate(2)",
>   "truncate(cast(f31 as decimal(38, 18)), 2)",
>   "-0.12")`
>
> A possible reason is that LookupCallResolver treats decimal(38, 18) as a 
> function call.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13651) Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string to corresponding datatype

2019-08-15 Thread Zhenghua Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenghua Gao updated FLINK-13651:
-
Summary: Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside 
a string to corresponding datatype  (was: Blink planner should parse 
char(n)/varchar(n)/decimal(p, s)/timestamp(p) inside a string to corresponding 
datatype)

> Blink planner should parse char(n)/varchar(n)/decimal(p, s) inside a string 
> to corresponding datatype
> -
>
> Key: FLINK-13651
> URL: https://issues.apache.org/jira/browse/FLINK-13651
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Priority: Major
>
> Can be reproduced in ScalarFunctionsTest:
> `testAllApis(
>   'f31.cast(DataTypes.DECIMAL(38, 18)).truncate(2),
>   "f31.cast(DECIMAL(38, 18)).truncate(2)",
>   "truncate(cast(f31 as decimal(38, 18)), 2)",
>   "-0.12")`
>
> A possible reason is that LookupCallResolver treats decimal(38, 18) as a 
> function call.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wuchong commented on issue #9230: [FLINK-13430][build] Configure sending travis build notifications to bui...@flink.apache.org

2019-08-15 Thread GitBox
wuchong commented on issue #9230: [FLINK-13430][build] Configure sending travis 
build notifications to bui...@flink.apache.org
URL: https://github.com/apache/flink/pull/9230#issuecomment-521871732
 
 
   I would like to hold this PR for a moment, @zentol, because I find it would 
drop the notification to my own email address for branches in my own repo.
   
   After reading the Travis documentation, I think we would also lose the build 
notification to the committer who pushed the commit, which I think is important.
   
![image](https://user-images.githubusercontent.com/5378924/63141667-26d94780-c019-11e9-9916-4e0ca166fae4.png)
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13740) TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis

2019-08-15 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908674#comment-16908674
 ] 

Jark Wu commented on FLINK-13740:
-

Hi [~hequn8128], could you help take a look at this issue? 

> TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis
> ---
>
> Key: FLINK-13740
> URL: https://issues.apache.org/jira/browse/FLINK-13740
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.0
>
>
> The {{TableAggregateITCase.testNonkeyedFlatAggregate}} failed on Travis with 
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testNonkeyedFlatAggregate(TableAggregateITCase.scala:93)
> Caused by: java.lang.Exception: Artificial Failure
> {code}
> https://api.travis-ci.com/v3/job/225551182/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13738) NegativeArraySizeException in LongHybridHashTable

2019-08-15 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908675#comment-16908675
 ] 

Jark Wu commented on FLINK-13738:
-

cc [~lzljs3620320] [~TsReaper]

> NegativeArraySizeException in LongHybridHashTable
> -
>
> Key: FLINK-13738
> URL: https://issues.apache.org/jira/browse/FLINK-13738
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.9.0
>Reporter: Robert Metzger
>Priority: Major
>
> Executing this (meaningless) query:
> {code:java}
> INSERT INTO sinkTable ( SELECT CONCAT( CAST( id AS VARCHAR), CAST( COUNT(*) 
> AS VARCHAR)) as something, 'const' FROM CsvTable, table1  WHERE sometxt LIKE 
> 'a%' AND id = key GROUP BY id ) {code}
> leads to the following exception:
> {code:java}
> Caused by: java.lang.NegativeArraySizeException
>  at 
> org.apache.flink.table.runtime.hashtable.LongHybridHashTable.tryDenseMode(LongHybridHashTable.java:216)
>  at 
> org.apache.flink.table.runtime.hashtable.LongHybridHashTable.endBuild(LongHybridHashTable.java:105)
>  at LongHashJoinOperator$36.endInput1$(Unknown Source)
>  at LongHashJoinOperator$36.endInput(Unknown Source)
>  at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.endInput(OperatorChain.java:256)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTwoInputSelectableProcessor.checkFinished(StreamTwoInputSelectableProcessor.java:359)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTwoInputSelectableProcessor.processInput(StreamTwoInputSelectableProcessor.java:193)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.performDefaultAction(StreamTask.java:276)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:298)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:403)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:687)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:517)
>  at java.lang.Thread.run(Thread.java:748){code}
> This is the plan:
>  
> {code:java}
> == Abstract Syntax Tree ==
> LogicalSink(name=[sinkTable], fields=[f0, f1])
> +- LogicalProject(something=[CONCAT(CAST($0):VARCHAR(2147483647) CHARACTER 
> SET "UTF-16LE", CAST($1):VARCHAR(2147483647) CHARACTER SET "UTF-16LE" NOT 
> NULL)], EXPR$1=[_UTF-16LE'const'])
>+- LogicalAggregate(group=[{0}], agg#0=[COUNT()])
>   +- LogicalProject(id=[$1])
>  +- LogicalFilter(condition=[AND(LIKE($0, _UTF-16LE'a%'), =($1, 
> CAST($2):BIGINT))])
> +- LogicalJoin(condition=[true], joinType=[inner])
>:- LogicalTableScan(table=[[default_catalog, default_database, 
> CsvTable, source: [CsvTableSource(read fields: sometxt, id)]]])
>+- LogicalTableScan(table=[[default_catalog, default_database, 
> table1, source: [GeneratorTableSource(key, rowtime, payload)]]])
> == Optimized Logical Plan ==
> Sink(name=[sinkTable], fields=[f0, f1]): rowcount = 1498810.6659336376, 
> cumulative cost = {4.459964319978008E8 rows, 1.879799762133187E10 cpu, 4.8E9 
> io, 8.4E8 network, 1.799524266373455E8 memory}
> +- Calc(select=[CONCAT(CAST(id), CAST($f1)) AS something, _UTF-16LE'const' AS 
> EXPR$1]): rowcount = 1498810.6659336376, cumulative cost = 
> {4.444976213318672E8 rows, 1.8796498810665936E10 cpu, 4.8E9 io, 8.4E8 
> network, 1.799524266373455E8 memory}
>+- HashAggregate(isMerge=[false], groupBy=[id], select=[id, COUNT(*) AS 
> $f1]): rowcount = 1498810.6659336376, cumulative cost = {4.429988106659336E8 
> rows, 1.8795E10 cpu, 4.8E9 io, 8.4E8 network, 1.799524266373455E8 memory}
>   +- Calc(select=[id]): rowcount = 1.575E7, cumulative cost = {4.415E8 
> rows, 1.848E10 cpu, 4.8E9 io, 8.4E8 network, 1.2E8 memory}
>  +- HashJoin(joinType=[InnerJoin], where=[=(id, key0)], select=[id, 
> key0], build=[left]): rowcount = 1.575E7, cumulative cost = {4.2575E8 rows, 
> 1.848E10 cpu, 4.8E9 io, 8.4E8 network, 1.2E8 memory}
> :- Exchange(distribution=[hash[id]]): rowcount = 500.0, 
> cumulative cost = {1.1E8 rows, 8.4E8 cpu, 2.0E9 io, 4.0E7 network, 0.0 memory}
> :  +- Calc(select=[id], where=[LIKE(sometxt, _UTF-16LE'a%')]): 
> rowcount = 500.0, cumulative cost = {1.05E8 rows, 0.0 cpu, 2.0E9 io, 0.0 
> network, 0.0 memory}
> : +- TableSourceScan(table=[[default_catalog, 
> default_database, CsvTable, source: [CsvTableSource(read fields: sometxt, 
> id)]]], fields=[sometxt, id]): rowcount = 1.0E8, cumulative cost = {1.0E8 
> rows, 0.0 cpu, 2.0E9 io, 0.0 network, 0.0 memory}
> +- Exchange(distribution=[hash[key0]]): rowcount = 1.0E8, 
> cumulative cost = {3.0E8 rows, 1.68E10 cpu, 2.8E9 io, 8.0E8 network, 0.0 
> memory}
>+- Calc(select=[CAST(key) AS key0]): rowcount = 1.0E8, 
> cumulative cost = {2.0E8 
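The NegativeArraySizeException suggests an int overflow while sizing the 
dense-mode array in tryDenseMode. As a generic illustration only (not the 
actual LongHybridHashTable code), an array sized from a key span wider than 
Integer.MAX_VALUE goes negative after the cast:

{code:java}
// Generic overflow illustration; this is NOT the actual Flink code.
public class DenseModeOverflowDemo {
    public static void main(String[] args) {
        long minKey = 0L;
        long maxKey = 3_000_000_000L; // span wider than Integer.MAX_VALUE

        int size = (int) (maxKey - minKey + 1); // silently overflows to a negative int
        System.out.println(size);               // prints -1294967295

        // "new long[size]" would throw java.lang.NegativeArraySizeException here,
        // so the span must be checked before casting:
        long span = maxKey - minKey + 1;
        if (span > Integer.MAX_VALUE) {
            System.out.println("fall back to the non-dense path instead of dense mode");
        }
    }
}
{code}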

[GitHub] [flink] flinkbot edited a comment on issue #9450: [FLINK-13711][sql-client] Hive array values not properly displayed in…

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9450: [FLINK-13711][sql-client] Hive array 
values not properly displayed in…
URL: https://github.com/apache/flink/pull/9450#issuecomment-521552936
 
 
   ## CI report:
   
   * c9d99f2866f281298f4217e9ce7543732bece2f8 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123334919)
   * 671aa2687e3758d16646c6fbf58e4cc486328a38 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123456040)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] bowenli86 commented on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink

2019-08-15 Thread GitBox
bowenli86 commented on issue #9217: [FLINK-13277][hive] add documentation of 
Hive source/sink
URL: https://github.com/apache/flink/pull/9217#issuecomment-521866787
 
 
   LGTM, merging


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #9450: [FLINK-13711][sql-client] Hive array values not properly displayed in…

2019-08-15 Thread GitBox
lirui-apache commented on issue #9450: [FLINK-13711][sql-client] Hive array 
values not properly displayed in…
URL: https://github.com/apache/flink/pull/9450#issuecomment-521866698
 
 
   @bowenli86 Just added test for nested Integer array.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-15 Thread GitBox
walterddr commented on a change in pull request #9336: 
[FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#discussion_r314568340
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java
 ##
 @@ -180,6 +180,25 @@
" Flink on YARN on an environment with a restrictive 
firewall, this option allows specifying a range of" +
" allowed ports.");
 
+   /**
+* A non-negative integer indicating the priority for submitting a 
Flink YARN application. It will only take effect
+* if the Hadoop version >= 2.8.5 and YARN priority scheduling setting 
is enabled. Larger integer corresponds with
+* higher priority. If priority is negative or set to '-1'(default), 
Flink will unset yarn priority setting and use
+* cluster default priority.
+*
+* @see <a href="https://hadoop.apache.org/docs/r2.8.5/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">YARN Capacity Scheduling Doc</a>
+*/
+   public static final ConfigOption<Integer> APPLICATION_PRIORITY =
+   key("yarn.application.priority")
+   .defaultValue(-1)
+   .withDescription(Description.builder()
+   .text("A non-negative integer indicating the 
priority for submitting a Flink YARN application. It" +
+   " will only take effect if the Hadoop 
version >= 2.8.5 and YARN priority scheduling setting is enabled." +
 
 Review comment:
   I could've been looking for logs in the wrong place. @wzhero1 can probably 
provide more details regarding what signals in the log we should be looking at.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9433: [FLINK-13708] [table-planner-blink] transformations should be cleared after execution in blink planner

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9433: [FLINK-13708] [table-planner-blink] 
transformations should be cleared after execution in blink planner
URL: https://github.com/apache/flink/pull/9433#issuecomment-521131546
 
 
   ## CI report:
   
   * 22d047614613c293a7aca416268449b3cabcad6a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123164756)
   * 255e8d57f2eabf7fbfeefe73f10287493e8a5c2d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123375768)
   * aacac7867ac81946a8e4427334e91c65c0d3e08f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123451412)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #9336: [FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application

2019-08-15 Thread GitBox
walterddr commented on a change in pull request #9336: 
[FLINK-13548][Deployment/YARN]Support priority of the Flink YARN application
URL: https://github.com/apache/flink/pull/9336#discussion_r314567473
 
 

 ##
 File path: 
flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java
 ##
 @@ -180,6 +180,25 @@
" Flink on YARN on an environment with a restrictive 
firewall, this option allows specifying a range of" +
" allowed ports.");
 
+   /**
+* A non-negative integer indicating the priority for submitting a 
Flink YARN application. It will only take effect
+* if the Hadoop version >= 2.8.5 and YARN priority scheduling setting 
is enabled. Larger integer corresponds with
+* higher priority. If priority is negative or set to '-1'(default), 
Flink will unset yarn priority setting and use
+* cluster default priority.
+*
+* @see <a href="https://hadoop.apache.org/docs/r2.8.5/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">YARN Capacity Scheduling Doc</a>
+*/
+   public static final ConfigOption<Integer> APPLICATION_PRIORITY =
+   key("yarn.application.priority")
+   .defaultValue(-1)
+   .withDescription(Description.builder()
+   .text("A non-negative integer indicating the 
priority for submitting a Flink YARN application. It" +
+   " will only take effect if the Hadoop 
version >= 2.8.5 and YARN priority scheduling setting is enabled." +
 
 Review comment:
   @tillrohrmann Yes. I saw that the API has been there for a while. However, 
according to the documentation, priority scheduling only appears in the YARN 
documentation after 2.8.x. In fact, I tried the integration test in the 
`flink-yarn-test` module by compiling against the default version (2.4.x?) and 
against 2.8.x.
   
   My finding was: only the 2.8.x tests have specific logs indicating that the 
priority setting has taken effect:
   ```
   2019-08-15 19:46:52,182 INFO  org.apache.flink.yarn.YarnResourceManager  
   - Received new container: container_1565923587220_0001_01_02 
- Remaining pending container requests: 1
   2019-08-15 19:46:52,184 INFO  org.apache.flink.yarn.YarnResourceManager  
   - Removing container request Capability[]**Priority[1]**. Pending container requests 0.
   ```
Similar logs do not exist on 2.4.x.
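   For reference, once merged the option would be set like any other Flink 
configuration value; a hedged usage sketch (assuming the PR's 
`yarn.application.priority` key and semantics land as quoted above):
   ```java
   import org.apache.flink.configuration.Configuration;

   public class YarnPriorityExample {
       public static void main(String[] args) {
           // Equivalent to adding "yarn.application.priority: 10" to flink-conf.yaml.
           // Per the quoted description, the default of -1 (or any negative value)
           // leaves the priority unset so the cluster default applies, and the option
           // only takes effect on Hadoop >= 2.8.5 with YARN priority scheduling enabled.
           Configuration conf = new Configuration();
           conf.setInteger("yarn.application.priority", 10);
           System.out.println(conf.getInteger("yarn.application.priority", -1)); // 10
       }
   }
   ```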
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation 
of Hive source/sink
URL: https://github.com/apache/flink/pull/9217#issuecomment-514589043
 
 
   ## CI report:
   
   * 516e655f7f0853d6585ae5de2fbecc438d57e474 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120432519)
   * fee6f2df235f113b7757ce436ee127711b0094e6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121184693)
   * 61c360e0902ded2939ba3c8b9662a1b58074e4d1 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121348454)
   * 7dafc731904fb3ae9dcee24f851803fddf87b551 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/122371437)
   * b6348c4433292e5b0bccf5a04e3446e0dbff718b : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123187148)
   * 604e8feebba2b98b9264ad82e8fae9ddda066246 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123318475)
   * a18d7032cbcd03d5585c8937b257eb9ad352df29 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123455139)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13020) UT Failure: ChainLengthDecreaseTest

2019-08-15 Thread Yun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908654#comment-16908654
 ] 

Yun Tang commented on FLINK-13020:
--

[~NicoK], would you please check your branch again to confirm that your code is 
based on the latest master? From the detailed logs, I don't think the failing 
case is based on the latest code.

[https://transfer.sh/DlpXt/1715.8.tar.gz] contains the detailed logs of your 
[https://api.travis-ci.com/v3/job/225588484/log.txt]. Please take a look at the 
exception stack trace below:
{code:java}
17:30:17,445 ERROR 
org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest  -

Test testMigrationAndRestore[Migrate Savepoint: 
1.8](org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest)
 failed with:
java.util.concurrent.ExecutionException: 
java.util.concurrent.CompletionException: 
org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
cancellation from one of its inputs
    at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
    at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
    at 
org.apache.flink.client.program.MiniClusterClient.cancelWithSavepoint(MiniClusterClient.java:116)
    at 
org.apache.flink.test.state.operator.restore.AbstractOperatorRestoreTestBase.migrateJob(AbstractOperatorRestoreTestBase.java:139)
    at 
org.apache.flink.test.state.operator.restore.AbstractOperatorRestoreTestBase.testMigrationAndRestore(AbstractOperatorRestoreTestBase.java:103)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
{code}
Please pay attention to the line where the exception is thrown: 
{{org.apache.flink.test.state.operator.restore.AbstractOperatorRestoreTestBase.migrateJob(AbstractOperatorRestoreTestBase.java:*139*)}},
 which matches the [old 
code|https://github.com/apache/flink/blob/8a101ce8940ecb756524a55ac412a3c4ba8214cd/flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/AbstractOperatorRestoreTestBase.java#L139]
 from before June 25th, 2019.

 

 

> UT Failure: ChainLengthDecreaseTest
> ---
>
> Key: FLINK-13020
> URL: https://issues.apache.org/jira/browse/FLINK-13020
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Priority: Major
>
> {code:java}
> 05:47:24.893 [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 19.836 s <<< FAILURE! - in 
> org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest
> 05:47:24.895 [ERROR] testMigrationAndRestore[Migrate Savepoint: 
> 1.3](org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest)
>   Time elapsed: 1.501 s  <<< ERROR!
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
> cancellation from one of its inputs
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
> cancellation from one of its inputs
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
> received cancellation from one of its inputs
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
> received cancellation from one of its inputs
> ...
> 05:48:27.736 [ERROR] Errors: 
> 05:48:27.736 [ERROR]   
> ChainLengthDecreaseTest>AbstractOperatorRestoreTestBase.testMigrationAndRestore:102->AbstractOperatorRestoreTestBase.migrateJob:138
>  » Execution
> 05:48:27.736 [INFO] 
> {code}
> https://travis-ci.org/apache/flink/jobs/551053821



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] bowenli86 commented on issue #9450: [FLINK-13711][sql-client] Hive array values not properly displayed in…

2019-08-15 Thread GitBox
bowenli86 commented on issue #9450: [FLINK-13711][sql-client] Hive array values 
not properly displayed in…
URL: https://github.com/apache/flink/pull/9450#issuecomment-521864496
 
 
   > > The change looks fine. However, I'm wondering if we need to convert 
recursively because otherwise nested non-primitive types will be still showed 
as non-meaningful stuff.
   > 
   > @xuefuz `Arrays.deepToString` will take care of that for us as indicated 
by the JavaDoc:
   > 
   > ```
   >  * Returns a string representation of the "deep contents" of the 
specified
   >  * array.  If the array contains other arrays as elements, the string
   >  * representation contains their contents and so on.  This method is
   >  * designed for converting multidimensional arrays to strings.
   > ```
   
   in that case, can we add some unit tests to verify that?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink

2019-08-15 Thread GitBox
lirui-apache commented on issue #9217: [FLINK-13277][hive] add documentation of 
Hive source/sink
URL: https://github.com/apache/flink/pull/9217#issuecomment-521864363
 
 
   @bowenli86 Thanks for letting me know. I have synced the changes to zh docs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjuwangg commented on issue #9457: [FLINK-13741][table] FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread GitBox
zjuwangg commented on issue #9457: [FLINK-13741][table] 
FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
functions' names
URL: https://github.com/apache/flink/pull/9457#issuecomment-521864276
 
 
   Thanks for your effort!
   Should we return Flink built-in functions from the `getUserDefinedFunctions` 
method?
   IMO, a user-defined function means an external function registered in the 
FunctionCatalog.
   Of course, it also makes sense to regard the function catalog as an empty 
container after initialization and to treat everything registered in it as a 
UDF. If so, we'd better add a note in the method comments.
   Just minor advice.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #9450: [FLINK-13711][sql-client] Hive array values not properly displayed in…

2019-08-15 Thread GitBox
lirui-apache commented on issue #9450: [FLINK-13711][sql-client] Hive array 
values not properly displayed in…
URL: https://github.com/apache/flink/pull/9450#issuecomment-521862531
 
 
   @twalthr Better if you can also have a look. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] lirui-apache commented on issue #9450: [FLINK-13711][sql-client] Hive array values not properly displayed in…

2019-08-15 Thread GitBox
lirui-apache commented on issue #9450: [FLINK-13711][sql-client] Hive array 
values not properly displayed in…
URL: https://github.com/apache/flink/pull/9450#issuecomment-521860241
 
 
   > The change looks fine. However, I'm wondering if we need to convert 
recursively because otherwise nested non-primitive types will be still showed 
as non-meaningful stuff.
   
   @xuefuz `Arrays.deepToString` will take care of that for us as indicated by 
the JavaDoc:
   ```
* Returns a string representation of the "deep contents" of the 
specified
* array.  If the array contains other arrays as elements, the string
* representation contains their contents and so on.  This method is
* designed for converting multidimensional arrays to strings.
   ```
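   A quick self-contained check of that claim with plain `java.util.Arrays`, 
independent of the Flink change:
   ```java
   import java.util.Arrays;

   public class DeepToStringDemo {
       public static void main(String[] args) {
           Integer[][] nested = {{1, 2}, {3, 4}};

           // Arrays.toString only renders the outer elements' default toString()...
           System.out.println(Arrays.toString(nested));     // [[Ljava.lang.Integer;@..., [Ljava.lang.Integer;@...]
           // ...while Arrays.deepToString recurses into nested arrays.
           System.out.println(Arrays.deepToString(nested)); // [[1, 2], [3, 4]]
       }
   }
   ```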


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-13688) HiveCatalogUseBlinkITCase.testBlinkUdf constantly failed with 1.9.0-rc2

2019-08-15 Thread Kurt Young (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Young closed FLINK-13688.
--
   Resolution: Fixed
Fix Version/s: (was: 1.9.0)
   1.9.1

merged in master (1.10.0): a194b37d9b99a47174de9108a937f821816d61f5

merged in 1.9.1: 03b3430135a96c8557e0ae64d5c73b1e7d4b2baf

> HiveCatalogUseBlinkITCase.testBlinkUdf constantly failed with 1.9.0-rc2
> ---
>
> Key: FLINK-13688
> URL: https://issues.apache.org/jira/browse/FLINK-13688
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Tests
>Affects Versions: 1.9.0
> Environment: Linux server
> kernel version: 3.10.0
> java version: "1.8.0_102"
> processor count: 96
>Reporter: Kurt Young
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I tried to build Flink 1.9.0-rc2 from source and ran all tests on a Linux 
> server; HiveCatalogUseBlinkITCase.testBlinkUdf constantly fails. 
>  
> Fail trace:
> {code:java}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 313.228 s <<< FAILURE! - in 
> org.apache.flink.table.catalog.hive.HiveCatalogUseBlinkITCase
> [ERROR] 
> testBlinkUdf(org.apache.flink.table.catalog.hive.HiveCatalogUseBlinkITCase) 
> Time elapsed: 305.155 s <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogUseBlinkITCase.testBlinkUdf(HiveCatalogUseBlinkITCase.java:180)
> Caused by: 
> org.apache.flink.runtime.resourcemanager.exceptions.UnfulfillableSlotRequestException:
>  Could not fulfill slot request 35cf6fdc1b525de9b6eed13894e2e31d. Requested 
> resource profile (ResourceProfile{cpuCores=0.0, heapMemoryInMB=0, 
> directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0, 
> managedMemoryInMB=128}) is unfulfillable.
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] KurtYoung closed pull request #9417: [FLINK-13688][hive] Limit the parallelism/memory of HiveCatalogUseBlinkITCase

2019-08-15 Thread GitBox
KurtYoung closed pull request #9417: [FLINK-13688][hive] Limit the 
parallelism/memory of HiveCatalogUseBlinkITCase
URL: https://github.com/apache/flink/pull/9417
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] KurtYoung commented on issue #9417: [FLINK-13688][hive] Limit the parallelism/memory of HiveCatalogUseBlinkITCase

2019-08-15 Thread GitBox
KurtYoung commented on issue #9417: [FLINK-13688][hive] Limit the 
parallelism/memory of HiveCatalogUseBlinkITCase
URL: https://github.com/apache/flink/pull/9417#issuecomment-521855975
 
 
   Verified locally, +1.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] mikiaichiyu closed pull request #9353: Add a new connector ' flink-connector-rocketmq'

2019-08-15 Thread GitBox
mikiaichiyu closed pull request #9353: Add a new connector ' 
flink-connector-rocketmq'
URL: https://github.com/apache/flink/pull/9353
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] hehuiyuan commented on issue #9100: [hotfix] Update `findAndCreateTableSource` method's annotation in TableFactoryUtil class

2019-08-15 Thread GitBox
hehuiyuan commented on issue #9100: [hotfix] Update `findAndCreateTableSource` 
method's annotation in TableFactoryUtil class
URL: https://github.com/apache/flink/pull/9100#issuecomment-521853368
 
 
   ?  


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9433: [FLINK-13708] [table-planner-blink] transformations should be cleared after execution in blink planner

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9433: [FLINK-13708] [table-planner-blink] 
transformations should be cleared after execution in blink planner
URL: https://github.com/apache/flink/pull/9433#issuecomment-521131546
 
 
   ## CI report:
   
   * 22d047614613c293a7aca416268449b3cabcad6a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123164756)
   * 255e8d57f2eabf7fbfeefe73f10287493e8a5c2d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123375768)
   * aacac7867ac81946a8e4427334e91c65c0d3e08f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123451412)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] zjffdu commented on issue #9233: [FLINK-13415][docs] Document how to use hive connector in scala shell

2019-08-15 Thread GitBox
zjffdu commented on issue #9233: [FLINK-13415][docs] Document how to use hive 
connector in scala shell
URL: https://github.com/apache/flink/pull/9233#issuecomment-521852329
 
 
   Thanks @bowenli86 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] godfreyhe commented on a change in pull request #9433: [FLINK-13708] [table-planner-blink] transformations should be cleared after execution in blink planner

2019-08-15 Thread GitBox
godfreyhe commented on a change in pull request #9433: [FLINK-13708] 
[table-planner-blink] transformations should be cleared after execution in 
blink planner
URL: https://github.com/apache/flink/pull/9433#discussion_r314556026
 
 

 ##
 File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/api/TableEnvironmentTest.scala
 ##
 @@ -75,4 +84,41 @@ class TableEnvironmentTest {
   "  LogicalTableScan(table=[[default_catalog, default_database, 
MyTable]])\n"
 assertEquals(expected, actual)
   }
+
+  @Test
+  def testExecuteTwiceUsingSameTableEnv(): Unit = {
+val settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
 
 Review comment:
   added `TableEnvironmentITCase` to test batch and stream
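   A rough sketch of what such a test exercises (simplified; the sink and table 
names are assumptions, and the actual ITCase covers both batch and stream):
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;

   public class ExecuteTwiceSketch {
       public static void main(String[] args) throws Exception {
           EnvironmentSettings settings =
               EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
           TableEnvironment tEnv = TableEnvironment.create(settings);

           // Assumes "MyTable", "sink1" and "sink2" are registered beforehand.
           tEnv.sqlUpdate("INSERT INTO sink1 SELECT * FROM MyTable");
           tEnv.execute("job1");

           // With FLINK-13708 fixed, the buffered transformations are cleared by
           // execute(), so the second job contains only the second INSERT.
           tEnv.sqlUpdate("INSERT INTO sink2 SELECT * FROM MyTable");
           tEnv.execute("job2");
       }
   }
   ```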




[jira] [Commented] (FLINK-13688) HiveCatalogUseBlinkITCase.testBlinkUdf constantly failed with 1.9.0-rc2

2019-08-15 Thread Kurt Young (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908601#comment-16908601
 ] 

Kurt Young commented on FLINK-13688:


I'll merge this ASAP

> HiveCatalogUseBlinkITCase.testBlinkUdf constantly failed with 1.9.0-rc2
> ---
>
> Key: FLINK-13688
> URL: https://issues.apache.org/jira/browse/FLINK-13688
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Tests
>Affects Versions: 1.9.0
> Environment: Linux server
> kernel version: 3.10.0
> java version: "1.8.0_102"
> processor count: 96
>Reporter: Kurt Young
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I tried to build Flink 1.9.0-rc2 from source and ran all tests on a Linux 
> server; HiveCatalogUseBlinkITCase.testBlinkUdf constantly fails. 
>  
> Failure trace:
> {code:java}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 313.228 s <<< FAILURE! - in 
> org.apache.flink.table.catalog.hive.HiveCatalogUseBlinkITCase
> [ERROR] 
> testBlinkUdf(org.apache.flink.table.catalog.hive.HiveCatalogUseBlinkITCase) 
> Time elapsed: 305.155 s <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogUseBlinkITCase.testBlinkUdf(HiveCatalogUseBlinkITCase.java:180)
> Caused by: 
> org.apache.flink.runtime.resourcemanager.exceptions.UnfulfillableSlotRequestException:
>  Could not fulfill slot request 35cf6fdc1b525de9b6eed13894e2e31d. Requested 
> resource profile (ResourceProfile{cpuCores=0.0, heapMemoryInMB=0, 
> directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0, 
> managedMemoryInMB=128}) is unfulfillable.
> {code}
>  





[GitHub] [flink] flinkbot edited a comment on issue #9457: [FLINK-13741][table] FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9457: [FLINK-13741][table] 
FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
functions' names
URL: https://github.com/apache/flink/pull/9457#issuecomment-521829752
 
 
   ## CI report:
   
   * 55c0e5843e029f022ff59fe14a9e6c1d2c5ac69e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123443311)
   * 006236fff94d0204223a2c3b89f621da3248f6a4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123444248)
   




[GitHub] [flink] flinkbot edited a comment on issue #9456: FLINK-13588 flink-streaming-java don't throw away exception info in logging

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9456: FLINK-13588 flink-streaming-java 
don't throw away exception info in logging 
URL: https://github.com/apache/flink/pull/9456#issuecomment-521825874
 
 
   ## CI report:
   
   * 1242679f7bd5ec3f7c1115006e978267abafc84b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/123441772)
   * c2e57b175b07e9ee854598140676ab428c2b4b8f : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123442281)
   




[GitHub] [flink] xuefuz commented on a change in pull request #9457: [FLINK-13741][table] FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread GitBox
xuefuz commented on a change in pull request #9457: [FLINK-13741][table] 
FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
functions' names
URL: https://github.com/apache/flink/pull/9457#discussion_r314539092
 
 

 ##
 File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/FunctionCatalog.java
 ##
 @@ -226,7 +235,8 @@ private void registerFunction(String name, 
FunctionDefinition functionDefinition
userFunctions.put(normalizeName(name), functionDefinition);
}
 
-   private String normalizeName(String name) {
+   @VisibleForTesting
+   protected static String normalizeName(String name) {
 
 Review comment:
   It seems that the scope can be "package" level; "protected" doesn't seem 
applicable since this is a static method.
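
A minimal sketch of the suggested package-private variant (the method body is 
hypothetical, for illustration only):

{code:java}
// Package-private static visibility is enough for tests that live in the
// same package, so no wider modifier is needed on this static method.
@VisibleForTesting
static String normalizeName(String name) {
    return name.toLowerCase(); // assumed normalization logic
}
{code}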




[GitHub] [flink] bowenli86 commented on issue #9447: [FLINK-13643][docs]Document the workaround for users with a different minor Hive version

2019-08-15 Thread GitBox
bowenli86 commented on issue #9447: [FLINK-13643][docs]Document the workaround 
for users with a different minor Hive version
URL: https://github.com/apache/flink/pull/9447#issuecomment-521833505
 
 
   @flinkbot attention @zjuwangg 




[GitHub] [flink] bowenli86 commented on issue #9342: [FLINK-13438][hive] Fix DataTypes.DATE/TIME/TIMESTAMP support for hive connectors

2019-08-15 Thread GitBox
bowenli86 commented on issue #9342: [FLINK-13438][hive] Fix 
DataTypes.DATE/TIME/TIMESTAMP support for hive connectors
URL: https://github.com/apache/flink/pull/9342#issuecomment-521833310
 
 
   Hi @TsReaper @lirui-apache, can you sync up quickly offline to come to a 
resolution?
   
   As Timo mentioned, I'm afraid we may not be able to merge it into the 1.9 
branch anymore once 1.9.0 is officially released.




[jira] [Commented] (FLINK-13688) HiveCatalogUseBlinkITCase.testBlinkUdf constantly failed with 1.9.0-rc2

2019-08-15 Thread Bowen Li (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908547#comment-16908547
 ] 

Bowen Li commented on FLINK-13688:
--

[~lzljs3620320] [~ykt836] what's the status of this PR? Though it's not a 
blocker, it has been blocking our local development and testing. I have not 
been able to test flink-connector-hive successfully for quite a while.

Can you guys help to fix this ASAP?

cc [~xuefuz] [~lirui] [~Terry1897]

> HiveCatalogUseBlinkITCase.testBlinkUdf constantly failed with 1.9.0-rc2
> ---
>
> Key: FLINK-13688
> URL: https://issues.apache.org/jira/browse/FLINK-13688
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive, Tests
>Affects Versions: 1.9.0
> Environment: Linux server
> kernel version: 3.10.0
> java version: "1.8.0_102"
> processor count: 96
>Reporter: Kurt Young
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I tried to build Flink 1.9.0-rc2 from source and ran all tests on a Linux 
> server; HiveCatalogUseBlinkITCase.testBlinkUdf constantly fails. 
>  
> Failure trace:
> {code:java}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 313.228 s <<< FAILURE! - in 
> org.apache.flink.table.catalog.hive.HiveCatalogUseBlinkITCase
> [ERROR] 
> testBlinkUdf(org.apache.flink.table.catalog.hive.HiveCatalogUseBlinkITCase) 
> Time elapsed: 305.155 s <<< ERROR!
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> at 
> org.apache.flink.table.catalog.hive.HiveCatalogUseBlinkITCase.testBlinkUdf(HiveCatalogUseBlinkITCase.java:180)
> Caused by: 
> org.apache.flink.runtime.resourcemanager.exceptions.UnfulfillableSlotRequestException:
>  Could not fulfill slot request 35cf6fdc1b525de9b6eed13894e2e31d. Requested 
> resource profile (ResourceProfile{cpuCores=0.0, heapMemoryInMB=0, 
> directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0, 
> managedMemoryInMB=128}) is unfulfillable.
> {code}
>  





[GitHub] [flink] flinkbot edited a comment on issue #9457: [FLINK-13741][table] FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9457: [FLINK-13741][table] 
FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
functions' names
URL: https://github.com/apache/flink/pull/9457#issuecomment-521829752
 
 
   ## CI report:
   
   * 55c0e5843e029f022ff59fe14a9e6c1d2c5ac69e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123443311)
   * 006236fff94d0204223a2c3b89f621da3248f6a4 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123444248)
   




[GitHub] [flink] flinkbot edited a comment on issue #9446: [hotfix][hive][doc] refine Hive related documentations

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9446: [hotfix][hive][doc] refine Hive 
related documentations
URL: https://github.com/apache/flink/pull/9446#issuecomment-521466560
 
 
   ## CI report:
   
   * 5609af375afc4e9cfd73d8181db2d294ba78a1b3 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123307993)
   * 2cfd367d5fb1173c1070e36bb9d1edb056bb80c0 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/123437588)
   




[GitHub] [flink] flinkbot commented on issue #9457: [FLINK-13741][table] FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread GitBox
flinkbot commented on issue #9457: [FLINK-13741][table] 
FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
functions' names
URL: https://github.com/apache/flink/pull/9457#issuecomment-521829752
 
 
   ## CI report:
   
   * 55c0e5843e029f022ff59fe14a9e6c1d2c5ac69e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/123443311)
   




[GitHub] [flink] asfgit closed pull request #9445: [FLINK-13706][hive] add documentation of how to use Hive functions in…

2019-08-15 Thread GitBox
asfgit closed pull request #9445: [FLINK-13706][hive] add documentation of how 
to use Hive functions in…
URL: https://github.com/apache/flink/pull/9445
 
 
   




[jira] [Closed] (FLINK-13706) add documentation of how to use Hive functions in Flink

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li closed FLINK-13706.

Resolution: Fixed

merged in master: 8692e298d09b13efae7cb0d4798be668aca59a0b; 1.9.0: 
c9dd2f7c04c7405d246cd7880064321d92114d0e

> add documentation of how to use Hive functions in Flink
> ---
>
> Key: FLINK-13706
> URL: https://issues.apache.org/jira/browse/FLINK-13706
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive, Documentation
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[GitHub] [flink] flinkbot commented on issue #9457: [FLINK-13741][table] FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread GitBox
flinkbot commented on issue #9457: [FLINK-13741][table] 
FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
functions' names
URL: https://github.com/apache/flink/pull/9457#issuecomment-521828590
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 55c0e5843e029f022ff59fe14a9e6c1d2c5ac69e (Thu Aug 15 
23:11:19 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] bowenli86 commented on issue #9457: [FLINK-13741][table] FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread GitBox
bowenli86 commented on issue #9457: [FLINK-13741][table] 
FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
functions' names
URL: https://github.com/apache/flink/pull/9457#issuecomment-521828517
 
 
   cc release manager: @KurtYoung @tzulitai 
   
   cc @xuefuz @lirui-apache @zjuwangg @twalthr @dawidwys 




[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13741:
---
Labels: pull-request-available  (was: )

> FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
> functions' names
> 
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> It means that currently users who call {{tEnv.listUserDefinedFunctions()}} in 
> the Table API or {{show functions;}} through SQL will not be able to see 
> Flink's built-in functions.
> Should be fixed to include Flink built-in functions' names





[GitHub] [flink] bowenli86 opened a new pull request #9457: [FLINK-13741][table] FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread GitBox
bowenli86 opened a new pull request #9457: [FLINK-13741][table] 
FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
functions' names
URL: https://github.com/apache/flink/pull/9457
 
 
   ## What is the purpose of the change
   
   FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.
   
   It means that currently users who call {{tEnv.listUserDefinedFunctions()}} 
in the Table API or {{show functions;}} through SQL will not be able to see 
Flink's built-in functions.
   
   Should be fixed to include Flink built-in functions' names
   
   ## Brief change log
   
   - made FunctionCatalog.getUserDefinedFunctions() include Flink built-in 
functions' names
   - deduplicated function names by replacing the list with a set in 
FunctionCatalog.getUserDefinedFunctions() (see the sketch below)
   - added unit tests
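   
   A rough sketch of the direction described above (hypothetical, not the 
actual patch; the helper name and signature are illustrative):
   
   {code:java}
   import java.util.HashSet;
   import java.util.Set;
   
   import org.apache.flink.table.functions.BuiltInFunctionDefinitions;
   
   // Merge user-defined/catalog function names with the built-in function
   // names and de-duplicate them via a Set.
   static String[] allFunctionNames(Set<String> userAndCatalogFunctionNames) {
       Set<String> names = new HashSet<>(userAndCatalogFunctionNames);
       BuiltInFunctionDefinitions.getDefinitions()
           .forEach(def -> names.add(def.getName()));
       return names.toArray(new String[0]);
   }
   {code}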
   
   ## Verifying this change
   
   This change added tests and can be verified as follows: 
`FunctionCatalogTest.testGetBuiltInFunctions()`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (JavaDocs)
   




[jira] [Closed] (FLINK-12984) Only call Histogram#getStatistics() once per set of retrieved statistics

2019-08-15 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber closed FLINK-12984.
---
   Resolution: Fixed
Fix Version/s: 1.10.0

fixed on master via d9f012746f5b8b36ebb416f70e9f5bac93538d5d

> Only call Histogram#getStatistics() once per set of retrieved statistics
> 
>
> Key: FLINK-12984
> URL: https://issues.apache.org/jira/browse/FLINK-12984
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> On some occasions, {{Histogram#getStatistics()}} was called multiple times to 
> retrieve different statistics. However, at least the Dropwizard 
> implementation has some constant overhead per call, and we should rather 
> interpret this method as returning a point-in-time snapshot of the histogram 
> in order to get consistent values when querying them.
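
A minimal sketch of the usage pattern this change encourages, assuming the 
accessors of org.apache.flink.metrics.HistogramStatistics:

{code:java}
// Take one point-in-time snapshot and read every value from it, instead of
// calling getStatistics() once per retrieved statistic.
HistogramStatistics stats = histogram.getStatistics();
long min = stats.getMin();
long max = stats.getMax();
double mean = stats.getMean();
double p99 = stats.getQuantile(0.99);
{code}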





[GitHub] [flink] NicoK merged pull request #8877: [FLINK-12984][metrics] only call Histogram#getStatistics() once where possible

2019-08-15 Thread GitBox
NicoK merged pull request #8877: [FLINK-12984][metrics] only call 
Histogram#getStatistics() once where possible
URL: https://github.com/apache/flink/pull/8877
 
 
   




[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() should include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Summary: FunctionCatalog.getUserDefinedFunctions() should include Flink 
built-in functions' names  (was: FunctionCatalog.getUserDefinedFunctions() does 
not include Flink built-in functions' names)

> FunctionCatalog.getUserDefinedFunctions() should include Flink built-in 
> functions' names
> 
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.9.0
>
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> It means currently if users call {{tEnv.listUserDefinedFunctions()}} in Table 
> API or {{show functions;}} thru SQL would not be able to see Flink's built-in 
> functions.
> Should be fixed to include Flink built-in functions' names





[GitHub] [flink] flinkbot edited a comment on issue #9456: FLINK-13588 flink-streaming-java don't throw away exception info in logging

2019-08-15 Thread GitBox
flinkbot edited a comment on issue #9456: FLINK-13588 flink-streaming-java 
don't throw away exception info in logging 
URL: https://github.com/apache/flink/pull/9456#issuecomment-521825874
 
 
   ## CI report:
   
   * 1242679f7bd5ec3f7c1115006e978267abafc84b : CANCELED 
[Build](https://travis-ci.com/flink-ci/flink/builds/123441772)
   * c2e57b175b07e9ee854598140676ab428c2b4b8f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123442281)
   




[jira] [Closed] (FLINK-12987) DescriptiveStatisticsHistogram#getCount does not return the number of elements seen

2019-08-15 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber closed FLINK-12987.
---
   Resolution: Fixed
Fix Version/s: 1.10.0

fixed on master via fd9ef60cc8448a5f4d1915973e168aad073d8e8d

> DescriptiveStatisticsHistogram#getCount does not return the number of 
> elements seen
> ---
>
> Key: FLINK-12987
> URL: https://issues.apache.org/jira/browse/FLINK-12987
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.6.4, 1.7.2, 1.8.0, 1.9.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{DescriptiveStatisticsHistogram#getCount()}} returns the number of elements 
> in the current window and not the total number of elements seen over time. In 
> contrast, {{DropwizardHistogramWrapper}} does this correctly.
> We should unify the behaviour and add a unit test for it (there is no generic 
> histogram test yet).
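
A unit-test sketch of the unified behaviour described above (hypothetical test 
code, assuming a sliding window of size 3):

{code:java}
// getCount() should return the total number of recorded values, not the
// number of elements currently retained in the sliding window.
Histogram histogram = new DescriptiveStatisticsHistogram(3);
for (int i = 0; i < 5; i++) {
    histogram.update(i);
}
assertEquals(5, histogram.getCount()); // 5 updates seen, window only holds 3
{code}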





[jira] [Updated] (FLINK-12987) DescriptiveStatisticsHistogram#getCount does not return the number of elements seen

2019-08-15 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-12987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-12987:

Affects Version/s: 1.9.0

> DescriptiveStatisticsHistogram#getCount does not return the number of 
> elements seen
> ---
>
> Key: FLINK-12987
> URL: https://issues.apache.org/jira/browse/FLINK-12987
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.6.4, 1.7.2, 1.8.0, 1.9.0
>Reporter: Nico Kruber
>Assignee: Nico Kruber
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{DescriptiveStatisticsHistogram#getCount()}} returns the number of elements 
> in the current window and not the total number of elements seen over time. In 
> contrast, {{DropwizardHistogramWrapper}} does this correctly.
> We should unify the behaviour and add a unit test for it (there is no generic 
> histogram test yet).





[GitHub] [flink] NicoK merged pull request #8886: [FLINK-12987][metrics] fix DescriptiveStatisticsHistogram#getCount() not returning the number of elements seen

2019-08-15 Thread GitBox
NicoK merged pull request #8886: [FLINK-12987][metrics] fix 
DescriptiveStatisticsHistogram#getCount() not returning the number of elements 
seen
URL: https://github.com/apache/flink/pull/8886
 
 
   




[jira] [Comment Edited] (FLINK-13020) UT Failure: ChainLengthDecreaseTest

2019-08-15 Thread Nico Kruber (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908534#comment-16908534
 ] 

Nico Kruber edited comment on FLINK-13020 at 8/15/19 11:00 PM:
---

Actually, I just encountered this error in a branch of mine which is based on 
[latest 
master|https://github.com/apache/flink/commit/428ce1b938813fba287a51bf86e6c52ef54453cb].
 So either there has been a regression, or the fix does not work in all cases, 
or it is not a duplicate after all:
{code}
17:30:18.083 [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 14.113 s <<< FAILURE! - in 
org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest
17:30:18.083 [ERROR] testMigrationAndRestore[Migrate Savepoint: 
1.8](org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest)
  Time elapsed: 0.268 s  <<< ERROR!
java.util.concurrent.ExecutionException: 
java.util.concurrent.CompletionException: 
org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
cancellation from one of its inputs
Caused by: java.util.concurrent.CompletionException: 
org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
cancellation from one of its inputs
Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
received cancellation from one of its inputs
Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
received cancellation from one of its inputs
{code}

https://api.travis-ci.com/v3/job/225588484/log.txt

{code}
17:30:17,408 INFO  org.apache.flink.streaming.runtime.tasks.StreamTask  
 - Configuring application-defined state backend with job/cluster config
17:30:17,409 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph   
 - Source: Custom Source (2/4) (ffb5e756d6acddab9cab76e2a0a32904) switched from 
DEPLOYING to RUNNING.
17:30:17,409 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph   
 - Map (4/4) (79fcf333d4d11eae297b65e52e397658) switched from DEPLOYING to 
RUNNING.
17:30:17,409 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph   
 - Map (2/4) (aedaa4a61e74a3b766fafbef46e6aea6) switched from DEPLOYING to 
RUNNING.
17:30:17,409 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph   
 - Source: Custom Source (4/4) (a1f07e2714e73b2533291a322961ea67) switched from 
DEPLOYING to RUNNING.
17:30:17,409 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph   
 - Source: Custom Source (3/4) (6073be38d7be0ee571558f1dc865837a) switched from 
DEPLOYING to RUNNING.
17:30:17,409 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph   
 - Map (1/4) (e4bc84d8137769b513d1a5107027500d) switched from DEPLOYING to 
RUNNING.
17:30:17,409 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph   
 - Map (3/4) (6834950d9742da9c6a784ecc5ee892df) switched from DEPLOYING to 
RUNNING.
17:30:17,409 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator
 - Checkpoint triggering task Source: Custom Source (1/4) of job 
075cea7da1d0690f96c879ae07b058c0 is not in state RUNNING but DEPLOYING instead. 
Aborting checkpoint.
17:30:17,413 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator
 - Checkpoint triggering task Source: Custom Source (1/4) of job 
075cea7da1d0690f96c879ae07b058c0 is not in state RUNNING but DEPLOYING instead. 
Aborting checkpoint.
17:30:17,414 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator
 - Checkpoint triggering task Source: Custom Source (1/4) of job 
075cea7da1d0690f96c879ae07b058c0 is not in state RUNNING but DEPLOYING instead. 
Aborting checkpoint.
17:30:17,416 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator
 - Checkpoint triggering task Source: Custom Source (1/4) of job 
075cea7da1d0690f96c879ae07b058c0 is not in state RUNNING but DEPLOYING instead. 
Aborting checkpoint.
17:30:17,417 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator
 - Checkpoint triggering task Source: Custom Source (1/4) of job 
075cea7da1d0690f96c879ae07b058c0 is not in state RUNNING but DEPLOYING instead. 
Aborting checkpoint.
17:30:17,423 INFO  org.apache.flink.runtime.taskmanager.Task
 - Source: Custom Source (1/4) (8b302fefb0c10b7fd0b66f4fdb253632) switched from 
DEPLOYING to RUNNING.
17:30:17,423 INFO  org.apache.flink.streaming.runtime.tasks.StreamTask  
 - Using application-defined state backend: MemoryStateBackend (data in heap 
memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 'null', 
asynchronous: UNDEFINED, maxStateSize: 5242880)
17:30:17,423 INFO  org.apache.flink.streaming.runtime.tasks.StreamTask  
 - Configuring application-defined state backend with job/cluster config
17:30:17,424 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph   
 - Source: Custom Source (1/4) ...
{code}

[jira] [Updated] (FLINK-13020) UT Failure: ChainLengthDecreaseTest

2019-08-15 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber updated FLINK-13020:

Affects Version/s: 1.10.0

> UT Failure: ChainLengthDecreaseTest
> ---
>
> Key: FLINK-13020
> URL: https://issues.apache.org/jira/browse/FLINK-13020
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.10.0
>Reporter: Bowen Li
>Priority: Major
>
> {code:java}
> 05:47:24.893 [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 19.836 s <<< FAILURE! - in 
> org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest
> 05:47:24.895 [ERROR] testMigrationAndRestore[Migrate Savepoint: 
> 1.3](org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest)
>   Time elapsed: 1.501 s  <<< ERROR!
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
> cancellation from one of its inputs
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
> cancellation from one of its inputs
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
> received cancellation from one of its inputs
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
> received cancellation from one of its inputs
> ...
> 05:48:27.736 [ERROR] Errors: 
> 05:48:27.736 [ERROR]   
> ChainLengthDecreaseTest>AbstractOperatorRestoreTestBase.testMigrationAndRestore:102->AbstractOperatorRestoreTestBase.migrateJob:138
>  » Execution
> 05:48:27.736 [INFO] 
> {code}
> https://travis-ci.org/apache/flink/jobs/551053821





[jira] [Commented] (FLINK-13020) UT Failure: ChainLengthDecreaseTest

2019-08-15 Thread Nico Kruber (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908534#comment-16908534
 ] 

Nico Kruber commented on FLINK-13020:
-

Actually, I just encountered this error in a branch of mine which is based on 
[latest 
master|https://github.com/apache/flink/commit/428ce1b938813fba287a51bf86e6c52ef54453cb].
 So either there has been a regression, or the fix does not work in all cases, 
or it is not a duplicate after all:
{code}
17:30:18.083 [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 14.113 s <<< FAILURE! - in 
org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest
17:30:18.083 [ERROR] testMigrationAndRestore[Migrate Savepoint: 
1.8](org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest)
  Time elapsed: 0.268 s  <<< ERROR!
java.util.concurrent.ExecutionException: 
java.util.concurrent.CompletionException: 
org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
cancellation from one of its inputs
Caused by: java.util.concurrent.CompletionException: 
org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
cancellation from one of its inputs
Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
received cancellation from one of its inputs
Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
received cancellation from one of its inputs
{code}

https://api.travis-ci.com/v3/job/225588484/log.txt

> UT Failure: ChainLengthDecreaseTest
> ---
>
> Key: FLINK-13020
> URL: https://issues.apache.org/jira/browse/FLINK-13020
> Project: Flink
>  Issue Type: Improvement
>Reporter: Bowen Li
>Priority: Major
>
> {code:java}
> 05:47:24.893 [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 19.836 s <<< FAILURE! - in 
> org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest
> 05:47:24.895 [ERROR] testMigrationAndRestore[Migrate Savepoint: 
> 1.3](org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest)
>   Time elapsed: 1.501 s  <<< ERROR!
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
> cancellation from one of its inputs
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
> cancellation from one of its inputs
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
> received cancellation from one of its inputs
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
> received cancellation from one of its inputs
> ...
> 05:48:27.736 [ERROR] Errors: 
> 05:48:27.736 [ERROR]   
> ChainLengthDecreaseTest>AbstractOperatorRestoreTestBase.testMigrationAndRestore:102->AbstractOperatorRestoreTestBase.migrateJob:138
>  » Execution
> 05:48:27.736 [INFO] 
> {code}
> https://travis-ci.org/apache/flink/jobs/551053821





[GitHub] [flink] flinkbot commented on issue #9456: FLINK-13588 flink-streaming-java don't throw away exception info in logging

2019-08-15 Thread GitBox
flinkbot commented on issue #9456: FLINK-13588 flink-streaming-java don't throw 
away exception info in logging 
URL: https://github.com/apache/flink/pull/9456#issuecomment-521825874
 
 
   ## CI report:
   
   * 1242679f7bd5ec3f7c1115006e978267abafc84b : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/123441772)
   




[jira] [Reopened] (FLINK-13020) UT Failure: ChainLengthDecreaseTest

2019-08-15 Thread Nico Kruber (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nico Kruber reopened FLINK-13020:
-

> UT Failure: ChainLengthDecreaseTest
> ---
>
> Key: FLINK-13020
> URL: https://issues.apache.org/jira/browse/FLINK-13020
> Project: Flink
>  Issue Type: Improvement
>Reporter: Bowen Li
>Priority: Major
>
> {code:java}
> 05:47:24.893 [ERROR] Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 19.836 s <<< FAILURE! - in 
> org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest
> 05:47:24.895 [ERROR] testMigrationAndRestore[Migrate Savepoint: 
> 1.3](org.apache.flink.test.state.operator.restore.unkeyed.ChainLengthDecreaseTest)
>   Time elapsed: 1.501 s  <<< ERROR!
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
> cancellation from one of its inputs
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Task received 
> cancellation from one of its inputs
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
> received cancellation from one of its inputs
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: Task 
> received cancellation from one of its inputs
> ...
> 05:48:27.736 [ERROR] Errors: 
> 05:48:27.736 [ERROR]   
> ChainLengthDecreaseTest>AbstractOperatorRestoreTestBase.testMigrationAndRestore:102->AbstractOperatorRestoreTestBase.migrateJob:138
>  » Execution
> 05:48:27.736 [INFO] 
> {code}
> https://travis-ci.org/apache/flink/jobs/551053821





[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() does not include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: 
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

It means that currently users who call {{tEnv.listUserDefinedFunctions()}} in 
the Table API or {{show functions;}} through SQL will not be able to see 
Flink's built-in functions.

Should be fixed to include Flink built-in functions' names

  was:
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

Should be fixed to include Flink built-in functions' names


> FunctionCatalog.getUserDefinedFunctions() does not include Flink built-in 
> functions' names
> --
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.9.0
>
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> It means that currently users who call {{tEnv.listUserDefinedFunctions()}} in 
> the Table API or {{show functions;}} through SQL will not be able to see 
> Flink's built-in functions.
> Should be fixed to include Flink built-in functions' names





[GitHub] [flink] flinkbot commented on issue #9456: FLINK-13588 flink-streaming-java don't throw away exception info in logging

2019-08-15 Thread GitBox
flinkbot commented on issue #9456: FLINK-13588 flink-streaming-java don't throw 
away exception info in logging 
URL: https://github.com/apache/flink/pull/9456#issuecomment-521824228
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 58f1a0af33fda7912eba438b5efa6929aa702ccb (Thu Aug 15 
22:49:20 UTC 2019)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Commented] (FLINK-13588) StreamTask.handleAsyncException throws away the exception cause

2019-08-15 Thread John Lonergan (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908529#comment-16908529
 ] 

John Lonergan commented on FLINK-13588:
---

See [https://github.com/apache/flink/pull/9456]

Hi, I've done the work including a test - a trivial change.

Unfortunately I cannot verify the test: I couldn't work out how to make the 
existing build on master run to completion with tests enabled - the tests hang 
for ages and produce loads of errors.

I am using Java 8u221 and tried Maven 3.1.1 and 3.2.5; no idea how to fix it.

The following works but doesn't run the tests:

{{mvn clean package -DskipTests # this will take up to 10 minutes}}

I also couldn't run the test in IntelliJ; it fails with:

Error:java: invalid flag: --add-exports=java.base/sun.net.util=ALL-UNNAMED

> StreamTask.handleAsyncException throws away the exception cause
> ---
>
> Key: FLINK-13588
> URL: https://issues.apache.org/jira/browse/FLINK-13588
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.8.1
>Reporter: John Lonergan
>Assignee: John Lonergan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The code below throws the reason 'message' away, making it hard to diagnose, 
> for instance, why a split has failed.
>  
> {code:java}
> https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java#L909
> @Override
>   public void handleAsyncException(String message, Throwable exception) {
>   if (isRunning) {
>   // only fail if the task is still running
>   getEnvironment().failExternally(exception);
>   }
> }{code}
>  
> We need to pass the message through so that we see it in the logs, please.
>  





[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() does not include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: 
FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
FunctionDefinitions registered in memory, but does not include Flink built-in 
functions' names.

Should be fixed to include Flink built-in functions' names

  was:FunctionCatalog.getUserDefinedFunctions() 


> FunctionCatalog.getUserDefinedFunctions() does not include Flink built-in 
> functions' names
> --
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.9.0
>
>
> FunctionCatalog.getUserDefinedFunctions() only returns catalog functions and 
> FunctionDefinitions registered in memory, but does not include Flink built-in 
> functions' names.
> Should be fixed to include Flink built-in functions' names





[jira] [Created] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() does not return Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)
Bowen Li created FLINK-13741:


 Summary: FunctionCatalog.getUserDefinedFunctions() does not return 
Flink built-in functions' names
 Key: FLINK-13741
 URL: https://issues.apache.org/jira/browse/FLINK-13741
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.9.0
Reporter: Bowen Li
Assignee: Bowen Li
 Fix For: 1.9.0








[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() does not include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Summary: FunctionCatalog.getUserDefinedFunctions() does not include Flink 
built-in functions' names  (was: FunctionCatalog.getUserDefinedFunctions() does 
not return Flink built-in functions' names)

> FunctionCatalog.getUserDefinedFunctions() does not include Flink built-in 
> functions' names
> --
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.9.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (FLINK-13741) FunctionCatalog.getUserDefinedFunctions() does not include Flink built-in functions' names

2019-08-15 Thread Bowen Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bowen Li updated FLINK-13741:
-
Description: FunctionCatalog.getUserDefinedFunctions() 

> FunctionCatalog.getUserDefinedFunctions() does not include Flink built-in 
> functions' names
> --
>
> Key: FLINK-13741
> URL: https://issues.apache.org/jira/browse/FLINK-13741
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.9.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Blocker
> Fix For: 1.9.0
>
>
> FunctionCatalog.getUserDefinedFunctions() 





[jira] [Updated] (FLINK-13588) StreamTask.handleAsyncException throws away the exception cause

2019-08-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13588:
---
Labels: pull-request-available  (was: )

> StreamTask.handleAsyncException throws away the exception cause
> ---
>
> Key: FLINK-13588
> URL: https://issues.apache.org/jira/browse/FLINK-13588
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.8.1
>Reporter: John Lonergan
>Assignee: John Lonergan
>Priority: Major
>  Labels: pull-request-available
>
> The code below throws the reason 'message' away, making it hard to diagnose, 
> for instance, why a split has failed.
>  
> {code:java}
> https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java#L909
> @Override
>   public void handleAsyncException(String message, Throwable exception) {
>   if (isRunning) {
>   // only fail if the task is still running
>   getEnvironment().failExternally(exception);
>   }
> }{code}
>  
> We need to pass the message through so that we see it in the logs, please.
>  





[GitHub] [flink] Johnlon opened a new pull request #9456: FLINK-13588 flink-streaming-java don't throw away exception info in logging

2019-08-15 Thread GitBox
Johnlon opened a new pull request #9456: FLINK-13588 flink-streaming-java don't 
throw away exception info in logging 
URL: https://github.com/apache/flink/pull/9456
 
 
   Previously the async error handler threw away the descriptive text provided 
by the call site, which made diagnosis of errors really difficult. 
   
   ## Brief change log
   
   This change wraps the error message and cause exception into a wrapper 
exception that properly conveys the descriptive text to the logs.
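   
   A minimal sketch of the wrapping approach described above (assuming a 
wrapper exception such as Flink's AsynchronousException; illustrative only):
   
   {code:java}
   @Override
   public void handleAsyncException(String message, Throwable exception) {
       if (isRunning) {
           // only fail if the task is still running; keep the descriptive
           // message by wrapping it together with the original cause
           getEnvironment().failExternally(new AsynchronousException(message, exception));
       }
   }
   {code}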
   
   ## Verifying this change
   
   Test added to StreamTaskTest to verify that objects are passed correctly to 
the Environment object and also to verify that the toString rendering includes 
the given text.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no )
 - The runtime per-record code paths (performance sensitive): (no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 



